key: cord-296966-ivp74j43 authors: gottlieb, michael; dyer, sean title: information and disinformation: social media in the covid‐19 crisis date: 2020-05-31 journal: acad emerg med doi: 10.1111/acem.14036 sha: doc_id: 296966 cord_uid: ivp74j43 the novel coronavirus disease of 2019 (covid‐19) is a global pandemic with over 4.7 million cases and 316,000 deaths worldwide.(1) social media, defined as “electronic communication through which users create online communities to share information, ideas, personal messages, and other content,”(2) has played an important role during the covid‐19 pandemic. in fact, social media usage amongst the public has previously been demonstrated to increase significantly in cases of natural disasters and crises.(3) however, it is important to consider the benefits and limitations of this medium. there are several key benefits to social media during times of national crises. first, social media can be utilized to facilitate the distribution of new information to providers on the front lines. examples of this include recent online discussions regarding controversial studies of hydroxychloroquine 5 and remdesivir. 6 additionally, this can allow healthcare leaders to communicate directly with the public, sharing information that was traditionally relegated to medical journals and hospital video sessions. 
this is evidenced by healthcare leaders having prominent social media accounts with large numbers of non-physician followers. examples of this include dr. esther choo (113,000 followers), dr. megan ranney (36,000 followers), and dr. jeremy faust (36,000 followers), who have utilized twitter and other social media outlets to increase awareness of healthcare crises and public health needs. moreover, experts can openly debate topics, while identifying and challenging false information in real time. finally, social media can allow healthcare providers and healthcare systems to identify trends and prepare for surges in acuity. this may allow for advance procurement and rationing of needed supplies to protect healthcare providers and patients, as well as altering hospital functions to better sustain the anticipated challenges. for example, an awareness of critical national personal protective equipment (ppe) shortages allowed hospitals to begin to develop strategies to reallocate and ration ppe where possible and to engage local businesses to repurpose production for medical supplies. these systems can also share information in a much more open manner to facilitate incidence tracking, as well as to identify differences in incidence rates. a similar approach has been used previously to prepare for hurricanes and tornadoes in real time. 3 however, it is also important to consider some of the limitations of social media in this setting. one major consequence of the massive amount of information being shared via social media is difficulty with filtering information. the online social media tracking program talkwalker (new york city, ny, usa) reported that covid-19 had been referenced on social media 40.2 million times from may 12, 2020 to may 18, 2020. 
7 as the sheer volume of social media information rises, the signal-to-noise ratio lowers, and it can become difficult to identify factual and pertinent information. additionally, social media can allow virtual celebrities and influencers (both medical and non-medical) to have a significant influence on information spread due to their number of followers, regardless of the accuracy of their information. this can lead to rapid spread of incorrect information and significant potential for harm. one study found that physicians are not able to reliably assess the quality of online resources using their gestalt alone. 8 another study found that up to 27% of physician bloggers had undisclosed financial conflicts of interest. 9 this can be particularly problematic given the increased emphasis on novel therapeutic agents targeted toward covid-19, which may have harmful side effects. this may be even more challenging for the public, as a wide array of physicians appear on social media sites, occasionally proposing strategies that conflict with recommended care. therefore, we propose the following strategies to improve the role of social media during covid-19 and future emergencies. first, we must train both providers and the general public in how to properly evaluate social media resources. this should include an assessment of conflicts of interest and looking beyond the headline to review the supporting literature. one approach could be to create online tutorials using real examples from social media to demonstrate the importance in a more tangible manner. additionally, a system could be implemented by which social media outlets are vetted by a third party to ensure accuracy, with the results publicly displayed in order to inform the public of notoriously inaccurate and misleading sources of information. we should also help identify and expand the reach of reliable healthcare experts on social media. 
twitter has recently approached this by substantially increasing the number of 'verified' healthcare experts on its site. creating and publicizing lists of reliable public health experts, as well as amplifying their messages via social media, will also help in this regard. additionally, we should challenge online experts to include their credentials, specialty of training, conflicts of interest, and, if active in clinical care, the environment in which they work. this would aid users in contextualizing their comments and recommendations. we should also work with social media companies to help adapt algorithms so that more reliable information appears at the top of search indices. finally, we should create centralized locations for medical professionals to share and disseminate reliable information and engage in discussion, as well as post-publication peer review of the literature, in a sustainable centralized database. social media is more important now than ever, but we must be aware of its potential limitations. healthcare providers have been leaders on the front lines of this crisis since the onset. it is important to be leaders in social media, as well.
references:
- world health organization. coronavirus disease (covid-19) situation report - 120
- definition of social media
- social media usage patterns during natural hazards
- social media in knowledge translation and education for physicians and trainees: a scoping review
- hydroxychloroquine and azithromycin as a treatment of covid-19: results of an open-label non-randomized clinical trial
- compassionate use of remdesivir for patients with severe covid-19
- individual gestalt is unreliable for the evaluation of quality in medical education blogs: a metriq study
- financial conflicts of interest among emergency medicine contributors on free open access medical education (foamed)
key: cord-126250-r65q535f authors: zavarrone, emma; grassia, maria gabriella; marino, marina; cataldo, rosanna; mazza, rocco; canestrari, nicola title: co.me.t.a. - covid-19 media textual analysis. a dashboard for media monitoring date: 2020-04-16 journal: nan doi: nan sha: doc_id: 126250 cord_uid: r65q535f the focus of this paper is to trace how mass media, particularly newspapers, have addressed the issues around the containment of contagion and the explanation of epidemiological evolution. we propose an interactive dashboard: co.me.t.a.. during crises it is important to shape the best communication strategies in order to respond to critical situations. in this regard, it is important to monitor the information that mass media and social platforms convey. the dashboard allows users to explore the extracted content and study the lexical structure that links the main discussion topics. the dashboard merges four methods: text mining, sentiment analysis, textual network analysis and latent topic models. results obtained on a subset of documents show not only a health-related semantic dimension but also an extension to social-economic dimensions. on feb 11, 2020, the who (world health organization) announced covid-19 as the official name of the disease caused by the severe acute respiratory syndrome coronavirus 2 (sars-cov-2). 
a month later, covid-19 was declared a pandemic. from december 2019 to march 2020, covid-19 spread throughout china and afterwards through italy, claiming victims and contagions 1 . the focus of this paper is to trace how the mass media, particularly newspapers, have addressed the issues around the containment of contagion and the explanation of epidemiological evolution. sylvie briand, director of infectious hazards management at the who, affirms: "we know that every outbreak will be accompanied by a kind of tsunami of information, but also within this information you always have misinformation, rumors, etc. we know that even in the middle ages there was this phenomenon" (zarocostas, 2020). communication has an important role in the diffusion of behaviour and contagion, especially regarding the spreading of misinformation. during crises it is essential to identify the best communication strategies in order to respond to critical situations. jin, pang and cameron's (2007) studies underline how important it is to understand the public's emotional responses to crisis communication, not only in organizational and brand crises but also in public and social crises, such as infectious disease outbreaks (ido) (vijaykumar, jin and nowak, 2015). it is not an easy task to understand how communication from public health authorities or social media content affects public attention and health-related risk evaluation and perception in these situations. it is crucial to constantly monitor public communication activities to gauge the media response during the spread of a disease. to this end, it is important to monitor the information that the mass media and social platforms convey. co.me.t.a., the proposed shiny dashboard 2 , represents an alternative way of reading the mass media perspective on the tragic events of the viral infection. 
this contribution was made with the collaboration of cecoms 3 (center for strategic communication, iulm university), in order to support studies on the planning and design of strategic communication. the contribution to the state of the art is a tool for media monitoring during the covid-19 pandemic and a new media studies perspective on crisis management. this paper is structured as follows: section 2 illustrates the methods and the data visualization tools used, while section 3 explains the corpus building procedures. section 4 discusses the results for one source (the guardian) in more detail, and section 5 presents future work. 2 methodological features co.me.t.a. is optimized to allow friendly use even for users who are not familiar with data analysis. the intuitive layout of the user interface is divided between a control panel on the left, a plotting space on the right, and a menu bar with the methods on the upper side. the dashboard mixes four methods: text mining, sentiment analysis, textual network analysis and latent topic models. for the latter model we propose a new network-based visualization approach to represent topics and words. figure 2 shows the dashboard's flowchart: (1) content extraction and corpus pre-processing; (2) sentiment analysis and descriptive study of texts: most frequent words and co-occurrence network analysis; (3) application of a model to extract and identify the latent topics within the contents collected; (4) plotting a network to represent each topic and the semantic relationships between the extracted topics and terms. in the first step we defined a preprocessing procedure for multilingual sources, using as reference the work done within the european project "positive messengers" 4 . after the pre-treatment phase, the dashboard generates the final document-term matrix and cuts sparse words. the dtm allows the corpus to be described through common visualizations, such as a barplot of the most frequent words and a wordcloud. 
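the first two steps of the flowchart (pre-processing, then building the document-term matrix and cutting sparse words) can be sketched in plain python. the dashboard itself is an r shiny application, so the function names, the tiny stop list and the `min_df` cutoff below are illustrative assumptions, not the co.me.t.a. code:

```python
from collections import Counter

# Illustrative stop list; a real pipeline would use a full multilingual one.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is"}

def preprocess(text):
    """Step 1: lowercase, strip punctuation, drop stopwords."""
    tokens = [w.strip(".,;:!?\"'()") for w in text.lower().split()]
    return [t for t in tokens if t and t not in STOPWORDS]

def build_dtm(docs, min_df=2):
    """Step 2: build a document-term matrix, cutting sparse terms
    that appear in fewer than `min_df` documents."""
    tokenized = [preprocess(d) for d in docs]
    df = Counter()                      # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    vocab = sorted(t for t, n in df.items() if n >= min_df)
    index = {t: j for j, t in enumerate(vocab)}
    dtm = []
    for toks in tokenized:
        row = [0] * len(vocab)
        for t in toks:
            if t in index:
                row[index[t]] += 1
        dtm.append(row)
    return vocab, dtm
```

from the resulting matrix, the barplot of most frequent words is just the column sums of `dtm`, and the wordcloud is a scaled rendering of the same counts.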
the sentiment analysis is performed using a baseline dictionary. the sentiment polarity is plotted over the time span of document publication. in addition, the dtm can be read as an affiliation matrix to analyse the semantic relationships. using a textual network approach, we built a co-occurrence network and proposed the calculation of centrality measures for words. the last method is the latent dirichlet allocation model (blei et al., 2003; griffiths and steyvers, 2004). the lda method is used to extract latent topics and subsequently construct the terms-topics matrix. the model infers the latent topic structure by recreating the documents in the corpus, iteratively considering the relative weight of the topic in the document and of the word in the topic. lda rests on these assumptions: a) the documents are represented as mixtures of topics, where a topic is a probability distribution over words, as a generative and bayesian inferential model; b) the topics are partially hidden, latent more precisely, within the structure of the document (steyvers and griffiths, 2007). once the latent topics have been extracted, the dashboard selects the 20 most associated terms for each topic and constructs a terms-topics two-mode matrix. starting from this matrix, a two-dimensional network is plotted. the textual datasets implemented in co.me.t.a. were built by scraping the on-line search results (the search key was "coronavirus") of three italian newspapers ("il corriere della sera", "la repubblica", "il sole 24 ore") and two english-language newspapers ("the new york times", "the guardian"). we collect articles starting from 1 february, with an update every 15 days. at the moment, the number of articles loaded in co.me.t.a. is 10328: 4380 in italian and 5940 in english. this section gives a concise and compact representation of the analytical possibilities offered by the dashboard and an idea of the functions available to users. 
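a minimal, self-contained sketch of the lda step is a collapsed gibbs sampler that returns the terms-topics probability matrix described above. the paper fits lda following blei et al. (2003) and griffiths and steyvers (2004); everything concrete below (function name, hyperparameters `alpha` and `beta`, iteration count) is our own assumption, not the dashboard's implementation:

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampler for LDA over tokenised documents.
    Returns (vocab, phi) where phi is the topics x terms probability matrix."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # z[d][n] is the topic assigned to the n-th word of document d.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]       # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # words per topic
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d][k] += 1; nkw[k][w2i[w]] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]; wi = w2i[w]
                # Remove the current assignment, then resample it from the
                # full conditional p(topic | everything else).
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][wi] + beta)
                           / (nk[j] + V * beta) for j in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][n] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    # Smoothed topic-word probabilities (each row sums to 1).
    phi = [[(nkw[k][i] + beta) / (nk[k] + V * beta) for i in range(V)]
           for k in range(n_topics)]
    return vocab, phi
```

the dashboard's terms-topics two-mode matrix corresponds to keeping, for each row of `phi`, the 20 highest-probability terms.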
some of the results given by the main tools implemented in co.me.t.a. and related to the guardian (collected from 2020-01-04 to 2020-03-11) are presented below, referring to the first stage of alert, just before the declaration of pandemic status by the who. the wordcloud shows not only a health-related semantic dimension but also an extension to social-economic dimensions. a substantial prevalence of negative sentiment is highlighted by examining the trend in a sentiment analysis of the documents. this underlines the high spikes that occurred on january 25th, when the news reported the first cases detected in the eu, on february 15th, when the chinese government implemented strict quarantine measures to contain the spreading of the virus from the hubei region, and on march 11th, the day of recognition of the disease as a pandemic. using latent dirichlet allocation, 5 topics were extracted; the fifth topic refers to the media and informative context, underlining the social response to the pandemic. through the words-topic network it is possible to observe how the terms are associated with the referred topic. the network is composed of the latent topics, identified through the lda technique, and the words associated with the highest probability. this network allows the links between these two dimensions to be examined, particularly how the terms of the corpus are distributed among the topics. a node representing a term connected with different topics indicates that the term is not only present in both thematic groups but also represents a connection between the semantic areas associated with each topic. terms with higher degree centrality (faust, 1997) are "people, virus, health, outbreak, china, public, uk, government, world, cases, wuhan, masks, staff, home, patients". a high level of centrality in these terms indicates strong attention to personal protective equipment and national health preparations for the crisis. 
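the dictionary-based sentiment trend described above (polarity averaged per publication date) can be approximated as follows; the tiny positive/negative word lists are placeholders for a real baseline sentiment dictionary:

```python
# Placeholder sentiment dictionary; co.me.t.a. uses a full baseline lexicon.
POSITIVE = {"recovery", "safe", "hope", "effective"}
NEGATIVE = {"outbreak", "death", "quarantine", "crisis", "pandemic"}

def polarity(tokens):
    """Net sentiment of a tokenised document: (#positive - #negative) / #tokens."""
    if not tokens:
        return 0.0
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return score / len(tokens)

def polarity_by_date(dated_docs):
    """Average polarity per publication date, i.e. the points of the
    sentiment trend plotted by the dashboard."""
    by_date = {}
    for date, tokens in dated_docs:
        by_date.setdefault(date, []).append(polarity(tokens))
    return {d: sum(v) / len(v) for d, v in sorted(by_date.items())}
```

plotting the returned date-to-polarity mapping reproduces the kind of trend on which the january 25th, february 15th and march 11th spikes were read.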
terms with a high level of closeness centrality (bonacich, 1991) are "outbreak, virus, china, government, world". in this case the central semantic dimensions detected by the models are the outbreak of the pandemic and the global spreading of the disease. in the topic network it is possible to identify how the term "outbreak" links different topics related to the semantic dimensions of the economic, health and media spheres. future work may take several directions in order to optimize the analysis of information and communication about the spread of covid-19. since the disease has spread globally, the intention of the research is to extend the datasets analysed to other important newspapers in other languages, such as spanish or french. the aim is a more complete and global representation of the media response to the virus. another purpose of future research is to identify a connection between the sentiment trend extracted from the articles and the epidemiological curve, to quantify the effect of death/contagion/recovery rates on the communication. implementing on the dashboard a sentiment analysis of twitter text from the community could describe the public feedback to the news, giving the media indications on how to provide better communication in crisis situations. one of the most interesting developments for future work is to identify a relation between the sentiment of user tweets and that of the news. in future projects, the intent of the research is to implement the dashboard with two further analytical processes. through correspondence analysis we aim to represent the association structure between a group of extracted keywords and the analysed texts, to identify concepts that are not directly observable but emerge from the measurement of a group of variables. with the application of neural networks, it will be possible to better classify texts through textual data measurements in the content extraction and corpus pre-processing phases. 
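the two centrality measures used here, degree centrality and closeness centrality, can be computed on a word co-occurrence network represented as an adjacency dict; this is a generic sketch of the standard definitions, not the dashboard's implementation:

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of other terms each term co-occurs with (degree / (n-1))."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj, source):
    """(n-1) / (sum of shortest-path distances from `source`),
    computed with a breadth-first search over the co-occurrence graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for v, d in dist.items() if v != source)
    return (len(dist) - 1) / total if total else 0.0
```

on such a network, a hub term like "outbreak" scores high on both measures, which is exactly how the central semantic dimensions above were identified.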
references:
- latent dirichlet allocation
- probabilistic topic models, in latent semantic analysis: a road to meaning
- griffiths, finding scientific topics
- simultaneous group and individual centralities
- centrality in affiliation networks
- integrated crisis mapping: toward a publics-based, emotion-driven conceptualization in crisis communication
- social media and the virality of risk: the risk amplification through media spread (rams) model
- coronavirus on social media: analyzing misinformation in twitter conversations
- the covid-19 social media infodemic
- how to fight an infodemic. the lancet
key: cord-275152-8if8shva authors: olum, r.; bongomin, f. title: social media platforms for health communication and research in the face of covid-19 pandemic: a cross sectional survey in uganda. date: 2020-05-05 journal: nan doi: 10.1101/2020.04.30.20086553 sha: doc_id: 275152 cord_uid: 8if8shva objectives: (1) to examine the usage of social media and other forms of media among medical students (ms) and healthcare professionals (hcps) in uganda. (2) to assess the perceived usefulness of social media and other forms of media for covid-19 public health campaigns. design: a descriptive whatsapp messenger-based cross-sectional survey in april 2020. setting: makerere university teaching hospitals (muth) and 9 of the 10 medical schools in uganda. participants: hcps at muth and ms in the 9 medical schools in uganda. main outcome measures: we collected data on sociodemographic characteristics, sources of information on covid-19, preferences of social media platform, and perceived usefulness of the different media platforms for acquisition of knowledge on covid-19. results: overall, the response rate was 21.5% for both ms and hcps. in total, 877 participants (hcps [136, 15.5%], ms [741, 84.5%]) were studied. the majority (n=555, 63.3%) were male, with a median age of 24 (range: 18-66) years. social media was a source of information for 665 (75.8%) participants. usage was similar among ms and hcps (565/741 (76.2%) vs. 
100/136 (73.5%), p=0.5). among the ms, the commonly used social media were: whatsapp (n=705, 95.1%), facebook (n=405, 54.8%), twitter (n=290, 39.1%), instagram (n=178, 24.0%) and telegram (n=80, 10.8%). except for whatsapp, male ms were more likely to use the other social media platforms (p values from <0.001 to 0.01). mass media (television and radio) and social media were perceived as the most useful tools for the dissemination of covid-19 related information. conclusion: more than two-thirds of ms and hcps routinely use social media in uganda. social media platforms may be used for the dissemination of information as well as serving as a research tool among ms and hcps. social media, alongside other media platforms, can also be used as a source of reliable information on covid-19 as well as for dissemination of research findings and guidelines. • this is the first study in sub-saharan africa on the use of social media for research during the covid-19 pandemic. • the study also explores the perceived usefulness of different media for covid-19 public health campaigns. • diversity of the participants, consisting of both healthcare professionals and medical students. • a relatively large sample size was enrolled in the survey despite a low response rate. it is made available under a cc-by-nc-nd 4.0 international license; the copyright holder for this preprint is the author/funder, who has granted medrxiv a license to display the preprint in perpetuity. covid-19, caused by a novel human coronavirus (sars-cov-2), has rapidly spread to over 200 countries and territories globally [1] [2] [3]. the on-going covid-19 pandemic has threatened the lives of over 3 million people, claiming over 200,000 lives worldwide [3, 4]. researchers globally are racing to identify an effective vaccine and treatment for the viral disease, in order to curb the high morbidity and mortality associated with this virus. the who has recommended maintaining social distance universally to reduce human-to-human transmission of covid-19 [5]. 
as a result, there has been widespread lockdown in most countries in a bid to reduce public gatherings and the rapid spread of the disease [6]. this has affected nearly all sectors, with the health sector not spared. except for covid-19 related studies, other biomedical research involving onsite contact with participants has reduced significantly in many countries [7, 8]. researchers have been advised to utilise virtual means, including teleconferencing, virtual lab meetings and research seminars, to maintain studies that can be conducted remotely [7]. health campaigns aimed at increasing the awareness of the public on transmission and prevention of the virus are also being employed by various international and local organisations. mass media and social media have been frequently used to disseminate infographics on the pandemic [9]. in this study, we explored the usage and perceived usefulness of social media and other forms of media among medical students (ms) and healthcare professionals (hcps) in uganda. we conducted an online, descriptive cross-sectional study between wednesday 1st april and sunday 19th april 2020, as part of a larger study assessing knowledge, attitude and practices towards covid-19 among healthcare workers [10] and medical students. a quantitative analysis approach was used. participants were derived from 2 settings: 1) medical students from 9 of the 10 universities in uganda offering undergraduate medical degrees, with a combined population of about 6,000 students; 2) healthcare professionals from the makerere university teaching hospitals (muths), with a population size of about 1,300-1,500. 
eligible participants were medical students pursuing bachelor of medicine and bachelor of surgery, bachelor of dental surgery, bachelor of nursing and bachelor of pharmacy, and healthcare professionals including nurses, midwives, intern doctors, medical officers, residents and specialists at the muth. individuals aged 18 years or older were included in the study after informed consent was obtained. students and healthcare professionals who were too ill or were offline during the time of the study were excluded. employing a convenience sampling method, we used whatsapp messenger (facebook inc., california, usa) to enrol potential participants. we identified all the existing whatsapp groups of medical students in the various universities and those of healthcare professionals in the different muths. a total of about 3,500 students and 581 healthcare professionals who were members of the several whatsapp groups were approached to participate in the study. an online data collection tool was designed and executed using google forms (via docs.google.com/forms). the google form link to the questionnaire was sent to the enrolled participants via the identified whatsapp groups. independent variables were demographic characteristics (sex and age) and sources of information on covid-19; dependent variables were usage and perceived usefulness of social media. microsoft excel 2016 was used for data cleaning and coding, and stata version 15.1 (statacorp, college station, tx) for analyses. numerical data were analysed using parametric or non-parametric approaches as appropriate. categorical data were summarized as frequencies and proportions, and associations between independent and dependent variables were assessed using the chi-square test and logistic regression. 
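as a sketch of the association tests just described: a pearson chi-square statistic and an odds ratio with a woolf 95% confidence interval for a 2x2 table, in plain python. the study's full analysis used stata 15.1, with logistic regression for the adjusted odds ratios, which is not reproduced here:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with a Woolf (log-method) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

applied to the reported social media usage (565/741 ms vs. 100/136 hcps), the chi-square statistic is about 0.46 on 1 df, consistent with the p=0.5 reported in the abstract.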
a p<.05 was considered statistically significant. overall, we achieved a response rate of 21.5% (877/4,081: 741/3,500 (ms) and 136/581 (hcps)). the majority of the participants were male (n=555, 63%) and medical students (n=741, 84%). the median age of the participants was 24 (range: 18-66) years. mass media (n=681, 78%) and social media (n=665, 76%) were the most used sources of information. female participants were less likely to use journals and websites than male participants. medical students significantly used mass media like tv (adjusted odds ratio (aor): 1.7, 95% ci: 1.0-2.7, p=.031) but were less likely to use websites (aor: 0.1, 95% ci: 0.1-0.2, p<.001) compared to hcps to access information on covid-19, table 1. about 95% of medical students frequently used whatsapp, followed by facebook (55%), and only 1% (n=4) did not frequently use any social media platform, figure 1. excluding whatsapp, male students were more likely to use the other social media platforms than female medical students (p values between <.001 and .01). medical students ≥24 years were more likely to use facebook (aor: 1.8, 95% ci: 1.4 to 2.5, p<.001) but less likely to use instagram (aor: 1.7, 95% ci: 1.0-2.7, p=.031), table 2. 
the majority of the medical students recognised television, radio and social media as the most useful tools for the dissemination of information on covid-19, figure 2. print media (billboards, banners and newspapers) was perceived as the least useful in covid-19 public health campaigns. the purpose of the study was to assess the usage and perceived usefulness of social media and other forms of media among medical students and healthcare professionals during the covid-19 pandemic in uganda. the results suggest that mass media and social media were the most used sources of information on covid-19 among hcps and ms. among medical students, whatsapp and facebook were the most frequently used social media platforms, and they preferred mass media and social media as the most useful tools for dissemination of covid-19 related information. this is the first study in sub-saharan africa assessing social media usage and the perceived usefulness of various media for health campaigns during the covid-19 pandemic. the covid-19 pandemic is currently the greatest public health concern, affecting the majority of countries globally. social distancing guidelines and lockdowns have also posed a challenge to public health campaigns. this therefore necessitates a shift from popular print media (newspapers, magazines, banners, etc.) to wireless media. our study suggests that common wireless media like television, radio and social media can be effective in improving awareness of covid-19. over 40% of the world's population have access to the internet to date [11]. social media are the most used networking sites globally, with facebook being the most used social networking platform [12]. social media can be used for dissemination of knowledge and for dispelling myths the public holds about covid-19 [9]. 
however, misinformation can be equally spread by social media, leading to fear, panic and anxiety among the public [13]. infographics can therefore be widely distributed via these platforms by verified pages and accounts of public agencies and health officials. the who already has dedicated whatsapp numbers and groups in various languages to disseminate information on covid-19 [14]. social media can also be used for reporting probable cases, tracing of contacts, making appointments for tests, and delivery of test results to the tested clients. the ministry of health uganda has dedicated contacts for the public to report any suspected case of covid-19 to the officials, who follow up and perform tests when required [15]. these contacts can also be accessed through whatsapp. through whatsapp we were able to reach out to over 4,000 medical students and health care professionals within 2 weeks. despite physical social distancing, the society and the world at large are now largely communicating through social media. we were able to achieve a response rate of 21.5%, which is in line with a meta-analysis that reported a range of 7% to 88% [16]. it is however lower than the average response rate in the above study (34%) and that among physicians (35%) reported by cunningham and colleagues [16, 17]. a few studies conducted using online surveys during the covid-19 pandemic have also reported promising responses [18, 19]. in addition to the covid-19 pandemic, sub-saharan africa has a high burden of other ongoing pandemics, notably hiv/aids and tb [20]. a vast majority of these patients are on regular follow-ups for routine clinical care and for research purposes. 
clinicians and researchers can utilise the widely available social media platforms to conduct interviews and follow-ups of these patients. chronic care patients who require an uninterrupted supply of medications during this lockdown can also maintain communication with their clinical care and research teams through social media platforms. however, the ethical concerns that have been discussed before regarding privacy and confidentiality have to be taken seriously into consideration [21]. privacy policies and guidelines must be developed by the research teams in conjunction with their respective institutional review boards to safeguard the transfer of potentially identifying data. the study has some limitations. the relatively low response rate limits generalizability. follow-up reminders were sent to the prospective participants to improve responses. in conclusion, we have been able to show that social media can be robustly used to collect research data among medical students and health care professionals with high response rates. beyond being a research tool, social media alongside other media platforms can be used as a source of reliable information on covid-19 as well as for dissemination of research findings and guidelines.

references:
- a novel coronavirus from patients with pneumonia in china
- world health organization. who director-general's opening remarks at the media briefing on covid-19-11
- worldometers.info; 2020 [cited
- coronavirus covid-19 global cases by the center for systems science and engineering (csse) 2020 [cited
- world health organisation. coronavirus disease (covid-19) advice for the public 2020
- covid-19: how doctors and healthcare systems are tackling coronavirus worldwide
- the covid-19 pandemic and research shutdown: staying safe and productive
- social media for rapid knowledge dissemination: early experience from the covid-19 pandemic
- coronavirus disease-2019: knowledge, attitude, and practices of health care workers at
- social media update
- the pandemic of social media panic travels faster than the covid-19 outbreak
- world health organisation. who health alert brings covid-19 facts to billions via whatsapp 2020
- ministry of health uganda. coronavirus (pandemic) covid-19 2020
- comparing response rates from web and mail surveys: a meta-analysis. field methods
- exploring physician specialist response rates to web-based surveys
- knowledge and behaviors toward covid-19 among us residents during the early days of the pandemic
- knowledge, attitudes, and practices towards covid-19 among chinese residents during the rapid rise period of the covid-19 outbreak: a quick online cross-sectional survey
- challenges of tackling non covid-19 emergencies during the unprecedent pandemic using social

key: cord-331766-sdbagsud authors: kung, janet wc.; wigmore, stephen j. title: how surgeons should behave on social media date: 2020-08-30 journal: surgery (oxf) doi: 10.1016/j.mpsur.2020.07.014 sha: doc_id: 331766 cord_uid: sdbagsud this article documents the rise in popularity of social media use by surgeons for personal and professional use. it considers some of the important issues around privacy, patient confidentiality and professionalism and discusses some of the common pitfalls of using social media as a surgeon. social media has fundamentally changed the way we interact with the world and it has become an integral part of many surgeons' personal and professional lives.
the most popular platforms include twitter, facebook, whatsapp, instagram and youtube, and professional networking sites such as linkedin and researchgate have gained prominence in the healthcare sector. with their many facilities and applications, social media platforms offer opportunities for patient resources and education, professional networking, research collaboration and dissemination, public engagement and policy discussions, and personal and professional support. 1 a pubmed search for "social media" and "surgery" has generated over 1820 articles to date, of which 118 articles were published in the first quarter of 2020, equating to an average of almost 40 publications per month in 2020 thus far. the volume of internet traffic during the covid-19 pandemic has affirmed the role of social media in contemporary communication. as social media has become ubiquitous, it is critically important for surgeons, whether active enthusiastic users or passive apprehensive observers, to be aware of the potential risks and pitfalls and to exercise caution and take control of their online presence. inappropriate and unprofessional behaviour can carry serious legal ramifications, which may lead to prosecution, professional censure, and loss of certification or licensure. 2 many regulatory bodies and professional organizations across the world have issued guidelines and recommendations specifically to address the use of social media. for example, the general medical council (gmc) 3 and the royal college of surgeons of england (rcs) 4 in the united kingdom, as well as the american medical association (ama) 5 and the american college of surgeons (acs) 6 in the united states, have all provided relevant guidance. in general, applying common sense and adhering to traditional professional standards while online are advised for surgeons who engage in social media. one of the most powerful aspects of social media is the ability to connect instantly with our patients.
this very attribute of social media has blurred the boundaries between personal and professional life. accepting a patient's 'friend' request on facebook, and other direct relationships on social media platforms, is strongly discouraged as this may allow bidirectional access to personal information and commentary that is not traditionally appropriate for the surgeon-patient relationship. in the same way that you might not desire your patients to have access to the intimate details of your personal life, 'patient-targeted googling' may yield information about your patient's healthcare and behaviour (e.g. smoking or alcohol consumption) that could colour your judgement and affect your interaction with them and their treatment plans. not only is there the sensitive issue of disclosure of the source of information, there is the serious risk of spiralling into curiosity, voyeurism and invasion of privacy which could destroy the trust of the surgeon-patient relationship and reflect badly on the institution and even the entire profession. 6 to maintain professional boundaries, surgeons should consider separating personal and professional profiles online and comport themselves professionally in both. 3, 5, 7 if a patient were to contact you about their care or ask you to provide medical advice to acquaintances and other 'connections' through your private profile, it should be made clear to them that social and professional relationships should not be mixed. 3 the patient, where appropriate, should be directed to your professional profile and to the professional bodies' websites and associated social media platforms. although the same expectations of patient confidentiality that exist offline apply to online behaviour, there are unique challenges to protecting patient privacy on the internet. 1 privacy-encrypted professional or institutional accounts must be used for the purposes of emails and other online communication with patients.
such online correspondence should be documented and archived in the patient's official medical records, either in summary or by inclusion of the entire transcript, as it would be for any other medium. online communication with patients should only be used in the context of a pre-existing surgeon-patient relationship and should act to supplement, rather than substitute, a face-to-face clinical encounter. 6 it has become commonplace for surgeons to use social media sites for educational purposes and to obtain advice from colleagues regarding complex clinical problems. this involves sharing surgical information including case scenarios, clinical and operative photographs, or radiographic images. as with all other use of clinical content, these materials should only be used once written informed consent from the patient or surrogate has been obtained. publication of clinical content or images, in whichever online or offline medium, is under the regulation of the caldicott guardians in the united kingdom and the legislation of the health insurance portability and accountability act in the united states. all patient-identifiable data should be removed, and this extends to images that allow identification of the patient, such as unique piercings and tattoos. while de-identifying patient information is mandatory, surgeons must be cognizant of the context of the information available online. even images that omit identifying features may inadvertently allow a link to a specific patient; for example, your location or institution may be embedded within photographs and other contents, news media and publicly available vital statistics. 1 although individual pieces of information may not breach confidentiality on their own, the sum of published information online could be enough to identify a patient or someone close to them. 3 the use of closed groups, i.e.
groups that are by invitation only and not accessible to the public, offers an additional layer of privacy that allows a more liberal exchange of clinical information and opinion than would be permissible in an open forum. 6 although engagement in these fora can be a useful educational tool and benefit patient care, caution should still be exercised because privacy is never absolute and security is never impenetrable. it is also necessary to uphold the usual professional standards of decorum and respect for the patient and to refrain from comments on a patient's behaviour, history, or characteristics that may be perceived as derogatory or disrespectful. 6

respect for colleagues
social media offers an unprecedented opportunity for surgeons to connect across the globe in real time for personal and professional support. a recent study found that social media serves as a valuable tool to enhance the networking and mentorship of surgeons, particularly for women in surgical specialties who may lack exposure to same-sex mentors at their own institution. 8 the value of this type of support network is even more appreciated in remote and rural surgical practice. while there are definite positive aspects of social media, there are users whose online behaviour has a negative impact on their colleagues, the surgical community, and the medical profession as a whole. the seemingly faceless, anonymized online existence on social media has unfortunately facilitated the propagation of extremist opinion and fuelled unsolicited trolling and cyberbullying. the gmc good medical practice states that 'doctors must treat colleagues fairly and with respect'. 3 this covers all situations and all forms of interaction and communication. 3 bullying, harassment and undermining behaviour must not be tolerated. gratuitous, unsubstantiated or unsustainable comments must not be made about individuals online.
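returning to the earlier point that location, device and timestamp details can travel inside clinical photographs: the article names no particular tool, so the following is a purely hypothetical sketch. it assumes image metadata has already been read into a plain dict (for example, by an exif reader) and shows the kind of scrubbing step implied before an otherwise de-identified image is shared; the tag names and the `scrub_metadata` helper are ours, not the article's.

```python
# hypothetical tag names, modelled loosely on common exif fields;
# any real scrubbing should work from the actual tags present.
SENSITIVE_TAGS = {
    "GPSInfo", "GPSLatitude", "GPSLongitude",  # embedded location
    "Make", "Model", "Software",               # device fingerprint
    "DateTime", "Artist",                      # timing and authorship
}

def scrub_metadata(tags):
    """return a copy of an image-metadata dict with location, device,
    and timestamp tags removed before a clinical image is shared."""
    return {name: value for name, value in tags.items()
            if name not in SENSITIVE_TAGS}
```

for example, `scrub_metadata({"GPSInfo": (51.5, -0.1), "ImageWidth": 800})` keeps only the width; the point is the allow-by-default, deny-listed-tags pattern, not the specific field names.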
although it is acceptable to disagree with colleagues and have a healthy debate on social media, such posts should be respectful, collegial and reflect positively on the profession. 1 if you see colleagues behaving in an unprofessional or inappropriate manner, you have a responsibility to bring this to their attention discreetly so that they have an opportunity to reflect and take action. if the colleague does not make amends and you believe the breach of the code of conduct is serious, then they should be reported to supervisory and/or regulatory authorities. 5, 9 users of social media should be mindful that postings online are subject to the same laws of copyright and defamation as written or verbal communications, whether they are made in a personal or professional capacity. 3, 10

relationship with your employers
many educational and medical institutions have specific policies governing social media use by employees. these protocols may place additional restrictions on online activities and describe boundaries that should not be breached in order to avoid disciplinary action and penalties. surgeons should be familiar with and abide by these policies, as ignorance is rarely an excuse. screening online profiles is an increasingly common practice during recruitment, and hence surgeons must be aware that their online behaviour may influence employability and have consequences while employed. 1,6 although a strong, favourable social media presence might be an attractive asset to an employer, content that portrays a surgeon in a controversial light can have a detrimental impact on their career and professional standing. it is crucial for surgeons to avoid posting content that has negative or unintended consequences in the workplace. 1 the internet provides an abundance of healthcare-related resources and advice from all sorts of outlets.
this readily available information, accessible at the click of a button, can be both helpful and confusing for our patients or potential patients. as we are all too painfully aware of the phenomena of 'fake news' and 'sensationalism', it can be notoriously challenging for some of the lay public to decipher what is satire and falsehood and what is evidence-based science and medicine. the engagement of surgeons in social media can maximize quality patient care by disseminating credible 'filtered' or 'censored' information regarding existing treatments by experts. 11 the contribution of the surgeon's expertise, insights and experience can also help dispel myths and misinformation and reject unsubstantiated advice and deleterious treatment suggestions. as creators of 'fake news' become more sophisticated, even some medical professionals and surgeons can be deceived. it is therefore important to resist taking what is being projected at face value, and this extends to the increasingly popular infographics and visual abstracts. surgeons must be disciplined enough to trace the origin of the post and ensure that the visual abstract is in fact released by a reputable journal and that the original article has gone through the rigorous process of peer review, copyediting and proofreading. the easily digestible visual abstract is designed for quick dissemination with maximal impact and takes advantage of our ability to quickly interpret visual data. it is understandable, given such a user-friendly interface, that the busy surgeon might simply accept and be influenced. surgeons must always return to the original research article for a full critical appraisal and remember that a visual abstract is not a substitute for reading the article. just as any single article should not change one's practice, a visual abstract alone should not influence clinical decision making or opinion about the paper.
12 surgeons must also be aware that just because a visual abstract or content has been widely shared or retweeted on social media platforms, it does not necessarily mean that it is of high quality. in the same way that there are limitations to the traditional journal impact factor, a high altmetric score is merely a summative indicator of attention, dissemination, influence and impact but is not itself evidence as such. careful consideration should be given to the decision to partake in the dissemination of a piece of social media content, particularly when you were not the original poster. such an act of circulating information implies your open endorsement, as a surgeon and a professional, of that social media user and of the specific piece of social media content that you shared. disseminating mistruths and misinformation, even by mistake, can be dangerous. retraction not only is embarrassing but is often suboptimal given the speed at which information can travel with social media to a potentially infinite and impressionable audience. it is prudent to recognize that actions online and content posted and shared can undermine the public trust in the medical profession.

public health and awareness campaigns
social media provides a powerful platform for public engagement in promoting public health and awareness campaigns. the most topical example is the educational content on social media emphasizing the role of personal hygiene and social distancing in controlling the spread of covid-19 during the pandemic. another example is the introduction of an optimized social media intervention with the aim of improving organ donation awareness among minorities in the united states. 13 while such activities are generally well-intentioned, patient-centric initiatives, only a fine line distinguishes advocacy from paternalism.
with the rapid capricious developments of social media, surgeons must be actively involved and gauge the mood of the public to avoid any misconceptions of coercion and lack of autonomy. in addition, when posting material online, you should be open about any conflict of interest and declare any financial or commercial interests in healthcare organizations or pharmaceutical and biomedical companies. 3, 10 it is, as always, your responsibility to avoid even the appearance of impropriety. if it is not feasible to include a relevant conflict of interest within a post, then the post should not be made. 1 professionalism is the basis of a surgeon's contract with society. 14 the gmc guidance on doctors' use of social media states 'if you identify yourself as a doctor in publicly accessible social media, you should also identify yourself by name. any material written by authors who represents themselves as doctors is likely to be taken on trust and may reasonably be taken to represent the views of the profession more widely'. 3 all content posted on social media, regardless of whether it originates from a surgeon's personal or professional account, should be regarded as visible to the public. 6 privacy settings for each of your social media profiles should be reviewed regularly and conservative privacy settings should be adopted to safeguard personal information and content. 3, 10 it is important to bear in mind that even if using the most stringent privacy settings, it may not be possible to control how widely information is shared and that comments may be taken out of context. 10 there is no absolute guarantee that deleting information offers protection and it is not unreasonable to assume that any information posted online will remain available in perpetuity and might be accessed by anyone at any time. 1 professional online profiles should be regularly updated to provide contact information and accurate representation of the surgeon's credentials. 
surgeons are encouraged to periodically self-audit and take measures to ensure that the information presented is accurate and professional. 15 establishing online professional profiles and engaging proactively in social media allow better control over your online footprint, whereas failure to update your professional profile and engage in social media might come across as being out of touch. social media has become a ubiquitous part of modern life and is a staple of contemporary communication. it is our professional duty as surgeons to become adept users of the different social media platforms so that we can interact with patients, colleagues, professional societies, and regulatory authorities in an ethical and professional manner. there are many, often hidden, pitfalls with the use of social media that we must be aware of and safeguarded against. thoughtful foresight and extending traditional expectations of professionalism to online behaviour, content, and engagement remain the most valuable guide to using social media.

references:
- best practices for surgeons' social media use: statement of the resident and associate society of the american college of surgeons
- gmc
- ama
- professional guidelines for social media use: a starting point
- social media in the mentorship and networking of physicians: important role for women in surgical specialties. anz
- social media is a necessary component of surgery practice
- seeing is believing: using visual abstracts to disseminate scientific research
- a data-driven social network intervention for improving organ donation awareness among minorities: analysis and optimization of a cross-sectional study
- social media and the surgeon
- online medical professionalism: patient and public relationships: policy statement from the american college of physicians and the federation of state medical boards

key: cord-355383-cqd2pa8c authors: olagoke, ayokunle a.; olagoke, olakanmi o.; hughes, ashley m.
title: exposure to coronavirus news on mainstream media: the role of risk perceptions and depression date: 2020-05-16 journal: br j health psychol doi: 10.1111/bjhp.12427 sha: doc_id: 355383 cord_uid: cqd2pa8c objective: the mainstream media tend to rely on news content that will increase risk perceptions of pandemic outbreaks to stimulate public response and persuade people to comply with preventive behaviours. the objective of this study was to examine associations between exposure to coronavirus disease (covid‐19) news, risk perceptions, and depressive symptoms. methods: cross‐sectional data were collected from 501 participants who were ≥18 years. exposure to covid‐19 news was assessed as our exposure variable. we screened for depression (outcome variable) with the patient health questionnaire and examined the roles of risk perceptions. multiple linear regressions and mediation analysis with 1000 bootstrap resamples were conducted. results: participants were 55.29% female, 67.86% white with mean age 32.44 ± 11.94 years. after controlling for sociodemographic and socio‐economic factors, news exposure was positively associated with depressive symptoms β = .11; 95% confidence interval (95%ci) = 0.02–0.20. mediation analysis showed that perceived vulnerability to covid‐19 mediated 34.4% of this relationship (β = .04; 95%ci = 0.01–0.06). conclusion: perceived vulnerability to covid‐19 can serve as a pathway through which exposure to covid‐19 news on mainstream media may be associated with depressive symptoms. based on our findings, we offered recommendations for media–health partnership, practice, and research. since the onset of the first reported coronavirus disease (covid-19) patient on 1 december 2019 (huang et al., 2020) , the disease has rapidly spread worldwide to over 100 countries with more than 937,000 cases, 47,256 deaths (world health organization, 2020). 
as of 3 april 2020, the united states has recorded 239,279 cases and 5,443 deaths, making it the leading nation in the world for the number of cases (cdc, 2020). these alarming estimates have triggered emergency preparedness efforts and public awareness through the swift dissemination of risk-framed information in the mainstream media (cowper, 2020; dyer, 2020; laupacis, 2020; oxford analytica, 2020). understandably, it is important to glean up-to-date information regularly to stay in compliance with local and state authorities, as well as to mind the well-being of family and loved ones in different parts of the world. the purpose of this current study was to assess the psychosocial outcomes of exposure to covid-19 news on the mainstream media. american mainstream media is known to shape public perceptions through its widely available news conglomerates (e.g., digital, electronic, and print media) (atton, 2002; nair, janenova, & serikbayeva, 2020), giving it a significant influence on the psychological outcomes of rapidly shared information during pandemics (leo & lacasse, 2008; meadows & foxwell, 2011). however, managing the covid-19 pandemic requires a balanced and prompt approach which informs the public on what they can do without causing a mental health burden (cowper, 2020; keles, mccrae, & grealish, 2020; seabrook, kern, & rickard, 2016). in an attempt to stimulate public response and threat perception, and to persuade people to comply with preventive policies and regulations, the mainstream media rely on producing news content that will increase perceived self-efficacy to protect oneself, perceived vulnerability to the disease, and the perceived severity of pandemic outbreaks (bish & michie, 2010; park, boatwright, & avery, 2019; pieri, 2019). for instance, in recent video coverage, cnn interviewed recovering covid-19 patients who described their perceived vulnerability, severity, and experiences with the disease (cnn, 2020).
however, frequent exposure to risk-framed news may negatively impact viewers' mental health (keles et al., 2020; seabrook et al., 2016). therefore, it is important to investigate the mental health consequences of exposure to pandemic news on mainstream media. the objective of this study was to examine the association between exposure to covid-19-related news on mainstream media, risk perceptions, and depressive symptoms. to satisfy this objective, we deployed a survey tool with validated screening techniques for rapid assessment of the mental health implications of the ongoing pandemic. the study was approved by the institutional review board of the university of illinois at chicago. all participants signed the online informed consent form before proceeding with the survey. participants were recruited via prolific, an online crowdsourcing platform for researchers (peer, brandimarte, samat, & acquisti, 2017). recent studies have established the diversity of the participant pool and the high quality of the data collected from prolific. for example, compared to other crowdsourcing platforms, participants from prolific scored higher on attention checks, engaged in less dishonest behaviour, and were able to reproduce existing results (palan & schitter, 2018; peer et al., 2017). eligibility criteria for participants were (1) residence in the united states and (2) being 18 years or older. cross-sectional data were collected from 502 participants on 25 march 2020, through the qualtrics online survey. exposure to covid-19 news on mainstream media was measured by asking participants four questions starting with the stem statement how frequently do you get information on coronavirus from any of the following sources? (1) cable news channels (e.g., fox news, cnn), (2) local news channels, (3) new york times, and (4) washington post. response options ranged from 1 = never to 5 = daily.
responses were summed and divided by 4 to provide a composite score ranging from 1 to 5 for each participant. the most recent information source for covid-19 was assessed by adapting one question from the national cancer institute health information national trends survey (hints) (nelson et al., 2004). participants were asked the most recent time you looked for information about covid-19, where did you go first? response options included television, friends, and government websites. the patient health questionnaire (phq-2) (gelaye et al., 2016), an established screening tool (intraclass correlation of .92), was used to screen for depressive symptoms. the phq probed well-being over the past 2 weeks (the recommended time of self-quarantine): over the past 2 weeks, how often have you been bothered by any of the following problems? (1) little interest or pleasure in doing things and (2) feeling down, depressed, or hopeless. response options were (1) nearly every day, (2) more than half the days, (3) several days, and (4) not at all. responses were reverse-coded and averaged to range from 1 to 4, with low numbers indicating low depressive symptoms. perceived vulnerability to covid-19 was assessed by adapting gainforth's perceived vulnerability scale (α = .95) (gainforth, cao, & latimer-cheung, 2012). three items with 5-point response options (1 = strongly disagree and 5 = strongly agree) were used to assess the participants' anxiety, fear, and worry about being infected with covid-19. items started with the statement thinking about the possibility of being infected with coronavirus makes me feel: (a) anxious, (b) fearful, and (c) worried. perceived severity of covid-19 was measured using a single item that asked respondents coronavirus is a serious infection for me to contract. response options ranged from 1 = strongly disagree to 5 = strongly agree.
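the two scoring rules described above, the four-item exposure composite (summed and divided by 4) and the phq-2 (items reverse-coded and averaged), can be sketched as follows. this is our illustrative restatement of the paper's stated scoring, not the authors' code, and the function names are ours.

```python
def news_exposure_score(items):
    """composite exposure score: the four 1-5 frequency items are
    summed and divided by 4, giving a score from 1 to 5."""
    assert len(items) == 4 and all(1 <= i <= 5 for i in items)
    return sum(items) / 4

def phq2_score(items):
    """phq-2 screen: the two items, coded 1 = nearly every day up to
    4 = not at all, are reverse-coded (r -> 5 - r) and averaged, so
    higher scores indicate more depressive symptoms."""
    assert len(items) == 2 and all(1 <= i <= 4 for i in items)
    return sum(5 - r for r in items) / 2
```

for example, `news_exposure_score([5, 3, 1, 1])` gives 2.5, and a respondent answering "not at all" to both phq-2 items, `phq2_score([4, 4])`, gets the minimum score of 1.0.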
perceived self-efficacy to practice protective behaviour was assessed using a 4-item measure (ajzen, 2002) (α = .83) about participants' perceived confidence and perceived control in protecting themselves against covid-19 infection. an example of an item was it is possible for me to protect myself against coronavirus infection. response options ranged from 1 = strongly disagree to 5 = strongly agree. as covid-19 perceptions of risk and media consumption are likely to be influenced by key demographics (e.g., age, underlying conditions), we collected key demographic variables for statistical control (liu, huang, & brown, 1998; primack, swanier, georgiopoulos, land, & fine, 2009). more specifically, participants reported on the following important demographic characteristics: sociodemographic characteristics, for example, age (continuous variable), sex (female, male), race (white, african american, asian, hispanic, american indian, middle east and north africa [mena], and others), and marital status. for marital status, categories included married, divorced, separated, widowed, or single. socio-economic status (ses) characteristics included household income (<$20,000, $20,000-<$35,000, $35,000-<$50,000, $50,000-<$75,000, and $75,000 or more), employment status, and education (less than high school, high school graduate, some college, college graduate, or more). race was self-reported by participants based on the provided categories. it was considered to be an important confounder because both media exposure (johnson, adams, hall, & ashburn, 1997) and depressive symptoms (morris et al., 2011) vary by race/ethnicity. participants' characteristics were analyzed using descriptive statistics such as frequencies, proportions, means, and standard deviations. the one-way analysis of variance (anova) was performed to investigate the mean differences in depressive symptoms by participants' characteristics.
pearson correlations were calculated to test bivariate associations between continuous variables. to further investigate the relationship between exposure to covid-19 news on mainstream media and depressive symptoms, a multivariable analysis was calculated to test the relationship between the exposure and depressive symptoms. model 1 tested the unadjusted relationship. in model 2, we adjusted for sociodemographic factors, and in model 3, we included the ses variables. finally, mediation analysis was conducted to test the possible mediating role of perceptions in the relationship between news exposure and depressive symptoms. alternative pathways were also tested to address the concern of causal ordering (i.e., the mediating role of news exposure in the relationship between perceptions and depressive symptoms). regression models were fitted in four steps according to the procedures outlined by sobel (1982) to assess whether the association between the independent variable and depressive symptoms was mediated by risk perceptions. statistical tests were two-sided, and p < .05 was considered statistically significant. effect sizes and their confidence intervals (bootstrapping methods were used to estimate 95% confidence intervals for the indirect effects) were reported to interpret findings (cumming, 2014). statistical analyses were performed using sas version 9.4 (sas institute inc., cary, nc, usa). after the exclusion of one participant who failed the attention check (table 1), the remaining participants (n = 501) reported a mean age of 32.44 ± 11.94 years and were mostly female (55.29%), white (67.86%), single/never married (68.46%), college graduates or more (53.71%), and employed (54.89%). participants reported exposure to covid-19 news on mainstream media of 2.73 ± 0.91, depressive symptoms (1.92 ± 0.93), perceived severity (3.73 ± 1.19), perceived vulnerability (3.67 ± 1.07), and self-efficacy (4.01 ± 0.67).
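the percentile-bootstrap test of the indirect effect described above can be sketched in a few lines. this is a simplified, stdlib-only illustration, not the authors' sas code: it estimates the indirect effect a*b for a single mediator (a = slope of the mediator on the exposure; b = the mediator's coefficient in the two-predictor outcome model, obtained via the frisch-waugh-lovell residual trick) and omits the sociodemographic and ses covariate adjustment the paper applies. all function names are ours.

```python
import random
from statistics import mean

def slope(x, y):
    # ols slope of y on x (simple regression, with intercept)
    mx, my = mean(x), mean(y)
    den = sum((xi - mx) ** 2 for xi in x)
    if den == 0:  # degenerate resample: no variation in x
        return 0.0
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / den

def residuals(y, x):
    # residuals of y after regressing on x (with intercept)
    b, my, mx = slope(x, y), mean(y), mean(x)
    return [yi - (my + b * (xi - mx)) for yi, xi in zip(y, x)]

def indirect_effect(x, m, y):
    """a * b: a = slope of m on x; b = coefficient of m in y ~ x + m,
    recovered by regressing y on the residuals of m after x."""
    return slope(x, m) * slope(residuals(m, x), y)

def bootstrap_ci(x, m, y, n_boot=1000, seed=1):
    """percentile bootstrap 95% ci for the indirect effect."""
    rng, n, est = random.Random(seed), len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        est.append(indirect_effect([x[i] for i in idx],
                                   [m[i] for i in idx],
                                   [y[i] for i in idx]))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]
```

the mediation is deemed significant when the bootstrap interval excludes zero, which is how the paper's reported interval for perceived vulnerability should be read.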
mean levels of depressive symptoms by participants' characteristics (table 1) showed that participants who were single/never married (2.05 ± 0.93), had less than a high school diploma or a high school diploma (2.20 ± 1.06), had a household income of $15,000-$34,999 (2.2 ± 0.97), were students (2.10 ± 0.95), or perceived a risk of losing their jobs (2.20 ± 0.80) reported higher depressive symptoms. pearson's correlation analysis showed that depressive symptoms were negatively associated with age (r = −.22, p < .001) and self-efficacy to protect oneself (r = −.13, p < .01), and positively associated with perceived vulnerability (r = .23, p < .001). model 1 showed a non-significant positive association between exposure to covid-19 news and depressive symptoms (b = .06; 95% confidence interval [ci] = −0.03 to 0.14). this relationship remained non-significant in model 2 (b = .07; 95% ci = −0.01 to 0.16). however, after including ses in model 3, there was a significant positive association between news exposure and depressive symptoms (b = .11; 95% ci = 0.02-0.20). we also found significant positive relationships between news exposure and perceived severity (b = .08; 95% ci = 0.01-0.15) and perceived vulnerability (b = .21; 95% ci = 0.13-0.28), and a significant negative relationship with perceived self-efficacy to practice protective behaviour (b = −.16; 95% ci = −0.27 to −0.04). standardized mediation tests showed perceived vulnerability mediating 34.4% (bias-corrected 95% ci = 7.79-149.35) of the relationship between exposure to covid-19 news on mainstream media and depressive symptoms (figure 1), with an indirect effect of b = .04 (95% ci = 0.01-0.06). we recorded non-significant indirect effects through perceived severity (b = .00; 95% ci = −0.01 to 0.02) and perceived self-efficacy (b = .01; 95% ci = −0.03 to 0.00). 
in the analysis of the alternative pathway, where news exposure mediated the relationship between perceived vulnerability and depressive symptoms, we found a non-significant indirect effect (b = .01; 95% ci = −0.00 to 0.02), a partial mediation of only 2.18% (bias-corrected 95% ci = −0.63 to 7.65) of the total effect. in this analysis, the relationship between exposure to covid-19 news on mainstream media and depressive symptoms was partially mediated by perceived vulnerability to covid-19. our findings suggest that perceived vulnerability to covid-19 can serve as a pathway through which exposure to covid-19 news on mainstream media may be associated with depressive symptoms. the non-significance of the alternative pathway model further addresses concerns of causal ordering, which supports our premise. covid-19 news coverage on mainstream media has been dominated by negatively framed information (cowper, 2020; dyer, 2020), such as uncertainties concerning the virus-host interaction and the evolution of the pandemic (wang, wang, chen, & qin, 2020), the increasing magnitude of the disease (e.g., the number of cases and fatalities both nationally and globally) (cdc, 2020), momentous government policies (lazzerini & putoto, 2020), and increasing health care demand (ferguson, laydon, & nedjati-gilani, 2020), all of which have psychological implications. we, therefore, offer the following recommendations: first, in addition to disseminating information on vulnerability to covid-19, public health workers should work with mainstream media to provide teletherapy and mental health resource content to their viewers. second, considering the mediating role of perceived vulnerability, and its negative correlation with perceived self-efficacy to practice preventive measures, mainstream media should consider offering balanced information that will increase viewers' confidence to practice protective measures in addition to information on vulnerability. 
third, primary care providers should screen for depressive symptoms while following up with their patients, either in person or via telemedicine. our study is not without limitations. first, our sample consists largely of young, educated adults and is therefore not representative; hence, our results may not be generalizable across the united states and should be interpreted with caution. second, the cross-sectional study design makes it challenging to establish causality and warrants careful interpretation of our results. third, our measure of exposure to mainstream media was developed by the researchers and not validated. nevertheless, its correlation with other items is consistent with previous studies on media exposure (keles et al., 2020; seabrook et al., 2016). future studies should therefore consider testing these relationships longitudinally and with validated measures to improve our understanding of these associations. in this study of 501 participants, perceived vulnerability mediated the relationship between exposure to covid-19 news on the mainstream media and depressive symptoms. in addition to providing information about vulnerability to covid-19, mainstream media outlets should offer content that will reduce their viewers' mental health burden at this critical time.
covid-19 news, risk perceptions and depression
constructing a tpb questionnaire: conceptual and methodological considerations
news cultures and new social movements: radical journalism and the mainstream media
demographic and attitudinal determinants of protective behaviours during a pandemic: a review
recovering patients describe what it's like to contract coronavirus - cnn video
covid-19: are we getting the communications right? 
the new statistics
trump claims public health warnings on covid-19 are a conspiracy against him
impact of non-pharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand
determinants of human papillomavirus (hpv) vaccination intent among three canadian target groups
diagnostic validity of the patient health questionnaire-2 (phq-2) among ethiopian adults
clinical features of patients infected with 2019 novel coronavirus in wuhan
race, media, and violence: differential racial effects of exposure to violent news stories
a systematic review: the influence of social media on depression, anxiety and psychological distress in adolescents
working together to contain and manage covid-19
covid-19 in italy: momentous decisions and many uncertainties. the lancet global health
the media and the chemical imbalance theory of depression
community broadcasting and mental health: the role of local radio and television in enhancing emotional and social well-being
association between depression and inflammation - differences by race and sex: the meta-health study
social and mainstream media relations
the health information national trends survey (hints): development, design, and dissemination
misinformation will undermine coronavirus responses
prolific.ac - a subject pool for online experiments
information channel preference in health crisis: exploring the roles of perceived risk, preparedness, knowledge, and intent to follow directives
beyond the turk: alternative platforms for crowdsourcing behavioral research
media framing and the threat of global pandemics: the ebola crisis in uk media and policy response
association between media use in adolescence and depression in young adulthood: a longitudinal study
social networking sites, depression, and anxiety: a systematic review
asymptotic confidence intervals for indirect effects in structural equation models
unique epidemiological and clinical features of the emerging 2019 novel coronavirus pneumonia (covid-19) implicate special control measures
novel coronavirus disease 2019 (covid-19) situation report - 68
we are grateful to professor david dubois for his helpful advice. all authors declare no conflict of interest. the data that support the findings of this study are available on request from the corresponding author. the data are not publicly available due to privacy or ethical restrictions. key: cord-334574-1gd9sz4z authors: little, jessica s.; romee, rizwan title: tweeting from the bench: twitter and the physician-scientist benefits and challenges date: 2020-11-11 journal: curr hematol malig rep doi: 10.1007/s11899-020-00601-5 sha: doc_id: 334574 cord_uid: 1gd9sz4z purpose of review: social media platforms such as twitter are increasingly utilized to interact, collaborate, and exchange information within the academic medicine community. however, as twitter begins to be formally incorporated into professional meetings, educational activities, and even the consideration of academic promotion, it is critical to better understand both the benefits and challenges posed by this platform. recent findings: twitter use is rising amongst healthcare providers nationally and internationally, including in the field of hematology and oncology. participation on twitter at national conferences such as the annual meetings of the american society of hematology (ash) and the american society of clinical oncology (asco) has steadily increased over recent years. tweeting can be used advantageously to cultivate opportunities for networking or collaboration, promote one's research and increase access to others' research, and provide an efficient means of learning and educating. however, given the novelty of this platform and the little formal training on its use, concerns regarding patient privacy, professionalism, and equity must be considered. summary: these new technologies present unique opportunities for career development, networking, research advancement, and efficient learning. 
from “tweet ups” to twitter journal clubs, physician-scientists are quickly learning how to capitalize on the opportunities that this medium offers. yet caution must be exercised to ensure that the information exchanged is valid and true, that professionalism is maintained, that patient privacy is protected, and that this platform does not reinforce preexisting structural inequalities. social media is a rapidly evolving platform for communication that is increasingly being utilized across the academic medicine community. twitter, a free microblogging platform, enables users to read and post 280-character messages called "tweets" [1•, 2]. twitter provides novel opportunities for physician-scientists to interact and collaborate across institutions and diverse fields. it increases access to research and enables real-time discussion of new publications [3]. not only does it serve to disseminate information, it also may be utilized as a means to generate data [4, 5]. as this platform is increasingly integrated into the academic medical community, it is important to consider both the benefits and potential challenges posed by this technology. twitter offers physicians and scientists the opportunity to connect and advance common interests. even trainees at an early stage are able to follow and engage with leading experts in a particular specialty with greater ease, thus advancing their understanding of key scholarship or topics of discussion at the forefront of the field [5, 6]. additionally, engagement on twitter prior to and during academic meetings can help build professional relationships and communities that may lead to future collaborations or opportunities for career advancement [1•, 2, 7]. in one recent analysis of tweets during the american society of clinical oncology annual meetings between 2011 and 2016, pemmaraju and colleagues found that both the number of individual authors and the overall number of tweets significantly increased over the 5-year period [8]. 
meeting attendees may tweet responses and commentary to presented scholarship and even arrange "tweet ups," or face-to-face meetings for those who met virtually on twitter [5, 9]. and while in the past missing a national or international conference may have meant losing access to important new data, ideas, or opportunities for collaboration, now, as academic meetings are increasingly integrated with social media, physicians can watch presentations, participate in discussions, and network with other attendees remotely [1, 10, 11]. mentorship and academic sponsorship can also be practiced through the medium of twitter. mentors or academic sponsors advanced in their field who have increased influence or impact on twitter can promote the accomplishments of their mentees to increase their individual visibility. likewise, individuals can promote their own accomplishments, including research publications, academic promotions, or awards, targeting a broader audience that may result in additional career opportunities [12, 13]. and as social media engagement continues to grow, academic institutions such as mayo clinic have even begun to consider ways to incorporate social media scholarship into metrics for academic promotion and tenure [14•]. while there are considerable potential professional benefits to engaging in social media platforms such as twitter, there are also challenges. social media may blur the line between the professional and personal identity of a physician, and missteps may harm the professional reputation of users [15, 16]. it is therefore critical to compose each "tweet" with the understanding that the post will be public and permanent [5]. in one 2010 study led by chretien et al., 5156 tweets from self-identified physicians were analyzed over one month. 
of those, 144 tweets were categorized as unprofessional, with 38 representing potential patient privacy violations, 33 containing profanity, 14 containing sexually explicit material, and 4 containing discriminatory statements. and amongst the 27 users responsible for privacy violations, 25 (92%) were identifiable by the full name listed on their profile, photo, or linked website [17, 18]. furthermore, physicians are not simply at risk of disapproval by colleagues and patients or punitive actions by employers. a survey of the directors of medical and osteopathic boards revealed that 92% (44 out of 48 respondents) indicated at least one of several online professionalism violations had been reported to the board. in response, 71% held disciplinary hearings, and serious disciplinary outcomes including license restriction, suspension, or revocation occurred at 56% of the boards [18]. in response to these concerns, the american medical association created guidelines for social media use amongst physicians [19]. however, this guidance does not provide clear rules of conduct and should serve as simply the first step in the construction of formal policies and training for physicians on social media across institutions. another key issue introduced by the use of twitter is the potential amplification of the implicit biases and structural inequality already problematic in academic medicine. while many maintain that twitter can increase equity by opening new channels of communication to diverse individuals across geographic, socioeconomic, and disciplinary barriers, others argue that social media may increase the impact of those who already have the most impact and thus exacerbate inequality [12]. gender inequalities have already been identified in many key areas across medicine, and gender bias in the way women are addressed and perceived may affect career advancement [20, 21]. how twitter reinforces these biases must be considered. one study by zhu et al. 
identified twitter users amongst speakers and coauthors presenting at academyhealth's 2018 annual research meeting and evaluated their most recent tweets. amongst more than 3000 health services researchers, women had less influence on twitter than men, with half the mean number of followers and fewer mean likes and retweets per year. these differences were largest amongst full professors and similar across the distribution of number of tweets [22•]. further investigation is needed into whether these inequities exist for other underrepresented minorities on twitter. finally, it is important to acknowledge that twitter may have detrimental effects on the productivity of participants. while small steps are being taken towards acknowledging activity and scholarship on social media at certain institutions, there is still minimal formal recognition of physician use of twitter in a professional sense [6, 14]. it can be easy to sacrifice the slower, more laborious work of designing studies, writing papers or book chapters, and keeping up with patient charting when faced with the potential positive feedback loop of a popular tweet. benefits: social media, and twitter in particular, have radically transformed the landscape of information sharing, and this is especially relevant in relation to biomedical research. the platform presents opportunities for rapid review of new papers, easy access to multiple journals and expert opinions, increased potential for crowdsourcing, and enhanced post-publication peer review. physicians can follow respected journals, professional societies, and mentors or colleagues who may be sharing important advances in the field. in this way, physicians can stay up to date with minimal time expended. tweets and articles can be saved or "bookmarked" to review in more detail later [5]. similarly, researchers may increase the impact of their work by using twitter. 
one study analyzing 4208 tweets showed that highly tweeted articles were 100 times more likely to be highly cited than less-tweeted articles [23]. journals may also utilize social media such as twitter to increase the impact of their work. one group recently proposed instituting a tif, or twitter impact factor, to measure a journal's academic reach and impact on the social media platform [24]. twitter has also encouraged innovative forms of communicating research findings. one recent prospective case-control crossover study looked at 44 research articles published in the same year in annals of surgery. each article was tweeted in two formats: as the title alone or as the title with a visual abstract. a strong correlation was found between the use of visual abstracts and increased dissemination on social media. additionally, the tweeted articles with a visual abstract received more site visits than the articles without visual abstracts [25]. one area that has expanded rapidly on twitter is post-publication peer review and twitter journal clubs. journal clubs have long served as important tools for propagating new research, practicing evidence-based medicine, and developing skills to evaluate research design and the validity of findings [26-28]. recently, a diverse range of twitter journal clubs has arisen, including id journal club, nephjc, the jgim twitter journal club, and others [27, 28]. organizers choose articles and indicate a date and time for the meeting. tweets are organized and referenced by hashtags, and participants can follow along or interact by commenting on individual tweets. content experts or authors may be invited, and physicians at all levels may join in to learn collectively. and while these meetings often cater to physicians and physician-scientists, journal clubs are typically open to any individual, including patients, allowing improved public dissemination of new research advances. 
crowdsourcing and collaboration during peer review may lead to important findings of design or methodology errors, statistical inconsistencies, or other flaws in publications. in one case, twitter critics rapidly identified errors in methodology in an article in science titled "genetic signatures of exceptional longevity in humans". within a week, the authors released a statement acknowledging a technical error in the lab test used and the paper was eventually retracted [29] . as the speed and breadth of scientific publication increases, twitter remains an important resource to critically appraise the expanding literature. crowdsourcing and network utilization may also be used positively to impact public health efforts by disseminating educational information to communities, amplifying emergency notifications, and enhancing aid efforts when needed [2] . this has been a particularly useful tool during the covid-19 pandemic of 2020 as the cdc and local health departments have used twitter to circulate critical health information. while social media provides an immense opportunity for information uptake and dissemination, there are important caveats to this information exchange. misinformation is rampant, and developing the ability to discern true facts from misinformation is increasingly challenging as technology advances. new innovations such as the verified badge allow users to know if accounts are authentic, though this may not apply uniformly. as pershad et al. noted, while a celebrity may be verified due to his/her role in the public eye, that individual's views on healthcare topics such as vaccination may not be valid health information [2] . additionally, twitter engagement may be purchased unbeknownst to viewers. in one analysis of the asco 2016 annual meeting, the second largest number of retweets was from fake engagement or purchased retweets by a third party [8] . 
in another study, by desai et al., tweets contained in the official twitter hashtags of thirteen medical conferences from 2011 to 2013 were analyzed. the twitter influence of third-party commercial entities was found to be similar to that of healthcare providers [30]. it is critical to curb this fake engagement at professional medical meetings moving forward in order to reduce bias and promote transparency. even when physician accounts and engagement are authentic, financial conflicts of interest are frequently not revealed on social media. this may also bias the transmission of information, particularly if populations with less medical expertise, such as patients, are involved. in one study in jama, 504 out of 634 hematologist-oncologists in the usa who use twitter were found to have some financial conflicts of interest [31]. however, no clear disclosure regulations exist for physician social media use, and this should be duly considered when evaluating information sources. another example of the potential challenges of twitter was demonstrated by the rapid increase in preprints over recent years, notably during the covid-19 pandemic. while preprints are beneficial in making novel findings rapidly available, these manuscripts often have not undergone the full peer review process. inexperience among the media and lay public in distinguishing peer-reviewed from non-peer-reviewed publications can lead to magnification of erroneous findings [32, 33]. twitter not only creates unique opportunities for learning about new research findings, it can also provide rich clinical educational content [34]. "tweetorials," or threaded tweets, are used frequently to present lessons on clinical topics and engage learners at all levels [35]. teaching podcasts such as "the curbsiders" and "clinical problem solvers" have also utilized twitter to widen their audience and condense important lessons into easily digestible tweets. 
one systematic review examined 29 studies that assessed the effect of social media platforms on graduate medical education. these modalities were used to share clinical teaching points, disseminate evidence-based medicine, and circulate conference materials. given the fast-paced nature of medical residency, social media provides a logical space for on-the-go learning and review. one notable finding was that most studies offered mixed results and provided little guidance on how best to incorporate social media platforms formally into graduate medical education [36]. not only does twitter provide opportunities for trainee and continuing medical education, it also may be used as a critical tool for patient education [37]. in one survey-based study, a breast cancer social media twitter support community was created. respondents reported increased knowledge about their breast cancer in a variety of areas, and participation led 31.2% to seek a second opinion or bring additional information to the attention of their treatment team [38]. on twitter, communities can be created for and by patients using disease-specific hashtags [39]. for rare diseases in particular, these communities can facilitate new avenues for connection, education, and collaboration between patients and physicians working in highly specialized areas [40]. these networks can even be used to propagate information about available clinical trials to diverse populations [41]. important limitations to learning via twitter remain. patient privacy issues can arise, particularly as photos, radiologic images, and case descriptions are more widely shared [2, 5, 42]. twitter can serve as an echo chamber, where ideas are magnified by like-minded individuals in close networks, reducing the sharing of outside perspectives [6]. finally, the volume of information can overwhelm users, making it difficult to distinguish valuable knowledge from irrelevant comments. 
there are significant benefits to the effective utilization of social media platforms such as twitter. physicians and scientists may grow their networks, gain career opportunities, expand the impact of their research, connect with patients, stay up to date on novel discoveries, and much more. however, clear frameworks for professional use of this technology are still being developed. it is vital to better understand the risks to patients and providers in order to safely and deliberately integrate this valuable tool into our institutions and practices. conflict of interest: the authors declare that there is no conflict of interest. human and animal rights and informed consent: this article does not contain any studies with human or animal subjects performed by any of the authors.
the use and impact of twitter at medical conferences: best practices and twitter etiquette
social medicine: twitter in healthcare
scientists in the twitterverse
twitter 101 and beyond: introduction to social media platforms available to practicing hematologist/oncologists
risks and benefits of twitter use by hematologists/oncologists in the era of digital medicine
twitter as a tool for communication and knowledge exchange in academic medicine: a guide for skeptics and novices
trends in twitter use by physicians at the american society of clinical oncology annual meeting
analysis of the use and impact of twitter during american society of clinical oncology annual meetings from 2011 to 2016: focus on advanced metrics and user trends
social media and the practicing hematologist: twitter 101 for the busy healthcare provider
tweeting the meeting
leveraging social media for cardio-oncology
using social media to promote academic research: identifying the benefits of twitter for sharing academic work
academics and social networking sites: benefits, problems and tensions in professional engagement with online networking
first demonstration of one academic institution's consideration of incorporation of social media scholarship into academic promotion
professionalism in the digital age
social media and physicians' online identity crisis
physicians on twitter
physician violations of online professionalism and disciplinary actions: a national survey of state medical boards
report of the ama council on ethical and judicial affairs: professionalism in the use of social media
evaluating unconscious bias: speaker introductions at an international oncology conference
gender differences in publication rates in oncology: looking at the past, present, and future
gender differences in twitter use and influence among health policy and health services researchers
can tweets predict citations? metrics of social impact based on twitter and correlation with traditional metrics of scientific impact
introducing the twitter impact factor: an objective measure of urology's academic impact on twitter
visual abstracts to disseminate research on social media
the journal club and medical education: over one hundred years of unrecorded history
the evolution of the journal club: from osler to twitter
the times they are a-changin': academia, social media and the jgim twitter journal club
peer review: trial by twitter
quantifying the twitter influence of third party commercial entities versus healthcare providers in thirteen medical conferences from
financial conflicts of interest among hematologist-oncologists on twitter
will the pandemic permanently alter scientific publishing?
how swamped preprint servers are blocking bad coronavirus research
twitter-based learning for continuing medical education? 
from tweetstorm to tweetorials: threaded tweets as a tool for medical education and knowledge dissemination
the use of social media in graduate medical education
cancer patients on twitter: a novel patient community on social media
twitter social media is an effective tool for breast cancer patient education and support: patient-reported outcomes by survey
disease-specific hashtags for online communication about cancer care
rare cancers and social media: analysis of twitter metrics in the first 2 years of a rare-disease community for myeloproliferative neoplasms on social media - #mpnsm
cancer communication in the social media age
pathology image-sharing on social media: recommendations for protecting privacy while motivating education
key: cord-031941-bxrjftnl authors: androutsopoulos, jannis title: investigating digital language/media practices, awareness, and pedagogy: introduction date: 2020-09-16 journal: nan doi: 10.1016/j.linged.2020.100872 sha: doc_id: 31941 cord_uid: bxrjftnl nan this special issue, initiated in july 2017 at the 18th world congress of applied linguistics (aila) in rio de janeiro and completed in the summer of 2020 amidst the covid-19 crisis, brings together researchers based in asia, australia, and europe with a background in applied linguistics and sociolinguistics and extensive expertise on digital language, communication, and literacy. based on original research, the six articles in this special issue examine the relationship between digital language practices and critical awareness of language and digital media, and explore how insights into everyday practices and understandings of digital communication may inform language pedagogy in a digital age. 
although the research they report is neither carried out in schools nor concerned with institutional learning processes, it remains education-relevant, as the authors investigate informal, extra-institutional digital language and communication among adolescent and young adult informants, most of whom still participate in institutional education at the secondary or tertiary level. table 1 provides an overview of sites, participants, and data sources for the six articles. the following brief introduction outlines three themes that all articles address: (a) the intricate bond of language and media in a digital era, (b) the need for a specifically digital approach to critical language/media awareness, and (c) implications of this research for digital language/media pedagogy. the ongoing digitalisation of society on a global scale (lundby, 2014) brings about far-reaching consequences for language and literacy practices. in the first two decades of the 21st century, digital literacy gained importance in all areas of private and professional practice. in many parts of the world today, a wide range of everyday activities depend on algorithm-based web services, with social interaction by means of networked mobile devices being commonplace. new patterns of interpersonal and professional communication are adopted in particular by adolescents and young adults, who lead all age cohorts in the frequency of online communication, as repeatedly attested by media research on an international scale, and whose non-institutional practices of digital writing, composing, remixing, and interacting are found to blur the boundaries between institutional and vernacular literacies (e.g., herring & androutsopoulos, 2015; iorio, 2016; jones, chik & hafner, 2015). the six articles in this si investigate these new practices at different levels of granularity. 
three articles operate at a narrow level of granularity, where analytic attention is paid to surface features of digital interaction, and connections are drawn between the linguistic resources mobilized in digital interaction and the affordances of the underlying technologies. in her article on "mode-switching in video-mediated interaction", maria grazia sindoni examines mode-switching, a recently popular practice of switching between speaking and typing in applications that afford simultaneous use of both language modalities. drawing on a corpus of video-calls by postgraduate students (who also transcribed the data and reflected on their multimodal transcriptions in the context of a university project), sindoni's article documents a range of creative interactional functions of mode-switching between speaking and texting on the same video-call platform. in their article on "digital punctuation as an interactional resource", jannis androutsopoulos and florian busch examine how secondary school students in germany use punctuation in informal messaging. focusing on one rarely used but highly salient punctuation sign, the message-final period, they examine how this sign undergoes a process of pragmaticalization, i.e. a gain of pragmatic functions at the expense of syntactic ones, and how it thereby comes to contextualize, and become reflexively enregistered with, communicative distance and institutional communication. shaila sultana and sender dovchin, in their article on "relocalization in digital language practices of university students in asian peripheries", present data from a digital ethnography project with university students in bangladesh and mongolia.
a close analysis of data from facebook profile pages shows how these students engage in multilingual practices and recontextualize ( relocalize , in the authors' own term) linguistic features from various languages and sources, thereby positioning themselves in independent, emancipatory, and resistant ways towards power regimes in their respective societies. the other three articles operate at a broader level of granularity. they shift focus away from linguistic/multimodal resources at a surface level and towards the narratives and ideologies that shape the experience and awareness of digital media users. even though microlinguistic analysis is also part of this approach (see georgakopoulou's use of corpus-linguistic techniques), the main interest here is in the tension between the power of corporations in designing and preconfiguring media ecologies, on the one hand, and the responses of users, whose awareness of software constraints and affordances becomes crucial in shaping their own semiotic actions, on the other. in their article on "context design and critical language/media awareness", caroline tagg and philip seargeant draw on interviews with facebook users to investigate how they perceive the contextual constraints that facebook sets up and design the context of their own communication through their decisions on whether, what, and how to post. in her article, "designing stories on social media", alexandra georgakopoulou casts a corpus-assisted, critical perspective on media reports about the story feature of instagram and snapchat. she examines how the primordial mode of storytelling is reconfigured ('designed') into an app feature that in turn forms the template of mass-scale digital storytelling among (predominantly younger) users, and identifies tensions that arise between the marketing of these app features and their actual affordances for semiotic practice.
the tension between corporate prefabrication and user awareness and agency is further explored by rodney h. jones in his contribution, "the text is reading you: teaching language in the age of the algorithm", where he draws on interviews and other forms of documentation among university students to examine how they reflect on the ways algorithms influence digital communication, how they themselves conceptualize algorithms (which jones classifies into six metaphors), and how they attempt to trick out their workings to their own personal benefit. together, these six articles move back and forth between broader and narrower levels of structural and contextual granularity in digital communication, i.e., between the multimodal resources people mobilize to do digital text and talk and the knowledge and ideologies that shape their experience and action as digital media users. all of the articles in this si explore the intricate bond between language and media at the levels of practice, product, and critical awareness. they draw on critical language awareness (cla), which emerged in the 1990s as a research field at the interface of applied and sociolinguistics, critical discourse analysis, and new literacy studies. drawing on research and institutional interventions, cla aims to make speakers aware of indexical differences in language, the social stratification of language varieties, the unequal ways in which different registers of communication index social power relations, and the use of discourse as a means for social change ( alim, 2010 ; cope & kalantzis, 2009 ; fairclough, 1992 ).
some articles also draw on critical media awareness, an academic field that started out by developing critical readings of (audiovisual) media content, media representations, and grassroots media production ( kellner & share, 2005 ), and more recently encourages critical understandings of digital communication from a social, political, and economic perspective (e.g., boyd & crawford, 2012 ; kellner & kim, 2010 ; kitchin, 2017 ). even though scholarship on critical awareness of language and media has developed separately by discipline (linguistics and media studies, respectively), the articles in the special issue support a joint perspective on 'language/media' with regard to exploring communicative practice and awareness. the main reason for this is quite simple. in a digital era, the selection and production of linguistic signs is tightly enmeshed with co-occurring selections of media devices and platforms or applications for the production of utterances and discourses. stylistic choices, in the widest sense of the term, therefore orient not only to individual or imagined audiences, but also to media applications that are enregistered with specific types of addressees and situations ( busch, 2018 ). for example, when it comes to punctuation signs, school students reflect on pragmatic functions of the period (as discussed by androutsopoulos and busch in this issue) specifically with regard to informal communication via messenger. their awareness of period usage is completely different when it comes to writing an email to a teacher, let alone their handwritten school essays. likewise, the contributions by sindoni as well as sultana and dovchin reveal language practices and awareness that are specifically valid for particular platforms of online interaction (video-conferencing software and social networking, respectively) rather than for written language as such or for private vs. public communication in general.
tagg and seargeant show in their article that social networking users build on their understanding of the algorithmic mechanisms behind the formation of digital publics in order to design the context of their contributions. in the 'algorithmic pragmatics' approach advocated by jones, social media users conceptualize algorithms in certain ways with regard to their workings in particular software environments, such as an online shopping or music streaming platform. thus, the articles in this special issue suggest we need to think beyond an apparent divide between language and digital media in terms of communicative practice and metapragmatic awareness. we need to find ways to research and theorize their convergence and interplay. this language/media approach gains momentum when articles discuss 'folk algorithmics' (jones), 'critical media-narrative awareness' (georgakopoulou), or a 'critical language/media awareness' (tagg and seargeant) that aims to understand how people make sense of the tension between what is preconfigured (by algorithms and/or corporate design decisions and marketing strategies) and what might be creatively shaped, and how their understandings of this tension shape whether, what, and how they communicate online. one idea that underpins this special issue is that out-of-school practices, skills, and understandings of digital language and communication are, in principle, transferable to institutional language education, and that such transfer might be beneficial. the notion that building on students' out-of-school practices can support the teaching of critical language and media awareness has been put forward by scholars such as gee (2007) and knobel and lankshear (2008).
our interest in the implications of our research for language pedagogy is in the spirit of cope and kalantzis (2009), whose multiliteracies approach is grounded in bringing together "what was happening in the world of communications" with "the teaching of language and literacy in schools" (2009: 164). this line of thinking has been part of conversations in linguistics & education ( cekaite & bjork-willen, 2018 ; duran, 2017 ; fernández-fontecha, o'halloran, wignell, & tan, 2020 ; lacasa, martínez & méndez, 2008 ; mcginnis, goodstein-stolzenberg & saliani, 2007 ; rothoni, 2017 ), and has been taken up in foreign language teaching and learning, for example with regard to expanding the scope of efl writing pedagogy by encompassing vernacular uses of english in various youth cultures (e.g., kim, 2018 ; schreiber, 2015 ; shepard-carey, 2020 ). this line of scholarship must be distinguished from the thriving research field of computer-assisted language learning and teaching (call, e.g., dooly & o'dowd, 2012 ; guth & helm, 2010 ). call research examines a wide range of practices and arrangements for language teaching and learning with digital technologies, including the provision of digital content, testing and assessment, the organization of tandems, and other cross-linguistic learning activities. none of the articles here engage with call research. our interest rather lies in a critical digital language/media pedagogy that brings to the fore out-of-school practices that often go unnoticed and unappreciated by educators. it is worthwhile to set this interest in the context of broader attempts towards a sociolinguistically inclusive language pedagogy, which integrates language variation and language varieties into curricular content, overcomes language-ideological binaries and limitations, and legitimizes multiple ways of using language in society.
so, while the broader idea of introducing vernacular voices and genres into the curriculum is by no means new, in the digital era it finds new fields of application and new semiotic configurations, notably with regard to multimodality and transmedia. together, the articles in this special issue propose a perspective on digital language and media pedagogy that entails a critical examination of standard language ideology and of the primacy of language as an autonomous system of meaning-making. each article contributes to this approach with a different suggestion. sultana and dovchin align with literature on foreign language teaching and learning that advocates integrating vernacular uses of english to expand the scope of efl writing pedagogy (e.g., rothoni, 2017 ; schreiber, 2015 ), and argue that students' transmodal and translingual practices should be considered in efl teaching. sindoni emphasizes the potential of multimodality for student self-expression, which counterbalances a sole reliance on language for practices of self-expression and interaction (see also duran, 2017 ; mcginnis et al., 2007 ; schreiber, 2015 ; sultana, 2014 ). androutsopoulos and busch cast a critical perspective on the way german language textbooks frame language use online, often reducing it to a special vocabulary or jargon (cf. kiesendahl, 2015 ) and thereby eventually sustaining standard language ideologies. using the example of punctuation, they argue that including informal digital literacy and metapragmatic awareness in punctuation teaching may promote attention to the communicative functions of punctuation signs rather than the now-prevailing normative focus on the linguistic forms themselves. tagg and seargeant advocate a 'social digital literacies education', which aims to build on and enhance people's understanding of the complexities of online interactions.
they argue that enhancing language/media literacy requires understanding how people make sense of online interaction, and how this awareness shapes the type and nature of their communication via social media. for jones, an important task for language pedagogy in the age of the algorithm is to raise critical awareness of how algorithmic processes determine aspects of digital communication and interactivity, especially in terms of what is made available to us as an option to consume, i.e. to listen to, read, watch, or purchase. georgakopoulou, too, advocates a critical perspective on social-media stories, uncovering mismatches between the corporate marketing of storytelling apps, these apps' affordances, and their users' actual communicative practices. i conclude with an observation from my own teaching experience during the covid-19 pandemic in the spring term of 2020. the pandemic makes us aware of the far-reaching implications of digital media for education, a point extensively discussed by alice chik and phil benson in their commentary for this si. practically all teaching during the early months of the pandemic depended on digital platforms and especially video-conferencing software (such as zoom), and so did all attempts of educators to network and jointly reflect on their practices and solutions for remote teaching. these solutions depend, on the one hand, on top-down regulations, some taken in haste and without consultation with practitioners themselves. on the other hand, they also crucially depend on the digital skills that participants bring along to computer-based, video-mediated teaching in times of crisis. in hamburg, for example, we eventually settled on a combination of synchronous and asynchronous modes for all university courses during the spring 2020 term. the asynchronous component is based on various digital blackboard systems that students and staff are already familiar with.
the synchronous portion draws on video-conferencing and webinar software, especially zoom, in order to simulate classroom interaction. this was entirely unprecedented as an institution-wide regulation, and its swift implementation in a very short time span caught quite a few members of academic staff by surprise. i soon realized, however, that many of my students seemed quite at ease with video interaction. they operated the platform smoothly, effortlessly switching their cameras and mics on and off depending on the ongoing activity, and even introduced zoom affordances such as reaction emojis and chatting into the workings of the digital classroom. no one taught them these skills at university. rather, they mobilized non-institutional experiences with video interaction, such as skyping with families and friends, and adapted them to cope more efficiently during the crisis. this is offered as a vivid example of an important point this si aims to make: out-of-school digital skills are transferable, and research can help identify these skills and understand how they can be transferred into educational institutions.

critical language awareness
critical questions for big data. information, communication & society
digital writing practices and media ideologies of german adolescents. the mouth. critical studies in language
enchantment in storytelling: co-operation and participation in children's aesthetic experience. linguistics and education
"multiliteracies": new literacies, new learning. pedagogies
researching online foreign language interaction and exchange. theories, methods and challenges
"you not die yet": karenni refugee children's language socialization in a video gaming community
critical language awareness
scaffolding clil in the science classroom via visual thinking: a systemic functional multimodal approach
good video games + good learning. collected essays on video games, learning and literacy
telecollaboration 2.0: language, literacies, and intercultural learning in the 21st century
computer-mediated discourse 2.0
the routledge handbook of language and digital communication
discourse and digital practices: doing discourse analysis in the digital age
review of education, pedagogy, and cultural studies
toward critical media literacy: core concepts, debates, organizations, and policy. discourse: studies in the cultural politics of education
sprachreflexion am beispiel neuer medien. eine bestandsaufnahme in aktuellen deutschsprachbüchern [language reflection on the example of new media: a survey of current german textbooks]
"it was kind of a given that we were all multilingual"
thinking critically about and researching algorithms
remix: the art and craft of endless hybridization
developing new literacies using commercial videogames as educational tools
mediatization of communication
"indnpride": online spaces of transnational youth as sites of creative and sophisticated literacy and identity work
the interplay of global forms of pop culture and media in teenagers' 'interest-driven' everyday literacy practices with english in greece
"i am what i am": multilingual identity and digital translanguaging
making sense of comprehension practices and pedagogies in multimodal ways: a second-grade emergent bilingual's sensemaking during small-group reading
heteroglossia and identities of young adults in bangladesh. linguistics and education

key: cord-029051-ib189vow authors: li, jianjun; fu, jia; yang, yu; wang, xiaoling; rong, xin title: research on crowd-sensing task assignment based on fuzzy inference pso algorithm date: 2020-06-22 journal: advances in swarm intelligence doi: 10.1007/978-3-030-53956-6_17 sha: doc_id: 29051 cord_uid: ib189vow

to solve the problem of load imbalance in the case of few users and many tasks, a crowd-sensing single-objective task assignment method based on a fuzzy inference pso algorithm (fpso) is proposed.
with task completion time, user load balance, and perceived cost as the optimization goals, a fuzzy inference mechanism dynamically adjusts the learning factors of the pso algorithm so that the pso algorithm can search globally across the task space, thus obtaining the optimal task assignment solution set. finally, the fpso algorithm is compared with the pso, ga, and abc algorithms on the optimization objectives, namely algorithm convergence, task completion time, perceived cost, and load balance. the experimental results show that the fpso algorithm not only converges faster than the other algorithms, but also shortens the task completion time, reduces the platform's perceived cost, and improves the user load balance, and thus performs well in crowd-sensing task assignment. with the widespread use of mobile devices, smartphones have become an important bridge between the physical world and the online world. these advances have driven a new paradigm for collecting and sharing data, namely crowd sensing [1, 2]. at present, applications of crowd sensing mainly include air quality monitoring [3], traffic information management [4], public information sharing [5], and so on. task assignment, a key problem of the crowd-sensing system, must meet the optimization objectives under the given constraints while completing the specified tasks, such as the shortest time to complete the tasks, the least perceived cost required to complete them, and the maximum benefit obtained from completing them. therefore, the main problem solved by the fuzzy inference pso crowd-sensing task assignment method is: how to assign multiple tasks among few participants so that the given number of tasks is completed in the shortest time, the perceived cost is lowest, and the user load balance is optimal.
for the problem of poor user load balance in mobile commerce, a fuzzy inference particle swarm crowd-sensing task assignment method is proposed to improve the global search ability of the algorithm and avoid falling into local optima. the main contributions of this paper include: (1) a single-objective task assignment optimization model is constructed, with task completion time, perceived cost, and user load balance as the objective function. (2) based on the task assignment optimization model, a fuzzy inference particle swarm crowd-sensing task assignment method is proposed to solve the task assignment problem in discrete space. (3) through simulation experiments, the proposed fuzzy inference particle swarm task assignment method (fpso) is compared with the pso, ga, and abc algorithms; the experimental results show that it achieves the minimum task completion time, the lowest perceived cost, and the best user load balance. this paper mainly reviews the literature on single-objective and dual-objective assignment. for single-objective task assignment: xiao et al. [6] considered an independent perceptual task scheme that minimizes the average task completion time as the optimization goal and proposed the aota algorithm (average completion time-sensitive online task assignment); for the cooperative perception task assignment scheme, they proposed the lota algorithm (maximum completion time-sensitive online task assignment) to minimize the maximum completion time of the task, and the performance of both algorithms was demonstrated by simulation experiments. yang et al. [7] considered the problem of budget constraints in crowd-sensing, modeled it with a gaussian process, and proposed bim, an algorithm for quantifying the amount of information based on common information criteria; the algorithm is suitable for cases where user cost cannot be obtained. xiao et al.
[8] focused on the recruitment of users who are sensitive to deadlines for probabilistic collaboration: mobile users perform crowd-sensing tasks within a certain probability range, and multiple users can be recruited to perform common tasks to ensure that the expected completion time does not exceed the deadline; the greedy deadline-sensitive user recruitment algorithm ag dur is proposed to maximize the utility of recruiting users and to minimize perceived cost expenditure within the deadline. azzam et al. [9] proposed a user group recruitment model based on a genetic algorithm in order to recruit more participants to perform tasks, considering user points of interest, the perception capabilities of related devices, and basic user information. comparison with a personal recruitment model shows that the genetic-algorithm-based user group recruitment model can improve the quality of the collected perceived data and ensure the reliability of the perceived results. yang et al. [10] studied the problem of heterogeneous sensor task assignment and designed a heuristic algorithm combining a genetic algorithm and a greedy algorithm to achieve the optimization goal of minimizing the total penalty caused by delay. for dual-objective assignment: liu et al. [11] mainly studied multi-task assignment with dual-objective optimization. for fpmt (fewer participants, multiple tasks), with maximizing the total number of completed tasks and minimizing the moving distance as the optimization goals, they used mcmf (minimum cost maximum flow) theory to transform the fpmt problem and built a new mcmf model for it. xiong et al. [12] proposed a task assignment search algorithm based on maximizing space-time coverage and minimizing perceived cost, considering the perceived time and the quality of task completion. wang et al.
[13] studied the perceptual task assignment problem of minimizing the overall perceived cost and maximizing the total utility of crowd sensing while meeting various quality of service (qos) requirements, and proposed a new hybrid method combining a greedy algorithm with a bee colony algorithm. messaoud et al. [14] mainly studied participatory crowd-sensing users and, under conditions satisfying information quality and energy constraints, designed a crowd-sensing task assignment mechanism for selecting appropriate task participants based on a tabu search algorithm combined with information quality and energy awareness, in order to optimize data perceptual quality and minimize the perceptual time of all participants. dindar oz [15] proposed a solution to the multi-objective task assignment problem, designing a neighborhood function for metaheuristic algorithms, namely maximum-release greedy allocation, that successfully solved the quadratic assignment problem. ziwen sun et al. [16] proposed an attack location task assignment (alta) algorithm based on a multi-objective binary pso optimization algorithm, which models the task as a multi-objective optimization model with total task execution time, total energy consumption, and load balancing as the objective functions; their method of nonlinearly adjusting the inertia weight overcomes the shortcoming that binary particle swarm optimization (bpso) easily falls into local optima. in summary, for the crowd-sensing task assignment problem, most researchers consider only one or two optimization goals such as perceived cost, task completion time, and task completion quality, and there are few studies that satisfy several optimization goals simultaneously.
this paper considers the problem of poor user load balance encountered in task assignment, combines task completion time and perceived cost, establishes a single-objective task assignment optimization model, and proposes a fuzzy inference particle swarm task assignment method to solve the task assignment problem in discrete space. there are two main types of task assignment: many participants with few tasks, and few participants with many tasks. this paper mainly studies task assignment for multiple tasks with few participants: how to perform reasonable task assignment so that the user load is more balanced, users are motivated to participate, the task completion time is reduced, and the perceived cost is lowered. in a specific environment, after the sensing platform publishes the tasks, the mobile end users interested in these tasks confirm the tasks to indicate their intentions, finally yielding the set of end users u = {u_1, u_2, ..., u_n} and the set of published tasks r = {r_1, r_2, ..., r_m} (n < m). an end user can complete one or more tasks. reasonable task assignment enables each task to be assigned to an appropriate user or user community, and each user to be assigned tasks that match their own capability, using each user's maximum energy to complete the tasks, saving costs, and improving the task completion rate. in order to establish the crowd-sensing task assignment model under defined conditions, the following assumptions are made: (1) the task assignment studied in this paper takes place within a specific time range, and only the participating users and tasks in this time range are allocated. (2) published tasks are matched with participating users within the coverage of the specified task area, not across all coverage areas; this narrows the scope of the task space and improves the quality of task completion.
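the model above fixes the sets u and r with n < m and caps each user at γ tasks. as a minimal sketch (the paper does not prescribe a data representation in this excerpt, so the function and variable names here are hypothetical), a feasible assignment can be encoded as a list `assignment` where `assignment[j]` is the index of the user who executes task j:

```python
import random

def random_assignment(n_users, m_tasks, gamma):
    """Build a feasible assignment: every task gets exactly one user, and no
    user receives more than gamma tasks. (The paper's constraint 1 <= a_i <= gamma
    is only guaranteed on the upper side here; a user may end up with 0 tasks.)
    """
    if m_tasks > n_users * gamma:
        raise ValueError("infeasible: m > n * gamma")
    load = [0] * n_users
    assignment = []
    for _ in range(m_tasks):
        # only users still below the gamma cap are eligible
        candidates = [u for u in range(n_users) if load[u] < gamma]
        u = random.choice(candidates)
        load[u] += 1
        assignment.append(u)
    return assignment
```

such random feasible assignments are a natural way to seed the initial swarm before the fpso search refines them.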
(3) for the tasks released by the crowd-sensing platform, suitable users to perform each task can always be found; there is no case in which no appropriate user exists after a task is released. (4) all users move from their current position to the task position at the same speed, so differences in moving speed are not considered. the notation is summarized in table 1. (1) task completion time: the total time taken by n users to complete m tasks is T = \sum_{i=1}^{n} \sum_{j=1}^{m} t_{ij}, where t_{ij} represents the time taken by user i to perform task j. (2) load balance: load balancing is measured using the ratio of task completion times, i.e., the ratio of each user's completion time to the total task completion time, β_i = \sum_{j=1}^{m} t_{ij} / T, where \sum_j t_{ij} represents the completion time of user i and T represents the total task completion time. the closer each user's completion time is to its share of the total task completion time, the more balanced the load is; therefore, the larger the β, the higher the load balance. (3) perceived cost: the perceived cost is related to the distance the user moves to the location of the task and is proportional to the distance traveled by the perceiving user to the task location. if the position coordinate of user 1 is (x_{u_1}, y_{u_1}) and the position coordinate of task 1 is (x_{r_1}, y_{r_1}), the distance between the user and the task is d_{11} = \sqrt{(x_{u_1} - x_{r_1})^2 + (y_{u_1} - y_{r_1})^2}. if the ratio of the perceived cost to the moving distance is a constant α, the perceived cost of the user for the task is c_1 = α d_{11}. therefore, the single-objective task assignment optimization model optimizes these objectives under the following constraints: (1) all m tasks are completely assigned to the n users; (2) each sensing task is executed exactly once by exactly one user; (3) the number of tasks assigned to each user is not more than γ, i.e., 1 ≤ a_i ≤ γ, where a_i is the number of tasks assigned to user i. particle swarm optimization (pso) was inspired by the predation behavior of bird flocks [17, 18].
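with time and cost both proportional to euclidean distance, the three objectives above can be evaluated for a candidate assignment roughly as follows (an illustrative sketch; `speed` and `alpha` stand for the constant moving speed and the cost coefficient α from the text, and the encoding `assignment[j] = user of task j` is an assumption):

```python
import math

def evaluate(assignment, user_pos, task_pos, speed=1.0, alpha=1.0):
    """Return (T, betas, cost): total completion time T, per-user load ratios
    beta_i = (sum_j t_ij) / T, and total perceived cost sum of alpha * d_ij."""
    n = len(user_pos)
    user_time = [0.0] * n
    cost = 0.0
    for j, i in enumerate(assignment):
        d = math.dist(user_pos[i], task_pos[j])  # euclidean distance d_ij
        user_time[i] += d / speed                # t_ij = d_ij / speed
        cost += alpha * d                        # c_ij = alpha * d_ij
    T = sum(user_time)
    betas = [t / T for t in user_time] if T > 0 else [0.0] * n
    return T, betas, cost
```

a single fitness value for the pso search would then be some weighted combination of T, the spread of the betas, and cost; the weighting is not specified in this excerpt.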
the pso algorithm is modeled as follows: v_i(t+1) = w · v_i(t) + c_1 · r_1 · (p_i − x_i(t)) + c_2 · r_2 · (p_g − x_i(t)) and x_i(t+1) = x_i(t) + v_i(t+1), where v_i(t) represents the velocity of particle i at iteration t, i is the particle index, i ∈ {1, 2, ..., n}; w is the inertia weight; c_1, c_2 are the learning (acceleration) factors; r_1, r_2 are uniformly distributed random variables in the interval (0, 1); x_i(t) represents the current position of particle i at iteration t; p_i is the personal best position of particle i and p_g is the global best position. in order to improve the overall performance of the swarm intelligence algorithm, fuzzy inference is added to the particle swarm optimization algorithm: the learning factors of the pso algorithm are dynamically adjusted by fuzzy inference so that the pso algorithm can search globally in the task space, preventing the algorithm from falling into a locally optimal region and thereby obtaining the optimal task assignment solution set. in order to verify that the fuzzy inference pso algorithm improves the performance of the original algorithm, several typical algorithms are selected for comparison; as shown in fig. 1, the fpso algorithm converges faster than the other algorithms. in this paper, a fuzzy system with two inputs, two outputs, and nine rules is designed. the inputs are the current optimal performance index (vb) and the current iteration number iter; the outputs are c_1 and c_2; a mamdani-type fuzzy system [19] is used to adjust c_1 and c_2. (1) fuzzy sets: for the input variables vb and iter, three fuzzy sets are defined: low, medium, high. for the output variables c_1 and c_2, five fuzzy sets are defined: low, medium low, medium, medium high, and high, using triangular membership functions. (2) variable range.
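the update equations above can be sketched per particle as follows (a standard continuous pso step; the variable names are illustrative):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One PSO iteration for a single particle.

    Implements v_i(t+1) = w*v_i(t) + c1*r1*(pbest - x_i(t)) + c2*r2*(gbest - x_i(t))
    and x_i(t+1) = x_i(t) + v_i(t+1), with r1, r2 drawn uniformly from (0, 1).
    """
    new_x, new_v = [], []
    for xd, vd, pb, gb in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vd = w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
        new_v.append(vd)
        new_x.append(xd + vd)
    return new_x, new_v
```

in fpso, c1 and c2 would be replaced at each iteration by the values produced by the fuzzy inference system described below, rather than held constant.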
in order to apply to various optimization problems, the input variables need to be converted to a normalized form, that is, nv = (vb − vb_min) / (vb_max − vb_min) and niter = iter / iter_max, where vb is the current optimal estimate of the population; vb_min is the best estimate of the population; vb_max is the worst estimate of the population; iter is the current number of iterations; iter_max is the maximum number of iterations of the algorithm. after normalization, the ranges of nv and niter are [0, 1]. (3) fuzzy rules: the overall idea of the fuzzy rule setting is that in the early stage of the algorithm's iteration, when the evaluation value is poor, strong global search ability and a large shrinkage factor value are needed, and the particle's ability to learn from its own best should be stronger than its ability to learn from the society, that is, c_1 > c_2; in the later stage of the iteration, when the evaluation value is good, less global search ability and a smaller shrinkage factor value are needed, and the particle's ability to learn from the society should be stronger than its ability to learn from its own best, that is, c_1 < c_2. therefore, the following nine fuzzy rules are designed:
if nv is low and niter is low, then c_1 is medium low and c_2 is low.
if nv is low and niter is medium, then c_1 is medium high and c_2 is medium.
if nv is low and niter is high, then c_1 is medium high and c_2 is high.
if nv is medium and niter is low, then c_1 is medium low and c_2 is low.
if nv is medium and niter is medium, then c_1 is medium and c_2 is medium.
if nv is medium and niter is high, then c_1 is medium high and c_2 is high.
if nv is high and niter is low, then c_1 is medium low and c_2 is low.
if nv is high and niter is medium, then c_1 is medium low and c_2 is medium high.
if nv is high and niter is high, then c_1 is medium high and c_2 is high.
the fpso algorithm flow chart is shown in fig. 2.
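the normalization and the nine rules can be condensed into a crisp lookup table as a sketch (a full mamdani system with triangular membership functions and defuzzification would interpolate between rules; the numeric centers for the five output sets below are assumed for illustration, not taken from the paper):

```python
def normalize(vb, vb_min, vb_max, it, it_max):
    """nv = (vb - vb_min) / (vb_max - vb_min), niter = it / it_max, both in [0, 1]."""
    nv = (vb - vb_min) / (vb_max - vb_min) if vb_max > vb_min else 0.0
    return nv, it / it_max

# Assumed center values for the five output fuzzy sets (illustrative only).
LEVELS = {"low": 0.5, "medium low": 1.0, "medium": 1.5, "medium high": 2.0, "high": 2.5}

# The nine rules from the text: (nv level, niter level) -> (c1 level, c2 level).
RULES = {
    ("low", "low"): ("medium low", "low"),
    ("low", "medium"): ("medium high", "medium"),
    ("low", "high"): ("medium high", "high"),
    ("medium", "low"): ("medium low", "low"),
    ("medium", "medium"): ("medium", "medium"),
    ("medium", "high"): ("medium high", "high"),
    ("high", "low"): ("medium low", "low"),
    ("high", "medium"): ("medium low", "medium high"),
    ("high", "high"): ("medium high", "high"),
}

def grade(x):
    """Crisp three-way split of [0, 1], standing in for the triangular fuzzy sets."""
    return "low" if x < 1 / 3 else ("medium" if x < 2 / 3 else "high")

def adjust_factors(nv, niter):
    """Pick c1, c2 from the rule table for the current (nv, niter)."""
    c1_level, c2_level = RULES[(grade(nv), grade(niter))]
    return LEVELS[c1_level], LEVELS[c2_level]
```

note that under these assumed centers every early-iteration rule (niter low) yields c_1 > c_2 and every late-iteration rule (niter high) yields c_1 < c_2, matching the design idea stated above.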
fig. 2. fpso algorithm flow chart.

the fuzzy inference particle swarm optimization algorithm maps the optimization objective function and the constraint conditions of the crowd-sensing task assignment problem onto the elements of the algorithm to solve the task. the task assignment process is as follows: (1) analyze the task assignment problem of mobile commerce crowd-sensing in the specific environment; (2) establish a crowd-sensing single-objective task assignment optimization model, determine the task assignment constraints and the optimization objective function, and transform them into the evaluation index function of the fuzzy inference particle swarm optimization algorithm; (3) use the fpso algorithm to search over the decision variables; (4) evaluate the optimization results according to the objective function; (5) when the result satisfies the task assignment requirement, output the result and terminate the fpso algorithm. in the mobile commerce crowd-sensing setting, taking didi-style ride-hailing as an example: the time from the moment a passenger releases a taxi request until a vehicle owner starts executing the task within the specified time is recorded as the task completion time; the distance the vehicle owner travels to reach the passenger's pick-up position, together with the fuel cost, wear-and-tear cost, etc. consumed during that period, is recorded as the task-sensing cost; and the load balance is reflected in the number of tasks received by each vehicle owner. the number of simulated tasks is 200, 400, 600, 800, and 1000, and the maximum number of iterations of the algorithm is 200. experiments are performed on the matlab r2016b simulation platform. the fpso algorithm parameter settings are shown in table 2.
table 2. fpso algorithm parameter settings:

parameter                    | value
learning factor c_1          | 2
learning factor c_2          | 2
maximum inertia factor w_max | 0.9
minimum inertia factor w_min | 0.4
maximum particle speed v_max | 4

the experiment uses three optimization objectives to compare the algorithms: task completion time, task-sensing cost, and load balancing. the experimental results and analysis are as follows. (1) analysis of task completion time: the task completion time comparison is shown in fig. 3 and fig. 4. it can be seen from fig. 3 that the pso, abc, and ga algorithms converge well in the early iterations compared with the fpso algorithm; however, as the number of iterations increases, pso, abc, and ga easily fall into local optima, whereas the fpso algorithm achieves global optimization with enhanced global search ability. the total task completion time (fig. 3) and the average task completion time (fig. 4) therefore both show the superiority of the fpso algorithm, which attains the minimum task completion time. (2) sensing cost analysis: in the task assignment model, task completion time, sensing cost, and load balance are the task optimization goals. in the experiment, we mainly compare the sensing cost of the different algorithms for completing the same numbers of tasks (200, 400, 600, 800, and 1000) under the fpso, pso, abc, and ga algorithms. it can be seen from fig. 5 that as the number of tasks increases (600, 800, and 1000), the sensing cost of the fpso algorithm is significantly lower than that of the pso, abc, and ga algorithms. (3) load balance analysis: load balancing is one of the important optimization goals of this paper. the greater the load balancing degree, the more reasonable the task assignment. it can be seen from fig.
6 that as the number of iterations increases, the curve of the fpso algorithm stays above the pso, abc, and ga curves and finally tends to 0.92, so the fpso algorithm realizes the load-balancing optimization goal better than the pso, abc, and ga algorithms, making task assignments more reasonable. the above experiments show that the fpso algorithm can shorten the task completion time, reduce the sensing cost of the task, and balance the users' task load. therefore, in actual task allocation, it can improve the timeliness of tasks, reduce the operating cost of the platform, and increase the number of tasks users complete. this paper proposes a crowd-sensing task assignment method based on fuzzy inference particle swarm optimization to solve the problem of load imbalance in task assignment. in the proposed method, fuzzy inference dynamically adjusts the learning factors of the pso algorithm, so that it can perform a global search of the task space and obtain the optimal task assignment solution set. compared with the pso, abc, and ga algorithms, it has higher precision and better stability in solution performance. at the same time, applying the fpso algorithm to task assignment can greatly shorten task completion time, reduce the platform's sensing cost, and improve user load balance. in future research, not only the load-balancing problem of task allocation should be considered: the users' interest preferences during allocation are also an important factor in whether a user performs a task. a user weighs not only the cost of executing a task but also their degree of interest in it, and a combination of such factors motivates the user to perform the task. therefore, in the next study, the user's interest in the task can be included as an important factor, and task assignment can be studied further.
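The paper does not give an explicit formula for the load balancing degree plotted in fig. 6. One plausible definition, sketched below entirely under our own assumptions, scores how evenly the task counts are spread across vehicle owners, with 1 meaning perfectly even:

```python
def load_balance_degree(loads):
    """Illustrative load-balance degree in [0, 1]: 1 for perfectly even
    task counts, lower for more skew. This is a hypothetical definition;
    the paper reports values near 0.92 for FPSO but omits its formula."""
    n = len(loads)
    avg = sum(loads) / n
    if avg == 0:
        return 1.0  # no tasks assigned: trivially balanced
    std = (sum((l - avg) ** 2 for l in loads) / n) ** 0.5
    return max(0.0, 1.0 - std / avg)
```

Under this definition, a perfectly even assignment scores 1.0 and a maximally skewed two-worker assignment scores 0.0, which is consistent with "greater degree means more reasonable assignment" in the text.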
[1] a survey of mobile phone sensing
[2] mobile crowd sensing: current state and future challenges
[3] u-air: when urban air quality inference meets big data
[4] crowd sensing maps of on-street parking spaces
[5] flier meet: a mobile crowd sensing system for cross-space public information reposting, tagging and sharing
[6] online task assignment for crowd sensing in predictable mobile social networks
[7] selecting most informative contributors with unknown costs for budgeted crowd sensing
[8] deadline-sensitive user recruitment for probabilistically collaborative mobile crowd sensing
[9] grs: a group-based recruitment system for mobile crowd sensing
[10] heterogeneous task assignment in participatory sensing
[11] task me: multi-task assignment in mobile crowd sensing
[12] near-optimal task assignment for piggyback crowd sensing
[13] qos-constrained sensing task assignment for mobile crowd sensing
[14] fair qoi and energy-aware task assignment in participatory sensing
[15] an improvement on the migrating birds optimization with a problem-specific neighboring function for the multi-objective task allocation problem
[16] attack localization task allocation in wireless sensor networks based on multi-objective binary particle swarm optimization
[17] particle swarm optimization: an overview
[18] multi-species cooperative particle swarm optimization algorithm
[19] prediction of biochar yield using adaptive neuro-fuzzy inference system with particle swarm optimization

key: cord-292774-k1zr9yrg authors: haldule, saloni; davalbhakta, samira; agarwal, vishwesh; gupta, latika; agarwal, vikas title: post-publication promotion in rheumatology: a survey focusing on social media date: 2020-09-13 journal: rheumatol int doi: 10.1007/s00296-020-04700-7 sha: doc_id: 292774 cord_uid: k1zr9yrg the use of social media platforms (smps) in the field of scientific literature is a new and evolving realm. the past few years have seen many novel strategies to promote engagement of readers with articles.
the aim of this study was to gauge the acceptance, opinion, and willingness to partake in the creation of online social media educative material among authors. we conducted a validated and anonymized cross-sectional e-survey with purposive sampling among authors of the indian journal of rheumatology over a cloud-based platform (surveymonkey). descriptive statistics were used, with values expressed as the number of respondents (n) against each answer. of 408 authors, 102 responded. we found that a large majority (74) supported promotions on smps. visual abstracts (81) were the most preferred means for promotion. a reasonable proportion (54) of the authors held the view that they could make these materials themselves, with little guidance. however, currently only a few (47) were doing so. awareness of social media editors in rheumatology was dismal (4). citations were the preferred metric of article visibility (95), followed by altmetrics (21). these findings suggest that authors support article promotions on smps, although most do not promote their articles. graphical abstracts are the preferred means of promotion. further, the opinion on logistics is divided, calling for larger studies to understand the factors that need to be addressed to bridge the gap. social media is a broad term that encompasses the use of technology to participate in social networking. in the twenty-first century, its use has extended beyond personal networking to academia and telehealth [1]. the widespread lockdowns and public mandate for social distancing during the ongoing pandemic have further strengthened the case for using social media platforms (smps) for academic communication in this period [2]. several journals have picked up on this trend by sharing their work on smps [3]. smps are increasingly being used to share information from primary as well as secondary, tertiary, and grey literature [4, 5].
of these, original research (primary and secondary articles), comprises the most important updated information for doctors, researchers, and administrators alike [6] . in times when pre-prints are being archived to rapidly disseminate scientific research, articles published in scholarly journals after an extensive peer-review are of greater scientific credence [7, 8] . the rapidly rising number of research articles on an area of interest, such as covid-19, can make it challenging to assimilate the required information [9, 10] . the large volume of information available can be overwhelming and may sometimes hinder the process of learning. furthermore, retention of information may be hampered by the use of only one type of traditional learning cue, i.e., text [11] . the use of newer methods to present information, such as visual (or graphical, fig. 1) , voice, or video abstracts, can be helpful to attract and sustain the reader's attention. experts suggest that we retain 10% of what we read, 20% of what we hear, and 30% of what we see [12] . hence, using additional tools to promote literature on smps may bring a dynamicity to the process. moreover, mixed sensory cues may enhance the learning experience and overcome barriers to poor memory and recall [11] . infographics, video abstracts and voice abstracts (or podcasts) are potential strategies to enhance the scientific readership experience. since these strategies are novel, their implementation comes with challenges of its own, such as an evident lack of clarity as to who should prepare these promotional educative materials. while the authors may be willing to take responsibility themselves, developing such resources is a time-consuming skill, and they may need assistance from trained personnel. moreover, cost constraints may limit the use of professional editing agencies, more so in the developing countries. 
thus, we aimed to study the acceptance, opinion, and willingness to participate in the creation of online social media educative material among authors who had published articles in scholarly journals. the e-survey was designed on an online cloud-based website (surveymonkey.com) with the intent to cover different aspects of social media editing: willingness for social media promotions (2), means of promotion (4), ethics (3), logistics (3), preference for article metrics, publication models, and pre-print archiving (2), and current knowledge/use of social media for these purposes (4). the questionnaire featured 22 questions, most (12) of which were multiple-choice questions needing a single answer option, while others (9) allowed more than one answer option to be selected, and one needed a single answer to be selected from a list. four items identified respondent characteristics, and the rest (18) covered the domains listed above. choices were closed-ended for most (11), with an 'other (please specify)' option where deemed appropriate (11). two rheumatologists and three undergraduate medical students reviewed the questions and confirmed their content and face validity. the survey underwent three rounds of dummy fill-ups to identify errors in wording, grammar, and syntax. respondents could change their answers before submission but not after it. the survey was partly anonymised, with internet protocol (ip) addresses and emails of respondents being the only linked identifiers, and all questions were made mandatory. emails of corresponding authors of articles published from 2010 to 2020 in the indian journal of rheumatology (ijr, n = 408) were obtained from scopus. the ijr is a scopus- and web of science-indexed platinum open-access society journal of the indian rheumatology association with a wide readership in india. the questionnaire was circulated to the list thus obtained.
the eligible participants were given a month to voluntarily complete the survey, from 26 march to 26 april 2020. the survey link was open from the time the authors were first intimated about the study. the cover letter included details on the background and purpose of the study. informed consent was taken at the beginning of the survey, and no incentives were offered for survey completion. internet protocol (ip) address checks were done to avoid duplicated responses from a single respondent. data handling was completely anonymous, with the ip addresses and email lists remaining with the first and corresponding authors. other authors had access to the synthesized data in tables without linked identifiers. an exemption from review was obtained from the institute ethics committee of sanjay gandhi post graduate institute of medical sciences, lucknow, as per local guidelines. we adhered to the checklist for reporting results of internet e-surveys to report the data [13]. descriptive statistics were used, and figures were designed using the surveymonkey website. data are expressed as median, percentage, and inter-quartile range. numbers in brackets signify the number of respondents for the answer choices being discussed. of the 102 respondents, most (83) lived in india and were practicing rheumatology (77, table 1). nearly half (50) had been in practice for 11 years or more. the average survey time was 5 min and 4 s. the response rate was 25%. over two-thirds (74) said they would like their publication promoted on social media, researchgate (70) being the most preferred platform, followed by twitter (40), facebook (37), whatsapp (35), academia.edu (27), and linkedin (26). only five said they would not prefer their publication being promoted. however, only 47 currently promoted their articles [mostly on researchgate, followed by whatsapp (32)], and 26 did not do so.
when asked who should promote articles, specially appointed social media editors were preferred by nearly half (46) while 24 felt all editors should do so, and 29 believed only the copyright holders can promote their publication on social media. most felt the use of appropriate hashtags (43) and appropriate timing (42) were crucial for successful promotion, and an equal proportion (46) advocated the use of artificial intelligence-based algorithms for the same. a dismal four people knew the correct number of rheumatology journals that have social media editors. nearly three-fourths (77) were unaware that the ijr had sme (s). visual (or graphical) abstracts were the most preferred (81) means for enhancing visibility among authors, while voice (47) and video (39) abstracts were preferred by fewer. while 54 felt they could prepare a visual abstract by guidance from the editorial team, another 25 felt the editorial team should do it (without charges). on the other hand, 34 felt they could do video abstracts themselves and another 37 said it should be outsourced to a third party. similar proportions (36) were willing to prepare voice abstracts after guidance, though outsourcing was less supported (11). preference for citations as metric (95) was almost unanimous though nearly one in five (21) also preferred altmetrics for visibility. the green open access was the most preferred model (45) followed closely by platinum open access (31) . pre-print archiving was preferred by a minority (15) . we noted good acceptance for social media promotions of published scholarly literature by over two-thirds of the authors, although less than half had actually done so. visual (or graphical) abstracts were the most preferred means for enhancing visibility among authors. however, the opinion on logistics of promotion and preparation was divided. the use of visual abstracts was popular amongst the respondents. however, video and voice abstracts were far less so. 
this could be due, in part, to the perception that making visual abstracts is simpler than the other options. this is supported by our finding that over half felt they could themselves make the visual abstracts with due guidance, as opposed to one-third who felt the same regarding video and voice abstracts. however, further studies should be done to determine the reasons for preferences within the various promotional tools, as well as to understand who would be best suited to prepare them. visual abstracts are a clear, concise format to expeditiously disseminate information, and are now increasingly being encouraged by various journals [14]. at a time when healthcare workers are burdened with the medical care of covid-19 patients, visual abstracts allow them to find the most relevant material quickly. a recent study showed that infographics summarizing medical research literature were associated with higher reader preference and lower cognitive load during summary review, although no difference was found in late information retention [15]. in the current fast-paced world, where technology governs most aspects of development, scientific literature must evolve to fit the needs of the hour. now more than ever, as we fight a global pandemic, information needs to travel faster, and beyond any man-made borders, to be effective. smps are a fitting solution for many of these needs. it has been increasingly found that a relationship exists between the traditional impact factor and activity on social media, with some sources having a near-perfect correlation [16]. however, many researchers feel overwhelmed by the internet and social media due to a lack of scientific guidance [17]. moreover, over 90% of professionals use social media for personal reasons, with merely 65% doing so for academic purposes [18].
while citations are the traditionally preferred metric of visibility, altmetrics are now increasingly being identified as an important metric for publication success [19]. altmetrics are non-traditional article-level bibliometrics that gather details of the engagement of research work across a wide variety of online platforms, including but not limited to mentions in the news, blogs, and on twitter and other smps, article pageviews and downloads, etc. a drawback of using citations is the inevitable time lag in acquiring them, reducing their utility for young scholars. moreover, citations reflect article use by researchers, but in this shrinking world of borderless communication there are many more stakeholders than there were before. it is just as important for hospital administrators, policy makers, clinicians, and educators to be informed of the latest developments, and to modify their practices accordingly. altmetrics may provide a more holistic picture of article visibility and utility across diverse areas. moreover, altmetric scores on researcher platforms, such as academia and mendeley, may translate into higher cite scores, owing to the characteristics of the readership population [20]. a recent study found a high correlation (0.83) between scimago journal rank scores and the number of followers on twitter, despite adjusting for time since creation of the account, further substantiating the enhanced visibility of articles on smps and the potential to enhance journal (and consequently article) metrics through social media practices [16]. acceptance of social media promotions was good among our respondents, although understanding of their utility may be poor, translating into the admittedly low rates of active social media practice for academic promotions. the use of smps for promotion of one's own articles was inferred as practice. this may be improved by educating healthcare professionals about the various aspects of social media use [21].
notably, the preferred platforms for promotions may vary, with researchgate and whatsapp being popular in india and wechat in china [22]. a recent survey conducted among rheumatologists in the european network recorded facebook as the most commonly used platform, though only 46% used it to establish a professional online presence [23]. further research may provide a better understanding of individual differences. interestingly, opinion regarding the responsibility for social media promotions was divided among authors. though nearly half felt social media editors (smes) should do so, a significant number felt only editors and copyright holders should take the responsibility. social media promotions require specific skills, time, and dedication. it is also essential to be aware of standards for quality and content for effective promotions. smes are more likely to be aware of these, and of the various tools to meet these requirements. a lack of awareness regarding the availability of smes for rheumatology journals was evident in our respondent population, possibly a reflection of poor interest and interaction of researchers on smps. a shift in the culture, from researchers being passive bystanders to active participants in this process, may enhance academic engagement [24]. although smes may be proficient at promotions, every author and member of the scientific community can, and should, partake in the narrative themselves. the impressions of tweets are shown to improve with a wider retweeting network. the use of hashtags and mentions is another effective way to enhance engagement [25]. audio abstracts may feature as podcasts, which are popular in the west, although the trend may not have caught on so well in india yet. podcasts are increasingly being used for medical education, both within teaching institutions and on an international scale by major journals [26].
nearly half of our respondents were open to audio abstracts, suggesting this merits exploration in india as well. copyright and plagiarism remain important concerns with social media promotions. when sharing images and infographics, it is essential to check copyright permissions and provide the source information. some journals allow certain articles to be freely shared for promotions. articles shared on open-access media are known to have better visibility, and this was reflected in our respondents' opinions too. our findings show that pre-print archiving was preferred by less than one-third of the respondents. preprint servers like medrxiv are rapidly gaining popularity, especially since the covid-19 pandemic [27]. even though most submissions on archival platforms do not undergo the process of peer review, there has been a boom in literature available through medrxiv/biorxiv. most journals, too, now encourage pre-print submissions. however, resorting to such information for key decisions can lead to disastrous consequences. the authors agreed that the use of artificial intelligence algorithms to promote articles would be a beneficial tool. algorithms such as medfact, which was published at the canadian conference on artificial intelligence in 2018, aim at enabling recommendations of trusted medical information within health-related social media discussions [28]. medfact automatically extracts relevant keywords from online discussions and queries trusted medical literature with the aim of embedding related factual information into the discussion [28]. extrapolated to medical literature, this may be one of the solutions to the ethical concerns regarding social media and its lack of credible information.
however, social media is a double-edged sword, and it is vital to educate professionals regarding ethical use to deliver credible scientific information and perspectives without engaging in misinformation, data ownership violation, breach of personal privacy, incivility, cyber bullying, and professional misconduct [29] . this is essential as the work of promoting articles on smps is a trinomial; an amalgamation of technological challenges with ethics in science and dealing with widely varying cultural diversity [30] . moreover, it is important to be mindful of the fact that use of smps has addictive potential, and although excessive preoccupation correlates with task-related and relationship-building behaviours, it contributes most strongly to negative social media-related deviant behaviour at the workplace. fortunately, individuals using smps for academic growth are aware of positive as well as negative effects [31] , and thus, reinforcement of this knowledge may potentially curb negative behaviours related to social media use. while our survey is limited by self-selection and recall biases, and unknown characteristics of the non-respondents, it highlights key issues regarding the role of social media in academia, while laying the groundwork for larger studies for understanding these further. the responses were limited in the current study by a short survey duration, coinciding with the onset of the covid-19 pandemic. our anonymized survey did not allow correlation of authors' responses with their publication activity. however, we hope that the primary insights offered by this exploratory study would pave the way for larger global study across non-indian rheumatology journals on the subject. to conclude, authors in rheumatology journal support the use of social media for promotions of published scholarly literature, although this does not translate into practice. 
the use of graphical abstracts is supported by a majority, with video and voice abstracts being less popular. the opinion on logistics is divided, calling for larger studies to understand the factors that need to be addressed to bridge the gap.

[1] medical journals in the age of ubiquitous social media
[2] social media and the new world of scientific communication during the covid19 pandemic
[3] future evolution of traditional journals and social media medical education
[4] searching and synthesising 'grey literature' and 'grey information' in public health: critical reflections on three case studies
[5] social media as a tool to increase the impact of public health research
[6] primary vs secondary literature in the biomedical sciences: primary vs secondary sources
[7] misinformation during the coronavirus disease 2019 outbreak: how knowledge emerges from noise
[8] information and misinformation on covid-19: a cross-sectional survey study
[9] social media in the times of covid-19
[10] management of rheumatic diseases in the time of covid-19 pandemic: perspectives of rheumatology practitioners from india
[11] the impact of presentation style on the retention of online health information: a randomized-controlled experiment (health communication)
[12] visual abstracts bring key message of scientific research
[13] improving the quality of web surveys: the checklist for reporting results of internet e-surveys (cherries)
[14] adoption of visual abstracts at circulation cqo: why and how we're doing it
[15] exploring the role of infographics for summarizing medical literature
[16] immunology and social networks: an approach towards impact assessment
[17] an introduction to social media for scientists
[18] social media and health care professionals: benefits, risks, and best practices
[19] evaluating altmetrics
[20] mendeley readership altmetrics for medical articles: an analysis of 45 fields (thelwall, 2016, journal of the association for information science and technology)
[21] social media for medical journals
[22] perception about social media use by rheumatology journals: survey among the attendees of iracon 2019
[23] social media use among young rheumatologists and basic scientists: results of an international survey by the emerging eular network (emeu-net)
[24] social media for research, education and practice in rheumatology
[25] #eular2018: the annual european congress of rheumatology-a twitter hashtag analysis
[26] podcasting in medical education: a review of the literature
[27] coronavirus research moves faster than medical journals (bloomberg)
[28] medfact: towards improving veracity of medical information in social media using applied machine learning
[29] letter to the editor: social media is a double-edged sword in the covid-19 pandemic
[30] challenges for social media editors in rheumatology journals: an outlook
[31] impact of social media on academic performance and interpersonal relation: a cross-sectional study among students at a tertiary medical center in east india

the authors thank all respondents for filling in the questionnaire and dr sakir ahmed for reviewing the survey questions. lg is the social media editor (sme) of the journal of clinical rheumatology and the indian journal of rheumatology. va is the editor in chief of the indian journal of rheumatology.

key: cord-145300-isnqbetr title: can we spot the "fake news" before it was even written? date: 2020-08-10 journal: nan doi: nan sha: doc_id: 145300 cord_uid: isnqbetr given the recent proliferation of disinformation online, there has also been growing research interest in automatically debunking rumors, false claims, and "fake news." a number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone. an arguably more promising direction is to focus on fact-checking entire news outlets, which can be done in advance.
then, we could fact-check the news before it was even written, by checking how trustworthy the outlet that published it is. we describe how we do this in the tanbih news aggregator, which makes readers aware of what they are reading. in particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, and stance with respect to various claims and topics. recent years have seen the rise of social media, which have enabled people to easily share information with a large number of online users, without quality control. on the bright side, this has given anybody the opportunity to become a content creator, and it has enabled much faster information dissemination. on the not-so-bright side, it has also made it easy for malicious actors to spread disinformation much faster, potentially reaching very large audiences. in some cases, this included building sophisticated profiles for individual users based on a combination of psychological characteristics, metadata, demographics, and location, and then micro-targeting them with personalized "fake news" and propaganda campaigns weaponized with the aim of achieving political or financial gains. to be clear, false information in the news has always been around, e.g., think of tabloids. however, social media have changed everything. they have made it possible for malicious actors to micro-target specific demographics, and to spread disinformation much faster and at scale, under the guise of news. thanks to social media, the news could be weaponized at an unprecedented scale. overall, thanks to social media, people today are much more likely to believe in conspiracy theories. for example, according to a 2019 study, 57% of russians believed that the usa did not put a man on the moon.
in contrast, when the event actually occurred, there was absolutely no doubt about it in the ussr, and neil armstrong was even invited to visit moscow, which he did. indeed, disinformation has become a global phenomenon: a number of countries have had election-related issues with "fake news". to get an idea of the scale, 150 million users on facebook and instagram saw inflammatory political ads, and cambridge analytica had access to the data of 87 million facebook users in the usa, which it used for targeted political advertisement; for comparison, the 2016 us presidential election was decided by 80,000 voters in three key states. while initially the focus was on influencing the outcome of political elections, "fake news" has also caused direct loss of life. for example, disinformation on whatsapp has resulted in people being killed in india, and disinformation on facebook was responsible for the rohingya genocide, according to a un report. disinformation can also put people's health in danger, e.g., think of the anti-vaccine websites and the damage they cause to public health worldwide, or of the ongoing covid-19 pandemic, which has also given rise to the first global infodemic. recently, there has been a lot of research interest in studying disinformation and bias in the news and in social media. this includes challenging the truthfulness of claims [6, 52, 81], of news [17, 33, 36, 37, 42, 60, 61], of news sources [8], of social media users [3, 26, 49, 50, 51, 59], and of social media [18, 19, 64, 82], as well as studying credibility, influence, and bias [7, 8, 20, 45, 49, 51]. the interested reader can also check several recent surveys that offer a general overview of "fake news" [46], or focus on topics such as the process of proliferation of true and false news online [75], on fact-checking [71], on data mining [67], or on truth discovery in general [47].
for some specific topics, research was facilitated by specialized shared tasks such as the semeval-2017 task 8 and the semeval-2019 task 7 on determining rumour veracity and support for rumours (rumoureval) [28, 35], the clef 2018-2020 checkthat! lab on automatic identification and verification of claims [4, 5, 13, 14, 16, 31, 32, 38, 39, 57, 58, 65], the fever-2018 and fever-2019 tasks on fact extraction and verification [72, 73], and the semeval-2019 task 8 on fact checking in community question answering forums [53, 54], among others. finally, note that the veracity of information is a much bigger problem than just "fake news". it has been suggested that "veracity" should be seen as the fourth "v" of big data, along with volume, variety, and velocity. in order to fact-check a news article, we can analyze its contents, e.g., the language it uses, and the reliability of its source, which can be represented as a number between 0 and 1, where 1 indicates a very reliable source, and 0 stands for a very unreliable one:

factuality(article) = reliability(language(article))     (1)

in order to fact-check a claim (as opposed to an article), we can retrieve articles discussing the claim, then detect the stance of each article with respect to the claim, and take a weighted sum (here, the stance is a number between -1 and 1: it is -1 if the article disagrees with the claim, 1 if it agrees, and 0 if it just discusses the claim or is unrelated):

factuality(claim) = Σ_i factuality(article_i) · stance(article_i, claim)     (2)

note that in formula (1), the reliability of the website that hosts an article serves as a prior to compute the factuality of an article, while in formula (2), we use the factuality of the retrieved articles to compute a factuality score for a target claim. the idea is that if a reliable article agrees/disagrees with the claim, this is a good indicator for it being true/false, and it is the other way around for unreliable articles.
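the two formulas above can be sketched in a few lines of python. this is a hypothetical toy illustration: the function names, the uniform averaging, and the stub bodies are assumptions made for the example, not the paper's actual implementation, which relies on trained models.

```python
# toy sketch of the two factuality formulas described above
# (hypothetical illustration; the real system uses learned models,
# not these stubs)

def article_factuality(source_reliability):
    """formula (1): the reliability of the hosting outlet (0..1)
    serves as a prior for the factuality of an article."""
    return source_reliability

def claim_factuality(evidence):
    """formula (2): weighted sum over retrieved articles.
    each item is (article factuality in 0..1, stance in -1..1),
    where stance is -1 (disagree), 0 (discuss/unrelated), 1 (agree)."""
    if not evidence:
        return 0.0
    return sum(f * s for f, s in evidence) / len(evidence)

# two reliable outlets agreeing push the score up; one unreliable
# outlet disagreeing pulls it down only slightly
score = claim_factuality([(0.9, 1.0), (0.8, 1.0), (0.2, -1.0)])
```

the averaging means a claim endorsed mostly by reliable outlets scores positive, while one contradicted by them scores negative, matching the intuition stated in the text.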
of course, the formulas above are oversimplifications, e.g., one can fact-check a claim based on the reactions of users in social media [41], based on the claim's spread over time in social media [48], based on information in a knowledge graph [70], extracted from the web [43] or from wikipedia [72], using similarity to previously fact-checked claims [64], etc. yet, the formulas give the general idea that the reliability of the source should be an important element of fact-checking articles and claims. nevertheless, it remains an understudied problem. characterizing entire news outlets is an important task in its own right. we argue that it is more useful than fact-checking claims or articles, as it is hardly feasible to fact-check every single piece of news. doing so also takes time, both for human users and for automatic programs, as they need to monitor the way reliable media report about a given target claim, how users react to it in social media, etc., and it takes time to accumulate enough such evidence to make a reliable prediction. it is much more feasible to check entire news outlets. note that we can also fact-check a number of sources in advance, and we can then fact-check the news before it was even written! once it is published online, it would be enough to check how trustworthy the outlets that published it are, in order to get an initial (imperfect) idea of how much we should trust this news. this would be similar to the movie minority report, where the authorities could detect a crime before it was even committed. in general, fighting disinformation is not easy; as in the case of spam, this is an adversarial problem, where the malicious actors constantly change and improve their strategies. yet, when they share news in social media, they typically post a link to an article that is hosted on some website. this is what we are exploiting: we try to characterize the news outlet where the article is hosted.
this is also what journalists typically do: they first check the source. finally, even though we focus on the source, our work is also compatible with fact-checking a claim or a news article, as we can provide an important prior and thus help both algorithms and human fact-checkers that try to fact-check a particular news article or a claim. how can we profile a news source? note that disinformation typically focuses on emotions, and political propaganda often discusses moral categories [23]. there are many incentives for news outlets to publish articles that appeal to emotions: (i) this has a strong propagandistic effect on the target user, (ii) it makes it more likely to be shared further by the users, and (iii) it will be favored as a candidate to be shown in other users' newsfeeds, as this is what algorithms on social media optimize for. news outlets want to get users to share links to their content in social media, as this allows them to reach a larger audience. this kind of language also makes them potentially detectable for artificial intelligence (ai) systems; yet, these outlets cannot do much about it, as changing the language would make their message less effective and would also limit its spread. while the analysis of the language used by the target news outlet is the most important information source, we can also consider information in wikipedia and in social media, traffic statistics, and the structure of the target site's url, as shown in figure 1:
1. the text of a few hundred articles published by the target news outlet, analyzing the style, subjectivity, sentiment, offensiveness [77, 78, 79, 62], toxicity [30], morality, vocabulary richness, propagandistic content, etc.;
2. the text of its wikipedia page (if any), including infobox, summary, content, and categories, e.g., it might say that the website spreads false information and conspiracy theories;
3. metadata and statistics about its twitter account (if any): is it an old account, is it verified, is it popular, how does the medium describe itself, is there a link to its website, etc.;
4. whether people in social media, e.g., on twitter, post links to articles from the target source in the context of a polarizing topic, and which side of the debate these users are from;
5. whether the audience of the target medium in social media, e.g., on facebook, leans liberal, moderate, or conservative;
6. the language used in videos by the target medium, e.g., in its youtube channels (if any), where the focus is on analysis of the speech signal, i.e., not on what is said but on how it is said, e.g., whether it is emotional [44];
7. web traffic information: whether this is a popular website;
8. the structure of the site's url: is it too long, does it contain a sequence of meaningful words, does it have a fishy suffix such as ".com.co", etc.
characterizing media in terms of factuality of reporting and bias is part of a larger effort at the qatar computing research institute, hbku: the tanbih mega-project 7 aims to limit the effect of "fake news", disinformation, propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. the mega-project's flagship initiative is the tanbih news aggregator [80], 8 which shows real-time news from a variety of news sources [68]. it builds a profile for each news outlet, showing a prediction about the factuality of its reporting [8, 9, 10], its leading political ideology [29], degree of propaganda [12, 15], hyper-partisanship [63, 66], general frame of reporting (e.g., political, economic, legal, cultural identity, quality of life, etc.), and stance with respect to various claims and topics [11, 27, 55, 56, 69]. for individual news items, it signals when an article is likely to be propagandistic.
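signal 8 in the list above (url structure) is the easiest to make concrete. the sketch below is an illustration under assumptions: the feature names are invented for the example, and ".com.co" is the one "fishy" suffix the text actually mentions.

```python
import re
from urllib.parse import urlparse

def url_features(url):
    """toy url-structure features of the kind described in signal 8
    (illustrative names; not the system's actual feature set)."""
    host = urlparse(url).netloc.lower()
    return {
        "length": len(host),                       # overly long hostnames are suspicious
        "num_subdomains": host.count("."),         # deep subdomain chains
        "has_digits": bool(re.search(r"\d", host)),
        "has_hyphen": "-" in host,
        "fishy_suffix": host.endswith(".com.co"),  # the example suffix from the text
    }

features = url_features("http://example.com.co/breaking-story")
```

in a real profiler these booleans and counts would simply be appended to the feature vector alongside the textual, wikipedia, and social-media signals.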
it further mixes arabic and english news, and allows the user to see them all in english or arabic thanks to qcri's machine translation technology. tanbih also offers analytics capabilities, allowing a user to explore the media coverage, the frame of reporting, and the propaganda around topics such as brexit, the sri lanka bombings, and covid-19. 9 moreover, it performs fine-grained analysis of the propaganda techniques in the news [21, 22, 24, 25, 76]. 10 we developed tools such as a web browser plugin, 11 a mechanism to share media profiles and stories in social media, a tool to detect check-worthiness for english and arabic [34, 40, 74], 12 a twitter fact-checking bot, 13 and an api to the tanbih functionality. 14 the latter is used by aljazeera and other partners. more recently, we have been developing tools for fighting the covid-19 infodemic by modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society [1, 2]. tanbih was developed in close collaboration with mit-csail. 15 we were also partners in a large nsf project on credible open knowledge networks, 16 and we further collaborate with carnegie mellon university in qatar, qatar university, sofia university, the university of bologna, aljazeera, facebook, the united nations, data science society, and a data pro, among others. as part of a larger team including al jazeera, associated press, rte ireland, tech mahindra, metaliquid, and v-nova, we won an award from tm forum at ibc 2019 for our media-telecom catalyst project on ai indexing for regulatory practice. 17 in its 2.5 years of history, the tanbih mega-project has produced 30+ top-tier publications, a best demo award (honorable mention) at acl-2020, and several patent applications.
the project was featured in 30+ keynote talks, and it was highlighted by 100+ media outlets including forbes, boston globe, aljazeera, mit technology review, science daily, popular science, fast company, the register, wired, and engadget. it is widely believed that "fake news" can and has affected major political events. in reality, the true impact is unknown; however, given the buzz that was created, we should expect a large number of state and non-state actors to give it a try. from a technological perspective, we can expect further advances in "deep fakes", such as machine-generated videos and images. this is a really scary development, but probably only in the medium to long run; at present, "deep fakes" are still easy to detect, both using ai and by experienced users. we also expect advances in automatic news generation, thanks to recent developments such as gpt-3. this is already a reality, and a sizable part of the news we consume daily is machine-generated, e.g., about the weather, the markets, and sport events. such software can describe a sport event from various perspectives: neutrally, or taking the side of the winning or the losing team. it is easy to see how this can be used for disinformation purposes. yet, we hope to see "fake news" go the way of spam: not entirely eliminated (as this is impossible), but put under control. ai has already helped a lot in the fight against spam, and we expect that it will play a key role in putting "fake news" under control as well. a key element of the solution would be limiting the spread. social media platforms are best positioned to do this on their own platforms.
footnotes: 11 http://chrome.google.com/webstore/detail/tanbih/igcppjdbignhkiikejdjpjemejoognen 12 http://claimrank.qcri.org/ 13 http://twitter.com/factchecker_bot/ 14 http://app.swaggerhub.com/apis/yifan2019/tanbih/0.6.0#/ 15 http://qcri.csail.mit.edu/node/25 16 http://cokn.org/ 17 http://www.tmforum.org/ai-indexing-regulatory-practise/
twitter has suspended more than 70 million accounts in may and june 2018, and these efforts continue to date; this can help in the fight against bots and botnets, which are the new link farms: 20% of the tweets during the 2016 us presidential campaign were shared by bots. facebook, for its part, warns users when they try to share a news article that has been fact-checked and identified as fake by at least two trusted fact-checking organizations, and it also downgrades "fake news" in its news feed. we expect the ai tools used for this to get better, just like spam filters have improved over time. yet, the most important element of the fight against disinformation is raising user awareness and developing critical thinking. this would help limit the spread, as users would be less likely to share it further. we believe that practical tools such as the ones we develop in the tanbih mega-project would help in that respect.
references:
- fighting the covid-19 infodemic in social media: a holistic perspective and a call to arms. arxiv preprint
- fighting the covid-19 infodemic: modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society
- predicting the role of political trolls in social media
- overview of the clef-2018 checkthat! lab on automatic identification and verification of political claims, task 1: check-worthiness
- overview of the clef-2019 checkthat! lab on automatic identification and verification of claims. task 1: check-worthiness
- online journalists embrace new marketing function. newspaper research
- finding credible information sources in social networks based on content and social structure
- information credibility on twitter
- battling the internet water army: detection of hidden paid posters
- semeval-2020 task 11: detection of propaganda techniques in news articles
- findings of the nlp4if-2019 shared task on fine-grained propaganda detection
- a survey on computational propaganda detection
- prta: a system to support the analysis of propaganda techniques in the news
- fine-grained analysis of propaganda in news articles
- seminar users in the arabic twitter sphere
- unsupervised user stance detection on twitter
- semeval-2017 task 8: rumoureval: determining rumour veracity and support for rumours
- predicting the leading political ideology of youtube channels using acoustic, textual and metadata information
- detecting toxicity in news articles: application to bulgarian
- checkthat! at clef 2019: automatic identification and verification of claims
- overview of the clef-2019 checkthat!: automatic identification and verification of claims
- digital journalism credibility study
- a context-aware approach for detecting worth-checking claims in political debates
- semeval-2019 task 7: rumoureval, determining rumour veracity and support for rumours
- in search of credible news
- overview of the clef-2020 check-that! lab on automatic identification and verification of claims in social media: arabic tasks
- overview of the clef-2019 checkthat! lab on automatic identification and verification of claims. task 2: evidence and factuality
- claimrank: detecting check-worthy claims in arabic and english
- linguistic signals under misinformation and fact-checking: evidence from user comments on social media
- we built a fake news & click-bait filter: what happened next will blow your mind
- fully automated fact checking using external sources
- detecting deception in political debates using acoustic and textual features
- multi-view models for political ideology detection of news articles
- the science of fake news
- a survey on truth discovery
- detecting rumors from microblogs with recurrent neural networks
- finding opinion manipulation trolls in news community forums
- exposing paid opinion manipulation trolls
- the dark side of news community forums: opinion manipulation trolls
- hunting for troll comments in news community forums
- semeval-2019 task 8: fact checking in community question answering forums
- fact checking in community forums
- automatic stance detection using end-to-end memory networks
- contrastive language adaptation for crosslingual stance detection
- overview of the clef-2018 checkthat! lab on automatic identification and verification of political claims
- clef-2018 lab on automatic identification and verification of claims in political debates
- do not trust the trolls: predicting credibility in community question answering forums
- fang: leveraging social context for fake news detection using graph representation
- a stylometric inquiry into hyperpartisan and fake news
- a large-scale semi-supervised dataset for offensive language identification
- team qcri-mit at semeval-2019 task 4: propaganda analysis meets hyperpartisan news detection
- that is a known lie: detecting previously fact-checked claims
- overview of the clef-2020 checkthat! lab on automatic identification and verification of claims in social media: english tasks
- team jack ryder at semeval-2019 task 4: using bert representations for detecting hyperpartisan news
- fake news detection on social media: a data mining perspective
- dense vs. sparse representations for news stream clustering
- predicting the topical stance and political leaning of media using tweets
- claimskg: a knowledge graph of fact-checked claims
- automated fact checking: task formulations, methods and future directions
- fever: a large-scale dataset for fact extraction and verification
- the second fact extraction and verification (fever2.0) shared task
- it takes nine to smell a rat: neural multi-task learning for check-worthiness prediction
- the spread of true and false news online
- experiments in detecting persuasion techniques in the news
- predicting the type and target of offensive posts in social media
- semeval-2019 task 6: identifying and categorizing offensive language in social media (offenseval)
- semeval-2020 task 12: multilingual offensive language identification in social media
- proceedings of the 2019 conference on empirical methods in natural language processing
- fact-checking meets fauxtography: verifying claims about images
- analysing how people orient to and spread rumours in social media by looking at conversational threads
key: cord-342984-3qbvlbwo authors: allington, daniel; duffy, bobby; wessely, simon; dhavan, nayana; rubin, james title: health-protective behaviour, social media usage and conspiracy belief during the covid-19 public health emergency date: 2020-06-09 journal: psychological medicine doi: 10.1017/s003329172000224x sha: doc_id: 342984 cord_uid: 3qbvlbwo background: social media platforms have long been recognised as major disseminators of health misinformation. many previous studies have found a negative association between health-protective behaviours and belief in the specific form of misinformation popularly known as 'conspiracy theory'.
concerns have arisen regarding the spread of covid-19 conspiracy theories on social media. methods: three questionnaire surveys of social media use, conspiracy beliefs and health-protective behaviours with regard to covid-19 among uk residents were carried out online, one using a self-selecting sample (n = 949) and two using stratified random samples from a recruited panel (n = 2250, n = 2254). results: all three studies found a negative relationship between covid-19 conspiracy beliefs and covid-19 health-protective behaviours, and a positive relationship between covid-19 conspiracy beliefs and use of social media as a source of information about covid-19. studies 2 and 3 also found a negative relationship between covid-19 health-protective behaviours and use of social media as a source of information, and study 3 found a positive relationship between health-protective behaviours and use of broadcast media as a source of information. conclusions: when used as an information source, unregulated social media may present a health risk that is partly but not wholly reducible to their role as disseminators of health-related conspiracy beliefs. conspiracism is the tendency to assume that major public events are secretly orchestrated by powerful and malevolent entities acting in concert (douglas et al., 2019). the idea that such plotting explains social reality was influentially termed 'the conspiracy theory of society' by popper (1969), and what hofstadter termed 'conspiratorial fantasies' (1964) are now popularly referred to as 'conspiracy theories'. here, the more neutral 'conspiracy beliefs' is preferred. online, such beliefs are now frequently offered as explanations of coronavirus disease 2019, or covid-19.
this outbreak of conspiracism is only the latest wave in an ongoing 'deluge of conflicting information, misinformation and manipulated information on social media' which some researchers have long argued 'should be recognised as a global public-health threat' (larson, 2018, p. 309) . multiple studies have found a link between medical conspiracy beliefs and reluctance to engage in health-protective behaviours with regard to vaccination or safer sex (dunn et al., 2017; goertzel, 2010; grebe & nattrass, 2011; jolley & douglas, 2014; thorburn & bogart, 2005; zimmerman et al., 2005) . this raises the possibility that the circulation of covid-19 conspiracy beliefs might be associated with similar risks. indeed, two recent studies have found a negative relationship between covid-19 conspiracy beliefs and health-protective behaviours intended to help control the covid-19 pandemic (allington & dhavan, 2020; freeman et al., 2020) . youtube and facebook have been identified as major vectors for dissemination of conspiracy beliefs and misinformation, on medical and other topics (avaaz, 2020; bora, das, barman, & borah, 2018; buchanan & beckett, 2014; byford, 2011, p. 11; chaslot, 2017; oi-yee li, bailey, huynh, & chan, 2020; pandey, patni, sing, sood, & singh, 2010; pathak et al., 2015; seymour, getman, saraf, zhang, & kalenderian, 2014; sharma, yadav, yadav, & ferdinand, 2017) . most studies of twitter suggest that it plays a similar role (broniatowski et al., 2018; kouzy et al., 2020; ortiz-martínez & jiménez-arcia, 2017; oyeyemi, gabarron, & wynn, 2014) . but while social media misinformation is both pervasive and popular, its effects are hard to quantify, and it is unclear which groups are most susceptible to its influence (wang, mckee, torbica, & stuckler, 2019) . in the uk, broadcast and print media are regulated (albeit by different mechanisms), while social media are unregulated. 
for example, when covid-19 misinformation was propagated by david icke and brian rose on the london live television station, the owner of the station was sanctioned by the uk broadcasting regulator for disseminating content which 'had the potential to cause significant harm to viewers' (ofcom, 2020, p. 16). however, similar content continues to circulate freely on social media platforms (brennen, simon, howard, & nielsen, 2020). while social media platforms can and do exercise editorial control over the content they disseminate, they appear to do so inconsistently (avaaz, 2020). purveyors of conspiracy beliefs and other misinformation successfully exploit this situation for economic gain (ccdh, 2020; scott, 2020). we report on three online questionnaire surveys of engagement in covid-19-specific health-protective behaviours, use of social media as a source of information about covid-19, and covid-19 conspiracy beliefs, defined as beliefs which entail that the covid-19 public health crisis was produced through intentional agency (whether through manufacture of the coronavirus or through deliberate exaggeration or incorrect attribution of negative health outcomes). the first and third surveys measured adherence to multiple conspiracy beliefs, while the second measured adherence to just one. the first and second measured media usage in very general terms, while the third separately measured informational reliance on legacy media as well as on specific social media platforms. data for study 1 were collected in partnership with citizenme. invitations were sent to all members of a panel of uk residents aged 18 or more who had expressed an interest in answering surveys about covid-19. data for studies 2 and 3 were collected in partnership with ipsos-mori, a member of the british polling council. the sampling frame was a recruited panel of uk adults aged 16-75.
stratified random samples were selected, with quotas employed to achieve national representativeness with regard to age within gender, region, working status, social grade and education, using census and mid-year estimates from the office of national statistics. questionnaires were completed online. the data collection followed ethical and data protection procedures at king's college london and at the partner organisations. fieldwork dates were 3-7 april 2020 for study 1, 1-3 april 2020 for study 2 and 20-22 may 2020 for study 3. table 1 contains descriptive statistics for all samples. where gender percentages do not sum to 100, this is because very small numbers of respondents did not self-identify as 'female' or 'male'. due to missing data, total n may vary throughout each study. hypotheses regarding relationships between usage of or reliance on sources of information and either conspiracy beliefs or health-protective behaviours were tested using mann-whitney-wilcoxon u tests, with effect sizes estimated using vargha and delaney's a with 95% confidence intervals (cis) calculated through bootstrapping with 1000 replications, on the assumption that each sample can be treated as equivalent to a random sample. hypotheses concerning conspiracy beliefs and health-protective behaviours were tested using fisher's exact test both to calculate significance and to estimate odds ratios with a 95% ci (on the same assumption). in all three studies, hypotheses were tested by treating conspiracy beliefs and health-protective behaviours both individually and in combination, with an aggregate variable to indicate whether a respondent held at least one conspiracy belief and another to indicate whether a respondent engaged in all health-protective behaviours.
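the test statistics described above can be illustrated from scratch: the mann-whitney u statistic, vargha and delaney's a (which the authors computed in r with rcompanion), and fisher's exact test with a sample odds ratio. the pure-python sketch below uses invented toy data, not the survey's, and is only meant to show the computations.

```python
from math import comb

def mann_whitney_u(x, y):
    """mann-whitney u statistic for sample x vs y (ties count 0.5)."""
    return sum(1.0 if xi > yi else 0.5 if xi == yi else 0.0
               for xi in x for yi in y)

def vargha_delaney_a(x, y):
    """effect size a = P(X > Y) + 0.5 P(X = Y); 0.5 means no effect."""
    return mann_whitney_u(x, y) / (len(x) * len(y))

def fisher_exact(a, b, c, d):
    """two-sided fisher's exact test on the 2x2 table [[a, b], [c, d]];
    returns (sample odds ratio, p value)."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def pmf(k):  # hypergeometric probability that cell (1,1) equals k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = pmf(a)
    p = sum(pmf(k)
            for k in range(max(0, row1 + col1 - n), min(row1, col1) + 1)
            if pmf(k) <= p_obs * (1 + 1e-9))
    odds = (a * d) / (b * c) if b * c else float("inf")
    return odds, p

# toy data (invented): social media use scores (1-5) for respondents
# with vs without at least one conspiracy belief
believers = [4, 5, 3, 5, 4, 4]
others = [2, 1, 3, 2, 1, 2]
a_effect = vargha_delaney_a(believers, others)  # close to 1: believers rank higher

# toy 2x2 table: conspiracy belief (rows) vs all protective behaviours (cols)
odds_ratio, p_value = fisher_exact(30, 70, 160, 90)
```

a bootstrap ci for a, as in the paper, would simply resample both groups with replacement 1000 times and take the 2.5th and 97.5th percentiles of the resulting a values.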
in study 3, measurements of media use were both treated individually and aggregated by recoding ordinal variables as numeric variables and taking the mean, creating one aggregate variable for the legacy media and one aggregate variable for social media. tests covering combinations of raw and aggregated variables are reported in tables in the online supplementary materials, which are prefixed 's'. welch unequal variance t tests were used to test for effects of age and fisher tests were used for effects of gender with regard to aggregate variables, again with a 95% ci. in study 3, logistic regression models were used to control for the effects of multiple variables. 'don't know' responses were treated as missing data. all tests were carried out using base r v. 3.6.1 (r core team, 2019), with the exception of vargha and delaney's a, which was calculated using the r library, rcompanion v. 2.3.21 (mangiafico, 2020) . respondents identified true statements from a list of six statements which included these three conspiracy beliefs: 'the virus that causes covid-19 was probably created in a laboratory', 'the symptoms of covid-19 seem to be connected to 5g mobile network radiation' and 'the covid-19 pandemic was planned by certain pharmaceutical corporations and government agencies'. respondents answered the question 'how do you find out what's going on in the world?' using a five-point scale from 'always from major newspapers and/or tv channels (including online)' to 'always from social media', and were also asked to identify behaviours in which they were engaging from a list of six which included three health-protective behaviours (table 2) . 
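the aggregation step described here — averaging ordinal media-use items into legacy-media and social-media scores, and flagging respondents who hold any conspiracy belief or follow every behaviour — might look like the following sketch. the field names and coding are assumptions for illustration, not the survey's actual variables; 'don't know' responses are treated as missing, as in the paper.

```python
def aggregate_respondent(resp):
    """collapse one respondent's raw answers into the aggregate
    variables described in the text. None encodes a "don't know"
    response, which is treated as missing data."""
    social = [resp[k] for k in ("youtube", "facebook", "twitter", "whatsapp")
              if resp[k] is not None]
    legacy = [resp[k] for k in ("tv_radio", "newspapers")
              if resp[k] is not None]
    return {
        "social_media_use": sum(social) / len(social) if social else None,
        "legacy_media_use": sum(legacy) / len(legacy) if legacy else None,
        "any_conspiracy_belief": any(resp["conspiracy_beliefs"]),
        "all_protective_behaviours": all(resp["behaviours"]),
    }

r = aggregate_respondent({
    "youtube": 4, "facebook": 5, "twitter": 2, "whatsapp": None,
    "tv_radio": 3, "newspapers": 1,
    "conspiracy_beliefs": [False, True, False],
    "behaviours": [True, True, True, True],
})
```

the two boolean flags correspond to the aggregate variables used throughout the hypothesis tests, while the two mean scores correspond to the study 3 media-use aggregates.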
hypotheses:
h.1.1 a positive relationship between conspiracy belief and preference for social media over mainstream media;
h.1.2 a negative relationship between health-protective behaviour and preference for social media over mainstream media;
h.1.3 a negative relationship between conspiracy belief and health-protective behaviour.
the most commonly held conspiracy belief was cb.1.1 (a laboratory origin for the coronavirus). there was a positive relationship between holding one or more conspiracy beliefs and preference for social media over legacy media as a general source of information, u(n1 = 266, n2 = 665) = 99 987.0, p = 0.001, 95% ci (0.52-0.60). hypothesis h.1.1 is thus supported (online supplementary table s.1.5). there was no relationship between engagement in all health-protective behaviours and preference for social media over legacy media as a general source of information, u(n1 = 580, n2 = 351) = 100 207.0, p = 0.680, 95% ci (0.46-0.53). there was also no relationship for individual health-protective behaviours. h.1.2 is thus unsupported (online supplementary table s.1.6). there was a very strong negative relationship between holding one or more conspiracy beliefs and following all health-protective behaviours, p < 0.001, 95% ci (0.34-0.61). the strongest effects were observed for cb.1.2 (a connection between covid-19 symptoms and 5g). h.1.3 is thus supported (online supplementary table s.1.7). respondents were asked whether the statement that 'coronavirus was probably made in a laboratory' was true or false. they were also asked how frequently they were checking social media for information or updates about covid-19 and whether they were engaging in each of several health-protective behaviours (table 3).
those holding the conspiracy belief were several years younger, t(950.90) = −6.44, p < 0.001, 95% ci (−7.54 to −4.02), while those who followed all health-protective behaviours were several years older, t(589.10) = 9.28, p < 0.001, 95% ci (6.99-10.74). women were significantly more likely to engage in all health-protective behaviours than men, p < 0.001, 95% ci (1.65-2.62), and slightly less likely to hold the conspiracy belief, although this was not statistically significant, p = 0.214, 95% ci (0. there was no relationship between engaging in all health-protective behaviours and frequency of checking social media for information or updates about covid-19, u(n1 = 1785, n2 = 391) = 343 412.0, p = 0.611, 95% ci (0.46-0.52). however, there were significant negative relationships between frequency
(table 2 response scale: 1. always from major newspapers and/or tv channels (including online); 2. more from major newspapers and/or tv channels (including online) than from social media; 3. equally from major newspapers and/or tv channels (including online) and from social media; 4. more from social media than from major newspapers and/or tv channels (including online); 5. always from social media; 6. don't know)
there was a significant negative relationship between holding the conspiracy belief and engagement in all health-protective behaviours, p < 0.001, 95% ci (0.39-0.66). there was also a significant negative relationship between holding the conspiracy belief and engagement in each individual health-protective behaviour. h.2.3 is thus supported (online supplementary table s.2.7). respondents were asked about four of the same health-protective behaviours as in study 2 and about a wider range of conspiracy beliefs than in either of the two previous studies. the media usage question was more detailed, inviting respondents to assess how much of their knowledge about covid-19 was drawn from seven different sources, including four named social media platforms (table 4).
no explicit hypothesis was formulated for information source is.3.7 ('friends and family'), but the implicit hypothesis of an effect was tested for comparative purposes. those holding one or more conspiracy beliefs were slightly younger, t(1597.01) = −4.33, p < 0.001, 95% ci (−5.02 to −1.89). those who followed all health-protective behaviours were several years older, t(943.24) = 6.98, p < 0.001, 95% ci (3.99-7.12). women in the sample were significantly more likely to follow all health-protective behaviours than men, p < 0.001, 95% ci (1.65-2.62), and slightly more likely to hold one or more conspiracy beliefs, although this was not statistically significant, p = 0.164, 95% ci (0.94-1.40). see online supplementary tables s.3.1-4. there was a small but significant negative relationship between use of legacy media as a source of knowledge about covid-19 and belief in one or more conspiracy theories, u(n1 = 884, n2 = 748) = 296 848.5, p < 0.001, 95% ci (0.42-0.48). however, the effect of television and radio considered alone was in most cases statistically significant, while the effect of newspapers and magazines considered alone was not. hypothesis h.3.1 is thus supported, with the caveat that this is primarily due to the contribution made by broadcast media (online supplementary table s.3.5). there was a strong positive relationship between use of social media platforms as sources of knowledge about covid-19 and holding one or more conspiracy beliefs, u(n1 = 882, n2 = 748) = 424 640.0, p < 0.001, 95% ci (0.62-0.67). in almost all cases, there was also a significant positive relationship between each individual conspiracy belief and use of each platform. youtube showed the strongest such association. there was also a smaller but significant positive relationship between holding one or more conspiracy beliefs and use of friends and family as a source of information about covid-19, u(n1 = 878, n2 = 749) = 397 473.5, p < 0.001, 95% ci (0.57-0.63).
the implicit hypothesis of an effect for reliance on this information source was thus supported (online supplementary table s.3.7). there was a positive relationship between use of legacy media as a source of knowledge about covid-19 and following all health-protective behaviours; however, this effect was small and of borderline significance, u(n1 = 1610, n2 = 563) = 478 174.0, p = 0.046, 95% ci (0.50-0.56). the effect associated with tv and radio was more significant, u(n1 = 1601, n2 = 561) = 481 068.5, p = 0.006, 95% ci (0.51-0.56), while there was no individual effect for newspapers and magazines, u(n1 = 1594, n2 = 553) = 447 648.5, p = 0.564, 95% ci (0.48-0.54). hypothesis h.3.3 is thus supported, with the caveat that this is largely due to the contribution made by broadcast media (online supplementary table s.3.8). there was a much stronger and more significant negative relationship between use of social media as a source of knowledge about covid-19 and engagement in health-protective behaviours, u(n1 = 1603, n2 = 563) = 342 191.5, p < 0.001, 95% ci (0.35-0.41). except with regard to hpb.3.1 (hand-washing), where there was a weak negative effect that was only significant with regard to youtube, whatsapp and the aggregate social media variable, there was in every case a very highly significant negative relationship between use of each social media platform and each of the health-protective behaviours considered individually (p < 0.001). as with conspiracy beliefs, youtube was the platform with the strongest association with this undesirable outcome. h.3.4 was thus supported (online supplementary table s.3.9). there was a weaker but still significant negative relationship between use of friends and family as a source of knowledge about covid-19 and engagement in all health-protective behaviours, u(n1 = 1601, n2 = 560) = 240 145.0, p < 0.001, 95% ci (0.40-0.47).
the implicit hypothesis of an effect for this source of knowledge was thus supported (online supplementary table s.3.10). there was a strong negative relationship between holding one or more conspiracy beliefs and engagement in all health-protective behaviours, p < 0.001, 95% ci (0.29-0.47). except with regard to belief cb.3.1 (laboratory origin), this relationship was very highly significant with regard to every combination of behaviours and beliefs (p < 0.001). in relation to the aggregate variable, hpb.3.all, the strongest effect was associated with … finally, a series of binomial logistic regression models were constructed to control for multiple variables as predictors of health-protective behaviour (table 5). conspiracy belief emerges as a more powerful predictor of health-protective behaviour non-engagement than gender, and as a more powerful predictor than age once controls for media usage are applied. media usage, and especially social media usage, appears to be the most powerful predictor of all (at least as represented in these models), with age losing much of its predictive power once it is controlled for. (this contrasts with the finding of qualified or absent effects on behaviour in studies 1 and 2. however, those studies did not collect such detailed data on media usage.) the studies reported here find a positive association between covid-19 conspiracy beliefs and use of social media as a source of information about covid-19, and a negative association between covid-19 conspiracy beliefs and covid-19-specific health-protective behaviours, with the strongest negative effects being associated with beliefs that imply that the coronavirus may not exist, that its lethality has been exaggerated, or that its symptoms may have a non-viral cause. in addition, study 3 and, to a lesser extent, study 2 find a negative association between use of social media as a source of information about covid-19 and health-protective behaviours.
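the binomial logistic regressions reported in table 5 model the log-odds of non-engagement as a linear function of predictors such as conspiracy belief, media usage, age, and gender. a minimal pure-python sketch of the technique (fitted here by batch gradient descent on toy data; the authors' analysis, per the reference list, appears to have used r, and the variable layout below is an assumption for illustration):

```python
import math

def sigmoid(z):
    """logistic link: map a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """fit a binomial logistic regression by batch gradient descent.
    X: feature rows (e.g. [conspiracy_belief, social_media_use, age]),
    y: 0/1 outcomes (1 = non-engagement). returns (intercept, weights)."""
    n, k = len(X), len(X[0])
    intercept, weights = 0.0, [0.0] * k
    for _ in range(epochs):
        g_b, g_w = 0.0, [0.0] * k
        for xi, yi in zip(X, y):
            pred = sigmoid(intercept + sum(w * x for w, x in zip(weights, xi)))
            err = pred - yi          # gradient of the log-loss w.r.t. the linear predictor
            g_b += err
            for j in range(k):
                g_w[j] += err * xi[j]
        intercept -= lr * g_b / n
        for j in range(k):
            weights[j] -= lr * g_w[j] / n
    return intercept, weights

def predict(intercept, weights, xi):
    """predicted probability of the outcome for one row of predictors."""
    return sigmoid(intercept + sum(w * x for w, x in zip(weights, xi)))
```

the fitted weights play the role of the coefficients in table 5: a positive weight on a (hypothetical) conspiracy-belief column raises the predicted odds of non-engagement, and comparing weights across standardized predictors is one crude way of asking which predictor is "most powerful".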
study 3 also finds a weaker negative association for use of friends and family as a source of information, and a positive association for use of legacy media, especially broadcast media. findings are suggestive of a hierarchy of effects associated with different social media platforms, with youtube appearing to be the most problematic, but all being associated with significant negative effects. a consistent finding across the studies is that covid-19 conspiracy beliefs are more likely to be held by younger respondents, and that health-protective behaviour is associated with both age and female gender. however, the regression analysis in study 3 suggests that effects on health-protective behaviour associated with conspiracy belief and social media use are not accounted for by age or gender, and that some of the apparent effects of age may indeed be accounted for by differences in media usage. that is, it may be that older people are more likely to reject conspiracy beliefs and engage in health-protective behaviours in part because they make more use of broadcast media and less use of social media. all three studies suggest that conspiracy beliefs act to inhibit health-protective behaviours and that social media act as a vector for such beliefs. studies 2 and 3 find further evidence of a link between social media and non-engagement in health-protective behaviours, and study 3 finds evidence of an opposite relationship for legacy media, especially broadcast media. in the uk, broadcast media are subject to official regulation, and many print media platforms are subject to voluntary regulation, but social media are largely unregulated. one wonders how long this state of affairs can be allowed to persist while social media platforms continue to provide a worldwide distribution mechanism for medical misinformation.

limitations: study 1 relies on a self-selecting sample, while studies 2 and 3 rely on stratified random samples from a recruited panel.
moreover, all three studies rely on self-reports for measurement both of media usage and of compliance with health-protective behaviours.

supplementary material. the supplementary material for this article can be found at https://doi.org/10.1017/s003329172000224x.

acknowledgements. data collection for study 1 was provided free of charge by citizenme. data collection for studies 2 and 3 was part-funded by king's together, an internal programme of funding to enable initial exploratory research and analysis at king's college london, and part-funded by the national institute for health research health protection research unit (nihr hpru) in emergency preparedness and response: a partnership between public health england, king's college london, and the university of east anglia. publication costs for this article were funded by the pears foundation, a family foundation with a focus on understanding, engagement and wellbeing.
the views expressed are those of the author(s) and not necessarily those of king's college london, citizenme, the pears foundation, the nihr, public health england or the department of health and social care.

key: cord-024640-04goxwsx authors: oates, sarah title: the easy weaponization of social media: why profit has trumped security for u.s. companies date: 2020-05-11 journal: digi war doi: 10.1057/s42984-020-00012-z sha: doc_id: 24640 cord_uid: 04goxwsx

american-based social media companies have become active players in digital war, both by accident of design and a subsequent failure to address the threat due to concerns over profits. discussions about the negative role of social media in society generally address the myriad problems wrought by social media, including electoral manipulation, foreign disinformation, trolling, and deepfakes, as unfortunate side effects of a democratizing technology. this article argues that the design of social media fosters information warfare. with their current composition and lack of regulation, social media platforms such as facebook and twitter are active agents of disinformation, their destructive force in society outweighing their contributions to democracy. while this is not by deliberate design, the twin forces of capitalism and a lack of regulation of the world's largest social media platforms have led to a situation in which social media are a key component of information war around the globe. this means that scholarly discussions should shift away from questions of ethics or actions (or lack thereof) on the part of social media companies to a frank focus on the security risk posed to democracy by social media. this article departs from the usual discussion of the role of social media in society by comparing its value to its destructive elements.
the problem with trying to "balance" the benefits of social media, such as its challenge to censorship and its ability to aggregate social movements, against destructive elements, such as disinformation and the loss of privacy, is that it suggests we can somehow offset one side against the other. but if social media are making a country vulnerable to a key component of modern warfare, that really cannot be "balanced" even by the ability of social media to give voice to those often ignored by the mainstream media or to allow citizens to find affinity groups online. it's like saying that we can find a "balance" for discussing other national security issues, such as the idea that while an enemy might have superior weapons, at least our nation spent more on social welfare over the past decade. it might be true, but it won't help you win the war when the tanks roll over your borders. while executives in silicon valley may bristle at these ideas, let us first consider the nature of modern war and why information has become a key element of contemporary conflict. it is not so much that we need to understand the digital aspect of modern warfare; rather we need to see that digital warfare is a new way of understanding war in the digital age. digital war is the central, rather than peripheral, issue. according to the federation of american scientists, digital war is "a subset of what we call information war, involves non-physical attacks on information, information processes, and information infrastructure that compromise, alter, damage, disrupt, or destroy information and/or delay, confuse, deceive, and disrupt information processing and decision making." 1 one of the most useful frames for considering information war is as the "fifth dimension" of warfare, joining land, sea, air, and space as spheres of battle (franz 1996). information war is a more precise notion than "cyber war" (hunker 2010).
indeed, the two concepts of "digital war" and "cyber war" are often conflated, given that "cyber" capabilities could be much broader than information operations and could embrace such tactics as distributed denial of service (ddos) attacks to bring down servers, the takeover of critical infrastructure via online or local malware, or even drone attacks that are programmed from afar. that kind of conflation is actually not useful, in that it reduces information to a subset of a broader phenomenon and underplays the critical role of information in conflict. while it is disruptive for a country to lose access to the internet due to an attack on a server, this does not have the same insidious danger as subversive propaganda. cutting off access is an obvious act of war that has a clear solution. infiltrating a media system, especially social media, in order to sow seeds of distrust in a way that undermines political institutions is a far more insidious and corrosive act of aggression against a society. this discussion is focused on information war as it is carried out on social media, the battle for hearts and minds, as the most recent, critical development in modern warfare. social media play a very important part in that war, serving as a critical component of both attack and defense. indeed, information operations not only augment, but they often presage or even essentially replace conventional warfare. while the most publicized attack on democracy has been through attempted russian influence in the u.s. elections, countries in europe also have uncovered sustained anti-democracy campaigns by russians in countries ranging from estonia to the united kingdom. while the u.s. and other countries are aware of the threats and are responding, this article argues that it is difficult to have a robust response when u.s. social media companies are key distribution nodes of foreign disinformation.
the ways in which narratives about coronavirus have become part of information warfare should sound a particularly ominous warning to democracies. a combination of fear, a large degree of uncertainty about the virus, and the rapid shifts in the evolution of the epidemic highlight how quickly disinformation can travel through social media at vulnerable moments. while national governments and major media outlets also are struggling with covering and framing the outbreak, social media platforms allow for authoritarian communication campaigns to widen and deepen the gaps between citizens and countries just when those connections are most desperately needed to combat an international health crisis. an example of this has been the attempt by authoritarian states (including china) to hint or even outright claim that the virus originated in a laboratory in the united states. now that it is impossible for social media companies to ignore the rising evidence of the central role of social media in fomenting conflict, they have defaulted to two key arguments in their defense: freedom of speech and the idea that the problem is limited to a fundamental misuse of their platforms. a core point emphasized by social media supporters was that the platforms were also vehicles for positive social change, such as in the arab spring, although the platforms showed very little ability or desire to adapt their programs to a range of national laws or norms. a 2008 article called "doing just business or just doing business" highlighted the problem: if internet companies choose to do business in china, for example, they must abide by china's censorship rules. in the case of the yahoo! email service, this meant turning over personal information on hong kong dissidents (dann and haddow 2008). dann and haddow argued that the companies had violated their ethical standards, in that it was reasonable to surmise that turning over the information would have adverse consequences for their users.
this ethical discussion, which took place before the current global reach of u.s. social media companies, now seems touchingly quaint. dann and haddow describe a world in which the main concerns were about privacy, censorship, and state surveillance, notably whether a country could compel an internet service provider to hand over personal information on its users. there was also disquiet about search engines being compliant with information filtering and collection regimes in non-free states. social media companies still face these challenges, but they pale in the face of the ability of states to weaponize social media for both domestic and international use. in particular, the way that social media companies organize users into affinity groups (for example, by friending on facebook or via hashtags on twitter) makes it remarkably easy to find ways to manipulate these groups. when you add on the way that social media companies sell audiences to advertisers by identifying key markers via user activity (friends, posts, clicks, likes, shares, etc.), you have the ability to manipulate both domestic and foreign audiences as never before. while there has been outrage over cambridge analytica, the same capacity to identify and manipulate social media users still lies at the heart of the social media business model. and there is little sustained outrage over that. social media companies would (and do) argue that it's unfair to blame the nature of social media itself if it winds up as the fifth dimension of warfare. siva vaidhyanathan disagrees: in his 2018 book antisocial media, vaidhyanathan asserts that the entire design of social media makes it the perfect carrier for disinformation and that social media companies do little to counter this. indeed, the arguments by mark zuckerberg and other social media officials are at best misinformed and, at worst, disingenuous. 
given the evidence of the weaponization of social media and the particular lack of any right of redress for foreign citizens against u.s. companies, 2 it is clear that unregulated and mostly unresponsive dominant media platforms are choosing not to fundamentally change their business model. indeed, shareholders in facebook, twitter, and similar companies would not wish for greater policing of their platforms. the central element of the social media business model is the argument that social media companies are platforms, not providers, of information. this is protected by section 230 of the u.s. communications decency act, which states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." this sounds a little circular, but it means that the blame lies with the source of any problematic content, not the platform that provides the content. this has functioned as a get-out-of-jail-free card for facebook and other internet providers if someone, for example, harasses or bullies or even threatens an individual or group. at the same time, section 230 has been hailed by internet advocates as protection of free speech, which is true, but also (inadvertently) creates a great opportunity for proponents of disinformation. 3 section 230 is a very liberal interpretation of u.s. law, which is quite protective of free speech to the point that even hate speech is permissible in the united states. this has created some friction in the united states, but the truly dangerous aspect isn't entirely domestic. it creates opportunities for digital war in two ways. first, it leaves the "digital borders" for disinformation more or less open for foreign states (this is aside from the problems of domestic disinformation).
at the same time, as facebook became the dominant social media platform around the world, citizens in other countries find themselves in the same dilemma. while citizens and their leaders discover they need to use the u.s.-based platform in order to communicate and even govern, at the same time social media platforms such as facebook foster disinformation. they have introduced a communications system with a powerful virus of disinformation. this is exacerbated by the fact that facebook also owns whatsapp and instagram, which carry the same problems. a u.s. freedom of speech model has set up a system through which disinformation flourishes and undermines democracy in countries worldwide. it is not that social media companies are unaware or even naive about the way in which their platforms are used for disinformation. rather, there are strong economic incentives for keeping the current laissez-faire 'platform' model. there are three key parts to the model: the needs of the advertisers are primary over those of the users; there is almost no vetting of the identity of who is posting; and content is virtually unmoderated. when content is moderated, it is automated as much as possible and this, so far, has been fairly ineffective. ignoring the role of social media as the fifth dimension of war is lucrative, especially for facebook. according to facebook's 2019 annual report, 4 the company had revenue of $70.7 billion and earned $18.5 billion. despite all the bad press about facebook in the wake of the 2016 russian election interference scandal, daily active users still increased by nine percent to 1.66 billion by the end of 2019. the number of employees at facebook surged as well in 2019, with the company reporting 44,942 workers at the end of 2019, an increase of 26 percent in a single year. overall, the company estimated that on average about 2.5 billion people used facebook at least every month by end of 2019, an increase of eight percent in a single year. 
the company describes itself in the press release in this way: "founded in 2004, facebook's mission is to give people the power to build community and bring the world closer together. people use facebook's apps and technologies to connect with friends and family, find communities and grow businesses." the self-description highlights the deliberate self-deception: while facebook does provide the services listed, it also furnishes an excellent way to spread malicious disinformation and propaganda to both domestic and foreign audiences. if we want to return to the question of business ethics, one could argue that facebook (and other social media companies) are continuing with a 'hands off' business model to keep costs down and profits up. and it's working. in the five years ended december 31, 2019 (before the global financial shock from covid-19), facebook stock increased in value by about 163 percent, while the nasdaq index was up about 87 percent over the same period. 5 allowing free access to the platform keeps revenue up and costs down. this raises the question of whether social media could change so that there would be a better balance between the benefits and the drawbacks for society, notably the weaponization of social media to promote foreign disinformation. the real problem is that the users are not the financial priority of facebook or almost any social media site. rather, the advertisers are the key customers being served by the platform. in addition, the investors (in the form of shareholders) also rank far above the users in terms of company priorities. almost any move made to protect the platform and its users from being used in digital war would adversely affect service to the advertisers and the shareholders. as the currency of social media platforms such as facebook is the number of users, the incentive is to keep barriers to entry as low as possible.
although facebook is slightly harder to join than twitter, it's still relatively easy to create fake accounts. 6 changing a system in which artificial intelligence is overwhelmingly used to detect fake accounts would be expensive and cumbersome, but still possible (not least by having better ai). but there is a deeper problem. as the value of social media companies is based to a large degree on the number of users, it is not in the interest of the companies (or their shareholders) either to identify fake accounts or to discourage them in general. while social media companies may publicly condemn fake accounts and occasionally purge some, in reality fake accounts prop up their business model. content moderation is a more complex issue for social media companies. on the one hand, better content moderation would make for a more pleasant experience for a user seeking to avoid extreme speech, pornography, trolling, or what could be considered culturally inappropriate elements of the public sphere (or inappropriate for a particular age or group). social media companies do moderate their content to screen out child pornography, extreme violence, etc., although even this is quite difficult. yet, emotionally charged subject matter drives engagement (celis et al. 2019). this dovetails with the need to attract consumers to the site so that they can be marketed to advertisers. thus, a quiet, calm public sphere is not the best for their business model. it's true that it would be much more expensive for social media companies to moderate either content or users in a more forceful and efficient way. but as with fake users, outrageous content also helps the bottom line. so it's a lose-lose for social media companies to police their users and their content, as it would be far more costly and actually might reduce engagement. their reluctance to do so is thus more than financial or logistical: moderation works directly against their core economic interests as companies.
7 if the economics of social media make it illogical for the platforms to change, how can we bring about a shift in how social media work? if we stick to a dialogue that compares the benefits of social media to the problems of social media, we are unlikely to drive change. however, if citizens and policymakers alike can be made aware of the critical role of u.s. social media companies in supporting information operations by foreign states, then change is more likely. this means making the central role of information warfare in modern conflict much more visible and compelling. this will take a significant shift in thinking for citizens, democratic leaders, and social media companies alike: in the american laissez-faire freedom of speech model, it is assumed that misinformation and even disinformation are just part of the marketplace of ideas. however, this is a misconception in a world in which information is deployed into highly engaged, yet highly segregated, communities of voters who are essentially walled off from fact-based journalism and information. in the united states, russia and other foreign entities are able to deploy weapons of mass persuasion directly on vulnerable citizens, with both the vulnerabilities and the deployment system provided to a large degree by social media companies. this vulnerability also came into clearer view during the coronavirus epidemic, as propaganda released for financial and political gain added to the global 'infodemic.' in order to fully understand the role of social media companies in warfare, we need to return to the concept of the fifth dimension of war, i.e. information warfare. it is added to the first four dimensions of land, sea, air, and space. information warfare is, of course, not a new phenomenon but it is radically changed and augmented by the digital sphere.
in particular, the advent and wide adoption of social media around the globe gave a unique and unprecedented opportunity for countries to carry out information operations on the citizens of other states. the design of social media allows foreign influence operations to identify key groups, infiltrate them, and manipulate them. in 2016, russians were particularly interested in using wedge issues as ways to polarize and manipulate u.s. citizens and could be effective at doing this with trump supporters via social media (jamieson 2018). in other words, the design of social media companies is no longer an issue of pros and cons. according to analysts such as vaidhyanathan, facebook's entire design creates the perfect operating theater for information warfare. with social media in its current state, this type of activity cannot be detected or curtailed in a reasonable way because it is embedded in the very nature of social media. it is so easy to pose as another, to gain trust, to polarize, and even to convince people to change behavior that american corporations such as facebook create a low-cost weapon for foreign enemies. thus, american corporate ingenuity and business logic created a weapon that is deployed against americans themselves. this again brings up the problem of tradeoffs. americans are now accustomed to the affordances of social media, including the idea that it is "free" to the user. in point of fact, user activity is harvested and sold to advertisers, so users are working for the social media companies in exchange for access. few users, however, either realize or are particularly perturbed by that tradeoff. we could compare this to the rise of the automobile as a mode of transport. initially, cars were seen as independent luxury items and there were few rules to govern their use. the early automobile age was accompanied by chaos and a high rate of accidents.
over time, governments took over regulation of the roads and-to a large degree-of car safety through the introduction of seat belts, anti-locking brakes, airbags, and other features. yet, almost 40,000 americans die in car accidents every year and tens of thousands more are injured. 8 so americans do accept tradeoffs. but there has never been a national discussion about the security tradeoffs in social media. what is the level of safety that is necessary? how can we start a national conversation about this? why is it a mistake to leave this in the hands of companies? how can we maintain a robust online sphere, but keep it much safer? are citizens willing to pay for this, either directly or through government support? if the nature of the problem is framed as one of national security, can u.s. regulation work? regulating social media is enormously difficult and that's just in the united states, which has the advantage of being the headquarters of large western social media companies. much of this is due to excessive cyber-optimism in the united states, in particular as an echo of the origins of the internet as a research and educational tool outside of the commercial realm. although that aspect of the internet has been long overshadowed by the commercial web, there is no comprehensive set of laws to specifically regulate the internet. nor is there any public consensus on social media regulation, although just over half of americans now believe that major tech companies should be more heavily regulated. 9 but according to the pew research center, republicans feel that tech companies might already interfere with content too much: 85 percent of republicans suspect that tech companies censor political viewpoints. can americans be convinced that social media regulation is a security issue? we can turn to the issue of terrorism as a shift in thinking from civilian to defense issues. prior to 9/11, the idea that u.s. 
citizens would go through body and luggage searches for all domestic flights would have been laughable. yet, americans and visitors now go through searches not only at airports, but in a wide range of other public spaces. while this is framed as a defense against foreign attacks, americans face significant threats from domestic terrorism. there is a silver lining here: illuminating the capabilities of social media as a tool for foreign actors to wage disinformation war against u.s. citizens is useful for understanding domestic threats as well. american democracy also faces significant online domestic threats, from anti-vaxxers to hate groups to disinformation attacks on political opponents. politically, though, there is no will to frame social media as a key part of that problem; rather, the general narrative is that disinformation is an unfortunate side effect of the social media we cannot live without. to change this attitude on the part of americans and u.s. social media companies, it will take a radical shift in understanding how social media has opened up a new and (so far) asymmetric battlefield for enemies of the american state. american perception of conflict and security shifted radically with 9/11. the threat of social media is much less visible, woven as it is into the generally opaque nature of social media as people pose, pretend, and market themselves. it also comes at a time when the u.s. government is particularly weak, as the political sphere is deeply split over the trump administration's actions. that being said, americans do seem to be waking up both to the threat and the lack of protection from social media companies. while russian interference is seen through a strictly partisan lens, chinese interference creates a much more unified response. nor have social media executives won themselves much praise for their disingenuous performance at congressional hearings.
as the coronavirus pandemic has led to heightened awareness of both the importance and vulnerability of trustworthy information, this could be a watershed moment in understanding and valuing the communication ecosystem. youtube chief executive susan wojcicki said that the virus had been "an acceleration of our digital lives" and noted that it caused the platform to speed up changes to direct users to authoritative information. 10 it is particularly relevant to consider how youtube addressed health disinformation in the coronavirus pandemic, in that a study by the oxford internet institute found that the platform had emerged as a "major purveyor of health and wellbeing information" (marchal et al. 2020, p. 1). the washington post column that quoted wojcicki speculated that health disinformation was taken more seriously than political disinformation by social media platforms. the relative affordances of social media companies might baffle and confuse many americans, but security threats never baffle or confuse americans for long. all it will take is some relatively unified political leadership that can highlight this threat and americans may indeed be ready for significant changes to the current social media model. just as americans pay a security tax for every airline flight, they may be willing to endorse a model of social media that is forced to take responsibility for its users and content. this will likely mean much more stringent government regulation and a change to section 230. while this is disappointing for liberal concepts of free speech, free speech cannot work in a system in which it is weaponized by foreign adversaries.

references:
controlling polarization in personalization: an algorithmic framework
just doing business or doing just business: google, microsoft, yahoo! and the business of censoring china's internet
ks: school of advanced military studies, united states army command and general staff college
cyber war and cyber power: issues for nato doctrine
cyberwar: how russian hackers and trolls helped elect a president: what we don't, can't, and do know
coronavirus news and information on youtube: a content analysis of popular search terms
antisocial media: how facebook disconnects us and undermines democracy

key: cord-319960-pm95v31c authors: widmar, nicole; bir, courtney; lai, john; wolf, christopher title: public perceptions of veterinarians from social and online media listening date: 2020-06-06 journal: vet sci doi: 10.3390/vetsci7020075 sha: doc_id: 319960 cord_uid: pm95v31c

the public perception of the veterinary medicine profession is of increasing concern given the mounting challenges facing the industry, ranging from student debt loads to mental health implications arising from compassion fatigue, euthanasia, and other challenging aspects of the profession. this analysis employs social media listening and analysis to discern top themes arising from social and online media posts referencing veterinarians. social media sentiment analysis is also employed to aid in quantifying the search results, in terms of whether they are positivity/negativity associated. from september 2017-november 2019, over 1.4 million posts and 1.7 million mentions were analyzed; the top domain in the search results was twitter (74%). the mean net sentiment associated with the search conducted over the time period studied was 52%. the top terms revealed in the searches conducted revolved mainly around care of or concern for pet animals. the recognition of challenges facing the veterinary medicine profession was notably absent, except for the mention of suicide risks.
while undeniably influenced by the search terms selected, which were directed towards client–clinic related verbiage, a relative lack of knowledge regarding veterinarians' roles in human health, food safety/security, and society generally outside of companion animal care was recognized. future research aimed at determining the value of veterinarians' contributions to society and, in particular, in the scope of one health, may aid in forming future communication and education campaigns. pet owners can, and do, share information about their pets online. anyone can easily turn to the internet for veterinary medicine information, to generate related content themselves, and even share information about the veterinary professionals who care for their own animals (i.e., posting reviews, venting frustrations, or tweeting about office visits). veterinary practices can obtain the same advantages as many businesses by moving their clinic-client interactions online, including sharing information with clients, increasing interactions with others, increasing accessibility and widening access to health information [1]. some potential drawbacks include distractions, disagreements, and the low frequency of response [2]. for most pet owners, veterinarians are recognized as caregivers of companion animals and household pets, such as cats and dogs. in the us, 38% of households own a dog, while 25% own a cat. a number of database and online/web search tools are available, some of which are tailor-made for specific searches, such as for searching case law or legal documents, whereas others are broader or news media focused. for example, lexisnexis provides universities and government agencies with news and business sources and searching capabilities (lexisnexis, 2018). recent advances in online media, and in particular web 2.0 technology, which facilitates posting/sharing by users, have led to an interest in searching social and online media.
increasingly, social media is a platform, available in multiple languages and inviting voluntary contributions from users worldwide, that provides searchable databases of the responses, comments, and sentiments of individuals. the netbase platform [17] was employed to study the number of online posts, from twitter and other publicly available sites including blogs, news releases, and online publications, related to keywords associated with veterinarians over the time period from 1st september 2017 to 30th november 2019. the netbase platform is a recognized leader in social media search engines, listening, analytics, and social media intelligence. the time period over which data was collected offers over 2 years of data, facilitating inclusion of both positive and negative news-worthy events and in-depth study of a time period long enough to allow for impacts due to seasonality or other annual factors. due to the nature of social media and online data searches, in which accounts, posters, or individual media/posts can be removed or reinstated by the author or the social media platform itself, the specific date of data collection is imperative to note. data for the time period of 1st september 2017 through 30th november 2019 was collected on 10th december 2019. although searches in multiple languages are technologically possible, the language interpretation, slang vernacular used, and shorthand of online media posts must be acknowledged and considered by researchers, and thus this analysis was limited to posts in english. for many topics, but especially within the links between animal species, human caregivers, and pets, there are cultural norms and expectations that vary by geographic region and country. cultural context makes country-specific analysis valuable to facilitate the interpretation of findings.
while social listening is technically possible across multiple countries, for this analysis, the geography for all searches employed was limited to the united states, the u.s. virgin islands, and puerto rico. primary search terms, or keywords, along with exclusionary terms, were developed by researchers to facilitate the collection of a dataset encompassing online and social media posts associated with veterinarians, veterinary medicine, and veterinary service locations (i.e., animal hospitals or clinics). primary search terms employed in this search and analysis were veterinarian, #veterinarian, vet office, vet clinic, animal clinic, animal hospital, cat hospital, pet clinic, veterinary medicine, pet hospital. exclusionary terms were identified by researchers through the process of 'tuning' the search, by scanning search hits to identify extraneous media hits and developing exclusionary terms to address top extraneous findings. exclusionary terms used in this analysis were burned body of a woman, woman's burned corpse, hyderabad, #bookclub, #nj, #book, #asmg, #kindle, #mystery, megan stanford, @hallieturich, have been lunged, 31/2 years. two domains were excluded, as they were identified as returning erroneous search hits, specifically boards.4chan.org and yellowbot.com. two authors were excluded from the search and analysis conducted, namely indopremier.com:flickr.com and mainchannel_:twitter.com. sentiment, generally speaking, is the opinion, view, or attitude towards a situation, event, topic, or occurrence. net sentiment, as it is commonly referred to within the social media listening framework and the literature, is fundamentally a construct of comparing positive and negative posts to arrive at a single value that captures the positivity/negativity of the posts returned in each search/analysis. 
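as a toy illustration of the inclusion/exclusion keyword filtering described above, a minimal sketch in python; the actual netbase query engine is proprietary, and the term lists here are abbreviated from those reported in the text:

```python
# illustrative sketch of keyword-based post filtering with primary (inclusion)
# and exclusionary terms; both lists are abbreviated from those in the text.
PRIMARY_TERMS = [
    "veterinarian", "#veterinarian", "vet office", "vet clinic",
    "animal clinic", "animal hospital", "cat hospital", "pet clinic",
    "veterinary medicine", "pet hospital",
]
EXCLUSION_TERMS = ["hyderabad", "#bookclub", "#kindle", "#mystery"]

def matches_search(post: str) -> bool:
    """True if the post contains a primary term and no exclusionary term."""
    text = post.lower()
    if any(term in text for term in EXCLUSION_TERMS):
        return False
    return any(term in text for term in PRIMARY_TERMS)

print(matches_search("Took my dog to the vet clinic today"))      # True
print(matches_search("new #mystery novel about a veterinarian"))  # False
```

exclusion is checked first, mirroring the 'tuning' step in which extraneous hits are screened out before the primary terms are counted.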
a third category of posts, neutral, is also employed when calculating and reporting on analytics for top words and other data summaries, but is not used in the calculation of net sentiment. the net sentiment referenced throughout this analysis is the result of the total percentage of positive posts minus the percentage of negative posts, thus resulting in a net sentiment that is necessarily bounded between −100% and +100%. social media sentiment was analyzed using the natural language processing (nlp) capabilities of the netbase platform [18, 19]. the american veterinary medicine association (avma) serves as a leading source of information for both the veterinary medical profession and the general public via information accessible on their website and associated media about the profession. the avma offers information for pet owners about their veterinarian; pet owners visiting the avma website who select "resources and tools" and then "learn about your veterinarian" find a list of available references (https://www.avma.org/resources-tools/pet-owners/yourvet) including "finding a veterinarian" and "8 things to consider when choosing a veterinarian". there is also a link on the avma main page to a webpage titled "veterinarians: protecting the health of animals and people" [20] (avma, 2019 https://www.avma.org/resources/pet-owners/yourvet/veterinarians-protecting-health-animals-and-people) which was employed in this analysis to serve as a holistic list of the roles of veterinarians in u.s. society. sixteen total roles were identified and summarized in table 1 by employing the avma (2019) resource to serve as a point of comparison to the search results yielded from the online media search. the day of the week (summarized in table 2) is available for posts from any source for which a time stamp was available. in total, the day of the week can be ascertained for 139,870 data points in the search conducted.
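the net sentiment construct defined above (percentage of positive posts minus percentage of negative posts, with neutral posts excluded from the denominator) can be sketched as follows; the sentiment labels themselves would come from the platform's nlp, which is not reproduced here:

```python
def net_sentiment(labels):
    """Percent positive minus percent negative among posts with sentiment.

    Neutral posts are excluded from the calculation, so the result is
    bounded between -100.0 and +100.0.
    """
    with_sentiment = [l for l in labels if l in ("positive", "negative")]
    if not with_sentiment:
        return 0.0
    positive = sum(l == "positive" for l in with_sentiment)
    negative = sum(l == "negative" for l in with_sentiment)
    return 100.0 * (positive - negative) / len(with_sentiment)

# mirrors the headline result: 76% positive, 24% negative -> 52% net sentiment
labels = ["positive"] * 76 + ["negative"] * 24 + ["neutral"] * 10
print(net_sentiment(labels))  # 52.0
```

note that the neutral posts change nothing here, consistent with the paper's definition in which only posts with discernible positive or negative sentiment enter the calculation.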
the number of posts was slightly higher early in the work week (monday, tuesday) and lower on the weekends (saturday, sunday), but otherwise showed no large deviations in most/least popular days of the week. the top domain for search results was twitter (74%), although reddit.com (9%) and instagram.com (7%) both yielded measurable contributions to search results. net sentiment was evaluated from 163,597 posts for which positive or negative sentiment could be ascertained, which was 9% of the total mentions captured by the search parameterized. in total, 76% of the mentions with sentiment were positive and 24% were negative, yielding a net sentiment of 52%. the inferred demographics of posters on twitter are displayed in table 3, offering insight into who is posting and what interests those posters have, broadly speaking. admittedly, ascertaining biographical or demographic information of posters from online media is complicated. gender was obtained either from an author/poster statement in their own biographical information or assumed using common baby naming databases (netbase, 2018). using the 398,354 posts for which gender could be inferred, this search yielded posts by more females (57%) than males (43%). estimated ages are available for sources that include a first name and used the methods employed by the netbase platform, which impose estimated ages based on names and social security administration name data. "age classification is based on the popularity of first names by year of birth according to u.s. social security administration data, which makes this feature more relevant for u.s. markets. this data includes about 65,000 of the most popular first names, covering more than 80% of the u.s. population" (netbase, 2018). in total, 46% of the posts for which inferred age was assigned (n = 398,525) were from posters 45 years of age or older.
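the name-based inference described above can be sketched with a toy lookup; the tables below are purely illustrative stand-ins for the platform's baby-name gender lists and the u.s. social security administration birth-year popularity data (roughly 65,000 names in the real database), so the names and years here are assumptions for demonstration only:

```python
# toy sketch of name-based demographic inference; these lookup tables are
# illustrative stand-ins, not the platform's actual name databases.
NAME_GENDER = {"mary": "female", "linda": "female", "james": "male"}
NAME_MODAL_BIRTH_YEAR = {"mary": 1950, "linda": 1948, "james": 1955}

def infer_demographics(first_name, reference_year=2019):
    """Return (gender, estimated_age); None for either where the name is unknown."""
    name = first_name.lower()
    gender = NAME_GENDER.get(name)
    birth_year = NAME_MODAL_BIRTH_YEAR.get(name)
    age = reference_year - birth_year if birth_year is not None else None
    return gender, age

print(infer_demographics("Mary"))  # ('female', 69)
print(infer_demographics("Xyz"))   # (None, None)
```

the sketch makes the method's limits visible: posters without a recognizable first name fall out of the denominator entirely, which is why the gender and age breakdowns in the text are reported only over the subset of posts for which inference was possible.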
generally, this search did not reveal large numbers of posts coming from a specific age bracket of posters. interests were determined using keywords in the biography text written by twitter users. unsurprisingly, pets were among the top two interests of posters in the search conducted about veterinary medicine. the top two interests revealed from the 185,543 posters for which interests were discernable were family (28%) and pets (27%), with a very large differential with the third top interest, politics, at only 12%. the week of 17th december 2017 saw a significant dip in net sentiment to 1%, dropping from 72% and 70% just 3-4 weeks earlier. looking at the week of december 17th specifically, 30% of the net sentiment was driven by online media mentioning caring for a new puppy and being on watch for diarrhea, and 22% was driven by media-fueled chatter about a veterinarian in louisiana shooting a neighbor's dog in the head. moreover, the drop in total mentions in that same week is also notable, having dropped from over 31,816 in the week of december 3rd, with a 68% net sentiment, to only 11,734 in the week of 17th december 2017. the week of 25th march 2018 saw net sentiment drop to 3% in another downward spike, with drivers being mentions of the dangers of medicating dogs on flights, commentary about a skin condition in giraffes being investigated via questionnaires with veterinarians, and reporting on a pit bull which was abused in august 2013, but was subsequently re-publicized in news media about abuse. sentiment in the week of 16th september 2018 was influenced most heavily (23%) by stories related to a woman who was charged with practicing veterinary medicine without a license after taking in animals affected by hurricane florence. other headlines included adverse reactions to medications and possible health and safety issues related to overheating while being dried during grooming if precautions were not employed.
a significant positive spike in sentiment was apparent in the week of 19th may 2019, driven largely by two references. the first was to a family member of the founding prime minister of singapore with a photo of dogs balancing wedding bands and the mention of employment as a veterinarian. the second was to the standardbred retirement foundation having an online auction fundraising event that included the opportunity to shadow a small animal veterinarian. aside from movements in net sentiment or numbers of mentions, the specific attributes, behaviors, terms, and hashtags which drove positive or negative sentiment are displayed in table 4. top positive attributes referenced treating dogs (16%) and comfort dog assistant (14%), with dog again tied as the top positive term. positive sentiment within hashtags was driven largely by career-oriented words such as #hiring, #job, and #careerarc. interestingly, the only corporation/corporate veterinary clinic to organically appear as a top search result, under things, was banfield (7%).

comparing search results to roles of veterinarians as identified from avma resources

roles for u.s.
veterinarians range from caring for animals directly, in animal shelters, pets kept in client's homes, and zoos/aquariums to working with the centers for disease control and prevention (cdc) and national institutes of health (nih) on issues pertinent to both human and animal health and well-being through one health initiatives. table 1 summarizes the roles of u.s. veterinarians as identified from avma [21] . sixteen total roles of veterinarians in the u.s. were documented, although the roles beyond direct pet animal care did not arise organically as top mentions or terms in the searches conducted. given the search intent focused on veterinarians in the client-facing realm of pet hospitals and animal clinics, the lack of diversity of the roles mentioned may be expected. nonetheless, the lack of mention of any of the roles of veterinarians influencing human wellbeing may point to room for further promotion of the diverse roles of veterinarians. online media analytics have been used in a variety of industries to study topics ranging from responses to natural disasters [22] , to investigating product development [23] , to understanding social media in tourism industries [24] . in particular, the advent of social media in which many users create and share content via web 2.0, as opposed to a few content creators broadcasting to many readers/consumers, has changed how online media is used in terms of ongoing conversations, rather than simple posting of news or media pieces [25] . broad topics of conversation incorporating veterinarians and veterinary medicine were uncovered in the search conducted, with over 1.7 million total mentions captured. while no benchmark for mentions exists, as searches are individually parameterized, the overall quantity of search hits was sufficient to see movement over time and to capture top terms of interest arising in the search results. 
given the topic, veterinarians and veterinary medicine, there was not a clear pattern expected with respect to the days of the week for posts. given the common business hours of many veterinary clinics and pet hospitals, the slightly higher numbers of posts early in the work week with lower numbers of posts on the weekends were reasonable, although no known/verified point of comparison was available given the nature of the data collection and analysis. while the inferred demographics of posters from twitter data can offer some insight into who is posting, ascertaining known demographics in online and social media is complicated and fraught with biases associated with self-reporting. using the inferred gender of posters, more females than males were posting media that was captured in the searches provided. bir et al., in a study about dog acquisition, reported the demographics of pet owning households and non-pet owning households side-by-side from a nationally representative sample [26]. bir et al. found that owning a pet was more commonly reported by females, by larger households, and by respondents under the age of 65 [26]. while not directly related to pet ownership, mckendree et al. reported on the demographics of those most concerned about animal welfare, revealing that females were more likely to self-report concern [27]. taken together, the overrepresentation in search results about veterinarians and veterinary medicine by females is in keeping with suggestions by previous literature that females may more often report pet ownership/care duties and concern for animals, broadly speaking. ages of posters are complicated to interpret, as social media usage may drive some of the lower participation by older demographics.
however, pet ownership has been documented most commonly in middle-aged households and less so in households with residents over 65 years of age [26], so it is likely that both social media user demographics and pet owner demographics were contributing to posts across the age brackets reported in this study. net sentiment can range from −100% to +100%, making the net sentiment of 52% a clearly positive overall assessment as captured in the results over the time period in question. notable events impacting sentiment tended to be about animals themselves, such as harm to animals or risking animals' wellbeing during travel. career-oriented terms were largely within the hashtags, such as #hiring, which is also undeniably related to the domains in which online media appears (i.e., twitter). in a very small number of posts, at 1%, a top negative attribute was "die by suicide", which is an aspect of the profession that has received significant attention in the media and press [28]. risk factors among veterinarians, such as suicidal ideation, attempts at one's own life, and depression, have been cited in the literature, both in the u.s. and in other countries [29] [30] [31] [32] [33]. other challenges facing the veterinary health profession are well documented within the industry, including student debt loads [34], mental health implications arising from compassion fatigue [35], and the impact of performing euthanasia [36], in addition to lesser discussed but more obvious risks to (human) physical safety arising from working with animals [37]. while these challenges and issues are being raised in the media and in the literature around the world, veterinary suicides were not found in the searches conducted in this study. there was a general lack of top terms revealed for the veterinarian searches conducted that pertained to roles outside of caring for household pets and companion animals. roles identified for u.s.
veterinarians by the avma ranged from caring for animals directly, in animal shelters, pets kept in client's homes, and zoos/aquariums to working with the centers for disease control and prevention (cdc) and national institutes of health (nih) on issues pertinent to both human and animal health and wellbeing through one health initiatives [21] . online media, including user-generated posts such as those on social media platforms, is increasingly popular as a source of information and communication among consumers and members of the general public. in this analysis, the issues facing the veterinary medicine profession from within were, perhaps unsurprisingly, largely absent from the search results, which focused heavily on the pet owner or consumer/client side of veterinary medicine. search terms reveal a focus on pet animal welfare, illnesses, and care, with some hashtags pointing towards career or profession advancement. notably absent in the search results were references to roles embodying one health initiatives in which the links between the health of animals, humans, and the environment are explicit. undoubtedly the search terms selected directed searches to capture verbiage from veterinary clients, thus driving results towards the companion/pet animal aspects of veterinary medicine. however, within those results were career promotion hashtags and the recognition of suicide risks. it is possible that roles beyond client-animal-clinic relationships are simply relatively less discussed in online media, or that the searches conducted did not adequately target societal roles outside of direct pet caretaking. surely, the popularity of discussing pets in social media space contributes to the abundance of mentions related to direct pet animal caretaking. it is impossible to know if career promotion, mental health risks, and/or any of the other comments originated from outside or within the profession. 
in other words, it could be that social media posts from veterinary medical professionals are referencing these topics. while ongoing concerns and public health campaigns about tick-borne illnesses and rabies continued (and a focus on other higher profile risks like ebola emerged) during the study time period, the public-facing focus on public health was far less relevant than it is today in light of an ongoing coronavirus (covid-19) pandemic situation. perhaps the increased focus on zoonotic diseases in recent months and, in particular, in popular news media, may increase the attention and recognition of one health and the roles of veterinarians in public health. regardless of the degree to which search terms and/or the timing of data collection contributed to these findings, the relative lack of any mention of the roles of veterinarians in one health endeavors may prompt further research into the perceptions of veterinarians in the public domain [38]. in particular, future work may wish to broaden searches to determine if discussions of human medical needs may more directly reference veterinarian contributions [39] [40] [41]. nonetheless, veterinary health organizations and associations may consider the broader dissemination, when appropriate, of information about the diverse roles that veterinarians play in society.

references:
a new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication
consumer perceptions of using social media for health purposes: benefits and drawbacks
avma pet ownership and demographics sourcebook
gender changes and the future of our profession
2019-2020 appa national pet owners survey
internet/broadband fact sheet
social media fact sheet
what can veterinarians learn from studies of physician-patient communication about veterinarian-client-patient communication?
the preparedness and evacuation behaviour of pet owners in emergencies and natural disasters
effect of pets on human behavior and stress in disaster
the more people i meet, the more i like my dog: a study of pet-oriented social networks on the web
teaching veterinary professionalism in the face (book) of change
social networking-making it work for your practice
comparison of veterinarian and standardized client perceptions of communication during euthanasia discussions
veterinarian-client-patient communication patterns used during clinical appointments in companion animal practice
an expert lexicon approach to identifying english phrasal verbs
overview of natural language processing
protecting the health of animals and people
your career-rising professional-avma
my veterinary life
content features of tweets for effective communication during disasters: a media synchronicity theory perspective
social media in product development
what do we know about social media in tourism? a review
marketing meets web 2.0, social media, and creative consumers: implications for international marketing strategy
dog acquisition in a non-tradition market: a focus on adoption
effects of demographic factors and information sources on united states consumer perceptions of animal welfare
med beat: not all is warm and fuzzy for veterinarians
trends and patterns in suicide in england and wales
incidence of suicide in the veterinary profession in england and wales
high-risk occupations for suicide
risk factors for suicide, attitudes toward mental illness, and practice-related stressors among us veterinarians
suicide in australian veterinarians
the financial life of aspiring veterinarians
prevalence of mental health outcomes among canadian veterinarians
impacts of the process and decision-making around companion animal euthanasia on veterinary wellbeing
'no-one knows where you are': veterinary perceptions regarding safety and risk when alone and on-call
the evolution of one health: a decade of progress and challenges for the future
introducing one health to medical and veterinary students
veterinarians between the frontlines?! the concept of one health and three frames of health in veterinary medicine
beliefs, attitudes and self-efficacy of australian veterinary students regarding one health and zoonosis management

acknowledgments: this research was funded in part by the american veterinary medical association (avma) economics team. researchers retained complete autonomy from avma during data collection, analysis, and development of the resulting manuscript. the authors declare no conflict of interest. the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

key: cord-346194-l8svzjp2 authors: nazir, mehrab; hussain, iftikhar; tian, jian; akram, sabahat; mangenda tshiaba, sidney; mushtaq, shahrukh; shad, muhammad afzal title: a multidimensional model of public health approaches against covid-19 date: 2020-05-26 journal: int j environ res public health doi: 10.3390/ijerph17113780 sha: doc_id: 346194 cord_uid: l8svzjp2

covid-19 is emerging as one of the most fatal diseases in the world's history and has caused a global health emergency. therefore, this study was designed with the aim of addressing the issue of the public response against covid-19. the literature lacks studies on the social aspects of covid-19; the current study is therefore an attempt to investigate its social aspects and suggest a theoretical structural equation model to examine the associations between social media exposure, awareness, information exchange, and preventive behavior and to determine the indirect as well as direct impact of social media exposure on preventive behavior from the viewpoints of awareness and information exchange.
the current empirical investigation was conducted in pakistan, and survey data collected from 500 respondents through social media tools were used to examine the associations between the studied variables as stated in the proposed study model. the findings indicate that social media exposure has no significant direct effect on preventive behavior; rather, it influences preventive behavior indirectly through awareness and information exchange. in addition, awareness and information exchange have significant direct effects on preventive behavior. the findings are valuable for health administrators, governments, policymakers, and social scientists, especially those whose situations are like those in pakistan. this research shows how social media exposure indirectly affects preventive behavior concerning covid-19 and explains the paths of effect through awareness or information exchange. to the best of our knowledge, no work at present covers this gap; for this reason, the authors propose a new model. the conceptual model offers valuable information for policymakers and practitioners to enhance preventive behavior through the adoption of appropriate awareness, information exchange, and social media strategies. several patients with pneumonia of unknown cause were reported in mid-december 2019 in wuhan, hubei province, china [1]. after an investigation by the world health organization (who), the cause was identified as a new virus, and the associated disease was named covid-19; with time, it spread rapidly throughout china and other countries [2]. according to who, there are 2.6 million confirmed cases, 0.184 million deaths, and 0.722 million recoveries from 2019-ncov worldwide. as evidence of this spread, one such case was reported on 24 january 2020, involving a person who had come to pakistan from china via dubai on 21 january 2020.
on the 24th he was examined and found to be positive for covid-19. it was then that pakistan was exposed to the virus for the first time and became part of the affected countries globally. according to the government of pakistan's reports, the number of confirmed cases in pakistan today is 27,474 and the number of confirmed deaths is 618. punjab and sindh are the two most affected provinces. moreover, due to the lack of medical resources in developing nations, pakistan also faces challenges in preventing the spread of re-emerging and new infectious diseases. thus, during the outbreak of an infectious disease, these countries focus on alternatives to medical facilities to overcome its spread. awareness and accurate information bring behavioral changes among people; they can be perceived as half the treatment, without any expense. social media has become an important source for broadcasting awareness and information regarding the control of infectious disease [3]. according to [4], social media consists of different applications, including social networking sites and blogs, that are founded on the technological and ideological foundations of web 2.0 (for example, facebook, youtube, and twitter) and that allow users to create and share content and participate in different activities. social media itself is a catch-all expression for sites that may host various social activities. social media rests on electronically mediated platforms relying upon web-based innovations that permit users to create a profile and share ideas, images/clips, and information in virtual networks. even though many of the initially reported cases were linked to the seafood market in wuhan, according to the current epidemiologic information, this virus is spreading from one individual to another at a very high rate of transfer [5]. with time, 2019-ncov has infected almost all countries on the planet.
iafusco, ingenito [6] argued that in these critical circumstances, it has been very difficult for developing nations to communicate with uninfected individuals as well as infected persons because this virus can spread silently from one person to another. it has become complicated for governments and doctors to communicate with their citizens during an infectious disease outbreak; therefore, social networking sites are playing a critical role in enabling populations to connect virtually. some years ago, several disease outbreaks of a similar nature around the globe, for example ebola, zika, and dengue fever, revealed insights into the significant power of communication strategies concerning such diseases [7]. a scoping evaluation has examined the utilization of search queries in disease surveillance [8]. first reported in 2006, the reviewed literature highlighted accuracy, speed, and cost performance comparable to existing disease surveillance systems and recommended the use of social media programs to handle all circumstances of infectious disease systems. now, due to the advanced innovation of web 2.0, social networking sites (sns) have come to play an essential role in helping public health specialists control the spread of such infectious diseases. it has been observed that social networks perform excellent real-time data reporting, which keeps the state and the people posted on possible solutions for public health safety during epidemics [9]. with this recognizable increase in infectious outbreaks in pakistan, public health centers are facing severe problems and challenges in acting to prevent disease at various levels. due to a lack of time and resources, it has become complicated for nations to address these issues and challenges in a short time. a fear of physical spread hinders health sector workers from interacting with patients and suspected cases in person.
therefore, the response time of governments and health departments in tackling the signs and symptoms which lead to the detection of both infectious and noninfectious diseases, and their prevention, is affected by the updates and real-time reporting of social media. in this study, the researchers determined the effect of social media on preventive behavior among people regarding covid-19, and how individuals gain information and awareness knowledge through social media to control covid-19. the study also analyzed the direct effect of social determinants on preventive behavior under such conditions. the study is structured into five sections. after the introduction, the researchers present a thorough and critical analysis of the current and most relevant literature, along with hypothesis development. the third section contains the materials and methods used in this study to achieve the stated objectives. the next section presents the main results of the study and discussions related to these results. the final section presents the main conclusions, findings, and future research options. undeniably, online communication is used as an outlet for individuals to freely create and post data that is dispersed and spread worldwide, following the advanced foundation of web 2.0. news media, conventional scientific outlets, and online networking sites create a platform for minority perspectives and for individuals who are sometimes not captured by other sources. individuals seek information from an assortment of sources and continually update it from the health sector. conventional news media has become recognized as a comprehensive source of health information for preventing infectious disease in the public health sector. they provide information and awareness widely through social networking sites for reducing infectious diseases [10].
a few years ago, people did not have any direct communication channel to exchange information with the government and health sectors; at that time, traditional media, such as newspapers, television, and so forth, played a critical role as a source of information exchange with the public [11,12]. traditional media provided information about disease to the public regarding public health issues [13]. therefore, citizens relied on traditional media as a foundation of knowledge which helped them to understand critical risk conditions and receive precautions about the issues. however, after the advanced foundation of web 2.0, there was a rapid change in the use of media technology, and people have increased their social media usage by registering accounts on almost all social networks, for example, facebook, twitter, and youtube. this can also be seen in the increasing number of registered subscribers on all social web services exchanging their health information during any infectious disease outbreak [14]. unlike traditional media, which offered users only a limited amount of information to use and obtain, on social media people make profiles and share health-related information with others, comment on health-related posts, and can join public health-related groups [15]. for example, at the time of the h1n1 flu virus outbreak, people used social networking sites as an information exchange medium and gave opinions related to health [16,17]. however, with the fast uptake of social networking sites, information access has changed; now people do not rely exclusively on traditional or government news media, instead trusting sns to get essential information from the public health sector. for example, twitter was primarily utilized by the public for the exchange of experience, opinion, and knowledge among individuals during infectious disease outbreaks [11].
specifically, sns have become a common source for general society to exchange information when conventional news media offer very restricted information about an infectious disease outbreak due to official pressure and limitations [18]. in line with media dependency models, the public's reliance on media will, in general, escalate at the time of significant emergencies. when information is not promptly accessible from traditional news media, people, as information makers and disseminators themselves, assemble electronic methods such as social media for information exchange [19]. digital surveillance is an internet-based observation system that provides a current picture of public health problems by evaluating digitally stored data [20]. there are now numerous infectious disease surveillance practices in epidemiology by which the prevalence, outbreaks, and extent of infectious diseases are monitored to build up patterns of active actions and advancements for management and control systems. the fundamental role of infectious disease surveillance is to observe, forecast, and reduce the harm caused by outbreak and epidemic situations, as well as to enhance the information available to specialists and the population concerning factors which could possibly be used in such conditions [21]. reporting of outbreak occurrences has shifted from manual record-keeping systems to worldwide online communication networks through sns [22]. therefore, we can draw our first hypothesis as the following: there is a significant relationship between social media and preventive behavior among people regarding covid-19. it is critical for public health sectors and government agencies to take effective initiatives for the control of diseases; however, it is very difficult for developing countries to detect an infectious disease outbreak.
observational capacity for the detection of infectious diseases can be very costly, and developing countries lack the resources to measure an infectious disease outbreak at the time of exposure. hence, some social networking websites provide solutions to handle some of these challenges during an outbreak. online networking sites provide a source of information for detecting infectious outbreaks earlier at very low cost and provide a way to increase the clarity of reporting [23]. the exchange of health information on social networking sites has been seen as an opportunity for health sectors to increase public health observation [24] and to predict and control infectious diseases [25]. due to insufficient medical services in developing nations like pakistan, the authorities face severe difficulties in containing and eradicating the spread of such infectious diseases. consequently, in case of an emergency, such communities start practicing alternatives to medical facilities to control the spread. therefore, we can safely propose the following: there is a significant relationship between social media exposure and information exchange about covid-19 among people. information exchange mediates the relationship between social media exposure and preventive behavior among people regarding covid-19. awareness regarding control of an infectious disease can reduce the financial burden of precautions. earlier knowledge about the outbreak of a disease can reduce the extent of its spread [26]. several channels, like social media, internet access, tv, and so forth, can be used by nations to spread awareness about disease precautions. at present, social networking platforms are mostly being used as an important source for spreading emergency awareness to control an epidemic [27]. in the past, some studies have been conducted to evaluate the effect of social media on minimizing the spread of infectious disease through preventive behavior.
the results show that social media plays an essential role in reducing prevalence through prevention and in reducing infection spread through awareness [28]. awareness brings behavioral changes among communities. as the phrase states, "prevention is better than cure". such awareness may be considered half the treatment, without any expense. therefore, the researchers draw their next hypotheses as the following: there is a significant relationship between social media exposure and awareness knowledge. awareness knowledge mediates the relationship between social media exposure and preventive behavior among people regarding covid-19. many researchers have shown that during an infectious outbreak, socio-economic factors profoundly influence preventive behavior towards diseases. individuals with high income and education levels have been shown to engage more with social media for preventive measures [29-31]. furthermore, numerous studies have argued that older people followed better precautions by wearing a mask, using sanitizer, and keeping healthy respiratory hygiene [32,33]. likewise, studies of the relationships between social determinants and prevention behaviors have shown that females [34], individuals with high literacy [29], and older people [35] preferred to stay at home instead of visiting public places during an infectious period. however, according to research conducted in the uk during a swine flu outbreak, individuals who have a low literacy rate/income level or are unemployed avoided using public transport and visiting crowded places, in comparison with individuals who have a high level of social determinants [33]. so, we can formulate the following hypotheses: there is no significant relationship between high-income individuals and preventive behaviors among people regarding covid-19. there is no significant relationship between aged individuals and preventive behaviors among people regarding covid-19.
there is no significant relationship between the gender of individuals and preventive behaviors among people regarding covid-19. there is no significant relationship between highly educated individuals and preventive behavior among people regarding covid-19. research methodology, the principles and techniques used for gathering and analyzing data, plays an essential role in achieving the objectives of the study. this section presents the overall data sampling, research design, and data collection method used to meet the objectives of the current study. an online survey was conducted by the researchers in march 2020, during the covid-19 outbreak; a link to the structured questionnaire was developed and shared with participants through social media tools such as facebook, twitter, whatsapp, and email. the intention behind selecting online data collection using social media tools was to maintain the social distancing principle. this research is based on individuals from different geographical areas of punjab and azad jammu and kashmir, pakistan. the study was conducted on social media from 5 march 2020 to 25 march 2020. a sample of 500 respondents was obtained through a random sampling method and examined with spss amos. the main reason for choosing this sampling method was that the researchers placed the questionnaire online. a 5-point likert scale was used, and the constructs were measured using a scale adapted from [36]. social determinants are considered very important in social science research, and these were measured to check the direct effect of these control variables on preventive behavior among people regarding covid-19. the essential statistical components are age, gender, education, and income. these demographic components were necessary for the assessment of our objectives. the details of these variables are given in table 1.
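the paper does not report a sampling-error calculation, but the precision implied by a simple random sample of 500 respondents can be sketched with the standard worst-case margin-of-error formula for a proportion. the function name and the 95% z-value of 1.96 below are our own illustration, not from the study:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for an estimated proportion from a
    simple random sample of size n, at confidence level implied by z."""
    return z * math.sqrt(p * (1 - p) / n)

# for n = 500 the estimate is about +/-4.4 percentage points at 95% confidence
print(f"n=500 -> +/-{margin_of_error(500):.1%}")
```

p = 0.5 is the conservative choice: it maximizes p(1 - p), so the reported margin holds for any proportion the survey might estimate.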
the proposed model and variables investigated in this study are demonstrated in figure 1. the sem technique was performed to examine the hypotheses discussed above; tables 2-4 show the key results. the researchers used path models to check the impact of social media on the mediating variables, that is, information exchange and awareness knowledge, with preventive behavior among people regarding covid-19 as the dependent variable, and checked the direct effect of the control variables on preventive behavior. additionally, path analysis and the maximum likelihood method were used to verify the mediated impact of health communication (awareness knowledge and information exchange) between social media exposure and preventive behavior. amos version 24 was used to check the statistical relationships between variables. initially, we tested the model fit index with the comparative fit index (cfi) and the root mean square error of approximation (rmsea); a cfi ≥ 0.97 and an rmsea ≤ 0.055 indicate that the fit is acceptable (hu and bentler, 1999). the indirect effect of social media on behaviors was calculated with the same statistical tool using 2000 bootstrap samples.
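the bootstrap procedure mentioned above (2000 resamples of the indirect effect) can be sketched in plain python. everything below is a hypothetical illustration of the product-of-coefficients logic on simulated data, not the authors' amos analysis: the path coefficients (0.5, 0.4), sample, and `indirect_effect` helper are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated full-mediation data: X -> M -> Y, no direct X -> Y path
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)   # a-path (X -> M)
y = 0.4 * m + rng.normal(size=n)   # b-path (M -> Y)

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b from two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]  # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M, controlling for X
    return a * b

# percentile bootstrap with 2000 resamples, mirroring the paper's procedure
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(x, m, y):.3f}, 95% ci [{lo:.3f}, {hi:.3f}]")
```

a 95% percentile interval that excludes zero is the usual bootstrap criterion for a significant indirect effect; here the true indirect effect is 0.5 x 0.4 = 0.2 by construction.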
confirmatory factor analysis (cfa) was used to test the discriminant and convergent validity of every construct in the measurement model. we also checked the factor loading of each item, and all items exceeded the threshold of 0.6 (p < 0.001). the values of ave ranged from 0.55 to 0.79 (all exceeding the threshold of 0.5), and cr ranged from 0.82 to 0.92 (all exceeding the threshold of 0.7). according to the parameter estimation results in table 4, the direct impact of social media exposure on preventive behaviors concerning h1 (β = −0.097, p < 0.001) showed an insignificant direct relationship between the independent variable and the dependent variable. therefore, h1 was not supported. according to the results for h4 (β = 0.389, p < 0.01) and h2 (β = 0.377, p < 0.01), both showed a significant direct effect of social media on awareness knowledge and information exchange, so we accepted these two hypotheses. therefore, we can say that social networking sites have been used as an important strategy to spread awareness and information at the time of an emergency to control the covid-19 outbreak. health communication via social media positively and significantly influenced awareness and information exchange and indirectly influenced the adoption of preventive healthcare behavior. h6, h7, h8, and h9 tested whether age, gender, income, and education would be insignificantly associated with preventive behaviors. the parameter estimates showed that h9 education (β = 0.106, p < 0.01), h7 age (β = −0.052, p < 0.01), h8 gender (β = 0.041, p < 0.01), and h6 income (β = 0.023, p < 0.01) have insignificant relationships with preventive behaviors. all these control-variable hypotheses were supported, in line with our theory. it is not necessarily individuals with high literacy/income and older people who avoid using public transport and crowded places. the direct effects of these social determinants on preventive behavior to control the covid-19 epidemic were insignificant.
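the ave and cr thresholds cited above (0.5 and 0.7) come from standard formulas over standardized factor loadings: ave is the mean squared loading, and cr compares the squared sum of loadings against that plus the summed error variances. a minimal sketch, where the four loadings are hypothetical and not the study's:

```python
def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from the standardized factor loadings of one construct's items."""
    lam2 = [l * l for l in loadings]
    ave = sum(lam2) / len(loadings)          # mean squared loading
    s = sum(loadings)
    cr = s * s / (s * s + sum(1 - l2 for l2 in lam2))  # error variance = 1 - loading^2
    return ave, cr

# hypothetical loadings for a four-item construct
ave, cr = ave_and_cr([0.78, 0.81, 0.72, 0.69])
print(round(ave, 2), round(cr, 2))  # -> 0.56 0.84 (AVE > 0.5, CR > 0.7: acceptable)
```

with these loadings the construct would pass both convergent-validity checks, matching the pattern the paper reports (ave 0.55-0.79, cr 0.82-0.92).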
according to the study findings, individuals of every type can benefit from social media campaigns promoting preventive behavior against covid-19. h5 and h3 tested whether awareness knowledge and information exchange directly influenced preventive behavior during an infectious disease outbreak like covid-19. the estimated parameters in table 2 illustrate that awareness knowledge (β = 0.454, p < 0.001) and information exchange (β = 0.199, p < 0.001) have a positive, significant direct relation with preventive behavior and a full mediating effect between social media and preventive behavior, as illustrated in tables 2 and 3 and figure 2. social media provides the possibility for individuals to become aware of private or public awareness campaigns. eke [37] supported this theory, finding that public awareness affects individual behavior during an infectious disease outbreak and helps control its spread. our study showed that public or private awareness through social media can curb the spread of infectious disease. the connectivity between the constructed hypotheses of our theory test is shown in table 4, table 5, and figure 2. according to the results, no direct relationship exists between social media exposure and preventive behavior; however, awareness knowledge and information exchange create a mediating effect between social media exposure and preventive behavior, so there exists a strong relationship between social media exposure and preventive behavior with the full mediation of awareness knowledge and information exchange. in conclusion, the covid-19 outbreak that began in china significantly damaged the human population across the globe. this included widespread distrust, a high number of deaths, high public stress, and economic damage. this study analyzed the effect of social media on preventive behavior during the covid-19 outbreak in pakistan.
firstly, it should be noted that social media has become an increasingly popular source of awareness and information for health communication, especially during an outbreak. the data were collected and analyzed as the outbreak started in pakistan in 2020. this study examined how social media plays an essential role in shaping preventive behavior during the covid-19 outbreak in pakistan. the results of this research revealed that social media exposure is associated with two relevant variables, awareness knowledge and information exchange, and that these variables mediate the relationship between social media exposure and preventive behavior among people regarding covid-19. social media reinforces and enhances health-related communication by raising awareness campaigns and disseminating reliable information to users in an emergency regarding preventive behaviors. social media has become a source of rapid information and can be updated promptly.
if the utilization of social media becomes more accurate or scientific, then social media can provide a very efficient and user-friendly way of monitoring the facts and figures of an epidemic both locally and at an international level. the use of social media as a communication tool during an infectious disease outbreak is a new method of observation, but it provides a potential source of accurate and quick assessment of the progression of the current condition of disease within communities. social media has also become the most accessible and valuable tool, particularly in a socio-economic and climatic context [36]. mostly, developing nations like pakistan do not have the capacity to maintain and control a surveillance system in a timely manner during an infectious disease outbreak. therefore, due to a lack of resources, most developing nations use social media networks as health communication tools to prevent and control the spread of infectious disease in a community [37]. thus, social media can afford a fast method of surveillance that forecasts the real-time burden of infectious disease and hence can also guide preventive strategies to control the epidemic. the study has some limitations, as only data from pakistan were collected. therefore, the results may not be easily generalized to other developing countries, but they are useful for politicians, health administrators, governments, policymakers, and social scientists, especially those whose circumstances are like those in pakistan. the conceptual structural equation model provides useful information for policymakers and practitioners to enhance preventive behavior through the adoption of appropriate awareness strategies, information exchange, and social media policies. the study demonstrates how social media exposure indirectly impacts preventive behavior and illustrates the paths of influence through either awareness or information exchange.
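the "fast method of surveillance" described above boils down, in its simplest form, to counting how many posts per day mention tracked symptoms or disease terms. a toy sketch of that idea (the posts, keywords, and function name are hypothetical illustrations, not from the paper):

```python
from collections import Counter
from datetime import date

def daily_mention_counts(posts, keywords):
    """Count posts per day that mention any tracked keyword (case-insensitive).

    `posts` is an iterable of (date, text) pairs; returns {date: count},
    which can serve as a crude real-time surveillance signal.
    """
    kw = [k.lower() for k in keywords]
    counts = Counter()
    for day, text in posts:
        lowered = text.lower()
        if any(k in lowered for k in kw):
            counts[day] += 1
    return dict(counts)

posts = [
    (date(2020, 3, 5), "Three new fever cases reported in our town"),
    (date(2020, 3, 5), "Nice weather today"),
    (date(2020, 3, 6), "Cough and fever everywhere, stay home"),
]
print(daily_mention_counts(posts, ["fever", "cough"]))
```

a real system would need deduplication, location inference, and rumor filtering; the point here is only that the raw signal is cheap to compute, which is why the paper argues it suits resource-constrained settings.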
to the best of our knowledge, this study is probably the first in the concerned area. the study investigated how only some social variables can help prevent covid-19. future researchers can investigate other variables in the category of the social sciences and their role in dealing with covid-19 and its impacts. future studies can also focus on specific sectors, such as health workers, education, police, and other security agencies. the authors declare no conflict of interest. the effect of control strategies to reduce social mixing on outcomes of the covid-19 epidemic in wuhan, china: a modelling study a novel covid-19 from patients with pneumonia in china modeling the role of information and limited optimal treatment on disease prevalence users of the world, unite! the challenges and opportunities of social media the continuing 2019-ncov epidemic threat of novel covid-19es to global health-the latest 2019 novel covid-19 outbreak in wuhan, china the chatline as a communication and educational tool in adolescents with insulin-dependent diabetes scoping review on search queries and social media for disease surveillance: a chronology of innovation social media and internet-based data in global systems for public health surveillance: a systematic review communicating health information to an alarmed public facing a threat such as a bioterrorist attack pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak reporting a potential pandemic: a risk-related assessment of avian influenza coverage in us newspapers the influence of outrage factors on journalists' gatekeeping of health risks health information seeking in the web 2.0 age: trust in social media, uncertainty reduction, and self-disclosure the social life of health information news coverage of health-related issues and its impacts on perceptions: taiwan as an example swine flu as social media epidemic: cdc tweets calmly online a dependency model of mass-media effects media
dependencies in a changing media environment: the case of the 2003 sars epidemic in china the potential of social media and internet-based data in preventing and fighting infectious diseases: from internet to twitter. in emerging and re-emerging viral infections global infectious disease surveillance and detection: assessing the challenges. workshop summary digital disease detection-harnessing the web for public health surveillance early detection of disease outbreaks using the internet social media in public health internet-based surveillance systems for monitoring emerging infectious diseases stability analysis and optimal control of an epidemic model with awareness programs by media media impact switching surface during an infectious disease outbreak modeling the impact of twitter on influenza epidemics anticipated and current preventive behaviors in response to an anticipated human-to-human h5n1 epidemic in the hong kong chinese general population monitoring community responses to the sars epidemic in hong kong: from day 10 to day 62 the influence of social-cognitive factors on personal hygiene practices to protect against influenzas: using modelling to compare avian a/h5n1 and 2009 pandemic a/h1n1 influenzas in hong kong the impact of community psychological responses on outbreak control for severe acute respiratory syndrome in hong kong public perceptions, anxiety, and behaviour change in relation to the swine flu outbreak: cross sectional telephone survey early assessment of anxiety and behavioral response to novel swine-origin influenza a (h1n1) perceptions related to human avian influenza and their associations with anticipated psychological and behavioral responses at the onset of outbreak in the hong kong chinese general population endemic disease, awareness, and local behavioural response evaluation of internet-based dengue query data: google dengue trends using social media for research and public health surveillance key: cord-288159-rzqlmgb1 authors: 
marin, lavinia title: three contextual dimensions of information on social media: lessons learned from the covid-19 infodemic date: 2020-08-26 journal: ethics inf technol doi: 10.1007/s10676-020-09550-2 sha: doc_id: 288159 cord_uid: rzqlmgb1 the covid-19 pandemic has been accompanied on social media by an explosion of information disorders such as inaccurate, misleading and irrelevant information. countermeasures adopted thus far to curb these informational disorders have had limited success because they did not account for the diversity of informational contexts on social media, focusing instead almost exclusively on curating the factual content of users' posts. however, content-focused measures do not address the primary causes of the infodemic itself, namely the user's need to post content as a way of making sense of the situation and of gathering reactions of consensus from friends. this paper describes three types of informational context (weak epistemic, strong normative and strong emotional) which have not yet been taken into account by current measures to curb the informational disorders. i show how these contexts are related to the infodemic and propose measures for dealing with them in future global crisis situations. from the beginning, the pandemic was reflected in the online realm of social media by a flood of redundant information (the so-called infodemic), of which a significant percentage was misinformation and disinformation (mdi). mdi is used here as an umbrella term for two distinct phenomena: misinformation, usually defined as false information shared without knowledge that it is false, and disinformation, fabricated information distributed with the clear intention to mislead (fallis 2014, p. 136). however, on social media these distinctions are hard to maintain sharply, since we cannot always know who created a piece of misleading information and for what purpose.
the infodemic is understood as "an overabundance of information - some accurate and some not - that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (world health organization 2020). this concept is similar to that of information pollution, defined as "irrelevant, redundant, unsolicited and low-value information" (wardle and derakhshan 2018, p. 10), but applied to a crisis situation in which an infodemic can become dangerous (tangcharoensathien et al. 2020, p. 2). since february 2020, the world health organization had already singled out the infodemic as an emerging problem in the context of the covid-19 pandemic. one of the most problematic aspects of an infodemic is that it creates information overload, which leads to information fatigue in online users: the user's capacity for paying attention to information is limited and quickly exhausted. studies in the psychology of social media have shown that, under conditions of informational overload, users revert to mental shortcuts or heuristics for assessing new information. by employing cognitive heuristics, social media users tend to rely on their friends and their endorsements in selecting what information to trust or engage with (koroleva et al. 2010, p. 5). taking these mental shortcuts occurs "in conditions of low motivation and limited ability to process the incoming information" (koroleva et al. 2010, p. 5), which is arguably the case when dealing with information overload. trying to save their mental energy, users tend either to delegate their critical engagement to their social network, trusting their peers, or to disengage from the news and revert to apathy. both strategies are dangerous because social media users are also citizens who are instrumental in the efforts to curb the pandemic. if citizens do not correctly understand what they have to do in a pandemic situation, then governmental measures will be ineffective.
online information about the covid-19 pandemic posted on social media displayed two seemingly distinct problems: the rapid propagation of misinformation and disinformation (mdi) and the so-called infodemic. both problems concern how information travels in an online social networking medium, but only one of them was tackled with some degree of efficiency. social media platforms rapidly stepped up their pre-existing measures for dealing with mdi and specifically targeted covid-19 related misleading information, whereas the infodemic remained untouched (howard 2020). the infodemic was seen as a side-effect of the intensification of user interactions on social media, some informational noise accompanying the humming of online communications. in this article, i will argue that both informational problems are related to and symptomatic of a deeper problem embedded in social media: the contextual design of the user's interactions with information. while we certainly need to pay attention to the quality of the informational content (floridi and illari 2014) distributed on social media, similar attention should be paid to the quality of the user's interaction with the informational context. measures focusing solely on the factual content of information distributed online risk ignoring a significant aspect of social media: its particular context of engaging with information. the paper is structured as follows: first, i briefly review the measures taken by social media platforms to deal with covid-19 related mdi, classifying them by their content or context focus. secondly, i describe three dimensions of the informational context on social media which made mdi particularly difficult to deal with during the pandemic and which, by extension, aggravated the infodemic. finally, drawing on the contextual approach to information, i propose some measures to tackle informational disorders online in future similar global crisis situations.
mdi related to covid-19 was tackled visibly by most mainstream social media platforms. this was possible because some methods for dealing with mdi were already in place. starting with the 2016 elections and the cambridge analytica scandal, social media platforms began paying attention to the mdi shared by their users. in recent years, social media platforms have been testing methods of content curation that use external fact-checking organisations, flagging misinforming content and sometimes removing it (howard 2020). in the wake of the covid-19 pandemic, these fact-checking efforts were accelerated to an impressive extent: "the number of english-language fact-checks rose more than 900% from january to march" (brennen et al. 2020). this concentration of effort from disparate organisations was motivated by the emergency of the situation, but also by the topic of covid-19 itself, which was easier to fact-check: it was more or less clear what was false content and what was not, as opposed to previous cases of political mdi (brennen et al. 2020). table 1 below summarises the main measures taken by social media platforms for dealing with mdi during the pandemic. most approaches listed in the table were primarily content-focused, as they targeted the factual or descriptive content of the information by checking it against existing evidence and reducing its visibility. however, the context of mdi is just as important as its content (wardle and derakhshan 2018). contextual mdi may appear when genuine information is placed in a fabricated setting: for example, a private statement cited as if it were public, a personal opinion presented as the views of an organisation, facts mixed with irrelevant comments, or a changed date or place for a photo.
much of the covid-19 related mdi shared on social media was contextual or, as some have put it, reconfigured: "most (59%) of the misinformation in our sample involves various forms of reconfiguration, where existing and often true information is spun, twisted, recontextualised, or reworked. less misinformation (38%) was completely fabricated" (brennen et al. 2020). even in those measures which combined content with context awareness (numbers 3, 5, 6 and 7 in table 1), content was the primary focus. these measures relied on someone first checking which information was good enough and then proposing contextual measures. content-focused approaches are not misguided, but they tend to give most of the agency to the social media platform, while users are left with a passive role: to click on and react to what they are shown. meanwhile, when a measure is both content- and context-focused, the user's agency starts to play a significant role. no matter how many alternative sources of information one is exposed to (as is the case with measures 5, 6 and 7 in table 1), the user still has the final choice of whether to engage with them. by contrast, a context-focused approach presupposes that the user has the freedom to use the social media platform in a way that stimulates and recognises other kinds of engagement with information, regardless of whether the information at hand is genuine or mdi. a context-focused approach creates informational environments which accommodate the possibility that users may engage with and interpret the same information differently. this approach aims to make users more sensitive to the modes in which information is presented to them, educating them in the long run. this notion of context-sensitivity was inspired by similar constructs in human-computer interaction studies, such as "contextual design" (wixon et al. 1990) and "context-aware computing" (schilit et al.
1994). context sensitivity in design starts from the idea that the user's perspective is not predictable and that there is no single context of use; it accepts that users make sense of an interface in various ways, and proposes that the designer accommodate multiple meanings and complex interactions from the start. in the long run, a contextual approach can educate users without trying to change their beliefs or attitudes about the informational content as such. the contextual information approach is inspired by value sensitive design (friedman 1996; van den hoven 2007), complemented by attention to three particular dimensions of the informational context on social media, described in the next section.
the strong emotional context
before the pandemic, it was already shown that mdi propagates on social media platforms by playing on the emotional reactions of the online audience (zollo et al. 2015; khaldarova and pantti 2016), aiming to deliberately stir powerful emotions in readers. some researchers called this feature of mdi a form of "empathic optimisation" (bakir and mcstay 2018, p. 155). emotional manipulation in news items (especially click-bait) is an efficient way of capturing users' attention, since emotion-stirring news items attract more interaction than neutral ones (bakir and mcstay 2018, p. 155). it may seem that only mdi is emotionally loaded, whereas genuine news sounds more sober and neutral, but this would be a misleading view of how information travels on social media platforms. emotional reactions do not belong to misleading information alone; rather, they are a normal side-effect of the emotional infrastructure already embedded in most social media platforms.
users of social media platforms are allowed a palette of actions and reactions: some are seemingly neutral (commenting, sharing and posting), while others have a clear emotional valence: liking and using other emoticons to endorse or dislike a post. these emotionally charged reactions are easier to perform than the neutral ones: it takes a split second to click like on a post, but somewhat more time to comment on it (fig. 1). the assumption was that now, more than ever, users needed to express emotions online with a richer palette. however, the simplistic way of expressing such emotions did not change; it was part of the interaction design from the beginning. the emotional infrastructure of social media was not something requested by users but designed in from the start. major social media platforms are oriented towards maximising user engagement, i.e. how much time one spends on a specific platform and how much attention is consumed (whiting and williams 2013). these kinds of interactions actively promote the user's "attention bulimia", i.e. behaviour oriented towards "maximizing the number of likes" (del vicario et al. 2016, p. 1) and presumably other positive reactions. most buttons for emotional reactions express positive emotions (like, love, hug, laugh), while in recent years facebook added some more nuanced emotions such as angry, sad or cry. but the overwhelming effect of these emotional reactions is to make other users feel liked by their social network and, hence, to make the platform a place to which one wants to keep returning for emotional gratification. for many users confined to their homes by the pandemic, social media platforms became a window to the world, as the television set used to be in older days, and the easiest way of relating to others. in such times of distress and uncertainty, users posted more frequently than usual (cinelli et al.
2020), but some of the information posted was not meant to inform others; rather, it expressed one's concerns and emotions related to the pandemic. posts were also meant to get reactions from one's friends, in an attempt to confirm that others were feeling the same way. posting about the pandemic became a strategic way of gauging others' emotions about the crisis situation and gathering a feeling of consensus from one's social network. the consensus sought on social media was of an emotional nature, which may be at odds with an epistemic consensus about the nature of the facts at hand. during the pandemic, several epistemologists and philosophers of science stepped up and tried to educate the general public on which sources to trust as experts and how to discern facts from fiction about the pandemic, in podcasts, opinion pieces and on their social media accounts (weinberg 2020). while this effort is laudable, it needs to be complemented with another approach that takes into account the wider epistemic context in which information travels on social media. this is a particularly weak epistemic context, in which information is not always shared to inform. social media platforms are not places where one usually goes to be informed. at least in regular, day-to-day situations, users turn to social media platforms to relate, to communicate and to be entertained (fuchs 2014). the weak epistemic context of social media is ruled by serendipity (reviglio 2017), meaning that many users get informed by accident. in a crisis situation, users tend to change how they use the platform, shifting towards the communication of vital information such as imminent risks or their location, and seeking the latest developments from people at the local site of the disaster. the entertainment function tends to become secondary in emergencies (zeng et al. 2016).
in the 2020 pandemic situation, the difference was that the crisis was global and its duration rather long. this time, the uncertainty that accompanies a crisis situation extended over months. as epistemic agents, online users tried to make sense of what was going on with them, what they could expect, and how to assess their personal risks over a longer period. the pandemic was an extended crisis situation compounded by social alienation. this made users feel lost and overwhelmed by problems they could not understand; hence the desire, legitimate up to a point, for everyone to be an expert, so that they could at least understand what was happening to them. people did not want to become experts in epidemiology, quarantine measures and home remedies for viruses because of a sudden surge of intellectual curiosity; they needed a way of coping that was also understandable to them. meanwhile, the official discourse of "trust the experts" and "please don't share information you do not understand" incapacitated them as epistemic agents. requiring users to do nothing and just comply went against the general desire to do something as a way of taking control. given the increase in posts on the pandemic by regular users, it may seem that many tried to become experts overnight in epidemiology, viruses and vaccines. the comic below (fig. 2) illustrates a frequent situation during the pandemic: members of the lay public hijacking the role of the expert. a discussion of the conditions of trust and expertise makes sense in a regular epistemic context, when agents try to acquire knowledge about a domain they know nothing about and must choose which experts to trust (goldman and o'connor 2019). however, acquiring knowledge was probably not the main goal of social media users who started posting scientific information they did not understand. rather, many users tried to build some understanding of the situation, to make sense of the events.
in these cases, shared understanding within a circle or network of friends seemed to be more important for users than gaining access to expertise. scientific information was posted by lay users to back up personal opinions, to urge a certain course of action, or to gather consensus. given these weak epistemic uses of information, targeted at emotional fulfilment and networking purposes, the regular content-focused measures would probably be less effective on such users than predicted. related to the previous point and stemming from it, most factual information shared on social media had normative implications which often overshadowed any knowledge claim. descriptive information was used for prescriptive or evaluative aims. scientific expertise was strategically co-opted to reinforce pre-existing evaluative opinions. typical mdi claims are not merely descriptive claims about a state of affairs in the world, but are often embedded in a normative context; be they prescriptive or evaluative claims, both types are meant to change the attitudes of online users. mdi was shared because it prescribed actions or led to evaluations of the state of affairs with which users already agreed. hence debunking the facts would have solved only half of the puzzle, since the user's motivation to believe these normative claims would not have been dealt with. one example of the strong normative context for mdi, also involving a clear "politicisation" of mdi during the pandemic (howard 2020), concerns one of the most popular types of claims analysed by eu vs disinfo (2020), in which the eu was depicted as powerless and scattered in dealing with the pandemic. this claim was traced back to russian-backed agencies which aimed to make users believe that, ultimately, russia was stronger than the eu (howard 2020). such claims can be debunked by showing that there were coordinated measures taken by the eu; however, the implicit claim that other states dealt better with the pandemic than the eu is hard to debunk since it is not explicitly stated.
[fig. 2: everyone is an expert. image source: https://xkcd.com/2300/]
this is just one type of difficulty with mdi which cannot be tackled with a content-focused approach: implicit evaluative claims in which one term of the comparison is not named. some evaluative claims can be checked (if they involve measurable relational predicates such as "better than" or "more efficient than"), but others, incidentally the politicised evaluative claims, are harder to assess. in the russian-backed claims against the eu, the name "russia" is not mentioned anywhere in the text of the "news", since the aim is to erode eu citizens' trust in the eu. if these citizens happen to be in eastern europe, this erosion of trust could lead to a generalised anti-eu feeling and, ultimately, to bottom-up pressure to exit the eu. these kinds of campaigns cannot be easily fact-checked, since the effect is achieved by playing the long game. what looks like news about the pandemic is a dog-whistle about something else. the strong normative context is also visible when scientific expertise is co-opted to back up prescriptive claims otherwise untenable. one example is an unpublished paper by blocken et al. titled "towards aerodynamically equivalent covid19 1.5 m social distancing for walking and running" (2020), in which an animated image showed a simulation of how coughing joggers will spread droplet particles when running at a distance of 1.5 m from each other. the paper went viral on social media despite not being peer-reviewed nor published on a pre-print website.
the visual animation showing the spread of droplets was understandable by every lay member of the public, without any specialised knowledge of aerodynamics, which presumably made the paper so popular among non-scientists, who used it to make prescriptive claims. while the authors hypothesised that it might be unsafe to run close to another person, and that even a 1.5 m distance might not be enough for jogging, the social media audience took this as a reason to shame the runners in their neighbourhoods (koebler 2020). meanwhile, the first author of the study posted a document on his website answering certain questions about the study and refused to draw any epidemiological conclusions, deferring to others' expertise. but social media users did not shy away from becoming experts and drawing the conclusions themselves, as the information in the blocken et al. paper was just ammunition in a larger informational battle about what others should do. even if the scientific claims of regular users are checked, their aim remains to prescribe actions for others and to evaluate the world in a way that will be endorsed by one's community of friends. for these purposes, other pieces of news will be co-opted if the first ones are flagged as hoaxes. content-based approaches are thus ineffective against this strong desire of social media users to make evaluative or prescriptive claims about the world and to strategically use science-looking sources to back them up. one should address the very desire of regular users to evaluate the world from the little soap-boxes that social media affords. the three dimensions of the informational context on social media previously mentioned (strong emotional, strong normative and weak epistemic) have been analytically distinguished, but they function simultaneously to promote certain user behaviours which one could call irresponsible information sharing on social media.
the weak epistemic context reinforces the strong normative claims, which are coupled with emotional reactions, leading users to become strongly attached to their claims and indifferent to their debunking. while the three dimensions of the informational context work together to produce a perfect storm of low-quality and redundant information, an infodemic, one could still try to design modes of interaction to deal with each of them. future measures dealing with mdi and the infodemic in a global pandemic need to also tackle the high emotional load of most mdi items. a practical way of flagging this could be modelled on facebook's "hoax alert" system, which warns users that a certain post they are reading has been fact-checked and is probably a hoax. a similar system could be an "emotional alert" flag which appears below certain news-looking items that contain an unusually high number of emotionally triggering words. this kind of alert would show users when certain news-like items are leading them to feel something quite specific. readers would still be free to engage with such items, but at least they would be warned about possible emotional manipulation. such an alert would also flag click-bait and irrelevant news items which are not false in themselves but which do contribute to the infodemic because of their high virality and potential for polarisation. the weak epistemic context needs to be tackled together with the strong normative one. when users engage with news items relating to the pandemic (or any other crisis situation), they can be shown alternative news and pages or expert users to follow. this has already been implemented, but with little success, as it needs to be complemented by more context-aware measures. a new measure could be to try to increase users' critical engagement with a certain class of news items.
this could happen by posting a small survey under tricky news items, asking users to answer the following questions: "what does this news lead me to believe?", "what does this post ask me to feel?", "please rate how strongly you agreed with these claims before reading this post". finally, after answering this mini-survey, the user could be shown a disclaimer stating, "now that you've read posts you agreed with, how about trying something different?", followed by alternative sources with diverse information from experts, displayed after the user has been primed to be more critical and to diversify their information sources. to incentivise users to fill in these surveys, they could receive bonus points on the social media platform and, once they accumulate a certain number of points, a badge next to their name, such as "critical media user" or "critical thinker", which would make their informational skills visible. these sample measures i have proposed, and presumably other context-sensitive measures, are fit to be implemented only in crisis situations and only targeting users' posts about the current crisis. it is, of course, possible to integrate these measures into day-to-day user interaction. however, social media fulfils certain deep emotional needs of users, such as expressing strong normative opinions and seeking emotional consensus. if these contexts are strongly discouraged, users may find other platforms on which to do so and may abandon the overly critical platforms. meanwhile, in a crisis situation, these dimensions could be targeted specifically and confined to users who share mdi and redundant information about the crisis at hand. this paper has outlined three types of informational context which have not yet been taken into account by current measures to curb disinformation and the amount of irrelevant information on social media platforms.
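the "emotional alert" flag sketched above could be prototyped as a simple lexicon-based heuristic. the following is a minimal sketch; the trigger lexicon, the density threshold and the function name are illustrative assumptions rather than part of the paper's proposal, and a real system would use a validated emotion lexicon and a tuned threshold.

```python
# illustrative sketch of an "emotional alert" heuristic: flag a post when
# the share of emotion-trigger words exceeds a threshold. the lexicon and
# threshold below are assumptions for demonstration only.
import re

EMOTION_TRIGGERS = {
    "outrage", "shocking", "terrifying", "disaster", "miracle",
    "unbelievable", "fury", "panic", "horror", "disgusting",
}

def emotional_alert(post: str, threshold: float = 0.08) -> bool:
    """Return True when the density of emotion-trigger words is above threshold."""
    words = re.findall(r"[a-z']+", post.lower())
    if not words:
        return False
    hits = sum(1 for w in words if w in EMOTION_TRIGGERS)
    return hits / len(words) >= threshold

print(emotional_alert("shocking new report: panic and outrage as disaster unfolds"))  # True
print(emotional_alert("the ministry published updated guidance on distancing today"))  # False
```

a deployed version would, as the text notes, only warn the reader rather than hide the item, leaving the decision to engage with the post to the user.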
mdi countermeasures taken thus far have had a limited effect because they did not account for the diversity of informational contexts on social media. these measures do not address the primary causes of the infodemic itself, namely the user's need to post content as a way of making sense of the situation and of expressing opinions as a way of gathering consensus reactions. in light of these considerations, we need a value-sensitive design re-assessment of how users interact on social media, among themselves as well as with the information found online, in crisis situations. designing for thoughtful, critical and meaningful user interaction should become an explicit aim for the future of social media platforms in times of pandemics and other global emergencies.
references:
- fake news and the economy of emotions
- towards aerodynamically equivalent covid19 1.5 m social distancing for walking and running. questions and answers. website bert blocken, eindhoven university of technology (the netherlands) and ku leuven (belgium)
- retrieved from https://reutersinstitute.politics.ox.ac.uk/types-sources-and-claims-covid-19-misinformation
- the covid-19 social media infodemic
- echo chambers: emotional contagion and group polarization on facebook
- throwing coronavirus disinfo at the wall to see what sticks. eu vs disinfo
- the varieties of disinformation
- the philosophy of information quality
- value-sensitive design
- social media: a critical introduction
- social epistemology
- misinformation and the coronavirus resistance
- fake news
- the viral 'study' about runners spreading coronavirus is not actually a study
- stop spamming me! exploring information overload on facebook
- the covid-19 'infodemic': a new front for information professionals
- serendipity by design? how to turn from diversity exposure to diversity experience to face filter bubbles in social media
- context-aware computing applications
- the next phase in fighting misinformation. facebook newsroom
- framework for managing the covid-19 infodemic: methods and results of an online, crowdsourced who technical consultation
- ict and value sensitive design. the information society: innovation, legitimacy, ethics and democracy in honor of professor jacques berleur sj
- thinking about 'information disorder': formats of misinformation, disinformation, and malinformation
- the role of philosophy & philosophers in the coronavirus pandemic. daily nous
- why people use social media: a uses and gratifications approach
- contextual design: an emergent view of system design
- novel coronavirus (2019-ncov): situation report-13. who
- rumors at the speed of light? modeling the rate of rumor transmission during crisis
- emotional dynamics in the age of misinformation
the author would like to thank samantha marie copeland for commenting on an earlier draft of this article, and the two reviewers for their insightful comments on the first version of the article. funding: this project has received funding from the european union's horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement number 707404. the opinions expressed in this document reflect only the author's view. the european commission is not responsible for any use that may be made of the information it contains. open access: this article is licensed under a creative commons attribution 4.0 international license, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons licence, and indicate if changes were made. the images or other third party material in this article are included in the article's creative commons licence, unless indicated otherwise in a credit line to the material.
if material is not included in the article's creative commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. to view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. key: cord-020197-z4ianbw8 authors: celliers, marlie; hattingh, marie title: a systematic review on fake news themes reported in literature date: 2020-03-10 journal: responsible design, implementation and use of information and communication technology doi: 10.1007/978-3-030-45002-1_19 sha: doc_id: 20197 cord_uid: z4ianbw8 in this systematic literature review, a study of the factors involved in the spreading of fake news is provided. in this review, the root causes of the spreading of fake news are identified in order to reduce the spread of such false information. to combat the spreading of fake news on social media, the reasons behind the spreading of fake news must first be identified. therefore, this literature review takes an early initiative to identify the possible reasons behind the spreading of fake news. the purpose of this literature review is to identify why individuals tend to share false information and to help detect fake news before it spreads. the increased use of social media exposes users to misleading information, satire and fake advertisements [3]. fake news, or misinformation, is defined as fabricated information presented as the truth [6]. it is the publication of known false information, shared among individuals [7]. it is the intentional publishing of misleading information that can be verified as false through fact-checking [2]. social media platforms allow individuals to share information quickly, with only a click of a single share button [4]. previous studies have investigated the effects of the spreading of, and exposure to, misleading information [6].
some studies determined that everyone has problems with identifying fake news, not just users of a certain age, gender or education [11]. literacy and education about fake news are essential in combating the spreading of false information [8]. this review identifies and discusses the factors involved in the sharing and spreading of fake news. the outcome of this review should be to equip users with the abilities to detect and recognise misinformation, and also to cultivate a desire to stop the spreading of false information [5].

2 literature background

the internet is mainly driven by advertising [13]. websites with sensational headlines are very popular, which leads to advertising companies capitalising on the high traffic to these sites [13]. it was subsequently discovered that the creators of fake news websites and information could make money through automated advertising that rewards high traffic to their websites [13]. the question remains how misinformation would then influence the public. the spreading of misinformation can cause confusion and unnecessary stress among the public [10]. fake news that is purposely created to mislead and to cause harm to the public is referred to as digital disinformation [17]. disinformation has the potential to cause issues, within minutes, for millions of people [10]. disinformation has been known to disrupt election processes and to create unease, disputes and hostility among the public [17]. these days, the internet has become a vital part of our daily lives [2]. traditional methods of acquiring information have nearly vanished to pave the way for social media platforms [2]. it was reported in 2017 that facebook was the largest social media platform, hosting more than 1.9 billion users world-wide [18]. of all the social media platforms, facebook possibly has the biggest impact on the spreading of fake news [14]. it was reported that 44% of users worldwide get their news from facebook [14]. 
23% of facebook users have indicated that they have shared false information, either knowingly or not [19]. the spreading of fake news is fuelled by social media platforms, and it is happening at an alarming pace [14]. in this systematic literature review, a qualitative methodology was followed. a thematic approach was implemented to determine the factors and sub-factors that contribute to the sharing and spreading of fake news. the study employed the following search terms: ("fake news" (near/2) "social media") and (defin* or factors or tools); ("misinformation" (near/2) "social media") and (defin* or factors or tools). in this literature review, only published journal articles between 2016 and 2019 were considered. this review is not specific to certain sectors, i.e. the health sector or the tourism sector, but rather considers all elements that contribute to individuals sharing false information. studies that are not in english have been excluded from this review. only studies that are related to the research question have been taken into account. this article does not discuss the detection of fake news but rather the reasons behind the spreading of fake news. the analysis consisted of four phases: identification phase; screening phase; eligibility phase and inclusion phase. when conducting this literature review, the selection of articles was based on three main criteria: firstly, to search for and select articles containing the search terms identified above; secondly, selection based on the title and abstract of the article; and finally, selection based on the content of the article. in the identification phase of this literature review, science direct and emerald insight were selected to perform the literature review. science direct offered a total of 177 journal articles matching the search terms. emerald insight generated 121 journal articles that matched the search terms. 
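the near/2 operator in the search strings above is a proximity constraint: the two quoted phrases must appear within two words of each other. a minimal sketch of such a check in python (the `near` helper and its simple tokenisation are illustrative assumptions, not part of the review's tooling):

```python
import re

def near(text, phrase_a, phrase_b, max_gap=2):
    """return True if phrase_a and phrase_b occur within max_gap words of each other."""
    tokens = re.findall(r"\w+", text.lower())

    def starts(phrase):
        # all token positions where the phrase begins
        words = phrase.lower().split()
        return [i for i in range(len(tokens) - len(words) + 1)
                if tokens[i:i + len(words)] == words]

    len_a, len_b = len(phrase_a.split()), len(phrase_b.split())
    # gap measured from the end of one phrase to the start of the other, either order
    return any(
        0 <= b - (a + len_a) <= max_gap or 0 <= a - (b + len_b) <= max_gap
        for a in starts(phrase_a) for b in starts(phrase_b)
    )

print(near("fake news spreading on social media platforms", "fake news", "social media"))
```

a real database's near operator may count the gap slightly differently (e.g. inclusive of phrase words); this sketch only illustrates the idea of proximity-constrained matching.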
continuing with the identification phase, the various articles were then combined and the duplicates were removed. in the screening phase of the source selection, all the article titles were carefully screened and a few articles were excluded as unconvincing. in the eligibility phase, the abstracts of the remaining articles were assessed and some articles were excluded based on the likely content of the article. the remaining articles were then thoroughly examined to determine whether they were valuable and valid for this research paper. upon further evaluation, these final remaining articles were further studied to make the final source selection. in this paper, possible reasons for and factors contributing to the sharing and spreading of false information are discussed. the reasons are categorized under various factors highlighted in the journal articles used to answer the research question. these factors include: social factors, cognitive factors, political factors, financial factors and malicious factors. while conducting the literature review, 22 articles highlighted the social factors; 13 articles discussed the role that cognitive factors play in contributing to the sharing and spreading of fake news; 13 articles highlighted the role of political factors; nine articles discussed how financial gain could convince social media users to spread false information; and 13 articles debated malicious factors and the effect that they have on the sharing and spreading of false information. figure 1 gives a breakdown of all 38 articles containing references to all the subcategories listed above. it was clearly evident that the two single sub-categories of social comparison and hate propaganda were the most debated, with the sub-factor of knowledge and education close behind. 
a high percentage of the articles, 34.2% (13 of 38), refer to the effects of social comparison on the spreading of false information, followed by 26.3% (10 of 38) of the articles referencing hate propaganda. knowledge and education was measured at 23.6% (9 of 38). furthermore, it was concluded that the majority of the 38 articles highlighted a combination of the social factors, i.e. conformity and peer influence, social comparison, and satire and humorous fakes, which measured at 60.5% (23 of 38), whereas the combination of the cognitive factors, e.g. knowledge and education and ignorance, measured at 39.4% (15 of 38). political factors and sub-factors, e.g. political clickbaits and political bots/cyborgs, were discussed in 34.2% (13 of 38) of the articles. in addition, financial factors, e.g. advertising and financial clickbaits, were referenced in 23.6% (9 of 38) of the journal articles. lastly, malicious factors, e.g. malicious bots and cyborgs, hate propaganda and malicious clickbaits, measured at 34.2% (13 of 38). fake news stories are being promoted on social media platforms to deceive the public for ideological gain [20]. in various articles it was stated that social media users are more likely to seek information from people who are like-minded or congruent with their own opinions and attitudes [21, 22]. conformity and peer influence. this is the need of an individual to match his or her behaviour to a specific social group [9]. the desire that social media users have to enhance themselves on social media platforms could blur the lines between real information and false information [12]. consequently, social media users will share information to gain social approval and to build their image [12]. recent studies have shown that certain false information can be strengthened if it comes from individuals in the same social environment [23]. the real power lies with those individuals who are more vocal or influential [24]. 
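the percentages above are simple shares of the 38 selected articles. a quick sketch recomputing them from the reported counts (the truncation to one decimal place is an assumption that happens to reproduce the reported figures):

```python
# per-(sub-)factor article counts as reported in the review; 38 articles in the final selection
counts = {
    "social comparison": 13,
    "hate propaganda": 10,
    "knowledge and education": 9,
    "social factors (combined)": 23,
    "cognitive factors (combined)": 15,
    "political factors": 13,
    "financial factors": 9,
    "malicious factors": 13,
}
TOTAL = 38

def share(n, total=TOTAL):
    """percentage truncated (not rounded) to one decimal place."""
    return int(n / total * 1000) / 10

for factor, n in counts.items():
    print(f"{factor}: {share(n)}% ({n} of {TOTAL})")
```

note that, e.g., 9/38 is 23.68%, so the review's "23.6%" implies truncation rather than rounding; the helper mirrors that choice.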
the need for social media users to endorse information or a message can be driven by the perception the social media user has of the messenger [25]. these messengers or "influencers" can be anyone, ranging from celebrities to companies [24]. studies show that messages on social media platforms, like twitter, gain amplification because the message or information is associated with certain users or influencers [25]. information exchange depends on the ratings or on the influential users associated with the information [23]. social media users' influence among peers enhances the impact and spreading of all types of information [25]. these social media influencers have the ability to rapidly spread information to numerous social media users [26]. the level of influence these influencers have can amplify the impact of the information [27]. the lack of related information in online communities could lead to individuals sharing information based on the opinions and behaviours of others [23]. some studies show that social media users will seek out or share information that reaffirms their beliefs or attitudes [18]. social comparison. the whole driving force of the social media sphere is to post and share information [1]. social comparison can be defined as certain members within the same social environment sharing the same beliefs and opinions [12]. when they are unable to evaluate certain information on their own, they adapt by comparing themselves to other members, within the same environment, with the same beliefs and opinions [12]. the nature of social media allows social media users to spread information in real-time [28]. social media users generate interactions on social media platforms to gain "followers" and to get "likes", which leads to an increasing number of fake news websites and accounts [10]. 
one of the biggest problems faced in the fake news dilemma is that social media users' newsfeeds on social media platforms, like facebook, will generally be populated with the user's likes and beliefs, providing a breeding ground for users with like-minded beliefs to spread false information among each other [16]. social media users like to pursue information from other members in their social media environment whose beliefs and opinions are most compatible with their own [21]. social media algorithms are designed to make suggestions or filter information based on the social media user's preferences [29]. the "like" button on social media platforms, e.g. facebook, becomes a measuring tool for the quality of information, which could make social media users more willing to share information if it has received multiple likes [30]. social media users' belief in certain information depends on the number of postings or "re-tweets" by other social media users who are involved in their social media sphere [23]. one article mentioned that the false news spreading process can be related to the patterns of the distribution of news among social media users [31]. the more a certain piece of information is shared and passed along, the more power it gains [6]. this "endorsing" behaviour results in the spreading of misleading information [25]. it is also known as "herding" behaviour and is common on social media, where individuals review and comment on certain items [23]. it is also referred to as the "bandwagon effect", where individuals blindly concentrate on certain information based on perceived trends [32]. the only thing that matters is that the information falls in line with what the social media user wants to hear and believe [16]. many studies also refer to it as the "filter bubble effect", where social media users use social media platforms to suggest or convince other social media users of their cause [33]. 
communities form as a result of these filter bubbles, where social media users cut themselves off from any other individual who might not share the same beliefs or opinions [33]. it was found that social media users tend to read news or information that is ideologically similar to their own ideologies [29]. satire and humorous fakes. some of the content on social media is designed to amuse users and is made to deceive people into thinking that it is real news [2]. satire is referred to as criticising or mocking the ideas or opinions of people in an entertaining or comical way [7, 29]. these satire articles consist of jokes or forms of sarcasm that can be written by everyday social media users [10]. most satire articles are designed to mislead and instruct certain individuals [13]. some social media users will be convinced that it is true information and will thus share the information [2]. cognition is the ability of an individual to make sense of certain topics or information by executing a process of reasoning and understanding [34]. with an increasing amount of information being shared across social media platforms, it can be challenging for social media users to determine which of the information is closest to the original source [22]. the issue of individuals not having the ability to distinguish between real and fake news has been raised in many articles [10]. users of social media tend not to investigate the information they are reading or sharing [2]. this can therefore lead to the rapid sharing and spreading of any unchecked information across social media platforms [2, 10]. the trustworthiness of a certain article is based on how successful the exchange of the article is [12]. the more successful the exchange, the more likely social media users are to share the information [12]. 
social media users make supposedly reasonable justifications to determine the authenticity of the information provided [35]. people creating fake news websites and writing false information exploit the non-intellectual characteristics of some people [13]. for social media users to determine whether the information they received is true or false, expert judgement of content is needed [1]. in a recent study, it was found that many social media users judge the credibility of certain information based on detail and presentation, rather than the source [11]. some individuals determine the trustworthiness of information provided to them through social media by how much detail and content it contains [11]. it is believed that people are unable to construe information when the information given to them conflicts with their existing knowledge base [34]. most social media users lack the related information needed to make a thorough evaluation of a particular news source [23]. for many years, companies and people have been creating fake news articles to capitalize on the non-intellectual characteristics of certain individuals [13]. a driving force of the spreading of false information is that social media users undiscerningly forward false information [5]. a reason for the spreading of false information in many cases is inattentive individuals who do not realise that some websites mimic real websites [2]. these false websites are designed to look like the real website but in essence only contain false information. social media users tend to share information containing a provocative headline without investigating the facts and sources [16]. the absence of fact-checking by social media users on social media platforms increases the spreading of false information [2]. social media users tend to share information without verifying the source or the reliability of the content [2]. 
information found on social media platforms, like twitter, is sometimes not even read before being spread among users, without any investigation into the source of the information [16]. as mentioned earlier in sect. 5.1, the bandwagon effect causes individuals to share information without making a valued judgement [32]. the spreading of false political information has increased due to the emergence of streamlined media environments [22]. there has been a considerable amount of research done on the influence of fake news on the political environment [14, 18]. by creating false political statements, voters can be convinced or persuaded to change their opinions [3]. critics reported that in the national election in the uk (regarding the nation's withdrawal from the eu) and the 2016 presidential election in the us, a considerable amount of false information was shared on social media platforms, which may have influenced the outcome of the results [8, 22]. social media platforms, like facebook, came under fire in the 2016 us presidential election, when fake news stories from unchecked sources were spread among many users [10]. the spreading of such fake news has the sole purpose of changing the public's opinion [8, 29]. various techniques can be used to change the public's opinion. these techniques include repeatedly retweeting or sharing messages, often with the use of bots or cyborgs [15]. they also include misleading hyperlinks that lure the social media user to more false information [15]. political clickbaits. clickbaits are defined as sources that provide information but use misleading and sensational headlines to attract individuals [16]. in the 2016 us presidential elections it was apparent that clickbaits were used to shape people's opinions [4]. in a recent study it was found that 43% (13 of 30) of false news stories were shared on social media platforms, like twitter, with links to non-credible news websites [22]. 
webpages are purposely created to resemble real webpages for political gain [3]. news sources with urls similar to the real website url have been known to spread political fake news pieces, which could influence the opinion of the public [10]. political bots/cyborgs. a social media user's online content is managed by algorithms to reflect his or her prior choices [22]. algorithms designed to fabricate reports are one of the main causes of the spreading of false information [33]. in recent years, the rapid growth of fake news has led to the belief that cyborgs and bots are used to increase the spreading of misinformation on social media [22]. in the 2016 us election, social bots were used to lead social media users to fake news websites to influence their opinions on the candidates [3]. hundreds of bots were created in the 2016 us presidential elections to lure people to websites with false information [3]. these social bots can spread information through social media platforms and participate in online social communities [3]. one of the biggest problems with fake news is that it allows the writers to receive monetary incentives [13]. misleading information and stories are promoted on social media platforms to deceive social media users for financial gain [20, 29]. one of the main goals of fake news accounts is to generate traffic to a specific website [10]. articles with attractive headlines lure social media users into sharing false information thousands of times [2]. many companies use social media as a platform to advertise or promote their products [26]. advertising. people earn money through clicks and views [36]. the more times a link is clicked, the more advertising money is generated [16]. every click corresponds to advertising revenue for the content creator [16]. the more traffic companies or social media users get to their fake news page, the more profit through advertising can be earned [10]. 
the only way to prevent financial gain for the content creator is inaction [16]. most advertising companies are more interested in how many social media users will be exposed to their product than in the possible false information displayed on the page where their advertisement appears [13]. websites today are not restricted in the content they display to the public, as long as they attract users [13]. this explains how false information is monetized, providing monetary value for writers who display sensational false information [13]. financial clickbaits. clickbaits are used to lure individuals to other websites or articles for financial gain [2]. one of the main reasons for falsifying information is to earn money through clicks and views [36]. writers focus on sensational headlines rather than truthful information [13]. appeal, rather than truthfulness, drives information [5]. these attractive headlines deceive individuals into sharing certain false information [2]. clickbaits are purposely implemented to misguide or redirect social media users to increase the views and web traffic of certain websites for online advertising earnings [4]. social media users end up spending only a short time on these websites [4]. clickbaits have been indicated as one of the main reasons behind the spreading of false information [2]. studies debating the trustworthiness of information and veracity analytics of online content have increased recently due to the rise in fake news stories [37]. social media has become a useful way for individuals to share information and opinions about various topics [3]. unfortunately, many users share information with malicious intent [3]. malicious users, also referred to as "trolls", often engage in online communication to manipulate other social media users and to spread rumours [37]. malicious websites are specifically created for the spreading of fake news [2]. 
malicious entities use false information to disrupt daily activities in areas like the health sector, the stock markets, or even the opinions people have of certain products [3]. some online fake news stories are purposely designed to target victims [3]. websites, like reddit, have been known as platforms where users can be exposed to bullying [38]. some individuals have been known to use social media platforms to cause confusion and fear among others [39]. malicious bots/cyborgs. malicious users, with the help of bots, target absent-minded individuals who do not check an article's facts or source before sharing it on social media [2]. these ai-powered bots are designed to mimic human behaviour and characteristics, and are used to corrupt online conversations with unwanted and misleading advertisements [38]. in recent studies it was found that social bots are being created to distribute malware and slander in order to damage an individual's beliefs and trust [3]. hate propaganda. many argue that the sharing of false information fuels vindictive behaviour among social media users [12]. some fake news websites or pages are specifically designed to harm a certain user's reputation [5, 10]. social media influencers affect users' emotional and health outcomes [12]. fake news creators specifically target users with false information [3]. this false information is specifically designed to deceive and manipulate social media users [21]. fake news stories like this intend to mislead the public and generate false beliefs [21]. in some cases, hackers have been known to send out fake requests to social media users to gain access to their personal information [39]. the spreading of hoaxes has also become a problem on social media. the goal of hoaxes is to manipulate the opinion of the public and to maximize public attention [35]. 
social spammers have also become more prevalent over the last few years, with the goal of launching different kinds of attacks on social media users, for example spreading viruses or phishing [2]. fake reviews have also been known to disrupt the online community through reviews that typically aim to direct people away from a certain product or person [2]. another method used by various malicious users is to purchase fake followers in order to spread harmful malware more swiftly [26]. malicious clickbaits. it was reported in a previous article that employees in a certain company clicked on a link, disguised as important information, through which they provided sensitive and important information to perpetrators [4]. malicious users intending to spread malware and phishing hide behind fake accounts to further increase their activities [26]. clickbaits in some cases are designed to disrupt interactions or to lure individuals into arguing in disturbed online interactions or communications [37]. these clickbaits have also been known to include malicious code as part of the webpage [4]. this will cause social media users to download malware onto their devices once they select the link [4]. various articles were used to identify and study the factors and reasons involved in the sharing and spreading of misinformation on social media. upon retrieving multiple reasons for the spreading of false information, they were categorized into main factors and sub-factors. these factors included social factors, cognitive factors, political factors, financial factors and malicious factors. considering the rapidly expanding social media environment, it was found that social factors in particular have a very significant influence on the sharing of fake news on social media platforms. the sub-factors of conformity and peer influence, social comparison, and satire and humorous fakes have great influence on the decision to share false information. 
secondly, it was concluded that malicious factors like hate propaganda also fuel the sharing of false information, with the possibility of financial gain or of doing harm. in addition, it was concluded from this review that knowledge and education play a very important role in the sharing of false information, where social media users sometimes lack the logic, reasoning and understanding of certain information. it was also evident that social media users may sometimes be ignorant and indifferent when sharing and spreading information. fact-checking resources are available, but their existence is fairly unknown and they are therefore often unused. hopefully, better knowledge and education will encourage a desire among social media users to be more aware of possibly unchecked information and its sources, and to stop the forwarding of false information. a better understanding of the motives behind the sharing of false information can potentially prepare social media users to be more vigilant when sharing information on social media. the goal of this literature review was only to identify the factors that drive the spreading of fake news on social media platforms; it did not fully address the dilemma of combatting the sharing and spreading of false information. while this literature review sheds light on the motivations behind the spreading of false information, it does not highlight the ways in which one can detect false information. this suggests follow-up research or literature studies using these factors in an attempt to detect and limit, or possibly eradicate, the spreading of false information across social media platforms. despite its limitations, this literature review helps to educate and provide insightful knowledge to social media users who share information across social media platforms. 
algorithmic detection of misinformation and disinformation: gricean perspectives
a survey on fake news and rumour detection techniques
an overview of online fake news: characterization, detection, and discussion
detecting fake news in social media networks
why students share misinformation on social media: motivation, gender, and study-level differences
the diffusion of misinformation on social media: temporal pattern, message, and source
behind the cues: a benchmarking study for fake news detection
why do people believe in fake news over the internet? an understanding from the perspective of existence of the habit of eating and drinking
'this is fake news': investigating the role of conformity to other users' views when commenting on and spreading disinformation in social media
the current state of fake news: challenges and opportunities
identifying fake news and fake users on twitter
why do people share fake news? associations between the dark side of social media use and fake news sharing behavior
history of fake news
getting acquainted with social networks and apps: combating fake news on social media
what the fake? assessing the extent of networked political spamming and bots in the propagation of #fakenews on twitter
fake news: belief in post-truth
real411: keeping it real in digital media. disinformation destroys democracy
democracy, information, and libraries in a time of post-truth discourse
attention-based convolutional approach for misinformation identification from massive and noisy microblog posts
third person effects of fake news: fake news regulation and media literacy interventions
good news, bad news, and fake news: going beyond political literacy to democracy and libraries
a computational approach for examining the roots and spreading patterns of fake news: evolution tree analysis
effects of group arguments on rumor belief and transmission in online communities: an information cascade and group polarization perspective
beyond misinformation: understanding and coping with the 'post-truth' era
virtual zika transmission after the first u.s. case: who said what and how it spread on twitter
distance-based customer detection in fake follower markets
exploring users' motivations to participate in viral communication on social media
detecting rumors in social media: a survey
fake news judgement: the case of undergraduate students at notre dame university-louaize
the emergence and effects of fake social information: evidence from crowdfunding
fake news and its credibility evaluation by dynamic relational networks: a bottom up approach
understanding the majority opinion formation process in online environments: an exploratory approach to facebook
social media and the future of open debate: a user-oriented approach to facebook's filter bubble conundrum
'fake news': incorrect, but hard to correct. the role of cognitive ability on the impact of false information on social impressions
social media hoaxes, political ideology, and the role of issue confidence
fake news and the willingness to share: a schemer schema and confirmatory bias perspective
the dark side of news community forums: opinion manipulation trolls
social media? it's serious! understanding the dark side of social media
social media security and trustworthiness: overview and new direction

key: cord-351448-jowb5kfc authors: ganesh, ragul; singh, swarndeep; mishra, rajan; sagar, rajesh title: the quality of online media reporting of celebrity suicide in india and its association with subsequent online suicide-related search behaviour among general population: an infodemiology study date: 2020-08-29 journal: asian j psychiatr doi: 10.1016/j.ajp.2020.102380 sha: doc_id: 351448 cord_uid: jowb5kfc the literature reports increased suicide rates among the general population in the weeks following a celebrity suicide, known as the werther effect. the world health organization (who) has developed guidelines for responsible media reporting of suicide. the present study aimed to assess the quality of online media reporting of a recent celebrity suicide in india and its impact on the online suicide-related search behaviour of the population. a total of 200 online media reports about sushant singh rajput's suicide published between 14th and 20th june, 2020 were assessed for quality of reporting using a checklist prepared from the who guidelines. further, we examined the change in online suicide-seeking and help-seeking search behaviour of the population following the celebrity suicide for the month of june, using selected keywords. in terms of potentially harmful media reportage, 85.5% of online reports violated at least one who media reporting guideline. in terms of potentially helpful media reportage, only 13% of articles provided information about where to seek help for suicidal thoughts or ideation. there was a significant increase in online suicide-seeking (u = 0.5, p < 0.05) and help-seeking (u = 6.5, p < 0.05) behaviour after the reference event, when compared to baseline. however, the online peak search interest for suicide-seeking was greater than that for help-seeking. 
this provides support for a strong werther effect, possibly associated with poor quality of media reporting of celebrity suicide. there is an urgent need for taking steps to improve the quality of media reporting of suicide in india. suicide is a major public health problem and one of the leading causes of mortality globally (naghavi, 2019). the reported number of deaths due to suicide in india is the highest among countries worldwide (dandona et al., 2018). studies have reported that media reports of celebrity suicide stimulate imitation acts in vulnerable populations (gould et al., 2003). also, repeated insensitive media coverage may act as a source of misinformation that suicide is an acceptable solution to ongoing difficulties in life. this has been supported by the bulk of the available literature, with a recent meta-analysis reporting a 13% increased risk of suicide (95% confidence interval of 8-18%; median follow-up duration of 28 days) in the period following media reports of celebrity suicide death (niederkrotenthaler et al., 2020). media reporting of suicide is a double-edged sword, with inappropriate and sensational reporting of suicide news leading to a copycat phenomenon or werther effect, whereas sensible media reporting of suicide, along with media involvement in spreading preventive information, has been shown to minimise copycat effects and to be effective in reducing suicide deaths (cheng et al., 2018). thus, researchers have advocated for more responsible descriptive reporting of suicide news, with emphasis on sharing preventive information related to suicide. this includes reporting on how people could adopt alternative coping strategies to deal with life stresses or depressed mood, along with sharing links to educative websites or suicide helplines, and has been shown to be associated with decreased suicidal behaviour and ideation in vulnerable populations (niederkrotenthaler et al., 2010; till et al., 2017). 
therefore, media reporting of suicide-related preventive information has been associated with positive effects on subsequent suicide rates and ideation. this is described as the papageno effect, and it acts as a counterforce to the werther effect. responsible media reporting of suicide is considered the best available strategy to counter the harmful effects of media reportage (niederkrotenthaler et al., 2020). in recent years, the internet has been increasingly used by the public for seeking health-related information, with information related to mental health disorders or problems also being widely available online (amante et al., 2015). it is understandable that several researchers have expressed concerns about vulnerable individuals either using the internet to access pro-suicide information (e.g. methods of suicide) or inadvertently being exposed to online news or information which negatively affects their thoughts or mood and promotes suicidal behaviours (arendt and scherr, 2017; till and niederkrotenthaler, 2014). however, the internet also provides a host of suicide prevention related information and resources which could in turn decrease the risk of suicide (biddle et al., 2008). further, news over the internet and social media is able to reach a large number of vulnerable and difficult-to-reach youth, and has been shown to potentially influence public opinion, attitudes, and behaviours over a wide range of topics (kaplan and haenlein, 2010). thus, it is important to explore the quality of online media reporting of celebrity suicide in india. this would in turn help in better understanding the role played by this new electronic medium in either predisposing or protecting people with suicidal ideas or death wishes. there has been limited literature available assessing the quality of media reporting of suicide in india. 
most of the available studies assessed media reporting of suicide in the general population, and only one study had focused on celebrity suicide specifically (harshe et al., 2016). however, that study took the death of robin williams (a hollywood movie actor of us origin) as the reference event and was conducted about four years ago. further, all the available studies assessed newspapers in a particular region and were conducted before the press council of india (pci) issued media reporting guidelines on suicide and mental illness in india. the pci has adopted the guidelines of the world health organization (who) report on preventing suicide (press council of india, 2020). it forbids undue repetition of stories, placing stories in prominent positions, explicit description of the suicide method, providing details about the suicide location, using sensational headlines, and reporting photographs of the person. there might have been some change in the quality of media reporting of suicide in recent years, more specifically after the pci guidelines. further, the who guidelines for responsible reporting are valid for all types of media, and it is important to explore the role played by online media in the current digital world. moreover, to the best of our knowledge there has been no study from the indian context yet exploring the association between media reporting of the death of a celebrity by suicide and subsequent suicidal behaviour in the general population. the official figures for deaths due to suicide in india are released by the national crimes record bureau (ncrb). however, the ncrb stopped releasing this data in 2016, and the official suicide statistics have not been made public since. further, the data is usually available at the end of the year and does not provide weekly figures. 
moreover, any other system of directly recording suicide statistics in india would face the challenges associated with collecting vital statistics through a sub-optimal existing vital registration system, misclassification, and under-reporting of suicide deaths due to the associated legal complications and social stigma around a suicide death in the family. additionally, the restrictions imposed on the movement of people and the social distancing guidelines to be followed during the current covid-19 pandemic make it even more difficult to access the study population in a systematic manner for assessment of suicide risk using traditional research methods (bidarbakhtnia, 2020). the above described limitations could be addressed by employing research methods and techniques involving the study of internet-based search behaviours and social media content. infodemiology has been defined as "the science of distribution and determinants of information in an electronic medium, specifically the internet, or in a population, with the ultimate aim to inform public health and public policy" (eysenbach, 2009). google trends is an analytical tool available for tracking the online search interests of the population. the evidence supporting a correlation between increased online search interest for particular suicide-related queries on the google search engine and the actual number of suicides in that region during that time period has been growing over the past two decades (lee, 2020). moreover, recent studies have shown that data obtained using google trends for suicide-seeking keywords could be used for predicting actual monthly suicide numbers at the country level (kristoufek et al., 2016). thus, in the present study we monitored the changes in internet search volumes for keywords representing suicide-seeking and help-seeking behaviours using the google trends platform as a proxy marker to assess the impact of a recent celebrity suicide in india. 
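google trends does not expose raw search counts; it rescales each query's share of total searches in the region so that the period's peak equals 100. a minimal sketch of that kind of rescaling (the per-period counts and totals below are hypothetical, purely for illustration, and this is a simplified model of google's proprietary normalization):

```python
def relative_search_volume(counts, totals):
    """Rescale per-period search counts to a Google-Trends-style RSV:
    compute each period's share of total searches, then scale so the
    peak share maps to 100 and every other period is relative to it."""
    shares = [c / t for c, t in zip(counts, totals)]
    peak = max(shares)
    return [round(100 * s / peak) for s in shares]
```

for example, daily counts of 30, 60, and 90 against daily search totals of 1000, 2000, and 1000 give shares of 0.03, 0.03, and 0.09, so the first two days receive the same rsv even though their raw counts differ.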
sushant singh rajput (ssr) was a much-loved indian actor who died by suicide on june 14, 2020. this was reported by various national and international media and was considered the reference event in this study. thus, the present study aimed to assess the quality of online media reporting of a celebrity suicide in india and evaluate its adherence to the who guidelines for responsible media reporting of suicide. further, we aimed to examine the change in internet search volumes for keywords representing suicide-seeking and help-seeking behaviours of the population immediately following the celebrity suicide. this would provide indirect evidence for either the existence or absence of the werther and the papageno effects at the population level in india. the online media reports related to the theme of the death of ssr by suicide on 14th june were retrieved using the google news online platform (https://news.google.com). the search was conducted on 20th june, 2020 in the tor browser, using the search terms "sushant", "singh", "rajput", "died", "death" and "suicide". the search period was restricted to between 14 and 20 june, 2020, corresponding to the first week immediately after the reference event. a total of 214 reports published on various international, national, and regional online news and entertainment media portals were retrieved. fourteen of them contained either only videos or were not related to the theme of the present study, and were excluded. thus, a total of 200 articles were selected for further analysis. the news headlines were analysed to generate a word cloud (using a word cloud generator available at https://www.wordclouds.com) representing the commonly used terms in the online media reports covering the ssr death. two authors independently reviewed and extracted information related to different news report characteristics using a pre-designed format in microsoft excel. 
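underneath, a headline word cloud is just a term-frequency table, with font size tracking frequency. a minimal sketch of the counting step (the headlines and the stopword list below are hypothetical, not the study's data):

```python
from collections import Counter
import re

# Small illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "of", "a", "in", "to", "by", "and", "his"}

def headline_term_frequencies(headlines):
    """Tokenize headlines, drop stopwords, and count term frequencies --
    the data behind a word cloud."""
    tokens = []
    for h in headlines:
        tokens += [w for w in re.findall(r"[a-z]+", h.lower())
                   if w not in STOPWORDS]
    return Counter(tokens)
```

`Counter.most_common(n)` then yields the top terms, which a word cloud generator renders with size proportional to count.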
it included information pertaining to descriptive characteristics of the news report, such as the date of publishing, name of the news publisher, type of media agency, and whether the primary focus of the article was descriptive or commentary. the quality of articles was evaluated using a checklist prepared on the basis of the who responsible media reporting of suicide guidelines (see supplementary table 1), similar to that used in previous studies. items were coded as "1" if the guideline was violated and "0" if the guideline was adhered to in the report. two trained researchers independently reviewed and extracted information following the above described procedure. any discrepancy or disagreement between the two researchers was resolved by consensus, and the third author was consulted if needed. the data were analysed using spss version 23.0 (ibm corp, armonk, ny). descriptive (frequency and percentage) and inferential statistics (chi-square and fisher's exact test) were computed. a p-value of less than 0.05 was considered significant for all tests. google trends utilizes an algorithm to give a normalized relative search volume (rsv) for the keyword(s) searched for a specified geographical region and time period. the rsv represents how frequently a given search query has been searched on the google search engine, compared to the total volume of google searches conducted in the same geographical region over the selected time period. rsv values range from zero (representing very low search volumes) to 100 (the peak search volume for that query). a google trends analysis was conducted to evaluate the online search interest for keywords representing suicide-seeking and help-seeking behaviours of the population for the month of june 2020. the initial list was made based on a review of the available literature and was finalized by consensus between two authors, r.g. 
and s.s. (qualified psychiatrists with clinical and research experience of working with people with mental illness and suicidal ideas/attempts) based on the face validity of the search terms. examples of the suicide-seeking keywords included in the study were 'commit suicide', 'suicide method', and 'kill myself', whereas help-seeking keywords such as 'suicide help', 'suicide treatment', and 'psychiatrist' were used. the four google trends options of region, time, category, and search type were specified as india, 1 june to 30 june 2020, all categories, and web search, respectively. the "plus" (+) function from google trends was used to integrate the search volume (rsv) of all suicide-seeking terms and all help-seeking keywords. a graph showing the daily variation in rsv for suicide-seeking and help-seeking keywords was constructed. the change in mean rsv value for the suicide-seeking and help-seeking keywords after the reference event, compared to baseline, was analysed by applying the mann-whitney u test. the complete list of keywords used in this study, along with other details pertaining to the google trends methodology, is described in supplementary table 2. the information used in this study involved published online media reports and data related to the volume of anonymized web searches made during a given time period, both of which were freely available in the public domain. further, no patient or participant was approached directly in this study; thus, no written ethical permission was required from the ethics committee. the frequencies of different words used in the headlines of the 200 media reports analysed in the present study were depicted as a word cloud, with the size of the font being representative of frequency (figure 1). apart from the words in the name of ssr, the most commonly used words were "suicide", "death", "actor", "police", "bollywood", "mumbai", "rhea", and "kapoor", in decreasing order of frequency. 
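the mann-whitney u comparison of baseline versus post-event rsv values reduces to a rank-sum computation. a minimal pure-python sketch (midranks handle tied values; in practice a library routine such as scipy.stats.mannwhitneyu would be used, and the illustrative values in the usage below are hypothetical, not the study's data):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples,
    using midranks for ties; returns min(U1, U2)."""
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    n1, n2 = len(x), len(y)
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:n1])               # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1
    return min(u1, u2)
```

a u near zero (as with hypothetical baseline rsvs [12, 11.5, 13.5, 12, 11] against post-event rsvs [22, 18, 25, 20, 24]) means the two samples barely overlap, i.e. a large shift in search interest.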
this suggested that a significant proportion of headlines used words like suicide, police, or bollywood to sensationalize or glamourize the headlines, with no significant difference between news media (37.5%; 37/122) and entertainment media (42.0%; 37/88) headlines (χ²=0.426, p=0.56). the term "suicide" was used with similar frequency in both news media (n=18; 16.1%) and entertainment media (n=15; 17%) headlines. only two news media reports (1.8%) mentioned the term 'hanging' in the headline. the location of suicide was mentioned in two news media (1.8%) and two entertainment media (2.3%) headlines. the selected media reports were published across various media platforms: international news groups, 9% (n=18); national news groups, 28.5% (n=57); regional news groups, 18.5% (n=37); and entertainment blogs, 44% (n=88). seventeen news media platforms had reported the story four or more times in the one-week period immediately following the ssr suicide, with hindustan times (6), ndtv (6), republic world (6), times of india (5), dna (5), india tv (4), the indian express (4) and times now (4) contributing 41.5% (n=83/200) of the articles. around 17% of articles (n=34) were published on 14th june, 2020, while 49.5% (n=99) were published on 18th june, 2020. about 45.5% of articles (n=91/200) focused on direct descriptive reporting of the suicide. the descriptive analyses of media reports for different potentially harmful and helpful characteristics are described in tables 1 and 2, respectively. about 85.5% of reports violated the recommendations provided in the guidelines by including at least one piece of potentially harmful information. there was a significant association between the type of news media and the use of sensational language [χ²(3)=9.774, p=0.021]. regional and entertainment media used more sensational language compared to national and international media. 
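the association tests reported here (e.g., sensational language by media type) are pearson chi-square tests on contingency tables of report counts. a minimal sketch for the 2x2 case, without continuity correction (the counts in the usage below are hypothetical, not the study's):

```python
def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table of
    observed counts (no Yates continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat
```

the statistic is then compared against the chi-square distribution with (rows − 1)(cols − 1) degrees of freedom, i.e. 1 for a 2x2 table; scipy.stats.chi2_contingency wraps this computation and also returns the p-value.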
there was a significant association between the type of news media and the provision of information about where to seek help [χ²(1)=15.98, p<0.001]. mainstream news media provided such information more often than entertainment media. final social media posts were shared more by national media compared to international, regional, and entertainment media [χ²(3)=9.91, p=0.019]. the median and inter-quartile range (iqr) values of rsv for suicide-seeking keywords in the two weeks before and after the death of ssr on 14 june 2020 were 12 (iqr: 11.5-13.5) and 22 (iqr: 18-25), respectively, whereas the median and iqr values of rsv for help-seeking keywords in the two weeks before and after the death of ssr were 11 (10-11.5) and 14 (13-19), respectively. there was a significant increase in rsv for suicide-seeking (u=0.5; z=-4.62; p<0.001) and help-seeking (u=6.5; z=-4.39; p<0.001) keywords after the reference event, when compared to baseline. however, the online peak search volume and search interest for suicide-seeking were greater than for help-seeking, as shown in figure 2. the present study analysed online media reports related to the theme of a popular bollywood movie actor's suicide and compared them against the who media reporting guidelines for suicide. the story of this recent celebrity suicide received widespread coverage across different online news platforms, including national and international news agencies. overall, the majority of articles showed poor adherence to the who guidelines while reporting the celebrity suicide. the reports had minimal focus on educating the public regarding suicide. further, the change in online search interest for different keywords related to "suicide-seeking" and "help-seeking" behaviours after this event was analysed to explore possible werther and papageno effects. a substantial proportion of articles did not follow most of the recommendations. 
about 58% of articles used sensational language, 59.5% mentioned the suicide site, and 17% suggested a possible cause for the suicide that was not related to poor mental health. a study assessing the quality of suicide reporting in the indian print media found an increase in the prominence of suicide reports after a celebrity suicide (harshe et al., 2016). it speculated that the most likely reason for sensationalism in media reporting of suicide might be to enhance readership. further, only 13% of articles provided information about where to seek help for suicidal thoughts. a previous study evaluating newspaper coverage of a celebrity suicide in the united states against the 'mindset' recommendations for reporting suicide found that 46% of articles provided details about the suicide method and only 11% provided information about help-seeking (carmichael and whitley, 2019). previous studies from india found minimal adherence to media reporting recommendations for suicide in the print media. menon et al. found that the method of suicide was reported in 99.1% of articles and the location of suicide in 86.5% of articles (menon et al., 2020). chandra et al. showed that 80% of articles reported the suicide location and 61% suggested monocausality for the suicide (chandra et al., 2014). the high frequency of harmful reporting characteristics observed in the present study is consistent with the low adherence to who guidelines reported in other neighbouring asian countries as well (s.m. yasir). studies from bangladesh (s. m. yasir), indonesia (nisa et al., 2020), and sri lanka (brandt sørensen et al., 2019) have also reported non-adherence to who recommendations in the print media, such as reporting of the suicide method, description of the suicide note, and inclusion of personal identification characteristics in reports. the headlines of the online reports included in the current study used the term 'suicide' in 18% of articles. 
previous studies on print media from india reported "suicide" being mentioned in the headlines of 68.6% of articles (chandra et al., 2014) and 30.31% of articles (harshe et al., 2016). refreshingly, in the present study 81.5% of articles did not use the word 'commit' or related terms while reporting the suicide, and 65.5% of articles did not mention the suicide method. this is a welcome improvement in media reportage of suicide, which might be due to the positive effect of the pci adopting guidelines on media reporting of suicide in september 2019 based on the who guidelines (vijayakumar, 2019). however, a photograph of the celebrity was provided by 47.5% of news media reports and 38% of entertainment media reports. the publication of a photograph of a person with mental illness, without the individual's or their next of kin's consent in the case of suicide, violates section 24(1) of the mental healthcare act, 2017 in india ("mental healthcare act," 2017). further, sensational language was used to report the celebrity suicide by the majority of news media and entertainment media reports. among the news media, regional media used sensational language more frequently than national and international media. final social media posts were reported by 8.9% of news media and 3.4% of entertainment media reports. among the different media platforms, national media shared final social media posts more frequently than international, regional, and entertainment media. mainstream news media provided information about where to seek help more frequently than entertainment media. this is in line with a study on print media from india that reported vernacular newspapers to be more compliant with the who suicide reporting guidelines than english language newspapers (menon et al., 2020). moreover, in terms of providing potentially helpful information, only one article provided research findings and population-level statistics regarding suicide. 
only two percent of articles included expert opinion from health professionals while reporting the suicide. also, 28.5% of articles tried to address the link between suicide and poor mental health in the present study. this highlights the need to emphasize to journalists and news editors the importance of including such information in media reports of suicide. it helps increase awareness about mental health problems among the general population and encourages them to seek treatment. a previous study assessing south indian newspapers found that only a few articles recognised the link between suicide and psychiatric disorders or substance use disorders (menon et al., 2020). similarly, previous studies from india evaluating the reporting of suicides in newspapers found that only a few articles tried to educate the public about the issue of suicide by including opinion from health professionals, research findings, or information about suicide prevention programmes (chandra et al., 2014; harshe et al., 2016; menon et al., 2020). one possible solution is to have a uniform national suicide reporting guideline for the media of the entire country. a similar approach has been shown to be beneficial in improving the overall quality of media reporting of suicide in australia (pirkis et al., 2009). however, as prior researchers have pointed out (vijayakumar, 2019), merely framing guidelines may not help in improving the quality of media reporting of suicide. a continuous collaborative approach involving both mental health experts and media professionals should be adopted to sensitize them to the available research evidence backing these guidelines; such an approach has been shown to be successful in improving adherence to media reporting guidelines (bohanna and wang, 2012). also, regular workshops should be held for media professionals to provide them with adequate training and support in covering mental health and suicide-related topics. 
the findings from the google trends analysis showed a significant increase in online search interest for terms representing both suicide-seeking and help-seeking behaviours after the death of ssr. the surge in internet search volume for suicide-seeking keywords, along with media reports of copycat suicides from different parts of india, provides evidence of the werther effect (hindustantimes, 2020; news18, 2020, p. 18; timesofindia, 2020). there are several possible mechanisms described in the literature to explain the observed increase in suicidal behaviour among the general population associated with media reporting of celebrity suicide (niederkrotenthaler et al., 2020). first, people may identify with the deceased celebrities, which is usually more common in the case of entertainment celebrities due to their strong public identity and following. second, repeated insensitive media reporting might lead to the normalization of suicide as an acceptable way out of problems among the vulnerable population. third, media reporting about the method of a celebrity suicide might increase the cognitive availability of that method and remove ambivalence about which method to choose, leading to an increase in suicide by the same method among the vulnerable population. interestingly, there was also a smaller but significant increase in the internet search volume for help-seeking keywords. the peak search volume for help-seeking and suicide-seeking keywords was observed on the day of ssr's death, with a lower peak rsv and subsequently lower daily rsvs for help-seeking terms as compared to suicide-seeking terms among the general population. this suggests a weaker papageno effect as compared to the werther effect, possibly due to poor adherence to the who suicide reporting guidelines by the online and other types of media in india while covering the celebrity suicide (newslaundry, 2020). 
there are only a few studies that have assessed the fidelity of suicide reporting in india, with almost all of them having evaluated the quality of media reporting of suicide in the general population and included only a few print newspapers. thus, our study provides a valuable addition to bridge these gaps in the existing literature on media reporting of celebrity suicide from india. further, a wide range of online media reports was analysed in this study, to the best of our knowledge for the first time in indian settings. moreover, the use of a novel google trends analysis to show increased online search interest for suicide-seeking keywords immediately after the reference celebrity suicide provided support for the existence of the werther effect in the indian context. however, there are certain limitations which should be kept in mind while interpreting the findings of this study. the study focussed only on english language media reports. we did not assess print media without an online version, television, radio, or social media. this might be an important area for future research, since studies from western countries suggest television coverage or social media (e.g. twitter) to be associated with increased suicide rates (jashinsky et al., 2014). further, the relationship between searches for suicide-seeking keywords and actual suicidal behaviour might not be as clear as that observed for certain infectious diseases like influenza, where google trends analyses of searches for disease symptoms or other disease-related information have been used to predict incidence or outbreaks before traditional methods of reporting an outbreak (cao et al., 2017; ginsberg et al., 2009; zhang et al., 2018). this is likely because someone who searched about suicide might not actually be suicidal, and may or may not kill themselves during the specified study period. 
further, the keywords representing suicide-seeking and help-seeking behaviours were derived mostly from a review of the literature from western countries, followed by consensus among the authors based on face validity. however, the search methodology used for the google trends analysis in the present study is in accordance with the guidelines for conducting robust google trends research (nuti et al., 2014). the quality of media reporting of celebrity suicide in the online media in india is poor when assessed against the who guidelines or the pci guidelines. in terms of including potentially harmful information, about 85.5% of reports violated at least one recommendation provided in the guidelines. further, compliance with the recommendations on including potentially helpful information, about creating awareness of suicide and possible ways of seeking help for suicidal thoughts, was very low, with only 13% of articles providing information about where to seek help for suicidal thoughts or ideation. there was a significantly greater increase in the online search interest for suicide-seeking keywords after the recent celebrity suicide. this in turn provides support for a strong werther effect, possibly associated with the poor quality of media reporting of celebrity suicide. the results emphasize the need for increased collaboration, promotion, and advocacy by experts for the uptake of existing media reporting guidelines on suicide by journalists and other stakeholders. there is an urgent need for research on understanding the effects of media reporting of suicide on the general population's suicidal acts and thoughts. funding sources: no financial support was received for this study. the authors declare no conflict of interest. 
access to care and use of the internet to search for health information: results from the us national health interview survey
quality of media reporting of suicidal behaviors in south-east asia
do bangladeshi newspapers educate public while reporting suicide? a year round observation from content analysis of six national newspapers
optimizing online suicide prevention: a search engine-based tailored approach
suicide in india: a complex public health tragedy in need of a plan
assessing the quality of media reporting of suicide news in india against world health organization guidelines: a content analysis study of nine major newspapers in tamil nadu
surveys under lockdown; a pandemic lesson
suicide and the internet
media guidelines for the responsible reporting of suicide: a review of effectiveness
self-harm and suicide coverage in sri lankan newspapers
social media engagement and hiv testing among men who have sex with men in china: a nationwide cross-sectional survey
media coverage of robin williams' suicide in the united states: a contributor to contagion? plos one 14, e0216543
do newspaper reports of suicides comply with standard suicide reporting guidelines? a study from bangalore, india
the role of media in preventing student suicides: a hong kong experience
gender differentials and state variations in suicide deaths in india: the global burden of disease study
infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the internet
detecting influenza epidemics using search engine query data
media contagion and suicide among the young
celebrity suicide and its effect on further media reporting and portrayal of suicide: an exploratory study
sushant singh rajput's fan ends life in visakhapatnam: report
tracking suicide risk factors through twitter in the us
users of the world, unite!
the challenges and opportunities of social media
estimating suicide occurrence statistics using google trends
search trends preceding increases in suicide: a cross-correlation study of monthly google search volume and suicide rate using transfer function models
do tamil newspapers educate the public about suicide? content analysis from a high suicide union territory in india
global, regional, and national burden of suicide mortality 1990 to 2016: systematic analysis for the global burden of disease study
andaman's minor girl hangs herself after she went into depression over sushant singh rajput's suicide [www document]
are top indian newspapers complying with guidelines on suicide reporting?
association between suicide reporting in the media and suicide: systematic review and meta-analysis
media and suicide: international perspectives on research, theory, and policy
role of media reports in completed and prevented suicide: werther v. papageno effects
indonesian online newspaper reporting of suicidal behavior: compliance with world health organization media guidelines
the use of google trends in health care research: a systematic review
changes in media reporting of suicide in australia between 2000/01 and 2006/07
guidelines adopted by the pci on mental illness/reporting of suicide cases
surfing for suicide methods and help: content analysis of websites retrieved with search engines in austria and the united states
beneficial and harmful effects of educative suicide prevention websites: randomised controlled trial exploring papageno v.
key: cord-282194-0sjmf1yn
authors: cherak, stephana j.; rosgen, brianna k.; amarbayan, mungunzul; plotnikoff, kara; wollny, krista; stelfox, henry t.; fiest, kirsten m.
title: impact of social media interventions and tools among informal caregivers of critically ill patients after patient admission to the intensive care unit: a scoping review
date: 2020-09-11
journal: plos one
doi: 10.1371/journal.pone.0238803
doc_id: 282194
cord_uid: 0sjmf1yn
background: the use of social media in healthcare continues to evolve. the purpose of this scoping review was to summarize existing research on the impact of social media interventions and tools among informal caregivers of critically ill patients after patient admission to the intensive care unit (icu).
methods: this review followed established scoping review methods, including an extensive a priori-defined search strategy implemented in the medline, embase, psycinfo, cinahl, and the cochrane central register of controlled trials databases to july 10, 2020. primary research studies reporting on the use of social media by informal caregivers of critically ill patients were included.
results: we identified 400 unique citations and thirty-one studies met the inclusion criteria. nine were interventional trials, four of them randomized controlled trials (rcts), and a majority (n = 14) were conducted (i.e., data collected) between 2013 and 2015. communication platforms (e.g., text messaging, web camera) were the most commonly used social media tool (n = 17), followed by social networking sites (e.g., facebook, instagram) (n = 6) and content communities (e.g., youtube, slideshare) (n = 5).
nine studies' primary objective was caregiver satisfaction, followed by self-care (n = 6) and health literacy (n = 5). nearly every study reported an outcome on usage feasibility (e.g., user attitudes, preferences, demographics) (n = 30), and twenty-three studies reported an outcome related to patient and caregiver satisfaction. among the studies that assessed statistical significance (n = 18), 12 reported statistically significant positive effects of social media use. overall, 16 of the 31 studies reported positive conclusions (e.g., increased knowledge, satisfaction, involvement) regarding the use of social media among informal caregivers for critically ill patients.
conclusions: social media has potential benefits for caregivers of the critically ill. more robust and clinically relevant studies are required to identify effective social media strategies used among caregivers for the critically ill.

introduction
social media is defined as "websites and applications that enable users to create and share content or to participate in social networking" [1]. social media tools are platforms and communities, such as facebook or skype, that facilitate quick communication and enable interaction among several users at any given time [2]. social media participation in older age groups is steadily increasing [3], contributing to over 3.2 billion active users worldwide [4]. in considering the various user-generated content and social networking platforms, the role of social media conveys different meanings between users and non-users, age groups (e.g., millennials), and demographic populations. since technological change is associated with linguistic and cultural changes, the role of social media is constantly in flux [5]. the use of social media in healthcare for increasing speed of communication, distributing accurate information, and promoting knowledge of support, treatments, and self-care options is becoming more widespread [6, 7].
patient- and family-centered healthcare, which acknowledges that patients and their informal caregivers are central figures in decision-making and delivery of care [8], recognizes that patients and caregivers exist within an online social structure and network of relationships [9]. social media tools, such as real-time communication platforms, educational material, and self-management guides, are now more commonly incorporated in the decision-making process to aid caregivers with making informed decisions regarding their loved one's care [10]. critically ill patients are often unable to communicate their care preferences (e.g., due to mechanical ventilation, coma, etc.), including those that are in line with their individual values and goals [11]. in these situations, critically ill patients rely on their informal caregivers to learn about their diagnosis and treatment options, and to make important decisions on their behalf [12]. these situations can be stressful and distressing for an informal caregiver [13]. family-centered interventions may improve caregivers' comprehension, satisfaction, and long-term psychological outcomes during and after a family member's critical illness [13, 14]. social media tools as family-centered interventions might allow for personalization, presentation, and participation of informal caregivers in their loved one's care, engaging them in the decision-making process and promoting better patient and informal caregiver outcomes [2, 15]. despite their potential value, it is unclear whether social media tools can be meaningfully and systematically deployed in critical care medicine [16]. we therefore asked the question: what is the extent, range, and nature of research evidence on the impact of social media interventions and tools among informal caregivers of critically ill patients?

this scoping review was conducted and reported as per the arksey-o'malley 5-stage scoping review method [17].
the approach for this review followed the scoping review methods manual by the joanna briggs institute [18]. the preferred reporting items for systematic reviews and meta-analysis protocols (prisma-p) guideline was used to develop the protocol [19] (s1 table). we adhered to the prisma-scr extension for scoping reviews [20] to report findings. inclusion criteria were as follows: (1) primary quantitative or qualitative research; (2) reporting on social media use with at least one informal caregiver as an end-user; (3) conducted with informal caregivers of critically ill patients of any age group; and (4) in any language or publication year. studies were excluded if they were not primary research (e.g., reviews or editorials), did not report on caregiver use of social media, or were not conducted in a critical care population. for the purposes of this review, we defined: (1) a caregiver as any informal (i.e., non-clinical) person who regularly provides support to the patient and is in some way directly implicated in the patient's care or directly affected by the patient's health problem (e.g., family, friend); (2) social media as any form of electronic communication that allows users to share information and other content and create online communities; and (3) critically ill patients as any persons who are currently admitted to an intensive care unit (icu) or had previously been admitted to an icu. studies were excluded if only abstracts were available. comprehensive literature searches were conducted in medline, embase, psycinfo, cinahl, and the cochrane central register of controlled trials. the search strategies for each database were developed with a medical librarian (dll) and were revised after reviewing preliminary search results. the search strategies combined synonyms and subject headings from three concepts: 1) caregivers; 2) critical care; and 3) social media.
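the inclusion criteria act as a conjunctive filter over screened records. a minimal sketch, assuming hypothetical boolean fields for the three exclusion-relevant criteria (language and publication year were unrestricted, so they are omitted):

```python
# hypothetical screening records; field names are illustrative, not from the review
records = [
    {"id": 1, "primary_research": True,  "caregiver_end_user": True,  "critical_care": True},
    {"id": 2, "primary_research": False, "caregiver_end_user": True,  "critical_care": True},   # editorial
    {"id": 3, "primary_research": True,  "caregiver_end_user": False, "critical_care": True},   # no caregiver use
    {"id": 4, "primary_research": True,  "caregiver_end_user": True,  "critical_care": False},  # non-ICU population
]

def meets_inclusion(r):
    """A record is kept only if all three exclusion-relevant criteria hold."""
    return r["primary_research"] and r["caregiver_end_user"] and r["critical_care"]

included = [r["id"] for r in records if meets_inclusion(r)]
print(included)  # [1]
```

each exclusion reason in the review (not primary research, no caregiver use of social media, no critical care population) corresponds to one failing field here.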
a search of the cochrane database of systematic reviews was undertaken to identify review articles related to the research question, and their reference lists were screened to identify potential studies missed in the search. all databases were searched from inception to july 10, 2020. reference lists of included papers were reviewed to identify potential studies missed in the search. no language or date limits were applied. the complete medline search strategy is shown in s2 table. after a subset of the team (sc, ma) achieved 100% agreement on a pilot-test of 50 random studies, all titles and abstracts were reviewed independently in duplicate by two reviewers (sc, ma). any study selected by either reviewer at this stage progressed to the next stage. the full text of all articles was reviewed independently in duplicate by two reviewers (sc, ma); articles selected by both reviewers at this stage were included in the final review. disagreements were resolved by discussion or the involvement of a third reviewer (br) when necessary. references were managed in endnote x9 (clarivate analytics, philadelphia, pa, usa). two reviewers (sc, kp) abstracted data independently and in duplicate for each included study using a data collection sheet developed and piloted by the review team. discrepancies were resolved through discussion with a third reviewer (ma). information on document characteristics (e.g., year of publication, geographic location), study characteristics (e.g., setting), caregiver group (e.g., spouses, parents, family caregivers), social media tool used (e.g., communication platform, content community, social networking site, blog or microblog), objectives and outcome measures of social media use, statistical significance, and authors' conclusions were collected. studies that examined social media as one component of a complex intervention were noted as such.
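the two-stage screening rule (either reviewer advances a study at title/abstract; both must agree at full text) and the percent-agreement measure reported for each stage can be sketched as follows, using toy reviewer decisions rather than the review's actual screening data:

```python
# toy decisions from two independent reviewers (True = include); data is illustrative
title_abstract = [(True, True), (True, False), (False, False), (False, True), (True, True)]

# title/abstract stage: a study selected by EITHER reviewer progresses
progressed = [a or b for a, b in title_abstract]

full_text = [(True, True), (False, True), (True, True)]
# full-text stage: only studies selected by BOTH reviewers are included
included = [a and b for a, b in full_text]

def percent_agreement(pairs):
    """Simple percent agreement: proportion of identical include/exclude calls."""
    return sum(a == b for a, b in pairs) / len(pairs)

print(percent_agreement(title_abstract))  # 0.6 on this toy data
```

the liberal "either reviewer" rule at the first stage deliberately errs toward over-inclusion, since false exclusions at title/abstract cannot be recovered later.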
findings were synthesized descriptively to map different areas of the literature as outlined in the research question. using a social media framework described in previous research [6], we categorized social media tools into five categories: collaborative projects (e.g., endnote, slack), blogs or microblogs (e.g., wordpress, twitter), content communities (e.g., youtube, slideshare), social networking sites (e.g., facebook, instagram), and real-time communication platforms (e.g., text messaging, web camera, facetime) (s3 table). study objectives and outcomes were classified according to an adaptation of the framework proposed by coulter and ellins [21] for strategies to inform, educate, and involve patients (s4 table). the main objective from each study was categorized into one of five categories: to improve health literacy, clinical decision making, self-care, patient safety, or other. outcomes reported in each study were classified as patient and caregiver knowledge, patient and caregiver experience, use of services and cost, health behaviors and health status, and usage feasibility. studies that reported statistically significant outcomes (p < 0.05) related to the main objective of the study were classified as "statistically significant." studies that reported outcomes that were not statistically significant were classified as "not statistically significant," and if a study did not assess significance through statistical tests, that study was classified as "not assessed." descriptive statistics were calculated using stata ic 15 (statacorp. stata statistical software: release 15. college station, tx: statacorp llc).

we screened 400 unique abstracts and reviewed 72 full-text articles; 41 full-text articles were excluded, the most common reasons being that the study did not report original research (n = 15/41) or that the study did not report on social media use (n = 12/41) (fig 1).
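the three-way significance classification described in the methods can be sketched as a small function; the p < 0.05 threshold is from the text, while the function name and the use of `None` for "not assessed" are illustrative:

```python
def classify_significance(p_value, alpha=0.05):
    """Classify a study's main-objective outcome as in the review's scheme.

    p_value is None when the study did not statistically assess its main outcome.
    """
    if p_value is None:
        return "not assessed"
    return "statistically significant" if p_value < alpha else "not statistically significant"

print(classify_significance(0.03))   # statistically significant
print(classify_significance(0.20))   # not statistically significant
print(classify_significance(None))   # not assessed
```

note that p = 0.05 exactly falls on the "not statistically significant" side under a strict p < 0.05 rule.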
hand searching resulted in the inclusion of seven additional studies. there was 85% agreement on title and abstract screening and 89% agreement on full-text screening. the 31 included studies were published between 2000 and 2020 and primarily conducted in north america (n = 20, 65%) or europe (n = 9, 29%), and with neonatal or pediatric critical care populations (n = 23, 74%) (table 1). fig 2a depicts the different icu types from the included studies. the median start date was 2015 (range: 1997-2016) and the median duration was 19 months (range: 3-95 months). many studies (n = 9, 33%) were interventional studies [22, 23, 27, 29, 30, 32, 33, 39, 48], of which most (6/9) were conducted in neonatal icus. we included six qualitative studies, most (4/6) conducted with neonatal or pediatric critical care populations. caregivers were most commonly parents (n = 19, 61%) [30, 31, 33, 35-38, 41-44, 47-49, 51, 52] and unspecified family caregivers (n = 7, 23%) [23, 24, 26, 27, 32, 39, 50], a term that could include parents but was defined more broadly. one study was specific to mothers [40] and one study was specific to fathers [45]. few studies reported additional perspectives from members of the clinical care team (e.g., nurses, primary care physicians) (n = 3, 10%) [29, 34, 50] or critical care patients (n = 3, 10%) [22, 28, 49]. more than half of the studies examined real-time communication platforms (e.g., facetime, skype) (n = 17, 55%) [23, 24, 28, 30-40, 47, 51, 52], which accounted for many of the studies conducted with adult populations (3/7, 43%) and most of the studies conducted with neonatal or pediatric populations (14/22, 64%). included studies were categorized by the type of social media tool used (s3 table). fig 2b depicts the different specific social media tools from the included studies.
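the category percentages can be reproduced from raw counts; a minimal sketch using the tool counts reported in the abstract (17, 6, and 5 of 31 studies), with the remaining three studies lumped under "other" purely for illustration:

```python
from collections import Counter

# counts per tool category from the abstract (n = 31 studies);
# "other" is an illustrative bucket for the three remaining studies
counts = Counter({"communication platform": 17, "social networking site": 6,
                  "content community": 5, "other": 3})
total = sum(counts.values())

for category, n in counts.most_common():
    print(f"{category}: n = {n} ({100 * n / total:.0f}%)")
```

this reproduces the 55%, 19%, and 16% figures reported in the text for the three main categories.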
real-time communication platforms, which allow user communication via messages, voice, and/or video, were the most common social media tool used (n = 15, 56%), followed by social networking sites (n = 6, 19%) and content communities (n = 5, 16%). few studies (n = 2, 7%) assessed the use of blogs or microblogs, and only two studies examined social media use in general. overall, most social media tools included functions that operated like communication platforms, such that they provided the option for users to post and share experiences. many studies (n = 8, 30%) included a social media tool as part of a complex intervention, and most of these studies (n = 6/8) used mobile phones to facilitate the social media component. all of these studies (n = 6/6) reported that the ubiquitous nature and technical capacity of mobile phones were strong motivating factors. several of these studies (n = 5/6) addressed potential misuse of information and privacy concerns over text messaging by establishing a dedicated study mobile phone, and provided recommendations to the clinical care team (i.e., nurses, physicians) for text messaging with informal caregivers. the most common intended use of social media was for caregiver satisfaction (n = 9, 29%). most studies that examined caregiver satisfaction used communication platforms (n = 8/9). social networking sites were often used to improve self-care (n = 2/6, 33%), and content communities were mainly intended to improve patient safety (n = 2/4, 50%). there were few studies that addressed clinical decision making (n = 4, 13%), and half (n = 2/4) used content communities. five studies (16%) did not fit the framework and were classified as "other"; three of these studies reported the prevalence of social networking use (n = 1) or of internet use more broadly (n = 2), and two compared mothers' and fathers' use of information and communication technology (n = 1) or frequency and length of webcam viewing (n = 1).
usage feasibility and patient and caregiver experience outcomes were most commonly reported (n = 30 and n = 23, respectively) (table 2). patient and caregiver knowledge outcomes were reported in 16 studies (52%), and use of services and cost outcomes, and health behaviors and health status outcomes, were reported in eight studies each. among outcomes related to usage feasibility (n = 30), measures of usage and demographics were most common (n = 22, 73%) and were often accompanied by measures of users' attitudes and preferences (n = 20, 67%). measures of patient or caregiver satisfaction or of clinician-patient/caregiver communication were most commonly reported for outcomes related to patient and caregiver experience (n = 13 and n = 12, respectively). fig 3a provides a summary of outcomes as they relate to the study objectives. there were no defining trends between outcomes with regard to objectives for social media use, but measures related to the use of services and cost, or to health behaviors and health status, were generally least reported for any objective. one study reported outcomes related to the potential for unintended consequences or harm from social media tools [50].

[table 1 footnotes: (2) reported at least one outcome related to social media use by an informal (i.e., non-clinical) caregiver; adult patient defined as >15 years. (3) reporting prevalence of internet use among critically ill septic patients and caregivers. (4) comparing mothers' and fathers' use of information and communication technology. (5) comparing mothers' and fathers' frequency and length of viewing their hospitalized neonate via webcam. (6) reporting prevalence of social networking site use among parents of preterm infants. (7) determining parents' perceptions and preferences for information sharing in the neonatal intensive care unit.]
studies that collected data during and/or after 2016 reported only positive, negative, or indeterminate effects of social media use (fig 4a). the majority of studies with a sample size >300 reported a negative effect, and the majority of studies with a sample size of 100-300 or <100 reported a positive effect (fig 4b). prospective observational studies commonly reported a neutral effect, and the majority of prospective intervention studies reported a positive effect (fig 4c). among the studies that assessed statistical significance, the majority determined that social media use had a positive effect (fig 4d). the most common type of study design was interventional (n = 9, 29%), of which 4 were controlled by randomization (i.e., rcts), followed by prospective cohort (n = 8, 26%) and qualitative (n = 6, 19%). of the quantitative studies (n = 25, 68%), the majority assessed statistical significance (n = 20/25), and the majority of those determined there was a significantly positive effect of social media use (n = 12/20). among the randomized interventions (n = 4), two found a significantly positive effect, one found a significantly negative effect, and one did not assess statistical significance. fig 3b provides a summary of authors' conclusions of social media use with regard to study objectives. the majority of studies with the objectives of improving health literacy, self-care, patient safety, or caregiver satisfaction reported a statistically significant positive effect. among the four studies that aimed to improve clinical decision making, one study reported a positive effect but did not assess statistical significance, and three studies reported a negative effect but only two assessed significance.

we used scoping review methodology to synthesize the literature on the extent, range, and nature of research evidence on the impact of social media interventions and tools among informal caregivers of critically ill patients.
there is a growing body of literature, primarily from neonatal or pediatric populations, suggesting that real-time communication platforms are now commonly used social media tools among informal caregivers of critically ill patients. in contrast, there is very little literature regarding caregiver use of social networking sites, blogs, or content communities. the most common intended use for social media was to improve caregiver satisfaction with the experience and role of an informal caregiver of a critically ill patient. outcomes related to usage feasibility, such as measures of users' attitudes, preferences, and demographics, were nearly always reported. few studies assessed the cost-effectiveness of using social media tools with informal caregivers, and outcomes related to health behaviors and health status of either the patient or caregiver were reported infrequently. although most studies concluded that the use of social media among informal caregivers is beneficial and meaningful, the potential for unintended consequences or harm specific to informal caregivers was not adequately explored. the low reliability and high variability of content shared on social media highlight the importance of control from medical personnel to avoid the spread of "fake news" [53]. the emerging utilization of social media tools among informal caregivers of critically ill patients has practical implications for critical care medicine. modern mobile phones are powerful computational devices. the technical capacity of mobile phones to facilitate phone-based health interventions was a motivating factor for several included studies. mobile phones are also omnipresent and nearly always at hand [54], which makes it possible to extend points of care to virtually any place and time [55].
the combination of the technical capacity, personal nature, and convenient proximity of mobile phones has reduced barriers to adoption and increased acceptance of phone-based health interventions in numerous healthcare settings [56]. the immediacy of access of mobile phones might also be useful to informal caregivers after patient discharge by providing prompt advice and support, which may reduce healthcare costs by preventing hospital or icu readmission. mobile phones in healthcare settings also have disadvantages. with regard to nursing, disruption of workflow, interruption of practice, and improper usage have been reported [57]. for example, in the study conducted by piscotty and colleagues [58], 67% of nurses checked their mobile phone more than 2 times per shift and 22% checked their mobile phone more than 10 times per shift. further, the possibility of misuse of information that may violate patient privacy remains an unresolved problem [59]. nursing organizations have responded with guidelines on professional social media use in the workplace [60-62]. many included studies addressed potential privacy issues by establishing a dedicated study mobile phone, and recommended refraining from using patient last names and conditions, keeping communications brief, and destroying caregiver phone numbers after patient discharge [63].

[fig 3 caption: conclusions on social media use with regard to patient- and caregiver-focused objectives. (1) adapted from coulter and ellins, 2007; (2) only the main study objective was recorded from a single study; (3) more than one outcome category could be recorded from a single study; (4) only one overall conclusion was recorded from each study. frequency indicated by color: red, very frequent; yellow, moderately frequent; green, infrequent. n, number of studies.]
that mobile phones may be useful to facilitate social media interventions in critical care medicine is a noteworthy finding of this review, but further research is needed on how social media strategies can be implemented into practice without violating privacy or ethical considerations. support and encouragement can contribute to caregiver confidence, which can promote better understanding of a stressful illness-related situation and enable the caregiver to provide better care [64]. many included studies found that caregivers reported a more satisfactory critical care experience and increased knowledge of a patient's condition and long-term treatment options when provided with links to online resources with credible information. in the last decade, several members of the united states critical care societies collaborative have started using social media [65]. one such member, the society of critical care medicine, uses web-based education initiatives to provide accurate and reliable information to its members and the public [15]. similarly, the world federation of societies of intensive and critical care medicine recognizes that social media plays a large role in achieving more and better involvement with other member societies, and actively uses social media to liaise with important groups, such as young clinicians [66]. considering how critical care societies use diverse approaches to deliver overlapping educational content can provide a rich opportunity to inform the development of future web-based education initiatives targeted specifically at informal caregivers. real-time communication platforms have been studied and implemented in many healthcare settings [67, 68].
several included studies found that in neonatal icu populations, parents who communicated with the clinical care team using videoconferencing instruments (e.g., facetime, skype) felt significantly more satisfied with their infants' care when they were unable to be physically present. no study conducted in adult icu populations used a social media tool dedicated entirely to videoconferencing, although most social media tools included functions that operated similarly to communication platforms. further, no included study from any icu reported the use of communication platforms to engage non-local family members or young children who may benefit from remote communication with their loved one. since many communication platforms are free to download on most electronic devices and allow for multiple users at once, an important area for future research is the use of communication platforms by entire support groups of both adult and non-adult critical care patients. this type of research is warranted to determine whether positive outcomes of communication platforms depend on whether the caregiver's relationship to the patient is parent-child (i.e., parent providing support to children) versus child-parent (i.e., children providing support to parents). it is important to recognize that social media tools are exactly that: tools, rather than a substitute for personal interaction with healthcare providers. recent studies in other healthcare settings have found that patients value in-person interaction with healthcare providers more than social media communication, and that healthcare providers are regarded as the most important source of information [69]. knowledge of the values and preferences of the clinical care team, however, is lacking, and a common concern of many clinicians is that information shared on social media may not always be accurate.
more understanding of physician preferences and social media accuracy is important, as physicians often rely on patients' informal caregivers to make decisions regarding the patient's care, which frequently contributes to caregiver psychological morbidity [70]. individualized social media interventions adapted to caregiver preferences may improve caregivers' satisfaction and reduce psychological morbidity [13]. more research on the accurate, proper, and potential use of social media in critical care medicine is required before implementation into daily practice. our review indicates there is untapped potential for social media interventions and tools to provide personalized support to informal caregivers of the critically ill. we recommend that future inquiry on this topic examine social media mental health interventions and their effect on psychological outcomes of informal caregivers of the critically ill. this information is particularly relevant to challenges related to restricted visitation and social isolation associated with the covid-19 pandemic [71]. the large numbers of patients experiencing critical illness and the visiting restrictions enacted to prevent the spread of covid-19 complicate participation of informal caregivers in patient care and recovery [72]. these factors are likely to make mental health consequences of critical illness on informal caregivers more prevalent and severe [73, 74]. social media interventions and tools may be an effective mode of mental health support for informal caregivers of critically ill patients. this scoping review has several strengths. we conducted an extensive literature search and screened reference lists of included studies in order to identify the full breadth of available literature on social media use in critical care populations. the search was executed in five bibliographic databases and was not restricted by language or dates.
it was intentionally broad to ensure that social media use across all critical care populations was included. we followed rigorous methodology defined by adherence to recommended protocols and reporting criteria for scoping reviews. further, the interdisciplinary team of a critical care physician, a critical care nurse, and a psychiatric epidemiologist offered complementary expertise and knowledge. in spite of these strengths, there are limitations to note. we did not search the grey literature, nor did we search social media itself, and could have missed studies, though our search strategy was comprehensive and full-text hand searching was completed. as well, the lack of a universal definition for social media, since social media is a relatively new concept that is continually transforming, added complexity to the process of study selection. however, our broad inclusion of study designs allowed us to produce a comprehensive summary of the state of the literature on social media use by informal caregivers in critical care medicine. ultimately, the relatively rapid evolution of social media means studies on usage will nearly exclusively reflect social media use of the past. though such studies are valuable, it is important to note that the medium of social media is evolving faster than it is being studied. there is a growing evidence base to support the use of social media among informal caregivers of critically ill patients. there is untapped potential for social media tools to provide personalized support to informal caregivers. social media tools might enable informal caregivers to gain the knowledge that they need in order to feel empowered, involved, and satisfied. social media users should exercise caution on applications and networking sites so as not to compromise patient privacy.
in sum, social media represents a flexible medium to deliver health information, and the individualized support that caregivers can obtain through using social media may promote an invaluable collaborative relationship when caring for critically ill patients. supporting information s1 social media.: oxford dictionary social media in critical care social media-statistics & facts ranked by numbers of active users (in millions). statista social media usage: 2005-2015. pew research ceter users of the world, unite! the challenges and opportunities of social media neurology and the internet: a review patient and family members' perceptions of family participation in care on acute care wards a new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication social media and health care professionals: benefits, risks, and best practices a comparison of the opinions of intensive care unit staff and family members of the treatment intensity received by patients admitted to an intensive care unit: a multicentre survey a randomized trial of a family-support intervention in intensive care units association of surrogate decision-making interventions for critically ill adults with patient, family, and resource use outcomes: a systematic review and meta-analysis translating evidence to patient care through caregivers: a systematic review of caregiver-mediated interventions social media engagement and the critical care medicine community evaluation of mobile apps targeted to parents of infants in the neonatal intensive care unit: systematic app review. 
we thank dr. diane lorenzetti (university of calgary) for the development of the search strategies. key: cord-197474-2wzf7nzz authors: baly, ramy; martino, giovanni da san; glass, james; nakov, preslav title: we can detect your bias: predicting the political ideology of news articles date: 2020-10-11 journal: nan doi: nan sha: doc_id: 197474 cord_uid: 2wzf7nzz we explore the task of predicting the leading political ideology, or bias, of news articles. first, we collect and release a large dataset of 34,737 articles that were manually annotated for political ideology (left, center, or right) and that is well-balanced across both topics and media. we further use a challenging experimental setup where the test examples come from media that were not seen during training, which prevents the model from learning to detect the source of the target news article instead of predicting its political ideology. from a modeling perspective, we propose an adversarial media adaptation, as well as a specially adapted triplet loss.
we further add background information about the source, and we show that it is quite helpful for improving article-level prediction. our experimental results show very sizable improvements over using state-of-the-art pre-trained transformers in this challenging setup. in any piece of news, there is a chance that the viewpoint of its authors, and of the media organization they work for, would be reflected in the way the story is being told. the emergence of the web and of social media has led to the proliferation of information sources, whose leading political ideology or bias may not be explicit. yet, systematic exposure to such bias may foster intolerance as well as ideological segregation, and ultimately it could affect voting behavior, depending on the degree and the direction of the media bias, and on the voters' reliance on such media (dellavigna and kaplan, 2007; iyengar and hahn, 2009; saez-trumper et al., 2013; graber and dunaway, 2017). thus, making the general public aware of bias in the news, e.g., by tracking and exposing it, is important for a healthy public debate, given the important role media play in a democratic society. media bias can come in many different forms, e.g., by omission, by over-reporting on a topic, by cherry-picking the facts, or by using propaganda techniques such as appealing to emotions, prejudices, fears, etc. bias can occur with respect to a specific topic, e.g., covid-19, immigration, climate change, or gun control. it could also be more systematic, as part of a political ideology, which in the western political system is typically defined as left vs. center vs. right political leaning. predicting the bias of individual news articles can be useful in a number of scenarios. for news media, it could be an important element of internal quality assurance, as well as of internal or external monitoring for regulatory compliance.
for news aggregator applications, such as google news, it could enable balanced search, similarly to what is found on allsides. for journalists, it could enable news exploration from a left/center/right angle. it could also be an important building block in a system that detects bias at the level of entire news media (baly et al., 2018, 2020), e.g., to offer explainability: if a website is classified as left-leaning, the system should be able to pinpoint specific articles that support this decision. in this paper, we focus on predicting the bias of news articles as left-, center-, or right-leaning. previous work has focused on doing so at the level of news media (baly et al., 2020) or social media users, but rarely at the article level (kulkarni et al., 2018). the scarce article-level research has typically used distant supervision, assuming that all articles from a given medium share its overall bias, which is not always the case. here, we revisit this assumption. our contributions can be summarized as follows: • we create a new dataset for predicting the political ideology of news articles. the dataset is annotated at the article level and covers a wide variety of topics, providing balanced left/center/right perspectives for each topic. • we develop a framework that discourages the learning algorithm from modeling the source instead of focusing on detecting bias in the article. we validate this framework in an experimental setup where the test articles come from media that were not seen at training time. we show that adversarial media adaptation is quite helpful in that respect, and we further propose to use a triplet loss, which yields sizable improvements over state-of-the-art pre-trained transformers. • we further incorporate media-level representations to provide background information about the source, and we show that this information is quite helpful for improving the article-level prediction even further.
the rest of this paper is organized as follows: we discuss related work in section 2. then, we introduce our dataset in section 3, we describe our models for predicting the political ideology of a news article in section 4, and we present our experiments and discuss the results in section 5. finally, we conclude with possible directions for future work in section 6. most existing datasets for predicting political ideology at the news article level were created by crawling the rss feeds of news websites with known political bias (kulkarni et al., 2018), and then projecting the bias label from a website to all articles crawled from it, which is a form of distant supervision. the crawling can also be done using text search apis rather than rss feeds (horne et al., 2019; gruppi et al., 2020). the media-level annotation of political leaning is typically obtained from specialized online platforms, such as news guard, allsides, and media bias/fact check, where highly qualified journalists use carefully designed guidelines to make the judgments. as manual annotation at the article level is very time-consuming, requires domain expertise, and can also be subjective, such annotations are rarely available. as a result, automated systems for political bias detection have opted for distant supervision as an easy way to obtain the large datasets needed to train contemporary deep learning models. distant supervision is a popular technique for annotating datasets for related text classification tasks, such as detecting hyper-partisanship (horne et al., 2018; potthast et al., 2018) and propaganda/satire/hoaxes (rashkin et al., 2017). for example, kiesel et al.
(2019) created a large corpus for detecting hyper-partisanship (i.e., articles with extreme left/right bias) consisting of 754,000 articles annotated via distant supervision, plus an additional 1,273 manually annotated articles, part of which was used as a test set for the semeval-2019 task 4 on hyper-partisan news detection. the winning system was an ensemble of character-level cnns (jiang et al., 2019). interestingly, all top-performing systems in the task achieved their best results when training on the manually annotated articles only and ignoring the articles that were labeled using distant supervision, which illustrates the dangers of relying on distant supervision. barrón-cedeno et al. (2019) extensively discussed the limitations of distant supervision in a text classification task about article-level propaganda detection, in a setup similar to the one we deal with in this paper: the learning systems may learn to model the source of the article instead of solving the task they are actually trained for. indeed, they showed that the error rate may drastically increase if such systems are tested on articles from sources that were never seen during training, and that this effect is positively correlated with the representation power of the learning model. they analyzed a number of representations and machine learning models, showing which ones tend to overfit more, but, unlike our work here, they fell short of recommending a practical solution. budak et al. (2016) measured bias at the article level using crowd-sourcing. this is risky, as public awareness of media bias is limited (elejalde et al., 2018). moreover, the annotation setup does not scale. finally, their dataset is not freely available, and their approach of randomly crawling articles does not ensure that topics and events are covered from different political perspectives. lin et al.
(2006) built a dataset annotated with the ideology of 594 articles related to the israeli-palestinian conflict published on bitterlemons.org. the articles were written by two editors and 200 guests, which minimizes the risk of modeling the author style. however, the dataset is too small to train modern deep learning approaches. kulkarni et al. (2018) built a dataset using distant supervision and labels from allsides. distant supervision is fine for the purpose of training, but they also used it for testing, which can be problematic. moreover, their training and test sets contain articles from the same media, and thus models could easily learn to predict the article's source rather than its bias. in their models, they used both the text and the url contents of the articles. overall, political bias has been studied at the level of the news outlet (dinkov et al., 2019; baly et al., 2018, 2020; zhang et al., 2019), user, article (potthast et al., 2018), and sentence (sim et al., 2013; saez-trumper et al., 2013). in particular, baly et al. (2018) developed a system to predict the political bias and the factuality of news media. in follow-up work, they showed that bias and factuality of reporting should be predicted jointly. a finer-grained analysis was performed by horne et al. (2018), who trained a model on 10k sentences from a dataset of reviews (pang and lee, 2004) and used it to discriminate objective versus non-objective sentences in news articles. lin et al. (2006) presented a sentence-level classifier, where the labels were projected from the document level. in this section, we describe the dataset that we created and used in our experiments.
while most platforms that analyze the political leaning of news media provide in-depth analysis of particular aspects of the media, allsides stands out as it provides annotations of political ideology for individual articles, which ensures high-quality data for both training and testing, in contrast with the distant supervision approaches used in most previous research, as we have seen above. at allsides, these annotations are the result of a rigorous process that involves blind bias surveys, editorial reviews, third-party analysis, independent reviews, and community feedback. furthermore, allsides uses the annotated articles to enable its balanced search, which shows news coverage on a given topic from media with different political bias. in other words, for each trending event or topic (e.g., impeachment or the coronavirus pandemic), the platform pushes news articles from all sides of the political spectrum, as shown in figure 1. we took advantage of this and downloaded all articles along with their political ideology annotations (left, center, or right), their assigned topic(s), the media in which they were published, their author(s), and their publication date. thus, our dataset contains articles that were manually selected and annotated, and that are representative of the real political scenery. note that the center class covers articles that are biased towards a centrist political ideology, not articles that lack political bias (e.g., sports and technology), which commonly exist in news corpora built by scraping rss feeds. we collected a total of 34,737 articles published by 73 news media and covering 109 topics. in this dataset, a total of 1,080 individual articles (3.11%) have a political ideology label that is different from their source's. this suggests that, while the distant supervision assumption generally holds, we would still find many articles that defy it. table 1 shows some statistics about the dataset.
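the 3.11% disagreement figure above can be recomputed directly from the annotations; a minimal sketch in plain python, assuming the data is available as (medium, article_label) pairs (the field layout and function name are illustrative, not the dataset's actual schema):

```python
from collections import Counter

def disagreement_rate(articles):
    """fraction of articles whose label differs from their source's
    majority label (a proxy for the distant-supervision label)."""
    per_medium = {}
    for medium, label in articles:
        per_medium.setdefault(medium, Counter())[label] += 1
    majority = {m: c.most_common(1)[0][0] for m, c in per_medium.items()}
    defiant = sum(1 for m, lab in articles if lab != majority[m])
    return defiant / len(articles)

toy = [("a", "right"), ("a", "right"), ("a", "center"),
       ("b", "center"), ("b", "center")]
print(disagreement_rate(toy))  # 0.2
```

run over the full dataset, with each medium's allsides label in place of the majority vote, this corresponds to the 1,080/34,737 ≈ 3.11% figure reported above.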
figure 2 illustrates the distribution of the different political bias labels within each of the most frequent topics. we can see that our dataset is able to represent topics or events from different political perspectives. this is yet another advantage: it poses a more challenging task, where machine learning models must detect the linguistic and semantic nuances of different political ideologies in news articles, as opposed to cases where certain topics are coincidentally collocated with certain labels, in which case the models would actually be learning to detect the topics instead of predicting the political ideology of the target news article. it is worth noting that, since most article labels are aligned with their source labels, machine learning classifiers are likely to end up modeling the source instead of the political ideology of the individual articles. for example, a model could learn the writing style of each medium and then associate it with a particular ideology. therefore, we pre-processed the articles in a way that eliminates explicit markers such as the names of the authors, or the name of the medium that usually appears as a preamble to the article's content, or in the content itself. furthermore, in order to ensure that we are actually modeling the political ideology as it is expressed in the language of the news, we created evaluation splits in two different ways: (i) randomly, which is what is typically done (for comparison only), and (ii) based on media, where all articles by the same medium appear in either the training, the validation, or the testing dataset. the latter form of splitting helps indicate what a trained classifier has actually learned: if it modeled the source, then it would not be able to perform well on the test set, since all its articles would belong to sources that were never seen during training.
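the media-based protocol described above amounts to a grouped split where the grouping key is the medium; a minimal sketch (the function name and simplifications are ours: the real splits also fix a 1,200-article test set and stratify by label, which is omitted here):

```python
import random

def media_based_split(articles, test_frac=0.2, seed=0):
    """split (medium, label) records so that no medium contributes
    articles to both the training and the test side."""
    media = sorted({m for m, _ in articles})
    rng = random.Random(seed)
    rng.shuffle(media)
    n_test = max(1, int(len(media) * test_frac))
    test_media = set(media[:n_test])
    train = [a for a in articles if a[0] not in test_media]
    test = [a for a in articles if a[0] in test_media]
    return train, test

articles = [(f"medium{i}", "left") for i in range(5) for _ in range(3)]
train, test = media_based_split(articles)
# no medium appears on both sides of the split
assert {m for m, _ in train}.isdisjoint({m for m, _ in test})
```

a random split, by contrast, shuffles articles directly, so the classifier sees every medium's style at training time.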
in order to ensure fair one-to-one comparisons between experiments, we created these two different sets of splits, while making sure that they share the same test set, as follows: • media-based split: we sampled 1,200 articles from 12 news media (100 per medium) and used them as the test set, and we excluded the remaining 5,470 articles from these media. then, we used the articles from the remaining 61 media to create the training and the validation sets, where all articles from the same medium appear in the same set. this ensures that the model is fine-tuned and tested on articles whose sources were not seen during training. • random split: here, the test set is the same as in the media-based split. the 5,470 articles that we excluded from the 12 media are now added to the articles from the 61 remaining media. then, we split this collection of articles (using stratified random sampling) into training and validation sets. this ensures that the model is fine-tuned and evaluated only on articles whose sources were observed during training. table 2 shows statistics about both splits, including the size of each set and the number of media and topics they cover. we release the dataset, along with the evaluation splits and the code, which can be used to extend the dataset as more news articles are added to allsides. the task of predicting the political ideology of news articles is typically formulated as a classification problem, where the textual content of the articles is encoded into a vector representation that is used to train a classifier to predict one of c classes (in our case, c = 3: left, center, and right).
in our experiments, we use two deep learning architectures: (i) long short-term memory networks (lstms), recurrent neural networks (rnns) that use gating mechanisms to selectively pass information across time and to model long-term dependencies (hochreiter and schmidhuber, 1997), and (ii) bidirectional encoder representations from transformers (bert), whose contextualized embeddings have been successful in several natural language processing tasks (devlin et al., 2019). ultimately, our goal is to develop a model that can predict the political ideology of a news article. our dataset, along with some others, has a special property that might stand in the way of achieving this goal: most articles published by a given source have the same ideological leaning. this might confuse the model and cause it to erroneously associate the output classes with features that characterize entire media outlets (such as specific writing patterns or stylistic markers in the text). consequently, the model would fail when applied to articles published in media unseen during training. the experiments in section 5 confirm this. thus, we apply two techniques to de-bias the models, i.e., to prevent them from learning the style of a specific news medium rather than predicting the political ideology of the target news article. the first technique, adversarial adaptation, was originally proposed by ganin et al. (2016) for unsupervised domain adaptation in image classification. their objective was to adapt a model trained on labelled images from a source domain to a novel target domain, where the images have no labels for the task at hand. this is done by adding an adversarial domain classifier with a gradient reversal layer to predict the examples' domains. the label predictor's loss is minimized for the labelled examples (from the source domain), and the adversarial domain classifier's loss is maximized for all examples in the dataset.
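the gradient reversal layer at the heart of this setup behaves as the identity in the forward pass and flips (and scales) gradients in the backward pass. a framework-agnostic sketch of that behavior (in pytorch it would be a custom autograd function; the class below, with list-valued gradients, is illustrative only):

```python
class GradientReversal:
    """identity forward; multiplies incoming gradients by -lambda backward."""

    def __init__(self, lambd):
        self.lambd = lambd

    def forward(self, features):
        # the adversarial classifier sees the encoder features unchanged
        return features

    def backward(self, upstream_grad):
        # the encoder receives the adversarial classifier's gradient with its
        # sign flipped, so minimizing that loss downstream maximizes it
        # upstream, pushing the encoder towards domain-invariant features
        return [-self.lambd * g for g in upstream_grad]

grl = GradientReversal(lambd=0.7)
assert grl.forward([1.0, 2.0]) == [1.0, 2.0]
print(grl.backward([0.5, -1.0]))  # [-0.35, 0.7]
```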
as a result of this adversarial setup, the encoder extracts a representation that is (i) discriminative for the main task and (ii) invariant across domains (due to the gradient reversal layer). the overall loss is minimized as follows: $$E(\theta_f, \theta_y, \theta_d) = \sum_{\substack{i=1 \\ d_i = 0}}^{n} L_y^i(\theta_f, \theta_y) - \lambda \sum_{i=1}^{n} L_d^i(\theta_f, \theta_d) \quad (1)$$ where n is the number of training examples, $L_y^i(\cdot, \cdot)$ is the label predictor's loss, the condition $d_i = 0$ means that only examples from the source domain are used to calculate the label predictor's loss, $L_d^i(\cdot, \cdot)$ is the domain classifier's loss, λ controls the trade-off between both losses, and $\{\theta_f, \theta_y, \theta_d\}$ are the parameters of the encoder, the label predictor, and the domain classifier, respectively. further details about the formulation of this method are available in (ganin et al., 2016). we adapt this architecture as follows. instead of a domain classifier, we implement a media classifier, which, given an article, tries to predict the medium it comes from. as a result, the encoder should extract a representation that is discriminative for the main task of predicting political ideology, while being invariant across media. this approach was originally proposed for unsupervised domain adaptation, since labelled examples were available for one domain only, whereas in our case all articles from all media are labelled for their political ideology. therefore, we jointly minimize the losses of both the label predictor and the media classifier over the entire dataset. the new objective function to minimize is: $$E(\theta_f, \theta_y, \theta_m) = \sum_{i=1}^{n} L_y^i(\theta_f, \theta_y) - \lambda \sum_{i=1}^{n} L_m^i(\theta_f, \theta_m) \quad (2)$$ where $L_m^i(\cdot, \cdot)$ is the loss of the media classifier, and $\theta_m$ is its set of parameters. in the second approach, triplet loss pre-training, we pre-train the encoder using a triplet loss (schroff et al., 2015). the model is trained on a set of triplets, each composed of an anchor, a positive, and a negative example. the objective in eq.
3 ensures that the positive example is always closer to the anchor than the negative example, where a, p, and n are the encodings of the anchor, the positive, and the negative examples, respectively, and $d(\cdot, \cdot)$ is the euclidean distance: $$L = \max\big(d(a, p) - d(a, n) + \alpha,\ 0\big) \quad (3)$$ where α is the enforced margin. figure 3 shows an example of such a triplet. the positive example shares the same ideology as the anchor's, but they are published by different media. the negative example has a different ideology than the anchor's, but they are published by the same medium. in this way, the encoder clusters examples with similar ideologies close to each other, regardless of their source. once the encoder has been pre-trained, its parameters, along with the softmax classifier's, are fine-tuned on the main task by minimizing the cross-entropy loss when predicting the political ideology of articles. finally, we explore the benefits of incorporating information describing the target medium, which can serve as a complementary representation for the article. while this may seem counter-intuitive given what we proposed in subsection 4.2, we believe that medium-level representations can be valuable when combined with an accurate representation of the article. intuitively, having an accurate understanding of the natural language in the article, together with a glimpse into the medium it is published in, should provide a more complete picture of its underlying political ideology. baly et al. (2020) proposed a comprehensive set of representations to characterize news media from different angles: how a medium portrays itself, who its audience is, and what is written about it. their results indicate that examining the twitter bios of a medium's followers offers good insight into its political leaning. to a lesser extent, the content of a wikipedia page describing a medium can also help unravel its political leaning.
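the triplet objective of eq. 3 described above is small enough to sketch directly; a toy version on 2-d encodings (the margin value is an assumption for illustration, as the text does not report the one used):

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    # hinge on euclidean distances: the positive must end up closer to the
    # anchor than the negative, by at least `margin`
    d = math.dist
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

# the positive shares the anchor's ideology but not its medium; the negative
# shares the medium but not the ideology (cf. figure 3)
print(triplet_loss((0.0, 0.0), (0.0, 1.0), (3.0, 4.0)))  # 0.0 (constraint satisfied)
print(triplet_loss((0.0, 0.0), (0.0, 1.0), (0.0, 1.5)))  # 0.5 (margin violated)
```

minimizing this loss over many such triplets is what pulls same-ideology articles together in the embedding space while pushing same-medium, different-ideology articles apart.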
we therefore concatenated these media-level representations with the encoded articles, at the output of the encoder and right before the softmax layer, so that the article encoder and the classification layer based on the article and the external media representations are trained jointly and end-to-end. similarly to baly et al. (2020), we retrieved the profiles of up to 1,000 twitter followers for each medium, encoded their bios using the sentence-bert model (reimers and gurevych, 2019), and then averaged these encodings to obtain a single representation for that medium. as for the wikipedia representation, we automatically retrieved the content of the page describing each medium, whenever available. then, we used the pre-trained base bert model to encode this content by averaging the word representations extracted from bert's second-to-last layer, which is common practice, since the last layer may be biased towards bert's pre-training objectives. we evaluated both the lstm and the bert models, assessing the impact of (i) de-biasing and (ii) incorporating media-level representations. we fine-tuned the hyper-parameters of both models on the validation set using a guided grid search, while fixing the seeds for the random weight initialization. for the lstm, we varied the length of the input (128-1,024 tokens), the number of layers (1-3), the size of the lstm cell (200-400), the dropout rate (0-0.8), the learning rate (1e−3 to 1e−5), the gradient clipping value (0-5), and the batch size (8-256). the best results were obtained with a 512-token input, a 2-layer lstm of size 256, a dropout rate of 0.7, a learning rate of 1e−3, gradient clipping at 0.5, and a batch size of 32. this model has around 1.1m trainable parameters and was trained with 300-dimensional glove input word embeddings (pennington et al., 2014). for bert, we varied the length of the input, the learning rate, and the gradient clipping value.
the best results were obtained using a 512-token input, a learning rate of 2e−5, and gradient clipping at 1. this model has 110m trainable parameters. we trained our models on 4 titan x pascal gpus, and the runtime for each epoch was 25 seconds for the lstm-based models and 22 minutes for the bert-based models. for each experiment, the model was trained only once, with fixed seeds used to initialize the models' weights. for the adversarial adaptation (aa), we have an additional hyper-parameter λ (see equation 2), which we varied from 0 to 1, where 0 means no adaptation at all. the best results were obtained with λ = 0.7, which means that we need to pay significant attention to the adversarial classifier's loss in order to mitigate the media bias. for the triplet loss pre-training (tlp), we sampled 35,017 triplets from the training set, such that the examples in each triplet discuss the same topic, in order to ensure that a change in topic has minimal impact on the distance between the examples. to evaluate our models, we use accuracy and macro-f1 score (f1 averaged across all classes), which we also used as an early stopping criterion, since the classes were slightly imbalanced. moreover, given the ordinal nature of the labels, we report the mean absolute error (mae), shown in equation (4), where n is the number of instances, and $y_i$ and $\hat{y}_i$ are the true and the predicted labels, respectively: $$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \quad (4)$$ baseline results: the results in table 3 show the performance of the lstm and of bert at predicting the political ideology of news articles for both the media-based and the random splits. we observe sizable differences in performance between the two splits. in particular, both models perform much better when trained and evaluated on the random split, whereas they both fail on the media-based split, where they are tested on articles from media that were not seen during training.
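with the ordinal labels mapped to integers (e.g., left=0, center=1, right=2; the mapping is our assumption for illustration), the mae of equation (4) is straightforward to compute:

```python
def mae(y_true, y_pred):
    """mean absolute error over ordinal class indices (equation 4)."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# confusing left with right costs twice as much as a one-step error,
# a distinction that accuracy and macro-f1 cannot capture
print(mae([0, 1, 2, 2], [0, 2, 0, 2]))  # 0.75
```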
this performance gap confirms our initial concern that the models tend to learn general characteristics of the news media and then face difficulties with articles coming from new, unseen media. removing the source bias: in order to further confirm the bias towards modeling the media, we ran a side experiment of fine-tuning bert on the task of predicting the medium given the article's content, which is a 73-way classification problem. we used stratified random sampling to create the evaluation splits and to make sure each set contains all labels (media). the results in table 4 confirm that bert is much stronger than the majority-class baseline, despite the high number of classes, which means that predicting the medium in which a target news article was published is a fairly easy task.

table 4: medium prediction results.
model     | macro f1 | acc.
majority  | 0.25     | 10.21
bert      | 59.72    | 80.12

in order to remove the bias towards modeling the medium, we evaluated the impact of the adversarial adaptation (aa) and the triplet loss pre-training (tlp) with the media-based split. the results in table 5 show sizable improvements when either of these approaches is used, compared to the baseline (no de-biasing). in particular, tlp yields an improvement of 14.12 points absolute in terms of accuracy, and 12.73 points in terms of macro-f1. table 6: impact of adding media-level representations to the article-level representations (with and without de-biasing). note that the results in rows 3 and 6 are the same for both lstm and bert because no articles were involved, and the media-level representations were directly used to train the classifier. finally, we evaluated the impact of incorporating the media-level representations (twitter followers' bios and wikipedia content) in addition to the article-level representation. table 6 illustrates these results in an incremental way. first, we evaluated the performance of the media-level representation alone at predicting the political ideology of news articles (see rows 3 and 6).
we should note that these results are identical for the lstm and the bert columns, since no article was encoded in these experiments and the media representation was used directly to train the logistic regression classifier. then, adding the article representation from either model, without any de-biasing, had little or no impact on the performance (see rows 4 vs. 3, and 7 vs. 6). this is not surprising: as we have shown, without de-biasing both models learn more about the source than about the bias in the language of the article. therefore, the ill-encoded articles do not provide more information than what the medium representation already gives, which is why little or no improvement was observed. when we use the triplet loss to mitigate the source bias, the resulting article representation is more accurate and meaningful, and the medium representation offers complementary information, eventually contributing sizable performance gains (see rows 5 and 8 vs. 2). the twitter bios representation appears to be much more important than the representation from wikipedia, which shows the value of inspecting the media followers' background and points of view, in line with one of the observations in (baly et al., 2020). overall, comparing the best results to the baseline (rows 8 vs. 1), we can see that (i) using the triplet loss to remove the source bias and (ii) incorporating media-level representations from twitter followers yield 30.51 and 28.76 points of absolute improvement in terms of macro-f1 on the challenging media-based split. we have explored the task of predicting the leading political ideology of news articles. in particular, we created a new large dataset for this task, which features article-level annotations and is well-balanced across topics and media.
we further proposed an adversarial media adaptation approach, as well as a special triplet loss, in order to prevent modeling the source instead of the political bias of the news article, a common pitfall for approaches dealing with data that exhibit a high correlation between the source of a news article and its class, as is the case with our task here. finally, our experimental results have shown very sizeable improvements over using state-of-the-art pre-trained transformers.

in future work, we plan to explore topic-level bias prediction, as well as going beyond left-center-right bias. we further want to develop models that can detect the specific fragments of an article where bias occurs, thus enabling explainability. last but not least, we plan to experiment with other languages, and to explore to what extent a model for one language is transferable to another, given that the left-center-right division is not universal and does not align perfectly across countries and cultures, even within the western political world.

references:
• predicting factuality of reporting and bias of news media sources
• what was written vs. who read it: news media profiling using text analysis and social media context
• multi-task ordinal regression for jointly predicting the trustworthiness and the leading political ideology of news media
• proppy: organizing the news based on their propagandistic content
• fair and balanced? quantifying media bias through crowdsourced content analysis
• semeval-2020 task 11: detection of propaganda techniques in news articles
• a survey on computational propaganda detection
• fine-grained analysis of propaganda in news articles
• unsupervised user stance detection on twitter
• the fox news effect: media bias and voting
• bert: pre-training of deep bidirectional transformers for language understanding
• predicting the leading political ideology of youtube channels using acoustic, textual, and metadata information
• on the nature of real and perceived bias in the mainstream media
• domain-adversarial training of neural networks
• mass media and american politics
• nela-gt-2019: a large multi-labelled news dataset for the study of misinformation in news articles
• long short-term memory
• assessing the news landscape: a multi-module toolkit for evaluating the credibility of news
• different spirals of sameness: a study of content sharing in mainstream and alternative media
• red media, blue media: evidence of ideological selectivity in media use
• team bertha von suttner at semeval-2019 task 4: hyperpartisan news detection using elmo sentence representation convolutional network
• semeval-2019 task 4: hyperpartisan news detection
• multi-view models for political ideology detection of news articles
• which side are you on? identifying perspectives at the document and sentence levels
• a sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts
• glove: global vectors for word representation
• a stylometric inquiry into hyperpartisan and fake news
• truth of varying shades: analyzing language in fake news and political fact-checking
• sentence-bert: sentence embeddings using siamese bert-networks
• social media news communities: gatekeeping, coverage, and statement bias
• team qcri-mit at semeval-2019 task 4: propaganda analysis meets hyperpartisan news detection
• facenet: a unified embedding for face recognition and clustering
• measuring ideological proportions in political speeches
• predicting the topical stance and political leaning of media using tweets
• tanbih: get to know what you are reading

this research is part of the tanbih project, which aims to limit the effect of "fake news," propaganda, and media bias by making users aware of what they are reading. the project is developed in collaboration between the qatar computing research institute, hbku and the mit computer science and artificial intelligence laboratory.

key: cord-282966-ew8lwmsn authors: haddow, george d.; haddow, kim s. title: communicating during a public health crisis date: 2014-07-22 journal: disaster communications in a changing media world doi: 10.1016/b978-0-12-407868-0.00011-2 sha: doc_id: 282966 cord_uid: ew8lwmsn

"communicating during a public health crisis" examines how communicating to the public and media during a public health or safety emergency is different. in a serious crisis, all affected people take in information differently, process information differently, and act on information differently.
this chapter incorporates the centers for disease control and prevention's (cdc) best advice for communicating during a public health crisis, including infectious disease outbreaks, bioterrorism, chemical emergencies, natural disasters, nuclear accidents, and radiation releases and explosions. this chapter also explores the growing role of social media, which is now being used for a variety of traditional and new purposes, from distress calls to disease surveillance.

social media is now a part of the public health communications toolbox. from the cdc down to local departments of health, public health and safety officials are using social media to push out vital and useful information to the public and to monitor and respond to public comments. but social media is also being used for a broader range of public health purposes, from collecting data to track the spread of diseases to sending calls for help, and the public health system is still figuring out how to adapt. "the use of social media has proven a valuable asset for adaptation and improvisation related to the public health and medical consequences of disasters. these tools are especially valuable for saving lives during a disaster's impact phase and especially during its immediate aftermath, when traditional disaster management capabilities are not available…. the need remains for fusion of social media into existing institutional programs for crisis informatics and disaster-risk management" (keim and noji, 2011).

the centers for disease control and prevention is actively using social media, but social media use by public health agencies is still considered to be in the "early adoption stage" (thackeray et al., 2012).
even though the majority of state health departments (60%) report using at least one social media application, they are "using social media as a channel to distribute information rather than capitalizing on the interactivity available to create conversations and engage with the audience" (thackeray et al., 2012). according to a 2012 report on the use of social media by state health departments, 86.7 percent of the state health departments reported they had a twitter account, 56 percent a facebook account, and 43 percent a youtube channel; but, "on average, state health departments made one post per day on social media sites, and this was primarily to distribute information; there was very little interaction with audiences. shds [state health departments] have few followers or friends on their social media sites. the most common topics for posts and tweets related to staying healthy and diseases and conditions" (thackeray et al., 2012).

the report recommends, "because social media use is becoming so pervasive, it seems prudent for state health departments to strategically consider how to use it to their advantage. to maximize social media's potential, public health agencies should develop a plan for incorporating it within their overall communication strategy. the agency must identify what audience they are trying to reach, how that audience uses social media, what goals and objectives are most appropriate, and which social media applications fit best with the identified goals and objectives" (thackeray et al., 2012).

there are examples of health departments and associations using social media to augment their communications efforts:
• in shelby county, tennessee, the health department is using twitter to increase its media coverage. they tweet out their press releases, which are retweeted by reporters, expanding the department's public reach.
• in philadelphia, the department of hiv planning uses twitter to increase participation in their community workshops. they tweet out the meeting's content to people in the large nine-county area they serve and use twitter to "extend the conversation beyond the room."
• the american public health association (@publichealth) took advantage of the 2013 super bowl to promote related health messages using the #superbowl hashtag. they tweeted about healthy snacks, drinking and driving, and flu vaccination. when the half-hour blackout hit, they took advantage of the unexpected opportunity with the tweet in figure 11.1, which was widely retweeted.

at the 2013 annual meeting of the national association of county and city health officials (naccho), additional examples of health departments' use of social media were highlighted (new public health, 2013):
• the kansas city health department uses twitter and facebook to push information on extreme heat safety during the summertime. the messages and reports of suspected or confirmed heat-related deaths resulted in coverage of health department activities and partnerships on national news channels, including the weather channel and cnn.
• the boston health commission used social media to promote its youth media campaign on sugary beverages. the campaign received close to 30,000 views and close to 23,000 clicks on their facebook ads.
• in contra costa, california, a recent campaign included a podcast by the public health director that was promoted on twitter and facebook. parts of the podcast were picked up by local radio, which allowed the public health department to get its message across accurately.

the cdc, which has been a pioneer in the integration of social media tools into public health communications, including their multichannel "zombie"-themed emergency preparedness public education campaign (cdc, 2012), has developed and is distributing a social media toolkit for health communicators.
the cdc's "socialmediaworks" toolkit was designed to help "health communicators integrate social media strategies and technologies into their communication plans." the kit features tools to develop a better social media strategy, learn how social media tools work, and plan, implement, and manage everything in one place, including "calendar and dashboard features that allow you to schedule and manage your social media initiative," and hosts a community forum to enable health professionals to "engage with colleagues on social media strategy, share lessons learned, and learn what works" (cdc, 2013).

in a new england journal of medicine article, "integrating social media into emergency-preparedness efforts," the three authors point to the pervasiveness of social media as the reason to engage: "it makes sense to explicitly consider the best way of leveraging these communication channels before, during, and after disasters…. engaging with and using emerging social media may well place the emergency-management community, including medical and public health professionals, in a better position to respond to disasters" (merchant et al., 2011). specifically, they suggest:
• actively using networking sites such as facebook to help individuals, communities, and agencies share emergency plans and establish emergency networks. web-based "buddy" systems, for example, might have allowed more at-risk people to receive medical attention and social services during the 1995 chicago heat wave, when hundreds of people died of heat-related illness.
• linking the public with day-to-day, real-time information about how their community's health care system is functioning. for example, emergency room and clinic waiting times are already available in some areas of the country through mobile-phone applications, billboard really simple syndication (rss) feeds, or hospital tweets.
monitoring this important information through the same social channels during an actual disaster may help responders verify whether facilities are overloaded and determine which ones can offer needed medical care.
• using location-based service applications (such as foursquare and loopt) and global positioning system (gps) software to allow people to "check in" to a specific location and share information about their immediate surroundings. with an additional click, perhaps off-duty nurses or paramedics who check in at a venue could also broadcast their professional background and willingness to help in the event of a nearby emergency.
• increasing the use of social media during recovery. the extensive reach of social networks allows people who are recovering from disasters to rapidly connect with needed resources. tweets and photographs linked to timelines and interactive maps can tell a cohesive story about a recovering community's capabilities and vulnerabilities in real-time. "organizations such as ushahidi have helped with recovery in haiti by matching volunteer health care providers with distressed areas. social media have been used in new ways to connect responders and people directly affected by such disasters as the deepwater horizon oil spill, flash floods in australia, and the earthquake in new zealand with medical and mental health services" (merchant et al., 2011).

in late 2002, there was a strange increase in emergency room visits in guangdong province in china for acute respiratory illness, and a number of local news and internet reports about a respiratory disease affecting healthcare workers. several long weeks later, the government announced the cause was severe acute respiratory syndrome, or sars. according to dr.
john brownstein, one of the developers of healthmap, an online platform that mines informal sources for disease outbreak monitoring: "if this data had been harvested properly and promptly, this early epidemic intelligence collected online could have helped contain what became a global pandemic" (brownstein, 2011).

"we are now in an era where epidemic intelligence flows not only through government hierarchies but also through informal channels, ranging from press reports to blogs to chat rooms to analyses of web searches. collectively, these sources provide a view of global health that is fundamentally different from that yielded by disease reporting in traditional public health infrastructures," dr. brownstein explained. "they also provide a process that dramatically reduces the time required to recognize outbreaks" (brownstein, 2011).

more recently, the explosion of online news and social media has brought a new era of disease surveillance. today, the websites healthmap.org and outbreaks near me deliver real-time intelligence on a broad range of emerging infectious diseases for a diverse audience, which includes local health departments, governments, clinicians, and international travelers. healthmap.org states they "bring together disparate data sources, including online news aggregators, eyewitness reports, expert-curated discussions and validated official reports, to achieve a unified and comprehensive view of the current global state of infectious diseases and their effect on human and animal health. through an automated process that updates 24/7/365, the system monitors, organizes, integrates, filters, visualizes and disseminates online information about emerging diseases in nine languages, facilitating early detection of global public health threats" (healthmap.org, 2013). healthmap is part of a growing landscape of government and nongovernment organizations mining internet and social data to determine the spread of viruses and the rate of infection.
some organizations are also asking the public to self-report how they are feeling, according to kim stephens, the lead blogger of idisaster 2.0, who outlines several tools being used to aggregate data to fight the flu and other diseases.

google flu trends is a site that provides geographically based information about the spread of the influenza virus. its data is aggregated from the search terms people are using, rather than from self-reporting. in fact, the curve of tracked flu-related searches is so close to the curve of actual reported cases of the virus that the two almost overlap. google explains how this works: "we have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. of course, not every person who searches for 'flu' is actually sick, but a pattern emerges when all the flu-related search queries are added together. we compared our query counts with traditional flu surveillance systems and found that many search queries tend to be popular exactly when flu season is happening. by counting how often we see these search queries, we can estimate how much flu is circulating in different countries and regions around the world." google's results have been published in the journal nature (stephens, 2013).

mappyhealth is another tool that tracks keywords related to health, but instead of using data from searches in google, this system uses the twitter data stream. their stated reason for the site: "it is hypothesized that social data could be a predictor to outbreaks of disease. we track disease terms and associated qualifiers to present these social trends." although this blog post is focused on influenza, the mappyhealth site tracks 27 different categories of illness (stephens, 2013).

flunearyou is a tool that allows the public to participate in tracking the spread of flu by filling out a survey each week.
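the query-to-cases relationship that google flu trends exploits can be illustrated with a toy computation; the weekly counts below are invented for illustration and are not real google or cdc data:

```python
import numpy as np

# Invented weekly counts: flu-related search queries vs. reported flu cases.
searches = np.array([120, 150, 300, 800, 950, 600, 280, 140])
cases    = np.array([ 10,  14,  33,  85, 101,  64,  30,  15])

# Pearson correlation between query volume and reported cases; a value
# near 1.0 mirrors the "almost overlapping" curves described above.
r = np.corrcoef(searches, cases)[0, 1]
```

with real data, this one-line correlation check is how one might verify that query volume tracks surveillance counts before using searches as a proxy for disease activity.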
the survey is quite simple: it asks the respondent whether they have had any symptoms during the past week and whether or not they have had the flu shot either this year or last year. respondents can include family members, and the questions are asked about each person individually. this user-contributed data is then aggregated and displayed on a map with pins that are green for no symptoms, yellow for some, and red for "at least one person with influenza-like" symptoms. the pins are clickable and display the number of users in that zip code who have reported their condition, but no personal information whatsoever. the number of participants in the state is displayed (1,294 in massachusetts), as well as locations and addresses where people can get vaccinated. links to local public health agencies are also provided. people can also sign up to receive location-based disease alerts via email. social sharing of the site and its content is encouraged by the addition of prominently placed social media buttons (stephens, 2013).

consumer-oriented applications are also being developed, such as sickweather, which tracks social media posts that reference illnesses and displays trends by location. sickweather also shows illness patterns over time and allows members to report their illness directly and share information with friends through social networks (newcomer, 2013).

the department of homeland security (dhs) is also mining social media for biosurveillance. dhs is testing whether scanning social media sites to collect and analyze health-related data could help identify infectious disease outbreaks, bioterrorism, or other public health and national security risks. the 1-year biosurveillance pilot involves automatically scanning social media sites, such as facebook and twitter, to collect and analyze health-related data in real-time (sternstein, 2012).
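a flunearyou-style aggregation of weekly self-reports into colored map pins can be sketched in a few lines; the zip codes, status labels, and color rules here are illustrative assumptions, not the service's actual implementation:

```python
from collections import defaultdict

def pin_color(reports):
    """Map one zip code's weekly self-reports to a pin color: red if anyone
    reported influenza-like symptoms, yellow for other symptoms, green if
    everyone was symptom-free (hypothetical rule, mirroring the description)."""
    if any(r == "flu-like" for r in reports):
        return "red"
    if any(r == "some symptoms" for r in reports):
        return "yellow"
    return "green"

def aggregate(survey_rows):
    """Group individual (zip_code, status) survey responses by zip code
    and attach a count and pin color, with no personal information kept."""
    by_zip = defaultdict(list)
    for zip_code, status in survey_rows:
        by_zip[zip_code].append(status)
    return {z: {"count": len(v), "color": pin_color(v)} for z, v in by_zip.items()}

rows = [("02139", "none"), ("02139", "flu-like"),
        ("60614", "some symptoms"), ("73301", "none")]
pins = aggregate(rows)
```

the output per zip code is exactly what a clickable pin needs: a user count and a color, with individual identities discarded at aggregation time.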
the social media data analytics technology will "watch for trends," such as whether new or unusual clusters of symptoms in various geographic regions are being reported on social networking sites. the project is the latest in a series of dhs data analysis efforts for biosurveillance. for example, dhs is already analyzing data that is collected by the cdc from public health departments nationwide. it is also collecting and analyzing air samples in several cities for signs of bioterrorist agents, such as anthrax (sternstein, 2012).

news organizations are providing the public with information about the effects of the influenza virus, and some are also using social media to increase public awareness. at the height of the 2013 flu season, a #fluchat was sponsored by @usatodayhealth. "health-based twitter chats offer the public the opportunity to post questions that are addressed by healthcare professionals or researchers. the cdc, for instance, has conducted many chats on a wide variety of topics. watching the questions that are posted in these chats offers local public health organizations an opportunity to 'hear' the concerns of the public. knowing this information can help with message formulation and coordination" (stephens, 2013). here are a few questions posted to the #fluchat:
• "@usatodayhealth how long after the flu shot are you actually prevented from getting the flu? #fluchat" -taylor yarbrough (@sellorelse), january 10, 2013
• "@usatodayhealth what % of americans have gotten the flu each of the last 10 years?" -bob (@sgt1917), january 10, 2013 (stephens, 2013)

finally, a trend that will once again change the way public health and safety agencies and organizations operate during disasters: the increased use of facebook and twitter to call for help or rescue. more and more people are turning to social media as their first choice of communications during a crisis.
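the kind of trend-watching described above, counting disease-term mentions per region in a social stream and flagging unusual clusters, can be sketched as follows; the keyword list, regions, posts, and threshold are illustrative assumptions, not dhs's or mappyhealth's actual method:

```python
from collections import Counter

DISEASE_TERMS = {"fever", "cough", "flu", "vomiting"}  # illustrative keyword list

def count_mentions(posts):
    """Count posts mentioning any disease term, grouped by region,
    given an iterable of (region, text) pairs."""
    counts = Counter()
    for region, text in posts:
        if set(text.lower().split()) & DISEASE_TERMS:
            counts[region] += 1
    return counts

def unusual_clusters(counts, baseline, factor=3):
    """Flag regions whose mention count exceeds `factor` times the
    historical baseline, a crude stand-in for anomaly detection."""
    return [r for r, c in counts.items() if c > factor * baseline.get(r, 0)]

posts = [("tx", "home with a fever and cough"),
         ("tx", "everyone at work has the flu"),
         ("tx", "another flu case on my street"),
         ("tx", "fever again today"),
         ("ca", "lovely weather"),
         ("ca", "mild cough this morning")]
baseline = {"tx": 1, "ca": 1}  # typical weekly counts (illustrative)
flagged = unusual_clusters(count_mentions(posts), baseline)
```

a production system would use streaming ingestion, proper text normalization, and a statistical baseline rather than a fixed multiplier, but the pipeline shape (count per region, compare to history, flag outliers) is the same.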
public polling by the red cross in 2011 and 2012 documents the public's large and growing expectation that disaster officials monitor social media sites and respond quickly to distress calls on facebook, twitter, and other platforms. according to red cross surveys:
• 80 percent expect emergency responders to monitor social sites-and to respond promptly to calls for help.
• 20 percent would try an online channel to get help if unable to reach emergency medical services (ems).
• at least a third of the public expects help to arrive in less than an hour if they posted a request for help on a social media website-and more than three out of four (76%) expect help within 3 hours, up from 68 percent in 2011 (american red cross, 2012).
clearly, meeting this challenge and responding to these expectations must be a priority for the public health and safety community.

communicating to the public and media during a public health or safety emergency differs in several respects from other disaster communications. in a serious crisis, all affected people take in information differently, process information differently, and act on information differently. in recognition of those differences, the cdc has published its own, highly recommended "crisis and emergency risk communication" manual. highlights from the 2012 edition of the cdc manual follow below.

the purpose of an official response to a public health crisis is to efficiently and effectively reduce and prevent illness, injury, and death, and to return individuals and communities to normal as quickly as possible. specific hazards under cdc emergency preparedness and response include:
• infectious disease outbreaks-the spread of viruses, bacteria, or other microorganisms that cause illness or death. this includes cholera, e. coli infection, pandemic flu, and other infections.
• bioterrorism-the deliberate release of viruses, bacteria, or other germs (agents) used to cause illness or death, including anthrax and the plague.
• chemical emergencies-the intentional or unintentional release of a chemical that could harm people's health, including chlorine, mercury, nerve agents, ricin, or an oil spill.
the cdc also has a role in responding to natural disasters, nuclear accidents, and radiation releases and explosions.

so what is the public's response to one of these disasters?
• fear, anxiety, confusion, and dread-these are emotions that need acknowledging.
• hopelessness and helplessness-part of the job of a crisis communicator is to help the community manage its fears and set people on a course. action helps reduce anxiety and restores a sense of control, even if the action is symbolic (put up the flag), preparatory (donate blood), or as simple as checking on an elderly neighbor.
• uncertainty-people dislike uncertainty. the not-knowing can seem worse than a bad result. people can manage uncertainty if you share with them the process you are using to get answers: "i can't tell you what's causing so many people in our town to get so sick. but i can tell you what we're doing to find out…." the situation may obviously be uncertain, and acting otherwise creates mistrust.
• not panic-panic during a crisis is rare. contrary to what we see portrayed in the movies, we seldom act irrationally when faced with a crisis, and we seldom panic. people nearly always behave in a rational way during a crisis. in the face of the 9/11 attacks, people in lower manhattan became simultaneously resourceful and responsive. when told what to do by those in authority, people followed instructions. the panic myth is one of the most pervasive misconceptions about crises. many government leaders are concerned about causing public panic. when facing a crisis, they may mistakenly withhold information in an effort to prevent panic and protect the public, at the very time they should be sharing their concerns.
conditions that are likely to create heightened anxiety and severe emotional distress are silence or conflicting messages from authorities. people are likely to be very upset when they feel:
• they cannot trust what those in authority are telling them.
• they have been misled or left without guidance during times of severe threat.
if authorities start hedging or hiding the bad news, they will increase the risk of creating a confused, angry, and uncooperative public. the faster you give bad news, the better. holding back implies mistrust, guilt, or arrogance. in general, the public wants access to as much information as possible. too little information enhances the psychological stress. if information is incomplete or not present at all during a crisis, this will increase anxiety and a sense of powerlessness. it will also lower trust in government agencies.

the cdc has found that people may receive, interpret, and act on information differently during an emergency than during a normal period. four factors change how we process information during a crisis:

1. we simplify messages-under intense stress and possible information overload, we tend to miss the nuances or importance of health and safety messages by:
• not fully hearing information, because of our inability to juggle multiple facts during a crisis.
• not remembering as much of the information as we should.
• confusing action messages, such as remembering which highway is blocked for safety.
to cope, many of us may not attempt a logical and reasoned approach to decision making. instead, we may rely on habits and long-held practices. we might also follow bad examples set by others, and engage in irrational behaviors like unfairly blaming leaders or institutions.

2. we hold on to current beliefs-changing our beliefs during a crisis or emergency may be difficult. beliefs are very strongly held and are not easily altered, which matters when asking people to do something that seems counterintuitive. examples include the following:
• getting out of a safe car and lying in a ditch instead of outrunning a tornado.
• evacuating even when the weather looks calm.

3. we look for additional information and opinions-we remember what we see, and tend to believe what we've experienced. during crises, we want messages confirmed before taking action. you may find that you or other individuals are likely to do the following:
• change television channels to see if the same warning is being repeated elsewhere.
• try to call friends and family to see if others have heard the same messages.
• check in on social media networks to see what friends and family are doing.
• turn to a known and credible local leader for advice.
• in cases where evacuation is recommended, watch to see if the neighbors are evacuating before making a decision.
this confirmation first-before we take action-is very common in a crisis.

4. we believe the first message-during a crisis, the speed of a response can be an important factor in reducing harm. in the absence of information, we begin to speculate and fill in the blanks. this often results in rumors. the first message to reach us may be the accepted message, even though more accurate information may follow. when new, perhaps more complete information becomes available, we compare it to the first messages we heard. therefore, messages should be simple, credible, and consistent.

speed is also very important when communicating in an emergency. an effective message must:
• be repeated.
• come from multiple credible sources.
• be specific to the emergency being experienced.
• offer a positive course of action that can be executed.
people should also have access to more information through other channels, such as websites and old and new media.

good communication can reduce stress and harmful human behavior, and prevent negative public health response outcomes. trained communicators will do the following:
• reduce high levels of uncertainty.
• use an effective crisis-communication plan.
• be the first source for information.
• express empathy and show concern.
• exhibit competence and expertise.
• coordinate with other response officials.
• commit and remain dedicated to the response and recovery after the immediate crisis has passed.

audiences receive, interpret, and evaluate messages before they take action. expect your audience to immediately judge the content of your message for speed, factual content, and trust and credibility.

was the message timely without sacrificing accuracy? one of the primary dilemmas of effective crisis and emergency risk communication is to respond quickly while maintaining accuracy, even when the situation is uncertain. being first to communicate establishes your organization as the primary source of information. the public may judge how prepared your organization was for the emergency based on how fast you responded. speedy responses suggest that there is a system in place and that appropriate actions are being taken. remember that if agencies are not communicating, audiences will turn to other, less credible sources. first impressions are lasting impressions, and it's important to be accurate. responding quickly with the wrong information or poorly developed messages damages credibility. this does not necessarily mean having all the answers; it means having an early presence so the public knows that agencies are engaged and that there is a system in place to respond.

research shows there are some basic elements to establishing trust and credibility through communications, and you will notice they repeat the important elements in executing a successful crisis communication plan:

empathy and caring-this needs to be expressed in the first 30 seconds. according to research, being perceived as empathetic and caring increases the chances your message will be received and acted on. acknowledge fear, pain, suffering, and uncertainty.
competence and expertise-the public will be listening for factually correct information, and some people will expect to hear specific recommendations for action. therefore, you should do the following:
• get the facts right.
• repeat the facts often, using simple nontechnical terms.
• avoid providing sketchy details in the early part of the response.
• ensure that all credible sources share the same facts. speak with one voice. inconsistent messages will increase anxiety, quickly undermining expert advice and credibility.

honesty and openness-this does not mean releasing information prematurely. it means being transparent: admitting when you do not have all the information, telling the public you do not, and why.

the perception of risk is not about numbers alone, and communicators should consider the following rules for raising the public's comfort level during a crisis. these are adapted from the environmental protection agency's seven cardinal rules of risk communication.

1. accept and involve the public as a legitimate partner-two basic tenets of risk communication in a democracy are generally understood and accepted. first, people and communities have a right to participate in decisions that affect their lives, their property, and the things they value. second, the goal should be to produce an informed public that is involved, interested, reasonable, thoughtful, solution-oriented, and collaborative. you should not try to defuse public concerns and avoid action. guidelines:
• show respect for the public by involving the community early, before important decisions are made.
• clarify that decisions about risks will be based not only on the magnitude of the risk but also on factors of concern to the public.

2. listen to the audience-people are often more concerned about issues such as trust, credibility, control, benefits, competence, voluntariness, fairness, empathy, caring, courtesy, and compassion.
They are not as interested in mortality statistics or the details of a quantitative risk assessment. If your audience feels or perceives that they are not being heard, they cannot be expected to listen. Effective risk communication is a two-way activity. Guidelines:
• Do not make assumptions about what people know, think, or want done about risks.
• Listen. Monitor social media and comments on your website. Make an active effort to find out what people are thinking and feeling.
• Involve all parties who have an interest or a stake in the issue.
• Identify with your audience and try to put yourself in their place.
• Recognize people's emotions.
• Let people know that you understand their concerns and are addressing them.
Understand that audiences often have hidden agendas, symbolic meanings, and broader social, cultural, economic, or political considerations that complicate the task.

3. Be honest, frank, and open: if risk information is to be accepted, the messenger must be perceived as trustworthy and credible, so the first goal must be to establish trust and credibility. Short-term judgments of trust and credibility are based largely on verbal and nonverbal communications; long-term judgments are based largely on actions and performance. Once made, trust and credibility judgments are resistant to change. In communicating risk information, these are your most precious assets; once lost, they are difficult to regain. Guidelines:
• Express willingness to follow up with answers if a question cannot be answered at the time you are speaking.
• Make corrections if errors are made.
• Disclose risk information as soon as possible, emphasizing appropriate reservations about reliability.
• Do not minimize or exaggerate the level of risk.
• Lean toward sharing more information, not less, to prevent people from thinking something significant is being hidden.
• Discuss data uncertainties, strengths, and weaknesses, including those identified by other credible sources.
• Identify worst-case estimates and cite ranges of risk estimates when appropriate.

4. Coordinate and collaborate with other credible sources: allies can be effective in helping communicate risk information. Few things make risk communication more difficult than public conflicts with other credible sources. Guidelines:
• Coordinate all communications among and within organizations.
• Devote effort and resources to the slow, hard work of building bridges, partnerships, and alliances with other organizations.
• Use credible and authoritative intermediaries.
• Consult with others to determine who is best able to answer questions about risk.
• Try to release communications jointly with other trustworthy sources, such as university scientists, physicians, local or national opinion leaders, citizen advisory groups, and local officials.

5. Meet the needs of the media: the media are primary transmitters of risk information. They play a critical role in setting agendas and in determining outcomes. The media generally have an agenda that emphasizes the more sensational aspects of a crisis; they may be interested in the political implications of a risk, and they tend to simplify stories rather than reflect their complexity. Guidelines:
• Remain open with, and accessible to, reporters.
• Respect their need to "feed the beast": to provide news for an audience that is eager for information 24/7.
• Provide information tailored to the needs of each type of media, such as sound bites, graphics, and other visual aids for television.
• Agree with the reporter in advance about specific topics and stick to those during the interview.
• Prepare a limited number of positive key messages in advance and repeat them several times during the interview.
• Provide background material on complex risk issues.
• Do not speculate.
• Keep interviews short, and follow up on stories with praise or criticism, as warranted.
• Establish long-term trust relationships with specific editors and reporters.

The CDC has produced a series of manuals, toolkits, and trainings that are helping to integrate social media into the disaster communications planning and operations of public health officials at every level, and that are helping to speed up the adoption of these tools for saving lives.

6. Speak clearly and with compassion: technical language and jargon are barriers to successful communication with the public. In low-trust, high-concern situations, empathy and caring carry more weight than numbers and technical facts. Guidelines:
• Use plain language.
• Remain sensitive to local norms, such as speech and dress.
• Strive for brevity, but respect people's needs and offer to provide more information if needed.
• Use graphics and other pictorial material to clarify messages.
• Personalize risk data by using anecdotes that make technical data come alive.
• Acknowledge and respond to emotions that people express.
• Promise only what can be delivered.
• Understand and convey that any illness, injury, or death is a tragedy.
• Avoid distant, abstract, unfeeling language about deaths, injuries, and illnesses.
• Do not discuss money: the magnitude of the problem should be framed in the context of the health and safety of the people; loss of property is secondary.

7. Give people things to do: in an emergency, simple tasks will:
• give people a sense of control;
• keep people motivated to stay tuned to what is happening;
• prepare people to take action if and when they need to do so.

Do no harm: the odds of a negative public response increase when poor communication practices are added to a crisis situation, such as:
• public power struggles, conflicts, and confusion;
• the perception that certain groups are getting preferential treatment.
Title: New Media and Youth Political Engagement
Authors: Peter John Chen and Milica Stilinovic
Date: 2020-06-02; Journal: JAYS; DOI: 10.1007/s43151-020-00003-7

Abstract: This article critically examines the role new media can play in the political engagement of young people in Australia. Moving away from "deficit" descriptions, which assert low levels of political engagement among young people, it argues two major points. First, there is a well-established model of contemporary political mobilisation, employing both new media and large-scale data analysis, that can be and has been effectively applied to young people in electoral and non-electoral contexts. Second, new media, and particularly social media, are not democratic by nature: their general use and adoption by young and older people do not necessarily cultivate democratic values. This is primarily due to the type of participation afforded in the emerging "surveillance economy". The article argues that a focus on scale as a driver of influence, the algorithmic foundation of these platforms' affordances, and their centralised editorial control make them highly participative but unequal sites for political socialisation and practice. Thus, recent examples of youth mobilisation, such as those seen in recent climate justice movements, should be seen through the lens of cycles of contestation rather than as technologically determined.
At the turn of the century, considerable interest was focused on new internet-based technologies and their potential to stimulate democratic improvements around the world. Attention was particularly given to their role in revitalising the public sphere. Due to its diverse meanings, and with its applications subject to endless contestation (Spicer 2019), democracy is an essentially contested concept. Active definitions of democracy lie across a spectrum of performances and values, from "minimalist" versions that capture simple measures such as voter registration and turnout, to "maximalist" definitions that include activities like associational membership, active information seeking and civic dialogue (Dahl 1971). The discussion of democracy in this article focuses on a maximalist value of equality over other values, such as maximal individual liberty. Given the recent prevalence of "collective crises" like the climate emergency and the pandemic, prioritising maximal individual liberty sits at odds with the necessarily collective nature of contemporary, complex societies (Wildavsky 2017). Prioritising values like maximal individual liberty can be unsustainable and/or create inequalities that structurally undermine participation by others through self-interest (Wildavsky 2017). In contrast, a maximalist view of democracy equally emphasises the production of democratic culture and institutions that promote just outcomes that sustain democratic practice. This sees generalised civic culture as important in developing practices by citizens that are realised through or by institutions that permit democratic modes of expression and collective action. This is important because recent challenges to individual well-being have collective origins (climate, pandemic, economic inequality). As such, it is complementary to a study of youth participation in the political processes of evolved democracies, such as Australia, and the internet-based technologies that afford them access.
In recent times, youth participation in democratic processes has been subject to controversy. Krinsky (2008) notes that it is unremarkable that young people are often the focus of media and moral panics. It therefore comes as no surprise that this particular demographic has been implicated as the focal point of three politically focused "crises" within the twenty-first century. The first panic is that young people are the source of "democratic decline". This reductionist view is commonly associated with lower formal participation rates, particularly voting, but also membership in key institutions like political parties (Milner 2010). In the compulsory-voting context of Australia, Print (2015) highlights the focus on young people as a state educational project to become "active and informed citizens", which, in the context of today's technologically driven political environment, would garner access to political discourse, engagement, and the use of advancing technologies to communicate, coordinate and mobilise. The second and third crises pertain to increasing levels of structural inequality and the inability of the post-1970s neoliberal economic model (with its focus on egoistic individualism, and the resultant social and political acceptance of enduring and reproducing inequality; Nozick 1975) to ameliorate the causes of, and the resultant social conflict over, the environmental crisis. While these last two are empirical facts, the former is more contestable. To unpack this youth-focused concern, a good example is the often-cited Lowy Institute annual poll (Kassam 2019), which, at times, has shown a 28% gap between Australians aged 18 to 29 years and those over 30 in response to a question that asks whether "democracy is preferable to any other kind of government". These types of findings are often reported in the media in rejectionist terms that overread the data set and do not interrogate its context.
For example, this type of finding has led to sensationalist claims in the media that "fewer than half of Australian adults under the age of 45 actually believe in democratic government" (Hildebrand 2018). This type of coverage is commonly predicated on a discourse in which young people are expected to perform a high degree of naivety about the political world, whereas the same stance, when displayed by older people, is attributed to pragmatism and experience. At the core of this is an implicit message that the status quo must be observed as a normative good. Thus, young people are at the intersection of multiple fast- and slow-moving crises, real or phantasmagorical. Yet, with higher levels of concern for issues of social and climate justice (Sealey and McKenzie 2016), it becomes critical for them to have the capacity to engage in political practices, advance these concerns, and question the foundations of political practice that have created or contributed to these social problems. Therefore, contestable claims about current and potential democratic capacity have to be explored, particularly in the context of claims about technologies that afford or impede youth participation. Emerging information and communication technologies provide new (or remediated) "affordances", or possibilities for human action. Affordances are important because of the way they encourage, allow, discourage and prevent particular behaviours. These can be deliberately or accidentally designed into a technology, and can be visible or concealed (Livingstone and Das 2013). When thinking about the application of this concept to politics, it is frequently captured in the "cost" hypothesis: the internet reduces the costs of political participation and allows some "natural" human desire to be afforded in greater abundance (Negroponte 1995). The positive aspects of the "cost" hypothesis become evident when considering the affordances made possible through advancing technologies.
From the late 1990s onwards, the very nature of the internet (as a tool to communicate, aggregate and coordinate) has been associated with its democratising potential. Therefore, at first instance, it appears logical to assume that contemporary youth, who have grown up alongside these evolving technologies, would employ the internet as a communication tool to engage with political discourse, in much the same way that low-cost printing played an important role in the youth politics of the 1960s and 1970s. More recently, with the advent of "platform" technologies (technologies that facilitate a range of applications, rather than provide a narrow set of specific functions) and social media channels, it has been noted that young people increasingly use social media to engage in political discourse (Yang and DeHart 2016). Nonetheless, affordances not only have the ability to promote engagement in political discourse; the design of certain technologies can also hinder participation. In retrospect, claims that assert the democratising potential of the internet have been predicated on loose understandings of the underlying character of the technology under discussion, that is, an exaggeration of its "true" network characteristics. The internet is not a "mesh" where each node has equal power relative to its peers, but a "powerscape" which virtually mirrors the hierarchical nature of power in physical space, where certain agendas, people and even locations are prioritised. Equally contestable are claims about the impacts of the technology such as deterritorialisation, or the notion that these technologies may separate the individual from the physical context as a primary definer of their social, economic and cultural needs (Chen 2013).
Importantly, the cost claim, once so important to early arguments about the levelling effect of the internet (a view subject to very early empirical criticism; see Small 2008), can now be understood as generating compensatory costs: as data abundance increased, scarcity shifted from the production of content to its consumption, and considerable time (cost) is now spent sorting, filtering and killfiling the vast amount of content generated and pushed at individuals, particularly through online automation (aka "bots"). Additionally, free entry to the internet's public spheres is not cost-free for marginalised subaltern populations (a term coined by postcolonial theorists to describe factions of society excluded from hierarchical structures of power), who also experience exclusion at a personal cost. Virtual violence and harassment in online spaces have forcefully attempted to exclude these marginalised groups from the digital public sphere and are well documented. In this context, youth within established democracies, despite having access to these virtual public spheres, form part of the subaltern identity not only due to their cultural standing, but also through the repression they experience from institutions such as the education system (Spivak 1988). As such, their participation, much like that of any other faction of subaltern society, is intensely contested (Hartounian 2018; Dhrodia 2017). In thinking about social media from the perspective of democratic affordances, it is important to consider the political implications of its underlying technological and institutional characteristics (Howard and Parks 2012). That is, social media is largely only possible because of its reliance on large database systems that afford horizontal visibility within peer groups. Thus, it is unsurprising that social media has been politically useful in processes of political mobilisation, as evidenced in the work of groups like GetUp!
(an Australian-based independent movement for progressive participation in democratic processes) and others, which have successfully capitalised on internet-based technologies to disseminate their message and motivate collective action (Vromen 2017). Equally, Xenos et al. (2014) have argued strongly for a positive relationship between young people's time spent on social media and political participation. Based on a survey of young people (16-29) in the USA, UK and Australia, drawn from online panels, they argued that social media use was positively related to increased political participation, and they produced a good regression analysis in support of this claim. The deterministic interpretation of this research can be contested, however. The analysis also strongly correlated reported levels of participation with respondents' sense of personal political efficacy. This leaves open the real possibility (as the authors identify) that their observations about technology use and political participation may be an expression of some other unmeasured causal agent, or that tool use is epiphenomenal to a connection between political interest and expression that would occur in any other socio-technical setting. Significantly, reflecting our concern about the dominance of individualism, the same volume includes a longitudinal analysis of internet use that concludes that "…facilities on the internet often described as 'social' media offer environments which mainly draw young people's attention away from common concerns" (Ekstrom et al. 2014). Thus, the actual relationship remains open for investigation, and youth engagement in political participation on the internet is questionable, opening up the potential to explore how the use of social media and other internet-based technologies could mobilise youths into political engagement.
Recent attention has particularly been paid to youth mobilisations around climate issues, including the role of young people as leadership figures (e.g. Greta Thunberg) and peer mobilisation using new media (Collin and McCormack 2019). These observations are commonly placed into the now-familiar causal narrative of new media as inherently facilitative of collective action. However, until end-to-end case research is conducted, caution needs to be taken in ascribing causation. That is, participants may take a bus to a demonstration, yet the bus itself has little to do with the political action; in much the same way, social media might not necessarily be the driver of collective action. More specifically, to argue that social media was the driver behind youth climate protests remains a mostly correlative explanation when dealing with a population so ensconced in a mediated lifeworld, a reality in which all the immediate experiences of an individual are directly impacted and influenced by evolving media technologies. Many of these mobilisation case examples are embedded in established social movement industries and, importantly, are not outside the scale of mobilisations seen in pre-internet youth-led movements during the Cold War. In similarly "existential" issues of concern for young people (such as anti-conscription in the 1970s or the anti-nuclear movements of the 1980s), mobilisation of youth movements was significant, pre-internet. An alternative hypothesis is that we can see this as part of the routine, periodical "cycle of contention" of post-war youth mobilisations, in which "good, decent, little people" with an apparent distrust of the establishment rally against the "corrupt and evil forces from above whose policies are responsible for their pain and suffering" (Kazin 1998).
Equally, we could argue that established collective action theory might rank hierarchically higher than social media-specific theorising in explaining case examples, as it provides a better-substantiated explanation of a greater number of recurrent phenomena. Further, the basic premises of the existence of a "deficit" have been challenged. Collin (2015), for example, argues that claims about youth disengagement are exaggerated. She points to volunteering and social movement participation rates as correctives to reliance on "formal" institutional measures of (dis)engagement. While longitudinal data on social media and volunteering in Australia is scarce and unreliable (Walsh and Black 2015), internationally there is evidence that increased volunteering rates pre-date widespread internet adoption and may be associated with motivations like experience-gathering to enhance employability or college entry (Jones 2000). Again, membership and volunteering may now be afforded via online channels, but this does not demonstrate a causal connection between the means and the social practice. Lastly, flexible definitions of participation serve as a correction to the institutionally oriented "democratic decline" literature by expanding what political participation looks like. They do so by recognising a shift towards informality in participation in the public sphere ("everyday making"; Bang and Sørensen 1999). This draws us to Bennett's (2008) analysis of the implications of a social shift towards "social movement citizenship" over dutiful/republican models that privilege participation within or through formal institutions. Bennett's model emphasises a focus on concern for specific issues as a primary driver for the "hit-and-run" participation of everyday making, combined with modes of participation that are more informal and expressive.
This not only sits within a post-modern/post-war notion of justice as including recognition alongside other "rights-centric" motivators, but also recognises that the large participation rates in political organisations in the nineteenth and first part of the twentieth century might have had less to do with their political functions than with their provision of social services, recreational opportunities and networking resources. These benefits are now found outside of explicitly political groups, with post-war consumer culture, hyperpluralism and social diversification. While participation in the activities of formal political institutions is essential in liberal democracies, a decline in interest in more conventional models of government presents problems in realising political wins or accepting political compromises, underscoring the importance of linking these types of rights and recognition concerns with just structural outcomes (Fraser 2001). Overall, social movement citizenship, or everyday making, presents challenges to an outcome-focused democratic analysis due to a tendency towards adhocracy, paradoxical disconnection, and rapid demobilisation by political participants following their "hit-and-run" engagement. Each is discussed in turn. Movement politics tend towards fluid structures which more commonly produce flexible adhocracy. While these non-hierarchical power structures are an established advantage of movements, giving them flexibility, dynamism and resistance to repression, reliance on adhocracy may not produce democratic socialisation. In the context of virtual collective spaces afforded by internet technologies, adhocracy, through decentralised electronic and online methods of collective action, tends to situate issues and problems in the context of "unique" or unusual issues that may require extra-normal methods to address.
These types of organisational forms place politics into states of exception, where the framing of the problem as exceptional encourages solutions based on the sovereign's ability to transcend the rule of law in the name of the "public good" (Schmitt 1922). Therefore, they are less, not more, likely to consider democratic norms, and they suffer from low accountability and limited lesson-drawing potential. While these forms of governance can ameliorate crises, the longer-term governance legitimated by invoking sovereign power is problematic under this model (see Wallach's (2015) discussion of adhocracy and the global financial crisis). While the lack of lesson-drawing limits the "developmental" value of participation in these forms of governance structure, this critique is emerging in response to state responses to the COVID-19 crisis of 2020. Second, while rejecting arguments about social media as fundamentally "siloing" its users (the so-called filter bubbles argument), issue-based politics can disconnect participants from other issue groups and from meta-narratives seen either as generally necessary for social functioning or as important canvases against which popular debate is framed. Bouvier (2017) equates this to reduced personal ownership of claims made online, due to anonymity and the collapse of a shared symbolic order (the "big Other"). New media, in undermining the cultural dominance of mass political media, has played an essential part in this process. As a type of networked politics, horizontal visibility can be low. One of the difficulties of studying younger people's political engagement lies in its comparative "invisibility" within social media that is not readily observable to wider publics. As Schuster (2013) observed, this invisibility can create a "generational divide" within movements, with older activists unaware of the depth of engagement of younger activists.
This reinforces findings that social networks may not create social capital as anticipated (Valenzuela et al. 2009). Indeed, there are concerns that high levels of social media consumption may be alienating (Hunt et al. 2018). Finally, and related to the factors of velocity and transience associated with "internet time", rapid mobilisation and "flocking" (where attention shifts towards the next exceptional space or incident that garners high visibility, leaving the previous platform or issue empty) can be associated with rapid demobilisation (Jackson and Chen 2015). As Uldam and Vestergaard (2015) argue, there is a need to refocus on civic participation beyond movement-based and protest-focused analysis. Image is not action, and considerable over-attention to visible movement action raises questions about the extent to which the transition from expressive politics to agenda building to policy design, implementation and monitoring occurs. This saw considerable interest from post-war social movement "pracademics" (academics engaged in instrumental and action research), asking questions about "realising wins" and the problem of follow-through after demobilisation (see, for example, the practical work of Moyer et al. (2001) in the MAP model for social movements). To understand the relationship between social media and democratic practice, we need to determine what type of practice space social media affords. "Practice space" is used deliberately here over the more popular "public sphere", due to the authors' view that this concept tends to be misapplied to the new media environment. More specifically, the attraction of online media theorists to Habermasian deliberation may not have been the right choice, because this particular democratic model emphasises early parts of the policy process over the later aspects highlighted above.
Thus, rather than seeing commercial social media platforms as public spheres (true sandboxes), we can see them as sites with non-trivial visible and invisible geographies of power that not only provide political affordances, but also shape social expectations of social media citizenship. The monetisation of online spaces, combined with the collapse of the conventional advertising-driven internet economy, has increasingly shifted social media into a primary role in what is called "surveillance capitalism". Surveillance capitalism produces value through the observation, quantification and commodification of individuals' online behaviours. This data becomes the core product of these services, providing a new market with the potential to capitalise on knowledge about users' preferences (Zuboff 2018). This has implications for participation in and through these systems, due to the role that surveillance plays in creating self-censorship and the way preference engines generate sameness in the information consumed by individuals. These tendencies, in stark contrast to the view of information in markets as facilitating fair exchange, or the free-speech ethos maintained by the entrepreneurs who run social media enterprises, are problematic for democratic participation. This is due to the reduced capacity for preference formation (attacking performative aspects of speech practice) and preference realisation (via the selective satisfaction of wants at low cost). From the preference formation perspective, the impact of surveillance is demonstrated in Stoycheff's (2016) study of the effect of social media users' awareness of surveillance. This experiment found that, even in "strong" free-speech jurisdictions, priming users' awareness of the possibility of surveillance produced more conservative online behaviour and speech. This impact should be concerning to developmental democrats.
Dahl (1971), for example, emphasises the processes of preference formation as a critical aspect of developmental citizenship, something that continues lifelong but is vital in the transition into civic life in youth. Preference formation is both an individualised practice, in that it is a developmentally acquired skill in which individuals exhibit different levels of capacity, and a collective capacity under conditions of equality. Specifically, discourse within groups develops the capacity of the group to undertake political discourse through observation and the presence of relevant information. From a preference realisation perspective, the existence of these so-called preference-knowing machines has implications for human agency. This observation is made because surveillance capitalism, unlike the traditional top-down political paternalism exhibited in democratic and authoritarian societies, is fully compatible with high levels of perceived individual efficacy. Where "search" once drove the core economy of the internet and provided efficacy through agency, "sharing" and automagical result systems replace agency with expectation. "Sharing" allows users to expect that aspects of their "wants" are provided automatically, as these systems offer solutions to personal wants and adjust the platform in line with anticipated user desires. Thus, efficacy is obtained in these surveillance regimes. Significantly, this is achieved not through the type of agency commonly associated with democratic participation, but rather through a negative agency: surrendering to the panoptic view in recognition of its capacity to service the individual within very narrow and uncontested spheres. This exchange has a psychic cost. Hoffmann et al. (2016) identified "privacy cynicism" as the tendency for users to engage in "a cognitive coping mechanism, allowing users to rationalise taking advantage of online services despite serious privacy concerns."
this type of preference servicing and channelling is not confined to corporate social media spaces. through the continued valorisation of corporate modes of production in the political sphere, these types of negative-agency, passive-efficacy systems are proliferating within institutions (public, private and non-profit), and in the new interest in behavioural and persuasive "nudge" economics evidenced in the public sector. these projects-wrapped in the discourse of "big data" analytics-seek to understand their stakeholder groups, be they citizens, clients or employees, but only to the extent to which that knowledge fulfils institutional objectives. this is most obvious in the authoritarian internet of china, where hard power has most recently been combined with "soft" incentivising through the "social credit system" (ramadan 2018), but it can also be seen, to a seemingly less intrusive extent, in developed democracies through the aforementioned corporatised nature of e-governance and behavioural policy units. the naturalisation of the technologies underpinning these management systems, be they in liberal democracies or authoritarian regimes, further erodes the capacity of users to express consent (with legitimacy implications important in democratic regimes) and to participate in process design, eroding the capacity for the transference of democratic capacity into other areas of life. socialisation within these systems of expectations thus displaces developmental citizenship as citizens are increasingly embedded in these systems of affordance. it is essential to learn from the failures of the "electronic democracy" (e-democracy) movement, a development which, at the turn of the century, aimed to streamline bureaucratic processes and motivate higher levels of participative engagement between power elites and their publics.
attempts to create and propagate the participative platforms advocated at this time largely failed through a combination of low utilisation and limited state interest in cultivating and connecting with them. even where public management has embraced notions of popular legitimacy, such as the focus on connecting administration to public legitimacy through the reclassification of public sector management as the creation of "public value", this has not led to an embrace of participative media by public managers (bolívar 2016). where participative design is undertaken, understanding the constitutive nature of affordances is essential. as mentioned, design choices activate and allow behaviours. thorson (2014), for example, has argued that there are important differences in the value of new media associated with how the technologies are employed, with "active" use (i.e. search over sociability) correlated with more challenging modes of citizenship, such as seeking exposure to divergent opinions. matei and britt (2017) provide a useful analysis of wikipedia as an example of a platform that uses adhocracy, creates a social hierarchy in production, but sustains openness to new entrants and accountability, and has sustained itself in the face of attacks on its primary function of knowledge production. as indicated, however, this cannot be seen as an engineering problem, and the cultivation of interest and practices to employ, discover and create new democratic affordances is necessary. the contemporary problem is that most civics education is undertaken in bad faith. as bennett (2008) observed, most academic and educational representation of democratic practice bears little resemblance to its actual practice, precisely because this practice in most established democracies falls far short of the underlying idealistic motivations embedded in civics instruction.
as "telling people to participate in bad institutions is mere propaganda" (levine 2006), there has been a tendency to focus on political spaces that are presented as tabula rasa, be these the technological spaces of the last two decades or an emphasis on social movement participation, because ever-renewing social movements maintain the appearance of being both democratic (because they are inherently participative) and new (because they tend to be continuously renamed). coleman (2008) has highlighted the importance of skilling for "autonomous e-citizenship", arguing that capacity building and skilling overcome the problem of bad faith by allowing young people to develop their own aspirational spaces. this has been demonstrated in applied research in schools (see black 2017). this model presumes, however, that these skills are produced in a context where their democratic application is explicit, and that these self-actualising young e-citizens would employ these skills in a frictionless environment that provides them with an even playing field. if this illusion could be sustained by coleman in 2008, a decade later it cannot be. the corporate-dominated internet has proven libertarians wrong: atomised "heroic" individuals cannot find self-actualisation in the face of such overwhelming institutional power; individual liberalism is impossible in an age of such incredible institutional power. the "hidden affordances" of the sub-systems in which most internet users spend the majority of their time online (as opposed to the delusion of the internet as a "sandbox") may be malleable to a literate few, but this presents the prospect of a meritocracy heavily tilted by neoliberal access to education, not a sustainable democracy of equality. further, social media may provide greater affordances for reactionary and counter-democratic agents (the entrenched, corporations, the state) to limit movement and political agency (cammaerts 2015; uldam 2015). this latter power is contestable.
while online activists have attempted to moderate the policies of major internet platform operators, their capacity to act against organisations of this scale and transnational character has been most effective only where similar transnational capacity exists. thus, while there have been a range of state-level investigations and proposed regulatory interventions in the way platforms operate (in australia primarily focused on ensuring representative democracy), the most significant developments have been seen at the level of the european union. this remains a moving target but provides an example of the need to revisit the politics of accessibility, now focused on interventions within the walled gardens online. the internet is no longer a land without a history, and optimistic projections that new affordances have a deterministic correlation with democratisation need to be checked against the visible history of these claims, and against the broader background context of societies that are more technologically saturated, and less democratic overall. while new media is undoubtedly at the heart of recent youth mobilisation politics, it is not clear if this is more than correlative or epiphenomenal. younger people are disproportionately institutionally situated in contexts obsessed by the "human capital" developmental school (becker 1993), in which citizenship rights are primarily held in abeyance, secondary to instrumental aims associated with participation in the private world's production and consumption. thus, recent youth mobilisations can be seen as remarkable contestations in the face of these systems, rather than a result of them. to avoid falling into the twin traps of bad-faith promotion of weak systems or magical thinking about new spaces and places for participation, systematic democratisation of spaces and places-both online and off-must be the focus of reform. participation without democratisation is a false panacea that can consume the efforts of reformers.
as the internet splits into three distinct jurisdictions-the surveillance capitalism space, authoritarian spaces and the regulated internet-new possibilities and natural experiments present themselves for investigation in the online space. civics education has demonstrated that it can produce positive improvements in rates of participation; to take advantage of these affordances, however, this will need to be met by increased opportunities for democratic agency in our classrooms, workplaces and the public sphere.

references:
the everyday maker: a new challenge to democratic governance
human capital: a theoretical and empirical analysis, with special reference to education
the wealth of networks: how social production transforms markets and freedom
active citizenship and the 'making' of active citizens in australian schools
policymakers' perceptions on the citizen participation and knowledge sharing in public sector delivery. in: sobaci mz (ed) social media and local governments
what is a discourse approach to twitter, facebook, youtube and other social media: connecting with other academic fields
building citizen-based electronic democracy efforts. paper presented to the internet and politics: the modernization of democracy through the electronic media conference
doing it for themselves: management versus autonomy in youth e-citizenship
young citizens and political participation in a digital society: addressing the democratic disconnect
young people and politics
the marketplace, e-government and e-democracy
we tracked 25,688 abusive tweets sent to women mps-half were directed at diane abbott
the networked young citizen: social media political participation and civic engagement
the internet, democracy and democratization. democratization
recognition without ethics?
the electronic republic: reshaping democracy in america. viking
why young voters are turning their backs on democracy
privacy cynicism: a new approach to the privacy paradox
social media and political change: capacity, constraint, and consequence
no more fomo: limiting social media decreases loneliness and depression
rapid mobilisation of demonstrators in march australia
youth volunteering on the rise
lowy institute poll 2019: understanding australian attitudes to the world
moral panics over contemporary children and youth. routledge, london
levine p (2006) macarthur online discussions on civic engagement
a companion to new media dynamics
structural differentiation in social media: adhocracy, entropy, and the "1% effect"
the internet generation: engaged citizens or political dropouts
doing democracy: the map model for organizing social movements. new york
nozick r (1975) anarchy, state, and utopia. blackwell, oxford
print m (2015) a "good" citizen and the australian curriculum
the gamification of trust: the case of china's 'social credit'
the virtual community: homesteading on the electronic frontier
political theology: four chapters on the concept of sovereignty, g schwab (trans.)
invisible feminists? social media and young women's political participation
agenda for action: what young australians want from the 2016 election. youth action and australian research alliance for children and youth
equal access, unequal success-major and minor canadian parties on the net
what do we mean by democracy? reflections on an essentially contested concept and its relationship to politics and public administration
can the subaltern speak?
under surveillance: examining facebook's spiral of silence effects in the wake of nsa internet monitoring
the networked young citizen: social media political participation and civic engagement
civic engagement and social media: political participation beyond protest
is there social capital in a social network site? facebook use and college students' life satisfaction, trust, and participation
digital citizenship and political engagement: the challenge from online campaigning and advocacy organisations
australian research alliance for children and youth
wildavsky a (2017) culture and social theory
the great equaliser? patterns of social media use and youth political engagement in three advanced democracies
social media use and online political participation among college students during the us election 2012
the age of surveillance capitalism: the fight for a human future at the new frontier of power

key: cord-290777-eylp4k53 authors: ippolito, giuseppe; hui, david s; ntoumi, francine; maeurer, markus; zumla, alimuddin title: toning down the 2019-ncov media hype—and restoring hope date: 2020-03-31 journal: the lancet respiratory medicine doi: 10.1016/s2213-2600(20)30070-9 sha: doc_id: 290777 cord_uid: eylp4k53 the proliferation of internet-based health news might encourage selection of media and academic research articles that overstate the strength of causal inference. we investigated the state of causal inference in health research at the end stage of the pathway-ie, the point of social media consumption. did the media hype emanate from ineffective risk communication to both the public and the media? proactive case finding and increases in contact tracing and screening led to an exponential rise in the numbers of cases reported by the chinese authorities, with a consequential increase in media reports and ensuing hype. the reproductive rate (r0) predictions, the evacuation of european and north american citizens from china, and in some cases the confinement and quarantine of people (eg, in the uk), have gained major visibility in the press and have also contributed to the hype. reporting of the situation in real time from the public on social media could lead to more accurate collating of information by the media.
however, the rapid pace of developments and increasing case detection rates, along with the increasing diversity of information, mean it has become increasingly difficult for the media to assimilate and make meaningful interpretations from this information source. moreover, the volume of information being reported to and by global public health authorities exceeds the capacity to collate and analyse it, or to cross-reference and verify it against other data received. this inability to validate information can fuel speculation, and thereby lead to media and public concern. the balance between providing the information required for appropriate actions in response to risk and providing information that fuels inappropriate actions is delicate. the global media response to 2019-ncov remains unbalanced, largely due to the continuously evolving developments, and, as a result, public perception of risk remains exaggerated. the many unknown factors surrounding the virus are likely to lead to further media hype and aberrant public response. for example, the number of people who travelled to and from wuhan before travel restrictions and the lockdown were put in place, how many of these individuals were asymptomatic or were incubating the virus, and whether screening and current control measures will be effective, are all unknowns. as of feb 10, 37 558 cases were confirmed, and 812 deaths had been reported to the who. outside of china, 307 cases had been detected in 24 countries. 6 therefore, although several hundreds of patients remain in intensive care, the overall hospital fatality rate remains at 2%. it is therefore time to reduce the hype and hysteria surrounding the 2019-ncov epidemic and reduce sensationalisation of new information, especially on social media, where many outlets aim to grab attention from followers.
additionally, the disparity between the strength of language as presented to the media by some researchers and politicians and the inference shared on social media requires more research to determine how content is being relayed on different platforms. an effective way of putting this outbreak into perspective is to compare it with other respiratory tract infections with epidemic potential. 2019-ncov appears to fit the same pattern as influenza, with most people recovering and with a low death rate; the people at risk of increased mortality are older in age (>65 years), immunosuppressed, or have comorbid illnesses. there is currently no evidence that 2019-ncov spreads more rapidly than influenza or has a higher mortality rate. the media should focus on having altruistic intentions and develop dialogue with the appropriate authorities to protect global health security through effective, amiable partnerships. they should highlight vaccine development efforts as well as the educational and public health measures that are being put in place to prevent the spread of infection. although there are many things still to learn regarding how best to respond to disease outbreaks of this nature, 7 there are also several positives, such as diagnostic tests being developed within 2 weeks and rolled out globally, or the rapid garnering of financial support for vaccine development, which should perhaps be in the headlines, to fuel reassurance rather than fear.

while some countries have banned the use of e-cigarettes or vaping products altogether (eg, india), and others have strongly advised against their use (eg, australia), in the uk, public health england (phe) appears to be a lone voice in stating that vaping is 95% safer than smoking tobacco. here we consider whether vaping can be considered safe; whether vaping is a means of smoking cessation or at least harm reduction; and the correct response to the spiralling epidemic of vaping in young people (<18 years).
the question of whether vaping is safe, or safer than smoking, can only be answered if the total contents of each of the thousands of available vaping liquids are itemised and subjected to short-term and long-term toxicity testing. to our knowledge, no such database exists; reassurances and extrapolations are no substitute for data. in one expert briefing, 1 it was asserted that "most of the flavours are used in food and, at the relatively low temperatures used in e-cigarettes, they're not going to give rise to hazardous by-products". evidence to support the assertion is scarce, and e-liquids contain a multitude of other substances known to be toxic to the lungs, such as ethyl maltol, linalool, methyl

references:
emerging infectious diseases and pandemic potential: status quo and reducing risk of global spread
history is repeating itself, a probable zoonotic spillover as a cause of an epidemic: the case of 2019 novel coronavirus
the continuing 2019-ncov epidemic threat of novel coronaviruses to global health-the latest 2019 novel coronavirus outbreak in wuhan, china
sars to novel coronaviruses-old lessons and new lessons
clinical features of patients infected with 2019 coronavirus in wuhan, china
who. novel coronavirus (2019-ncov) situation report 20. 2020
2019-ncov outbreak-early lessons

key: cord-277824-q7blp3we authors: bilal; latif, faiza; bashir, muhammad farhan; komal, bushra title: role of electronic media in mitigating the psychological impacts of novel coronavirus (covid-19) date: 2020-04-29 journal: psychiatry res doi: 10.1016/j.psychres.2020.113041 sha: doc_id: 277824 cord_uid: q7blp3we the current research initiative focuses on the role of pakistani media in eliminating panic and depression among health practitioners and the general public due to the outbreak of novel coronavirus (covid-19).
in pakistan, electronic media is the most common source of information due to the higher rural population and the lower literacy rate, and media's handling of covid-19 coverage so far creates panic and depression. we suggest that special televised transmissions featuring psychologists and psychiatrists should be aired to reduce the panic. media can also mitigate the stress of frontline medical staff by paying special tribute to them. keywords: covid-19; media; mental health. during times of public crises, media must ensure that crisis information is communicated efficiently and effectively to the general public; failing to do so will certainly lead to uncertainty, fear, and anxiety. in this era of advancement and technology, with the general acceptance of freedom of speech, the role of media becomes indispensable in every respect. as the world today is suffering from the ongoing novel coronavirus pandemic, fear of getting infected is sweeping all over the globe. in this context, some sources may be of dire help and play their part in reducing the panic that has been caused by an increase in the mortality rate all over the globe (wit et al., 2011). media is the most powerful tool that can disseminate such campaigns to provide some relief from panic and boost the morale of the general public.
there is a general acceptance of the fact that electronic and print media in south asia are politically biased and tend to exploit scenarios like the current situation to win media wars. this is why we should focus on the role of media in helping to reduce anxiety levels in the general public. history shows that there is a direct link between media and society. the basic motive of media is to inform society and to work for social welfare at a massive level, as it targets the mass population. this particular study aims to examine the role of pakistani media in reducing the mental stress of the public and enhancing the motivation level of healthcare service providers during the covid-19 pandemic (mckibbin & fernando, 2020). media campaigns in pakistan need to address the mental health of the public, and there is a great need for special transmissions with health professionals and experts to provide advice and instructions for the public to cope with the current situation (ali & gatiti, 2020). pakistani media generally enjoys broadcasting freedom and has a strong impact on the day-to-day life of the general public. the current situation demands that it play a positive role in this critical condition for the well-being of the general public. psychologists, psychiatrists and healthcare professionals should be invited onto programs to guide people about covid-19, with the aim of consoling and advising the general public on how to avoid stress so that they can cope with this deadly condition without affecting their mental health. special televised campaigns aimed at boosting the morale of the public should be given air time (bhatia, 2018). another dilemma of the current situation is the absence of schooling for children. children are at home and are affected by the situation. the media is not airing any productive programs for children, which shows that pakistani media is limited to just one age group in society.
what about the children, who are the future of our society and are already suffering from boredom and depression during the current situation? the media should therefore play its role in arranging special programs aimed at promoting learning activities for children, which will be beneficial for the mental and physical development of these children.

references:
the covid-19 (coronavirus) pandemic: reflections on the roles of librarians and information professionals
role of media to inform public about depression related diseases
covid-19 risks and response in south asia
the global macroeconomic impacts of covid-19: seven scenarios
are sedentary television watching and computer use behaviors associated with anxiety and depressive disorders?

key: cord-279207-azh21npc authors: sharma, manoj kumar; anand, nitin; vishwakarma, akash; sahu, maya; thakur, pranjali chakraborty; mondal, ishita; singh, priya; sj, ajith; n, suma; biswas, ankita; r, archana; john, nisha; tapatrikar, ashwini; murthy, keshava d. title: mental health issues mediate social media use in rumors: implication for media based mental health literacy date: 2020-05-07 journal: asian j psychiatr doi: 10.1016/j.ajp.2020.102132 sha: doc_id: 279207 cord_uid: azh21npc research involving human participants and/or animals: all procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 helsinki declaration and its later amendments or comparable ethical standards. social media use has recently become immensely popular, not only for its leisure activities connecting people all over the world, but also for keeping updated with current trends through news and information sharing.
it provides a perfect platform to interact with others by offering opportunities to share a user's thoughts, emotions, pictures, videos and creative ideas through posts or blogs (kuss & griffiths, 2011a, 2011b). hence, one important characteristic of social media platforms is the rapid spreading of information through their users, which is usually impactful. another concern arises when it comes to health-related information sharing on social media. as it is open to all, anyone can produce information and publish it in the digital forum, share experiences, and form their own perspectives, which remain unverified by any professional news channel, editors or fact-checkers (sommariva, vamos, mantzarlis, uyên-loan đào, & tyson, 2018). thus, social media comes with its own limitations: misinformation in the form of rumours or fake news (zubiaga, liakata, procter, wong sak hoi, & tolmie, 2016). moreover, once rumors begin to spread on social media, they are very difficult to control with updates or corrections (jones, thompson, dunkel schetter, & silver, 2017). among these, health rumors, which are unverified information regarding the practice of medicine and healthcare, often endanger public health (oh & lee, 2019). hence, it is important to understand the role and impact of social media in spreading rumours and to verify information before sharing it with others. the research literature has found that social media has power in influencing people's behavior when there is an outbreak of an epidemic or pandemic. over the decades, social media has been flooded with misinformation on diabetes, anorexia and anti-vaccination content, along with the recent zika virus and ebola epidemics (fernández-luque & bau, 2015; sommariva et al., 2018). the news of the ebola epidemic created a climate of global nervousness, with rumours and misinformation quickly spreading through social media platforms.
a similar trend is being observed with the current occurrence of severe acute respiratory syndrome coronavirus 2 (sars-cov-2), which has been declared a pandemic. studies have also documented that during crisis events, people often seek out event-related information to stay informed of what is happening. if there is a lack of official information, people may be at risk of exposure to rumors that fill the information void (jones et al., 2017). additionally, the constant assault of information through social media also leads its users to easily consume available information irrespective of its authenticity. in this era of "headline stress disorder", negative feelings like anxiety, hopelessness, despair, and sadness are fueled and regulated by the media (sharma & seshadri, 2020). similarly, suicide is another public mental health problem where media and social media play a significant role in either increasing or curtailing the problem within society. the available literature in bangladesh and india suggests that media reporting about suicide includes details such as the name of the victim, their occupation, the method of suicide, images of suicide victims, suicide notes and citations from suicide notes. this is the information which works to make the news attractive and shares details which increase access to information for harming oneself, and it may also work to create misinformation or rumors (arafat, mali & akter, 2020; armstrong et al., 2018; jain and kumar, 2016). however, the media does not highlight information to educate the general population about the early signs of suicidal behaviors, prevention plans, expert opinions from mental health professionals, helpline numbers for support and the availability of emergency services in hospitals.
these findings further suggest that reports in the media on suicide do not follow the guidelines issued by the world health organization (who) and other health regulatory bodies on the reporting of suicide in media (arafat, mali & akter, 2020; armstrong et al., 2018; cherian, lukose, rappaia et al., 2020; jain and kumar, 2016). there are similar irregularities which indicate reporting of sensitive information about suicide in a detrimental manner in the media in china as well (chu et al., 2018). thus, in the light of the existing information, it becomes understandable that media in all its formats has a huge impact and, more significantly, has a role to report responsibly, in an educative format, information which is related to the health of the population. in addition, it needs to be more sensitive and responsible in reporting about public health problems like sars-cov-2 and suicide, where the focus should be on offering information which is helpful for prevention, details the steps to take in times of a health emergency, and offers expert opinions from mental health professionals, helpline numbers for support and emergency services in hospitals. this role of media will surely work to minimize the digital content which leads to the creation of misinformation or rumors. to summarize, in addition to the responsible role of media in reporting about public health problems, the individual members of the population, the government, policy makers, health regulatory bodies and health professionals need to collaborate and develop guidelines for the responsible dissemination of information over all kinds of media formats with respect to public health problems. such guidelines will also work to improve media-based literacy about health and mental health problems among the population and will be extremely helpful in times of public health emergencies like the sars-cov-2 pandemic.
the development of such guidelines is crucial, as the pattern of epidemics and pandemics changes over time, but the cycle of rumors, fake news and inaccurate media reports continues to revolve around media formats, and especially social media, likely due to stress, anxiety and other psychological factors of individuals, which require study in greater detail.

references:
is suicide reporting in bangla online news portals sensible? a year-round content analysis against world health organization guidelines
assessing the quality of media reporting of suicide news in india against world health organization guidelines: a content analysis study of nine major newspapers in tamil nadu
adolescent suicide in india: significance of public health prevention plan
assessing the use of media reporting recommendations by the world health organization in suicide news published in the most influential media sources in china
health and social media: perfect storm of information
is suicide reporting in indian newspapers responsible? a study from rajasthan
distress and rumor exposure on social media during a campus lockdown
kuss dj, griffiths md: excessive online social networking: can adolescents become addicted to facebook? education and health
online social networking and addiction - a review of the psychological literature
when do people verify and share health rumors on social media? the effects of message importance, health anxiety, and health literacy
adolescence: contemporary issues in the clinic and beyond
spreading the (fake) news: exploring health messages on social media and the implications for health professionals using a case study
analysing how people orient to and spread rumours in social media by looking at conversational threads

key: cord-291596-lp5di10v authors: singh, shweta; dixit, ayushi; joshi, gunjan title: "is compulsive social media use amid covid-19 pandemic addictive behavior or coping mechanism?
date: 2020-07-07 journal: asian j psychiatr doi: 10.1016/j.ajp.2020.102290 sha: doc_id: 291596 cord_uid: lp5di10v owing to the easy accessibility of the internet, there are globally more than 3 billion social media users, accounting for 49% of the world's population. in india, the number of social media users stands at more than 376 million out of a population of more than 1.36 billion (statista, 2020). the mental health impact of covid-19 is not limited to affected persons, their families and the healthcare workforce but embraces society's response at large (tandon, 2020). amid the pandemic and the subsequent nationwide lockdown, there has been a surge in social media usage which is also reflective of a social response worldwide. for instance, in india 87% of people reported an increase in usage, with 75% spending an increasing amount of time on facebook, twitter and whatsapp (business today, march 30, 2020). given the backdrop of this alarming data, it is pertinent to debate two questions, i.e. (a) "does the current pattern of social media usage suggest a trend towards addictive behavior or has it become a coping mechanism to deal with the current global crisis?" and (b) "what are the current and future implications of this trend for addictive behavior and the mental health of people?". considering its widespread use across ages, social media is known to be a source of social reinforcement and validation. this platform provides people with an opportunity to share ideas, interact socially, form relationships, draw the attention of others and create a social image (kietzmann et al., 2011). during the current global crisis, when 'social distancing' has become a norm, over-engagement in social media has become a 'psychological necessity', helping people to address their needs for human interaction and to cope with the pandemic.
therefore, despite the precautionary guidelines of social distancing, it provides people a platform to remain socially connected and to universalize the distress caused by the current crisis. apart from socialization, social media is also being used for academic and work-related purposes like conducting online lectures, webinars and meetings and enabling work from home. one of the major advantages of social media is that it facilitates awareness and provides mental health support by making resources available to those facing distress caused by the lockdown and to those who are isolated as a result of being quarantined. with the help of this platform, data scientists and healthcare professionals have recently surfaced as social media influencers with the aim of mobilizing people to take proactive steps to deal with the crisis (the economic times, 2020). in the ongoing scenario, social media has become one of the major sources of updated information on covid-19 for people. however, its irresponsible use poses the challenge of an 'infodemic', i.e. a situation in which 'misinformation' spreads rapidly, thereby affecting the thinking and subsequent behavior of people. recently the who cautioned people against social media rumors which lead to panic, stigma and irrational behavior (who, 2020). given the rise in usage of this medium, it becomes necessary to address its association with mental health. the relationship between social media disorder and mental disorders remains controversial, which is attributable to diagnostic complications (pantic, 2014). past research has shown that compulsive usage of social media impacts physical and mental health including cardio-metabolic health, sleep, affect, self-esteem, well-being and functioning, especially in adolescents (turel et al., 2016; cheng et al., 2014; van rooij and schoenmakers, 2013). in light of the present pandemic, mental health conditions are found to be associated with the amount of social media exposure.
for instance, a study during the covid-19 outbreak in wuhan, china, found the prevalence of depression, anxiety and a combination of depression and anxiety (cda) to be 48%, 23% and 19% respectively. moreover, 82% of participants who were frequently exposed to social media reported high odds of anxiety as well as cda (gao et al., 2020). it is well known to us, and also echoed by research, that 'internet addiction' is predominantly linked to increased social media or gaming activities (van rooij and schoenmakers, 2013; van rooij and prause, 2014). while the dsm-5 (apa, 2013) and the stable version of the icd-11 (who, 2018) have identified 'internet gaming disorder' (igd) as a provisional disorder, social media disorder is still not acknowledged. increasing research is advocating that social media disorder should be considered an addictive disorder just like igd (pantic, 2014; ryan et al., 2014). according to the dsm-5, a person is diagnosed as having igd if 5 (or more) of the 9 criteria (preoccupation, tolerance, withdrawal, persistence, escape, problems, deception, displacement, and conflict) are fulfilled during a period of 12 months. since social media disorder and igd both relate to internet use, researchers refer to the nine igd criteria of the dsm-5 when constructing diagnostic tools and establishing internet or social media addiction (regina et al., 2016; van den eijnden, 2016). since the covid-19 outbreak began at the end of 2019 and crossed international borders from the beginning of 2020, the '12-month' dsm-5 criterion is undeniably not applicable. but it is difficult to say whether five or more of the igd dsm-5 criteria are fulfilled by excessive social media users. it comes with a word of caution that excessive social media usage is known to be highly addictive due to its psychological, social and neurobiological basis.
during the current pandemic, like many other uncertainties, it is unclear whether this compulsive use of social media is just a 'phase' and a coping mechanism or an indication of addictive behavior with mental health implications. hence, in terms of current research implications and management, it is imperative to keep the contextual issue of the global pandemic in mind and to differentiate between addictive and extremely involved behavior. it can be explored whether (apart from the criterion of 12-month duration) people fulfill at least 5 out of the 9 igd criteria of the dsm-5. here it would be worthwhile to add that, because of their unique sociocultural contexts, the experiences of various asian countries during the covid-19 pandemic need to be studied and shared with the world (tandon, 2020). moreover, any research conducted on addictive behaviors at the current time should consider longitudinally the pre-, present- and post-lockdown social media usage patterns and their mental health implications among individuals across all age groups. the authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

references:
- diagnostic and statistical manual of mental disorders, fifth edition
- internet addiction prevalence and quality of (real) life: a meta-analysis of 31 nations across seven world regions
- mental health problems and social media exposure during covid-19
- social media? get serious! understanding the functional building blocks of social media
- online social networking and mental health
- the social media disorder scale (computers in human behaviour)
- the uses and abuses of facebook: a review of facebook addiction
- the covid-19 pandemic, personal reflections on editorial responsibility
- health outcomes of information system use lifestyles among adolescents: videogame addiction, sleep curtailment and cardio-metabolic deficiencies
- the social media disorder scale
- a critical review of "internet addiction" criteria with suggestions for the future
- het (mobiele) gebruik van sociale media en games door jongeren [the (mobile) use of social media and games by young people]
- covid-19 pheic global research and innovation forum: towards a research roadmap
- covid pandemic, social media and digital distancing (the new indian express)
- how covid-19 has made data experts the new-age social media influencers (the economic times)
- 87% people reported increase in its usage with 75% spending increasing amount of time on facebook (business today)
- number of social media users worldwide (statista, 2020)
- mental health professionals serving selflessly during current global pandemic

key: cord-031964-khbzbjuu authors: coşkun, gülçin balamir title: media capture strategies in new authoritarian states: the case of turkey date: 2020-09-16 journal: publizistik doi: 10.1007/s11616-020-00600-9 sha: doc_id: 31964 cord_uid: khbzbjuu this article focuses on the forced transformation of the mass media as an institution in new authoritarian states. it aims to understand the methods used by these states to control and manipulate the flow of news through the mass media. turkey's media system has been chosen as a case study because recent political developments in the country offer worrisome and devastating examples. this article aims to answer the following question: how can we classify the methods and strategies used by the akp government to capture the media in turkey?
why and how do the methods used by the akp government differ from those applied by previous governments? to answer these questions, the article draws on media capture as a framework of analysis. it argues that the akp captured the media by using new strategies which can be divided into three overlapping and interconnected categories: capture by creating its own private media, capture through financial sanctions, and capture by intimidating and criminalizing journalists. public opinion is influenced by family inheritance and personal experience as well as by public knowledge on political issues framed and transmitted via the media (cf. forman 1991, p. 109). this public knowledge constitutes an important source for citizens to check governments' policies and to decide among different political actors. the decisions, actions, and reactions of different political actors on socioeconomic issues "are selected and shaped by mass-media professionals" (habermas 2006, p. 415) and thus become accessible to the public. for this reason, some scholars describe the news as a main agent in the "construction of public knowledge" (schudson 2002). in brief, mass media play an indispensable role in the formation of this public knowledge. having understood the important role of the media in politics, new authoritarian regimes have developed new strategies to influence the formation of public knowledge by the media. instead of establishing direct state control over the media, they tolerate or encourage private media. however, they prevent them from producing critical content using "diverse measures such as national security prosecutions, punitive tax audits, manipulation of government advertising, and seemingly reasonable content restrictions" (simon 2015, p. 33). to explain the new relation between the media and the new authoritarians, scholars have drawn on economic theories of capture (cf. besley and prat 2006; corneo 2006; petrova 2008).
the "regulatory capture" or "capture theory" associated with economist george stigler's work the theory of economic regulation (1971) draws attention to the fact that regulatory agencies may come to be dominated by the interests they regulate rather than by the public interest. stigler's work helps us to understand the main problems not only in governmental regulation but also in other sectors that are supposed to fulfill a regulatory or monitoring function in democracies (cf. carrigan and coglianese 2016, p. 1). the usefulness of the broad analytical framework that the term "capture" offers soon became obvious to scholars working on power relations and the media. according to the most common definition, the term media capture points to "a situation in which the media have not succeeded in becoming autonomous in manifesting a will of their own, nor able to exercise their main function, notably of informing people. instead, they have persisted in an intermediate state, with vested interests, and not just the government, using them for other purposes" (mungiu-pippidi 2013, p. 41). based on this definition, stiglitz proposes "a taxonomy of media capture based on four broad, and somewhat overlapping, sections: (a) ownership, (b) financial incentives, (c) censorship, and (d) cognitive capture" (stiglitz 2017, pp. 11-14). while capture through ownership refers to the purchase of mass media institutions by wealthy individuals and corporations and to the creation of interest relations between these owners and political elites, capture through financial incentives relates to the incentives of advertising and to the relationship between advertisers and media outlets. in the third category of his taxonomy stiglitz talks very briefly about the relationship between censorship and capture, focusing on journalists' self-censorship due to their oppression by the government as well as on the commercial interests of media corporations.
lastly, stiglitz discusses cognitive capture, which he defines as "the most interesting aspect of capture-the most subtle, the hardest to prove". this refers to a situation where journalists lose their critical and investigative reflexes and start to transmit the dominant point of view. although stiglitz's taxonomy focuses mainly on media capture by economic elites, recent developments in new authoritarian states, and perhaps also in so-called democratic countries like the usa, italy or the uk, necessitate a more detailed analysis of the ways governments capture the media. there are growing numbers of studies of media capture by governments that aim not merely to censor but to manipulate. mungiu-pippidi (2008, p. 73) argues that some governments in eastern europe are unable or unwilling to impose direct control over the media, but prefer to capture them through state subsidies, the preferential distribution of state advertising, or the handing out of financial favors and penalties. in exchange for these favors or penalties, they aim to get media coverage or framing favorable to their political agenda. the concentration of the media and existing clientelist relationships have brought about media capture by economic elites as well as political leaders (cf. márquez-ramírez and guerrero 2017, p. 54), and these authors explain to what extent the captured media promote the private interests of media executives and politicians over the public good. a number of recent studies of the turkish media system also use the media capture framework in their analyses of the legal and financial pressures on media organizations and journalists. finkel (2015) surveys the historical background of media concentration and explains how it has facilitated media capture by the akp government.
yanatma's research "examines advertising as a tool of government to control the media, particularly print media in a country where politics and journalism have long been intertwined in an often combustible mixture" (yanatma 2016, p. 6). yeşil's article (2018) focuses on the disciplinary strategies deployed by the akp and reveals the connection between its capture of the media and the previous authoritarian neoliberal order. yet the number of studies using capture theory to analyze the media system in turkey is still limited. this article aims to fill this gap and to classify the capture strategies used by the akp, creating a basis for comparative research on new authoritarian states. however, before analyzing these strategies, i offer a brief overview of the crucial 20 years that paved the way for the rise of the akp and the transformation of the media system in the 2000s. after the 1980 coup d'état a new constitution was designed under the tutelage of the turkish military and entered into force in 1982. article 28 of the new constitution regulates the freedom of the press and starts by saying that "the press is free, and shall not be censored". however, the following lines of the same article define in detail the conditions under which the state can ban newspapers and periodicals and take the measures necessary to stop their distribution. the main justification for such a ban, according to article 28, is to protect the internal and external security of the state, which can be interpreted very broadly. additionally, various articles of the penal code (such as articles 132, 137, 161 and 162) bore directly or indirectly upon the press and were vaguely defined so that the judiciary could use them against journalists when the "state interest" was in danger (cf. waldman and çalışkan 2019, p. 386). this illiberal constitutional and legal order was not the only characteristic of the journalistic environment in post-coup turkey.
neoliberal winds had knocked on turkey's door. one of the reasons for the 1980 coup d'état was to facilitate the application of a new economic policy aiming to integrate and adapt the turkish economy to the global capitalist system (cf. gürsel 1995). the framework of this economic policy was set by decisions made on january 24, 1980, nine months before the coup d'état. one of these decisions was directly related to the media sector and stipulated the end of the state subventions previously granted for newsprint. this decision slowly made all newspapers financially dependent on advertising revenue. this neoliberal transformation of economic policies created a completely new environment in which media owners with a journalistic family background could not easily find ways to afford the cost of paper and to attract sufficient advertising to finance their newspapers (cf. sözeri and güney 2011, p. 39). there were two options for these traditional owners: either to invest in areas outside the media or to sell their family business to new entrepreneurs with backgrounds in banking or construction (cf. kaya 2016, pp. 261-262). as a result, the ownership profile of the media slowly changed. by the second half of the 1990s only a limited number of conglomerates, chiefly those owned by dogan, bilgin and doguş holdings, had come to dominate the media market. their main aim was to "obtain political patronage and support for their other business ventures" (kaya and çakmur 2010, p. 530). the weak coalition governments and economic instability of the 1990s obviously helped these new media conglomerates to pull the strings of political leaders and to steer the game in the direction they desired. later, when erdogan decided to get rid of these big media holdings, he used this corrupt relationship between media owners and previous political leaders as a card against them.
the third characteristic of the media landscape in the 1980s and 1990s was absolute censorship on the kurdish issue and high levels of violence against kurdish journalists and defenders of human rights. at the beginning of the 1980s this censorship took the form of deliberately ignoring the kurds. while the military rule imposed restrictions on kurdish identity, such as changing the names of kurdish cities or prohibiting the use of the kurdish language in public, the media preferred not to cover topics related to these steps. on the very rare occasions when they did cover topics related to the kurds or the kurdish regions, they avoided using the word "kurd". in fact, this was a reflection of turkish state discourse (cf. yegen 1999) imposed and controlled by the military rule. the legal framework allowed for lengthy prison sentences for journalists who did not accept to stay within the limits of the state discourse. somer's research, for instance, reveals that "in 1984 and 1985, the mainstream turkish daily hürriyet published only 25 articles that were fully or partially related to the country's ethnic kurds. only 3 of these 25 articles used the word kurd in reference to a person, group, concept, or place" (somer 2005, p. 591). this intentional ignorance by the mainstream media at the beginning of the 1980s was later transformed into a framework of enmity directed by the military when the armed conflict between the partiya karkeren kurdistane-kurdistan workers' party (pkk) and the turkish army intensified at the beginning of the 1990s. at the same time, various human rights organizations prepared reports on the murder, torture, depopulation and extrajudicial killings targeting primarily kurds, but also leftist human rights defenders, advocates, and journalists. however, mainstream newspapers, voluntarily or involuntarily, turned a blind eye to all these human rights violations (cf. waldman and çalışkan 2019, p. 389).
some journalists working for left-wing or pro-kurdish newspapers were killed by "unknown people", and the turkish government made no serious effort to investigate these killings. when demirel, then prime minister, was asked about the imprisonment or killing of journalists, he answered without shame: "those killed were not real journalists. they were militants in the guise of journalists. they kill each other." (hrw 1993, p. 18) labeling journalists who preferred to practice their profession outside the state-drawn framework as "terrorists", which erdogan often did, thus has its roots in the recent history of journalism in turkey. the fourth characteristic of the media landscape is the weakening of union rights and the increasing oppression of journalists who were union members. this de-unionization was not specific to the media sector. in fact, it was directly related to the scything of the left-wing organizations active in the 1970s as well as to the neoliberal economic policies of turkish governments in the 1980s. cooperation between the conglomerates dominating the media sector and the government facilitated this de-unionization (cf. özsever 2004, p. 159). in practice, unionized employees started to be dismissed because of their union membership, and the monopoly structure of the sector did not permit them to find employment in other media enterprises. in this time of crisis, the failure of the unions to offer support to dismissed and threatened journalists also played a role in journalists' unwillingness to become union members (cf. sözeri and güney 2011, p. 32). in brief, when the akp came to power in 2002, journalists were working without any constitutionally or legally guaranteed freedom. additionally, some political questions, such as the kurdish issue, were considered taboo and could only be treated in the way defined by the state bureaucracy and the military.
the neoliberal economic transformation in turkey also shaped journalists' working conditions and rights. the clientelistic relationship and interaction between media patrons and political leaders were accepted as natural. as this brief snapshot of historical media-state relations shows, media freedom has never fully existed in turkey. i argue that when the akp came to power, it took as its starting point the existing laws limiting the freedom of the media, the established clientelistic relations between media patrons and political leaders, and the weak protection afforded to media professionals. consecutive election victories fueled its desire and capacity to capture the whole state apparatus. in this process, controlling, using and manipulating information according to its interests played a crucial role. the akp government did not challenge the neoliberal media and did not try to nationalize private media outlets. it captured the media using new strategies. instead of giving a timeline of the akp's media policies, this section aims to classify these strategies to allow further comparative research. the classification is based on stiglitz's taxonomy, but calibrates it according to practices in a new authoritarian state, namely turkey. we can classify the akp's methods in three categories: capture by creating its own private media, capture through financial sanctions, and capture by intimidating and criminalizing journalists. at the end of the 1990s four major holding companies were dominant in the mainstream media sector in turkey: dogan, bilgin, uzan, and çukurova. according to the report presented to the media inquiry committee of the turkish grand national assembly in 2002, 84% of the daily newspapers sold in turkey belonged to the dogan, bilgin or çukurova group (cf. özsever 2004, p. 125). the 2001 economic crisis brought about a crash, especially in the banking sector, which directly affected the bilgin and uzan groups.
the tasarruf mevduatı sigorta fonu (the savings deposit insurance fund, tmsf) expropriated their media assets, including several newspapers, television and radio outlets. while these two important groups were excluded from the media sector, two new big actors joined the club of media patrons, namely doguş and ciner (cf. yeşil 2018, p. 245). although there were important differences and even some conflicting interests among these media patrons (cf. özsever 2004, p. 122), they had at least two things in common. firstly, as supporters of neoliberal policies, they were opposed to any kind of journalist organization. secondly, they were not members of the akp's close circle. during its first term the akp government tried to influence the mainstream media dominated by these groups using a carrot and stick strategy, like previous governments. at the same time, some other media groups with islamist backgrounds, such as ihlas, albayrak, and the gülen movement's newspapers and tv channels, were becoming richer (cf. özsever 2004, p. 120). (the gülen movement is a transnational islamic order named after the us-based islamic cleric fethullah gülen. established at the beginning of the 1970s in i̇zmir, turkey, the movement gradually became involved in education, politics and the bureaucracy in turkey (cf. şık 2017). the partnership or strategic alliance between the akp and the gülen movement was obvious from 2002 to 2010. after a hidden confrontation between them from 2010 to 2013, akp leaders, especially erdogan, started to attack the gülenists (cf. taş 2018, pp. 397-401). finally, after the failed coup d'état, the akp declared the gülen movement a "fetullahist terrorist organization/parallel state organization".) however, the circulation numbers and ratings of these partisan media outlets were limited. the akp soon realized that it had to develop a new ownership profile to gain more direct control over the management of newspapers and tv channels and thereby influence the "construction of public knowledge". businessmen from erdogan's inner circle who had started to prosper under the akp government had to be encouraged and, if necessary, pushed to buy mainstream media outlets. to create a partisan mainstream media, the akp used two main strategies. the first strategy was confiscation followed by a manipulated tender. this scenario first played out with the star (a daily newspaper) and kanal 24 (a news channel), which had belonged to the uzan group before the 2001 crisis. in 2004 the tmsf confiscated different media outlets belonging to the uzan group, including star and kanal 24 (cf. hürriyet 2004, february 14). in 2007 both outlets were sold to a businessman through a public tender organized by the tmsf. however, there was another change of ownership and they were sold on to the akp-friendly sancak holding group (cf. t24 2009, may 20). the second scene was played out in the same year. this time the tmsf confiscated from the ciner group atv and the sabah newspaper, two of the biggest mainstream outlets at the time, on the grounds that ciner and its previous owner, the bilgin group, had signed a secret protocol. the timing of this decision was very meaningful because it came less than 4 months before the 2007 general elections. neither atv nor sabah had a critical position, but they were not under the full control of the government and their ratings and circulation were high. so it was advantageous for the government to silence two important mainstream news outlets for the 4 months before the elections. the akp managed to gain the majority in the 2007 elections once again. after the elections, the tmsf organized a public tender, and the only company to make an offer was çalık holding, whose then chief executive officer was berat albayrak, erdogan's son-in-law (cf. diken 2014b, february 13).
for the third act of the confiscation method, in 2013 the tmsf confiscated different companies belonging to the çukurova group, including digitürk, show tv, skytürk 360, and akşam. in the same year, sky türk, akşam, güneş and some other radio channels and magazines were also sold to sancak holding. the akp's second strategy for changing the profile of media ownership has been to apply tax mobbing to media owners and force them out of the media market. the most obvious example was that of the dogan media group, which had controlled more than half of the domestic non-state media market since the mid-1990s. the problems between the dogan group and the akp started with the deniz feneri trial launched in germany. the allegations against the islamist deniz feneri charity were that part of the money it collected was used for other purposes. the indictment cited some members of the akp, including the then prime minister erdogan. when the dogan media carried the trial on their front pages, erdogan attacked the dogan media and blamed the media patrons for acting like prosecutors and courts (cf. dw 2008, september 13). then, suddenly, the ministry of finance decided to audit dogan's financial records for the previous three years (2005, 2006, and 2007) and fined the group first $500 million and later $2.5 billion for tax irregularities found during the audit (cf. arsu and tavernise 2009, september 9). the first consequence of these fines was that the gözcü newspaper, known to be critical of the akp, had to shut down, and some critical voices at the mainstream papers hürriyet and radikal were dismissed. soon afterwards, two important newspapers owned by the dogan group were sold to karacan and demirören holdings (bloomberg ht 2011, april 20).
although the dogan group started to decrease its share in the media sector and reduced the level of its anti-government criticism, it faced a tacit threat through a continuing criminal case "on charges of 'establishing an organization for the purpose of criminal activity', forging official documents and violating turkey's anti-smuggling law" (ipi 2016, july 8). finally, in 2018, the dogan family decided to leave the media business and to sell all its media outlets, including hürriyet, kanal d, and cnn türk, to a conglomerate whose majority shareholder is erdogan demirören, a businessman with close ties to erdogan (cf. bucak 2018, may 29). in brief, over the last 13 years, using these two strategies the akp government has almost completely changed the ownership profile of the mainstream media. today, 90% of the mainstream media in turkey is under the direct control of the akp government, or more precisely, of president erdogan. the akp government has also used financial sanctions to discipline those media outlets that it cannot completely capture. the first of these methods targets the advertising revenues of media companies. this kind of sanction often does not immediately attract attention, but it is very effective. since most media outlets depend on advertising revenue, advertising remains a control tactic for governments, especially in non-democratic regimes. the media outlets in turkey have two sources of advertising revenue: public announcements and advertisements, and private advertisements. the first group is distributed by the state-run press bulletin authority (basın i̇lan kurumu; bi̇k), which is controlled by the government. bi̇k tends to use official adverts and announcements as carrots or sticks, depending on the position of the newspapers and their owners' relations with the government.
according to data compiled by yanatma, within two years the value of official advertisements given to the akşam newspaper increased from tl 5.6 million (in 2013) to tl 8 million (in 2015) after the tmsf took control of the newspaper and sold it to a businessman loyal to president erdogan. in the same period, due to the confrontation between the akp and the gülen movement, official adverts and announcements given to the zaman newspaper (the flagship newspaper of the gülen movement) decreased from tl 13.5 million to tl 5 million (cf. yanatma 2016, p. 56). a recent example, the dropping of all official adverts and announcements given to evrensel and birgün, two newspapers known for being leftist and critical of the government, illustrates how bi̇k can also act as a punitive institution (cf. köylü 2020, february 5). given that official adverts constitute an important source of revenue, especially for small national newspapers, we can argue that the government aims to silence even minuscule alternative voices through the bi̇k. the second group, private advertisement revenues, is also not free from governmental pressure. in his very detailed report, yanatma studies data on advertising space (in square centimeters) and analyzes the distribution of advertising by six major public firms (the state-run banks halkbank, ziraat bankası, and vakıfbank; turkish airlines, turkcell, and turk telekom). after a quantitative analysis of advertisement distribution, the author concludes "that the only criterion for public firms advertising distribution is their coverage of the government and circulation did not play any role in this allocation. while these apparently partisan and unfair practices were apparent before 2013 they subsequently turned into a more direct way to punish or support individual newspapers" (yanatma 2016, p. 36). during my field research, one of the interviewees, who works as a manager at a private tv channel (see footnote 2), also referred to an "advertising embargo" by public firms.
they argued that once the minister of treasury and finance berat albayrak and some partisan columnists known to have close relations with erdogan directly criticized and targeted them, an embargo followed. they noted that negative or critical speeches by high-level akp members also spread the message to other firms. they added that private firms wanting to keep up good relations with the government stop advertising with these media outlets, so that this kind of advertising embargo by public firms also has an indirect effect on total advertising revenue. the other method of financial sanction is more direct and is applied by the radio and television supreme council (rtük) (see footnote 3). rtük can impose broadcasting bans or heavy penalties on broadcasters for reasons only vaguely defined by law, and this can easily become a powerful stick in government hands. recent experience shows that the government, or more often erdogan himself, has not hesitated to use rtük as a stick against any critical voice. in response to a parliamentary question from the main opposition party chp (cumhuriyet halk partisi; republican people's party), rtük revealed that it had imposed more than 16,000 sanctions on the media and tl 250 million (nearly $45 million) in fines over the 8 years from 2010 to 2018 (cf. güneysu 2019, march 30). the reasons given for these fines include what was defined by turkey's top media watchdog as "targeting, insulting, and threatening turkish president erdogan, in addition to calling for a military coup against the constitution" (cf. ipa 2018, december 27) or critical reporting on the government's efforts to combat the covid-19 crisis (hürriyet 2020, april 15). whatever the content, as underlined by erol önderoglu, the representative of rsf in turkey, it is clear that alternative media outlets that reject the official discourse are exposed to broadcasting bans and heavy fines (cf. hacaloglu 2020, april 17).
additionally, for the first time in the history of rtük, the media ombudsperson faruk bildirici, selected from the chp quota, was dropped from the council because he criticized almost all the decisions it made and made his criticisms public via social media (cf. bianet 2019). this illustrates that the council is reluctant to debate its fines and sanctions and is intolerant of opposition. media patrons and institutions have not been the akp's only targets. at the same time, it developed a three-step strategy of intimidating journalists trying to do their job. during its first term in office, before full control of the media through ownership had been achieved, members of the akp government or their representatives made direct contact with the chief editors of tv channels and newspapers. daily newsroom meetings increasingly took their instructions into account. one of my interviewees (see footnote 4), who was working in one of the most important news channels in this period, noted that the instructions on which news items to cover started very early. one example was the pamukova train accident in 2004, where journalists were permitted to cover the tragedy only from limited perspectives and could not ask questions about the responsibility of the ministry and high-level bureaucrats. the more the akp reinforced its position within the state bureaucracy over consecutive election victories, the more direct and effective these instructions became. (footnote 2: based on an interview conducted by the author on october 11, 2019 in istanbul. footnote 3: rtük is composed of nine members who are elected by the turkish parliament from nominees on the basis of the number of mps of each political party. as the appointment process of its members is not transparent, the council is often criticized for its lack of independence; cf. media ownership monitor 2019.)
journalists and correspondents who had good relations with the prime minister were appointed to the positions of chief editor and helped to establish a "red phone line" between the newsroom and the office of the prime minister (cf. dagıstanlı 2014, p. 102). for example, nermin yurteri, who had been the correspondent following erdogan, was appointed news coordinator of ntv (cf. t24 2011, december 15), and fatih saraç (see footnote 5) became a member of the management board of the ciner media group (cf. t24 2014, february 6). another interviewee (see footnote 6) said that there were rumors about whatsapp groups among the new directors of different newspapers and tv channels, set up by representatives of the prime ministry or presidency. it is also claimed that government representatives can share on these whatsapp groups a complete news article ready to be "copied and pasted in next day's paper" (özer 2018). the second step for controlling newsrooms was to dismiss journalists who resisted instructions from their chiefs or coming directly from the government. it is difficult to establish the real number of journalists fired from newsrooms for their critical points of view or their refusal to apply instructions dictated by the government. a list of sacked journalists, editors and correspondents prepared by turkey's journalists union during the gezi protests alone includes seventy-two names (cf. muhalefet 2013, july 22). the visible part of this iceberg includes famous columnists, chief editors and anchor persons, such as banu güven, hasan cemal, ayşenur arslan, and ruşen çakır (cf. arsan 2013), who resigned or were dismissed as they could no longer defend the essence and ethics of journalism. although the mainstream media have been captured by the government step by step, "alternative" or "critical" newspapers and tv channels try to survive against legal and financial sanctions. the third step of intimidation by the government has targeted journalists working in these niche newsrooms (see footnote 7).
the akp's strategy has been based on the illiberal clauses of the press law, the penal code and the anti-terror law, which can be applied very broadly according to the desire of the judiciary. the first large-scale judicial attack on journalists started with the ergenekon case. this began as an investigation into a "deep state" military-led plot to overthrow the akp government, but was soon transformed into a witch-hunt targeting adversarial voices in the media and non-governmental organizations (cf. steinvorth 2013, august 12). in this period, journalists came up against two categories of charges. firstly, an important group of journalists who were following the ergenekon trials were charged, based on articles 285 and 288 of the penal code, with "violating the secrecy of an ongoing trial". a second group of journalists, including famous names such as mustafa balbay, tuncay özkan, soner yalçın, ahmet şık, and nedim şener, was accused of colluding with the so-called ergenekon ultra-nationalist conspiracy and was jailed pending trial (cf. yeşil 2016, pp. 99-100). as underlined by one of the interviewees (see footnote 8), this harsh judicial suppression orchestrated by the akp's proxies in the judiciary constituted a breaking point and led to the criminalization of all journalists who do not toe the line. from this moment on, journalists who speak truth to power realized in concrete terms that not only their jobs but also their freedom was at risk. (footnote 4: based on an interview conducted by the author on october 31, 2018 in istanbul. footnote 5: fatih saraç is an islamist businessman who had close relations with erdogan; cf. diken 2014a, february 7. footnote 6: based on an interview conducted by the author on october 29, 2018 in istanbul. footnote 7: based on an interview conducted by the author on april 26, 2019 in istanbul.)
after the ergenekon case, in 2011, 36 journalists working for pro-kurdish media outlets, namely the dicle news agency and the daily özgür gündem, were arrested and charged with membership of the kck (koma civaken kurdistan; kurdistan democratic communities union) and the pkk, within the framework of police operations that continued for six months and also targeted administrative staff of the peace and democracy party (bdp), then some academics, and then a large group of lawyers (cf. ergin 2011, december 28). the journalists were charged with the dissemination of terrorist propaganda and criminalized on the basis of the vague language of the anti-terror law provisions. the criminalization of journalists gained unprecedented momentum after the failed coup d'état in 2016. a series of judicial crackdowns targeted journalists with different backgrounds. the first targets were journalists working in media outlets financed by the gülen movement. ten days after the coup, arrest warrants were issued for 89 journalists suspected of links with the gülen movement, some of whom were immediately taken into police custody. more tragically, they found it difficult to get lawyers to defend them (cf. rsf 2016, july 28). on july 28, 2016, all media outlets and print houses close to the gülen movement were closed down, and their movable property as well as all real estate belonging to them was transferred to the treasury under decree no. 668. almost two years afterwards a verdict was handed down and "six former columnists for zaman were sentenced to heavy prison terms for 'membership in a terrorist organization'" (rsf 2018, july 6). the second target was the pro-kurdish newspaper özgür gündem. the suppression of the newspaper's editors and journalists was so severe that human rights defenders, academics, and journalists initiated a campaign on may 3, 2016 (world press freedom day) to protect their colleagues.
each of the 56 supporters of the campaign served for one day as substitute editor-in-chief of özgür gündem, until august 9. however, the judiciary decided to close down the newspaper temporarily on august 16, 2016 for "regularly making pkk terrorist organization propaganda and serving as a broadcasting organ for an armed terrorist organization" (akgül 2016, august 16). the next day, some journalists and workers at the daily were detained during a police raid. in addition, the prosecutor charged 38 participants in the solidarity campaign with özgür gündem with "terror organization propaganda" under article 7/2 of the anti-terror act (tmk) and with "releasing or publishing terror organizations' statements or declarations" under article 6 (akgül 2017a, march 7), and their trial started on september 20, 2016 (cf. duran 2016). two different trials related to özgür gündem lasted for more than three years. in brief, the process of jailing and trying defendants in the özgür gündem trials, as well as the penalties imposed on them, illustrates very well what journalists who do not accept the journalism framework imposed by the government can expect. the third target was cumhuriyet, one of the oldest newspapers of turkey's republican era. on october 31, 2016 turkish police detained board members, executives and some columnists following a series of raids (cf. cumhuriyet 2016, october 31). nine months after their detention, the first hearing of the cumhuriyet trials took place on july 24, 2017. the prosecution claimed that the daily had published news and articles legitimizing the july 2016 coup attempt and given support to three terrorist organizations: "fetö, pkk/kck and dhkp/c" (the last a small far-left group) (cf. akgül 2017b, july 24). the indictment presented all the news and articles criticizing the authoritarian policies of the government as support for terrorism, and the court sentenced 13 journalists to prison on terrorism charges on april 25, 2018 (cf. gall 2018).
as in previous cases against journalists, the indictment, pretrial detention and trial proceedings of cumhuriyet violated the basic human rights of the defendants. however, the symbolic nature of this trial was even more devastating, as the daily has always had an important place in turkey's journalistic history. in addition, one of my interviewees (see footnote 8) noted that the cumhuriyet trials showed that, starting from the ergenekon case, the akp government was able to create a very insecure environment and to intimidate a large proportion of journalists as well as human rights defenders through judicial coercion. they explained that it was relatively easy to organize resistance when ahmet şık was first arrested during the ergenekon case. however, they said that everybody was more reluctant to participate when they tried to organize a campaign to support ahmet şık and the other cumhuriyet journalists on trial for the second time. apart from these major trials, there have been several legal investigations against individual journalists. since the failed coup d'état, the "famous" article 301 of the turkish penal code (see footnote 9) has been put into use again against critical intellectuals, including academics, lawyers, and journalists. news covering turkey's incursion into afrin in january 2018 and military operations in syria was regarded as insulting the turkish state (cf. cengiz 2019, october 28). since 2017, the media monitoring database has registered 36 judicial interferences based on article 301 (cf. medya gözlem veritabanı 2020). turkey's renewed use of article 301 as a weapon points to the increasing suppression of journalists. (footnote 9: the original version of article 301, accepted in 2005 as part of the new penal code, made it a crime to denigrate "turkishness" and turkish government institutions.)
prosecutors used article 301 against novelists, academics and journalists, including well-known names such as orhan pamuk, hrant dink (murdered by a turkish nationalist during the period of his trial), taner akçam, and elif şafak. in 2008, a new amendment changed "turkishness" into "the turkish nation" and made it obligatory to obtain the approval of the minister of justice to file a case. taner akçam brought a case against turkey before the ecthr (european court of human rights), which found turkey guilty of violating freedom of expression (cf. akçam v. turkey 2012). additionally, in their report published at the end of 2019, the committee to protect journalists ranked turkey the second-highest jailer of journalists in the world after china (cf. beiser 2019). pretrial detention can last for months and sometimes years and has been used as a method of punishment. in 2017, several cases were brought by journalists to the ecthr, and in march 2018 the court took an important decision. on the application of şahin alpay and mehmet altan, the court held that there had been a violation of article 5 § 1 (right to liberty and security) and of article 10 (freedom of expression) of the european convention on human rights (echr) and ordered the release of these journalists from prison. however, almost immediately another warrant was issued for their arrest on new charges and they were sent back to jail (cf. ipi 2020). these latest developments show that ecthr rulings are no longer effectively applied by the turkish government. the fact that turkey is still a member of the council of europe despite its systematic violations of echr rulings creates serious doubts about the capacity of european democratic institutions to deal with new authoritarian regimes such as turkey.
on the other hand, the huge number of prosecutions after the coup attempt in 2016, the use of pretrial detention and the criminalization of any critical view under the state of emergency have increased the number of detainees in prisons and created a very unhygienic and unsafe environment in jail, especially at the time of a global pandemic. although the turkish government passed a new prison release law on april 13, 2020, the law fails to include many "political" prisoners who are already convicted or in pretrial detention, including journalists, lawyers, and political and human rights activists. the turkish government's insistence on not releasing them raises the question of whether it is a conscious choice to leave all political opponents in jail, at a high level of risk, during the covid-19 pandemic. the akp government has applied a threefold strategy to capture the mass media in turkey. firstly, it has systematically obliged owners of mainstream media outlets to exit the media sector, using the competences of diverse public institutions and imposing financial sanctions. in this way, it has changed the ownership profile of the mass media and created its own loyal and partisan media. secondly, using rtük, it has fined media outlets that it cannot fully control. thirdly, it has targeted and intimidated critical journalists through legal suppression and threats of imprisonment. different factors, such as the transformation of the political economy after the military coup d'état in 1980, the corrupt relationship between journalists and political leaders, and latent censorship on taboo topics, certainly played a role in facilitating the application of these strategies. however, the akp's methods differed in at least three respects from previous governments' strategies.
first of all, the akp was able to design its media capture strategy within a political system that it has dominated step by step thanks to its majorities in successive elections. constitutional amendments and important changes in the judicial system have also eroded judicial independence. this erosion of judicial independence opened the way to the criminalization of journalists and intensified the effects of that criminalization. secondly, instead of continuing a carrot-and-stick strategy with the existing media tycoons, the akp transformed pro-akp businessmen into media owners. this process created new media patrons whose existence and future depend largely on erdogan's consent. blind allegiance to erdogan has become the main characteristic of these new media patrons. the transformation of the ownership profile also affected the workers of media outlets at all levels. as underlined by one of my interviewees (see footnote 4), an important feature of the journalistic profession is its master-apprentice relationship. yet the coin of the realm is no longer experience in the field, but the capacity to submit. those who were able to offer their experience have voluntarily or involuntarily left the profession. the new generation of reporters, editors and columnists is familiar only with the rule of the akp, which raises important questions about the future. thirdly, ever since his very first years in government, erdogan himself has openly attacked journalists and media patrons with whom he does not have a deal. his threats directed at critical journalists ("you will pay for this") and the judicial suppression that follows these threats also point to a systemic transformation of the field. this systemic transformation amounts to a discrediting of journalism as a whole in turkey. in brief, this captured mass media environment in turkey deeply harms freedom of the press as well as citizens' right to be informed.
however, we are at the same time living in an age of digitalization of the media landscape. can social media and online news platforms play a role in solving the media freedom problems in turkey? or will, and can, the turkish government apply digital censorship through the "new social media law regulation" (see footnote 10)? further research focusing on the relevance of social media for freedom of expression and on the impact of social media regulation will permit us to think about media capture through digital censorship. funding: open access funding provided by projekt deal. open access: this article is licensed under a creative commons attribution 4.0 international license, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons licence, and indicate if changes were made. the images or other third party material in this article are included in the article's creative commons licence, unless indicated otherwise in a credit line to the material. if material is not included in the article's creative commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. to view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
gülçin balamir coşkun, phd, is an einstein fellow at the institut für sozialwissenschaften, humboldt-universität zu berlin. her ongoing project focuses on the role of media control as a symptom of democratic backsliding in the akp era.

key: cord-024569-d9opzb6m
authors: seo, mihye
title: amplifying panic and facilitating prevention: multifaceted effects of traditional and social media use during the 2015 mers crisis in south korea
date: 2019-07-26
journal: journal mass commun q
doi: 10.1177/1077699019857693
doc_id: 24569
cord_uid: d9opzb6m

in the context of the 2015 middle east respiratory syndrome (mers) outbreak in south korea, this study examines the multifaceted effects of media use considering the current complex media environment. analysis of a two-wave online panel survey found that traditional media use had a positive influence on mers knowledge while social media use did not. however, knowledge did not facilitate preventive behaviors. in contrast, negative emotional responses due to media use stimulated desirable behaviors. furthermore, social media use directly influenced behavioral responses but traditional media use did not show the same effects. different functions of traditional and social media during an epidemic are discussed.
changed that understanding considerably (petersen, hui, & zumla, 2015). this unforeseen crisis induced not only morbidity and mortality but also fear and panic in korea.
in fact, the epidemic of panic caused more widespread damage than the disease itself, slowing the economy and interfering with people's daily routines. in times of crisis, the importance of the media is heightened. government and responsible organizations consider media an essential part of crisis management (reynolds & seeger, 2005), and the public relies on the media to make sense of confusing or chaotic situations (tai & sun, 2007; zhang, kong, & chang, 2015). given the importance of media in times of crisis, scholarly attention has largely been paid to the following questions: (a) how government and other organizations work (or should work) with media to prepare for and respond to crises and (b) how the media report (or should report) on crises. relatively little attention, however, has been paid in existing research to the informational, emotional, and behavioral consequences of individuals' media use in times of crisis. as the importance of social media has risen in general, its importance in the context of crises has also increased. evidence shows that many people turn to social media to seek crisis-related information, such as safety instructions and news updates (veil, buehner, & palenchar, 2011), which stands to promote proper behavioral responses that facilitate effective crisis management. however, both researchers and practitioners caution that media, social media in particular, may create misperceptions and amplify public fears by fostering public panic and proliferating unverified information (kasperson, 1992). in comparison to traditional media, social media use is particularly susceptible to these concerns due to the enhanced speed of information transmission and the distinctive features of open-access platforms (zeng, starbird, & spiro, 2016).
in the context of the 2015 mers outbreak in korea, this study provides an empirical examination of the multifaceted effects of media use in times of crisis in the complex and dynamic media environment of today. using two waves of online panel data collected at two different time points during the mers crisis, i investigate how individuals' traditional and social media use during the crisis produced various consequences, including increased mers knowledge, negative emotions such as fear and anxiety, and direct and indirect facilitation of mers preventive behaviors. i also scrutinize the differences in these effects between traditional and social media use. the term crisis is defined as "some breakdown in a system that creates shared stress" (coombs, 2014, p. 2), which includes a very broad range of situations. an infectious disease outbreak is a typical example of a crisis in the public health context. prior research has focused on how governments or other responsible organizations can achieve positive relationships with the public in managing a particular crisis. based on the organization-public relationship (opr) approach, scholars have theorized and investigated various (pre)conditions, attributes, and communication strategies of organizations to bring about positive relational outcomes with the public, such as satisfaction, commitment, trust, and mutual understanding (s.-u. yang, 2018). with respect to the mers crisis in korea, s.-u. yang (2018) showed that the government's lack of dialogic competency negatively affected government-public relationships. those findings indicate that the korean government's lack of mutuality and openness weakened the credibility of its risk information, which produced negative relational consequences such as distrust, dissatisfaction, and the intent to dissolve the relationship.
cooperation with the media on the part of government and responsible organizations is a major part of effective crisis management (coombs, 2014). from a crisis management perspective, prior research has mainly focused on how to understand and work with the media to accomplish various goals (reynolds & seeger, 2005). for instance, researchers have identified the kinds of communication strategies that work best to reduce public-relations damage and generate compliance with desired behaviors in hazardous situations (glik, 2007). seeger, reynolds, and sellnow (2009), for example, emphasized the importance of coordinating specific communication tasks for each crisis phase in the context of hurricane katrina and the h5n1 outbreak. based on the existing literature and case studies, scholars have also attempted to provide guidelines for best practices in crisis communication (veil et al., 2011), which could also be used as evaluative criteria in crises (plattala & vos, 2012). another line of research focuses on how media channels cover crises by analyzing the content of crisis reporting and discussing its implications. as manifested by terms such as disaster marathon (liebes, 1998), the unexpected and impending nature of crises triggers media hype, which produces an enormous volume of reporting. much research has investigated the characteristics of crisis coverage (shih, wijaya, & brossard, 2008). shih and colleagues (2008) found that the coverage of epidemics showed common patterns across discrete diseases, such as a strongly event-based focus and an emphasis on newly identified cases and government actions. with respect to the mers crisis in korea, jin and chung (2018) performed a semantic network analysis of korean and foreign media coverage of the crisis.
they examined the most frequently used words (e.g., patient, hospital, infection, government, and case) and concluded that korean media focused heavily on the number of cases and the government's responses, consistent with shih and colleagues' findings (see kwon, 2016, for similar findings) . based on content analyses of crisis reporting, past research has also identified persistent problems in crisis reporting, such as excessiveness (rezza, marino, farchi, & taranto, 2004) , inaccuracy (auter, douai, makady, & west, 2016) , and sensationalism (moeller, 1999) . korean media's mers reporting was not exempt from sensational and excessive coverage of the contagious nature of the disease and patient counts (kim, 2016; kwon, 2016) . as population mobility and trade in goods and services have increased, newly emerging infectious diseases have become global public health concerns. some emerging infectious diseases have derived from a known infection, such as influenza, and have spread into new populations. the mers outbreak in korea can be understood as one such example. an outbreak of infectious disease causes not only human casualties but also massive economic harm. different from chronic health risks, infectious pandemics trigger spontaneous and intense media attention (posid, bruce, guarnizo, taylor, & garza, 2005) , which could create cascading effects in various public responses. however, relatively little empirical research has considered the various consequences of media use by individuals during a public health crisis. one of the most desirable public responses to a public health crisis is engaging in preventive behaviors (mitroff, 2004) . public adoption of precautionary behaviors is critical to preventing large outbreaks of infectious disease, particularly in densely populated countries such as korea. 
people need to behave in ways that prevent the spread of infectious disease and its consequences, and the media plays an important role in facilitating those behaviors (gammage & klentrou, 2011; zhang et al., 2015). learning from media is one potential pathway to engagement in preventive behaviors (sayavong, chompikul, wongsawass, & rattanapan, 2015). besides cognitive responses, another way to galvanize preventive behaviors could be through emotional responses, which could alarm people enough to take proper actions with respect to a given risk. research on risk and health communication has offered various theoretical models and empirical evidence for each approach (boer & seydel, 1996; griffin, dunwoody, & neuwirth, 1999). yet, there has been little research testing and comparing the two potential paths to preventive behaviors in the context of a pandemic crisis. first, media use could increase knowledge about a crisis, which could stimulate the public to enact preventive behaviors. the heavy emphasis on knowledge is largely drawn from the traditional knowledge deficit model of communication (rutsaert et al., 2014), which claims that a lack of understanding is the major obstacle to reasonable public responses to a risk or crisis. therefore, it accentuates the transfer of scientific knowledge from experts to laypeople, and the media have been regarded as a major conduit of that transfer (hilgartner, 1990). thus, when a crisis happens, government and responsible organizations attempt to work with media to disseminate crisis-related information, and the general public turns mainly to the media to acquire the information needed to deal with the atmosphere of uncertainty. despite that widespread expectation, relatively little empirical attention has been given to whether public crisis knowledge is indeed increased by media use or whether understanding of a crisis indeed facilitates preventive behaviors in times of crisis. 
media is known to be more suitable for diffusion of knowledge than other channels, such as interpersonal communication (price & oshagan, 1995). prior research shows that media use increases health knowledge in the general public, which in turn encourages desirable health behaviors (gammage & klentrou, 2011; sayavong et al., 2015). little empirical evidence, however, has been collected in the context of urgent public health crises such as epidemics. in contrast, disaster studies have extensively examined individual and group responses to impending threats (e.g., natural disasters or terrorist attacks). according to that body of research, in the face of an impending threat, people become more sensitive to cues about social environments and engage in searches for information as a basis for protective behaviors (lindell & perry, 2004). however, despite the known contribution of media channels to these disaster research models, media variables have received insufficient attention in explaining the behavioral responses of individuals to crises. second, media use could stimulate proper behavioral responses via mobilizing information (mi). in the health communication literature, mi is designed to encourage a specific health behavior (friedman & hoffman-goetz, 2003). applied to a crisis context, mi offers specific "how to" and "where to" information, such as checklists for preparedness supplies, evacuation information, phone numbers or websites for further information, or specific instructions for precautionary behaviors, meant to encourage people to take specific actions (tanner, friedman, & barr, 2009). facing a crisis, the public needs to learn about both the nature of the crisis and how to mitigate its effects and defend themselves (guion, scammon, & borders, 2007). a handful of prior studies in the communication discipline have documented the direct and indirect effects of media use on preventive behaviors via knowledge in a crisis situation. 
ho (2012), for example, found that attention to newspaper and television news increased public knowledge about the h1n1 pandemic. zhang et al. (2015) also found that media use potentially influenced h1n1 preventive behaviors through fear and perceived knowledge. the results of 36 national surveys in the united kingdom indicate that exposure to media coverage or advertising about swine flu increased the adoption of recommended preventive behaviors (g. rubin, potts, & michie, 2010). lin and lagoe (2013) also showed that tv and newspaper use increased h1n1 risk perception and vaccination intent in media users. based on those discussions and findings, it is expected that media use will facilitate public understanding of an emerging infectious disease and encourage appropriate precautionary behaviors. media use in large-scale emergencies, however, still requires empirical scrutiny because of the many unexpected twists that characterize fluid crisis situations. human beings facing a crisis often experience a range of negative emotions. the intense uncertainty inherent to a crisis situation galvanizes fear, worry, and panic (sandman & lanard, 2003). in the outbreak of an unfamiliar contagious disease, the unknown cause and fatal outcome, the interruptions of daily routines, and stigma could all strengthen negative emotional responses (lee, kim, & kang, 2016). prior research indicates that individuals often feel more threatened during crises than is warranted by the actual risk level (combs & slovic, 1979). according to social amplification theory, the risk people feel when facing a crisis could be amplified or weakened by exchanging various forms of information via the news media or informal networks (renn, burns, kasperson et al., 1992). once a perceived risk is officially acknowledged, the distortion and exaggeration of information tends to follow (song, song, seo, jin, & kim, 2017), feeding a range of negative emotional responses. 
prior research has found that the media tends to overemphasize risk and sensationalize crises. for instance, the media overstresses the horrific symptoms of contagious diseases regardless of facts about the prevalence of those symptoms in a time of outbreak (moeller, 1999; ungar, 2008) . the media also tends to focus more on the spread of a disease and the body counts rather than scientific causes (d. rubin & hendy, 1977) . these types of sensationalism can produce disproportionate public fear and panic responses to infectious diseases. scholars such as muzzatti (2005) have gone a step further and demonstrated that the media can actually manufacture threats to public health. some prior works have examined the effects of media use on emotional reactions toward a crisis. hoffner, fujioka, ibrahim, and ye (2002) , for instance, found that people who learned about the september 11 terrorist attacks through mass media were more likely to report negative emotions than those who heard the news interpersonally. they attributed that difference to the nature of the live pictures and content in mass media crisis reporting. people's negative emotional responses as influenced by media use could lead to inappropriate behaviors, such as avoidance of precautionary behaviors or unnecessary or excessive behavioral reactions (liu, hammitt, wang, & tsou, 2005) . however, other convincing literature has claimed that emotions can trigger behavioral responses that are benign and adaptive (baumeister, vohs, dewall, & zhang, 2007) . negative emotional experiences can stimulate people to seek pertinent information (e.g., the risk information seeking and processing (risp) model, griffin et al., 1999) and encourage them to take preventive actions (e.g., protection motivation theory [pmt], boer & seydel, 1996) . 
unlike the cognition-based approach, which emphasizes the role of knowledge, these theoretical models focus on how negative emotions work as motivational drivers to guide people to protect themselves. empirical work has supported those claims by showing that negative emotion can drive positive behavioral responses, including adopting recommended health behaviors (ruiter, abraham, & kok, 2001) and engaging in information seeking (z. j. yang & kahlor, 2013) . as discussed, korean media coverage of the mers crisis did not deviate wildly from the patterns reported in the literature (kim, 2016; kwon, 2016) . the terms most frequently and centrally mentioned by the major news outlets mainly related to the contagious nature of the disease and patient counting (kwon, 2016) . sensational reporting and delivering government press releases without critical validation were also characteristic of the mers coverage (kim, 2016) . in addition, poor government handling of that crisis reduced the credibility of the information it provided (s.-u. yang, 2018) . against that backdrop, it is worth investigating the potential association between media use related to mers and the negative emotional responses of media users to determine the extent to which they influenced the behavioral responses of those users. along with traditional media channels such as television and newspapers, the importance of social media increases during large-scale events. these trends are particularly salient in the context of crises, which are traditionally marked by high levels of information seeking by the general population. evidence shows that people turn to social media during times of crisis to find information about safety instructions, news updates, and damage reports. increasingly, the public expects even official agents to respond to public requests via social media in times of emergency, concurrent with traditional crisis management (veil et al., 2011) . 
these heightened expectations are due in large part to perceptions of the benefits of social media for crisis communication. it is believed that social media can accelerate information dissemination in crisis situations by linking end users directly to critical information sources in real-time (hughes & palen, 2012) . during the 2009 h1n1 virus outbreak, people exchanged information and experiences through social media (chew & eysenbach, 2010) . social media, such as twitter, has been used to quickly share initial information and updates during various types of crises, as well as to encourage specific actions, such as volunteering and precautionary behaviors (potts, 2014) . the presence of social media as an information source becomes salient when traditional media provides limited information. for instance, tai and sun (2007) showed that, during the severe acute respiratory syndrome (sars) epidemic in 2003, chinese people turned to the internet and message services to find information unavailable from the traditional media, which operate under close censorship by the chinese government. likewise, at the beginning of the mers crisis, the korean government withheld the list of hospitals affected by mers, and the mainstream media adhered to the information embargo requested by the government. limited access to needed information fueled fear, so people turned to social media instead of traditional media outlets (kim, 2016) . social networking service (sns) users are likely to be exposed to messages that affect their perceptions and their use of preventive behaviors related to health risks and the crisis event. vos and buckner (2016) found that sns messages in the 2013 outbreak of the h7n9 virus consisted mainly of sense-making messages to educate users about the nature of the risk and efficacy messages to encourage appropriate responses. 
in a similar vein, yoo, choi, and park (2016) found that receiving information about mers through sns directly encouraged mers preventive behavioral intentions. social media is filled not only with information but also with emotional expressions (chew & eysenbach, 2010). negative emotions are more likely than positive emotions to circulate on social media during an infectious outbreak, which could increase risk perception (choi, yoo, noh, & park, 2017). for instance, song et al. (2017) found that negative emotions (e.g., anxiety or fear) prevailed in online boards and social media during the mers crisis. often, information intertwined with emotional markers travels around social media, and emotionally charged messages are more likely than purely informational posts to be shared and diffused (pfeffer, zorbach, & carley, 2014; stieglitz & dang-xuan, 2013). s. choi, lee, pack, chang, and yoon (2016) mined internet media reports about mers in 2016 and examined their effects on public emotion expressed online, which they captured using a sentiment analysis. they mined all mers-related news articles from 153 media companies, including the comments about each one. in a time-series analysis, they found a flow from internet media to public fear. according to song et al. (2017), negative emotion accounted for 80% of all posts throughout online networks, especially twitter, during the mers crisis. therefore, i have drawn the following four hypotheses. i expected that both traditional and social media use in times of crisis could directly and indirectly (via mers knowledge) facilitate preventive behaviors and could heighten negative emotional responses to the mers situation. emerging social media channels have reshaped the crisis information context, which potentially complicates the relationships among media use and informational, emotional, and behavioral responses in the general public. 
although social media has been considered a powerful tool for disseminating information, both researchers and practitioners have cautioned against the propensity of social media to proliferate inaccurate data, unverified rumors, and even malicious misinformation (zeng et al., 2016). because information disseminated through social media is often unverified, identifying accurate data and valid sources can be challenging, which could both undermine relevant knowledge and exacerbate negative emotional and behavioral responses. together with the characteristics of open-access platforms, this concern is heightened by the dynamic nature of crisis communication and information overload in times of crisis (zeng et al., 2016). social media networks are mostly composed of acquaintances and thus share common characteristics with interpersonal channels. prior works have shown that interpersonal channels tend to show stronger effects on behavioral change than traditional media, mainly through normative pressure (price & oshagan, 1995). in addition, social media generally deals with a rapid exchange of information, which could be more appropriate for short and clear mi than for scientific knowledge about a crisis. for instance, won, bae, and yoo (2015) conducted an issue word analysis of sns messages during the mers outbreak and reported the six most frequently mentioned words on sns at that time. two of the six words were mask and hand sanitizer, which are clearly related to preventive behaviors. the other four words were hospital, infection, coughing, and checkup, which are also at least indirectly related to preventive behaviors. however, a big data analysis examining online news sites found that posts about symptoms, government reaction, disease treatment, business impacts, and rumors ranked higher than prevention-related information (song et al., 2017). 
therefore, the effects of social media use on crisis knowledge and preventive behaviors might differ from those of traditional media. in terms of emotional responses, social media and traditional media might also show some differences. social media is known for its ability to spread information in a speedy and viral manner, but that information tends to be integrated with emotions. furthermore, emotionally charged messages are more likely to be shared than neutral messages (pfeffer et al., 2014; stieglitz & dang-xuan, 2013). recent works on the mers crisis have shown that about 80% of the buzzwords on social media during the mers crisis were negatively charged emotional words such as fear and anxiety (s. choi et al., 2017; song et al., 2017). those works suggest that social media might be more likely than traditional media to magnify negative emotional responses. by contrast, some prior works have found that people use social media for social support in crisis situations (y. choi & lin, 2009), a phenomenon that could lessen negative emotional responses in social media users. therefore, it is worth investigating how the cognitive, emotional, and behavioral responses to social media use differ from those to traditional media use, which raises the following research question: are there any differences between traditional media use and social media use in terms of cognitive, emotional, and behavioral responses during the mers crisis? the proposed research model is based on the preceding discussion (see figure 1). specifically, i expect that both traditional and social media use related to mers was associated with mers knowledge, and that it directly and indirectly encouraged mers preventive behaviors in media users. i also expect that traditional and social media use about the korean mers crisis stimulated negative emotional responses, which in turn influenced both precautionary and panic behaviors in media users. 
finally, i examine whether social media use showed informational, emotional, and behavioral consequences similar to those of traditional media use. the data for this study were obtained from a two-wave online panel survey. wave 1 survey was conducted in the first few days of june 2015, when hype about the mers situation was escalating, and wave 2 survey was conducted about 1 month later. the respondents were adult members of an online panel recruited and maintained by a survey company in seoul, korea. about 1 million online users compose the company's panel, and participants were randomly drawn from that list to meet the specified constraints (e.g., areas of residence based on census composition). the selected panel members received a soliciting e-mail from the company and had full liberty to decline participation. the first page of the online survey included an explanation of the study purpose and researcher contact information. study participants gave informed consent by clicking the start survey button. they were free to stop taking the survey at any time, and they received small monetary incentives (e.g., cash-equivalent points) from the company. a total of 1,187 members of the online panel participated in wave 1 survey. about 1 month later, the same respondents were recontacted by the survey company to participate in wave 2 survey. the number of participants who completed both waves 1 and 2 online surveys was 969, which is the final subject sample used for the analyses herein. the final sample was 50.7% male, with a mean age of 43.88 years (sd = 12.67). with regard to level of education, 50.5% of the sample had some level of college education or a bachelor's degree, 17.1% had a 2-year college degree, and 20.9% of the sample had some level of high school education or were graduates of high school. the median monthly household income was between 3,500,000 and 4,000,000 won. of the total sample, 64.4% were married, with an average of 1.7 children (sd = .84). 
traditional news use. traditional news use of mers-related content was measured based on media use frequency questions. specifically, on a 7-point scale (1 = never to 7 = very often), respondents were asked to report how much they relied on tv news or tv news websites and on newspapers and newspaper websites in the context of the korean mers crisis (r = .37, m = 4.20, sd = 1.25). social media use. social media use of mers-related content was measured using three 7-point scales (1 = never to 7 = very often) in wave 1 survey. the specific use behaviors were (a) seeking mers-related news or information using social media, (b) receiving mers-related news or information from others via social media, and (c) talking about mers with others via social media (α = .79, m = 3.72, sd = 1.48). negative emotional reactions to the mers situation. negative emotional reactions to the mers situation were measured in both waves 1 and 2 surveys. on a 7-point scale (1 = not at all to 7 = feel strongly), respondents were asked to report how strongly they felt fear and anxiety about the mers situation (wave 1, r = .85, m = 5.42, sd = 1.47; wave 2, r = .85, m = 5.08, sd = 1.53). mers knowledge. mers knowledge was measured using six quiz-type questions in both waves 1 and 2 surveys. knowledge items covered mers symptoms, lethality, maximum latent period, and institutions where a formal diagnosis of mers could be made during the outbreak. for each question, correct answers were coded as 1, and incorrect answers were coded as 0. subsequently, the answers to those six questions were combined and used as the mers knowledge measure (wave 1, with scores ranging from 0 to 6, m = 3.09, sd = 1.17; wave 2, with scores ranging from 0 to 6, m = 3.75, sd = 1.26). mers preventive behaviors. mers preventive behaviors were measured in wave 2 survey using five items. 
on a 7-point scale (1 = never to 7 = very often), respondents were asked how often they engaged in behaviors to prevent mers: wearing a mask when they were out, washing hands frequently, avoiding contact with people with mers symptoms, avoiding hospitals, and avoiding daily activities such as grocery shopping to avoid crowds (α = .77, m = 4.18, sd = 1.43). before running the path model to test the hypotheses, a confirmatory factor analysis (cfa) was conducted to test the validity of the measurement model. the two knowledge measures were excluded from the cfa because they were composed entirely of dichotomous items. model modification was not utilized: χ2(67) = 301.95, p = .00; normed fit index (nfi) = .95; incremental fit index (ifi) = .96; comparative fit index (cfi) = .96; root mean square error of approximation (rmsea) = .06 with 90% confidence interval (ci) = [.05, .07]. factor loadings ranged from .51 to .98; two factor loadings were rather low (.56 for one of the mers preventive behaviors and .51 for the newspaper use item). however, all ave (average variance extracted) values were higher than .50, and the factor loadings were still within the acceptable range (fornell & larcker, 1981). therefore, all items were retained. path analysis was conducted to test the hypotheses with latent variables and the two observed knowledge variables. the tested model included two control variables from wave 1 data. specifically, mers knowledge and negative emotions about the mers situation measured in wave 1 controlled each of the wave 2 variables accordingly. the research model showed a reasonably good fit according to commonly used criteria (hu & bentler, 1999): χ2(90) = 359.76, p = .00; nfi = .96; ifi = .96; cfi = .96; rmsea = .056 with 90% ci = [.050, .062]. figure 2 presents the overall results with the path coefficients of the hypothesized model. h1 predicted that the more individuals use traditional news media (h1a) and social media (h1b), the more mers knowledge they would acquire. 
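the reliability and validity figures above (e.g., α = .77 for the five preventive-behavior items, ave values above .50) follow standard formulas. a minimal sketch of both computations, using illustrative response data and illustrative factor loadings rather than the study's actual survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def average_variance_extracted(loadings) -> float:
    """fornell-larcker ave: mean of the squared standardized factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# illustrative 7-point responses (200 respondents x 5 items), not the study's data:
# a common "trait" plus small item-level noise yields positively correlated items
rng = np.random.default_rng(1)
trait = rng.integers(1, 8, size=(200, 1))
responses = np.clip(trait + rng.integers(-1, 2, size=(200, 5)), 1, 7).astype(float)

alpha = cronbach_alpha(responses)
ave = average_variance_extracted([0.56, 0.75, 0.80, 0.85, 0.90])  # loadings are illustrative
print(round(alpha, 2), round(ave, 2))  # ave here is 0.61, above the .50 criterion
```

the fornell-larcker criterion used in the paper (ave > .50) means that, on average, a latent factor explains more than half of the variance in its indicators.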
as expected, mers knowledge increased with increasing use of mers-related traditional news media (β = .13, p < .01), which supported h1a. on the contrary, mers-related social media use was not significantly associated with mers knowledge (β = −.04, ns), which did not support h1b. therefore, h1 was only partially supported. h2 predicted that two types of media use would be positively associated with negative emotional responses to the mers situation. results showed that the more traditional media one used, the more anxious and worried about mers one became (β = .09, p < .05), which supported h2a. however, the association between mers-related sns use and negative emotions was not significant (β = .05, ns), which failed to support h2b. therefore, h2 was also partially supported. the next set of hypotheses predicted positive effects of mers knowledge (h3a) and negative emotional responses (h3b) on mers preventive behaviors. as predicted, negative emotional response was positively related to engagement in mers preventive behaviors (β = .54, p < .001). on the contrary, mers knowledge was not significantly associated with mers preventive behaviors (β = .05, ns). therefore, h3 was also partially supported. the final set of hypotheses expected that both traditional news media use (h4a) and social media use (h4b) would have direct positive effects on mers preventive behaviors. results showed that only social media use had direct positive effects on mers preventive behaviors (β = .23, p < .001), not traditional media use (β = .05, ns). 
note (figure 2). dotted arrows denote paths that are not statistically significant or only marginally significant. the numbers presented in figure 2 are the standardized path coefficients. all exogenous variables, including mers knowledge (w1) and negative emotions (w1), were correlated with one another. no error term was correlated. w1 = wave 1; w2 = wave 2; mers = middle east respiratory syndrome. *p < .05. **p < .01. ***p < .001. 
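the direct and indirect (via mers knowledge) paths in a model like this combine by the usual product-of-coefficients rule for simple mediation. a minimal sketch, plugging in the standardized estimates reported above for traditional media use (β = .13 to knowledge, β = .05 from knowledge to behavior, β = .05 direct):

```python
def decompose_effects(a: float, b: float, c_prime: float):
    """simple mediation decomposition: x -> m (a), m -> y (b), x -> y direct (c').
    returns the (indirect, total) effects under the product-of-coefficients rule."""
    indirect = a * b
    return indirect, c_prime + indirect

# standardized coefficients for traditional media use from the path model above
indirect, total = decompose_effects(a=0.13, b=0.05, c_prime=0.05)
print(round(indirect, 4), round(total, 4))  # → 0.0065 0.0565
```

the tiny indirect effect illustrates why the knowledge route contributed little here: with the knowledge-to-behavior path near zero, even a significant media-to-knowledge path yields a negligible mediated effect.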
finally, rq1 asked whether traditional media use and social media use show any differences in terms of cognitive, emotional, and behavioral responses. results showed that individuals during the mers crisis responded differently depending on the type of media used. with respect to cognitive responses, as reported above, only traditional media use was positively associated with mers knowledge. likewise, in terms of emotional responses, traditional news media use showed a positive association with negative emotional reactions. with respect to behavioral responses, only social media use showed significant positive direct effects on mers preventive behaviors. wald tests comparing the path coefficients confirmed that the difference between the effects of social media and traditional media on behavioral responses (wald test = −6.51, p < .001) was statistically meaningful. the wald test also confirmed that the difference in cognitive response between social and traditional media use was statistically significant (wald test = −2.29, p < .01). despite a consensus that the media play an important role in times of crisis, the informational, emotional, and behavioral effects of traditional and social media use by individuals have been understudied in previous research. using two sets of data collected at two different time points during the 2015 mers crisis in korea, i investigated how traditional and social media use influenced mers knowledge, fear and anxiety about the mers situation, and adoption of preventive behaviors. to reflect the complexity of today's media environment in times of crisis, i focused on potential differences between the effects of traditional and social media use. first, this study found that traditional media use and social media use have different effects. as expected, traditional media use galvanized public understanding of mers. 
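a wald comparison of two path coefficients, as used above, reduces to a z statistic on the coefficient difference. a minimal sketch under the simplifying assumption of independent estimates; the coefficients and standard errors below are hypothetical for illustration, since the paper reports only the resulting wald statistics:

```python
import math

def wald_z(b1: float, se1: float, b2: float, se2: float) -> float:
    """z statistic for the difference between two path coefficients,
    assuming independent estimates (cov(b1, b2) = 0)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# hypothetical standardized coefficients and standard errors
z = wald_z(b1=0.05, se1=0.03, b2=0.23, se2=0.03)
print(round(z, 2))  # → -4.24
```

a negative z here, as in the paper's reported wald tests for behavioral responses, simply reflects the ordering of the comparison (traditional media coefficient minus social media coefficient).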
the more people read newspapers and watched tv news about mers, the more knowledge they acquired about mers symptoms, lethality, and maximum latent period. social media use neither increased nor decreased mers knowledge. the role of social media in times of crisis has received growing attention, at least in part because of its potential for viral information transmission. however, i did not find significant cognitive effects from social media use. compared with traditional media, which mainly report information verified by expert sources (lowrey, 2004), social media can not only convey knowledge but also disseminate false or unverified information during a crisis. the null effect of social media use on mers knowledge might thus result from the conflicting content of social media. this speculation, of course, should be verified with a robust empirical examination. second, i found that negative emotions played a prominent role in facilitating preventive behaviors in the mers crisis. experiencing fear and anxiety is a natural reaction when people are faced with a crisis. a lack of familiarity with newly emerging infectious diseases such as mers tends to deepen fear and anxiety (sandman & lanard, 2003), and media use could amplify those emotional experiences (kasperson, 1992). the findings of this study also indicate that traditional media use increased negative emotional responses. the more media respondents consumed, the more fear and anxiety they felt about the mers situation. more importantly, i found that the negative emotional responses people experienced when facing a crisis caused them to protect themselves from mers instead of pushing them into illogical or destructive behaviors. the positive role of negative emotions in promoting adaptive behaviors is consistent with what theoretical frameworks such as pmt and risp suggest. however, most prior studies based on pmt have been developed and tested within the context of public health campaigns. 
compared with carefully designed campaign messages intended to scare people into acting properly with respect to a given risk, media messages are more likely to be crude and mixed with various intentions (e.g., sensational reports to secure audience attention). indeed, there is a lack of empirical research examining what kinds of crisis-specific responses could be elicited from the media individuals choose to use in times of crisis. in addition, risp research provides a useful theoretical framework to predict that negative emotions could motivate people to seek information about a risk. however, that framework paid little attention to media use, which can lead to behavioral consequences such as preventive behaviors beyond information seeking. this study addressed these gaps in the literature by directly testing the roles of negative emotions and media as potential sources of those emotions during a pandemic, which has rarely been examined in previous studies. this finding has clear implications for governmental communication strategies and media reporting. this study's results suggest that brushing off negative emotions among the public as illogical overreactions may not be effective crisis management. instead, acknowledging the fears and anxiety people may feel, and designing messages that channel those emotions into desirable health behaviors, should be considered even in times of crisis. sympathizing with people's feelings can be an effective means to boost the evaluation of communicators' dialogic competency, which was found to be positively related to information credibility and trust in the government during the mers epidemic (s.-u. yang, 2018), and which in turn may promote the effectiveness of government and media health messages. this seemingly positive role of negative emotions, however, does not advocate for sensationalizing crisis situations. 
instead, both government and media may consider applying knowledge about the potentially positive function of negative emotion to public communication during a crisis. for instance, risk and health communication research has developed guidelines on how to write a good fear appeal message in the public health campaign context. government and news media may consult these guidelines when composing messages to help people cope with an epidemic risk.

third, the findings of this study indicate that the role of knowledge in preventive behaviors during the 2015 mers crisis was rather limited. only traditional media use significantly increased the public's knowledge about mers, and the increased understanding did not facilitate precautionary behaviors. disseminating knowledge has been a top priority in times of crisis under the assumption that understanding could lead the public to protective behavioral responses. the results of this study imply that filling the knowledge deficit may not be the most efficient way to promote behavioral responses; the findings here suggest that an emotional pathway may work more efficiently than cognitive drives in promoting the adoption of preventive behaviors during a public health crisis. however, it is also important to point out that the findings related to knowledge should be interpreted with caution in the context of the 2015 mers crisis. s.-u. yang (2018) noted that koreans in general tended to perceive the government's mutuality and openness in communication somewhat unfavorably, which often negatively affected government-public relationships. as s.-u. yang (2018) showed, the korean government's lack of mutuality and openness seems to have undermined the credibility of its risk information, which may have worsened the public's trust in and satisfaction with the government.
this unique situation might have lowered not only the credibility of government information but also the credibility of information from news media that relied heavily on the government for information. accordingly, the low credibility of public information on mers can explain the limited role of knowledge in promoting desirable behavioral responses. prior work examining a different epidemic crisis found that one material factor influencing preventive action was respondents' opinion of authorities (quah & hin-peng, 2004). my finding thus suggests that simple acquisition of disseminated scientific knowledge might not necessarily produce desirable behavioral responses from the public. it also implies that contextual factors, such as government dialogic competency, should be considered to fully grasp the role of knowledge in times of crisis.

fourth, the findings of this study show that, unlike traditional media use, social media use produced strong direct behavioral responses during the mers crisis; traditional media use showed no direct effects on preventive behaviors. this distinction between social media use and traditional media use could be explained in two ways. first, compared with traditional media, which are known to be powerful in knowledge diffusion, interpersonal channels have been considered effective in shaping behavioral responses via the normative route (price & oshagan, 1995). the role of normative pressure in behavioral adaptation has been well documented in the health communication literature (e.g., the theory of planned behavior; ajzen, 1991). given that social media networks are mainly composed of acquaintances, it is possible that social media might have formed specific behavioral norms related to the mers crisis (e.g., avoidance of hospitals and crowded places). the other explanation is related to potential differences in the content of traditional media and social media.
social media might have delivered more mobilizing information (mi) than traditional media. indeed, prior work analyzing social media content during the mers crisis reported that preventive behavior-related terms (e.g., masks and hand sanitizers) were the most frequently mentioned words (won et al., 2015). furthermore, the information most needed at the beginning of the mers crisis was the list of mers-affected hospitals, which the korean government had withheld. given the government's unwillingness to share the list of hospitals, the public resorted to social media to actively seek, exchange, and share hospital-related information. that piece of information alone could trigger one preventive behavior (i.e., avoiding specific hospitals to protect oneself). indeed, 'hospital' was one of the terms most frequently used on social media during the mers crisis (won et al., 2015). by contrast, a big data analysis of traditional media reported that words such as infection, confirmed cases, and death were centrally located in traditional media content (kwon, 2016). in short, compared with traditional media, social media might play an essential role in virally diffusing mi, that is, specific behavioral information, including where not to go and what to do. given the importance of dialogic competency for effective crisis communication (s.-u. yang, 2018), some technological features of social media may have enhanced the communication mutuality and openness that the korean government and media lacked. communication through social media may allow users to share empathy and social support. at the same time, social media-mediated communication can be perceived as more accessible and open than mass media-mediated communication. the strong mutuality and openness perceived through communication with network members on social media could have increased the credibility of mi, which in turn could have enhanced behavioral responses, at least during the mers crisis in korea.
another notable finding regarding social media is that social media did not generate public anxiety and fear, at least during the mers crisis in korea. this is quite an interesting finding because the korean government strongly criticized media, especially social media, for releasing unverified rumors and spreading fear. the lack of association between social media use and negative emotional responses could be the result of offsetting effects: social media might induce fear and anxiety but effectively cancel out those negative emotions by providing wanted social support or mi. this possibility should be placed under solid empirical scrutiny in future studies.

in the interest of future research, it is important to discuss some of the limitations of this study. the data are not based on a representative sample, which means the study findings must be interpreted with caution. for instance, online panels tend to include younger and more educated respondents than the general population, which could influence the interpretation of the roles of social media and traditional media use during the mers crisis in korea. the measurement of key variables also has limitations. for example, traditional media use was assessed only for newspapers and television; omitting radio and general internet use should be recognized as another major limitation of the study. also, using quiz-type questions to tap mers knowledge without close validation and planning might have contributed to the low correlation between the two mers knowledge measures, which also raises concerns about test-retest error in the panel data. these measurement issues could produce errors that would be carried into the hypothesized path model this study proposed. in addition, wave 1 data did not include mers preventive behavior measures, which made it impossible to control for all the endogenous variables in wave 1.
therefore, it is hard to argue that an ideal panel data analysis was conducted for testing the proposed hypotheses. with these limitations in mind, this study contributed to the scholarship by testing predictions drawn from existing theories that have rarely been examined in the context of a real epidemic crisis. major scholarly focus in the communication field has been on how governments and responsible organizations manage or should manage crises. shifting away from that focus, this study investigated the associations between individual media use and consequent responses, which have been relatively understudied.

the author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. the author(s) received no financial support for the research, authorship, and/or publication of this article.

mihye seo https://orcid.org/0000-0002-5734-3181

references
- the theory of planned behavior. organizational behavior and human decision processes
- circulating health rumors in the "arab world": a 12-month content analysis of news stories and reader commentary about middle east respiratory syndrome from two middle eastern news outlets
- how emotion shapes behavior: feedback, anticipation, and reflection, rather than direct causation
- predicting health behaviour: research and practice with social cognition models
- pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak
- mining internet media for monitoring changes of public emotions about infectious diseases
- consumer responses to mattel product recalls posted on online bulletin boards: exploring two types of emotion
- the impact of social media on risk perceptions during the mers outbreak in south korea
- ongoing crisis communication: planning, managing, and responding
- newspaper coverage of causes of death
- evaluating structural equation models with unobservable variables and measurement error
- cancer coverage in north american publications targeting seniors
- predicting osteoporosis prevention behaviors: health beliefs and knowledge
- risk communication for public health emergencies
- proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors
- weathering the storm: a social marketing perspective on disaster preparedness and response with lessons from hurricane katrina
- the dominant view of popularization: conceptual problems, political uses
- the knowledge gap hypothesis in singapore: the roles of socioeconomic status, mass media, and interpersonal discussion on public knowledge of the h1n1 flu pandemic
- emotion and coping with terror
- cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives
- the evolving role of the public information officer: an examination of social media in emergency management
- semantic network analysis of domestic and overseas media coverage regarding korea mers
- the social amplification of risk: progress in developing an integrative framework. in social theories of risk
- an essay on korean media's coverage of middle east respiratory syndrome coronavirus
- a study of semantic network analysis of newspaper articles on mers situation: comparing conservative and progressive news media
- the emotional distress and fear of contagion related to middle east respiratory syndrome (mers) on general public in korea
- television's disaster marathons
- effects of news media and interpersonal interactions on h1n1 risk perception and vaccination intent
- the protective action decision model: theoretical modifications and additional evidence
- valuation of the risk of sars in taiwan
- media dependency during a large-scale social disruption: the case of
- crisis leadership: planning for the unthinkable
- compassion fatigue: how the media sell disease, famine, war and death
- bits of falling sky and global pandemics: moral panic and severe acute respiratory syndrome (sars). illness
- middle east respiratory syndrome-advancing the public health and research agenda on mers-lessons from the south korea outbreak
- understanding online firestorms: negative word-of-mouth dynamics in social media networks
- quality indicators for crisis communication to support emergency management by public authorities
- sars: mobilizing and maintaining a public health emergency response
- social media in disaster response: how experience architects can build for participation
- social-psychological perspectives on public opinion
- crisis management and management during sars outbreak
- the social amplification of risk: theoretical foundations and empirical application
- crisis and emergency risk communication as an integrative model
- sars epidemic in the press
- swine influenza and the news media
- the impact of communications about swine flu (influenza a h1n1v) on public responses to the outbreak: results from 36 national telephone surveys in the uk
- scary warnings and rational precautions: a review of the psychology of fear appeals
- social media as a useful tool in food risk and benefit communication? a strategic orientation approach
- fear is spreading faster than sars and so it should
- knowledge, attitudes and preventive behaviors related to dengue vector breeding control measures among adults in communities of vientiane, capital of the lao pdr
- crisis and emergency risk communication in health contexts: applying the cdc model to pandemic influenza
- media coverage of public health epidemics: linking framing and issue attention cycle toward an integrated theory of print news coverage of epidemics
- social big data analysis of information spread and perceived infection risk during the 2015 middle east respiratory syndrome outbreak in south korea
- emotions and information diffusion in social media-sentiment of microblogs and sharing behavior
- media dependencies in a changing media environment: the case of the 2003 sars epidemic in china
- disaster communication on the internet: a focus on mobilizing information
- global bird flu communication: hot crisis and media assurance
- a work-in-process literature review: incorporating social media in risk and crisis communication
- social media messages in an emerging health crisis: tweeting bird flu
- issue words analysis according to the mers outbreak on sns
- middle east respiratory syndrome coronavirus (mers-cov)-republic of korea
- effects of government dialogic competency: the mers outbreak and implications for public health crises and political legitimacy
- what, me worry? the role of affect in information seeking and avoidance
- the effects of sns communication: how expressing and receiving information predict mers-preventive behavioral intentions in south korea
- media use and health behavior in h1n1 flu crisis: the mediating role of perceived knowledge and fear
- #unconfirmed: classifying rumor stance in crisis-related social media messages

mihye seo (phd, the ohio state university) is an associate professor in the department of media and communication at sungkyunkwan university, seoul, south korea.
her research focuses on media use and its influence on individuals' daily life and on community-building efforts.

key: cord-024385-peakgsyp authors: walsh, james p title: social media and moral panics: assessing the effects of technological change on societal reaction date: 2020-03-28 journal: nan doi: 10.1177/1367877920912257 sha: doc_id: 24385 cord_uid: peakgsyp

answering calls for deeper consideration of the relationship between moral panics and emergent media systems, this exploratory article assesses the effects of social media – web-based venues that enable and encourage the production and exchange of user-generated content. contra claims of their empowering and deflationary consequences, it finds that, on balance, recent technological transformations unleash and intensify collective alarm. whether generating fear about social change, sharpening social distance, or offering new opportunities for vilifying outsiders, distorting communications, manipulating public opinion, and mobilizing embittered individuals, digital platforms and communications constitute significant targets, facilitators, and instruments of panic production. the conceptual implications of these findings are considered.

technologies (for example, see flores-yeffal et al., 2019; hier, 2018; marwick, 2008; wright, 2017). despite their insight and contributions, knowledge of social media's diverse effects remains scattered and fragmentary. thus, while some of this article's propositions can be gleaned from existing studies, it offers a systematic elaboration that aims to promote analytic balance and encourage productive exchanges that can orient future scholarship. after revisiting the media-moral panic relationship, this article assesses how social media escalate the frequency and intensity of overwrought reactions.
while this article addresses several concrete examples, particularly the role of digital communications in promoting extremist agendas (recent events concerning trumpism, brexit, the alt-right, and 'fake news' having shattered myths regarding social media's positive and empowering qualities), its focus is more on general claims than particular findings. accordingly, rather than a final, definitive statement, it presents developmental suggestions and a heuristic that can, and should, be subjected to further scrutiny and debate. in the end, such preliminary efforts are significant as 'before we can pose questions of explanation, we must be aware of the character of the phenomenon we wish to explain' (smelser, 1963: 5).

while the identification and policing of deviance are perennial features of human groups, moral panics are 'unthinkable without the media' and are distinctive to modern, mass societies (critcher, 2003: 131). in many respects, cohen and his contemporaries (cohen and young, 1973; hall et al., 1978; pearson, 1983) were the first to articulate the essential role of news-making in constructing social problems. beyond generating surplus visibility and making otherwise marginal behaviours appear pernicious and pervasive, the media represent an independent voice. by delineating moral boundaries and circulating dire predictions about monstrous others, the histrionic tenor of reporting sensitizes audiences, culminating in hardened sentiment and unbridled punitiveness (wright, 2015). moreover, coverage translates 'stereotypes into actuality', elevating the actual and perceived severity of deviance (young, 1971: 11). here, identifying affronts to moral order triggers virulent hostility, further marginalizing folk devils and amplifying their deviant attachments and identities.
as a control culture is institutionalized, surveillance and intervention intensify, exposing additional deviance, confirming popular stereotypes and justifying further crackdowns (garland, 2008). since cohen's research nearly a half-century ago, media systems have undergone sweeping transformation, leading many to question the continued relevance of his work. a particularly influential critique in this regard comes from mcrobbie and thornton (1995). for them (1995: 560), cohen's emphasis on mass-broadcasting and its social and institutional correlates - a univocal press, hierarchical information flows, monolithic audiences - is untenable in the context of 'multi-mediated social worlds'. 1 specifically, it is held that the proliferation of media sources encourages exposure to alternative, if not dissenting, claims and reactions, ensuring that 'hard and fast boundaries between 'normal' and 'deviant' are less common' (mcrobbie and thornton, 1995: 572-3; cf. tiffen, 2019). moreover, expanded access to media technologies - portable camcorders, personal computers, editing software and so on - broadens the remit of expression, giving rise to media sources inflected with the interests of marginalized groups (coleman and ross, 2010). able to 'produce their own media' and defended by 'niche and micromedia' (mcrobbie and thornton, 1995: 568), folk devils are no longer powerless victims and can 'fight back' (mcrobbie, 1994; cf. deyoung, 2013; thornton, 1995). consequently, deviant outsiders and their supporters display greater capacity to contest and short-circuit panicked reactions, outcomes that render the success of moral crusades 'much less certain' (mcrobbie and thornton, 1995: 573). focused on the diversification of conventional media space, mcrobbie and thornton conducted their stock-taking precisely as media systems were being further destabilized.
with the onset of the 21st century, digital platforms not only underpin but also constitute social life in affluent societies, with individuals' identities and relations at least partly cultivated through computing infrastructures (lupton, 2018). among the most significant manifestations of 'digital societies' are social media. whether as social networking (facebook), micro-blogging (twitter), or photo- (instagram) and video-sharing (youtube) sites, social media have profoundly reconfigured the production and exchange of information. as 'many-to-many' systems of communication, they promote vernacular discourse and creativity, permitting ordinary users to produce and distribute staggering quantities of 'user-generated content' (keane, 2013; yar, 2014). 2 digital platforms are also displacing the mass media as an information source. 3 finally, as loosely coupled networks of users, their structure not only promotes virality - the rapid and unpredictable diffusion of content - but also fosters an expansive virtual sociality (baym, 2015). here, various attributes - 'likes', 'retweets', hashtags (#), mentions (@) and so on - index and anchor communications, promoting awareness of others and uniting spatially dispersed users into communities of shared interest and identity (murthy, 2013). while mcrobbie and thornton could not have anticipated these momentous shifts, contemporary scholarship assumes, either overtly or implicitly, that their corrective remains as, if not more, relevant today (for example, see carlson, 2015; carrabine, 2008; fischel, 2016; marres, 2017). with information control representing a critical axis of power, social media are frequently depicted as an elite-challenging 'microphone for the masses' (murthy, 2011; cf. gerbaudo, 2018; jenkins, 2006).
here, the accessibility and sophistication of digital platforms are believed to empower ordinary citizens to make their own news, name issues as public concerns, and shape collective sentiment (coleman and ross, 2010; turner, 2010). with knowledge production and image-making increasingly steered by non-experts, many perceive citizen journalism as breeding accounts of reality rooted in public-mindedness rather than sensationalism or commercial considerations (goode, 2009). in light of such developments, noted panic scholars claim digital media are shifting 'the locus of definitional power', ensuring 'more voices are heard' (critcher, 2017) and generating 'new possibilities for resistance' (lindgren, 2013: 1243). thus, the increasingly nodal configuration of media space has attenuated moral guardians' influence, ensuring that panics are 'more likely to be blunted and scattered among competing narratives' (goode and ben-yehuda, 2010: 99; cf. le grand, 2016). while mcrobbie and thornton's claims remain influential, their ability to convincingly order the evidence is considerably more limited than recent analysis suggests. in accentuating social media's progressive consequences - information pluralism and robust opportunities for citizens to access the public sphere and defuse frenzied reactions - existing scholarship neglects how digital platforms are 'underdetermined' and double-edged (monahan, 2010). informed by such issues, the following offers a counterpoint, detailing how social media's affordances intensify the proclivity to panic. whether as objects of unease, sources of acrimonious division, or venues for staging moral contests, on balance, contemporary media systems promote febrile anxiety. changing communicative and informational conditions frequently incite moral restiveness.
as cohen himself intimates (2002 [1972]: xvii), societies are regularly gripped by fears that, if improperly governed, new media will have deleterious effects on younger generations. the latest iteration of so-called 'media panics' (drotner, 1999) or 'techno-panics' (marwick, 2008), reactions to social media encapsulate deep-seated anxieties about social change and the types of people it begets. like prior episodes involving 'dangerous' media, including 'penny dreadfuls', pinball machines, comic books and 'video nasties', youth are ambivalently constructed as threatened and threatening (springhall, 1998). while anxieties have surfaced around vulnerability stemming from, inter alia, online predators, sexting, cyber-bullying, and exposure to violent and pornographic content (barak, 2005; gabriel, 2014; lynch, 2002; milosevic, 2015), youth are also positioned as undisciplined and pathological, with social media branded a leading culprit. alongside being blamed for moral failings - obesity, addiction, disengagement, cultural vacuity, solipsism (baym, 2015; thurlow, 2006; szablewicz, 2010) - multi-media platforms have been linked to violent criminality. whether in relation to video game violence, the possibility of obtaining information about weaponry and prior incidents, or the promise of celebrity immortality offered through documenting their grievances and attacks, digital media have been maligned for encouraging school shootings and associated massacres (ruddock, 2013; sternheimer, 2014). 4 further, during the 2011 england riots, journalists and politicians referenced blackberry and twitter 'mobs', claiming teenage gangs employed digital communications to evade authorities, publicize lawlessness and coordinate anti-social behaviour (crump, 2011; fuchs, 2013). such fears have frequently culminated in attempts by adult society to intensify surveillance, censorship, and control over online platforms.
for such crusaders, who often utilize the very technologies they condemn to whip up outrage, techno-panics provide an alibi for manning the 'moral barricades' and reasserting the hegemony of their values (sternheimer, 2014). thus, while they may empower grassroots actors and disturb social hierarchies, technological changes equally engender moral backlash and nostalgia.

social media also reconfigure the external environment wherein panics occur. frequently valorized for encouraging connectedness and encounters with diverse others, upon closer inspection digital platforms exert centripetal force, producing 'filter bubbles' (pariser, 2011) and 'information silos' (mcintyre, 2018) which narrow social horizons and increase the likelihood of engaging with affective, and often acerbic, content. as news is increasingly digitally mediated, such dynamics reveal, pace mcrobbie and thornton (1995), that there is no one-to-one correspondence between media and message pluralism. able to curate content at the expense of professional gatekeepers, social media allow users to construct information ecologies that are personalized and restricted (sunstein, 2018). such outcomes are exacerbated by social media's 'aggregative functionalities' (gerbaudo, 2018): the use of promotional algorithms to deliver tailored content (rogers, 2013). for example, by assessing the volume of 'clicks' (likes, shares, mentions, etc.) communications receive, facebook's customized news feed determines what is worthy of users' attention, filtering out stories deviating from extrapolations of their interests and preferences (mcintyre, 2018). as this and related examples suggest, by amplifying users' biases and aversions, social media encourage confirmation bias and isomorphic social relations (powers, 2017). social media also favour content likely to generate significant emotion and outrage.
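the engagement-driven curation described above can be illustrated with a toy model. the sketch below is purely hypothetical (the topic labels, numbers, and function names are illustrative assumptions, not any platform's actual algorithm): when a feed ranks posts by a user's historical click rate on each post's topic, topics the user already favours progressively crowd out everything else, producing the 'filter bubble' dynamic.

```python
# Toy illustration of engagement-based feed curation (hypothetical model,
# not any real platform's ranking system): posts are scored by the user's
# historical click rate on their topic, so already-favoured topics dominate.

def rank_feed(posts, click_history):
    """Order posts by the user's past click rate for each post's topic.

    click_history maps a topic to a (times_shown, times_clicked) pair.
    """
    def click_rate(topic):
        shown, clicked = click_history.get(topic, (0, 0))
        return clicked / shown if shown else 0.0
    return sorted(posts, key=lambda p: click_rate(p["topic"]), reverse=True)

# A user who historically clicks outrage-themed content far more often...
history = {"outrage": (10, 8), "civic": (10, 1)}
posts = [
    {"id": 1, "topic": "civic"},
    {"id": 2, "topic": "outrage"},
    {"id": 3, "topic": "outrage"},
]
feed = rank_feed(posts, history)
# ...sees outrage content ranked first, reinforcing existing preferences.
```

in this simplified setup, every feedback cycle updates the click history, so the gap between favoured and disfavoured topics only widens, mirroring the self-reinforcing personalization the passage attributes to promotional algorithms.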
by promoting communications based on predicted popularity, they prioritize and reward virality and the intensity of reaction rather than veracity or the public interest (van dijck, 2013; yardi and boyd, 2010). the result is the proliferation of 'click-bait', deliberately sensationalized content that captivates through affective arousal (vaidhyanathan, 2018). more significantly, new media systems privilege incendiary communications. research suggests that, even for the most staid users, the frisson of disgust is too alluring, as content unleashing fear and anger about out-groups is considerably more likely to garner attention and 'trend' (berger and milkman, 2012; vosoughi et al., 2018). 5 these dynamics ultimately appear contagious as messages' emotional valence 'infects' other users, influencing their subsequent interactions and escalating bitterness and antipathy within online environments (kramer et al., 2014; stark, 2018). together such conditions promote anxious alarm. by allowing users to remain cloistered within their preferred tribes and visions of reality, digital platforms encourage misrecognition and distort understanding of social issues, making the acceptance of bloated rhetoric more likely (albright, 2017). accordingly, they obstruct heterogeneous interactions and exposure to opposing perspectives - dynamics long identified as precluding the root causes of panics, intolerance and hostility (murthy, 2013). finally, by inflating the visibility of inflammatory content, social media mobilize animosity towards common enemies and transform uneasy concern into full-blown panic. alongside breeding fissiparous societies, multi-media platforms can be wielded to engineer crises. historically, panics have required the mass media to generate sufficient concern and indignation. social media expand the pathways of panic production.
as detailed below, by allowing ordinary netizens to identify and sanction transgression, they unleash participatory, crowd-sourced panics. additionally, as architectures of amplification, their structural features can be commandeered to promote moral contests that are surreptitious, automated, and finely calibrated in their transmission and targeting. conventional wisdom suggests that panics are spearheaded by seasoned and advantageously positioned activists and elites. by expanding capacities of media production and distribution, digital communications permit citizens to directly publicize issues and promote collective action. typically this has been associated with amateur news-making and attempts to document injustice and promote transparency and accountability (coleman and ross, 2010; walsh, 2019a; yar, 2014), but scholars have recently documented opposing trends, where social media are appropriated to define and enforce public morality. as lay actors increasingly participate in the exposure and sanctioning of deviance, distinctions between the media, the public and moral entrepreneurs are blurring, ensuring that panics stem from unorthodox sources and display new discursive and interactional contours. on the one hand, social media enable micro-crusades that, while lacking broad public appeal and support, are sustained by dispersed groups of devoted and technologically equipped citizens. whether employed to advance claims that harry potter promotes satanism and the occult to impressionable youth (sternheimer, 2014) or to discredit public officials and assert a link between vaccinations and autism (erbschloe, 2018), digital environments offer optimal arenas for uniting the conspiratorial. given their accessibility and ease of use, they obviate the need for elite participation, promoting patterns of mobilization around issues where all citizens potentially emerge as crusaders (hier, 2018).
moreover, social media's 'mob-ocratic' tendencies can activate collective effervescence (gerbaudo, 2018), producing panics driven by mass collaboration. falling into this category are online 'firestorms' (johnen et al., 2018), spontaneous and electric outbursts where the documentation and exposure of moral breaches - petty theft, public outbursts, drug use, sexual promiscuity, etc. - are rapidly disseminated, igniting interactive cascades of denigration (trottier, 2018; wright, 2017). such episodes often culminate in digital vigilantism: forms of extra-judicial punitiveness - ostracism, doxing, harassment, job loss, physical attacks, death threats - that emerge from below (powell et al., 2018; trottier, 2017). 6 consequently, alongside increasing the frequency and velocity of panics, online environments appear to promote heightened virulence and excoriation. while underpinned by emergent technologies, forms of digitally mediated opprobrium are inseparable from late-modern social conditions as they offer a palliative for ontological precarity and allow otherwise atomized individuals to police social boundaries (ingraham and reeves, 2016; cf. bauman, 2013). beyond expanding the profile of moral entrepreneurs, the networked and digital configuration of social media can also be marshalled to distort information flows, promote incendiary content, and channel user experience and engagement. in such instances, digital platforms constitute architectures of amplification that allow interested parties to punch well above their weight.

'attention hacking' and media manipulation. on the one hand, digital platforms permit highly energized and sustained groups to sculpt public sentiment by maximizing the visibility of 'information pollution' and 'fake news' - arresting, sensational and morally tinged content designed to distort and agitate (kalsnes, 2018).
whether by steering communications, creating fake accounts, or exploiting digital interactions, techniques of 'attention hacking' can strategically influence engagement patterns and produce wildly disproportionate effects (marwick and lewis, 2017) . ultimately, by allowing users to eliminate ambiguity and delineate moral boundaries in publicly visible ways, sites like twitter and facebook generate new types of agency that can rapidly propel the ideas and identities of various outsiders into prominence (joosse, 2018). 7 with their cacophonous character making it difficult to vet the integrity of content, digital platforms have been inundated with captivating, tendentious and skewed, if not entirely spurious, communications (news stories, videos, memes, blog posts, hashtags, etc.) to distort online conversations and mobilize receptive users. an exemplary case of digitally mediated crusades appeared during the 2016 american election as dedicated members of the 'alt-right', as well as digital mercenaries employed by the internet research agency (ira), a russia-backed 'troll farm', devoted considerable energy and resources to shaping political communication and behaviour. central to their efforts was the creation, sharing, liking and promotion of misinformation and provocative discourse about contentious sociocultural issues, including race relations, gun control, abortion, islamophobia and men's rights (bradshaw and howard, 2017; nagle, 2017; singer and brooking, 2018) . 8 armed with an appreciation of digital platforms' value in shifting the parameters of public discourse, such actors succeeded in generating virality, obtaining mainstream press coverage, and inciting considerable outcry and anxiety (phillips, 2018) . more recently, the role of digital communication in spreading fake news and inciting panic was on full display in initial reactions to the novel coronavirus (covid-19), an infectious respiratory disease of zoonotic origin. 
following its emergence in wuhan, china in january 2020, widespread scapegoating and fear-mongering erupted across social media. in relation to the former, the virus was racialized, with numerous messages linking it to the ostensibly exotic dietary practices and unsanitary behaviour of chinese populations, with representations depicting them as folk devils and dangerous, impure others (yang, 2020) . 9 reflecting a 'politics of substitution' (jenkins, 1992) , such claims-making diverted attention from considerably more deadly (and preventable) diseases (e.g. malaria), as well as, the structural conditions -media censorship, political corruption, weakly enforced health and safety standards -underlying the emergence and rapid spread of the disease. digital platforms were also used to circulate misinformation and dire, if not apocalyptic, predictions with various rumours -whether false reports of positive cases and contaminated chinese imports, stories of individuals absconding from quarantine zones, or claims that the virus was a bioweapon developed by the chinese or american governments -outpacing official information during the early stages of the outbreak (bogle, 2020) . by contributing to a broader climate of suspicion, such communications appear to be reactivating fears of a 'yellow peril', as well as producing emergency measures (enhanced surveillance, quarantines, travel bans etc.) and everyday expressions of racism and anti-chinese sentiment (dingwall, 2020; palmer, 2020; yang, 2020) . as this example reveals, like prior epidemics (sars, aids, etc.) where media coverage promoted fear and opprobrium about various outsiders (gay men, drug users, foreigners; see muzzatti, 2005; ungar, 2013; watney, 1997) , digital communications also play a significant role in distorting understanding and encouraging over-reaction. 
the episode equally suggests, however, that social media's anonymous, horizontal structure ensures that messages travel exponentially faster, lack clear origins and feature palpable vitriol, outcomes that escalate the impetus and excess of alarm (miller, 2020) . the spread of information pollution frequently hinges on perceptions of social media as the embodiment of the vox populi (gerbaudo, 2018) . here, fake accounts are utilized to raise awareness and bolster the credibility of favoured content. on the one hand, advances in artificial intelligence allow bots -machine-led communications tools that mimic human users and perform simple, structurally repetitive, tasks -to spread 'computational propaganda' (bradshaw and howard, 2017; ferrara et al., 2016) . as social machines and artificial voices, bots automate and accelerate diffusion and engagement, creating, liking, sharing, and following content at rates vastly surpassing human capabilities. thus, they facilitate viral engineering; expanding the momentum of certain messages and, in the process, altering information flows. to exude authority and authenticity, content is also circulated by bogus, 'sockpuppet' accounts posing as those of accredited experts (scientists, journalists, etc.) or ordinary citizens belonging to various groups (women, blue-collar workers, police officers, urban youth, etc.) and appearing to possess folk wisdom (bastos and mercea, 2017; marwick and lewis, 2017) . whether manual or automated, techniques of media manipulation also control narratives by reducing the visibility of unwanted and objectionable content. here, keywords and hashtags affiliated with opposing perspectives can be 'hijacked' as platforms are flooded with nonsense or negative messages to disrupt and drown out specific communications, denuding them of their salience and influence (woolley and howard, 2016) . 
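the automated amplification described in this passage -bots creating, liking, sharing, and following content at rates vastly surpassing human capabilities -can be illustrated with a deliberately simple simulation. the sketch below is not drawn from the article or any empirical study: the reshare probabilities, reach factor, and bot output are illustrative assumptions chosen only to show how a small number of tireless accounts can inflate a cascade.

```python
import random

def simulate_diffusion(n_humans, n_bots, rounds, seed=1):
    """Toy cascade model of message spread on a platform.

    Each round, every exposed human reshares with low probability,
    while each bot reshares the message several times regardless of
    content; each reshare exposes a handful of new human accounts.
    All parameters are illustrative, not empirical.
    """
    rng = random.Random(seed)
    exposed, shares = 1, 0  # start from a single seed post
    for _ in range(rounds):
        new_shares = sum(
            1 for _ in range(min(exposed, n_humans)) if rng.random() < 0.05
        )
        new_shares += n_bots * 3  # bots amplify mechanically every round
        shares += new_shares
        # assume each reshare reaches roughly four previously unexposed users
        exposed = min(exposed + new_shares * 4, n_humans)
    return shares

organic = simulate_diffusion(n_humans=10_000, n_bots=0, rounds=10)
boosted = simulate_diffusion(n_humans=10_000, n_bots=20, rounds=10)
```

even twenty automated accounts guarantee hundreds of additional shares and, through knock-on exposure, a far larger organic cascade -a minimal rendering of the 'viral engineering' the passage describes.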
a recent example of such efforts is found in twitter communications concerning the intensity of the 2019-20 australian bushfires, an outcome widely linked to the longer fire seasons produced by climate change. the preliminary results of research conducted by graham and keller (2020) suggest that, at the height of the crisis, a coordinated misinformation campaign was waged by a sprawling network of troll and bot accounts to advance broader narratives of climate denial. by flooding social media with hashtags like #arsonemergency (in place of #climateemergency) and co-opting those already trending (e.g. #australiafire, #bushfireaustralia), such actors sought to publicize conspiracies that criminal elements -whether arsonists, radical environmentalists, or isis fighters -were responsible for the blazes and that climate change is an elite-engineered hoax and form of population control (knaus, 2020). finally, the propagation of misinformation involves attempts to harness social interaction and collective sense-making. studies suggest that distorted, emotionally charged content is considerably more likely to be shared by ordinary users who unwittingly enlarge its sphere of influence (albright, 2017; tanz, 2017). 10 by bearing the imprimatur of whoever shared it, whether a relative, colleague, neighbour, or opinion leader, the substance of messages is validated and appears authentic as it spreads laterally across users' networks (van der linden, 2017). for instance, on several occasions, accounts linked to the alt-right and russian operatives have successfully 'seeded' content, goading journalists, bloggers, activists, and politicians (including president trump) into endorsing particular communications and providing broader platforms (phillips, 2018).
since messages distributed through formal channels and hierarchical apparatuses are frequently perceived as self-serving and inauthentic, media manipulation provides a powerful vehicle of promotion. by engineering popularity and relevance, the discursive swarms unleashed by bots and fake accounts can generate an impression of credibility, unanimity and common sense, an outcome essential to normalizing particular modes of thought (chen, 2015). ultimately, by concealing the authors and agendas behind communications, such practices facilitate shadow crusades and astroturfing (rubin, 2017). while applicable to numerous topics, digitally mediated crusades are distinctly prominent in relation to issues -migration, crime and policing, or terrorism -identified as leading and recurrent sources of panic (hall et al., 1978; kidd-hewitt and osborne, 1995; odartey-wellington, 2009; walsh, 2017, 2019c; welch and schuster, 2005), as well as central topics in online conversations during critical political moments (benkler et al., 2018; evolvi, 2019). for instance, in their recent study of anti-immigrant crusades, flores-yeffal et al. (2019) observed how the indexing of social media communications through hashtags like #illegalsarecriminals and #wakeupamerica fostered networked discourses and connectedness, helping to construct scapegoats, circulate calls for action, and ensure that xenophobic rhetoric echoed throughout cyber-space (see also morgan and shaffer, 2017). 11 additionally, preceding the brexit referendum, supporters of the far-right uk independence party utilized digital platforms to trigger and inflate fears about foreigners, circulating contentious claims about workforce competition, cultural displacement, crime and terrorist infiltration (vaidhyanathan, 2018).
computational crusades. finally, social media unleash crusades that are data-driven, granular, and highly dynamic in their transmission and targeting.
here, the digital surveillance and marketing infrastructures that underpin social media's profitability permit computational modelling of user data, promising greater awareness of audiences and encouraging claims-making practices involving extensive narrowcasting; behavioural and psychometric profiling; and the production of predictive knowledge. while empowering users as participants and agents of communication, digital platforms also render them legible as vast tranches of information about their attributes (e.g. gender, race, income), activities (e.g. hobbies, movements, browsing habits), and associations (e.g. relational ties, organizational memberships) are continuously scrutinized for commercial, legal and political purposes (nissenbaum, 2009) . once harvested, user data undergoes deep profiling, producing digital dossiers which sort individuals based on dozens, and potentially hundreds, of variables. consequently, audiences are less collectivities to be influenced en masse, than individually calculable units, arrangements that permit those possessing the necessary resources and technological literacy to target users with highly customized messages (zuboff, 2015) . accompanying geodemographic criteria, algorithms can identify and calculate expressive energies and subjective orientations -moods, sensibilities, and emotions. with advances in machine learning and sentiment analysis, digital communications can be analysed to map meaning structures, and discern personality traits on scales previously unimaginable (andrejevic, 2013; stark, 2018) . for example, cambridge analytica, a consulting firm hired to assist the trump campaign's online messaging, harvested data concerning online engagement for over 230 million facebook users, pooling it with other information to develop a sprawling collection of psychographic profiles on potential voters and gauge their receptiveness to various messaging strategies (vaidhyanathan, 2018) . 
heralding the rise of communications that, while reaching immense audiences, are highly differentiated, it is estimated that, with the assistance of big data analytics, trump's campaign disseminated over 6 million distinct online ads, with variations of individual messages, at times, surpassing 200,000 (singer and brooking, 2018). big data also yields inferential and predictive knowledge, with computer models unearthing correlations, extrapolating information about users, and forecasting reactions. here, digital enclosures are mined to identify regularities against which users are continuously compared, outcomes that allow claims makers to anticipate content's likely resonance and develop flexible outreach strategies (baym, 2013). 12 practices of dataveillance are also recursive, as feedback in the form of engagement patterns is reflexively monitored to elaborate correlations and deepen knowledge of users (neuman et al., 2014). accordingly, digital communications double as iterative experiments where multiple messages can be distributed simultaneously to survey reactions and refine techniques of persuasion (andrejevic, 2013). in relation to panics, profiling user data liberates crusaders from 'monolithic mass-appeal, broadcast approach[es]' to issue mobilization (tufekci, 2014). rather than attracting support through unifying, 'big tent' issues, dataveillance facilitates agile micro-targeted crusades. able to cleave populations into demographic and affective types, moral guardians can precisely 'hail' subjectivities, allowing them to combine mass transmission with individual connection and overcome what has traditionally been a hobson's choice between maximal exposure and intimate resonance. consequently, moral contests promise to become exponentially more sophisticated, ensuring overwrought discourse reaches, motivates and energizes its intended targets.
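the 'iterative experiments' described here -distributing multiple message variants simultaneously, monitoring engagement, and reflexively reallocating effort toward what resonates -behave much like a multi-armed bandit. the sketch below is a hypothetical illustration, not a reconstruction of any real campaign: the click-through rates and the epsilon-greedy rule are assumptions chosen to show how exposure drifts toward the best-performing message without anyone knowing the true rates in advance.

```python
import random

def run_campaign(ctr, trials, eps=0.1, seed=7):
    """Epsilon-greedy allocation across message variants.

    `ctr` holds each variant's hidden response rate; the campaign never
    sees these directly, only simulated clicks, yet exposure steadily
    concentrates on whichever message engages most.
    """
    rng = random.Random(seed)
    clicks = [0] * len(ctr)
    shown = [0] * len(ctr)
    for _ in range(trials):
        if rng.random() < eps:
            arm = rng.randrange(len(ctr))  # keep experimenting occasionally
        else:  # otherwise exploit the variant with the best observed rate
            arm = max(range(len(ctr)),
                      key=lambda a: clicks[a] / shown[a] if shown[a] else 0.0)
        shown[arm] += 1
        if rng.random() < ctr[arm]:
            clicks[arm] += 1
    return shown

# three hypothetical message variants with hidden click-through rates
exposures = run_campaign(ctr=[0.02, 0.05, 0.12], trials=5000)
```

after a few thousand impressions, most exposure flows to the highest-resonance variant -the feedback-driven refinement of persuasion that the passage attributes to dataveillance.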
moreover, given the expressive contours of panics, and the importance of emotions -anxiety, hostility, even hysteria -as levers of action (walby and spencer, 2012) , the mining, measurement and classification of affective states allows crusaders to viscerally connect with audiences and strengthen their messaging. as a distinct species of collective behaviour, moral panics represent contentious and intensely affective campaigns to police the parameters of public knowledge and morality. as such, they are necessarily dependent upon and constituted by claims-making, with interested parties historically seeking to actuate alarm by influencing the imagery and representations of the mainstream press, arrangements disrupted by recent upheavals in media space. to illuminate the complex relationship between panics and the broader socio-technical context in which they unfold, this article has surveyed the impact of digital communications, presenting a taxonomy of social media's effects on the issues, conditions and practices that incite collective alarm. while displaying elite-challenging potential, social media are ultimately janus-faced and contradictory. alongside providing emergent sources of unease, they cultivate facilitating conditions and offer ideal venues for constructing social problems. specifically, by elevating agitational discourse and promoting homophily, social media generate social friction and hostility. moreover, as instruments of panic production, new technologies reshape the identification and construction of deviance, both permitting lay participation and allowing various parties to manipulate public communications in ways that produce outsized, imperceptible and highly efficient influence. while gauging the precise effects of social media requires more rigorous scrutiny than can be provided here, the available evidence indicates that, all things considered, they inflate the incidence and severity of panics. 
on the one hand, various studies suggest that, as architectures of amplification, digital platforms reduce transaction costs and transform peripheral (as well as automated and artificial) voices into conspicuous claimants (vaidhyanathan, 2018). 13 they also appear to enhance the spread of information pollution, with scholarship revealing that, whether transmitted by algorithms or human agents, 'misinformation, polarizing, and conspiratorial content' (howard et al., 2018: 1) not only 'diffuse[s] significantly further, faster, [and] deeper' on social media (vosoughi et al., 2018: 1146; albright, 2017) but also, during the final days of the 2016 election, represented the most popular informational content on facebook, leading many to speculate that it played a decisive role in trump's victory (waisbord, 2018). finally, evidence surrounding the extent to which gross distortions, extremist views and readily falsifiable conspiracies (such as the views that climate change is a manufactured crisis, violent crime is at historic highs, undocumented migrants are overwhelmingly violent criminals, etc.) are being normalized as public idiom gives considerable cause for concern (mcintyre, 2018; scheufele and krause, 2019). 14 beyond advancing understanding of the media-moral panic relationship, an important task in its own right, by initiating dialogue between theoretical expectations and empirical instances, the preceding analysis promotes conceptual refinement and renewal. specifically, accounting for social media's effects on panic production illuminates significant mutations surrounding the interactants, functions and communicative patterns that define contemporary crusades. first, as many-to-many systems of communication, social media promote novel patterns of participation, offering ordinary persons a greater role, facilitating spontaneous outbursts driven by multitudes and introducing automated, machine-led campaigns.
additionally, in enabling new techniques of media manipulation, digital platforms contribute to the weaponization of panics. while conventional wisdom suggests that panics represent domestic affairs, oriented towards mobilizing support, acquiring power and status or manufacturing consent, the case of russia and information warfare suggests that normative conflict may be exogenously engineered to provoke significant social and psychological disruption. finally, in place of uniform messages and mass appeal, the combination of data-mining and behavioural profiling unleashes claims-making techniques that are inhabited and hyper-targeted. drawing attention to these features exposes significant transformations and bolsters the versatility and explanatory capacity of cohen's paradigm. thus, mirroring other recent interventions (falkof, 2018; joosse, 2018; wright, 2017), by accounting for emergent social conditions, this article advances a nuanced, flexible framework rather than a fixed, uniform model. ultimately, exposing anomalous findings that push the limits of existing perspectives extends the concept's range of applicability, promoting a more robust framework capable of accommodating pivotal shifts in media space and the social relations they engender. alongside laying the foundation for further empirical applications, given the depth and rapidity of social change, such conceptual dexterity is an asset rather than a liability (jewkes, 2015). as an account of reaction and social problems construction, moral panic theory has traditionally emphasized the mass media's role in sculpting collective knowledge, arbitrating between the real and represented, and generating significant discrepancy between risk and response. this article suggests that, while the legacy press continues to play a significant role, with the ubiquity of digital platforms and technologies, the emergence and spread of panics is being reconstituted.
in particular, scholars can further refine and expand the concept's range and impact by engaging with social media's diverse and far-reaching effects on the contours of collective alarm. while it is admittedly premature to predict what new attributes media systems will assume, and there is too much contingency to suggest that future developments will follow an inexorable path, it is hoped that, by taking technological change into account, the idea of moral panic will continue to influence understandings of how fear and transgression are mobilized for varied purposes.
the author received no financial support for the research, authorship, and/or publication of this article.
notes
6. in a striking example of online shaming and digitally mediated outrage, moral entrepreneurs associated with anti-paedophile activism in canada, the uk, and russia have all employed digital platforms to investigate, identify, expose and censure suspected sex offenders (favarel-garrigues, 2019; trottier, 2017).
7. citing donald trump's rise as a charismatic political maverick, joosse (2018) argues that non-traditional media are ideally suited for producing and reiterating simplistic and highly resonant moral categories, outcomes that can endow otherwise peripheral parties with significant power and influence.
8. while a full discussion exceeds the scope of this article, the alt-right encompasses an ill-defined amalgam of actors (white nationalists, men's rights activists, palaeo-conservatives, nativists, etc.) united by opposition to 'identity politics', multiculturalism, and perceived 'political correctness' (hawley, 2017; nagle, 2017).
9. for instance, videos of chinese citizens eating bats, rodents, snakes and other 'dirty' or 'exotic' wildlife were quickly posted and widely distributed across various social networking sites (palmer, 2020).
10. surveys from the usa reveal one-quarter of respondents have knowingly shared misinformation on social media (barthel et al., 2016).
11.
russian operatives also contributed to such efforts, distributing content and even organizing protests through fake twitter and facebook accounts (singer and brooking, 2018).
12. research reveals, for instance, that various attributes -sexuality, religiosity, education, etc. -can be reliably predicted from patterns involving the single data point, facebook 'likes' (markovikj et al., 2013).
13. for example, during the 2016 election, content from just six russian-backed facebook accounts garnered 340 million shares and nearly 20 million interactions on the platform (matsakis, 2018). additionally, whether deployed by foreign agents or domestic extremists, bots produced one-third of posts concerning the 2016 brexit vote, despite representing just 1% of active twitter accounts (narayanan et al., 2017).
14. for instance, over two-thirds of americans claim that fake news has left them disoriented and confused about basic facts (barthel et al., 2016), while another survey revealed 75% of americans familiar with a fake news headline thought it was accurate (roozenbeek and van der linden, 2019).
mcrobbie and thornton's (1995) critique continues to be cited as a core 'dimension of dispute'.
facebook's 2.38 billion active users leave roughly 300,000 comments per minute and share over 5 billion posts per day.
two-thirds of american adults obtained some of their news from social media (shearer and gottfried, 2017), while, for british and north american youth, it represents their primary news source.
an exemplary case is pekka-eric auvinen, a finnish shooter deemed the 'youtube gunman' after using the video-sharing site to publicize his actions, espouse nihilistic views, and share a final message immediately before killing eight people.
one study of facebook found the 'click-through' rate for socially divisive content exceeded typical ads by tenfold.
references
welcome to the era of fake news
infoglut: how too much information is changing the way we think and know
sexual harassment on the internet
liquid fear
many americans believe fake news is sowing confusion
the brexit botnet and user-generated hyperpartisan news
data not seen: the uses and shortcomings of social media metrics
personal connections in the digital age
network propaganda: manipulation, disinformation, and radicalization in american politics
what makes online content viral
coronavirus misinformation is running rampant on social media. abc science
social network sites as networked publics: affordances, dynamics, and implications
troops, trolls and troublemakers: a global inventory of organized social media manipulation. the computational propaganda project
moral panic, moral breach: bernhard goetz, george zimmerman, and racialized news reporting in contested cases of self-defense
crime, culture and the media
digital objects, digital subjects
the agency. new york times magazine
folk devils and moral panics
the manufacture of news: a reader
the media and the public
moral panics
what are the police doing on twitter? social media, the police and the public
the idea of moral panic: ten dimensions of dispute
considering the agency of folk devils
we should deescalate the war on the coronavirus. wired
dangerous media? panic discourses and dilemmas of modernity
extremist propaganda in social media
#islamexit: inter-group antagonism on twitter. information
on moral panic: some directions for further development
digital vigilantism and anti-paedophile activism in russia. global crime, epub ahead of print 21 october
the rise of social bots
sex and harm in the age of consent
#wakeupamerica #illegalsarecriminals: the role of the cyber public sphere in the perpetuation of the latino cyber-moral panic in the us
social media: a critical introduction
sexting, selfies and self-harm: young people, social media and the performance of self-development
on the concept of moral panic. crime
social media and populism: an elective affinity? media
social news, citizen journalism and democracy
bushfires, bots and arson claims: australia flung in the global disinformation spotlight. the conversation
making sense of the alt-right
interview with stuart hall. communication and critical/cultural studies
moral panics and digital-media logic: notes on a changing research agenda. crime, media, culture, epub ahead of print
social media, news and political information during the us election. arxiv, epub ahead of print
new media, new panics
fans, bloggers, and gamers: exploring participatory culture
intimate enemies
media and crime
the digital outcry: what incites participation behaviour in an online firestorm? new media & society
expanding moral panic theory to include the agency of charismatic entrepreneurs
fake news
democracy and media decadence
bots and trolls spread false arson claims in australian fires 'disinformation campaign'. the guardian
experimental evidence of massive-scale emotional contagion through social networks
the ashgate research companion to moral panics. farnham: ashgate
moralising discourse and the dialectical formation of class identities
youtube gunmen? mapping participatory media discourse on school shooting videos. media
pirate panics: comparing news and blog discourse on illegal file sharing in sweden. information
pedophiles and cyber-predators as contaminating forces
association for the advancement of artificial intelligence
digital sociology: the reinvention of social research
to catch a predator? the myspace moral panic
media manipulation and disinformation online
the ftc is officially investigating facebook's data practices
crime, media and moral panic in an expanding european union
folk devils fight back
rethinking 'moral panic' for multi-mediated social worlds
'it plays to our worst fears': coronavirus misinformation fuelled by social media. cbc
cyberbullying in us mainstream media
surveillance in the time of insecurity
sockpuppets, secessionists, and breitbart. medium, 31 march
twitter: microphone for the masses? media
bits of falling sky and global pandemics: moral panic and severe acute respiratory syndrome (sars). illness
kill all normies: online culture wars from 4chan and tumblr to trump and the alt-right
russian involvement and junk news during brexit. the computational propaganda project (comprop) data memo
the dynamics of public attention: agenda-setting theory meets big data
privacy in context
racial profiling and moral panic: operation thread and the al-qaeda sleeper cell that never was
don't blame bat soup for the wuhan virus
the filter bubble
hooligan: a history of respectable fears
the oxygen of amplification
not everyone in advanced economies is using social media
digital criminology: crime and justice in digital society
my news feed is filtered? awareness of news personalization among college students
on microtargeting socially divisive ads: a case study of russia-linked ad campaigns on facebook
the fake news game: actively inoculating against the risk of misinformation
deception detection and rumor debunking for social media
youth and media
science audiences, misinformation, and fake news
news use across social media platforms 2017
likewar: the weaponization of social media
theory of collective behavior
youth, popular culture and moral panics
algorithmic psychometrics and the scalable subject
pop culture panics
#republic: divided democracy in the age of social media
the ill effects of 'opium for the spirit': a critical cultural analysis of china's internet addiction moral panic
journalism fights for survival in the post-truth era
club cultures
from statistical panic to moral panic: the metadiscursive construction and popular exaggeration of new media language in the print media
moral panics
digital vigilantism as weaponisation of visibility
coming to terms with shame: exploring mediated visibility against transgressions
engineering the public: big data, surveillance and computational politics
ordinary people and the media: the demotic turn
is this one it? viral moral panics
beating the hell out of fake news
the culture of connectivity
the spread of true and false news online
truth is what happens to news: on journalism, fake news, and post-truth
social media 'outstrips tv' as news source for young people
how emotions matter to moral panics
moral panics by design: the case of terrorism
the handbook of social control
desired panic: the folk devil as provocateur. deviant behavior
social media and border security: twitter use by migration policing agencies. policing and society
social media and policing: a review of recent research
policing desire
detention of asylum seekers in the uk and usa: deciphering noisy and quiet constructions
automation, algorithms, and politics| political communication, computational propaganda, and autonomous agents: introduction
moral panics as enacted melodramas
making sense of moral panics
a new virus stirs up ancient hatred. cnn
the cultural imaginary of the internet
dynamic debates: an analysis of group polarization over time on twitter
the drugtakers. london: macgibbon and kee
big other: surveillance capitalism and the prospects of an information civilization
international journal of cultural studies 00(0)
james p walsh is an assistant professor of criminology at the university of ontario institute of technology. in addition to moral panics, his research focuses on crime and media; surveillance; and border security and migration policing.
key: cord-034975-gud4dow5 authors: kalpokas, ignas title: problematising reality: the promises and perils of synthetic media date: 2020-11-09 journal: sn soc sci doi: 10.1007/s43545-020-00010-8 sha: doc_id: 34975 cord_uid: gud4dow5 this commentary article focuses on the emergence of synthetic media—computer-generated content that is created by employing artificial intelligence (ai) technologies. it discusses three of the most notable current forms of this emerging form of content: deepfakes, virtual influencers, and augmented and virtual reality (collectively known as extended reality). their key features are introduced, and the main challenges and opportunities associated with the technologies are analysed. in all cases, a crucial change is underway: reality (or, at least, the perception thereof) is seen as increasingly less stable, and potential for manipulation is on the rise. in fact, it transpires that personalisation of (perceived) reality is the likely outcome, with increasing societal fragmentation as a result.
mediatisation is used as a broad-ranging metatheory of the permeation of everyday affairs by media, in order to explain the degree of impact that synthetic media have on society. in this context, it is suggested that we search for new and alternative criteria for reality that would be capable of accounting for the changing nature of agency and impact in today's world.
the question arises whether established criteria for assessing the reality of objects and phenomena still hold and whether reality in the general sense must be reconsidered. adopting a perspective tentatively affirmative of such a switch, this article explores ways in which the new, synthetic, media can affect human thinking and behaviour without dealing with anything conventionally real. although the assertion that the media now play an increasingly central role in everyday life has become ubiquitous, the changing nature of the media themselves is commonly overlooked. while discussions would often focus on issues of framing, misrepresentation, or underrepresentation, it is becoming crucial to also focus on the media's generative capacity. the latter refers to the capacity to create synthetic likenesses, personalities, and entire environments solely by way of digital technologies. therefore, the reality we experience and use as a baseline for future decisions and life plans can easily have no physical counterpart and might even be unique to our own personal experience. to provide at least a tentative account of the transformations pertaining to synthetic media, mediatisation theory is briefly overviewed as a metatheory for conceptualising the media's growing influence. the analysis then focuses on synthetic media, first engaging with the capacity to create synthetic likenesses (deepfakes), then moving on to synthetic personalities (virtual influencers) and synthetic worlds (extended reality). this article thereby demonstrates the growing challenges faced by traditional accounts of reality that are biased in favour of physical tangibility.
the contention is that the reality of something must, instead, be measured primarily through affective capacity. according to an influential definition, mediatisation refers to the condition in which the media 'have become an integral part of other institutions' operations, while they also have achieved a degree of self-determination and authority that forces other institutions […] to submit to their logic' (hjarvard 2008, p. 106). the matter here is, essentially, one of the media's ever-presence, permeating 'all aspects of private, social, political, cultural, and economic life, from the micro (individual) to the meso (organisational) to the macro (societal) level' (giaxoglou and döveling 2018, p. 2). in the same vein, the social world of today is 'changed in its dynamics and structure by the role that media continuously (indeed recursively) play in its construction' (couldry and hepp 2017, p. 15). hence, the media no longer mediate between the world and the experience of it but increasingly generate that experience. simultaneously, while previously individuals were confined to their physical location, now one can be immersed in a number of digital worlds and interact with a number of individuals regardless of distance (couldry and hepp 2017, p. 90). likewise, the media must be seen as constituting 'a realm of shared experience' by offering 'a continuous presentation and interpretation of "the way things are"' and thereby contributing to 'the development of a sense of identity and of community' (hjarvard 2008, p. 126), thus determining the functioning of social relations (nowak-teter 2019, p. 5). of course, the media have always played a community-building and community-integrating role.
however, the key difference is this: while previously the media used to perform a somewhat supplementary role, building onto the 'real' world and conveying or explaining it, with varying degrees of fidelity, the current condition is characterised by the media hosting and creating the world that they purport to merely represent (kalpokas et al. 2020) . no less importantly, mediatisation also implies a certain delegation of agency as 'collectivities [are] created by automated calculation based on the "digital traces" that individuals leave online' (couldry and hepp 2017, p. 168) . in this sense, as boler and davis (2018, pp. 82-83) assert, algorithms inherent in today's dominant media platforms 'define the spaces of our information encounters, encounters with others, and the status of knowledge'. simultaneously, attention becomes the scarcest of resources-individuals simply no longer have sufficient means to pay enough of it (citton 2019, p. 35) . when coupled with algorithmic analysis of trends and user behaviour, attention becomes its own magnet: 'attention attracts attention', i.e. the more people interact with a digital object, the more it rises in the algorithmic pecking order, thereby becoming more visible to others; therefore, even 'looking at an object represents a labour which increases the value of that object', leaving pleasure and labour inextricably entwined (citton 2019, pp. 47-48, 65) . it then also becomes obvious that whatever maximises audience attention, becomes an attractive proposition for content providers-audience captivation becomes more important than truthfulness, 'reality' in the conventional sense of the term, or any other considerations (kalpokas 2019) . 
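the 'attention attracts attention' feedback described above (citton 2019) amounts to a rich-get-richer process. a minimal simulation might look like the following; all names, numbers, and the ranking rule are our own illustrative assumptions, not anything specified in the source:

```python
import random

def rank_feed(items, steps=1000, seed=0):
    """Toy 'attention attracts attention' loop: each interaction raises an
    item's engagement score, which raises its visibility, which in turn
    attracts further interactions (a Polya-urn-style process)."""
    rng = random.Random(seed)
    engagement = {item: 1 for item in items}  # every item starts with one view
    for _ in range(steps):
        total = sum(engagement.values())
        # probability of being seen is proportional to engagement so far
        r = rng.uniform(0, total)
        cumulative = 0
        for item, score in engagement.items():
            cumulative += score
            if r <= cumulative:
                engagement[item] += 1  # the view itself increases the item's value
                break
    return sorted(engagement, key=engagement.get, reverse=True)

print(rank_feed(["a", "b", "c"]))
```

run repeatedly with different seeds, this tends to produce a lopsided ranking even though the three items started out identical, which is the point of the argument: early attention, not intrinsic quality, drives later visibility.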
that also implies a great degree of malleability and adaptability of the social world, as strict adherence to the tangible no longer is a must: for as long as social occurrences can be created and sustained within media ecosystems, they can and should be seen as sufficiently real, leading towards 'primacy of anticipation over content' (marcinkowski 2014, p. 17) . such anticipation refers to both the communicators (anticipation of particular audience expectations to be satisfied) and their audiences (anticipation of being satisfied); in this situation, neither side is likely to give the substance of content priority-whatever satisfies expectations, is good enough. and no less importantly, technology now affords increasingly sophisticated ways of decoupling satisfaction of expectations from conventional considerations of reality by producing high-fidelity synthetic reality. deepfakes are digital content, generated using a deep learning technique known as a generative adversarial network (gan). the production process involves the simultaneous use of two algorithms: one, typically referred to as 'the generator', is tasked with creating artificial content while the second, called 'the discriminator', tries to find fault in the newly-generated content; once such a fault is found, the generator learns from its own mistakes and creates an improved version to be scrutinised by the discriminator, and so on (chesney and citron 2019, p. 148; giles et al. 2019, p. 8 )-this is where the adversarial element of gan comes from. the end product is arrived at when the pair of algorithms can no longer make any improvements through mutual learning. one of the main fears pertaining to deepfakes is that they can purport to represent events or insinuate behaviours that never took place in order to destroy the reputations of featured individuals (for both political manipulation and private harassment) or potentially even sway the results of elections (chesney and citron 2019, p. 
148); alternatively, they can lead to an environment of distrust, whereby even 'hard' evidence of crimes or misdemeanours can be easily dismissed as mere deepfakes (woolley 2020, p. 14) . deepfakes can also potentially be used for blackmail and extortion, either for financial gain or to manipulate decisionmakers (hall 2018, p. 52) . likewise, whereas the creation of simulated public opinion currently requires armies of trolls, deepfakes can automate the process, generating custom-made content coming from custom-generated profiles etc. (giles et al. 2019, p. 11) . crucially, deepfakes are democratic in nature: the only things needed are training material for the algorithms and computing power; in contrast to traditional photo or video editing software, no specialist skill is necessary as the process is automated, meaning that even a relative amateur can produce high-quality synthetic content (chesney and citron 2019, p. 148) . currently, the primary use of deepfakes is for synthetic pornography, as in transposing the faces of celebrities or former partners onto the bodies of performers in pornographic videos; however, there are clear threats coming from improvements in the technology itself, such as reducing the quantity of necessary input and increasing the quality of output, and from its pairing with other techniques, including big data-based precision targeting to identify those most susceptible to believing the synthetic content (paul and posard 2020) . although deepfakes can usually still be identified it is only a matter of time until technology catches up with human perception; moreover, as human response to audio-visual content is often visceral and immediate, people will, nevertheless, believe their eyes and ears 'even if all signs suggest that the video and audio content is fake' (charlet and citron 2019) . 
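the generator/discriminator loop described above (chesney and citron 2019) can be caricatured in a few lines. this is deliberately a toy sketch under our own assumptions, not a real gan: actual implementations train two neural networks by gradient descent, whereas here the 'generator' is a single number nudged whenever the 'discriminator' can still find a fault:

```python
import random

def train_gan_sketch(real_mean=5.0, rounds=200, seed=1):
    """Caricature of adversarial training: the generator produces content,
    the discriminator finds the fault, and the generator learns from it
    until no fault above the threshold can be found."""
    rng = random.Random(seed)
    guess = 0.0   # the generator's current notion of what 'real' looks like
    step = 0.5
    for _ in range(rounds):
        fake = guess + rng.gauss(0, 0.1)   # generator produces content
        flaw = real_mean - fake            # discriminator finds the fault
        if abs(flaw) < 0.05:               # no meaningful fault left: done
            break
        guess += step * flaw               # generator learns from its mistake
    return guess

print(train_gan_sketch())
```

the terminating condition mirrors the text: the end product is reached when the discriminator can no longer drive further improvement through mutual learning.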
simultaneously, as communication, particularly online, is turning more and more towards the visual, the capacity to manipulate content in this dominant mode of expression can become a notable source of power (vaccari and chadwick 2020, p. 2). nevertheless, due to the aforementioned democratic nature of deepfakes, it is unlikely that this power would be concentrated in the hands of a few actors only. a much more likely outcome is dizzying excess, in which it becomes increasingly difficult for information consumers to make up their minds. the net result might be 'a climate of indeterminacy' whereby people have low levels of trust beyond their bubbles (vaccari and chadwick 2020, p. 2). moreover, this indeterminacy is likely to extend even further, including in domains where objective veracity is prized. one such example would be the legal process, whereby the authenticity of even video evidence will become hard to determine (see e.g. maras and alexandrou 2019), thereby further contributing to the undermining of trust. in particular, deepfakes may prove to be dangerous in the run-up to elections, as part of, e.g., a smear campaign against an opponent. while unlikely to feature in isolation, they are likely to form an integral part of broader cyber operations, perpetrated by domestic or foreign actors (whyte 2020). extant research already indicates that if deepfakes are targeted precisely, they can considerably reduce the image of an unfavourably depicted politician in the eyes of the target population (see dobber et al. 2020). certainly, the precision-targeting necessary for such an effect necessitates large sets of audience data, which might up the ante for those willing to enter the political manipulation game. nevertheless, for well-resourced political campaigns and, even more so, for hostile nation-states, targeted deepfakes will, in all likelihood, become a new addition to their arsenal.
still, even for the less-resourced, deepfakes may prove to be a viable tool, for example in trying to harass activists of the opposite camp by placing their images in pornographic videos or other types of content that the victims would likely find unpleasant and disturbing (see e.g. maddocks 2020). for those reasons, it is extremely likely that deepfakes will feature, in some capacity, in the elections to come. simultaneously, though, the problematisation of reality brought about by deepfakes extends much further than manipulation or other nefarious uses. for example, as kietzmann et al. (2020, p. 141) assert, 'we may soon enjoy injecting ourselves into hollywood movies and becoming the hero(ine) in the games we play', while shopping is going to be transformed by a capacity to create personal deepfake avatars to model different outfits, leading to 'ultimate personalization'. indeed, deep personalisation is likely to be the next big thing in digital consumer-oriented products more broadly, a continuation of the current drive to put as much personal touch into services as possible. there are also further opportunities for businesses: while data-driven targeting and programmatic ad buying are already de rigueur, the next step would be employing gans to deepfake segments in anything from news broadcasts to films in real time to deliver targeted advertising and personalised product placement to every viewer. nevertheless, this ability to place oneself (or be placed) at the centre of the universe and to subject perceived reality to one's interests or tastes (or tasks at hand) clearly points towards an impending future of 'reality' that, instead of being stable and capable of providing a common point of reference, becomes personally tailored and, simultaneously, only personally meaningful, leading to personalised experience cocoons.
ultimately, such synthetic media are going to 'challenge public opinion and what we know as reality in basically all sectors of culture and society' (woolley 2020, p. 107). hence, individuals will be faced with a broad variety of largely (or, at least, immediately) indiscernible truth candidates only to default to their pre-existing opinions, partisan bubbles, or influencers. however, the latter are also becoming synthetic. recent developments in today's media also involve the creation of synthetic personalities, primarily as virtual influencers (vis). like their human counterparts, these are personalities geared for maximum audience impact. however, due to their synthetic nature, vis provide an unprecedented degree of flexibility and targeting. hence, it is typical for creators to provide vis with 'a composite personality based on market research', and then use machine learning-based social listening to adapt to target audiences as effectively as possible (bradley 2020a). in contrast to a human influencer, all of the virtual one's characteristics, including 'age, gender, tone of voice and aesthetics', can be tailored to match audience expectations (bradley 2020a; see also bergendorff 2019). similarly to what has been observed in relation to deepfakes, the synthetic nature of these influencers opens up potential for manipulation, particularly because they can be made so impactful. campaigners already warn of adverse consequences on matters ranging from body image and a sense of inferiority in comparison to virtual personalities' computer-generated accomplishments to virtual influencers taking a political stance (booth 2019; yocom and acevedo 2019).
and while in the case of most, if not all, of the currently popular virtual influencers such concerns are more of a side effect, it is not too difficult to imagine a virtual influencer intentionally created for manipulative purposes; having been created specifically with appeal to a target audience's preferences in mind and specifically designed to evoke trust from that particular segment of the population, virtual influencers could conceivably become trusted purveyors of information held in high esteem by their followers (yocom and acevedo 2019). while they would be unlikely to develop independent persuasive power in the short or mid-term, virtual influencers could conceivably assist efforts to wrap target audiences within a cocoon of misinformation and thus amplify existing campaigns. for brands, vis offer the usual combination of advertising and audience engagement, but with total control of content and behaviour, unlike the often-erratic antics of real-life influencers (bradley 2020a). moreover, a vi tends to generate around three times more engagement than a human one and acquires followers at a significantly higher rate (leighton 2019), possibly as a result of their meticulous tailoring. an additional benefit of vis is their independence from real-world context: for example, while coronavirus lockdowns issued by governments have significantly constrained opportunities (travel, public appearances etc.) for human influencers, virtual ones can continue regardless (deighton 2020). the preceding can leave brands wondering why hire a human 'when you can create the ideal brand ambassador from scratch' (hsu 2019).
from a societal perspective, however, the situation might be somewhat suboptimal because vis are less regulated than their human counterparts, leaving brands more leeway in constructing their campaigns; moreover, vis endorsing products they claim to have tried (which is, of course, impossible) likely involves more than a hint of manipulation (hsu 2019; cook 2020). this problem also extends beyond products and brands as the persuasive power of vis can also be used for promoting political actors and agendas (deighton 2020). crucially, the synthetic nature of vis might be somewhat liberating: while social media have been used by humans to perform their fake selves, vis are at least authentically fake (hsu 2019). nevertheless, this authenticity can be easily lost. one reason is the interactive capacity of vis. making vis interact both among themselves and with real-life humans allows for storytelling opportunities and manufactured events that can be carefully orchestrated to generate publicity (sokolov 2019) and captivate audiences through 'emotional storytelling and empathy' (luthera 2020). this captivation might preclude followers from maintaining the necessary distance. the second reason, meanwhile, concerns those followers who do retain that distance: here, the creators of vis may be facing an emotive storytelling gap in talking about specifically human experience while simultaneously being open about not being human (bradley 2020b). that might drive the creators of some vis to be at best ambiguous about the nature of their creations. after all, around 42% of millennials and gen z social media users follow or have followed vis without realising their artificial nature (cook 2020). the next step in dissolving the boundary between the real and the synthetic will be true ai generation of vis without requiring major human input (bergendorff 2019).
once vis cease being painstakingly human-made and, instead, become interactively and automatically generated, they will not only become ubiquitous but will also become even more irresistible by automatically adapting to their audiences. and as their impact increases, the question of whether they are human or virtual will become increasingly irrelevant. a further step towards the problematisation of reality is the capacity for immersion in a synthetic environment through augmented and virtual reality technologies, typically referred to collectively as extended reality (er). the problematisation of reality is made particularly acute by the fact that er is only effective if it causes an illusion of presence (i.e. the loss of awareness of technological mediation) and an illusion of plausibility, whereby a user's experience responds personally to their actions (pan and de hamilton 2018, pp. 406-407) . for full immersion, users must also be provided with an 'experiencescape', i.e. a package of 'people, products, and a physical environment' (hudson et al. 2019, p. 461) . in that way, er becomes a self-contained world that stands in for 'normal' everyday experience. moreover, the merging of er and social media is likely to offer 'far more immersive experiences and the possibility of sharing more of our lives online', affording an even more effective refuge from the physical world (marr 2019) . thus far, the primary uses of er are for gaming and educational purposes, but new and emerging uses involve e.g. virtual attendance of real concerts and sporting events (rubin 2018), potentially even allowing a band to perform in their studio while their sound is put into a completely virtual concert performed by their avatars-particularly attractive in times of distancing and travel restrictions. additionally, there is increasing use of er meeting spaces for both commercial and private use, standing in for travel (rubin 2018) . 
in other words, er can literally offer a (synthetic) world of experience within the confines of one's home. however, virtual co-presence also comes with its own dangers, such as virtual abuse, which can have psychological effects as bad as the 'real' thing (rubin 2018). moreover, there is also a threat of manipulation as er can cause attitude change through experiencing a place, a brand, or a person (tussyadiah 2018). in fact, social engineering on er is already a thing, although the applications currently available (at least those open about their aims) are primarily concerned with empathy, social responsibility etc. (marx 2019). nevertheless, there is no guarantee that such techniques will not be (or are not already) also employed for nefarious purposes. an important sticking point is that er allows much more extensive data collection (particularly biometric data) than any other type of media (braun 2019). these data can also be matched with a record of everything the user sees or hears while the device is on (marr 2019). the result is not only potentially detrimental privacy invasion but also unprecedented capacity to tailor the experienced environment by predicting the most visceral of our responses (hall and takahashi 2017). that tailoring might tilt users towards prioritising er over the non-user-centric physical environment, thus further contributing to the problematisation of reality. furthermore, the er of the near future 'will be aware, data-rich, contextual, and interactive' courtesy of the development of both 5g and cloud-based representations of the physical environment, allowing data to be overlaid on the physical world in real time; the net result will be not only a richer experience but also a shift of er from an add-on to the operating system of everyday life (koetsier 2019), particularly as haptic technologies mature. as with the other types of synthetic media discussed above, the manipulative potential is rather clear.
in the case of er, this relates not only to the capacity for generating artificial experiences but also to its immersive quality, which might have serious ramifications. as audiences experience content much more vividly and are, therefore, less immune to the messages promoted, fake news on er is likely to be more impactful than its broadcast or online forms (pavlik 2019). a further issue to be kept in mind is the disappearance of authorship: whereas in traditional content it is easier for the audience to remain conscious of the constructed nature of what they encounter, immersiveness is likely to lead to over-ascribed authenticity (johnson 2020). in this way, borders between different versions of reality are likely to blur, thereby completing the slide towards epistemological anarchy. finally, as the adoption of er accelerates, that will further affect the perception of the self, not least through the development of digital avatars into effective stand-ins (marr 2019), thereby diminishing the importance of the physical self. concurrently, in line with the development of synthetic persons, it will be increasingly difficult to tell whether one is interacting with an avatar of a human or with an artificial agent (pan and de hamilton 2018, p. 411), thus further stripping reality of its tangible and verifiable character. as shown in this article, the issue of reality has become less straightforward than ever. as we move towards the construction of synthetic likenesses, persons, and entire worlds, that which is real (at least in terms of affecting understanding and causing decisions) might become intangible and personalised. certainly, people have always had divergent interpretations of the world, including many opinions that were false.
however, the present condition simultaneously allows the creation of increasingly realistic synthetic objects and environments and is likely to ensure survival even of those who fundamentally misperceive some of the basic physical characteristics of the world. nevertheless, the change goes even deeper as we seem set to lose the awareness of, or indeed interest in, the source of lived reality while lacking the time, resources, or motivation to assess the exact nature of what is driving us towards certain beliefs, actions, or decisions. hence, it is advisable that we move towards an affective criterion of reality: an artefact's or an environment's reality to an individual is directly proportionate to the power of affect exerted onto that individual (kalpokas 2020). while that might, at first sight, come across as a highly relativist solution, it nevertheless adheres closely to the problematisation of reality discussed above. the net result would be a partly synthetic life that only abides by the conventions of the physical world to a limited extent.
data availability: there is no data associated with this article.
references:
why i'd invest in a robot teenager: an investor's perspective on cgi influencers
the affective politics of the 'post-truth' era: feeling rules and networked subjectivity
fake online influencers a danger to children, say campaigners. the guardian
even better than the real thing? meet the virtual influencers taking over your feeds. the drum
can virtual influencers build real connections with audiences? the drum
security, privacy, virtual reality: how hacking might affect vr and ar. iot tech trends
campaigns must prepare for deepfakes: this is what their plan should look like. carnegie endowment for international peace
deepfakes and the new disinformation war: the coming of age of post-truth geopolitics
brands are building their own virtual influencers. are their posts legal? huffpost
the mediated construction of reality. polity, cambridge
deighton k (2020) why use virtual influencers? raconteur
do (microtargeted) deepfakes have real effects on political attitudes?
mediatization of emotion on social media: forms and norms in digital mourning practices
the role of deepfakes in malign influence campaigns
deepfake videos: when seeing isn't believing
augmented and virtual reality: the promise and peril of immersive technologies
the mediatization of society: a theory of the media as agents of social and cultural change
these influencers aren't flesh and blood, yet millions follow them. the new york times
with or without you? interaction and immersion in a virtual reality experience
promises and perils in immersive journalism. london
kalpokas i (2020) towards an affective philosophy of the digital: posthumanism, hybrid agglomerations, and spinoza
creating students' algorithmic selves: shedding light on social media's representational affordances
deepfakes: trick or treat
augmented reality is the operating system of the future. ar cloud is how we get there
what it means for virtual instagram influencers' popularity rising
the dark side of deepfake artificial intelligence and virtual influencers
'a deepfake porn plot intended to silence me': exploring continuities between pornographic and 'political' deep fakes. porn stud
determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos
mediatisation of politics: reflections on the state of the concept
the important risks and dangers of virtual and augmented reality
taking virtual reality for a test drive
mediatization: conceptual developments and research domains. sociol compass
why and how to use virtual reality to study human social interaction: the challenges of exploring a new research landscape
artificial intelligence and the manufacturing of reality. the rand blog
journalism in the age of virtual reality: how experiential media are transforming news
virtual influencer trends: an overview of the industry. the drum
virtual reality, presence, and attitude change: empirical evidence from tourism
deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news
deepfake news: ai-enabled disinformation as a multi-level public policy challenge
the reality game: how the next wave of technology will break the truth and what we can do about it
this social media influencer is a robot-but how could this influence the future? the globe post
conflict of interest: the author has reported no conflict of interests.
key: cord-288024-1mw0k5yu authors: wang, wei; liang, qiaozhuan; mahto, raj v.; deng, wei; zhang, stephen x. title: entrepreneurial entry: the role of social media date: 2020-09-29 journal: technol forecast soc change doi: 10.1016/j.techfore.2020.120337 sha: doc_id: 288024 cord_uid: 1mw0k5yu despite the exponential growth of social media use, whether and how social media use may affect entrepreneurial entry remains a key research gap. in this study we examine whether individuals' social media use influences their entrepreneurial entry. drawing on social network theory, we argue that social media use allows individuals to obtain valuable social capital, as indicated by their offline social network, which increases their entrepreneurial entry. we further posit the relationship between social media use and entrepreneurial entry depends on individuals' trust propensity based on the nature of social media as weak ties. our model was supported by a nationally representative survey of 18,873 adults in china over two years. as the first paper on the role of social media on entrepreneurial entry, we hope our research highlights and puts forward research intersecting social media and entrepreneurship.
social media, defined as online social networking platforms for individuals to connect and communicate with others (e.g., facebook), has attracted billions of users. an emerging body of literature suggests that social media enables entrepreneurs to obtain knowledge about customers or opportunities, mobilize resources to progress their ventures, and manage customer relationships after venture launch (cheng & shiu, 2019; de zubielqui & jones, 2020; drummond et al., 2018) . further, social media allows entrepreneurs to efficiently manage their online relationships and reinforce their offline relationships (smith et al., 2017; thomas et al., 2020; wang et al., 2019) . despite much research on the impact of social media on the launch and post-launch stages of the entrepreneurial process (bird & schjoedt, 2009; gruber, 2002; ratinho et al., 2015) , there is little research on the impact of social media on the pre-launch stage, the first of the three stages of the entrepreneurial process (gruber, 2002) . despite the popularity of social media, it remains unclear whether and how social media affects individuals at the prelaunch stage of the entrepreneurial process, given social media consists of weak ties and substantial noise from false, inaccurate or even fake information, which may or may not benefit its users. in this study, we aim to contribute to the literature by investigating whether individuals' social media use affects their entrepreneurial entry based on social network theory. we argue that a higher social media use will allow an individual to develop a larger online social network and accumulate a greater amount of social capital, which facilitates entrepreneurial entry. 
a larger social network may facilitate individuals' information and knowledge seeking activities (grossman et al., 2012; miller et al., 2006), which have a significant impact on their ability to generate and implement entrepreneurial ideas in the pre-launch stage (bhimani et al., 2019; cheng & shiu, 2019; orlandi et al., 2020). social media, unlike offline face-to-face social networks, allows a user to develop a large social network beyond their geographical area without incurring significant effort and monetary cost (pang, 2018; smith et al., 2017). the large social network arising from social media further enables social media users to build larger offline networks beyond their geographical proximity. hence, we argue that individuals' social media use has a positive impact on their offline network, which facilitates their entrepreneurial entry. however, social media is dominated by weak ties, and individuals with low trust propensity may not trust other online users easily, so they are cautious about online information and knowledge. thus, we propose that trust propensity, an individual's tendency to believe in others (choi, 2019; gefen et al., 2003), moderates the relationship between social media use and entrepreneurial entry. fig. 1 displays the proposed model. we assessed the proposed model on a publicly available dataset of china family panel studies (cfps), which consists of a sample of nationally representative adults. our findings reveal that social media use has a positive impact on entrepreneurial entry with individuals' offline network serving as a partial mediator. further, the findings confirm that individuals' trust propensity moderates the relationship between their social media use and entrepreneurial entry, with the relationship becoming weaker for individuals with high trust propensity.
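the hypothesised chain (social media use, via offline network, to entrepreneurial entry, moderated by trust propensity) can be illustrated with a toy mediation and moderation analysis. the simulated data, variable names, and coefficients below are our own illustrative assumptions, not the cfps data or the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Simulated data for the hypothesised model: social media use (x) affects
# offline network (m), which affects entrepreneurial entry (y); trust
# propensity (t) weakens the direct x -> y path.
x = rng.normal(size=n)                      # social media use
t = rng.normal(size=n)                      # trust propensity
m = 0.5 * x + rng.normal(size=n)            # offline network (mediator)
y = 0.3 * x + 0.4 * m - 0.2 * x * t + rng.normal(size=n)

def ols(y, *cols):
    """Least-squares slopes: returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

total = ols(y, x)[1]                        # total effect of x on y
direct = ols(y, x, m)[1]                    # direct effect, controlling for m
a, b = ols(m, x)[1], ols(y, x, m)[2]        # paths x -> m and m -> y
interaction = ols(y, x, t, x * t)[3]        # moderation by trust propensity

print(f"total={total:.2f} direct={direct:.2f} "
      f"indirect={a * b:.2f} moderation={interaction:.2f}")
```

in this setup the classic decomposition holds: the total effect equals the direct effect plus the indirect (mediated) effect a*b, and the negative interaction term mirrors the finding that the relationship weakens for individuals with high trust propensity. the paper itself additionally uses instrumental variables, which this sketch does not attempt.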
our study makes several important contributions to the literature. first, we contribute to the emerging entrepreneurship literature on an individual's transition to entrepreneurship by identifying factors contributing to the actual transition (mahto & mcdowell, 2018). the identification of social media use addresses mahto and mcdowell's (2018) call for more research on novel antecedents of individuals' actual transition to starting entrepreneurship. to the best of our knowledge, this is the first study on the role of social media on individuals' entrepreneurial entry using social network theory. the research on social media in the entrepreneurship area has focused on post-launch phases of entrepreneurship (cheng & shiu, 2019; drummond et al., 2018; mumi et al., 2019), while research on individuals at the pre-launch stage of the entrepreneurial process is lacking. second, our study specified a mechanism for the impact of individuals' social media use on entrepreneurial entry via their offline network and used instrumental variables to help infer the causality. yu et al. (2018, p. 2313) noted that "specifying mediation models is essential to the advancement and maturation of particular research domains. as noted, mathieu et al. (2008: 203) write, 'developing an understanding of the underlying mechanisms or mediators (i.e., m), through which x predicts y, or x → m → y relationships, is what moves organizational research beyond dust-bowl empiricism and toward a true science.'" third, we contribute to the limited stream of research in the entrepreneurship literature on the networking of individuals in the pre-launch phase, which has focused on networking offline (dimitratos et al., 2014; johannisson, 2009; klyver & foley, 2012). instead, we offer a clearer picture of networking for entrepreneurship by connecting the literature on online social media use (fischer & reuber, 2011; smith et al., 2017) with offline social networks and entrepreneurial entry.
the paper is organized as follows. the next section, section 2, provides an overview of social capital theory and the associated literature used to construct arguments for hypothesis development. section 3, data and methods, reports the context, the method, and the variables. section 4 reports the results of the statistical analysis, the instrumental variable analysis to address endogeneity concerns, and an assessment of robustness checks. section 5 discusses the study findings, outlines key study limitations, and provides guidance for future research, and section 6 concludes.
social capital theory (rutten & boekema, 2007) is a popular theoretical framework among management scholars. more recently, the theory has been increasingly used by entrepreneurship scholars to explain behaviors at the levels of both the individual (e.g., entrepreneurs) and the firm (e.g., new ventures) (dimitratos et al., 2014; klyver & foley, 2012; mcadam et al., 2019). according to the theory, the network of an individual has a significant influence on the individual's behavior (e.g., seeking a specific job) and outcomes (e.g., getting the desired job). in the theory, the network represents important capital, referred to as social capital, that produces outcomes valued by individuals (mariotti & delbridge, 2012). social capital allows an individual to obtain benefits by virtue of their membership in the social network. the underlying assumption of social capital is, "it's not what you know, it's who you know" (woolcock & narayan, 2000, p. 255). for example, people with higher social capital are more likely to find a job (granovetter, 1995) or progress in their career (gabby & zuckerman, 1998). for firms, social capital offers the ability to overcome the liability of newness or resource scarcity (mariotti & delbridge, 2012).
in the entrepreneurship literature, scholars have used social capital to explain resource mobilization and the pursuit of an opportunity by both entrepreneurs and small firms (dubini & aldrich, 1991; stuart & sorenson, 2007). at the individual level, entrepreneurs embedded in a network are more likely to overcome the challenges of resource scarcity and act promptly to launch a venture to capitalize on an opportunity (klyver & hindle, 2006). for example, entrepreneurs with high social competence establish strategic networks to obtain information, resources, and more strategic business contacts (baron & markman, 2003). mahto, ahluwalia and walsh (2018) supported the role of social capital by arguing that entrepreneurs with high social capital are more likely to succeed in obtaining venture capital funding. further, entrepreneurship scholars have argued that social networks influence entrepreneurs' decisions and the probability of executing a plan (davidsson & honig, 2003; jack & anderson, 2002; ratinho et al., 2015). among women entrepreneurs, the presence of a robust social network is a key determinant of success (mcadam et al., 2019). research suggests that the extent of a social network determines which resources entrepreneurs can obtain (jenssen & koenig, 2002; witt, 2004). in the entrepreneurial context, scholars have also examined the influence of social networking at the firm level. for example, new and small firms often use a strong social network to overcome the liability of newness or smallness to pursue growth opportunities (galkina & chetty, 2015; mariotti & delbridge, 2012). entrepreneurial ventures with limited resources often rely on their networks to obtain information and knowledge about consumers, competitors, and networks in a foreign market (lu & beamish, 2001; wright & dana, 2003; yeung, 2002).
in the internationalization context, it is almost impossible for entrepreneurial firms to enter a foreign market without a robust social network (galkina & chetty, 2015). it is well documented that new firms commonly use strategic networking for resources and capabilities (e.g., research and development) unavailable within the firm. the research on social networks in the entrepreneurship area is robust, but it has focused almost exclusively on traditional offline social networks, with limited attention to the now-dominant online social media. offline and online social networks differ significantly in the strength of ties between network associates (filiposka et al., 2017; rosen et al., 2010; subrahmanyam et al., 2008): offline social networks are dominated by strong ties, which rest on a high degree of trust and reciprocity, while online social media are dominated by weak ties, which involve low trust and reciprocity (filiposka et al., 2017). empirical findings from traditional offline social networks may therefore not be applicable to online social networks, which significantly limits our understanding of entrepreneurial phenomena in the context of online social media. further, the research on social networks has also paid limited attention to the pre-launch phase of the entrepreneurial process, focusing mostly on entrepreneurs and established entrepreneurial ventures. finally, as offline social networks, which have strong ties, are the main context of the literature, the role of individual trust propensity remains unexplored as well. this offers a unique opportunity to investigate the role of social media and individuals' trust propensity in the pre-launch phase of the entrepreneurial process.
the widespread adoption of the internet has led to exponential growth in social media around the world.
we refer to social media as "online services that support social interactions among users through greatly accessible and scalable web- or mobile-based publishing techniques" (cheng & shiu, 2019, p. 38). social media, using advanced information and communication technologies, offers its users the ability to connect, communicate, and engage with others on the platform (bhimani et al., 2019; kavota et al., 2020; orlandi et al., 2020). some of the most popular social media companies in the world are facebook, twitter, qq, and wechat. the large number of users, coupled with other benefits of social media platforms such as marketing, engagement, and customer relationship management, has attracted firms and organizations to these platforms. for example, firms have used social media to build effective business relationships with their customers (steinhoff et al., 2019), create brand loyalty (helme-guizon & magnoni, 2019), and engage in knowledge acquisition activities (muninger et al., 2019). firms have also started adopting social media to enhance their internal operations by strengthening communication and collaboration in teams (raghuram et al., 2019). thus, social media and its impact on firms and their environment has intrigued business and management scholars, driving the growth of the literature. recently, entrepreneurship scholars have begun exploring the impact of social media on entrepreneurial phenomena. the limited research on social media in entrepreneurship suggests that social media allows entrepreneurial firms to enhance exposure (mumi et al., 2019), mobilize resources (drummond et al., 2018), and improve innovation performance (de zubielqui & jones, 2020). this research, while enlightening, is devoted almost entirely to the post-launch stage of the entrepreneurial process, where a start-up is already in existence.
the impact of social media on other stages of the entrepreneurial process, especially the launch stage (i.e., entrepreneurial entry), remains unexplored and warrants further scholarly attention. for example, even though we know that social media can offer new effectual pathways for individuals by augmenting their social network, whether social media influences entrepreneurial entry or offline social networks remains unexamined. thus, our goal in this study is to address this gap in our understanding of the impact of social media on entrepreneurial entry.
a social network refers to a network of friends and acquaintances tied with formal and informal connections (barnett et al., 2019), which can exist both online and offline. social media is useful for creating, expanding, and managing networks. research suggests social media can be used to initiate weak ties (e.g., to start a new connection) and manage strong ties (i.e., to reinforce an existing connection) (smith et al., 2017). similar to social interactions in a physical setting, people can interact with others and build connections in the virtual world of social media, which eliminates the need for a physical presence in the geographical proximity of the connection target. the absence of a geographical proximity requirement, combined with the in-built relationship management tools in social media, allows a user to connect with a significantly larger number of other users regardless of their physical location. the strength of relationships among connected users in social media is reflected by the level of interaction among them; users in a strong connection have a higher level of interaction, and vice versa. however, given the much larger number of connections typical of social media, the dominance of weak ties is accepted. when users connected in a network, whether online or offline, reinforce their connection by enhancing their level of interaction in both mediums, they strengthen their ties.
for example, when two users connected on social media engage in offline activities, they may enhance their offline social tie through the joint experience. research also indicates that social media use helps reinforce or maintain the strength of relationships among offline friends (thomas et al., 2020). social media allows people to communicate with their offline friends instantly and conveniently without the need to be in geographical proximity (barnett et al., 2019). the opportunity to have a higher level of interaction at any time, regardless of physical location, offers social media users the ability to manage and enlarge their offline social network. further, social media can also be used to initiate offline ties directly. in the digital age, users can introduce their friends and acquaintances to one another on social media. social media platforms also recommend connections to users based on their user profile, preferences, and online activities to generate higher user engagement. for example, in china, when a user intends to connect with a person known to their friends or connections, they can ask their friends for a wechat name card recommendation. once connected online, users can extend the connection to their offline networks as well. as a result, higher social media use may enhance a user's offline social network. thus, we hypothesize:
h1. a user's social media use is positively associated with their offline social network.
entrepreneurship, a context-dependent social process, is the exploitation of a market opportunity through a combination of available resources by entrepreneurs (shane & venkataraman, 2000). the multistage process consists of: (a) the pre-launch stage, involving opportunity identification and evaluation, (b) the launch stage, involving business planning, resource acquisition, and entrepreneurial entry, and (c) the post-launch stage, involving venture development and growth (gruber, 2002).
our focus in this study is on entrepreneurial entry, the bridge between the pre-launch and launch stages of the entrepreneurial process, representing the transition from an individual to an entrepreneur (mahto & mcdowell, 2018; yeganegi et al., 2019). entrepreneurial entry requires a viable entrepreneurial idea (i.e., opportunity) and resources (ratinho et al., 2015; ucbasaran et al., 2008). individuals' social networks are important for researching and assessing entrepreneurial ideas (fiet et al., 2013) and accumulating valuable resources for entrepreneurial entry (grossman et al., 2012). research suggests that networks play a crucial role in the success of entrepreneurs and their ventures (galkina & chetty, 2015; holm et al., 1996). social networks allow individuals to access information and resources (chell & baines, 2000). a larger social network allows entrepreneurs and smes to overcome resource scarcity for performance enhancement and expansion, especially international expansion (dimitratos et al., 2014; johannisson, 2009). although enlightening, the prior research on social networks in entrepreneurship has focused only on traditional offline networks. in the digital age, social media has emerged as the key networking tool, enhancing individuals' ability to significantly enlarge their network and draw on higher social capital. these platforms allow entrepreneurs to efficiently manage both their online and offline networks and relationships. social media has significantly expanded the ability of individuals to network by removing geographical, cultural, and professional boundaries. it allows people separated by physical distance to overcome the distance barrier to network and manage relations effectively (alarcón-del-amo et al., 2018; borst et al., 2018).
this is especially beneficial for an individual searching for entrepreneurial ideas that may be based on practices, trends, or business models emerging in the geographical locations of their network associates. as an example, jack ma of alibaba did not have to travel to the us to stumble upon the idea of an online commerce platform; social media allowed him to observe and obtain that information through network associates. while social media enlarges the social network of an individual with associates located beyond their geographical location, critics of the platform argue that such networks are mostly made up of weak ties, lacking the strong ties of an offline network. however, individuals can still obtain useful and valuable information from abundant weak ties in such social networks (granovetter, 1973). when accessing the network, individuals have access to knowledge and information from various domains to inform their entrepreneurial ideas. further, the efficiency of social media allows for more effective and easy communication with distant individuals (alarcón-del-amo et al., 2018). the improved communication with distant network associates allows individuals to strengthen their ties and obtain richer and more reliable information. individuals may also obtain valuable access to new resources or new associates who may support the formation of their new entrepreneurial venture. the distant network associates could also offer individuals additional resources in the form of entrepreneurial connections to new partners, buyers, suppliers, or talent, all of which improve the chance of launching new ventures. it is well known that people, especially venture capitalists and investors, tend to minimize their risk by investing in known rather than unknown entrepreneurs. thus, we believe social media use is beneficial for entrepreneurial entry.
h2. social media use is positively associated with entrepreneurial entry.
social media significantly enhances individuals' capability to expand their networks by removing cultural, geographical, and professional boundaries, and to manage and strengthen offline social relationships. according to prior research, offline networks can provide the spatially proximate information and resources relevant to entrepreneurial entry (levinthal & march, 1993; miller et al., 2006). social media enhances the efficiency and reduces the transaction cost of communication with offline network associates, allowing individuals to use those associates for information, knowledge, and resource search. a recombination of information and knowledge is key to generating and then evaluating entrepreneurial ideas for entrepreneurial entry. in an offline social network, an individual has a stronger relationship with network associates because of their face-to-face interactions and collective experience in geographical proximity. further, geographical proximity in an offline social network facilitates relationships in real life by augmenting face-to-face interactions via virtual means (kim et al., 2019). the additional channel of communication via social media allows individuals to obtain timely and richer information, which may help them benefit from the collective wisdom and capability of their higher social capital (orlikowski, 2002) to develop entrepreneurial opportunities. the richer information and better access to knowledge and resources all benefit their entrepreneurial entry. thus, with higher social media use, individuals will have an expanded offline social network, which provides them the resources needed for successful entrepreneurial entry. therefore, we propose:
h3. the offline social network mediates the relationship between social media use and entrepreneurial entry.
trust propensity refers to an individual's tendency to trust others (choi, 2019; gefen et al., 2003).
trust propensity is a stable personality trait formed early in life through socialization and life experience (baer et al., 2018; warren et al., 2014). like other ingrained personality traits, it affects an individual's behavior, especially trust, in many situations (baer et al., 2018; friend et al., 2018). for example, a customer with a high trust propensity is more likely to trust a salesperson without doubting their integrity (friend et al., 2018). while trust propensity enables trust, it may leave individuals vulnerable due to reduced monitoring and a reduced flow of new ideas (molina-morales et al., 2011). furthermore, an individual with a high trust propensity may be inclined to obtain information from others indiscriminately and become locked into relationships. this may influence the individual's information processing capability. in the literature, trust propensity has attracted the attention of scholars seeking to explain not only the offline behavior of individuals, but also online behavior on social media platforms and in virtual communities (lu et al., 2010; warren et al., 2014). in social media, network associates are mostly connected through weak ties, which entail low trust and reciprocity. the prevalence of weak ties in social media makes the role of individual trust propensity critical. we believe trust propensity moderates the impact of individuals' social media use on entrepreneurial entry by influencing their ability to network with strangers and known associates. further, prior findings in the literature suggest that trust influences entrepreneurial information searching and processing (keszey, 2018; molina-morales et al., 2011; wang et al., 2017). this supports the possibility of trust propensity as a moderator of the link between social media use and entrepreneurial entry. in social media, the trust propensity of an individual influences their interaction and behavior (lu et al., 2010).
accordingly, an individual with a high trust propensity is more inclined to trust others. however, the trust in the relationship may not be mutual, as the transacting party may lack the same trust propensity. as a result, the individual may fail to elicit identical trust from the other party, limiting the benefits of the relationship. with the aid of social media, an individual has the ability to access a large network of weak ties with remote individuals. this may allow the individual to obtain and validate information crucial to formalizing and finalizing an entrepreneurial idea. however, the advantage of higher social capital from access to a large network on social media may be eroded when individuals have a high trust propensity, for several reasons. first, the network associates of individuals on social media vary significantly in their trust propensity. these variations may result in associates providing information via social media that is not always reliable. in particular, network associates with low trust propensity may be reluctant to share valuable information. individuals with high trust propensity will treat a network associate and the information they provide with trust and without suspicion (peralta & saldanha, 2014; wang et al., 2017). as a result, such social media users may be exposed to both true and false information from associates. they are thus more likely to experience greater obstacles in distinguishing reliable information from unreliable noise, thereby incurring significantly higher information and resource search costs. the higher cost may hinder the formation and finalization of an entrepreneurial idea and may hamper entrepreneurial entry. alternatively, individuals with low trust propensity are likely to be more cautious (choi, 2019).
such individuals, due to their cautious attitude, are less likely to experience noise in their information and resource search, and thus may find it easier to distinguish reliable information from unreliable information. as a result, the cost (i.e., monetary, labor, and time) of obtaining information and resources for such individuals is lower, which may significantly enhance the probability of entrepreneurial entry. second, in social interactions and transactions, trust may trigger a lock-in effect (molina-morales et al., 2011). the lock-in effect refers to a scenario where high trust propensity individuals interact only with a few trusted associates on social media. the lock-in effect prevents individuals from benefiting from the higher social capital available on social media. thus, a lock-in effect may significantly limit individuals' information and resource search to a small number of associates, which may significantly impair the development and formation of their entrepreneurial idea, and ultimately entrepreneurial entry. however, individuals with low trust propensity are less likely to suffer from the lock-in effect, thereby increasing their probability of entrepreneurial entry. thus, we hypothesize:
h4. trust propensity moderates the relationship between social media use and entrepreneurial entry.
we tested our proposed model on a sample of adults in china, a country with the world's largest population and second-largest gross domestic product. china provides a rich setting for examining the link between social media and entrepreneurial entry for multiple reasons. first, china has experienced exponential growth in entrepreneurship and private enterprise development unleashed by its economic transition (he et al., 2019). the resulting entrepreneurial intensity provides a suitable context for investigating entrepreneurial phenomena, including entrepreneurial entry.
second, the adoption and use of social media in china is widespread, with the world's largest number of internet users (li et al., 2020). the major american-based social media platforms, such as facebook, twitter, and instagram, were inaccessible in china at the time of the study (makri & schlegelmilch, 2017), and people in china use other social media, such as wechat, qq, and sina weibo, which mirror or are similar to the american platforms (li et al., 2020).
our data is from the surveys of the china family panel studies (cfps). cfps is a nationally representative longitudinal survey conducted every two years since 2010 by the institute of social science survey at peking university (xie & hu, 2014). the cfps covers 95% of the chinese population in 25 provinces, providing extensive individual- and family-level economic and social life information. the data from cfps has been validated and used for research in entrepreneurship (barnett et al., 2019) and other fields (hou et al., 2020; sun et al., 2020). the survey, first conducted in 2010, had three follow-up waves in 2012, 2014, and 2016. our study used data from the 2014 and 2016 waves, which began to include variables on internet activities. the 2014 survey contains 37,147 observations from 13,946 families. we matched the samples in 2014 and 2016 through a unique identifier of the respondents. as our study focuses on the transition of an individual to an entrepreneur, we excluded respondents who had already made an entrepreneurial entry by 2014, and our final study sample had 18,873 observations.
entrepreneurial entry. the cfps survey followed the existing literature in operationalizing entrepreneurial entry, an individual's entry into entrepreneurship, by whether (s)he started a business or became self-employed (barnett et al., 2019; eesley & wang, 2017). accordingly, in this study, entrepreneurial entry refers to whether the respondents became entrepreneurs within the two years between the 2014 and 2016 surveys.
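as an illustration only (the actual cfps variable names and file layout differ), the sample construction just described — matching respondents across the 2014 and 2016 waves by a unique identifier and excluding those already self-employed in 2014 — can be sketched as:

```python
# sketch of the sample-construction step: match respondents across two
# survey waves by a unique identifier and drop anyone who was already
# self-employed (option "d") in the base wave. field names ("pid",
# "employment") are hypothetical, not the real cfps variable names.

def build_panel(wave_2014, wave_2016):
    """return matched records for respondents present in both waves
    who had not yet made an entrepreneurial entry in 2014."""
    later = {r["pid"]: r for r in wave_2016}
    panel = []
    for r in wave_2014:
        follow_up = later.get(r["pid"])
        if follow_up is None:
            continue  # respondent not re-interviewed in 2016
        if r["employment"] == "d":
            continue  # already self-employed in 2014: excluded
        panel.append({"pid": r["pid"], "base": r, "follow": follow_up})
    return panel
```

under this sketch, the 18,873-observation analysis sample would be `len(build_panel(wave_2014, wave_2016))` after both exclusions.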
specifically, the cfps surveys had a multiple-choice question on employment information, where participants chose their current employment status among: (a) agricultural work for your family, (b) agricultural work for other families, (c) employed, (d) individual/private business/other self-employment, and (e) non-agricultural casual workers. we used option d to operationalize entrepreneurial entry, following barnett et al. (2019). if the respondent did not choose option d in 2014 but chose option d in 2016, (s)he transitioned to self-employment in those two years, and we dummy coded this individual as 1 on entrepreneurial entry.
social media use. a primary use of social media on the internet is socializing (bhimani et al., 2019; hu et al., 2018). social media is the main online platform where people connect to each other and share information (bahri et al., 2018). the 2014 cfps survey measured social media use by asking, "in general, how frequently do you use the internet to socialize?" the respondents selected an option from the following: (1) every day, (2) 3-4 times per week, (3) 1-2 times per week, (4) 2-3 times per month, (5) once per month, (6) once every few months, and (7) never. as the scale was inverted, we reverse coded it as 8 minus the selected option to obtain the measure of social media use.
offline social network. offline social network refers to an individual's network of associates in the real world. scholars have used a variety of measures to assess the social network of an individual, including the cost of maintaining relationships (du et al., 2015; lei et al., 2015). in china, the context of our study, a social network is composed primarily of family, friends, and close acquaintances (barnett et al., 2019). an important means of maintaining such relationships is exchanging gifts during important festivals, wedding and funeral ceremonies, and other occasions.
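the two coding rules above can be sketched as follows (function names are illustrative): entrepreneurial entry is a dummy for a move into option d between the waves, and the inverted 1-7 frequency item is recoded as 8 minus the selected option.

```python
# illustrative coding of the two key variables.
# entrepreneurial entry: 1 if the respondent did not choose option "d"
# (self-employment) in 2014 but chose it in 2016, else 0.
# social media use: the 1-7 frequency item is inverted (1 = every day,
# 7 = never), so recode as 8 minus the selected option, giving
# 7 = every day down to 1 = never.

def entrepreneurial_entry(status_2014, status_2016):
    return 1 if status_2014 != "d" and status_2016 == "d" else 0

def social_media_use(option):
    if option not in range(1, 8):
        raise ValueError("option must be an integer from 1 to 7")
    return 8 - option
```

for example, a respondent who was employed (option c) in 2014 and self-employed (option d) in 2016 is coded 1 on entry, and a respondent answering "every day" (option 1) scores 7 on social media use.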
thus, scholars have used gift expenses and receipts in the previous year to assess social networks in china (barnett et al., 2019; lei et al., 2015). we focused only on expenses incurred on gifts as the cost of maintaining an offline social network. hence, we operationalized the offline social network by the question on "expenditure on gifts for social relations in the past 12 months" from the 2014 cfps survey. given that the expenditure is an amount, we transformed it using its natural log (ln(expenditure + 1)) (lei et al., 2015).
trust propensity. following the guidance of previous studies (chen et al., 2015; volland, 2017), the cfps survey assessed trust propensity with a single-item scale that asked the extent to which a respondent trusts others. the respondents indicated their preference on a 0-10 scale. the data for trust propensity is from the 2014 survey.
controls. in the statistical analysis, we controlled for respondent demographics such as gender, age, and education. as age can correlate with people's resource availability, experience, and willingness to assume risk in a nonlinear fashion, we followed prior research and included the squared term of age as a control variable (belda & cabrer-borrás, 2018). given the possibility of personal and family income influencing an individual's ability to finance a start-up (cetindamar et al., 2012; edelman & yli-renko, 2010), we included it as a control variable in the analysis. all control variables are from the 2014 survey.
we report descriptive statistics along with correlations among the study variables in table 1. table 1 shows significant correlations among the study variables, with most of the correlation coefficients below 0.40. the negative correlation between age and social media use, at -0.58, is the only exception. given the generally modest correlations among the study variables, we rule out the possibility of multicollinearity in the sample.
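the ln(expenditure + 1) transformation above can be sketched as follows; adding 1 before taking the log keeps respondents who reported zero gift expenditure in the sample, since ln(0) is undefined:

```python
import math

# the offline-network measure is annual gift expenditure,
# log-transformed as ln(expenditure + 1). the +1 shift handles
# respondents reporting zero gift spending (ln(0) is undefined) and
# maps zero expenditure to a score of exactly 0.

def offline_network(gift_expenditure):
    if gift_expenditure < 0:
        raise ValueError("expenditure cannot be negative")
    return math.log(gift_expenditure + 1)
```

so a respondent spending nothing on gifts scores 0, while the transform compresses very large expenditures, reducing the influence of extreme values on the ols estimates.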
we further confirmed this inference by calculating variance inflation factors (vif), which were well below the threshold level of 10, with the highest vif being 1.94.
we used stata and spss to test our hypotheses. in the regression models, we used ordinary least squares regression to predict the offline social network and logit regression to predict entrepreneurial entry. we report the results of hypothesis testing in table 2. in the table, model 1 shows the impact of social media use on the offline social network. the regression coefficient suggests that social media use has a positive and significant (β=0.039, p<0.01) influence on the offline social network, providing support for h1. in table 2, models 2 and 3 provide support for hypotheses h2 and h3. the results of model 2 show the main effect of social media use on entrepreneurial entry is significant (β=0.050, p<0.05), thus providing support for h2. in model 3, when we add the offline social network, the coefficient of social media use decreases (β=0.047, p<0.05) and the coefficient of the offline social network becomes significant (β=0.084, p<0.05). meanwhile, the chi-squared statistics suggest that the model improved significantly (δχ²=6.04, p<0.05). the results offer preliminary support for hypothesis h3 (baron & kenny, 1986). we further confirm h3 by using the bootstrapping method due to its inherent advantages (hayes, 2013; kenny & judd, 2014; preacher & hayes, 2008) over the technique of baron and kenny (1986). we apply bootstrapping with model 4 in spss process (hayes, 2013). with 5,000 bootstrapping samples, the results show that social media use has an indirect effect on entrepreneurial entry (β=0.0033, 95% confidence interval: 0.0008-0.0065) while the direct effect is also significant (β=0.0465, 95% confidence interval: 0.0066-0.0864). thus, the results support hypothesis h3.
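the bootstrapped indirect effect can be sketched on synthetic data as below. this is not the authors' spss process code: for simplicity both stages use ols here, whereas the paper predicts entrepreneurial entry with a logit model, but the logic is the same — the indirect effect is the product of the x → m path (a) and the m → y path (b), with a percentile confidence interval taken over resampled estimates.

```python
import numpy as np

# bootstrap percentile ci for the indirect (mediated) effect a*b,
# where a is the slope of m ~ x and b is the slope of m in y ~ x + m.
# both stages fitted by ols via least squares for simplicity.

def ab_path(x, m, y):
    """return a*b from two regressions: m ~ x and y ~ x + m."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

def bootstrap_indirect(x, m, y, reps=1000, seed=0):
    """return the 95% percentile interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = []
    for _ in range(reps):
        idx = rng.integers(0, n, n)  # resample rows with replacement
        draws.append(ab_path(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return lo, hi
```

mediation is supported when the interval excludes zero, as with the paper's reported interval of 0.0008-0.0065; partial mediation additionally requires the direct effect of x to remain significant, as in model 3.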
the moderating effect of trust propensity is reported in model 5 of table 2. the interaction of social media use and trust propensity is significant and negative (β=-0.017, p<0.05), along with a significant change from model 4 to model 5 (δχ²=4.66, p<0.05). this provides support for hypothesis h4. in fig. 2, we depict the moderating effect, where the social media use of high trust propensity individuals has a weaker impact on entrepreneurial entry. additionally, model 6 displays the results with all study variables included, suggesting the full model is robust.
we performed additional robustness checks by using alternative measurements for social media use and trust propensity. first, as social media is a communication channel on the internet, we used an item measuring the degree of importance of the internet as a communication channel ("how important is the internet as a communication path?", scored on a 1-5 scale from "very unimportant" to "very important") as an alternative measure of social media use. the results of the analysis with this alternative measure are in table 3 and are largely consistent with our original analysis, except for the moderating effect of trust propensity. second, because a high trust propensity individual is more likely to trust others, and vice versa for a low trust propensity individual, we used an alternative dichotomous measure of whether people are mostly trusting or cautious when getting along with others. the results of the analysis with the alternative measure of trust propensity are reported in table 4 and offer support for the moderating effect of trust propensity.
we assessed endogeneity issues using the two-stage least squares instrumental variables (2sls-iv) approach.
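how the interaction in model 5 is read: with an interaction term, the logit-scale slope of entrepreneurial entry on social media use varies with trust propensity as β_sm + β_int × trust. the interaction coefficient below is the reported -0.017; the main-effect value is illustrative only, borrowed from model 2, since the model 5 main effect is not quoted in the text.

```python
# marginal (logit-scale) slope of entrepreneurial entry on social media
# use at a given trust propensity, for a model with an interaction term:
#   slope(trust) = b_social_media + b_interaction * trust

B_SOCIAL_MEDIA = 0.050   # illustrative main effect, taken from model 2
B_INTERACTION = -0.017   # interaction coefficient reported in model 5

def slope_at(trust_propensity):
    """slope of entry on social media use at a trust score (0-10)."""
    if not 0 <= trust_propensity <= 10:
        raise ValueError("trust propensity is measured on a 0-10 scale")
    return B_SOCIAL_MEDIA + B_INTERACTION * trust_propensity
```

with a negative interaction, the slope shrinks as trust propensity rises, which is the pattern depicted in fig. 2: social media use matters less for high trust propensity individuals.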
there is a possibility that social media use may not be fully exogenous and could be influenced by certain unobservable characteristics that also influence offline social network.
[table 3 notes: standard errors in parentheses; the sample size n varies because of fewer missing values on the alternative measurement; social media use is measured with the item "how important is the internet as a communication path?", scored on a 1-5 scale from "very unimportant" to "very important"; ⁎ p < 0.10, ⁎⁎ p < 0.05, ⁎⁎⁎ p < 0.01.]
following prior literature (semadeni et al., 2014), we treated social media use as an endogenous variable and reassessed our results on offline social network. in our model, we used two instrumental variables (iv) to investigate potential endogeneity issues: (1) online work and (2) online entertainment, operationalized through the frequency of using the internet for work and for entertainment, respectively. first, as people can work or entertain themselves on social media, these two ivs are correlated with social media use and thus satisfy the relevance condition. second, the ivs should not be directly correlated with the error terms of the estimations on offline social network, because online work and entertainment are not direct social activities; users engage in them to work and to be entertained. hence, online work and entertainment should not directly impact offline social network in a strong manner. empirically, in the first-stage result in model 1, the coefficients of the instruments on the potentially endogenous variable are, by and large, significant, suggesting the relevance of the instruments. also, the cragg-donald f-statistic shows that the instruments are strong (f=9342.66).
moreover, the results of the overidentification estimation suggest that the instruments are exogenous (sargan statistic p=0.55) (semadeni et al., 2014). thus, both ivs statistically satisfy the conditions to qualify as instruments. finally, both the durbin (p<0.01) and wu-hausman (p<0.01) tests confirm the endogeneity. the results of the iv estimation, reported in table 5, are consistent with the regression outcomes in the previous analysis. these outcomes empirically confirm that social media use positively affects offline social network, even after accounting for endogeneity. despite social media being dominated by weak ties and the substantial noise of false, inaccurate, or even fake information, our findings reveal that individuals with higher social media use tend to undertake entrepreneurial entry. this is consistent with the positive benefits of higher social capital or a larger social network (galkina & chetty, 2015; johanson & vahlne, 2009). our results suggest that higher social media use indicates a higher probability of a larger online social media network, which provides greater social capital that benefits entrepreneurial entry. our finding that the positive influence of offline social network on entrepreneurial entry is also due to the network effect extends the research on the offline social networks of entrepreneurs (chell & baines, 2000; dubini & aldrich, 1991; klyver & foley, 2012). the literature suggests that social networks influence entrepreneurs' decision making and actions, and that entrepreneurs require a strong social network to succeed in the entrepreneurial process (jenssen & koenig, 2002; witt, 2004). our findings, using instrumental variable analysis, suggest that higher social media use enhances individuals' offline social networks.
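the 2sls-iv logic, including the sargan overidentification statistic, can be reproduced with ordinary least-squares algebra. the sketch below uses simulated data in which 'social media use' is endogenous (driven by an unobserved confounder) and two valid instruments stand in for online work and online entertainment; variable names and coefficients are placeholders, and the cragg-donald, durbin, and wu-hausman statistics are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

# simulated instruments standing in for online work / online entertainment
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)  # unobserved confounder driving the endogeneity

# social media use depends on both instruments and on u (endogenous regressor)
smu = 0.5 * z1 + 0.5 * z2 + 0.5 * u + rng.normal(size=n)
# offline social network depends on smu and on the same confounder, so naive
# ols would be biased upward while 2sls recovers the true coefficient (0.3)
offline = 0.3 * smu + 0.8 * u + rng.normal(size=n)

ones = np.ones(n)
Z = np.column_stack([ones, z1, z2])

# stage 1: project the endogenous regressor on the instruments
smu_hat = Z @ np.linalg.lstsq(Z, smu, rcond=None)[0]

# stage 2: regress the outcome on the fitted values
b0, b1 = np.linalg.lstsq(np.column_stack([ones, smu_hat]), offline, rcond=None)[0]

# sargan overidentification test: n * R^2 of the 2sls residuals on the
# instruments, chi-squared with (#instruments - #endogenous) = 1 df
resid = offline - b0 - b1 * smu  # residuals use the *actual* regressor
fit = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
sargan = n * (1 - np.sum((resid - fit) ** 2) / np.sum((resid - resid.mean()) ** 2))

print(f"2sls estimate: {b1:.3f}  sargan n*R^2: {sargan:.2f}")
```

with valid instruments the sargan n·R² statistic stays small relative to the χ²(1) critical value, matching the paper's non-rejection (p=0.55), while the second-stage coefficient recovers the true effect that naive ols would overstate.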
this finding is consistent with past evidence that users often use social networking sites to connect with family and friends (subrahmanyam et al., 2008). unlike past studies that simply indicate an overlap between social media and offline network associates (mcmillan and morrison 2006), our instrumental variable analysis helps to establish the impact of online networks on offline networks, suggesting that social media enhances offline networks and, subsequently, entrepreneurial entry. specifying mediation models is essential to the advancement of research domains, and hence this study helps research on social media in entrepreneurship to develop further beyond its nascent stage (yu et al., 2018). finally, our finding that trust propensity moderates the influence of social media use on individuals' entrepreneurial entry suggests that social media, which is dominated by weak ties and substantial noise from false, inaccurate, or even fake information, is in fact beneficial to entrepreneurial entry. such benefit may be smaller for people who are more trusting.
[table 4 notes: standard errors in parentheses; the sample size n varies because of fewer missing values on the alternative measurement; trust propensity is measured with the item "in general, do you think that most people are trustworthy, or is it better to take greater caution when getting along with other people?", coded 1 for "most people are trustworthy" and 0 for "the greater caution, the better"; ⁎ p < 0.10, ⁎⁎ p < 0.05, ⁎⁎⁎ p < 0.01.]
[table 5: the results of instrumental variable analysis (n = 18,873).]
specifically, our findings indicate that an individual's trust propensity plays a critical role in their use of social media and the outcomes they experience. our results have important implications for practice.
first, as social media can help individuals build networks that provide business resources and information both locally and remotely, people can use social media to refine and validate entrepreneurial ideas and to secure much-needed resources for an entrepreneurial launch. second, as an individual's trust propensity enhances or hinders the positive role of social media in entrepreneurial entry, potential entrepreneurs may aim to apply more caution to their online contacts to obtain greater benefit from social media use for entrepreneurial entry. finally, given the role of social media in entrepreneurship, social media platforms may more specifically promote and facilitate the networking of individuals to increase the level of entrepreneurial activity that can be enhanced via social media. our study has limitations and offers opportunities for further inquiry. first, theoretically, we used social network theory, and another theoretical framework may identify other possible mechanisms. for instance, an identification-based theory may argue that the influence of social media use on entrepreneurial entry could also be attributed to identity change in individuals due to network associates, as theorized by mahto and mcdowell (2018). however, given the lack of information about network associates on social media, identity change may be a remote probability. second, empirically, we operationalized offline social networks using gift expenses, which serve as a proxy for the offline social network. the large nationally representative survey we used contained only expenditure on family relationships, yet individuals also need to spend similarly on gifts, eating out, etc. to maintain relationships with work acquaintances, partners, clients, former schoolmates, distant relatives, and others. hence, the expenditure on other relationships may mirror the expenditure on family relationships captured by this survey.
we acknowledge these limitations and call for future research to search for alternative measures of social networks in other datasets. third, we caution readers against generalizing the findings of our study outside of china due to the study sample. china differs from other countries in its cultural, legal, and social environment, which may affect respondent behavior on social media and entrepreneurial launch. thus, we suggest scholars empirically examine our model in other cultures. our study addresses the effect of social media on the entrepreneurship process, especially the pre-launch phase, by assessing the link between social media use and entrepreneurial entry. we use social capital theory to explain the link between social media use and entrepreneurial entry. we further argue that this relationship is contingent on individuals' trust propensity: individuals with low trust propensity are more likely to benefit from social media use for entrepreneurial entry compared with individuals with high trust propensity. we also find that social media use strengthens individuals' offline social networks, which further aids their entrepreneurial entry. in conclusion, a key message is that social media can help individuals transition to entrepreneurship.
and practice, journal of applied psychology, journal of small business management, and family business review, etc. raj serves on the editorial review boards of family business review and international entrepreneurship and management journal. he is also an associate editor of the journal of small business strategy and the journal of small business management. wei deng is a phd candidate majoring in organization management at the school of management, xi'an jiaotong university. his research interests include social entrepreneurship, entrepreneurial bricolage, and female entrepreneurship. his research has been published in journal of business research, asia pacific journal of management, and others. stephen x.
zhang is an associate professor of entrepreneurship and innovation at the university of adelaide. he studies how entrepreneurs and top management teams behave under uncertainties, such as the impact of major uncertainties in the contemporary world (e.g. covid-19 and ai) on people's lives and work. such research has also given stephen opportunities to raise more than us$1.5 million in grants in several countries. prior to his academic career, stephen worked in several industries and founded startups.

references
examining the impact of managerial involvement with social media on exporting firm performance. int
it's not you, it's them: social influences on trust propensity and trust dynamics
knowledge-based approaches for identity management in online social networks
does the utilization of information communication technology promote entrepreneurship: evidence from rural china
the moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations
beyond social capital: the role of entrepreneurs' social competence in their financial success
necessity and opportunity entrepreneurs: survival factors
social media and innovation: a systematic literature review and future research directions
entrepreneurial behavior: its nature, scope, recent research, and agenda for future research
from friendfunding to crowdfunding: relevance of relationships, social media, and platform activities to crowdfunding performance
what the numbers tell: the impact of human, family and financial capital on women and men's entry into entrepreneurship in turkey
networking, entrepreneurship and microbusiness behaviour
the joint moderating role of trust propensity and gender on consumers' online shopping behavior
how to enhance smes customer involvement using social media: the role of social crm
characters' persuasion effects in advergaming: role of brand trust, product involvement, and trust propensity
the role of social and human capital among nascent entrepreneurs
how and when social media affects innovation in start-ups. a moderated mediation model
micro-multinational or not? international entrepreneurship, networking and learning effects
the impact of social media on resource mobilisation in entrepreneurial firms
do social capital building strategies influence the financing behavior of chinese private small and medium-sized enterprises?
personal and extended networks are central to the entrepreneurial process
the impact of environment and entrepreneurial perceptions on venture-creation efforts: bridging the discovery and creation views of entrepreneurship
social influence in career choice: evidence from a randomized field experiment on entrepreneurial mentorship
search and discovery by repeatedly successful entrepreneurs
bridging online and offline social networks: multiplex analysis
social interaction via new social media: (how) can interactions on twitter affect effectual thinking and behavior?
propensity to trust salespeople: a contingent multilevel-multisource examination
social capital and opportunity in corporate r&d: the contingent effect of contact density on mobility expectations
effectuation and networking of internationalizing smes. manag
trust and tam in online shopping: an integrated model
the strength of weak ties
getting a job: a study of contacts and careers
resource search, interpersonal similarity, and network tie valuation in nascent entrepreneurs' emerging networks
transformation as a challenge: new ventures on their way to viable entities
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
entrepreneurship in china
consumer brand engagement and its social side on brand-hosted social media: how do they contribute to brand loyalty?
business networks and cooperation in international business relationships
returns to military service in off-farm wage employment: evidence from rural china
what role does self-efficacy play in developing cultural intelligence from social media usage? electron
the effects of embeddedness on the entrepreneurial process
the effect of social networks on resource access and business start-ups
networking and entrepreneurship in place
the uppsala internationalization process model revisited: from liability of foreignness to liability of outsidership
social media and disaster management: case of the north and south kivu regions in the democratic republic of the congo
power anomalies in testing mediation
trust, perception, and managerial use of market information. int
urbansocialradar: a place-aware social matching model for estimating serendipitous interaction willingness in korean cultural context
networking and culture in entrepreneurship
do social networks affect entrepreneurship? a test of the fundamental assumption using large sample, longitudinal data
do social networks improve chinese adults' subjective well-being?
the myopia of learning
the impact of social media on the business performance of small firms in china
the internationalization and performance of smes
from virtual community members to c2c e-commerce buyers: trust in virtual communities and its effect on consumers' purchase intention
entrepreneurial motivation: a non-entrepreneur's journey to become an entrepreneur
the diminishing effect of vc reputation: is it hypercompetition?
time orientation and engagement with social networking sites: a cross-cultural study in austria, china and uruguay
overcoming network overload and redundancy in interorganizational networks: the roles of potential and latent ties
mediational inferences in organizational research: then, now, and beyond
stories from the field: women's networking as gender capital in entrepreneurial ecosystems
coming of age with the internet: a qualitative exploration of how the internet has become an integral part of young people's lives
adding interpersonal learning and tacit knowledge to march's exploration-exploitation model
the dark side of trust: the benefits, costs and optimal levels of trust for innovation performance
investigating social media as a firm's signaling strategy through an ipo
the value of social media for innovation: a capability perspective
organizational technological opportunism and social media: the deployment of social media analytics to sense and respond to technological discontinuities
knowing in practice: enacting a collective capability in distributed organizing
can microblogs motivate involvement in civic and political life? examining uses, gratifications and social outcomes among chinese youth
knowledge-centered culture and knowledge sharing: the moderator role of trust propensity
asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models
virtual work: bridging research clusters
structuring the technology entrepreneurship publication landscape: making sense out of chaos
online and offline social networks: investigating culturally-specific behavior and satisfaction
regional social capital: embeddedness, innovation networks and regional economic development
the perils of endogeneity and instrumental variables in strategy research: understanding through simulations
the promise of entrepreneurship as a field of research
embracing digital networks: entrepreneurs' social capital online
online relationship marketing
strategic networks and entrepreneurial ventures
online and offline social networks: use of social networking sites by emerging adults
depressive costs: medical expenditures on depression and depressive symptoms among rural elderly in china
student loneliness: the role of social media through life transitions
opportunity identification and pursuit: does an entrepreneur's human capital matter?
the role of risk and trust attitudes in explaining residential energy demand: evidence from the united kingdom
wechat use intensity and social support: the moderating effect of motivators for wechat use
trust and knowledge creation: the moderating effects of legal inadequacy
social media effects on fostering online civic engagement and building citizen trust and trust in institutions
entrepreneurs' networks and the success of start-ups
social capital: implications for development theory, research and policy
changing paradigm of international entrepreneurship strategy
an introduction to the china family panel studies (cfps).
individual-level ambidexterity and entrepreneurial entry
entrepreneurship in international business: an institutional perspective
consequences of downward envy: a model of self-esteem threat, abusive supervision, and supervisory leader self-improvement

key: cord-026579-k3w8h961 authors: carr, paul r. title: shooting yourself first in the foot, then in the head: normative democracy is suffocating, and then the coronavirus came to light date: 2020-06-10 journal: postdigit sci educ doi: 10.1007/s42438-020-00142-3 sha: doc_id: 26579 cord_uid: k3w8h961 this text starts with the premise that 'normative democracy' has rendered our societies vulnerable and burdened with unaddressed social inequalities. i highlight three central arguments: (1) social media and, consequently, citizen engagement are becoming a significant filter that can potentially re-imagine the political, economic, and social worlds, which increasingly bleed over to how we might develop and engage with 'democracy'; to this end, i introduce a brief case study on the nefarious interpretation of the killing of jamal khashoggi in 2018 to underscore the tension points in normative democracy; (2) capitalism, or neoliberalism, needs to be more fully exposed, interrogated, and confronted if 'normative, representative, hegemonic, electoral democracy' is to be re-considered, re-imagined, and re-invented; the perpetuation of social inequalities lays bare the frailty of normative democratic institutions; (3) covid-19 has exposed the fault lines and fissures of normative democracy, illustrating here the 'common sense' ways that power imbalances are sustained, which leaves little room for social solidarity; i present herein the case of the economic/labor dynamic in quebec during the coronavirus. ultimately, i believe the quest to re-imagine a more meaningful, critically engaged democracy, especially during a context that is imbued with a political, economic, and public health crisis, cannot be delayed much longer.
2018; thésée, carr, duclos, and potwora 2018), and contextualizes how we think (and act) about the subject. to further clarify this context of how normative democracy manifests itself, i highlight the following examples from my research projects with colleagues over the years, which involved studies in some 15 countries with roughly 5000 teacher-education and educator participants (carr and thésée 2019):
& we found that the vast majority did not have a robust, significant democratic experience in their own education;
& that this has affected how they consider democracy and education;
& that social justice is, for many, a difficult and problematic area to cultivate in and through education owing to a weakly asserted, structured, and supported institutional culture based on normative democracy;
& that most considered that the space for inclusive and critical engagement in and through education is constrained, limited, and fraught with obstacles;
& that racialized participants had significantly higher levels of experience, conscientization, and engagement with, for example, racism, antiracism, and efforts to address racial inequities, which further underscores how normative democracy closes down fundamental debate, dialog, teaching and learning, as well as transformative education, while presenting the posture and framework of democracy.
when democracy and education are considered to be naturally disconnected, while not leaving room for a more critically engaged democracy, it is not difficult to imagine the suffocating nature of normative democracy. normative elections, the ones that have been so effectively presented by the usa as the backbone of any meaningful democracy, have been jettisoned into a cesspit of turmoil and intractable debate that often neglects to problematize some of the most pressing and germane issues (achen and bartels 2017; howe 2010; torcal and ramon montero 2013).
not everyone involved in elections is corrupt or corrupted, or is afflicted with unsightly motivations, and people who go to the polls are not simply sheep being led to the proverbial slaughterhouse. there is a great deal of complexity as to why we vote and why we hope that there will be some hope in participating in mainstream democracy, but faith in electoral democracy is waning almost everywhere (torcal and ramon montero 2013; carr and thésée 2019). yet, these normative elections, which are often ordered to measure with the threat of massive (real and rhetorical) carpet-bombing and worse if not realized, are replete with all kinds of paradoxical anti-democratic maneuvers: who can be elected, how much money plays into the process, how media can control and shape the message, how manipulation and diversion are fundamental components, how seeking to win is more of a priority than seeking to build a meaningful democracy, and how capitalism is the enormous, indelicate, meandering, proverbial 800-pound gorilla in the room (amico 2020; carr and thésée 2019). added to this is the role, purpose, and place of education in supporting, cultivating, and building a critically engaged democracy as well as critically engaged citizen participation. it is extremely difficult to have one without the other (democracy without education, for example, or, rather, meaningful, critically engaged democracy without meaningful, critically engaged education). (see carr 2011 and carr and thésée 2019, as well as the unesco chair dcmét website at uqo.ca/dcmet/ for an archive of publications.) and then, starting in late 2019, the world started to feel the indelible, intractable, and (in)visible perturbations of the coronavirus, which emanated in china and has quickly disseminated throughout all regions, making it a global pandemic.
the number of people affected, contracting the virus, and ultimately succumbing to it is increasing daily at this time, but much analysis and data-crunching indicates that, in many areas, after several weeks of self-distancing, hygienic measures, increased testing, closing down all but 'essential services', and enhanced medical and health care measures, the 'curve' may be flattening. however, few people believe that the virus will disappear, or that its cost, in terms of human life, will be negligible. so what is the connection to democracy, capitalism (or perhaps more correctly neoliberalism), and covid-19? the vulnerabilities, inequalities, and fault lines that existed prior to the coronavirus have been exacerbated, and the virus has disproportionately impacted racialized, marginalized, and lower-income communities. the contraction and death rates are higher, and the economic, labor, living, and social conditions have worsened, notably for already vulnerable communities. this pandemic, sadly, provides a tremendous and significant impetus to re-consider and re-calibrate our thinking around democracy (diamond 2020; roy 2020). this text starts with the premise that 'normative democracy' has put us in a pickle and that, although there are ways out of it, this will require breaking out of the glass box that has a great many of us believing that there is no alternative.
i highlight three points related to democracy in this text, formulating the following central arguments:
1) social media and, consequently, citizen engagement are becoming a significant filter that can potentially re-imagine the political, economic, and social worlds (outside of and beyond normative democracy), which increasingly bleed over to how we might develop and engage with 'democracy' (garrett 2019); to this end, the advent of 'fake news' is a worthy subject to explore here because a functioning democracy, to a certain degree, is dependent on media/political literacy, critical engagement/participation, and the capacity to communicate, analyze, and disseminate nuanced perspectives, ideas, and information; i introduce a brief case study on the nefarious interpretation of the killing of jamal khashoggi in 2018 (bbc news 2019) to underscore the tension points in normative democracy;
2) capitalism, or neoliberalism, needs to be more fully exposed, interrogated, and confronted if 'normative, representative, hegemonic, electoral democracy' is to be re-considered, re-imagined, and re-invented (lydon 2017); the perpetuation of social inequalities lays bare the frailty of normative democratic institutions;
3) covid-19 has exposed the fault lines and fissures of normative democracy, illustrating here the 'common sense' ways that power imbalances are sustained, which leaves little room for social solidarity (human rights watch 2020); i present here a small case study of the economic and labor dynamic in quebec during the coronavirus.
ultimately, i believe the quest to re-imagine a more meaningful, critically engaged democracy, especially during a context that is imbued with a political, economic, and public health crisis, cannot be delayed much longer.
capitalism, in addition to acknowledged and unacknowledged hegemony, is central to this model or framework, and a natural order and superiority flows effortlessly through thinking and believing that this is the only way to be, exist, and function. democracy 2.0, which considers agency, power crystallizations, social justice, and individual as well as collectivist media and citizen engagement more fluidly, is much messier than democracy 1.0, which connects more directly with normative, representative, hegemonic, and electoral machinations (carr, hoechsmann, and thésée 2018). social media is an exemplary feature of this new environment and can help us draw out the fundamental question of whether greater media, communication, and online involvement can lead to more robust, critical democratic forms of citizen participation. elsewhere, with colleagues (carr, daros, cuervo, and thésée 2020), i describe some of the overlapping components, processes, and concerns that help frame the context for social media, fake news, and citizen participation. it would appear that everyone today is somehow connected to social media, even if one does not have an account for one or many of the social networks that pervade, link, and smother the socio-cultural landscape (keating and melis 2017). there are networks for an untold array of information sharing and gathering. nouns have become verbs, as in 'youtubing,' 'blogging,' 'vlogging,' 'googling,' 'facebooking,' etc. the reach is significant, and the digital imprints (and footprints) are equally commensurate (sun, wang, shen and zhang 2015). one can do a search for a pair of shoes on amazon.com and, magically, there will be ads for shoes on one's personal facebook feed immediately afterward. algorithms are increasingly programming what we see, and directing at least some of our attention to 'stuff,' for lack of a better word, in which we might not otherwise be interested.
this surveillance, usurpation, and data-gathering was significantly exposed in 2018, with facebook being highlighted for a particularly negative watershed year (sutton 2018; wong and morris 2018). among the litany of events, problems, and phenomena that have plagued facebook, which are clearly not limited to this one, albeit prominent, social network, were the following claims, findings, and evidence, amongst other issues: algorithms connected to the 'negative effects to referral traffic,' unregulated ads that underscored the mueller investigation, which has as its focus, in large part, russian involvement in the 2016 us presidential election, the cambridge analytica scandal that 'obtained the data of tens of millions of facebook users without their knowledge or consent to help build a powerful political influencing tool,' privacy and security issues, 'special data-sharing arrangements with tech manufactures like amazon, apple and samsung,' hacking of accounts, and regulation problems (sutton 2018). (carr et al. 2020: 4) fake news has leaped into mainstream consciousness over the past few years as if it were the problem hampering democracy. emphasizing that fake news is rarely neatly packaged within a singular category, the report cited above cautions that deception needs to be interrogated at various times while viewing media messages. with the avalanche of fake accounts, fake (bot) users, and fake (or tampered-with) algorithms, the terrain is fertile for fake news. this is especially the case if users, consumers, and citizens are conditioned not to question or verify what comes their way, are reluctant to disbelieve 'official' sources, are ignorant, are uninterested, or are enveloped in turbulent news cycles with complex, nuanced, voluminous information, within which they are unable to decipher the diverse and divergent realities emanating from a particular situation, event, or reality (carr, daros, cuervo, and thésée 2020).
citizen participation requires critical engagement, and constructing media/political literacy, however defined, needs to be considered in order to better underpin meaningful forms of democracy (carr, cuervo, and daros 2019). the hailstorm of misinformation, misdirection, and disinformation during the early phases of the coronavirus mirrors the general online landscape, serving as both a tremendous opportunity and a mudslide concurrently, and highlighting the potential for meaningful solidarity as well as, conversely, marginalization and xenophobia (ali and kurasawa 2020). i am also drawn to the nuanced layers that mackenzie and bhatt (2020) add to this debate, suggesting that '[b]ullshit is different from lying and it need not undermine trust, particularly when it is blatant'. (the literature around the notion and proliferation of 'bullshit' is linked to, and builds on, the work of harry g. frankfurt, notably the 2005 book aptly entitled on bullshit.) this is extremely relevant in contemporary times, given populist movements, xenophobic manifestations, the denunciation of human rights, and the quest to dismiss 'news' as 'fake' as a basic principle emanating from some powerful leaders in the western world as well as elsewhere. at the same time, i acknowledge that the traditional media is anchored in biases and hegemonic trappings, but am troubled that the 'fake news' caravan seeks to whitewash anything that may bring contrary dimensions to the debate, especially in relation to revealing, exposing, and countering mainstream narratives related to war, conflict, racism, inequalities, and the like.
democracy 1.0 had a relatively controlled audience, whereas democracy 2.0 has let the floodgates open, and this means that there are now opportunities for critique, solidarity, and mobilization that may not have been as readily available previously, including diverse social movements that have taken off through social media (carr, daros, cuervo, and thésée 2020). these social movements can be a force of change in society at the local, national, and global levels where governments and international institutions are unwilling, unable, or unmotivated to respond to the needs of the population. for instance, black lives matter (mundt, ross, and burnett 2018), #metoo (botti et al. 2019), occupy, and idle no more have all had a significant social media influence, and environmental, peace, and other movements have also been influential at diverse levels in mobilizing solidarity that surpasses cultural, linguistic, geographic, and political boundaries (carty 2019). within the quickly evolving media/social media landscape, i can think of what appear to be several major (media and/or social) events in recent times-noting full well that by the time this article is published, they may not even be recognizable-including the khashoggi killing, the covington school student debacle, the parkland shooting, the kavanaugh nomination, the thai cave rescue, the (british) royal wedding, the manifestations in haiti, the never-ending quest to build a 'wall/barrier/fence' between the usa and mexico, and the political/humanitarian crisis in venezuela, among many others. i apologize for the usa-centric focus here. as a canadian, i am fully cognizant of the depth and reach of usa tentacles, thinking, control, power, and influence in and on my own work, as well as on many others, even though i collaborate widely with colleagues in diverse jurisdictions and contexts, notably in latin america. 
the usa and its interests bleed over to every region of the world, and, although united statesians (the concept of 'american' is hotly contested and does not cover all of the peoples of the 'americas') may not be talking collectively (in a central way) about the world or may not be collectively immersed in ingratiating the usa into the infinite number of political and economic issues, concerns, and cultural representations of the other countries and peoples, the world is watching, listening, and being consumed by the behemoth of usa empire. the covid-19 pandemic also squarely places the usa within the core of the action, with daily pronouncements about blaming china, cutting off funding to the world health organization, downplaying the spread of the virus, boasting about how the virus has been beaten back, and spreading the political and economic reach of this country far and wide, in military, diplomatic, commercial, and (potentially) humanitarian ways. it seems as though the reality of this being a (global) pandemic, a far-reaching health crisis, is only partially the story, and the present manifestations in the usa of people demanding that 'isolation' be stopped, while so many are being infected and even dying, are almost incomprehensible, and social media concurrently exposes, denounces, disseminates, and provides an echo chamber for what is taking place. so i question what becomes news, indeed viral, and how it becomes more than click-bait, algorithmic entertainment, the bouncing around in limited, like-minded networks, tepid sharing, and a platform for trolls. is it about numbers, the quantity of clicks, views, shares and reads, or something more substantive? at the same time, what are the true dimensions of the issue(s)? who frames it, how, and why, and to what end? what is omitted, downplayed, obfuscated, how and why? 
in the list of issues in the previous paragraph, we can think of many pitfalls, foibles, and problematic concerns as to what 'issues' look like in democracy 2.0. all issues are not simply a usa problem, but connections to elites, hegemony, power differentials, and media framing are, i believe, worth establishing and interrogating. what is clear is that power differentials are at play in how fake news is constructed, disseminated, understood, and engaged with. the more volatile social media can push up against normative media in further determining how fake news can be projected, masked, embellished, and consumed. concerning the jamal khashoggi killing in october 2018 in istanbul (bbc news 2019), we can follow the usual process of focusing on hegemonic interests and avoiding contextual factors and backdrops. several significant and pivotal factors were down- or under-played in reporting on this tragedy: for example, the relationship to the saudi kingdom; human rights; billions upon billions of dollars in armaments sold to the saudis; the unimaginable assault by saudi arabia against yemen, and the impending famine and genocide in yemen as a result; women's rights; journalistic freedom; and an unending series of beheadings by the quasi-untouchable saudi regime. undoubtedly, information, discussion, debate, reports, and mobilization on all of these fronts can be located and advanced through social media, in spite of the mainstream, hegemonic vision. the point here is that central, controlled, and 'manufactured' debate, at least within a condensed and constrained optic and timeframe, shined a light on the actual killing of khashoggi in turkey, who did it, how, and why. yet, significantly, it was only weakly concerned with the other highly pertinent and central issues that are/were intertwined within this quagmire. 
why such deference was paid to the saudi leadership in this case, when this same deference is not paid in many other instances, especially when the faulty regime is not an ally, is quite pertinent. the lack of historical, political, and economic context, combined with the propensity to avoid latching onto 'research', and a plurality of visions, perspectives, and experiences seems to be a predominant feature of how these stories crystallize. the khashoggi example, like others, contains an evolving set of circumstances and frames, as well as questions, and we are cognizant of how some segments of social media can provide differing narratives that can, consequently, re-shape the 'official' story. yet, the social media dimensions can also counter the formal hegemonic narrative, and this is where alternative forms of 'democracy' can start to take hold (jenkins, shresthova, gamber-thompson, kligler-velenchik, and zimmerman 2016). why the more critical dimensions within the khashoggi case (or the venezuela situation or others, for that matter) were/are not more broadly taken up by democracy 1.0 relates, i believe, to the hegemonic shaping/framing of the issues. it is also combined with a weakly focused mainstream media, whose reach is now consumed within the 'fake news' bubble, and a still questionable place, at least among many formal political leaders and their business sector supporters, of uncontrolled social movements and social media within formal political spheres. however, i do believe that this last factor-social movements and social media being a mobilizing force-is, and will continue to potentially be, central to conceptualizing, developing, cultivating, building, and elaborating a more decent, meaningful, robust, and critically engaged democracy, in spite of the status quo aiming to maintain and sustain its hegemonic place. social media movements can also lead to dictatorship, genocide, and an infringement of rights (sapra 2020). 
for instance, gayo-avello (2015) hypothesizes that social media may contribute but is not the central feature to democratization: in short, social media is not a democratizing catalyzer per se. it is just one of many factors, in addition to great tactical tools, provided the conditions in the nondemocratic country are suitable. moreover, there are many variables which can negatively affect the outcome of any uprising, even without the regime tampering with social media. in other words, social media does not make people free; freedom requires people taking risks and organizing themselves. (gayo-avello 2015: 6) social media cannot magically lead to class consciousness, anti-racism, peace, and social solidarity. however, it may be able to provide an outlet and legs to important stories, events, and realities for people who were only previously loosely connected. this could have a dual effect of further questioning and delegitimizing normative democracy, and also providing space and voice for marginalized interests, perspectives, and arguments. social media is now indelibly a part of the citizen participation landscape. what is the point of living in a 'democracy' if you are one of those living in abject poverty, are homeless, and are working tirelessly to make ends meet but never achieve economic justice (ely yamin 2020)? of course, the notion of having the 'freedom' to pursue your dreams, as in 'the american dream', is sufficiently grounded within normative debates to ensure that questioning entrenched, systemic, institutional, deeply grounded social inequalities will be quickly snuffed out. within the usa context, amadeo (2019) highlights the increasing social inequalities as follows: structural inequality seems to be worsening. between 1979 and 2007, after-tax income increased 275% for the wealthiest 1% of households. it rose 65% for the top fifth. the bottom fifth only increased by 18%. 
that's true even adding all income from social security, welfare, and other government payments. during this time, the wealthiest 1% increased their share of total income by 10%. everyone else saw their share shrink by 1-2%. as a result, economic mobility worsened. the 2008 financial crisis saw the rich get richer. in 2012, the top 10% of earners took home 50% of all income. (amadeo 2019) powers, fischman, and berliner (2016) have highlighted how research on poverty and social inequalities is poorly understood or operationalized, which further underpins weak policy responses to entrenched and systemic problems. similarly, it is helpful to problematize how wealth has been accrued historically through genocide, slavery, imperialism, war and conflict, colonialism, and a host of racialized, sexist, and other machinations, in addition to piketty's (2017) well-documented treatise capital in the twenty-first century. mclaren (see pruyn and malott 2016) has highlighted marx's theory on surplus value and the limited mobility between the social classes, and the crushing blow of capital against labor; ultimately, the value of what is produced encounters hyper-inflation in the hands of investors, owners, and speculators without real production, which may seem locked into the days of children being exploited in coal mines over a century ago, but there are still many parallels today. giroux (2016) has coined 'casino capitalism' to label the politicoeconomic system that enraptures the vast majority of formal, and to varying degrees, informal activity that underpins mass exploitation. 
he further elucidates the danger of continuing on the one-way neoliberal path before us: neoliberalism has put an enormous effort into creating a commanding cultural apparatus and public pedagogy in which individuals can only view themselves as consumers, embrace freedom as the right to participate in the market, and supplant issues of social responsibility for an unchecked embrace of individualism and the belief that all social relations be judged according to how they further one's individual needs and self-interests. matters of mutual caring, respect, and compassion for the other have given way to the limiting orbits of privatization and unrestrained self-interest, just as it has become increasingly difficult to translate private troubles into larger social, economic, and political considerations. one consequence is that it has become more difficult for people to debate and question neoliberal hegemony and the widespread misery it produces for young people, the poor, middle class, workers, and other segments of society, now considered disposable under neoliberal regimes which are governed by a survival-of-the-fittest ethos, largely imposed by the ruling economic and political elite. (giroux 2016) mclaren (see pruyn and malott 2016) and giroux (see giroux, sandin, and burdick 2018) have also made a compelling case to interpret today's reality as a politicoeconomic context that is launching us into hyper-sophisticated forms of fascism. within this backdrop, i believe that there is a great need, as there always has been, to be more fully engaged with (and in) education, in political circles and in public debate, in general, in relation to the philosophy and operationalization of capitalism and, in particular, to the all-encompassing mercantilization of all public and private goods, services, and experiences enveloped within neoliberalism. 
the covid-19 context has expedited and underscored the slippery slope toward authoritarianism, stripping away rights while creating socio-economic cleavages that are even more serious than before. democracy 2.0 is tethered to democracy 1.0 conceptualizations of the world, but the door is (slightly) open to develop a new world, despite the titanic hegemonic vice-grip that maintains a stranglehold on education and public debate. as alluded to in the previous section, the collective 'we' are free to surf the web, consume, create, diffuse, comment, and cajole the other, whether the 'other' knows us, sees us, or cares about us or not. we are not frontally impeded from opening our eyes and ears. on the contrary, many movements have been stimulated from doing so-including the arab spring-although the aftermath re-captured regressive hegemonic features of what preceded it. the dilemma is that the corporate/business politico-economic (hegemonic) world has grown into this concurrently in-your-face and stealthy, quickly evolving, dynamic context, seamlessly stamping its imprint in every way possible. the interplay between democracy 1.0 and democracy 2.0, thus, offers tremendous potential for citizen participation and engagement while, simultaneously, presenting the quicksand mirage that we may not be as 'free' as we think we are, or we may not be as 'engaged' as we think that we are. neoliberalism has many people around the world gasping for air. now mired in a pandemic that vacillates from signs of encouragement that the 'curve is flattening' to fears that 'community transmission' is rapidly spreading through asymptomatic contact, there is enormous stress about when there will be an effective vaccination, how the health context will play out, and, increasingly, when the 'economy will get back to normal.' 
at this point in time-although we are aware of massive numbers (the information is not hidden, anyway) of unnecessary deaths in 'developing' countries related to hunger, disease, poverty, and conflict-we can see the extreme concern within local, national, and international governments and institutions to get the economy working. while most of the world has emphasized 'social distancing' as a key measure to diminish the dissemination and transmission of covid-19, an eerily bizarre phenomenon has taken hold in the usa (wong, vaughan, quilty-harper, and liverpool 2020) . disparate, semi-organized protests against 'self-isolation' are taking place in diverse locations, often replete with a range of arms and placards enunciating the right to, among other things, 'haircuts' and to 'play golf.' is it pure insanity, a case of hubris beyond all limits, an anti-science ideology that needs to play out in every sector-including the environment-or complete indifference to human suffering? while the usa situation deeply underscores the anxiety and agitation around the health/economy dichotomy, i present below a brief illustration of the neoliberalization of the political and economic convergence through an example of the coronavirus in québec (canada). québec, a predominantly french-speaking province of 8.5 million people in canada, provides an interesting illustration of how a jurisdiction within a federal framework has worked to mobilize, sensitize, and activate a range of health, economic, political, and education measures to confront covid-19. there are daily press briefings, information sessions, directives, a vast media campaign, testimonies, and a host of consultations, which all serve to educate the public and to engage the citizenry concurrently. 
it would be disingenuous to simply criticize where there have been gaps and problems; the reality is that many people have worked diligently and courageously to create a sense of the gravity of the problem and to diminish the extent of the propagation of the virus. having a universal healthcare system has been, i believe, indispensable to understanding how to assess, allocate, distribute, and organize resources. this is not an individual problem but a vast, insidious collective one. it should be acknowledged as well that what we know is shifting and re-calibrating in real time, and decisions made on march 1 were questioned and re-assessed by march 7 and so on. moreover, what we know now cannot always be fully understood until later, and decisions that are taken in that light can lead to nefarious situations and the rampant spread of the virus. hindsight is 20/20, as the proverb goes, so a fulsome diagnosis of what we are doing today will be more effectively critiqued once we are through to the other side of the pandemic. the situation in québec, one that is surely not unknown elsewhere, underscores the fragility of 'normative' democracy; this is, i believe, a question of normative democracy working the way that it does. one heart-wrenching issue that we are observing at this time is that the vast majority of deaths in québec, like elsewhere, is among those 70 years of age and older, and particularly the 80+ age group. moreover, as many of us did not know or fully consider, the vast majority of deaths up until now within québec are among those who are in long-term care residences (in french, they are called chslds), roughly 70%, which are essentially seniors' residences for people with health issues. the transmission within these residences is extensive and rapid, with an increasing number of personnel, nurses, and doctors also being affected. 
one residence, for example, experienced an overwhelming number of infections (herron, discussed below), and there are others that have also been deeply affected. one might say that there are two public health crises at this time: one for the general population and another for these particular residences with this specific group. on the one hand, the population is astonished, sickened, and in shock ('how could this happen?,' 'especially to "our elderly"?'). on the other hand, this was a serious politico-economic cocktail being mixed for a couple of decades, massaged through diverse political parties within the normative democracy that adjudicates such matters. (why was there such sustained neglect and under-funding? why was this not flagged as a serious catastrophe in the making?). i would like to underscore that this is not a problem of one person, one political party, one decision, one law, or one particular model: it is the consequence of systemic, institutional failure/negligence as well as the thinnest wedges of normative democracy carrying the day over the broader public interest and good. i briefly present some of the specific underlying conditions that laid the groundwork for what is playing out within this vulnerable population at this time: a lack of monitoring, under-paying workers, and diminished policy importance and planning. media accounts provide information on the tragedy unfolding before our eyes. in one case, the privately owned chsld herron, in western montréal, experienced serious staffing shortages, insufficient equipment, poor oversight, inadequate support from oversight bodies, and unacceptable communications with health authorities. mckenna (2020) provides a sense of the chaos and suffering there: nurses were getting sick, too: six out of the seven registered nurses on staff were experiencing covid symptoms, and of seven licensed practical nurses (lpns), only four were still healthy. (…). 
about a quarter of the orderlies (préposés aux bénéficiaires, or patient attendants) had also stopped working, either because they were experiencing covid symptoms or because they felt it was no longer safe to work at chsld herron. within weeks, a quarter of those patient attendants would test positive for covid-19. (…). bedridden residents were lying in sheets stained brown up to their necks in excrement, so long had it been since their diapers had been changed. some were dehydrated and unfed. (…). the head of professional services at the ciusss, dr. nadine larente, is the doctor who went to help. she told the french-language newspaper la presse the place was in chaos: one lpn and two patient attendants were trying to care for 130 residents. food trays had been placed on the floor, dishes untouched because residents with mobility issues could not reach them. (mckenna 2020). proportionately, about double the number of seniors in québec opt for long-term care residences compared with the rest of the country, which could be a function of culture, policy, economics, and options available, and the rapidly aging québécois population is a further aggravating factor preparing the context (dougherty 2020). one expert (see lowrie 2020) noted that spatial configuration is another factor that helps explain the extreme transmission of the virus in chslds: residences 'with long corridors and residents sharing rooms, have a harder time isolating sick residents from uninfected ones, compared to residences with house-style layouts, where residents live in smaller wings.' with staff falling ill or refusing to come to work, there has been a massive campaign to recruit retired nurses and also to bring doctors and specialists into the overburdened long-term care system; the premier of the province has also asked for the military (over a thousand troops) to further provide support within these seniors' residences. 
social class and political power are fully intertwined in the quickly unraveling situation involving seniors' residences in québec. raising the minimum wage in québec, for example, was vigorously opposed by the present government and others along the way, fearful that employers, especially small businesses, could not afford it. while there is no maximum wage being regulated, those struggling with the minimum wage are often obligated to work two or three jobs, to seek assistance elsewhere, and to face other severe challenges, including in relation to housing, childcare, education, and the cost of living. the chsld situation brought everything to a head, making it clear that those designated as 'essential services' were often those being paid the least in society. the premier took the almost unprecedented measure of apologizing for underpaying workers when it became difficult and problematic to staff these residences: 'i know a lot of quebecers are asking themselves how we could have got ourselves in this situation,' a sombre legault said at his friday briefing, addressing the catastrophe unfolding in covid-19-stricken long-term seniors' residences (chslds). 'i myself have spent several days and nights asking what i should have done differently.' 'if i was able to redo one thing, i would have increased the wages of orderlies faster, even without the accord of the unions. i assume full responsibility. we entered this crisis ill equipped, and clearly the situation deteriorated for all kinds of reasons. the virus got in.' (authier 2020). the premier also took a series of steps to increase pay for healthcare workers. as part of its effort to improve working conditions in the health-care system, quebec announced that nearly 300,000 employees in both the public and private sector will be getting temporary pay increases. 
workers who are in direct contact with the disease, such as emergency-room professionals and nurses in coronavirus testing centres, will receive an 8% boost in their salaries. those working in long-term care homes, known as chslds, will also be among the 69,000 workers to benefit from the 8% raise…. another 200,000 people who work in the health-care system but aren't as directly exposed to the disease, such as the nurses who staff the 811 health line, will get a salary increase of 4%. and workers in private long-term care homes, many of whom make little more than minimum wage, will get an additional $4 per hour. that measure appears designed to discourage these workers from quitting and staying home, to take advantage of federal financial assistance that's worth $2000 a month. (shingler, stevenson, and montpetit 2020). one question that arises here is how these workers could have been underpaid for so long, and what the effect may have been, for them, the people receiving the care, the healthcare institutions and system, and society as a whole. did it dissuade qualified workers from pursuing careers or staying in them? what were the other priorities that negated remunerating fairly such indispensable and 'essential' workers? on the economic side of the ledger, how efficient is it to underpay some employees and over-compensate others who have not actually done the work or who are, ironically, considered to be disproportionately fundamental? is a 10:1 ratio for salaries at the top and the bottom reasonable or should it be 100:1 or 1000:1? in canada, in general, the wage differentials are less extreme and odious than in the usa, but the issue of social (in)equality is also a significant concern there. one study (mishel and wolfe 2019) focused on usa compensation provides some backdrop to how public services and priorities can be disproportionately affected. ceo compensation is very high relative to typical worker compensation (by a ratio of 278-to-1 or 221-to-1). 
in contrast, the ceo-to-typical-worker compensation ratio (options realized) was 20-to-1 in 1965 and 58-to-1 in 1989. ceos are even making a lot more-about five times as much-as other earners in the top 0.1%. from 1978 to 2018, ceo compensation grew by 1007.5% (940.3% under the options-realized measure), far outstripping s&p stock market growth (706.7%) and the wage growth of very high earners (339.2%). in contrast, wages for the typical worker grew by just 11.9%. there is a lot of complexity to how covid-19 is analyzed, and comparing diverse sites/jurisdictions/systems and how data are compiled and evaluated may not reveal the true breadth and scope of the reality. similarly, there are many moving parts and lots of people (remunerated and volunteer) involved and engaged, and there are also all kinds of activities aimed at supporting a solidified, vigorous response. my intention in presenting this case study is not to admonish or diminish those serious and important efforts. on the contrary, it is my hope that this pandemic will reveal a silver lining somewhere in that extreme vulnerabilities and shortcomings need to be rectified in order to ensure, as much as possible, that economics will not suffocate political considerations in the future. and i have not emphasized here the race, gender, and other pivotal underlying factors underpinning this pandemic, but they are also a significant piece of the puzzle. this text has underscored what 'democracy' we are trying to achieve, to cultivate, and to ingratiate. the focus and direction of my central arguments about the lack of bona fide democracy within a normative, mainstream political framework that preaches that we live in a developed democracy have, i believe, become accelerated and accentuated as a result of covid-19. 
i have highlighted some of the fundamental issues and problems with 'normative, representative, hegemonic, electoral democracy,' and also emphasized the pivotal contextual shifts and cornerstones embedded in democracy 1.0 as well as democracy 2.0. i have also made the case for more robust, critically-engaged citizen participation, which would require or, at the very least, benefit from new forms of education and media/political literacy. the social media equation was brought to light since it serves as an unruly, uncontrolled, and rapidly evolving microcosm of the world, its diversity, its problems, its challenges, and its potential. i was careful to not make a definitive declaration related to achieving democracy through the potentially transformative technologies that now shape how we live and function and relate to the world. despite everything, we are still mired in conflict(s), in inequitable power relations, and in 'democracies' that are not very 'democratic.' we are still straddling democracy 1.0, in which formal political declarations are fabricated with partisan political interests at the fore, the stock market is seemingly central to everything, and business elites are catered to at every level. similarly, tax cuts-regardless of political stripe-figure into everything, political parties shamelessly line up to receive 'donations' (does anyone believe that they come with no strings attached?), tax breaks for companies must be considered as much as lower tax rates for the rich (does anyone believe that rich people will create more employment based on having more cash? if so, why are there so many off-shore accounts in tax-havens intended to not pay tax?), and (military) might is (still) right for many. the further the coronavirus expands, the more there is discussion about needing an economic balance to 'get back to normal,' and indicators such as the stock market are central to supposedly gauging what is happening (karabell 2020). 
of course, there have been lots of (incremental) changes, and lots of new laws, policies, practices, and shifts in cultural norms that have benefitted, generally speaking, women, racialized minorities, the poor, and the vulnerable. yet, social inequalities, despite massive technological and other changes, not only persist but, in many regards, are increasing. how could this be when there is so much wealth? why do so many people leave their countries in complete desperation, why is there still so much military conflict-most of which goes unreported-why do so many problems of poverty and discrimination persist in the most vulgarly palpable ways, why is there such little global outrage over the state and fate of indigenous peoples (the loss of land, language, culture and autonomy), and why is the 'environment' not the priority? this very partial list of questions is noteworthy because neoliberalism is, definitely, an accelerator to many of the problems we are facing. to be clear here, this is not a binary proposition, and avoiding confronting real problems with real people will not address real suffering, oppression and marginalization (gray and gest 2018). we might ask: why are there (recurrent, entrenched) problems when there are so many people, projects, forces, and movements fighting for a more decent, robust, and (even) alternative democracy 2.0, one that could place neoliberalism within a new, different and alternative landscape? how should hegemony be understood today when (many) people so freely believe that they have complete agency over their actions, thoughts and experiences, and when (many) people believe that voting is the (only) key? i would stress here that the capitalist-socialist, rich-poor binary is not the most productive lens through which to examine the complexity of such extreme power imbalances around the world. 
the debate around 'democracy,' i believe, needs to be more all-encompassing, involving all of the tentacles and blockages of neoliberalism into the class, race, gender, cultural, and other pivotal sociological markers of identity, and it also needs to carve out a place for how power works, is distributed and recreated. this debate needs to leave open the door for unknown questions and answers as well as (creative and alternative) processes and deliberations, accepting that the normative elections in place are most likely not very beneficial for most people, and, most definitely, the massive numbers of people who do not participate, willingly or unwillingly (van reybrouck 2017). it is important to connect the local with the global, as we can through covid-19. ely yamin (2020) provides a sense of the need to address global issues globally and to be leery of not considering the complexity of the linkages between complex problems. but that and many other challenges require weaving human rights praxis-human rights for social change-into broader social movements, as well as working across disciplinary silos. the problems facing the rights movement are too complex for any one set of advocacy tools or any one field's expertise. of course, there is no single monolithic 'human rights community' just as there is no unified 'health and human rights community'. those tropes are used from the 'inside' to police the boundaries of orthodoxy and from the 'outside' to caricaturize sets of actors and strategies. yet, there are dangers of circling the wagons defensively around our professional tribes. the complexity of the challenges posed by rampant inequality, the spread of authoritarianism and illiberalism, distrust in multilateralism, and climate cataclysm call for embracing justified critiques and opening up to new ideas and perspectives-and uniting with labor, environmental and many other social justice movements. 
(ely yamin 2020) inspired by paulo freire's transformative work (freire 1970), i would advocate for more openness and acceptance of the political realities that shape our lived experiences, as well as an extremely healthy dose of humility, as a means of being able to understand, engage with, and be with the 'other.' i explore more fully the interconnections and inspiration of freire's work with my colleague gina thésée (carr and thésée 2019). the hard-wired, testosterone-induced, keep-fundraising-at-all-times political systems that have been put in place all over the world need to be re-imagined before they suffocate themselves and everyone else. people will slowly divest themselves from the voting game, leaving it as an empty shell filled with a bunch of white guys in suits. (yes, there are some openings for other identities in this equation, but the game was made by and for these guys.) freire wrote of conscientization, and i believe that to get there, we need to focus on peace, not war; on social and cultural development as opposed to economic development; on solidarity and emancipation rather than exclusively on individual rights and liberties; and on the recognition that we are (all) human beings. as human beings, we are not required to be racist (no baby is racist, but we learn to be so), sexist (a totally learned behavior), or classist (exploiting one's neighbor is not an obligation), to kill one another (who gets killed anyway? the rich or the poor, and who are they? do we care?), or to live with so much misery, hatred, and oppression. ultimately, we are in the same boat (or world) together. one could see the glass half full, with lots of progress all over the place, and, yet, the empty side of the glass contains real people living through unimaginable (for the full side of the glass) realities; the wage discrepancies and gaps in the québec example exemplify this reality.
the quest for a meaningful democracy aimed at both sides of the glass would be a more conducive option, and re-imagining democracy will require more fully, and even disproportionately, considering the empty side of the glass. taking a stand against democracy 1.0 and 'normative, representative, hegemonic, electoral democracy' is a necessary condition for moving toward this re-imagined democracy. donkervoort (2020) underscores that the pandemic has been exploited by 'autocrats' but that citizens can resist and coalesce around global initiatives to weaken and confront hegemonic forces. this could mean enhanced civil society engagement across all boundaries, with an eye to unmasking and dismantling the concentration of wealth and power. covid-19 has exposed the need for a different universe, not only in terms of public health but also, importantly, in relation to democracy and citizen engagement (roy 2020). so while my foot, to return to the title, may be in tatters, i'm hopeful (indeed, hope may be the only way out of this) that my head will not be the ultimate casualty as we strive to either sustain or re-imagine a democracy that can not only take us out of a pandemic but, rather, into a social solidarity that will remove our bodies and minds (and souls) from imminent disaster. open access this article is licensed under a creative commons attribution 4.0 international license, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons licence, and indicate if changes were made. the images or other third party material in this article are included in the article's creative commons licence, unless indicated otherwise in a credit line to the material.
if material is not included in the article's creative commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. to view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
references:
democracy for realists: why elections do not produce responsive government
#covid19: social media both a blessing and a curse during coronavirus pandemic. the conversation
structural inequality in america: how structural inequality stifles the american dream
do democracy and capitalism really need each other? harvard business review
the struggle for democracy in education: lessons from social realities
covid-19 analysis: legault offers a mea culpa, but did he really have a choice? montreal gazette
jamal khashoggi: all you need to know about saudi journalist's death
the #metoo social media effect and its potentials for social change in europe. brussels: feps, foundation for european progressive studies
does your vote count? critical pedagogy and democracy
citizen engagement in the contemporary era of fake news: hegemonic distraction or control of the social media context? postdigital science and education
democracy 2.0: media, political literacy and critical engagement
'it's not education that scares me, it's the educators…': is there still hope for democracy in education, and education for democracy?
social movements and new technology
america's covid-19 disaster is a setback for democracy. the atlantic
while autocrats exploit the pandemic
70% of quebec's covid-19 deaths are in long-term care, seniors' residences. ipolitics
putting human rights at the centre of struggles for health and social equality
on bullshit
pedagogy of the oppressed
social media's contribution to political misperceptions in u.s. presidential elections
social media, democracy, and democratization
the mad violence of casino capitalism. counterpunch
the covid-19 pandemic is exposing the plague of neoliberalism
the new henry giroux reader: the role of the public intellectual in a time of tyranny
silent citizenship: the politics of marginality in unequal democracies
citizens adrift: the democratic disengagement of young canadians
by any means necessary: the new youth activism
as covid-19 spreads, listen to the stock market for now
social media and youth political engagement: preaching to the converted or providing a new voice for youth?
many factors behind covid-19 outbreaks hitting quebec's long-term care homes
noam chomsky: neoliberalism is destroying our democracy: how elites on both sides of the political spectrum have undermined our social, political, and environmental commons. the nation
why are american elections so long?
orderlies worked without ppe, covid-19 patients wandered herron's halls for days after health agency took over
lies, bullshit and fake news: some epistemological concerns
ceo compensation has grown 940% since 1978: typical worker compensation has risen only 12% during that time. economic policy institute
scaling social movements through social media: the case of black lives matter
capital in the twenty-first century
making the visible invisible: willful ignorance of poverty and social inequalities in the research-policy nexus
this fist called my heart: the peter mclaren reader (volume i)
the pandemic is a portal
the last decade showed how social media could topple governments and make social change, and it's only getting crazier from here
no one 'is more deserving,' says legault, raising wages of 300,000 health-care workers as covid-19 cases climb
location information disclosure in location-based social network services: privacy calculus, benefit structure, and gender differences
facebook's terrible, horrible, no good, very bad year: and you thought you had a rough
conjuguer démocratie et éducation : perceptions et expériences de futurs-es enseignants-es du québec. citizenship education research journal/revue de recherche sur l'éducation à la citoyenneté
political disaffection in contemporary democracies: social capital, institutions, and politics
against elections: the case for democracy
it's complicated: facebook's terrible 2018. the guardian
covid-19 latest: cdc director warns us second wave could be even worse
key: cord-307292-de4lbc24 authors: rosenberg, hananel; ophir, yaakov; billig, miriam title: omg, r u ok?: using social media to form therapeutic relationships with youth at risk date: 2020-08-17 journal: child youth serv rev doi: 10.1016/j.childyouth.2020.105365 sha: doc_id: 307292 cord_uid: de4lbc24
the rise of social media has opened new opportunities for forming therapeutic relationships with youth at risk who have little faith in institutionalized interventions. the goal of this study is to examine whether and how youth care workers utilize social media communication for reaching out to detached adolescents and providing them with emotional support.
qualitative in-depth interviews (n=17) were conducted with counselors, social workers, and clinical psychologists who work with youth at risk. a thematic analysis of the interviews revealed three principal psychosocial uses of social media: (1) reaching out and maintaining reciprocal and meaningful therapeutic relationships with youth at risk over time; (2) identifying risks and emotional distress; and (3) "stepping in" and providing psychosocial assistance when needed. these beneficial practices are made possible through the high accessibility and the sense of secure mediation that characterize social media communication and that complement the psychosocial needs of youth at risk. alongside these advantages, the analysis yielded several significant challenges in social media therapeutic relationships, including privacy dilemmas and the blurring of authority and boundaries. given that social media communication is a relatively new phenomenon, the applied psychosocial practices are shaped through a process of trial and error, intuitive decisions, and peer learning. although the main conclusion from this study supports the notion that the advantages of social media therapeutic relationships with youth at risk outweigh their problematic aspects, future research is recommended to establish clear guidelines for youth caregivers who wish to integrate the new media into their daily psychosocial work. youth who are, or may be, at risk of physical, mental, or emotional harm suffer from a wide range of difficulties that endanger them and threaten their ability to function, currently or in the future. unfortunately, adult caregivers are not always aware of their children's, patients', or students' emotional struggles, and at-risk youth often drift away from conventional supporting environments, such as schools and youth movements.
instead, they wander around unsupervised, seeking adventures and belongingness in alternative, and in some cases dangerous, settings (resnick & burt, 1996). one of the principal challenges in supporting these detached youth is the very formation and maintenance of solid and trustful relationships. when confronting hardships, many adolescents at risk prefer not to turn to their parents or teachers for psychosocial support (grinstein-weiss, fishman & eisikovits, 2005). however, in some cases they are willing to accept help from informal counselors and social workers (kaim & romi, 2015). this informal help is crucial for their mental health, because one of the key factors that can help at-risk youth successfully navigate their complex challenges is a rich and supportive environment and a strong relationship with a caring adult figure (gilat, ezer, & sagi, 2011). remarkably, new opportunities for reaching out to and supporting youth at risk have arisen with the emergence of online social media. youth all over the world spend large amounts of time online every day and perceive the various social network technologies (e.g., instagram, whatsapp) as a most convenient channel of communication. in many cases they would even prefer text-based communication online over other forms of communication, including face-to-face (joshi, stubbe, li, & hilty, 2019). recognizing these trends, some educators and school counselors have started to establish online connections with teenagers under their care in an attempt to offer them guidance and emotional support (asterhan & rosenberg, 2015). however, in spite of these spontaneous initiatives, little is known about the extent and the ways in which caregivers utilize online communication for establishing positive and trustful relationships with youth at risk.
the goal of the current study is to examine whether and how social media can be utilized for reaching out to and providing emotional support for at-risk youth (adolescents and emerging adults). we hypothesized that youth caregivers would leverage the new media to overcome one of the main obstacles that characterize work with youth at risk. in many cases, these distressed adolescents disengage from institutionalized support systems, such as family, school, and youth movements, and wander around, searching for novel and distant spaces away from their community. this allows them to achieve a sense of freedom and control, but it also places an obstacle in front of the adults who wish to contact and support them (kaim & romi, 2015). in this study, we focus on youth caregivers who work with a unique population of youth at risk from small isolated settlements on the eastern border of israel; yet, the conclusions are relevant for similar rural areas around the world. like other youth at risk, they too are at risk of substance use, problematic sexual behaviors, and school dropout. however, they are also at risk of a unique identity crisis. in many cases, these youth rebel against their parents' religion and values. the ideological tension between the conservative small communities and the excitements and temptations that are evident in the 'big city' or in the mass media contributes to this rebellion and distances them from their families and community (shemesh, 2000). these youth are also dispersed across a relatively large geographical space, with limited transportation options. youth workers in this geographical location may therefore tend to rely on non-formal communication methods, even more than in other, urban communities. the current research aims to learn from the accumulating experience of these workers (psychologists, social workers, and counselors) and to provide a window onto the benefits and limitations of social media therapeutic relationships between caregivers and youth at risk.
the enormous popularity of social media among children and adolescents has led many educators to start utilizing the various social networks for online communication with their students. this emerging trend has raised multiple ethical and educational challenges, including concerns that adult educational authority will keep deteriorating and that the privacy of both educators and students will be compromised. however, despite these concerns, some policymakers and educators have integrated online communication into their daily educational practices (greenhow, robelia, & hughes, 2009; hershkovitz, abu elhija, & zedan, 2019; asterhan & rosenberg, 2015). teachers use online communication for improving group and individual studying, extending learning time beyond school settings, and managing the logistics of their class activities. some teachers even extend their educational role to include the new media environment and take it upon themselves to supervise and monitor online forums, identify signs of personal distress, and assist students in need. finally, some educators leverage the informality that characterizes the various social networks to get acquainted with the social and cultural aspects of their students' lives and deepen their relationships with them (forkosh-baruch, hershkovitz, & ang, 2015; hershkovitz & forkosh-baruch, 2013). online communication has also been leveraged by educators for delivering emotional support to their students. during the 2014 israel-gaza war, for example, the majority of teachers in the israeli cities that were exposed to war-related events provided support and empowered their students using non-formal communication methods, such as facebook and whatsapp. notably, most students valued their teachers' outreach efforts and reported that they contributed to their general sense of resilience (ophir, rosenberg, asterhan, & schwarz, 2016).
aside from benefiting from the concrete psychological aid online, it seems that the very existence of a continuous online relationship with a caring adult had a significant contribution to the adolescents' sense of control and belonging (rosenberg, ophir, & asterhan, 2018). these 'semi-therapeutic' communications join teachers' efforts to detect signs of distress in their students' online activities and to provide them emotional support at normal times (i.e., not at wartime) (hershkovitz & forkosh-baruch, 2013; forkosh-baruch, hershkovitz, & ang, 2015; asterhan & rosenberg, 2015). the therapeutic value of the very existence of online relationships has been examined in other, non-educational settings. for example, users suffering from depression who joined facebook support groups showed a significant improvement, especially when one of the "facebook friends" in the group was a psychiatrist (mota-pereira, 2014). given that only very few patients in this particular study actually contacted the psychiatrist via facebook, it seems that simply knowing they had access to professional medical assistance contributed to their subjective well-being. this positive psychological experience can be explained by the authentic discourse that characterizes the instant and synchronous textual communication in the new media (lapidot-lefler & barak, 2012). the informal sharing of personal content and the removal of barriers such as embarrassment, especially among adolescents (bardi & brady, 2010), provide users with the opportunity to achieve emotional relief, which in some cases may even surpass the emotional benefit that is "allowed" in traditional face-to-face communication (dolev-cohen & barak, 2013). in fact, a dominant theme that emerged from a recent narrative analysis of anonymous online stories on suicide and self-harm behaviors was 'overcoming silence and isolation'.
according to the authors, anonymous communication online enables young people who feel invisible and experience emotional distress to resist oppressive social norms and turn their emotional struggles into testimonies and stories worth telling (yeo, 2020). correspondingly, qualitative interviews with 15 young recipients (< 24 years old) of online outreach services reported that social media and whatsapp communication enabled them to speak freely about their emotional problems, even more than face-to-face interactions (chan & ngai, 2019). the unique advantage of online communication in creating a comfortable platform for emotional disclosure is well documented in the research on psychological counseling online (via email or video chats). in fact, online counseling has gained much experience since the late 1990s (mallen, vogel, rochlen, & day, 2005), and today many people use online platforms to overcome emotional distress and to improve their overall well-being (andersson et al., 2014; mota-pereira, 2014; richards & richardson, 2012). admittedly, the specific research on social media counseling is scarce, and one cannot assume that this new medium shares the same characteristics as the somewhat more formal communication through email or video chat. however, as described above, social media communication may be especially relevant to youth. in the following section we expand on the potential benefits and limitations of social media in the context of therapeutic relationships. on the one hand, using social media to improve the well-being of youth at risk seems most relevant. in contrast to adults, adolescents may be less cooperative with conventional therapeutic efforts and may prefer tactics of avoidance and escape (kaim & romi, 2015). in fact, a primary difficulty in working with at-risk youth is to bring them to the health service gate, and therapists struggle to maintain ongoing therapeutic relationships.
adolescents are not always aware of the available mental health services, and even when they are, they fear the negative stigma associated with seeking help, which would damage their "tough" image in the eyes of their peers (ben hur & giorno, 2010). in the digital realm, however, adolescents can receive discreet psychological support (barak & dolev-cohen, 2006; dolev-cohen & barak, 2013; valkenburg & peter, 2009), without fearing real or imagined social sanctions (friedman & billig, 2018). the online communication is usually voluntary (i.e., it is not mandated by the adult), and the adolescent is prone to feel more comfortable and less shameful (ben hur & giorno, 2010). on the other hand, despite the above-mentioned benefits, therapeutic interventions via social media have limitations and challenges, precisely because of the characteristics of online communication. first, the informality and the blurred boundaries that characterize social media communication may undermine the caregiver's authority and create a false impression that the relationship between the adolescent and the caregiver is symmetrical (asterhan & rosenberg, 2015). therefore, computer-mediated relationships require clear behavioral procedures to avoid deviation from an appropriate therapist-patient relationship (barak, klein, & proudfoot, 2009). second, the physical distance between the parties makes it difficult to establish a long-term commitment to treatment, or to provide immediate assistance in situations requiring physical access to the patient, such as cases of potential suicide (amichai-hamburger et al., 2014). finally, the very notion that youth at risk will be open and responsive to social media-based outreach efforts is yet to be studied. freshmen students, for example, who published facebook posts that included explicit references to emotional distress reported that they prefer being approached directly, in a face-to-face manner (whitehill et al., 2013).
indeed, the majority of these students were willing to accept help from their professors or teachers' assistants, but they also expressed concerns that strangers would monitor their facebook activity, even if this were done with good intentions (whitehill et al., 2013). the complex picture that arises from the literature emphasizes the need for effective and safe training programs for mental health workers (barak, klein, & proudfoot, 2009). however, despite this growing need, and despite the rising number of studies on teacher-student online communication, little is known about the potential benefits and limitations of online treatments for at-risk youth. the goal of the current study is therefore to examine the characteristics of online therapeutic relationships between adolescents at risk and their caregivers (i.e., counselors and social workers). specifically, the study addresses three research questions: (a) how do youth workers utilize social media communication for creating and maintaining therapeutic relationships with at-risk adolescents? (b) what are the potential therapeutic benefits of such communication, and what distinguishes it from traditional face-to-face relationships with at-risk youth? (c) what are the problems and challenges that arise from these online practices? altogether, by providing pioneering answers to these questions, we hope to promote the emerging interdisciplinary research field of online counseling and to contribute to the mental health of adolescents at risk around the world. a qualitative research design was applied due to the exploratory nature of this research (creswell, 2009). the study included 17 in-depth interviews (10 women and 7 men) with youth caregivers who work in a range of facilities and programs for youth (table 1), including welfare departments and boarding schools in isolated settlements on the eastern border of israel.
participants were recruited in several ways: by contacting welfare departments in the research area, by using the snowball method with the help of personal acquaintances, and by posting messages on social media to identify potential participants. after collecting the contact information of potential participants, a personal inquiry was sent to them, which included a description of the research topic and its objectives. from among those who agreed to participate, we made an effort to create a research group with as wide a range of faculty positions as possible. the interviewees were: youth counselors and coordinators (n = 7), social workers (n = 7), a clinical psychologist (n = 1), a director of social welfare services (n = 1), and a senior coordinator responsible for the training of faculty (n = 1). for details of the interviewees, see the table in appendix 1. focusing on the perspective of the therapeutic professionals provides a broad perspective on the phenomenon, including therapeutic, institutional, and organizational aspects. data collection consisted of semi-structured in-depth interviews conducted with the purpose of exploring the personal perspectives and experience of the interviewees, through the presentation of their authentic voices regarding the phenomenon under investigation (moustakas, 1994; see also marwick & boyd, 2014, on the use of interviews to reveal insights regarding new media functions in daily life). the interviews were conducted over the course of three months during 2019, with each interview lasting between one and one-and-a-half hours. the interviews were mainly conducted in person, with the exception of five interviews that were done over the phone.
interviewees were asked a variety of questions about the content and characteristics of their online communication with at-risk youth, what considerations motivated them to choose social media as an aspect of treatment, and how they perceived the advantages and disadvantages of online communication in this context (for the full questionnaire, see appendix 2). the interviews were recorded and transcribed. the data were analyzed with individual profiling and thematic analysis: first, we crafted profiles of each interviewee, which "allows us to present the participant in context, to clarify his or her intentions, and to convey a sense of process and time, all central components of qualitative analysis" (seidman, 2006: 119); second, we examined categories, patterns, and connections, trying to find a balance between within-case and cross-case analysis (king & horrocks, 2010). the thematic analysis mostly involved an inductive approach (themes emerged from and were grounded in the data), although self-presentation was defined as an a priori theme. third, the coded categories were conceptualized into broader themes. while innovative researchers are increasingly using electronic methods for coding data (mainly for the second and third stages mentioned above), due to the small size of the sample we preferred to stick to the traditional method of manual coding (see basit, 2003). following king and horrocks (2010) and neves et al. (2015), we performed a three-step procedure: first, the first and second authors read the transcripts independently to identify categories and themes. second, both authors coded together, using a label-coding scheme, and tested for convergence. third, the coded categories were conceptualized into broader themes. finally, the third author sampled 35 per cent of the transcripts to determine the reliability of the category coding: the inter-rater reliability for category coding was 90 per cent.
the inter-rater reliability was calculated by counting the discrepancies in category assignment between the coded transcripts of the two authors and the third author, and by dividing them by the total number of category assignments. throughout the research, we attempted to keep in mind the gap between our experience as researchers and the phenomenon being researched, and we were also aware that the qualitative researcher's feelings and emotions serve as an analytical tool, as well as constituting an integral part of our scientific work (allen, orbe, & olivas, 1999). in order to improve the trustworthiness of the results, we used a strategy suggested by lincoln (1985): we invited three of the informants to read and review a first draft of the findings, analysis, and conclusions, and some of their remarks are integrated into this paper. this was done in order to create a dialog wherein "the subjects of the theoretical statements become active partners in the developing process of verification of knowledge" (bauman, 1976, p. 106). in the course of conducting the research and writing the results and analysis, special attention was given to ethical considerations. all interviewees received a written explanation of the research aims, methods, and ethics (considering anonymity and confidentiality). in order to ensure the privacy and anonymity of the interviewees, as well as that of individuals mentioned in work-related stories that were told during the interviews, the names of the interviewees appearing in this article have been changed, while maintaining their gender and professional role. in some cases, technical and biographical details have been changed from the interview transcript (allmark et al., 2009). the research was approved by the ariel university irb. the primary impression that emerges from the analysis of the interviews is that therapeutic staff members view social media as an important and necessary tool in the treatment of at-risk youth.
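the agreement calculation described above (discrepancies in category assignment divided by total assignments, subtracted from one) can be sketched in a few lines of code. this is an illustrative sketch only: the function name, category labels, and excerpt counts below are invented for demonstration; the paper reports only the final 90 per cent figure.

```python
def percent_agreement(coder_a, coder_b):
    """Share of category assignments on which two coders agree:
    1 - (discrepancies / total category assignments)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must label the same excerpts")
    discrepancies = sum(a != b for a, b in zip(coder_a, coder_b))
    return 1 - discrepancies / len(coder_a)

# hypothetical labels for 10 sampled interview excerpts
author_labels = ["outreach", "monitoring", "outreach", "intervention",
                 "logistics", "monitoring", "outreach", "intervention",
                 "logistics", "outreach"]
rater_labels = ["outreach", "monitoring", "outreach", "intervention",
                "logistics", "monitoring", "outreach", "logistics",
                "logistics", "outreach"]

# one disagreement out of ten assignments -> 90% agreement
print(round(percent_agreement(author_labels, rater_labels), 2))  # -> 0.9
```

note that raw percent agreement does not correct for chance agreement; chance-corrected statistics such as cohen's kappa are often reported alongside it.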
the social media platforms used by the staff were mainly facebook, instagram, and whatsapp. all of the participants reported that they see great advantages in integrating these platforms into their daily social work and therapeutic practices. some even defined this type of work as a "necessary skill." however, with the exception of several interviewees who work in a dedicated online setting (detailed below), the majority noted that because the use of this media channel for therapeutic purposes is new, their workplace does not have clear guidelines about it. as daniella, a social worker at a boarding school for at-risk youth, said: "the staff doesn't have agreed-upon rules, each person does what he wants to and chooses to do." the therapists noted that they are given a "free hand" when it comes to online relationships. the issues that commonly arise concern questions of privacy, authority, and boundaries (for more on these issues, see below). analysis of the interviews revealed four distinct purposes of online communication with alienated youth: (a) technical and organizational purposes; (b) familiarity and connection; (c) monitoring potentially dangerous situations; and (d) therapeutic interventions. [table headings: deepening contact with youth; understanding the (online and interpersonal) youth culture; defining the treatment time] the technical and organizational aspect is particularly seen in the use of whatsapp groups among the therapeutic staff, as well as whatsapp groups that include both professionals and the youth they are treating. eleven interviewees mentioned that whatsapp serves as an easy and highly accessible platform for messaging, updating, and scheduling meetings. the diverse functions of whatsapp, such as the ability to create groups and mailing lists and to switch between interpersonal and group communication, allow for increased control and management.
for example, the ability to easily send information and updates to a large number of people frees up valuable time for professionals to engage in educational and therapeutic work. two of the youth care workers noted the feature of whatsapp that allows them to see whether the recipient has read the message (this is a default setting in the application). the feature has become an integral part of their decision-making process about whether it is worth investing additional efforts in an outreach project aimed at recruiting as many youth as possible. a core theme that arose from all of the interviews is the use of social media for purposes of integration, deepening familiarity, and establishing therapeutic relationships with the youth. analysis of the interviews revealed three types of potential therapeutic benefits of social media communication with youth at risk. moreover, young people understand that the ability to use the distinctive language of the internet, for example emojis and new phrases and terms, enables a conversation on equal terms, allowing them to express their feelings in their own language. this creates closeness and opportunities to build trust in their relationships. "i identify a kid who has an easier time expressing himself through messaging. it gives him the time and space to read my message and get back to me when it's comfortable for him. it's especially helpful to me in the early 'courting' stage of the relationship" (uri, social worker at a non-residential regional center for at-risk youth). one interviewee noted a gender difference when it comes to social media communication: "with boys, let's say, i see that it's easier to communicate with them through whatsapp than in person. for girls, it seems to me the opposite, because they are more verbal. for example, we had a male teen who had a really hard time communicating, but it was easier to talk to him through whatsapp" (ella, coordinator at a youth outreach program).
when we asked the interviewees what exactly it is about social media that allows the connection they describe, they highlighted specific features and characteristics of social media, such as the availability, accessibility, and computer-mediated nature of the text, that help deepen the relationship and create an open and honest conversation. "there is something about messaging. hiding behind the message allows you to feel comfortable and open, without facial expressions that convey judgment" (leah, media coordinator at the department for youth development). another interviewee mentioned a phenomenon he commonly encounters, a situation in which there is no way to contact the youth for technical reasons: "i have encountered situations in which youth have a problem with their cellphones, usually because they don't have money to pay for the service, so they turn off their phones, and whatsapp is the only way i have to communicate with them," (david, social worker at a boarding school for at-risk youth). these descriptions recall the quote from uri about the way online communication helps in the early stages of the relationship. in contrast to classic psychological treatment, therapeutic professionals working with at-risk youth take an approach of gradually getting to know each patient. the characteristics of online communication facilitate this practice and help overcome barriers that may impede success. (b) maintaining relationships. social media enables staff members to keep in touch with youth whom they've worked with in the past. facebook and instagram, in particular, allow them to follow the activities, personal lives, and achievements of 'alumni' months and even years after they stop communicating through the more intensive media outlets, such as whatsapp groups, that were actively used as part of the treatment frameworks. 
at the same time, three interviewees noted that the whatsapp groups they shared with the youth remained active for years after the formal treatment framework ended. "the group is still active from time to time. i find it moving that they make sure to send happy birthday notes when one of the youth or counselors has a birthday... it's nice to see that they care about each other and that they take a minute or two to send each other birthday wishes," (roni, social worker in the department of youth development). the importance of this connection is twofold. on the part of the staff members, whatsapp allows them to 'stay in touch,' but they indicated that it goes beyond that. they said that being able to follow how alumni are integrating into ordinary life allows them to receive feedback, which one interviewee described as "getting closure." seeing the results, the outcomes, helps reinforce their belief in the challenging work that they do. from the perspective of the youth, the online forum can serve as a mutual support network, and even as a possible opening for them to seek out advice and assistance as they navigate their paths: "[the whatsapp group] helps them keep in touch with the most meaningful group in their lives, that is, the people they were with when they went through a hard time at the very important age of 16 or 17. we see that even four years later, they'll arrange to meet up, and they share a common language that stays very much alive. it's mainly social, but it can certainly be a platform for therapeutic work like monitoring or ongoing assistance or referrals for help. recently, a lot of the youth who finished [our program] had no framework to be in, and we found jobs for them, we were able to help them," (nurit, director of social welfare division for youth development). (c) understanding the (online and interpersonal) youth culture. 
the therapeutic staff's online presence helps them to get to know the personal world of the youth, bridging the gap between the adults (the therapeutic professionals) and the young people. the six interviewees who highlighted this theme referred to two levels of acquaintance. this further familiarity with the youth is particular to the online sphere. caregivers see their presence on social media as a way to better understand the online forums and to identify risks, such as bullying and inappropriate discourse. sometimes they can even offer guidance in this new type of social space. social media offers a convenient platform for youth to share emotions and express their thoughts, feelings, and frustrations. therefore, in addition to providing a forum for strengthening connections and becoming familiar with each other, the presence of therapy professionals in online forums has valuable potential for identifying signs of distress prevalent among at-risk youth. interestingly, this theme was raised by all of the interviewees, citing numerous situations in which signs of distress were first exposed online. "there was one girl who always posted questions about sexual harassment. it was very worrying. after each such post, i would send her a private message: 'hi, i read your post. i would love to help you.' it was a gentle kind of outreach, because she was already grown, 17 years old, which is a stage when they tend to want to share less, when there's a desire for independence. after a few times, she suddenly answered and we developed a connection, and it really helped," (nurit, director of social welfare division for youth development). online communication venues seem to allow youth who are experiencing distress to express their feelings with greater ease, as compared to face-to-face meetings. 
the interviewees described a wide variety of online posts that drew their attention to the distress of those who wrote them, whether the post was about the pain of breaking up with a boyfriend or girlfriend, a crisis with their parents, a depressive episode, social isolation, or even suicidal thoughts. "there was a post in which someone expressed signs of distress. naturally, we showed it to the social worker, and a tragedy was avoided. the girl wanted to commit suicide; she wrote all kinds of things like 'i want to die,' 'i hate my life.' because of facebook, we were able to save her," (ariella, coordinator at a youth outreach program). use of social media is relevant to therapeutic professionals tasked with identifying and reaching out to marginalized youth. in the digital age, this type of work has expanded from the traditional "street search" to a new practice of online searches. indeed, some welfare departments have established media departments and teams that seek to leverage online forums and social media as tools for identifying distress, as means for outreach to at-risk youth, and even to help by providing an initial response. orly, who works in the media department of a youth development program, emphasizes the potential of this tool to reach youth who typically avoid exposing their distress or actively seeking mental health support: "we can identify situations of distress on social media by spreading posts that reach young people and invite them to contact us. it's there that we see the ones whose pain is more internalized, who don't necessarily look like the ones out in the streets. we're also often approached with questions about sexual identity, which means we are expanding our target audience." using social media to meet the needs of youth and provide them with emotional support was also noted by most of the interviewees (about 14 in number). "online media helps us locate the youth. 
there are situations in which we were able to achieve deep therapeutic discourse solely through online correspondence... [the internet] provides us with a vast therapeutic space." (leah, media coordinator, department of youth development). however, it is important to note that while there was considerable consensus among the interviewed professionals regarding the potential uses of online communication described above (technical and organizational purposes, making connections, and monitoring distress), the situation was different regarding its use in therapeutic interventions. even those who to some extent supported such practices presented ambivalent positions, or at least qualified their statements. when leah (quoted above) was asked to give an example of online therapy, she noted a practice that enables youth to sit in the same physical space as caregivers, but chat with them online: "choosing to conduct therapy sessions through the chat feature is comfortable for the youth. they feel more secure and more open to talking. meaning, we can actually meet with someone and sit in the same room but still be communicating through the chat. the main thing is to give them the feeling that they are safe and can trust us, and that we are here to help in any way that's comfortable for them, at least in the beginning." five of the interviewees strongly emphasized that providing treatment online is a topic of debate among professionals in the field. one of them mentioned a heated discussion among the professional team about whether the word "therapy" should be used to refer to online communication, or whether it is merely a "supportive aid" for therapy. leah stated that she sees an online chat as a first step in the therapeutic process, but not as a practice that can stand on its own. similarly, other interviewees said they did not consider it a replacement for traditional in-person therapy. 
in discussions with the interviewees about their actual work in the field, we found almost complete consensus in favor of using online communication as part of their therapeutic work. however, deeper investigation into the issue indicates that they decided to use it both because of its benefits and despite its limitations. most of the interviewees (14 of 17) noted dilemmas and debates leading up to the decision to adopt online forums as part of their work, and some grappled with these questions even while using online communication. these dilemmas revolve mainly around two issues: the need to protect the privacy of caregivers and patients, and the difficulty of delineating the treatment time. privacy of therapists and youth. personal exposure and sharing by users are defining characteristics of social media. this is the key to its therapeutic potential, and it also presents one of the main challenges facing therapists. disclosing caregivers' personal profiles to the youth they treat is seen as problematic. "my facebook page includes family-related things. i don't know if they need to know everything," (samuel, social worker at a youth support center). this concern also relates to the professionals' opinions on current events: "i write a lot about politics on facebook. often, i didn't feel it was right for them to be so aware of my opinions," (david, social worker at a boarding school for at-risk youth). the privacy dilemma also touches on the need to respect the privacy of the youth being treated. monitoring youth online has ethical as well as therapeutic implications. "keeping track of them and being exposed to the things they do is sometimes problematic, because it is an invasion of their privacy. we use it, but are very careful. most of them have trust issues, so i don't want them to think i've been following them [online] all day," (julia, social worker and therapist at a youth development program). 
various aspects of the privacy dilemma were mentioned multiple times by the interviewees. on the one hand, they expressed a desire to initiate discussions with youth about issues to which the therapist was exposed while viewing their posts on social media. on the other hand, they worry that raising issues the youth did not choose to share directly with the therapist could cause them to avoid treatment, or cause a crisis of trust. the significant therapeutic potential that is inherent in information the therapist sees online must be weighed against the invasion of the youths' privacy and the danger of violating the therapeutic contract. most of the interviewees said they strive to find the proper balance: "on the issue of the online statuses to which we are exposed, there is a lot of information that can help us catch problems in time, such as suicide. we had a girl who showed all kinds of warning signs that she was suicidal, and we prevented it. this can be confusing, but also helpful. it is a question of whether to invade their privacy and whether they want us to do that, while the post may also be a call for help. it's between invasion of privacy and offering help. as we catch up with the gaps in technology advancement, we have to think about how to use online media as an essential tool," (leah, coordinator in the department for youth development). defining the treatment time. another potential disadvantage of using social media that bothered the interviewed professionals pertains to the question of defining the therapeutic setting. the accessibility and ease of sending messages make it possible for youth to contact therapists beyond working hours, at night or on weekends and holidays. there is a disadvantage in the sense that, for example, they might send messages really late at night. it's more difficult to set limits online, so you need to define the treatment time. i had a case with a girl who asked me 'why were you up at 4 in the morning?' 
because she saw on whatsapp that i was online. you have to understand that they are on their phones all the time, and they see and pay attention to everything. this has many benefits for adolescents, but you also have to pay attention. it is important to note that the interviewees who mentioned the difficulty in limiting the treatment time were not only relating their personal experience of how their online presence required them to be available to the youth in their free time; the undefined time also potentially harms the youth. being friends on facebook and instagram with the youth may allow them to deepen and improve their relationship, but it also raises questions about authority and how the relationship is perceived. there is a concern that the caregiver-patient or mentor-mentee relationship will be transformed, in the youth's perspective, into a relationship similar to their other social relationships. this concern is heightened in the case of online contact between therapists and patients of the opposite sex ("if, with boys, i deliberate a few times, with girls i am a million times more cautious," omer, clinical psychologist at a youth development program). in addressing this issue, four interviewees noted that defining the times during which they can be in contact conveys an important therapeutic message and improves patients' ability to accept delayed gratification (despite their potential availability). this skill can be beneficial to the youth in many areas of life. before i told them until what time i'm available, there were situations when i would receive notifications at night as well. i had to make a separation between work and my personal life. it's mostly for myself, but it's also good for the youth, who know i am there for them, but not constantly. 
this parallels reality, in that they learn that not everything is always accessible to them, and that it may not be possible to help them as soon as they want it, and it's from an empathic place, so they can understand what it's like out there in the world. (julia, social worker and caregiver at a youth development program) one of the interviewees emphasized that when defining their working hours with the youth, they clarified that these limits can be extended in urgent cases: "for example, i specify that i will answer until 6 p.m., and after that only if it is an emergency" (yifat, a coordinator at the department of youth development). similarly, the very discussion with the youth about what is and is not considered an emergency also serves as a therapeutic opportunity: "i had a case a few months ago of a boy calling me at 9 p.m. i sent him a message asking if it was urgent. it turned out that his bike had been stolen, and he was very worried about how his father would respond to the theft (...). that is why i think there are gray areas, and a need for shared thinking about everything," (leah, media coordinator at the department for youth development). another work-related challenge concerns the nature of communication through text messaging. interviewees gave examples of text messages that could be interpreted as more social and intimate than was their original intent. added to this is the fact that such messages sent outside of working hours can diminish the status of the therapist as an authority figure and thus detract from the effectiveness of treatment: "the problem is that it turns into something social, and i am not sure i want to give the relationship that connotation," (menachem, senior coordinator at a youth development program). it emerged that questions related to defining the treatment time are a significant concern to the therapeutic professionals. 
they repeatedly emphasized that because of the novelty of using online forums, they have no regular use pattern and no definitive answers. the interviewees reiterated that they lack formal guidelines from their workplace about providing treatment to youth online ("in terms of any general definition or expectation for employees -there is nothing. everyone decides on an individual basis," julia, social worker and caregiver at a youth development program). it is evident that they are eager to receive training on how to effectively and appropriately communicate with youth via social media. instead, their online work activity is characterized by trial-and-error, intuitive decisions, retrospective considerations, and sharing their deliberations with their colleagues. "basically, we are still discovering all the problems and advantages of this tool," (leah). several interviewees mentioned the need to guide the youth regarding proper online communication with therapy professionals, as well as a need to develop professional training for therapeutic staff who wish to work (also) through online technologies. this sentiment is summarized by yifat, a coordinator in the department for youth development: "there is no doubt that today's technological world is evolving and it is not really under our control. the youth are there, and so we must show our presence there as well. i think, and see, that there are many young people who can only be contacted through online networks. despite the disadvantages of communicating through screens, this is the situation, and it should be channeled towards good things. through online networks, trust is also created. there is something very powerful about entering their world and about its ways of reaching them and treating them. even if the relationship ends without them receiving in-person treatment, i have been in situations where the communication helped and supported young people who were in distress." 
the growing popularity of social media has raised new opportunities for reaching out to and supporting youth at risk. using in-depth interviews with youth counselors and social workers, this study examined the characteristics of online therapeutic relationships between adolescents at risk and their caregivers. the interviews revealed that counselors and social workers generally hold a positive view of online therapeutic relationships. they acknowledge the benefits of this new type of communication and recount its therapeutic successes. according to the interviewees, social media-based communication contributes to maintaining a reciprocal, meaningful, and long-term contact with the at-risk youth under their care. the qualitative analysis of the interviews reveals that online communication can facilitate and even improve the quality of the therapeutic relationship with at-risk youth in three key areas: (1) forming a trustful and positive relationship with detached youth and maintaining it over time; (2) detecting early signs of danger and distress; and (3) providing psychosocial counseling when needed. yet, the clinical picture is more complicated than these noted advantages. the interviewees expressed several concerns regarding the integration of new technologies into traditional therapeutic practice, and some even opposed the use of the word "therapy" in the context of online communication with at-risk youth. the ambivalence regarding the use of new media as a legitimate therapeutic tool might derive from the fact that counselors and social workers usually rely on well-defined, theory-driven therapeutic approaches (e.g., cognitive behavior therapy, psychodynamic therapy). these approaches determine the specific settings that are required for a successful treatment and lay out the specific therapeutic strategies that should be applied to achieve beneficial outcomes. 
nevertheless, it seems that, regardless of their theoretical background, all the interviewees acknowledged the importance of maintaining a continuous relationship with at-risk youth through online communication. this main theme can be understood through the prism of "common factors" in psychotherapy (castonguay, 1993). even though different therapeutic approaches incorporate different sets of values and practices, they also share common factors that contribute to the successful outcome of the treatment. for example, the mere beginning of a treatment may arouse patients' expectations for symptom relief, which in turn inspire hope and motivation to change (arnkoff, glass, & shapiro, 2002). and above all, all approaches highlight the therapeutic alliance as an essential condition for treatment (pilecki, thoma, & mckay, 2015). the quality of the relationship between patients and therapists is one of the most researched therapeutic components, and it has been shown to contribute significantly to treatment outcomes, independent of the specific type of therapy examined (norcross & wampold, 2011). from this perspective, it is understandable why all the interviewees valued online communication. regardless of their guiding approach, they all assume that a positive and trustful therapeutic alliance is a crucial factor in the treatment of at-risk youth. some might even argue that the very relationship between the therapist and the adolescent is the therapy. however, forming a solid therapeutic relationship with at-risk youth is not an easy task. youth at risk are not always inclined to cooperate with conventional therapeutic efforts and may prefer tactics of avoidance and escape (kaim & romi, 2015). many of them fear the negative stigma associated with seeking help and are worried that cooperating with the "adult system" will damage their image in the eyes of their peers (ben hur & giorno, 2010). 
these characteristics may explain why the new media have attracted both the adolescents and the counselors and social workers. apparently, some of the problematic aspects of online communication, such as the anonymity and the avoidance of direct face-to-face communication, are the very reasons that this form of communication "speaks" to adolescents at risk. the (partial) anonymity encourages youth to share their inner feelings openly and helps them to overcome the psychosocial barriers mentioned above. it allows them to receive discreet psychosocial support (barak & dolev-cohen, 2006; dolev-cohen & barak, 2013; valkenburg & peter, 2009), without fear of real or imagined social sanctions (friedman & billig, 2018). some scholars even argue that this type of communication encourages users to be honest and to express their "true selves" (lapidot-lefler & barak, 2012). moreover, in the digital realm, the adolescent may feel less pressure to participate in the conversation. without this pressure, she/he may feel more comfortable and less ashamed (ben hur & giorno, 2010). he or she can choose to contact the therapist through textual communication, which in many cases is perceived as a more convenient and protected channel of communication in which they can express their feelings and thoughts in their own language (barak, klein, & proudfoot, 2009). indeed, this form of communication requires therapists to let go of some of their rules and theoretical beliefs. yet, it seems that the interviewees agreed that the benefits of online therapeutic relationships with at-risk youth outweigh their potential damage. aside from the potential benefits for the relationship itself, interviewees emphasized the new opportunities that arose with regard to early detection of danger and distress. early detection of distress can prevent worsening of the patient's emotional state and mitigate his or her emotional burden (halfin, 2007). 
this is especially relevant to youth, as many adolescents do not share their negative experiences, such as victimization by bullying or suicidal thoughts, with their adult caregivers (rey & bird, 1991; velting et al., 1998). this finding corresponds with the emerging line of research according to which mental health conditions can be traced from social media activities (for a review, see: guntuku, yaden, kern, ungar, & eichstaedt, 2017). although most of this research has focused on adults, promising results have also been evidenced among adolescent populations (ophir, asterhan, & schwarz, 2019). the current study joins these new studies and suggests that social media has become an important source of psychosocial information about at-risk adolescents, offering a rare glimpse into their troubles and pain (ophir, 2017; ophir, asterhan, & schwarz, 2017). finally, the interviewees highlighted the opportunity to provide counseling from afar. despite some reservations about online treatments, there is ample evidence that online counseling has a significant impact on patients, which in many cases is comparable to that of face-to-face counseling (andersson, 2016; mallen et al., 2005). studies have shown, for example, that computer-based treatments are effective for both depressed (kessler et al., 2009) and anxious patients (barak, hen, boniel-nissim, & shapira, 2008). the findings from the current study confirm previous research that claimed that online counseling increases accessibility to treatment (mallen et al., 2005). social media seem to increase accessibility to youth who live in remote geographical areas and to youth who choose to distance themselves from their family and community. the therapeutic relationships that are formed online are especially relevant to these complicated times of social distancing due to covid-19. moreover, aside from overcoming geographic limitations, social media seem to provide adolescents with subjective feelings of ease and comfort. 
these feelings correspond with a well-acknowledged advantage of online counseling, in which clients may feel less threatened in their natural surroundings and allow themselves to be more exposed with their therapists (gilat, ezer, & sagi, 2011). these two factors, accessibility and comfort, can be critical when working with at-risk youth who tend to engage in defiant or rebellious behaviors. despite these advantages, online counseling suffers from several principal drawbacks: first, it reduces the ability of patients and therapists to interpret nonverbal communication signs, and messages are sometimes ambiguous (barak, hen, boniel-nissim, & shapira, 2008; suler, 2008). second, it might diminish the caregiver's ability to provide empathy and warmth. third, it may feel less binding, so some patients could easily drop out of the treatment (amichai-hamburger, et al., 2014). fourth, it raises practical and ethical dilemmas, especially when the therapeutic relationship is formed via social media (rather than video chats, such as zoom or skype). therapists and counselors may ask themselves: what distinguishes me from other "facebook friends"? should adolescents be exposed to my personal photographs? and will they expect me to be available to them online at all times? these dilemmas are not unique to youth care workers. previous literature that addressed teacher-student online relationships raised similar concerns. although many teachers acknowledge the benefits of social media communication, they are also worried that it will undermine their authority in the eyes of their students (asterhan & rosenberg, 2015). even those teachers who used social media to communicate with their students were concerned about privacy boundaries and worried that their relationships with the students would become too personal. like teachers, youth caregivers, as shown in this study, are not naïve. they are well aware of the potential pitfalls of social media interactions. 
despite the inherent informality that characterizes their work with at-risk youth, they do not see their relationships with youth at risk as symmetrical relationships. a primary challenge in this context is the need to understand and define the physical and emotional boundaries between them and their patients (friedman & billig, 2018; haenfler, 2004; nagata, 2001). several interviewees noted difficulties in defining such boundaries, including the therapeutic space, the duration of the treatment, the availability of the therapist, and the privacy of both patients and caregivers. thus, future training programs should consider intrinsic characteristics of computer-mediated communication, such as its blurred boundaries and its relatively democratic and egalitarian nature (e.g., asterhan & eisenmann, 2011; hampel, 2006; weasenforth, biesenbach-lucas, & meloni, 2002). without minimizing these concerns, the main conclusion from this study is that youth care workers are well aware of these challenges, and yet they all agree that these challenges should not stop them from leveraging the new media for psychosocial purposes. they make considerable efforts to navigate these challenges by clearly defining the online practices that could contribute to the therapeutic process and avoiding or minimizing the ones that could disrupt it. however, they still lack appropriate training and usually work their way to the at-risk youth through trial-and-error. the current study has limitations. the first limitation concerns the fact that we only interviewed adult caregivers, so the youth perspective is missing. moreover, the caregivers might have different perspectives on and attitudes towards technology-based communications, due to generational differences (zhitomirsky-geffet & blau, 2016). in israel, most professional caregivers only start their practice in their late twenties. most of the interviewees in this study therefore belonged to generation y (aged 23-41). 
they are still considered very technology oriented (compared with previous generations), but they might have more complex attitudes towards internet-based communication, compared with the youth at risk (who belong to generation z and received their first smartphones as children). these differences in attitudes towards technology may have an impact on the online therapeutic relationship. yet, we note that all interviewees emphasized the importance of integrating online communication into their work and declared that digital literacy is a required skill when working with youth. a second limitation concerns the location of the study. two specific contexts may limit the generalizability of the findings: the country (israel) and the socio-geographic location (east border). therapists in israel may differ from therapists in other countries (e.g., in israel, therapists are allowed to provide online counseling without formal training in this field), and youth in the east border may have different challenges than youth in other places (e.g., an ideological crisis that reflects the tension between their conservative upbringing and their new identity). we therefore recommend that further research and future training programs consider cultural differences. a third limitation concerns the fact that the study is based on qualitative interpretations of interviews with counselors and social workers. the exploratory nature of the study calls for future work that builds upon the current findings and establishes the benefits of social media therapeutic interactions. further studies are recommended to quantify these potential benefits and explore their reliability over time and in additional contexts. studies should also examine the perspectives of the youth themselves and compare them with those of their caregivers. 
there is also a need to examine potential confounding factors that might impact the online therapeutic relationships, such as gender differences (noguti, singh, & waller, 2019), generational differences (hargittai, 2010), cultural contexts (mesch & talmud, 2008), and religious contexts (rosenberg, blondheim, & katz, 2019). these factors might affect the very legitimacy of the online relationship. it is recommended that the current findings be further examined through quantitative and wide-ranging research. despite these limitations, the current study suggests that the new communication technologies are bringing viable and exciting opportunities to the field of youth at risk. most interviewees believe that online therapeutic relationships are beneficial for at-risk youth, as they help to maintain a trustful relationship with detached youth and provide a platform for early detection of distress and for delivering psychosocial counseling. yet, as noted above, the clinical picture is more complicated than these advantages. thus, it is advisable to provide special training to youth caregivers to help them bridge the gap between traditional and online practices and help them provide an effective treatment to at-risk youth online.

is online communication discussed with you at work? are there certain procedures or guidelines set by the council with regard to social networks? have you undergone training on this topic? what do you do with this information?

14. do you have joint whatsapp groups with the youths? do you have one group for all of them? what is the role of the group? what do you do there? what interaction takes place there? what are the advantages and disadvantages of these whatsapp groups?
15.

references:
- ethical issues in the use of in-depth interviews: literature review and discussion
- the future of online therapy
- internet-delivered psychological treatments
- expectations and preferences
- the promise, reality and dilemmas of secondary school teacher-student interactions in facebook: the teacher perspective
- does activity level in online support groups for distressed adolescents determine emotional relief
- a comprehensive review and a meta-analysis of the effectiveness of internet-based psychotherapeutic interventions
- defining internet-supported therapeutic interventions
- why shy people use instant messaging: loneliness and other motives
- manual or electronic? the role of coding in qualitative data analysis
- narrowing the adolescent service gap: counseling and mental support for adolescents on the internet - yelem as a representative case
- "common factors" and "nonspecific variables": clarification of the two concepts and recommendations for research
- research design: qualitative, quantitative, and mixed methods approaches
- adolescents' use of instant messaging as a means of emotional relief
- teacher-student relationship and sns-mediated communication: perceptions of both role-players
- education, socialization and community: coping with marginal youth in rural frontier communities in israel
- seeking help among adolescents: their attitudes toward online and traditional sources
- detecting depression and mental illness on social media: an integrative review
- learning, teaching, and scholarship in a digital age: web 2.0 and classroom research: what path should we take now? educational researcher
- gender and ethnic differences in formal and informal help seeking among israeli adolescents
- depression: the benefits of early and appropriate treatment
- rethinking sub-cultural resistance: core values of the straight edge movement
- digital na(t)ives? variation in internet skills and uses among members of the "net generation"
- student-teacher relationship in the facebook era: the student perspective
- whatsapp is the message: out-of-class communication, student-teacher relationship, and classroom environment
- adolescents at risk and their willingness to seek help from youth care workers
- therapist-delivered internet psychotherapy for depression in primary care: a randomised controlled trial
- effects of anonymity, invisibility and lack of eye-contact on toxic online disinhibition
- online counseling: reviewing the literature from a counseling psychology framework
- networked privacy: how teenagers negotiate context in social media
- qualitative research design
- cultural differences in communication technology use: adolescent jews and arabs in israel
- facebook enhances antidepressant pharmacotherapy effects
- phenomenological research method
- beyond theology: toward anthropology of "fundamentalism"
- the 'non-aligned' young people's narratives of rejection of social networking sites
- gender differences in motivations to use social networking sites
- evidence-based therapy relationships: research conclusions and clinical practices
- sos on sns: adolescent distress on social network sites
- unfolding the notes from the walls: adolescents' depression manifestations on facebook
- the digital footprints of adolescent depression, social rejection and victimization of bullying on facebook
- in times of war, adolescents do not fall silent: teacher-student social network communication in wartime
- cognitive behavioral and psychodynamic therapies: points of intersection and divergence
- sex differences in suicidal behaviour of referred adolescents
- youth at risk: definition and implication for service delivery
- "whatsapp, teacher?": student perspectives on teacher-student whatsapp interactions in secondary schools
- it's the text, stupid! mobile phones, religious communities, and the silent threat of text messages
- a virtual safe zone: teachers supporting teenage student resilience through social media in times of war. teaching and teacher education
- techniques to identify themes
- image, word, action: interpersonal dynamics in a photo-sharing community
- social consequences of the internet for adolescents: a decade of research. current directions in psychological science
- parent-victim agreement in adolescent suicide research
- cross-generational analysis of predictive factors of addictive behavior in smartphone usage

characterization of youths with whom the worker works: what is the youths' background? what is the number of youths, their age range, and place of residence? characterization of youths according to the family they came from; in which frameworks did they spend time in the past; do they have a delinquent background; did they undergo molestation as children or another form of victimization? are they in a formal framework, or not? if not, what do they do in their free time? to what extent do the youths reveal their personal lives to you? are they cooperative? what types of problems do you encounter when working with them? are the youths in contact with their families? if so, in what way is this contact expressed? what is the frequency of the youths' contact with their families; do they feel respect towards them or hatred and anger? what types of problems do the youths that you encounter generally suffer from (e.g., personal problems, family, social, certain extreme behavior)? focus on drugs, crime, behavioral disorders (bullying, attention disorders, depression and anxiety): have they been hospitalized or expressed suicidal tendencies?

section b - working with the youths and online communication: what are the nature of the work and the dynamics of the connection between the youth and the social workers? where do you actually meet them? are they obliged to attend some of the meetings, and what do you do if they fail to come?
what are the most common modes of treatment? to what extent are you exposed to what youth at risk do on social networks? what characterizes this use? what applications do they like in particular? what do they reveal on the network? (network advantages and disadvantages for the youth.) to what extent are you involved in the youths' social network world, and in what ways (e.g., do you follow them on instagram, are you friends on facebook, etc.)? where you work, is it legitimate to communicate with youths via the networks or even make contact with them (e.g., whatsapp groups or facebook friendship requests)? if so, for what purposes? do you learn more about their world thanks to the network? do you use it as a tool for making personal contact with them? are you active on the social media networks, and how much do you know about using these networks for your personal needs?

highlights:
- counselors and social workers hold a positive view towards online therapeutic relationships
- social media improve the therapeutic staff's capabilities
- positive practices related to distinctive features of online communication channels
- significant challenges included dilemmas regarding privacy, authority and boundaries

key: cord-324654-nnojaupv
authors: vordos, nick; gkika, despoina a.; maliaris, george; tilkeridis, konstantinos e.; antoniou, anastasia; bandekas, dimitrios v.; ch. mitropoulos, athanasios
title: how 3d printing and social media tackles the ppe shortage during covid-19 pandemic
date: 2020-06-07
journal: saf sci
doi: 10.1016/j.ssci.2020.104870
sha:
doc_id: 324654
cord_uid: nnojaupv

during the recent covid-19 pandemic, additive technology and social media were used to tackle the shortage of personal protective equipment. a literature review and a social media listening software were employed to explore the number of users referring to specific keywords related to 3d printing and ppe. additionally, the qaly model was employed to highlight the importance of ppe usage.
more than 7 billion users used the keyword covid or similar on the web, while mainly twitter and facebook served as worldwide platforms for distributing ppe designs among individuals, and more than 100 different 3d-printable ppe designs were developed. the public is informed about health issues and communicates with health professionals through social media [13]. the main advantage of that form of contact is that it is a means of mass communication and offers the ability to facilitate disease surveillance [12], [14]. at the same time, medical applications for 3d printing are expanding rapidly and are expected to revolutionize health care [15]. the first 3d printers were invented by hideo kodama and charles hull in the early 1980s [16]. additive technology is used in various sectors such as the aerospace and automotive industries, the military, the sports field, architecture, the toy industry, and bioengineering, with different benefits and disadvantages [17]. since then, different 3d printing methods have been used, based on extrusion, powder solidification and liquid solidification, with different types of materials as building materials [18]. the aim of the present article is to explore the relationship between social media and 3d printing in the context of the recent covid-19 pandemic. we will analyze which types of personal protective equipment can be printed, and how 3d printing users can be coordinated to achieve mass printing volumes. two independent searches have been conducted, in order to examine the degree to which social media has affected the development and dissemination of ppe and medical equipment parts. they explore how 3d-printed designs were utilized to address the covid-19 pandemic, as well as how the qaly model can be applied in this case to measure the effects of the use of ppe. in the first in-depth search, social media was studied using targeted keywords, while an official database was implemented and the qaly model was applied.
since covid-19 appeared in wuhan, china, several drawings of ppe were shared among social media users, the 3d printing community, and individual citizens. a specialized social media listening software (awario) was used for gathering information about content referring to covid-19, 3d printing, and ppe. targeted queries were performed on popular social media such as facebook, twitter, instagram, youtube, and reddit, and also on news/blogs and the web in general. the searches were conducted between january 1, 2020 and april 14, 2020. the results were further examined for details on the representative designs of ppes. the search strategy consisted of using two generic lists of terms with one specific keyword. the first keyword set refers to the different names of the novel coronavirus and will be referred to as keywords set 1 (ks1). the general keywords set 2 (ks2) comprises terms which correlate to 3d printing technology, with the following boolean syntax: "design" or "3d" or "additive technology" or "3d printing" or "print". the specific keywords relate to the ppes and specific medical equipment, which for this study are "ppe", "personal protective equipment", "face shield", "goggles", "gloves", "boots", "surgical hood", "valves", "ventilator" and "respirator". the syntax of the queries that were submitted to the social media listening software has the following format: "ks1" and "ks2" and "x", where x is one of the specific keywords mentioned. for example, when looking into which social media posts include the keywords covid (or similar) and 3d printing (or similar) and ppe, the query syntax "ks1" and "ks2" and "ppe" was used. two major parameters were studied: the life expectancy of health professionals, and the average lifespan until death of someone who became ill with covid-19, which suggests that hygiene rules or personal protective equipment had not been used or were misused.
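the query construction described above ("ks1" and "ks2" and "x") can be sketched as follows. the ks1 term list is not given in full in the text, so the virus-name terms below are illustrative assumptions, as is the quoted-phrase or/and syntax (modelled on common listening-tool query languages, not necessarily awario's exact dialect):

```python
# Illustrative reconstruction of the paper's query scheme.
# KS1 terms are assumed; KS2 and the specific keywords are from the text.
KS1 = ["covid", "covid-19", "coronavirus", "sars-cov-2"]
KS2 = ["design", "3d", "additive technology", "3d printing", "print"]
SPECIFIC = ["ppe", "personal protective equipment", "face shield",
            "goggles", "gloves", "boots", "surgical hood",
            "valves", "ventilator", "respirator"]

def or_group(terms):
    """Join a term list into one parenthesised OR group, quoting each phrase."""
    return "(" + " OR ".join('"%s"' % t for t in terms) + ")"

def build_query(specific_term):
    """Build one 'KS1 AND KS2 AND x' query string for a listening tool."""
    return " AND ".join([or_group(KS1), or_group(KS2), or_group([specific_term])])

# One query per specific keyword, mirroring the ten searches in the study:
queries = [build_query(x) for x in SPECIFIC]
```

each element of `queries` is a single boolean string, e.g. the first one combines the virus-name group, the 3d-printing group, and `("ppe")`.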
data on life expectancy were obtained from the who and the oecd, while the calculation of the average lifespan was researched in the scientific databases pubmed and scopus with the keywords "symptoms" and "death" and "covid" and "days" [19], [20]. the results of the search were analyzed and summarized in the next section. table 1 shows that goggles, gloves, boots and surgical hoods have the lowest resonance in social media and on the web, in contrast to respirators and face shields (9,100k and 3,100k users, respectively, used combinations of keywords). ventilators and valves are also prominent in the results. fig. 2 is a graphical representation of the table 1 results, where each combination of keywords is presented in relation to the percentage of appearances in the corresponding social media or web. in many cases, zero results were returned, while in most of the cases with high percentages, those were found through websites and blogs. table 2 shows which of the who-proposed ppe are printable with personal or university laboratory 3d printers. the third column of the table shows whether they can be sterilized, while the fourth column shows the number of different designs that appeared in social media. 109 different designs of face masks (similar in many cases) were proposed in order to help health workers avoid exposure. more than 30 designs were proposed for respirators, two designs for goggles, while there was no design for boot covers (fig. 3). according to the world health organization, life expectancy on average in the world is 72 years. a more detailed study by the organisation for economic co-operation and development (oecd) showed that life expectancy for men is 77.7 years while for women it is 83.1 years. by the time of this study, more than 200 healthcare professionals had died from the novel coronavirus disease [21].
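the per-platform percentages plotted in fig. 2 amount to a simple normalisation of mention counts per keyword combination. a minimal sketch, using made-up counts rather than the paper's data:

```python
def platform_shares(counts):
    """Convert raw mention counts per platform into percentage shares.

    Returns one percentage per platform; an all-zero input (the 'zero
    results' cases mentioned in the text) maps to all-zero shares.
    """
    total = sum(counts.values())
    if total == 0:
        return {p: 0.0 for p in counts}
    return {p: round(100.0 * n / total, 1) for p, n in counts.items()}

# Illustrative counts for one keyword combination (not the study's figures):
example = {"twitter": 600, "facebook": 250, "news/blogs": 150}
shares = platform_shares(example)
```

with these toy counts, twitter would account for 60% of appearances for that combination, matching the kind of per-combination breakdown the figure shows.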
of these, 29% were physicians in the usa, with ages ranging between 56 and 65 years, and 17% were over 66 years old, while in europe 38% of healthcare professionals were older than 55 years [22], [23]. the search in the scientific databases for the time interval between the onset of symptoms and death showed, as expected, different results. table 3 shows the results from the scopus bibliographic database, which yielded 17 articles, as opposed to 12 from pubmed. the total number of those results that included the desired information was 6, with the time between symptoms and death ranging between 8 and 23.4 days. in this study, the relationship between the epidemic, social media, and the use of three-dimensional printing is presented. the ultimate goal of the article is to highlight the use of social media and additive technology in general by common users to address the lack of basic personal protective equipment or even parts of machinery used by health professionals. the qaly model was employed to illustrate the necessity of ppe use. more than 5,000 scientific articles on the role of social media during different crisis types (terrorist attacks, hurricanes, tsunamis, earthquakes, etc.) appear in scientific databases (scopus, pubmed and google scholar). in many cases, social media can operate as a crisis platform to generate community crisis maps [24], [25]. there are four types of connection between the users of social media during a crisis: i) the authorities communicate with citizens (statements, information, instructions, etc.), ii) authorities communicate with other authorities for inter-organizational crisis management, iii) citizens communicate with citizens (photographs, information, communities), and iv) citizens communicate with authorities [26]. as was expected, social distancing due to covid-19 resulted in an increase in social media use. on march 11, 2020 alone, there were more than 19 million mentions related to the new coronavirus.
in general, social media are used for communication between citizens or simply to kill time and reduce the sense of distance. on the other hand, healthcare organizations like the who used social media to provide information and rules of hygiene to battle covid-19. almost all platforms placed a joint statement on the fight against the pandemic on their main page [27]. the spread of covid-19 in the global community in a short period of time has resulted in the rapid depletion of ppe. health risks and the danger of accidents for professional health workers can be minimized with the use of ppe [28]. the choice of the appropriate ppe must be made according to the degree of exposure to germs, the material of the ppe, and its ergonomics [29]. many healthcare professionals from different countries use social media to express the lack of respirators, gowns and face masks [30][31][32]. the who also uses social media to ask governments and industry to increase the manufacturing of ppe by 40% [33]. the interaction between authorities and other authorities, as well as between authorities and citizens and vice versa, was as expected. the reaction among citizens, however, was impressive: they used social media to tackle the shortage of ppe and to help doctors and nurses during the pandemic. worldwide, social media groups exchanged ideas, drawings, schematics and technical instructions in order to produce face shields, reusable respirators, ventilators and other ppe [34]. one of the most important actions of social media users is the use of 3d printing technology to produce specific ppe. social networks have the potential to combine both information from official sources and popular information from citizens in a short period of time, making them a valuable tool in crisis management [35]. previous studies show that in times of crisis, citizens seem to be more cooperative and better able to respond to authorities' instructions via social media [36].
the main advantage of using these tools is the dynamic capability they provide to disseminate information [37]. unfortunately, the use of media has not only advantages but also disadvantages, such as unreliable information, inefficiency, and the spread of panic [13], [38]. at the time of the covid-19 pandemic, network platform use rose by at least 20%, communicating information about the virus spread and the shortages of ppe [39]. this information motivated the global community to look for possible solutions to these problems. the formation of relevant groups was immediate, and designs for possible solutions were placed in digital repositories, where the main technology used is 3d printing. it is not the first time that 3d printing, and additive technology in general, has been used to provide solutions in the field of medicine and biomechanics. various technologies and materials have been used to print surgical instruments, dentures, organ dummies, etc. [18]. in the united states of america alone, there are an estimated 444,000 printers; the united kingdom has more than 168,000 printers, while germany occupies the 3rd place in the world [40], [41]. it is, however, the first time that the community of 3d printer home users has come together and mass-produced ppe. more than 180,000 users worldwide can produce up to 6 face shields in 10 hours each, depending on the capabilities of the printers and the design. assuming that a country like greece has about 500 printers, then in one day more than 6,000 face shields can be produced, enough to equip doctors, nurses, rescuers, and staff working with patients in general, in a short period of time. in our study, the queries that included the general terms ppe and personal protective equipment showed the most results in user response, as expected, because they concern general fields and not individual ppe.
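the production estimate above (up to 6 face shields per 10-hour run per printer, and roughly 500 home printers for a country like greece) reduces to simple arithmetic. a sketch, with the throughput figures taken from the text and the continuous-printing assumption made explicit:

```python
def daily_output(printers, shields_per_batch=6, batch_hours=10, hours_per_day=24):
    """Estimate face shields produced per day, assuming printers run
    continuously at the text's throughput (6 shields per 10-hour run).
    """
    return printers * shields_per_batch * hours_per_day / batch_hours

# The paper's Greece example: about 500 home printers.
greece = daily_output(500)
```

this yields 7,200 shields per day for 500 printers, consistent with the text's "more than 6,000" figure; real output would be lower once printer downtime and failed prints are accounted for.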
in most cases, the largest percentage is due to the impact on news/blogs and the web, owing to the fact that social media groups display their products/results on news websites. evidence that the teams were organized through social media is that the printable ppes (face shields and respirators, but also valves and ventilators) show increased percentages on twitter, facebook and youtube, while the rest show zero. although several designs have been developed for 3d printing, the biggest concern is whether all of them can be used. some of these designs have been approved by the nih and fda. particular attention should be paid to the materials used for their production, but also to the way they are used, so as not to endanger human lives. the american society of anesthesiologists made a statement about the possibility of having multiple patients per ventilator. characteristically, they state a number of reasons why it is impossible to do so; for example, "volumes would go to the most compliant lung segments. positive end-expiratory pressure, which is of critical importance in these patients, would be impossible to manage." [42]. for such reasons, even about three months after the beginning of the pandemic, licenses for the manufacture of respirators by individuals have, in the vast majority of cases, not been issued. in figure 5, a proposed process to be used before 3d printing is illustrated. the need for ppe and medical supplies triggers the beginning of the process. individuals usually search through social media for designs and solutions to the problems concerning the local community. accordingly, they should verify whether the final product has been approved by the competent authorities. in case the proposed design lacks a license, a similar design that has the relevant license should be selected. subsequently, the prototype is printed and inspected by local healthcare professionals for micro-adjustments, if necessary.
finally, if all the previous steps are completed, the product can be printed in mass quantities. the qaly allows for the combined study of the outcomes of any health-related action and its effect on mortality in a single indicator, thus establishing a method that enables comparisons across multiple disease areas. throughout their lives, people pass through different health states, which are weighted based on preassigned utility ranks. the qaly model was applied as shown in figure 4, taking into account that one of the main causes of viral infection affecting the respiratory system is the non-usage or poor application of protective equipment, as well as the fact that health professionals are more exposed and subsequently more likely to get infected. individuals using ppe gained quality of life compared with health worker professionals from europe or the us who did not use ppe. this study examined the way individual social media users and 3d printer owners tackled the ppe shortage during a pandemic. social media influences the problem on multiple levels: firstly, it highlights the problem, in this context the lack of ppe; secondly, it encourages and promotes the formation of task forces/teams from the general population with a relevant interest; thirdly, it provides the means for exchanging information and technology; and finally, it can identify the number of 3d printers required at a local, national, or even international level to achieve the task. at the same time, the role of ppe as necessary equipment for health professionals is fundamental. the qaly model was employed to show the importance and effect of using personal equipment on life quality and expectancy. during the recent coronavirus pandemic, the world faced a serious shortage of ppe. individuals and universities coordinated their actions using web networking and social media, and around 150 3d-printable ppe designs were developed and distributed.
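the qaly model applied above weights the years lived in each health state by a utility rank on the usual 0 (death) to 1 (full health) scale and sums them. a minimal sketch; the weights and durations below are hypothetical illustrations, not values from the study:

```python
def qalys(states):
    """Quality-adjusted life years: sum of (utility weight * years) over
    successive health states, with weights on the 0-1 utility scale.
    """
    return sum(weight * years for weight, years in states)

# Hypothetical comparison in the spirit of figure 4: a health worker
# protected by PPE vs. one infected mid-career (weights are assumptions).
protected = qalys([(1.0, 30.0)])               # 30 years in full health
infected = qalys([(1.0, 10.0), (0.5, 2.0)])    # 10 healthy years, then illness
```

under these toy numbers the protected worker accrues 30 qalys against 11, which is the kind of single-indicator gap the model uses to argue for ppe provision.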
- social networking and 3d printing combined can be seen as a new tool for tackling pandemic emergency situations.

no competing financial interests exist.

references:
- coronavirus covid-19 global cases by the
- demographic science aids in understanding the spread and fatality rates of covid-19
- identification of coronavirus isolated from a patient in korea with covid-19
- a novel coronavirus from patients with pneumonia in china
- epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in wuhan, china: a descriptive study
- association of airborne virus infectivity and survivability with its carrier
- effectiveness and cost-effectiveness of vaccination against pandemic influenza (h1n1) 2009
- the future of social media in marketing
- the emerging use of social media for health-related purposes in low and middle-income countries: a scoping review
- patients' and health professionals' use of social media in health care: motives, barriers and expectations
- use of social media in health communication: findings from the health information national trends survey
- the digital incunabula: rock
- additive manufacturing (3d printing): a review of materials, methods, applications and challenges
- 3d printing in pharmaceutical and medical applications - recent achievements and challenges
- global health observatory (gho) data, world health observatory
- in memoriam: healthcare workers who have died of covid-19
- social media as crisis platform: the future of community maps/crisis maps
- social media and crisis management: cerc, search strategies, and twitter content
- social media in crisis management: an evaluation and analysis of crisis informatics research
- how covid-19 is testing social media's ability to fight misinformation
- emarketer
- marking of personal protective equipment
- minimizing occupational hazards in endoscopy: personal protective equipment, radiation safety, and ergonomics
- attenuated total reflectance infrared spectroscopy as a screening tool for the urinary calculi characterization
- nhs staff 'gagged' over coronavirus shortages
- american college of surgeons statement on ppe shortages during the covid-19 pandemic, american college of surgeons
- shortage of personal protective equipment endangering health workers worldwide
- social media in disaster risk reduction and crisis management
- examining the role of social media in effective crisis management: the effects of crisis origin, information form, and source on publics' crisis responses
- the dynamic role of social media during hurricane #sandy: an introduction of the stremii model to weather the storm of the crisis lifecycle
- 40+ 3d printing industry stats you should know
- printing industry stats you should know
- dig deeper into 3d printing sentiment
- joint statement on multiple patients per ventilator

key: cord-011906-ek7joi0m
authors: throuvala, melina a.; griffiths, mark d.; rennoldson, mike; kuss, daria j.
title: mind over matter: testing the efficacy of an online randomized controlled trial to reduce distraction from smartphone use
date: 2020-07-05
journal: int j environ res public health
doi: 10.3390/ijerph17134842
sha:
doc_id: 11906
cord_uid: ek7joi0m

evidence suggests a growing call for the prevention of excessive smartphone and social media use and the ensuing distraction that arises, affecting academic achievement and productivity. a ten-day online randomized controlled trial with the use of smartphone apps, engaging participants in mindfulness exercises, self-monitoring and mood tracking, was implemented amongst uk university students (n = 143). participants were asked to complete online pre- and post-intervention assessments. results indicated high effect sizes in the reduction of smartphone distraction and improvement scores on a number of self-reported secondary psychological outcomes.
the intervention was not effective in reducing habitual behaviours, nomophobia, or time spent on social media. mediation analyses demonstrated that: (i) emotional self-awareness, but not mindful attention, mediated the relationship between intervention effects and smartphone distraction, and (ii) online vigilance mediated the relationship between smartphone distraction and problematic social media use. the present study provides preliminary evidence of the efficacy of an intervention for decreased smartphone distraction and highlights psychological processes involved in this emergent phenomenon in the smartphone literature. online interventions may serve as complementary strategies to reduce distraction levels and promote insight into online engagement. more research is required to elucidate the mechanisms of digital distraction and assess its implications in problematic use. attentional focus is one of the most fundamental resources and a key to successful, higher-order work [1]. in the attention economy [2], multiple online and offline activities compete for a share of attention [3]. this trend is expected to grow in the face of increasing communication complexity and information overload [4], which is becoming even more prevalent partially due to the vast online accessibility, immediacy and convenience of smartphones, which act as a major motivational pull for engagement [5] and prompt constant multitasking and frequent attentional loss [6]. there are currently more than 3.5 billion smartphone users [7], and smartphone use is an emergent area of research [8][9][10].
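the mediation analyses summarised above (e.g., emotional self-awareness mediating the path from intervention to smartphone distraction) follow product-of-coefficients logic: the indirect effect is the slope of the mediator on the predictor multiplied by the partial slope of the outcome on the mediator. a minimal sketch on synthetic data; the simple ols estimator shown is an assumption, since this passage does not specify the study's exact procedure:

```python
import random

def mean(v):
    return sum(v) / len(v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a = slope of M on X;
    b = partial slope of Y on M controlling for X; indirect = a * b.
    """
    xc = [v - mean(x) for v in x]
    mc = [v - mean(m) for v in m]
    yc = [v - mean(y) for v in y]
    a = dot(xc, mc) / dot(xc, xc)
    # Two-predictor OLS (y ~ m + x) via the normal equations on centred data:
    smm, sxx, smx = dot(mc, mc), dot(xc, xc), dot(mc, xc)
    smy, sxy = dot(mc, yc), dot(xc, yc)
    b = (smy * sxx - sxy * smx) / (smm * sxx - smx ** 2)
    return a * b

# Synthetic data where the mediator fully transmits the effect
# (true indirect effect = 0.8 * 0.5 = 0.4):
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(500)]
m = [0.8 * xi + rng.gauss(0, 0.1) for xi in x]
y = [0.5 * mi + rng.gauss(0, 0.1) for mi in m]
effect = indirect_effect(x, m, y)
```

in practice such indirect effects are tested with bootstrapped confidence intervals rather than read off point estimates, but the decomposition is the same.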
emerging evidence on cognitive function has shown that smartphone availability and daily interruptions compete with higher-level cognitive processes, creating a cognitive interference effect [11][12][13][14][15] associated with poorer cognitive functioning [16][17][18][19], performance impairments in daily life [20], and the potential supplanting of analytical thinking skills by "offloading thinking to the device" [21] (p. 473). in spite of such initial evidence, cognitive correlates within the digital environment remain minimally examined. digital wellbeing apps or mhapps (apps that track an individual's behaviour, i.e., time spent online, or that aid cognitive, emotional and/or behavioural wellbeing) [96] have been suggested as supporting self-awareness and self-regulation [97] and have been utilized in mental healthcare given their functionality, accessibility, higher adherence rates, real-time assessment, low cost, and intervention potential [98,99]. the literature suggests that evidence-based apps may be efficacious in raising self-awareness, mental health literacy and wellbeing, self-efficacy, and ability to cope [96,100,101,102]. online psychological interventions are becoming more prominent in the digital age [103], rendering numerous positive health outcomes [102,104,105,106,107,108], complementing service provision, and being recognized by governmental health institutions (e.g., the national institute for health and care excellence (nice) in the uk) [109]. however, more research is required to determine the comparative effectiveness of these therapies and their components [110] in improving mental health and wellbeing, and rigorous objective evaluation beyond their developers is required. to date, there have been a small number of internet-based interventions associated with device use in university settings.
distraction is not considered a dysfunctional construct by itself, but has been implicated in emotion regulation, adhd, and other disorders [111-113], and has been minimally examined in the context of the digital environment, with no evidence to date as to strategies that could ameliorate its occurrence [114]. therefore, the aim of the present study was to test the preliminary efficacy of an online intervention based on cognitive behavioural principles (i.e., self-monitoring, mood tracking, and mindfulness) to reduce distraction and related psychological outcomes (i.e., stress) among university students. given that: (i) young adults are keen users of smartphone apps, with heightened vulnerability to self-regulation difficulties in technology use [74], (ii) the stakes for academic achievement are high, and (iii) similar processes have been observed between gambling addiction and social media overuse [115], the strategies of mindfulness, activity monitoring, and mood tracking utilized in gambling harm-reduction [86,116,117] were employed in the present study. these strategies were delivered and facilitated through the use of smartphone mhapps and were tested for their efficacy in reducing levels of distraction and related psychological outcomes and for their role in inducing changes in wellbeing [118-120]. the following hypotheses were formulated: hypothesis 1 (h1). compared to the control condition at follow-up, students receiving the intervention would report: (i) lower rates of smartphone distraction, smartphone and social media use duration, impulsivity, stress, problematic social media use, fomo and nomo and (ii) higher levels of mindful attention, emotional self-awareness, and self-efficacy. hypothesis 2 (h2). at follow-up, high distractors (hds) compared to low distractors (lds) (based on a median-split analysis) would show a greater reduction in distraction and significant improvement in outcomes. hypothesis 3 (h3).
(i) mindful attention and (ii) emotional self-awareness will mediate the relationship between the intervention and smartphone distraction. additionally, online vigilance will mediate the relationship between smartphone distraction and problematic social media use. to the authors' knowledge, and given the novelty of the construct of smartphone distraction, this is the first study to conduct a preliminary online randomized controlled trial via mhapps for the reduction of smartphone distraction. the present study fills a gap in the smartphone literature by assessing the efficacy of engaging with behaviour change strategies (i.e., mindfulness, self-monitoring, and mood-tracking) used successfully in gambling harm prevention for the reduction of distraction. the present study tested the efficacy of a ten-day online app-delivered randomized controlled trial (rct) based on cognitive-behavioural principles to reduce distraction (primary outcome) and a number of secondary psychological outcomes: self-awareness, mindful attention, fomo, anxiety, and depression among university students. rcts are considered the gold standard for evaluating intervention effectiveness, despite limitations raised by scholars [121,122], primarily concerning external validity and methodological choices [123]. a pragmatic psychosocial intervention with an rct design was chosen [124]. the duration of the intervention was set, first, as a pragmatic consideration of the free-use period of one of the apps (headspace) and, secondly, due to the preliminary nature of this investigation. consolidated standards of reporting trials (consort) guidelines were followed in the protocol, procedures, and reporting of the intervention [125].
the intervention involved active engagement for ten consecutive days with three smartphone apps serving three different functions: to assess smartphone and social media use, conduct mindfulness sessions with an emphasis on eliminating distraction, and track mood and assess its impact on distraction, stress, self-regulation, and other measures. interaction with the apps was encouraged to: (i) raise emotional awareness of common mood states, such as feeling down, worried, or stressed, through mindfulness, (ii) guide basic smartphone monitoring, focusing skills, and awareness, and (iii) provide insight through mood tracking (table 1). to further support active engagement with these intervention components, eligible participants were asked to keep a daily online activity log for the duration of the intervention (i.e., the number of screen-unlocks and the time of day and number of minutes for which the smartphone was used, usefulness of apps, etc.), to aid time perception of daily activities, raise awareness levels, and help increase the accuracy of self-reporting and adherence to the intervention [126,127]. promoting self-awareness of media use and understanding of one's own behaviour was a key target of the intervention in order to curb distraction. the study was reviewed and approved (no. 2018/226) by the research team's university ethics committee. daily reminders and messages via blogging were sent to help participants maintain their routine and reflect on levels of activity [126,137]. participants were recruited using convenience and snowball sampling techniques. after gaining institutional ethical approval, the study was advertised to students through the research credit scheme, in university lectures and labs, and to the public through social media as an online intervention to assess the reduction of smartphone distraction.
this experimental intervention demanded a significant time involvement and offering incentives increased the chances of participation and completion of the full ten-day intervention. in return for participation, students were offered either research credits or entry in a prize draw (£50 gift cards). participants were included in the study based on two screening criteria: regular smartphone and social media usage. only those affirming both and granting consent were able to continue with participation. following the completion of the survey, participants were allocated to one of the two conditions (intervention [ig] or control [cg] ) and further instructions for participation in the intervention were provided depending on the allocation condition. after initially providing age and gender demographics, participants responded to survey items regarding habitual smartphone and social media behaviour (estimates of duration of use), smartphone distraction severity, trait self-regulation, trait mindfulness and other psychological constructs (detailed in "materials"). the survey took approximately 25 min to complete. a total of 261 participants were recruited who participated in the baseline assessment. of these, 155 were undergraduate psychology students in the uk (59.3%). the sample comprised 47 males (18%) and 214 females (82%), with an age range of 18 to 32 years (m = 20.72, sd = 3.12). figure 1 depicts the flow of participants through the study procedures. after the baseline assessment, during the intervention period two individuals of the intervention group withdrew from the study and were not considered in the analysis. from the 259 remaining participants, seven were removed due to providing 90% incomplete data. the final sample considered at baseline was 252 participants (intention to treat (itt) group) and included 123 participants in the intervention group and 129 in the control group. 
participants who completed both assessments were considered in the per-protocol analysis (pp) (n = 143, 56% of the original sample), with 72 participants comprising the ig and 71 participants the cg. the survey consisted of sociodemographic and usage data (questions related specifically to smartphone and social media use [hours per day]). the demographic questions and user-related questions had open responses (i.e., "how many hours per day do you use social media?"). the following scales were used for the psychological measures of the study: the smartphone distraction scale [138] is a newly developed scale comprising 16 likert-type items. the scale comprises four factors: attention impulsiveness, online vigilance, emotion regulation, and multitasking. scores range from 1 (almost never) to 5 (almost always) with higher scores representing a greater degree of distraction. individual items on the test were summed to give composite scores.
sample items included in the scale are the following: "i get distracted by my phone notifications", and "i constantly check my phone to see who liked my recent post while doing important tasks". the scale has demonstrated good psychometric properties [138] and excellent reliability in the present study with a cronbach's alpha of 0.90 for time 1 (t1) and 0.88 for time 2 (t2). the mindful attention awareness scale (maas) [139] is a 15-item assessment tool that assesses the dispositional tendency of participants to be mindful in everyday life and has been validated among young people, university students and community samples [139, 140] . item statements reflect experience of mindfulness, mindlessness in general and specific daily situations and are distributed across a range of cognitive, emotional, physical, interpersonal, and general domains. response options are based on a six-point likert scale from 1 (almost always) to 6 (almost never). scores were averaged across the 15 items to obtain an overall mindfulness score with higher scores reflecting higher levels of dispositional mindfulness. sample items include "i could be experiencing some emotion and not be aware of it until sometime later" and "i find it difficult to stay focused on what's happening in the present" and exhibited a high degree of internal consistency in the present study with a cronbach's alpha of 0.92 for t1 and 0.93 for t2. the emotional self-awareness scale (esas) [92] was used to assess esa and comprises five variables: recognition, identification, communication, contextualization, and decision making. the scale consists of 32 items (e.g., "i usually know why i feel the way i do") rated from 0 (strongly disagree) to 4 (strongly agree). the total esa score ranged from 0 to 128, and sub-scale items are combined to produce a composite score with higher scores indicating higher esa. 
the esas has presented reasonable internal consistency (cronbach's alpha = 0.72, 0.69, and 0.76 for pre-test, post-test and six-week follow-up) [92] . the scale has demonstrated good validity in prior studies [92, 101] and adequate internal consistency in the present study (cronbach's alpha of 0.87 for t1 and 0.86 for t2). the perceived stress scale (pss) [141] is one of the most widely used scales to assess perceived stress and the degree of unpredictability, uncontrollability, and burden in various situations. the scale used was the 10-item version rated from 0 (never) to 4 (very often) with sample items such as "in the last month, how often have you felt that you were unable to control the important things in your life?", and "in the last month, how often have you felt that you were on top of things?" scores are obtained by summing the items, with the higher score indicating more perceived stress. the scale possesses good psychometric properties [142] and its internal consistency in the present study was 0.86 for t1 and 0.83 for t2. the seven-item generalized anxiety disorder scale (gad-7) [143] is a brief clinical measure that assesses for the presence and severity of generalized anxiety disorder (gad). the self-report scale asks how often during the last two weeks individuals experienced symptoms of gad. total scores range from 0-21 with cut-off scores of 5, 10, and 15 being indicative of mild, moderate, and severe anxiety, respectively. increasing scores on the gad-7 are strongly associated with greater functional impairment in real-world settings. sample items are rated from 0 (not at all) to 3 (nearly every day) and sample items include: "feeling nervous, anxious or on edge" and "trouble relaxing". 
the scale has been widely used and considered a valid and reliable screening tool in previous research, presenting good reliability, factorial and concurrent validity [144, 145] , and demonstrated excellent internal consistency in the present study (α = 0.93 t1 and α = 0.90 for t2). the self-report behavioural automaticity index (srbai) [146] was used to assess habitual strength. the four-item scale was used to assess the degree of automaticity and contained items such as: "using social media on my smartphone is something . . . i do automatically" and "i start doing before i realize i'm doing it". participants indicate their agreement with each item on a likert scale ranging from 1 (does not apply at all) to 7 (fully applies). scores were averaged across items to obtain an overall habit score, with higher scores indicating stronger habitual smartphone use behaviour. the scale has been reported as psychometrically sound in previous studies with good reliability, convergent and predictive validity [146, 147] and demonstrated good internal consistency in the present study with a cronbach's alpha of 0.87 (t1) and 0.89 (t2). the generalized self-efficacy scale (gse) [148] is a widely used psychometric instrument comprising ten items that assess perceived self-efficacy ("i can always manage to solve difficult problems if i try hard enough."). items are rated on a four-point scale ranging from 1 (not at all true) to 4 (exactly true). the gse has demonstrated satisfactory internal consistency and validity across studies [149, 150] . cronbach's alpha in the present study was 0.90 (t1) and 0.88 (t2). the online vigilance scale (ovs) [46] is a 12-item likert scale which assesses a relatively new construct in the internet-related literature, referring to individuals' cognitive orientation towards online content, expressed as cognitive salience, reactivity to online cues and active monitoring of online activity. 
sample items include "my thoughts often drift to online content" and "i constantly monitor what is happening online". scale items are rated on a four-point likert scale from 1 (does not apply at all) to 4 (fully applies). higher mean scores indicate a higher degree of online vigilance. the scale has evidenced sound construct and nomological validity and high internal consistency [46,49,78]. the cronbach's alpha in the present study was 0.89 (t1) and 0.87 (t2). the eight-item barratt impulsiveness scale-alternative version (bis-8) [151] is a psychometrically improved abbreviated version of the bis-11 scale [151], presenting good construct and concurrent validity in young populations [152,153]. the scale assesses impulsive behaviour and poor self-inhibition and uses a four-point likert scale from 1 (do not agree) to 4 (agree very much). sample items include: "i do things without thinking" and "i act on the spur of the moment". the cronbach's alpha coefficient in the present study was 0.85 (t1) and 0.86 (t2). the deficient self-regulation measure [154] is a seven-item scale assessing deficient self-regulation, originally developed for videogame playing and adapted for unregulated internet use [155]. the scale is rated on a seven-point likert scale from 1 (almost never) to 7 (almost always) and has demonstrated sound psychometric properties [154]. the scale was adapted for smartphone use with sample items such as "i would go out of my way to satisfy my urges to use social media" and "i have to keep using social media more and more to get my thrill". the original scale and its adaptation have presented satisfactory psychometric properties [154,155]. the cronbach's alpha coefficient in the present study was 0.89 (t1) and 0.87 (t2).
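the cronbach's alpha coefficients reported for each of the scales above can be computed directly from raw item scores. a minimal numpy sketch follows; the function name and the respondent data are illustrative, not taken from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# illustrative data: 5 respondents x 3 likert items (hypothetical)
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 3))  # -> 0.945
```

with real data, the matrix would contain each participant's responses to the items of one scale at one time point (e.g., the sds at t1).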
the bergen social media addiction scale (bsmas) [115,156-158] is a six-item self-report scale for assessing social media addiction severity based on the framework of the components model of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse) [159]. each item examines the experience of using social media over the past year and is rated on a five-point likert scale from 1 (very rarely) to 5 (very often), producing a composite score ranging from 6 to 30. higher bsmas scores indicate greater social media addiction severity. a sample question from the bsmas is "how often during the last year have you used social media so much that it has had a negative impact on your job/studies?" a cut-off score over 19 indicates problematic social media use [160]. the bsmas has presented sound psychometric properties [115,156-158] with high internal consistency (α = 0.82) [161]. the cronbach's alpha in the present study was 0.91 (t1) and 0.87 (t2). the fear of missing out scale (fomos) [162] includes ten items and asks participants to evaluate the extent to which they experience symptoms of fomo. the scale is rated on a five-point likert scale from 1 (not at all true of me) to 5 (extremely true of me). the statements include: "i fear others have more rewarding experiences than me... i get anxious when i don't know what my friends are up to...it bothers me when i miss an opportunity to meet up with friends...". a total score was calculated by averaging the scores, with higher mean scores indicating a greater level of fomo. this instrument has demonstrated good construct validity [162,163] and good internal consistency, with cronbach's alphas of α = 0.93 [164] and α = 0.87 [64]; α = 0.87 in the present study. the nomophobia questionnaire (nmp-q) [165] comprises 20 items rated using a seven-point likert scale from 1 (strongly disagree) to 7 (strongly agree).
total scores are calculated by summing responses to each item, resulting in a nomophobia score ranging from 20 to 140, with higher scores corresponding to greater nomophobia severity. nmp-q scores are interpreted in the following way: 20 = absence of nomophobia; 21-59 = mild nomophobia; 60-99 = moderate nomophobia; and 100+ = severe nomophobia. the scale has demonstrated good psychometric properties [165,166] with cronbach's alphas of 0.94 [165] and 0.95 [167]. in the present study, internal consistency was 0.89 (t1) and 0.88 (t2). the intervention initially involved the search for and identification of appropriate mobile apps (in both the apple itunes store and the android google play store) for daily self-monitoring of social media activity, mindfulness practices, and mood tracking. the apps needed to be freely available in order to be accessible by the participants. due to time limitations, developing a single app encompassing all three features (mindfulness of distraction, self-monitoring, and mood-tracking) was not pursued; instead, using existing apps was deemed adequate for the study given the ample availability of well-designed products offering these services. the following three freely available smartphone lifestyle apps were utilized: (i) antisocial (screen time): to self-monitor screen time/social media use and for voluntary self-exclusion (the app blocks access after a time limit is reached), (ii) headspace (mindfulness): brief mindfulness sessions, and (iii) pacifica (mood tracking): the app encouraged monitoring and tracking an individual's emotional state at various times during the day to enhance awareness. at the outset of the study, participants were directed to an information statement followed by the digital provision of informed consent before responding to the questions.
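the nmp-q scoring and severity bands described earlier in this section can be expressed as a small helper. this is a sketch; the function name and structure are my own, not taken from the study:

```python
def nmpq_score(responses):
    """Sum 20 NMP-Q items rated 1-7; return (total, severity band).
    Bands: 20 = absence; 21-59 = mild; 60-99 = moderate; 100+ = severe."""
    assert len(responses) == 20 and all(1 <= r <= 7 for r in responses)
    total = sum(responses)
    if total <= 20:
        band = "absence of nomophobia"
    elif total <= 59:
        band = "mild"
    elif total <= 99:
        band = "moderate"
    else:
        band = "severe"
    return total, band

print(nmpq_score([3] * 20))  # total 60 falls in the moderate band
```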
at the end of the survey, they were automatically assigned through the automatic randomization procedure used by the online survey platform qualtrics to either an intervention or a control group. therefore, the intervention was double-blind (to participants and investigators). participants assigned to the ig were asked to download the apps onto their smartphones and to actively engage with all three apps daily for 10 days, which was the maximum free period offered by one of these apps. participants were encouraged to engage with mindfulness/focusing exercises to track their emotional state during the day and monitor patterns in their wellbeing as well as report daily on smartphone usage rates. thereafter, participants received daily notifications via email for the duration of the intervention to remind them to provide online reports about their own social media usage rates, apps accessed, checking frequency, potential self-restriction from use, and satisfaction with the intervention. this process was used to motivate engagement with the apps and accountability. efficacy was evaluated by having a cg condition where participants did not engage in any app use and only completed assessments on the first and tenth day. the target of the intervention was to induce a more mindful state, raise awareness of media and smartphone use, enhance self-regulation and therefore reduce distractions and time spent on smartphones and indirectly on social media by using these apps. the sample size for the rct was determined a priori using g*power v.3 software for the expected increased effectiveness of the intervention compared to control on the primary outcome distraction at post-assessment (t2). empirical reviews [168] have suggested a median standardised target effect size of 0.30 (interquartile range: 0.20-0.38), with the median standardised observed effect size 0.11 (iqr 0.05-0.29). 
the present study was a low-threshold intervention for a non-clinical population, so a mean effect of d = 0.30 was expected. with a power of 1-ß = 0.8, and a significance level of α = 0.05, the sample size was calculated to be n = 95 participants per group to find between-and within-group effects. to account for attrition rates in online interventions and control for both type i and ii error rates, n = 125 participants per group were targeted for recruitment [169] . all data were analysed through spss v.25 (chicago, il, usa). preliminary data analyses included examining the data for data entry errors, normality testing, outliers, and missing data. seven cases were treated with listwise deletion due to a very high percentage of incomplete data at baseline, resulting in a final sample size of 252. for the rest of the dataset, little's missing completely at random (mcar) test showed that data were missing completely at random (p = 0.449). multiple imputation was used to complete the dataset for the baseline analysis and for the non-completers from post-intervention assessment based on patterns of missingness. the data were also checked to ensure that all assumptions for the outlined statistical analyses were satisfied. the kolmogorov-smirnov test was used to evaluate the normal distribution of the variables, and skewness and kurtosis values were examined. for both assessments, all self-report data were normally distributed. assumptions of t-tests included normality, homogeneity of variance, and independence of observations. violations of the assumption of homogeneity of variance were tested using levene's test of equality of variances [170] . descriptive statistics were conducted to summarize the demographic characteristics of the sample as well as scores for the self-reported and performance-based measures of interest (i.e., stress). 
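for illustration, the a priori sample-size logic can be sketched with a normal-approximation formula for a two-group comparison of means. note this is a generic calculation: the exact g*power design options the authors selected are not stated, so the result need not reproduce their figure of 95 per group (repeated-measures designs with correlated assessments require smaller samples than this approximation suggests):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means with standardized effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

print(n_per_group(0.30))  # -> 175 under this approximation
```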
pearson's correlations examined bivariate relationships between smartphone distraction and psychological variables, and frequency of smartphone and social media use (presented in table 3 ). while allocation randomisation aimed to reduce any differences between the groups at baseline, a series of independent sample t-tests for the continuous variables and chi-square tests for the categorical variables (gender, ethnicity and education and relationship status) were conducted to analyse group mean differences and compare the baseline and post-intervention outcomes for the control and intervention groups. these were also applied at post-intervention outcomes for both the control and the intervention group. a decrease from the baseline to the post-intervention assessment was hypothesised for the primary outcomes of smartphone distraction, stress, anxiety, deficient self-regulation, fomo and nomo and an increase was hypothesized for mindful attention, self-awareness and self-efficacy. following the descriptive analysis, data from the baseline and post-intervention assessments were analysed to test each of the hypotheses provided to inform the assessment of the intervention efficacy. two approaches to analysis were adopted. first, to isolate any effect of the intervention, a per-protocol (pp) analysis was conducted to maintain the baseline equivalence of the intervention group produced by random allocation [171] . however, given the limitations to this first analysis approach and to minimise biases resulting from noncompliance, non-adherence, attrition or withdrawal [172, 173] , analysis was performed also on an intention-to-treat (itt) basis [172] . however, these results were not reported in the present study. the effects of the intervention were assessed with an analysis of covariance (ancova), with a minimum significance level at p < 0.05. 
ancova was chosen given that it is quite robust to violations of normality, with minimal effects on significance or power [174,175]; any differences between the groups at baseline on the various assessments were used as covariates in the model and considered artefacts of the randomisation [176]. co-varying for baseline scores supported the analysis: while randomisation aimed to reduce any pre-intervention differences between the groups, residual random differences may have occurred, and accounting for such differences isolated the effect of the intervention. partial eta-squared values were used as measures of strength of association [177]. to better characterize the effect size of the intervention, it has been recommended to use the differences in adjusted means (standardized mean difference effect sizes) between the two groups, as standardising can easily distort judgements of the magnitude of an effect (due to changes to the sample sd but not the population sd, which may bias the estimate of the effect size measure, such as cohen's d) [178]. as cohen's d has been reported in other rct and pre-post intervention studies, cohen's d was estimated [179]. finally, because the sample sizes of the two groups were unequal, type iii sums of squares were used for the ancova. to test the third hypothesis and the hypothesized psychological mechanisms underlying the intervention results, three different mediation analyses were performed across the chosen psychological constructs using spss statistics (version 25) and process (model 4; [180-183]), using a non-parametric bootstrap resampling method with 5000 bootstrapped samples and bias-corrected 95% confidence intervals to probe conditional indirect effects for the variables examined. these analyses were performed on the itt sample with post-intervention results. the t-test results for the pre-test scores found no significant differences between the groups, indicating baseline equivalence.
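the baseline-adjusted comparison that ancova performs is equivalent to regressing post-test scores on a group indicator plus the baseline covariate; the group coefficient is then the adjusted mean difference. a minimal numpy sketch on simulated data (all numbers below are simulated, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
group = np.repeat([0, 1], n // 2)          # 0 = control, 1 = intervention
baseline = rng.normal(58, 8, n)            # simulated baseline distraction
# simulated post-test with a true adjusted group effect of -10 points
post = 0.6 * baseline - 10 * group + rng.normal(0, 5, n)

# ANCOVA via OLS: post ~ intercept + group + baseline
X = np.column_stack([np.ones(n), group, baseline])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
adj_diff = beta[1]                         # baseline-adjusted group difference

# Cohen's d from raw post scores (reported alongside adjusted means)
m1, m0 = post[group == 1].mean(), post[group == 0].mean()
s_pool = np.sqrt((post[group == 1].var(ddof=1) + post[group == 0].var(ddof=1)) / 2)
d = (m1 - m0) / s_pool
print(round(adj_diff, 1), round(d, 2))
```

with the simulated effect of -10, the fitted group coefficient lands close to -10 and d is negative, mirroring the direction of the study's distraction results.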
the post-test scores were significantly lower in the intervention group. for the smartphone distraction scale, the mean pre-test score was 58.06 (sd = 7.69) for the intervention group and 59.72 (sd = 8.08) for the control group. the mean post-test score was 39.70 (sd = 17.67) for the intervention group and 58.78 (sd = 17.47) for the control group, respectively. the pre-test score mean was not significantly different between groups (t = −0.70, ns), but the post-test score mean was significantly lower for the intervention group than for the control group (t = −6.69, p < 0.001). the pattern was similar in the results for the other variables except for nomo, habitual behaviour, and social media use per day. table 2 provides a summary of the baseline t-test and chi-square outcomes and internal consistency for each scale at each measurement period. all scales demonstrated good internal consistency for the sample considered. a series of bivariate pearson's r correlation analyses was conducted to examine the relationships between the sds and the secondary outcomes (table 3). smartphone distraction correlated significantly with problematic social media use (r(252) = 0.63, p < 0.01), anxiety (r(252) = 0.46, p < 0.01), online vigilance (r(252) = 0.51, p < 0.01), automaticity (r(252) = 0.57, p < 0.01), impulsivity (r(252) = 0.45, p < 0.01), deficient self-regulation (r(252) = 0.33, p < 0.01), smartphone use/day (r(252) = 0.31, p < 0.01), fomo (r(252) = 0.28, p < 0.01) and nomo (r(252) = 0.51, p < 0.01). however, smartphone distraction correlated negatively with two variables: mindful attention (r(252) = −0.52, p < 0.01) and self-awareness (r(252) = −0.34, p < 0.01). to test h1 and assess the effect of the intervention on smartphone distraction, two separate ancovas were conducted. first, to isolate any effect of the intervention, a per-protocol analysis was conducted.
as depicted in table 4, ancova analyses for the secondary outcomes were also tested across both pp and itt samples. specifically, for the pp sample, main effects of the experimental group on post-intervention outcomes after controlling for baseline scores were found for self-awareness (f(1, 140). in order to evaluate the effects of the intervention in the intervention group based on level of distraction and to assess whether the effects were consistent in the intervention group independent of degree of distraction, participants were classed into two categories of high distractors vs. low distractors depending on perceived distraction level. a median-split analysis with high vs. low distractor levels was determined by scores above vs. below the median and these were separately analysed inside the intervention group. therefore, a two-way mixed anova with time (pre-test and post-test) as within-factor and distraction severity (high and low distraction) as between-factor was performed to investigate the impact of the intervention (time) and degree of distraction (high vs. low) as assessed at baseline on distraction levels at post-intervention. this analysis was conducted only for the dependent variable for which the interactions were found to be significant.
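the median-split classification of high vs. low distractors described above can be sketched as follows (the baseline sds scores here are hypothetical):

```python
import numpy as np

def median_split(scores):
    """Classify participants as high (True) vs. low (False) distractors
    relative to the sample median; ties at the median fall in the low group."""
    scores = np.asarray(scores)
    return scores > np.median(scores)

baseline_sds = np.array([41, 55, 62, 48, 70, 59, 44, 66])  # hypothetical scores
high = median_split(baseline_sds)
print(high.sum(), (~high).sum())  # -> 4 4
```

the resulting boolean vector would serve as the between-subjects factor in the two-way mixed anova, with time (pre vs. post) as the within-subjects factor.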
more specifically, for mediation 1, the intervention condition was the proposed independent variable, mindful attention was the proposed mediator, and smartphone distraction was the outcome variable. for mediation 2, the intervention condition was the proposed independent variable, self-awareness was the proposed mediator, and smartphone distraction was the outcome variable. for mediation 3, smartphone distraction was the predictor, social media addiction was the outcome, and online vigilance was the mediator. the t1 scores on the constructs examined were included as covariates to account for pre-intervention performance. for mediation 1, it was hypothesized that mindful attention would mediate the relationship between the intervention and smartphone distraction (table 5). no mediation effect was found for mindful attention. a main effect of the intervention on smartphone distraction was found (path a: b = −0.67, t = −8.23, p < 0.001), but there was no main effect of mindful attention on smartphone distraction (path b: b = 1.16, t = 0.67, ns). table 5 reports the mediation effects of mindful attention and emotional self-awareness on the relationship between the intervention and smartphone distraction, and of online vigilance on the relationship between smartphone distraction and social media addiction (n = 252). for mediation 2, it was hypothesized that self-awareness would mediate the relationship between the intervention and smartphone distraction (table 5). an indirect effect via self-awareness was found (a × b: b = −2.02, bca ci = [−3.10, −1.59]), indicating mediation. the intervention significantly predicted self-awareness (path a: b = −6.78, t = −4.32, p < 0.001) and self-awareness significantly predicted smartphone distraction (path b: b = 0.30, t = 4.02, p < 0.001). for mediation 3, it was hypothesized that online vigilance would mediate the relationship between distraction and social media addiction (table 5).
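the path-a / path-b / indirect-effect logic of these mediation models can be sketched with a minimal two-regression mediation and a percentile bootstrap. this is an illustration under assumed simulated data; the study itself used a dedicated mediation procedure with bias-corrected (bca) confidence intervals, which this sketch does not reproduce.

```python
import numpy as np

def simple_mediation(x, m, y, n_boot=2000, seed=0):
    """Estimate path a (x -> m), path b (m -> y controlling for x) and the
    indirect effect a*b, with a percentile bootstrap CI for a*b."""
    x, m, y = (np.asarray(v, float) for v in (x, m, y))

    def slope(pred, out):
        # simple regression: out ~ intercept + pred
        X = np.column_stack([np.ones(len(pred)), pred])
        return np.linalg.lstsq(X, out, rcond=None)[0][1]

    def slope_ctrl(pred, ctrl, out):
        # out ~ intercept + pred + ctrl; return the pred coefficient
        X = np.column_stack([np.ones(len(pred)), pred, ctrl])
        return np.linalg.lstsq(X, out, rcond=None)[0][1]

    a = slope(x, m)           # path a: predictor -> mediator
    b = slope_ctrl(m, x, y)   # path b: mediator -> outcome, controlling x
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for k in range(n_boot):
        i = rng.integers(0, n, n)  # resample cases with replacement
        boots[k] = slope(x[i], m[i]) * slope_ctrl(m[i], x[i], y[i])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return a, b, a * b, (lo, hi)

# hypothetical data with a genuine indirect path x -> m -> y
rng = np.random.default_rng(2)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(scale=0.5, size=300)
y = 0.5 * m + rng.normal(scale=0.5, size=300)
a, b, ab, ci = simple_mediation(x, m, y)  # CI should exclude zero
```

a bootstrap interval that excludes zero is the criterion the paper applies when it reports "indicating mediation".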
an indirect effect via online vigilance was found (a × b: b = 0.02, bca ci = [0.01, 0.03]), indicating mediation. smartphone distraction significantly predicted online vigilance (path a: b = −0.01, t = −3.32, p < 0.001) and online vigilance significantly predicted social media addiction (path b: b = 1.66, t = 4.02, p < 0.001). the present study tested the efficacy of an online intervention employing an integrative set of strategies (mindfulness, self-monitoring and mood tracking) in assisting young adults to decrease levels of smartphone distraction and to improve a variety of secondary psychological outcomes (mindful attention, emotional awareness and perceived self-efficacy), as well as to reduce stress, anxiety, deficient self-regulation, problematic social media use and smartphone-related psychological outcomes (i.e., online vigilance, fomo and nomo). the results provided support for the effectiveness of the online intervention on these outcomes. findings suggested that students receiving the intervention reported a significant reduction in the primary outcome of smartphone distraction, whereas students in the control group reported a non-significant reduction. in terms of the secondary outcomes, participants in the intervention condition experienced a significant increase in self-awareness, mindful attention, and self-efficacy, and a significant decrease in smartphone use/day, impulsivity, stress, anxiety, deficient self-regulation, fomo, and problematic use. no significant results were found for social media use per day, habitual/automated use and nomo.
according to the findings of the present intervention, it appears likely that practising mindfulness and monitoring mood and smartphone activity could lead to a desired behavioural change towards less distraction and less perceived stress, with carry-over effects on self-awareness and self-efficacy, similar to interventions for other mental health problems [83,85,87,91,93,184,185]. these findings are consistent with the growing body of research indicating that mindfulness and self-monitoring are effective strategies to increase self-awareness and reduce stress [84-90,186]. first, mindful attention could enhance awareness of individual media behaviour by (i) raising understanding and awareness of disruptive media multitasking activities (i.e., their predictors, patterns and effects), and (ii) raising awareness of the different strategies for coping with digital distraction and of which strategies are most effective. second, self-monitoring could help in developing an understanding of media habits and of time spent on smartphone and social media activities, and could curb perceived excess smartphone interaction, consistent with other study findings [92,101,187,188]. third, mood tracking could enhance awareness of the triggers of negative mood and of the ensuing negative emotional states that act as drivers of distraction. therefore, strategies employing increased mindfulness practice and self-monitoring could aid attentional capacity and self-awareness, which are considered necessary conditions in the process of changing risky behaviours [189,190]. it appears that the same technologies that may impact negatively on young people may also be leveraged to support healthier smartphone use [100] and deflect psychological distress if evidence-based behaviour change strategies are applied. intervention strategies such as mindfulness and self-monitoring may encourage increased self-awareness and thus help reduce distraction levels and increase mindful attention.
the intervention was also successful in improving secondary outcomes, reducing stress levels and fomo and having a positive effect on emotion regulation and loss of control. distraction appears to be associated with higher access to social media content, a relationship mediated by online vigilance. the salience of smartphone-mediated social interactions (i.e., the salience dimension of online vigilance) has been found to be negatively related to affective wellbeing [49]. it has been reported that emotional dysregulation mediates the relationship between psychological distress and problematic smartphone use [191], that higher online self-regulation moderates the relationship between the need to belong and problematic social media use in young people [192], and that emotion dysregulation mediates the relationship between insecure attachment and addiction [193]. although distraction is an emotion regulation strategy with a protective function against emotionally distressing states [111] and dysphoric mood [194], and is also used for adaptive coping [195,196], deficits in attentional control such as distraction may be implicated in stress, anxiety and other affective disorders [197], including generalized anxiety disorder, whose core cognitive symptoms relate to excessive thoughts and to deficits associated with increased perseverative worry [198]. therefore, higher mindful attention and monitoring of mood may have driven the reduction in distraction and the enhancement of emotional control. mediation analyses were also performed to understand the intervention's effects on smartphone distraction via two mediators, mindful attention and self-awareness, and the effect of online vigilance on the relationship between distraction and social media addiction.
mediation effects were significant for the relationship between intervention effects and distraction via self-awareness, and for the relationship between distraction and problematic social media use via online vigilance, indicating that self-awareness could be a potential behavioural strategy to mitigate distraction levels. however, the relationship between intervention effects and distraction via mindful attention as a mediator was not significant. therefore, in the present study it appeared that, despite its statistically significant increase, mindful attention was not a mediating factor for distraction in the intervention. mindful attention could potentially be the vehicle for increasing emotional self-awareness [93,184,199], prompting more controlled smartphone interactions. by contrast, online vigilance was found to be a mechanism associating smartphone distraction with problematic social media use, given the strong preoccupation with content prompted even by the mere presence of smartphones, confirming previous findings [200]. therefore, despite its protective function, distraction may concurrently serve as a gateway to increased smartphone engagement and time spent on devices. time spent alone is not a defining factor: it has been argued instead that the interaction of content, context and time spent, as well as the meaning attached to these interactions, may determine the level of problematic media use [5,201]. within smartphone use, distraction is a salient behaviour, with evidence that distraction and mind-wandering are associated with online vigilance, which via reduced mindfulness may be associated with decreased wellbeing [78]. furthermore, inattention symptoms have been implicated in risk for smartphone addiction and problematic smartphone use [202].
therefore, handling distraction, which has neural correlates [203], may be a means of resisting the cue reactivity implicated in smartphone addiction, reduced cognitive performance [113] and obsessive-compulsive symptoms [204]. further research is required to assess these cognitive and emotive dimensions of smartphone distraction and their effects on engagement, in line with current trends [205]. however, it has been proposed that the construct of distraction extends beyond the debate on smartphone addiction by considering the role of the smartphone in coping with negative emotions and by addressing preference for online vs. offline communication [206]. research is still conflicted regarding the cognitive function of distraction: experimental smartphone research has provided initial evidence that social apps, compared with non-social apps, do not capture attention despite their perceived high reward value [207,208], but other studies support a high interference effect [209]. therefore, more research is required to elucidate the mechanisms of digital distraction and to delineate how digital technologies, individual choices and contexts affect individuals' attention spans and attentional loss, as well as mental health conditions such as adhd and anxiety and overall psychological wellbeing [210]. the present rct assessed the effectiveness of mindfulness, self-monitoring and mood tracking, delivered through interaction with smartphone apps, in reducing distraction arising from recreational smartphone and social media use. the findings suggest that engaging with these practices was effective in reducing distraction levels, stress, anxiety, deficient self-regulation, impulsivity and smartphone-related psychological outcomes, and in improving mindful attention, emotional self-awareness and self-efficacy. some limitations need to be taken into consideration.
first, a convenience sample of university students was used, which hinders the generalizability of the findings to other groups (e.g., older adults or children). however, this population was considered of primary interest for the study because university students are digital natives liable to experience negative academic consequences due to their vulnerability to problematic smartphone use [211]. the effect sizes found in this rct were medium to large for the variables examined, exceeding the range expected for low-intensity, non-clinical interventions [212]. however, as a result of the recruitment protocol, the intervention may have attracted participants who had an interest in the outcomes and a potential self-assessed vulnerability; the voluntary, self-selected nature of participation could therefore have introduced a significant degree of participant response and confirmation bias [213], inflating the medium to high effect sizes. additionally, the high drop-out rates, consistent with other online rcts [214], could have significantly affected the strength of the findings [215], and the use of a passive control group might have led to an overestimation of the effects [216]. because market-available apps were used, actual adherence and engagement with the intervention were not accounted for, nor were reasons for dropout [217]. therefore, the findings should be treated with caution and replicated in future designs. future studies should systematically address response bias and include methods in the rct to improve the accuracy of self-reported data [218,219]. combining self-report with behavioural data [220], ecological momentary sampling [221], and psycho-informatics and digital phenotyping (the provision of a digital footprint for prognostic, diagnostic and intervention purposes) [222] could enhance the ecological validity of the study.
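the "medium to large" characterization above can be made concrete: using the post-test summary statistics reported earlier (intervention m = 39.70, sd = 17.67; control m = 58.78, sd = 17.47), cohen's d for the primary outcome works out to roughly −1.1, a large effect by conventional benchmarks. the sketch below assumes approximately equal group sizes for the pooled sd; it is a back-of-the-envelope check, not the authors' calculation.

```python
import numpy as np

def cohens_d_from_stats(m1, sd1, m2, sd2):
    """Cohen's d from group means and SDs (equal-n pooled SD approximation)."""
    return (m1 - m2) / np.sqrt((sd1 ** 2 + sd2 ** 2) / 2)

# reported post-test summary statistics for the smartphone distraction scale
d = cohens_d_from_stats(39.70, 17.67, 58.78, 17.47)  # ≈ -1.09, a large effect
```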
equally, incorporating the measurement of brain activity using magnetic resonance imaging (mri) in interventions could greatly enhance the accuracy of assessment of prevention efforts and the understanding of the role of neurobiology in behaviour [223,224]. the effect of gender on intervention outcomes was not examined because this university student sample consisted mainly of female participants. considering the gender differences reported in smartphone use [48,225] and in attention processes [226], future studies should explore this effect, which could have significant implications for the intervention and prevention of attention failures and poor student outcomes [227]. additionally, the study design could not provide a longer intervention period, due to the lack of freely available apps for participants to use, and did not include a second follow-up period to track the maintenance of long-term effects, as is customary in rcts, or a qualitative process evaluation for a critical understanding of the impact of the intervention components [228]. finally, social, economic and family conditions, as well as other issues critical to young people's psycho-emotional states and sense of identity, were not accounted for in the present study [229,230]. despite these limitations, the study provides initial evidence for the efficacy of strategies for curbing smartphone distraction and adds to the limited body of knowledge of cognitive-emotive processes in smartphone and social media use [205]. it also contributes to the still limited knowledge on interventions for smartphone distraction and constitutes a simple, first-step, low-key intervention programme, which may be practised by individuals seeking support for attentional difficulties on a self-help basis or within a stepped-care clinical framework for prevention purposes [96].
experiencing distraction from smartphones and social media content interferes with high-level cognitive processes and has productivity and emotional implications (i.e., stress) in various contexts and situations [51,231-234], further compromised by digital triggers and the structural design of smartphones prompting salience and reactivity [235]. these results have clinical implications, as low-intensity interventions may prevent small-scale emotional problems from developing into clinical disorders and can reduce the incidence of mental health problems [236,237]. practitioners may also find value in using mindfulness and monitoring practices as an adjunct to therapy for problematic smartphone use. it may be of high value for academic institutions to build specific university-based programmes on maintaining balanced technology use, tackling unregulated use and promoting positive smartphone use, or guiding students towards suitable methods for addressing attention problems more effectively [238,239]. apps may also be utilized by schools for students who face attentional or excessive-use difficulties and to assist young people in becoming aware of their emotions in preparation for learning more adaptive coping strategies. distraction is an emergent phenomenon in the digital era, considering that the boundaries between work and recreation are increasingly blurred, with both domains arguably dependent on the use of digital media [240]. more research on attentional processes within smartphone use could aid the understanding of these processes and of the impacts experienced across different age groups. low-cost psychological interventions may be effective in addressing precursors of problematic behaviours and enhancing wellbeing dimensions.
the aim of the present study was to assess the efficacy of an rct combining evidence-based cognitive-behavioural strategies to reduce distraction from smartphone use; to increase mindful attention, emotional self-awareness and self-efficacy; and to reduce stress, anxiety, deficient self-regulation and smartphone-related psychological outcomes (i.e., online vigilance, fomo and nomo). second, it tested the mediating effects of mindful attention and self-awareness on the relationship between the intervention and distraction, and of online vigilance on the relationship between distraction and social media addiction. findings suggested that students receiving the intervention reported a significant reduction in the primary outcome of smartphone distraction, whereas students in the control group reported a non-significant reduction. in terms of the secondary outcomes, participants in the intervention condition experienced a significant increase in self-awareness, mindful attention and self-efficacy and a significant decrease in smartphone use/day, impulsivity, stress and anxiety levels, fomo, deficient self-regulation and problematic social media use. no significant results were found for duration of social media use/day, habitual use and nomo. mediation effects were also observed: self-awareness mediated the effect of the intervention on distraction, and online vigilance mediated the relationship between distraction and problematic social media use. mindful attention was not found to be a mediating process for reducing distraction. research on digital distraction is still scarce, yet there is increasing interest in cognitive impacts within digital environments. more evidence is required to assess the nature of attention failures and difficulties occurring in both normative and excessive online use.
this evidence would allow an understanding of the prevalence and the nature of these difficulties, as well as their integration into media literacy and risk-prevention interventions, enhancing wellbeing, productivity and academic performance. the authors declare no conflict of interest.

references
- the forgotten frontier of attention
- cognition in the attention economy
- attention economies
- information and communication technology overload and social networking service fatigue: a stress perspective
- a 'control model' of social media engagement in adolescence: a grounded theory analysis
- the myth of multitasking
- number of smartphone users worldwide from
- antecedents and consequences of problematic smartphone use: a systematic literature review of an emerging research area
- problematic smartphone use: investigating contemporary experiences using a convergent design
- how to overcome taxonomical problems in the study of internet use disorders and what to do with "smartphone addiction"?
- freedom makes you lose control: executive control deficits for heavy versus light media multitaskers and the implications for advertising effectiveness
- batching smartphone notifications can improve well-being
- the digital expansion of the mind: implications of internet usage for memory and cognition
- brain drain: the mere presence of one's own smartphone reduces available cognitive capacity
- the mere presence of a cell phone may be distracting: implications for attention and task performance
- is the smartphone a smart choice? the effect of smartphone separation on executive functions
- smartphones and cognition: a review of research exploring the links between mobile technology habits and cognitive functioning
- emotion-related impulsivity moderates the cognitive interference effect of smartphone availability on working memory
- answering the missed call: initial exploration of cognitive and electrophysiological changes associated with smartphone use and abuse
- smartphone addiction, daily interruptions and self-reported productivity
- the brain in your pocket: evidence that smartphones are used to supplant thinking
- understanding smartphone usage in college classrooms: a long-term measurement study
- the distracted mind: ancient brains in a high-tech world
- the effect of cellphones on attention and learning: the influences of time, distraction, and nomophobia
- media multitasking and memory: differences in working memory and long-term memory
- the neural bases of distraction and reappraisal
- rethinking rumination
- emotion-regulation choice
- cognitive strategies to regulate emotions: current evidence and future directions
- the emerging field of emotion regulation: an integrative review
- the distracted mind: ancient brains in a high-tech world
- an empirical examination of the educational impact of text message-induced task switching in the classroom: educational implications and strategies to enhance learning
- the attentional cost of receiving a cell phone notification
- integrating emotion regulation and emotional intelligence traditions: a meta-analysis
- distraction-conflict theory: progress and problems
- distraction as a source of drive in social facilitation research
- association of screen time with self-perceived attention problems and hyperactivity levels in french students: a cross-sectional study
- the relationship between media multitasking and attention problems in adolescents: results of two longitudinal studies
- the impact of mobile phone usage on student learning
- mobile social media usage and academic performance
- cell phone usage and academic performance: an experiment
- analysis of problematic smartphone use across different age groups within the 'components model of addiction'
- self-reported dependence on mobile phones in young adults: a european cross-cultural empirical survey
- mobile technology and social media: the "extensions of man" in the 21st century
- the effects of mobile phone use on academic performance: a meta-analysis
- permanently online and permanently connected: development and validation of the online vigilance scale
- habits make smartphone use more pervasive
- modeling habitual and addictive smartphone behavior
- the relationship between online vigilance and affective well-being in everyday life: combining smartphone logging with experience sampling
- does impulsivity relate to perceived dependence on and actual use of the mobile phone?
- interactions of impulsivity, general executive functions, and specific inhibitory control explain symptoms of social-networks-use disorder: an experimental study
- overlapping attentional networks yield divergent behavioral predictions across tasks: neuromarkers for diffuse and focused attention?
- in-class distractions: the role of facebook and the primary learning task
- attentional disengagements in educational contexts: a diary investigation of everyday mind-wandering and distraction
- motivators of online vulnerability: the impact of social network site use and fomo
- out of sight is not out of mind: the impact of restricting wireless mobile device use on anxiety levels among low, moderate and high users
- the extended iself: the impact of iphone separation on cognition, emotion, and physiology
- smartphone restriction and its effect on subjective withdrawal related scores
- fear of missing out, need for touch, anxiety and depression are related to problematic smartphone use
- fear of missing out (fomo): overview, theoretical underpinnings, and literature review on relations with severity of negative affectivity and problematic technology use
- fear of missing out as a predictor of problematic social media use and phubbing behavior among flemish adolescents
- fear of missing out (fomo) is associated with activation of the right middle temporal gyrus during inclusion social cue
- fear of missing out (fomo), the smartphone, and social media may be affecting university students in the middle east
- social and emotional correlates of the fear of missing out
- the association between smartphone use, stress, and anxiety: a meta-analytic review
- cross-generational analysis of predictive factors of addictive behavior in smartphone usage
- social norms and e-motions in problematic social media use among adolescents
- the relationship between dysfunctional metacognitive beliefs and problematic social networking sites use
- when the smartphone goes offline: a factorial survey of smartphone users' experiences of mobile unavailability
- non-social smartphone use mediates the relationship between intolerance of uncertainty and problematic smartphone use: evidence from a repeated-measures study
- the serially mediated relationship between emerging adults' social media use and mental well-being
- report submitted to the uk parliament science and technology committee (impact of social media and screen-use on young people's health inquiry)
- the evolution of internet addiction: a global perspective
- internet addiction in students: prevalence and risk factors
- social media use and adolescent mental health: findings from the uk millennium cohort study
- are smartphones really that bad? improving the psychological measurement of technology-related behaviors
- the power of mobile notifications to increase wellbeing logging behavior
- mind-wandering and mindfulness as mediators of the relation between online vigilance and well-being
- mindfulness-based interventions in context: past, present, and future
- preliminary evidence on the efficacy of mindfulness combined with traditional classroom management strategies
- mindfulness practices in addictive behavior prevention, treatment, and recovery
- mindfulness for adolescents: a promising approach to supporting emotion regulation and preventing risky behavior
- how does mindfulness meditation work? proposing mechanisms of action from a conceptual and neural perspective
- the use of personalized behavioral feedback for online gamblers: an empirical study
- online-based mindfulness training reduces behavioral markers of mind wandering
- behavioural tracking, responsible gambling tools and online voluntary self-exclusion: implications for problem gamblers
- a randomised controlled trial of a brief online mindfulness-based intervention
- review of self-exclusion from gambling venues as an intervention for problem gambling
- a randomized controlled pilot study of a brief web-based mindfulness training
- cognitive control in media multitaskers
- audio-guided mindfulness training in schools and its effect on academic attainment: contributing to theory and practice
- self-monitoring using mobile phones in the early stages of adolescent depression: randomized controlled trial
- mindfulness and its relationship to emotional regulation
- attentional biases to emotional stimuli: key components of the rdoc constructs of sustained threat and loss
- co-designing a mobile gamified attention bias modification intervention for substance use disorders: participatory research study
- mental health smartphone apps: review and evidence-based recommendations for future developments
- advances in clinical staging, early intervention, and the prevention of psychosis
- the efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials
- a randomized controlled trial of three smartphone apps for enhancing public mental health
- engagement in mobile phone app for self-monitoring of emotional wellbeing predicts changes in mental health: moodprism
- putting the 'app' in happiness: a randomised controlled trial of a smartphone-based mindfulness intervention to enhance wellbeing
- a population-based survey
- effects of brief mindfulness-based interventions on health-related outcomes: a systematic review
- mobile health technology interventions for suicide prevention: systematic review
- state of the field of mental health apps
- mindfulness-based mobile applications: literature review and analysis of current features
- current research and trends in the use of smartphone applications for mood disorders
- technology matters: the human touch in a digital age, a blended approach in mental healthcare delivery with children and young people
- a systematic review of internet-based therapy for the treatment of addictions
- perceived functions of worry among generalized anxiety disorder subjects: distraction from more emotionally distressing topics?
- distraction by smartphone use during clinical practice and opinions about smartphone restriction policies: a cross-sectional descriptive study of nursing students
- neural correlates of opposing effects of emotional distraction on working memory and episodic memory: an event-related fmri investigation
- media multitasking, attention, and distraction: a critical discussion
- the relationship between addictive use of social media and video games and symptoms of psychiatric disorders: a large-scale cross-sectional study
- gambling among adolescents and emerging adults: a cross-cultural study between portuguese and english youth
- the extent and distribution of gambling-related harms and the prevention paradox in a british population survey
- our future: a lancet commission on adolescent health and wellbeing
- prevalence of problematic smartphone usage and associated mental health outcomes amongst children and young people: a systematic review, meta-analysis and grade of the evidence
- effectiveness of online mindfulness-based interventions in improving mental health: a review and meta-analysis of randomised controlled trials
- are rcts the gold standard?
- getting off the "gold standard": randomized controlled trials and education research
- understanding and misunderstanding randomized controlled trials
- a new generation of pragmatic trials of psychosocial interventions is needed
- consort statement: extension to cluster randomised trials
- accountability: a missing construct in models of adherence behavior and in clinical practice
- mind wandering (internal distractibility) in adhd: a literature review
- tracking distraction: the relationship between mind-wandering, meta-awareness, and adhd symptomatology
- social cognitive theory of self-regulation
- overcoming distractions during transitions from break to work using a conversational website-blocking system
- effect of brief gaming abstinence on withdrawal in adolescent at-risk daily gamers: a randomized controlled study
- effectiveness of brief abstinence for modifying problematic internet gaming cognitions and behaviors
- short abstinence from online social networking sites reduces perceived stress, especially in excessive users
- mobile apps for mood tracking: an analysis of features and user reviews
- development and pilot evaluation of smartphone-delivered cognitive behavior therapy strategies for mood- and anxiety-related problems: moodmission
- efficacy of a text messaging (sms) based smoking cessation intervention for adolescents and young adults: study protocol of a cluster randomised controlled trial
- development and validation of the smartphone distraction scale (sds): a cognitive and emotion regulation construct
- the benefits of being present: mindfulness and its role in psychological well-being
- psychometric assessment of the mindful attention awareness scale (maas) among chinese adolescents
- a global measure of perceived stress
- review of the psychometric evidence of the perceived stress scale
- a brief measure for assessing generalized anxiety disorder: the gad-7
- quick and easy self-rating of generalized anxiety disorder: validity of the dutch web-based gad-7, gad-2 and gad-si
- reliability and validity of the portuguese version of the generalized anxiety disorder (gad-7) scale
- towards parsimony in habit measurement: testing the convergent and predictive validity of an automaticity subscale of the self-report habit index
- intention and automaticity toward physical and sedentary screen-based leisure activities in adolescents: a profile perspective
- generalized self-efficacy scale (in: measures in health psychology: a user's portfolio. causal and control beliefs)
- assessment of perceived general self-efficacy on the internet: data collection in cyberspace
- validation of the general self-efficacy scale in psychiatric outpatient care
- psychometrically improved, abbreviated versions of three classic measures of impulsivity and self-control
- a test of the psychometric characteristics of the bis-brief among three groups of youth
- new tricks for an old measure: the development of the barratt impulsiveness scale-brief (bis-brief)
- guitar hero or zero?: fantasy, self-esteem, and deficient self-regulation in rhythm-based music video games
- unregulated internet usage: addiction, habit, or deficient self-regulation?
- psychometric validation of the persian bergen social media addiction scale using classic test theory and rasch models
- social networking addiction, attachment style, and validation of the italian version of the bergen social media addiction scale
- portuguese validation of the bergen facebook addiction scale: an empirical study
- a 'components' model of addiction within a biopsychosocial framework
- problematic social media use: results from a large-scale nationally representative adolescent sample
- psychometric testing of three chinese online-related addictive behavior instruments among hong kong university students
- motivational, emotional, and behavioral correlates of fear of missing out
- adaptation of fear of missing out scale (fomos): turkish version validity and reliability study
- establishing validity of the fear of missing out scale with an adolescent population
- exploring the dimensions of nomophobia: development and validation of a self-reported questionnaire
- addicted to cellphones: exploring the psychometric properties between the nomophobia questionnaire and obsessiveness in college students
- smartphone withdrawal creates stress: a moderated mediation model of nomophobia, social threat, and phone withdrawal context
- a study of target effect sizes in randomised controlled trials
- a simplified guide to randomized controlled trials
- reading and understanding multivariate statistics
- intention-to-treat concept: a review
- intention-to-treat principle
- randomisation and baseline comparisons in clinical trials
- statistical methods for comparative studies: techniques for bias reduction
- parametric ancova and the rank transform ancova when the data are conditionally non-normal and heteroscedastic
- design and analysis: a researcher's handbook
- cautionary note on reporting eta-squared values from multifactor anova designs
- measures of effect size for comparative studies: applications, interpretations, and limitations
- a power primer
analysis of mechanisms and their contingencies: process versus structural equation modeling. australas mark regression-based statistical mediation and moderation analysis in clinical research: observations, recommendations, and implementation statistical mediation analysis with a multicategorical independent variable process: a versatile computational tool for observed variable mediation, moderation, and conditional process modelling; white paper assessing the relationship between mindful awareness and problematic internet use among adolescents mindfulness training improves working memory capacity and gre performance while reducing mind wandering mindfulness facets and problematic internet use: a six-month longitudinal study a randomised controlled trial to reduce sedentary time in young adults at risk of type 2 diabetes mellitus: project stand (sedentary time and diabetes) reactive self-monitoring: the effects of response desirability, goal setting, and feedback the transtheoretical model of health behavior change in search of how people change. applications to addictive behaviors psychological distress, emotion dysregulation, and coping behaviour: a theoretical perspective of problematic smartphone use. 
advance online publication problematic social networks use in german children and adolescents-the interaction of need to belong, online self-regulative competences, and age insecure attachment and addiction: testing the mediating role of emotion dysregulation in four potentially addictive behaviors rumination, distraction and mindful self-focus: effects on mood, dysfunctional attitudes and cortisol stress response connection, meaning, and distraction: a qualitative study of video game play and mental health recovery in veterans treated for mental and/or behavioral health problems a longitudinal study of rumination and distraction in formerly depressed inpatients and community controls the impact of anxiety-inducing distraction on cognitive performance: a combined brain imaging and personality investigation attentional control in ocd and gad: specificity and associations with core cognitive symptoms mindfulness and mind-wandering: finding convergence through opposing constructs hard to resist? the effect of smartphone visibility and notifications on response inhibition is excessive online usage a function of medium or activity? 
empirical pilot study attention deficit hyperactivity symptoms predict problematic mobile phone use neural correlates of cue reactivity in individuals with smartphone addiction evaluation of obsessive-compulsive symptoms in relation to smartphone use cognitive correlates in gaming disorder and social networks use disorder: a comparison the psychology of smartphone: the development of the smartphone impact scale (sis) social smartphone apps do not capture attention despite their perceived high reward value use of instant messaging predicts self-report but not performance measures of inattention, impulsiveness, and distractibility attention deficit hyperactivity disorder-symptoms, social media use intensity, and social media use problems in adolescents: investigating directionality silence your phones': smartphone notifications increase inattention and hyperactivity symptoms fear of missing out is associated with disrupted activities from receiving smartphone notifications and surface learning in college students oxford guide to low intensity cbt interventions information bias in health research: definition, pitfalls, and adjustment methods dropout from internet-based treatment for psychological disorders reporting attrition in randomised controlled trials waiting list may be a nocebo condition in psychotherapy trials: a contribution from network meta-analysis adherence in internet interventions for anxiety and depression: systematic review beyond self-report: tools to compare estimated and real-world smartphone use time distortion associated with smartphone addiction: identifying smartphone addiction via a mobile application (app) the descriptive epidemiology of sedentary behaviour introduction to special issue: adolescent and emerging adult development in an age of social media digital phenotyping and mobile sensing: rapidly evolving interdisciplinary research endeavor neuroimaging and biomarkers in addiction treatment the neurobiology of addiction: the perspective 
from magnetic resonance imaging present and future how age and gender affect smartphone usage gender differences in visual reflexive attention shifting: evidence from an erp study who is better adapted in learning online within the personal learning environment? relating gender differences in cognitive attention networks to digital distraction importance of mixed methods in pragmatic trials and dissemination and implementation research social determinants of mental health: where we are and where we need to go what has economics got to do with it? the impact of socioeconomic factors on mental health and the case for collective action orienting of attention examining the implications of internet usage for memory and cognition: prospects and promise the digital expansion of the mind: implications of internet usage for memory and cognition mindfulness broadens awareness and builds eudaimonic meaning: a process model of mindful positive emotion regulation online-specific fear of missing out and internet-use expectancies contribute to symptoms of internet-communication disorder the promise of well-being interventions for improving health risk behaviors psychological well-being as part of the public health debate? insight into dimensions, interventions, and policy a. ehealth literacy among college students: a systematicreview with implications for ehealth education how to strengthen patient-centredness in caring for people with multimorbidity in europe? 
key: cord-332181-k90i33gp authors: degeling, chris; kerridge, ian title: hendra in the news: public policy meets public morality in times of zoonotic uncertainty date: 2012-12-29 journal: soc sci med doi: 10.1016/j.socscimed.2012.12.024 sha: doc_id: 332181 cord_uid: k90i33gp public discourses have influence on policymaking for emerging health issues. media representations of unfolding events, scientific uncertainty, and real and perceived risks shape public acceptance of health policy and therefore policy outcomes. to characterize and track views in popular circulation on the causes, consequences and appropriate policy responses to the emergence of hendra virus as a zoonotic risk, this study examines coverage of this issue in australian mass media for the period 2007–2011. results demonstrate that the predominant explanation for the emergence of hendra became the encroachment of flying fox populations on human settlement. depictions of scientific uncertainty as to whom and what was at risk from hendra virus promoted the view that flying foxes were a direct risk to human health. descriptions of the best strategy to address hendra have become polarized between recognized health authorities, who advocate individualized behaviour changes to limit risk exposure, and populist calls for flying fox control and eradication. less than a quarter of news reports describe the ecological determinants of emerging infectious disease or upstream policy solutions.
because flying foxes rather than horses were increasingly represented as the proximal source of human infection, existing policies of flying fox protection became equated with government inaction, and the plight of those affected by flying foxes became representative of a moral failure. these findings illustrate the potential for health communications for emerging infectious disease risks to become entangled in other political agendas, with implications for the public's likelihood of supporting public policy and risk management strategies that require behavioural change or seek to address the ecological drivers of incidence. hendra virus is a zoonosis, which means it can be transmitted across species boundaries from its natural host (flying foxes or fruit bats) to cause infection and disease in domestic animals and people. emerging bat-borne infections such as hendra are a pressing global public health concern (wong, lau, woo, & yuen, 2007). hendra is highly lethal to humans and endemic in australian flying fox populations. like other new and re-emerging infectious diseases, changes in the incidence and cross-species transmissibility of hendra are likely to hinge on the ecological impacts of natural events and human activities (jones et al., 2008). indeed it is clear that hendra has 'spilled over' from flying fox populations into horses, and then people and pet dogs, through their increasingly intense interaction in rural and peri-urban areas. importantly, the impacts of these interactions are bi-directional; anything that induces 'stress' in flying foxes is thought to amplify viral shedding into the environment (parrish et al., 2008). efforts to disrupt flying-fox encroachment on human settlement, therefore, are likely to increase the risk of human infection. current hopes of prevention rest on the development of a vaccine for horses, the only confirmed intermediate host for hendra transmission to humans.
in the interim, public health responses to hendra have focussed on education and behaviour modification amongst high-risk groups such as veterinarians, horse owners and people who work in equine industries, and the institution of disease surveillance and quarantine measures involving both human and animal health sectors (adamson, marich, & roth, 2011). since hendra virus first emerged in 1994 there have been only seven human infections and four human deaths. however, concerns in australia about the risks to human health escalated sharply in 2011 when outbreaks in horses occurred over a greater geographic area and at a far higher frequency than past 'hendra seasons' (field, crameri, kung, & wang, 2012). concern about the risks posed by the bat-borne virus was further heightened by the subsequent and unexpected discovery of a pet dog with a naturally acquired infection (tapim & withey, 2011). many australians, particularly those living in regional areas, already consider flying foxes to be a noisy and unhygienic pest. towns and city suburbs in north-eastern parts of the country can find themselves 'under siege' from large groups of roosting 'fruit bats', with these 'camps' or colonies sometimes comprised of several thousand individuals. aside from the impacts of noise and faeces, flying fox colonies can 'fly in and feed' on commercial orchards, causing significant economic losses for fruit growers. for this reason there has been a longstanding practice of shooting flying foxes and disrupting their colonies with sirens, smoke bombs and helicopters to deter them from congregating in agricultural and residential areas. whereas in the past diseases of wild animals were thought to pose limited risk to humans, the connection between human activity, particularly changes in land use, and changes in patterns of infectious disease is becoming increasingly clear (newell et al., 2005).
the urbanisation of coastal habitats is thought to have had a number of effects on flying fox populations in eastern australia, restricting access to their normal foods and forcing them to both turn to, and increasingly rely on, commercial orchards and urban gardens for sustenance (plowright et al., 2008). and because food is scarce, flying foxes are also less inclined to migrate, leading to the formation of permanent camps in agricultural and residential areas. these effects are also exacerbated by natural events that further limit the availability of natural and horticultural food resources, such as cyclones and floods (plowright et al., 2011). as large groups of flying foxes congregate in and around human settlements this gives the impression that the population is thriving, whereas this is more a result of a reduction in their natural habitat, and several species of flying foxes are, in fact, vulnerable to extinction (welbergen, klose, markus, & eby, 2008). for this reason, since 1986 in nsw and 1994 in queensland, flying fox camps have been legally protected from human interference to try and rehabilitate the population. in 2008 the queensland government took the further step of refusing all applications by farmers for permits to shoot flying foxes to protect their crops, both on ecological grounds and because attempts to break up established camps may be counterproductive. it was argued that any measures that stressed flying foxes would increase the risk of hendra spilling over into horses; and dispersing specific camps would likely be ineffective as there would be nothing to stop the colony re-establishing itself nearby, and once again in conflict with human settlement. thus, what was designed to be legislation to protect a vulnerable species of native animal became a policy instrument with which to try and limit zoonotic risk exposure.
this transition from conservation-focussed environmental policy to public health policy has been incremental, such that the policy aim was not to solve either problem but to manage areas of emerging concern. yet as people directly affected by flying foxes have struggled to 'live with' the growing throngs of unwelcome neighbours, many have come to believe that by protecting flying fox populations and advocating the adoption of low-risk behaviours towards them, governments and health authorities have put the health and welfare of another species above that of human populations. experiences with bovine spongiform encephalopathy (bse) and severe acute respiratory syndrome (sars) indicate that policymaking for a new zoonotic disease is always difficult and prone to polarising different stakeholders in affected communities (phillips, bridgeman, & ferguson-smith, 2000; singer et al., 2003). a key feature in matters surrounding animal disease control is that radically different policy responses, such as wholesale culling or vaccination, can typically be presented as plausible points of intervention. for this reason decisions surrounding what should be done about new or pressing zoonotic risks are often contested, and finding the right balance between over-caution, laissez-faire approaches, and determining the weight given to different socioeconomic factors can be difficult. for example, a lack of due diligence can expose the population to the risk of infection for far longer than necessary, as was the case with bse. conversely, the overzealous application of the precautionary principle can destroy the livelihood of a population, impact its food supply, limit development, and entrench or exacerbate socioeconomic disadvantage (world health organization, 2004).
furthermore, when the zoonotic risk is new, attempts to explain the choice of policy are likely to be further complicated by uncertainty regarding the precise risk of infection, the drivers of disease emergence, and the measures needed to control the risk of infection. therefore, public support for policies that disrupt people's lives and communities or place precautionary limits on the development of natural resources might depend on their understanding of the causes and risks of zoonotic outbreaks, their trust in government agencies, and the likely consequences for them of different public health responses. in this regard news media are an important source of information for the public (brodie, hamel, altman, blendon, & benson, 2003), particularly with regard to the complex relationship between the environment and human health and to the risk posed by animals. for not only does the media reflect the issues that concern people, it also impacts upon the issues the public thinks about, and the criteria through which they think about them, influencing people's understanding of what is at stake, of who or what is to blame, of who is at risk, and of what can be done to address the situation (entman, 1993; scheufele, 2000). in this regard, while individual journalists may privilege independence, accuracy and balance, media organisations are rarely neutral and can both influence public opinion themselves and be used by industry, politicians and interest groups to influence public perception of particular issues and/or promote their own ends (callaghan & schnell, 2001; terkildsen, schnell, & ling, 1998). therefore the effects of how the media chooses to raise to public prominence, and then 'frame', events and opinions surrounding a new health issue such as an emerging zoonosis can be recursive.
for example, because elected officials, politicians and policy advisors are responsive to public opinion, public perceptions about the causes of a disease threat like hendra virus might influence the level of public support for specific health policies, and, thereby, ongoing political debate (gollust, lantz, & ubel, 2009; harrabin, coote, & allen, 2003). in this paper we analyse representations in the australian media of the causes and consequences of the emergence of hendra as a zoonotic risk, focussing on how the unavoidable uncertainty about its causes and likely consequences shaped perceptions of the health policies put in place in the attempt to mitigate the risk of human infection. in short, we seek to examine media representations of an infectious threat within a broader policy context. because flying foxes are a highly visible, widespread and relatively novel source of infectious risk for humans, the emergence of hendra virus presents an opportunity to track and compare media representations of disease 'events', health policy goals, political discourses and public opinions in ways that are difficult for noncommunicable diseases. in this, our research is consistent with other reports examining media portrayals of the health risk and scientific and policy uncertainty associated with contested environmental exposures (mayer, 2012) and emerging infectious diseases (eids) (daku, gibbs, & heymann, 2012; hilton & hunt, 2011; washer, 2010). to identify australian media coverage of issues surrounding the emergence of hendra virus, the database factiva was searched using the following terms as textwords: (hendra virus) or (flying fox*) or (fruit bat*) for the period 1 january 2007 through 31 december 2011. a news filter limiting content to the region of australia was then applied, identifying more than 10,000 items.
because we were focussing on representations of hendra and associated public policy, further subject filters were applied to restrict the sample to political and general news, limiting the corpus to just over 6000 articles. full-text reports from 9 news media sources from areas affected by hendra virus outbreaks were then downloaded. the sample includes 2 national media organizations . pretesting confirmed that this search strategy would produce a larger and more heterogeneous sample of news reports, while still including coverage that focussed more narrowly on specific zoonotic events. from the resulting corpus of 1383 articles we discarded 43 duplicates and 356 reports that were not immediately relevant to hendra virus or flying fox populations (such as those that refer to the zip-line apparatus known as a 'flying-fox' or businesses or products with 'flying fox' in the title), after which 984 unique articles remained to be analysed. the media sample was then read, catalogued manually, and cross-compared by the lead author in order to identify and track prominent concepts, differences, and themes. next, both authors manually cross-coded a pilot sample (n = 40) of the media corpus for specific types of content to confirm and to extend the preliminary thematic analysis. articles were then coded for:
- mention of horses, flying foxes/fruit bats or hendra virus
- mention of debates about flying fox control
- report of distal ecological causes (loss of natural habitat) for the emergence of hendra virus or the possibility of viral mutation
- mention of ignorance about hendra virus amongst scientists, healthcare providers or members of the public
- reference to government inaction as a factor contributing to the hendra problem
- reference to people's health and welfare not being high enough on the political agenda
these codes were both emergent and informed by similar media analyses (for example gollust & lantz, 2009; washer, 2004).
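the corpus-building steps described above (deduplicate the factiva download, discard reports where 'flying fox' refers to the zip-line apparatus or a product name, then code each article for specific types of content) can be sketched programmatically. this is a hypothetical illustration only: the study's coding was done manually by the authors, and the article records and keyword patterns below are invented for the example.

```python
import re

# hypothetical article records standing in for the factiva download;
# ids and texts are invented for illustration
articles = [
    {"id": 1, "text": "hendra virus confirmed in a horse near brisbane; flying foxes roosting nearby"},
    {"id": 1, "text": "hendra virus confirmed in a horse near brisbane; flying foxes roosting nearby"},  # duplicate
    {"id": 2, "text": "new flying fox zip-line ride opens at the adventure park"},  # not relevant
    {"id": 3, "text": "loss of natural habitat blamed as fruit bats congregate in town"},
]

# step 1: discard exact duplicates, keeping the first occurrence
seen, unique = set(), []
for a in articles:
    if a["text"] not in seen:
        seen.add(a["text"])
        unique.append(a)

# step 2: discard reports not immediately relevant to hendra virus or
# flying fox populations (e.g. the zip-line sense of 'flying fox')
def relevant(text):
    if re.search(r"zip-?line|adventure park", text):
        return False
    return bool(re.search(r"hendra|flying fox|fruit bat", text))

corpus = [a for a in unique if relevant(a["text"])]

# step 3: keyword pre-screen for two of the paper's coding categories
# (the real coding was manual; these patterns are invented)
codes = {
    "mentions_hendra": r"hendra",
    "distal_ecological_cause": r"habitat|land clearing|drought|flood|cyclone",
}
coded = [{name: bool(re.search(pat, a["text"])) for name, pat in codes.items()}
         for a in corpus]
```

in this sketch the keyword patterns act only as a coarse filter; a human coder would still need to judge borderline cases, which is why the study relied on manual cross-coding.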
because hendra events tend to occur seasonally (from winter to spring), the calendar year was chosen as the unit of analysis. the results from coding were then tabulated in matrix form, cross-referenced and displayed visually as descriptive statistics in charts to aid interpretation. regular discussions among the authors served to generate additional enquiries and to validate insights as they emerged. this approach is consistent with ethnographic content analysis, a qualitative research method for interpreting documents within the context of their use, which enables researchers to generate insights about how documents promote particular ways of understanding, interpreting and responding to an issue or event such as the emergence of a new disease from both numerical and narrative data (altheide, 1987). with these concepts and types of content in mind, our analysis of the 984 articles proceeded through several cycles of immersion and crystallization of insights, a research process comprised of repeated readings and comparisons across and between news sources, discussions amongst the authors, periods of testing of alternate explanations, and then re-immersion within the research materials (lecompte & schensul, 1999). coverage of issues surrounding hendra virus and flying fox populations in the media sample more than doubled in 2008, then plateaued before rising sharply in 2011 (table 1).
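the per-year tabulation described above (counts with percentages, using the calendar year as the unit of analysis) can be sketched in the same hypothetical vein; the coded records below are invented stand-ins for the manually coded corpus.

```python
from collections import Counter

# hypothetical (year, code_flag) pairs standing in for the manually coded
# corpus; the flag marks whether an article received a particular code
coded = [
    (2010, False), (2010, True),
    (2011, True), (2011, True), (2011, False), (2011, True),
]

# tabulate n (%) per calendar year, the unit of analysis used in the study
totals = Counter(year for year, _ in coded)
hits = Counter(year for year, flag in coded if flag)
table = {year: (hits[year], round(100 * hits[year] / totals[year], 1))
         for year in sorted(totals)}
# table maps year -> (count of coded articles, % of that year's reports)
```

reporting both the count and the percentage, as table 1 does, matters here because the yearly sample sizes differ widely: a raw count can rise simply because coverage volume rose.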
this two-phase pattern can be attributed to the tragic deaths from hendra virus of the veterinarians ben cuneen (2008) and alister rodgers (2009) during the one or two outbreaks that occurred in each of these years; and to the unprecedented number and geographic spread of hendra events in 2011.

table 1 (excerpt): n (%) of reports per year sampled, 2007-2011:
- report on debate about flying fox control does mention hendra virus: 0; 6 (3.6); 13 (7.7); 9 (5.5); 90 (20.5)
- report suggests that people's welfare is not high enough on the political agenda: 4 (9.1); 6 (3.6); 8 (4.7); 6 (3.7); 47 (10.1)
- report suggests that 'government' inaction is contributing to the problem: 3 (6.8); 5 (3.0); 1 (0.6); 9 (5.5); 38 (8.6)

table 1 shows the results of cross coding for types of content and the frequency with which specific constellations of information appear each year in the media sources sampled. given that flying foxes were first identified as the natural host of hendra virus in 1996 (young et al., 1996), it is surprising to find that until 2011, the media paid little attention to the link between hendra virus and flying fox populations. in previous years reports about flying foxes tended not to mention hendra, but focussed on the impact of flying fox populations on horticultural enterprises and residential areas. in these reports first-hand testimony and events on the ground were used to frame public perceptions of this animal's protected status, and the bureaucratic and 'hands off' approach of the government agencies to this issue. at the same time reports that are primarily about hendra virus during this period typically focus on 'unfolding' zoonotic outbreaks, suspected cases of hendra infection, quarantine orders, human exposure and disease, experimental treatments, and the impacts of these incidents on individuals. it is not until the unprecedented number of hendra outbreaks in 2011 that the relationship between hendra virus infection and contact with flying foxes becomes an item of sustained interest in our media sample.
in 2011, with 17 zoonotic events across two states, 21 dead horses, and upwards of 70 people potentially exposed to the virus through their dealings with sick and dying animals, discussions about the potential role of flying foxes as hendra carriers became firmly established as a central part of the media discourse. although the relative prominence given to different types of nonhuman animal changed across the years sampled, by 2011 news about hendra virus and flying foxes suddenly began to be described in reference to each other in the majority of news reports (table 1). the nonhuman species most often linked to hendra virus in the media sample was the horse, both as a victim of zoonotic outbreaks and as the intermediate host for human infection (see fig. 2). as the link between hendra and flying foxes was given greater prominence in the media, representations of the position and status of flying foxes and horses also began to change. an article in the sydney morning herald in 2009 provides a typical example of the more cautious approach taken in the earlier years sampled: "hendra is not highly contagious but is very often fatal. carried by fruit bats, it infects horses, which can then transmit the disease to humans through bodily fluids." (marriner, 2009, p.2) in this quote flying foxes are identified as the source of the virus and the role of horses as intermediate hosts is well defined, as is the mode of transmission for human infection. at this stage the media are at pains to make it clear that hendra virus is a fatal and devastating disease of horses that can, on occasion, infect and kill people who work in equine industries. flying foxes are not portrayed as an immediate or direct risk to human health but as an environmental factor; they are to be understood as the inadvertent reservoir of equine infection.
although horses continue to be a feature of hendra news stories, by 2011 it was not uncommon for some subtle distinctions in the natural history of the disease to be dispensed with. rather than intermediate hosts, horses began to be portrayed as victims of a dangerous 'bat-borne' disease; their role as intermediate hosts is implied rather than explicitly stated, as shown by this 2011 report in the australian: "the outbreak of hendra virus that has killed six horses and exposed 26 people to infection has reached the outskirts of brisbane and is the most virulent in the known history of the fruit bat-borne disease." (barrett, 2011a, p.3) at the same time as the media start to problematize the role of flying foxes in hendra outbreaks, column inches begin to be devoted to identifying the source of each equine case. reports begin to note forensically significant details, often with direct quotes from the investigating scientists, much in the same way a crime reporter might ask a detective to profile a likely suspect and then "set the scene" of a murder. "the infected horse was grazing in a small paddock containing flowering and fruiting trees, where flying foxes were active." ("national snapshot: almost 50 waiting for hendra results," 2011, p.14) these descriptions of 'place' resonate closely with public health recommendations given to horse owners to place feeding bins and water troughs under cover and keep their horses out of paddocks with flowering or fruiting trees, if possible. yet translating this advice into practice sometimes only served to increase people's sense of anxiety and powerlessness, and their anger with the government. for example, one property owner in gayndah told the courier-mail: "the head vet of queensland is saying, 'keep your horses away from the bats'. well, i'm doing that but the bats aren't keeping away from my horses" mrs robertson said. "fifty thousand infected bats are pooping on my horses, their grass and their water every day."
(robertson & hall, 2011, p.8)

portrayals of uncertainty, risk, and the drivers of incidence

public acknowledgement of scientific uncertainty about hendra virus was also a feature of the media coverage in 2011, heightening the sense that events were out of control (table 1). in previous years, statements about expert uncertainty were either general reflections about the lack of knowledge of how horses acquired the virus, or they had a sharp clinical focus, describing how little experience health professionals had in dealing with human hendra infections. the former were typically framed as a matter of scientific curiosity, the latter a matter of critical urgency that had immediate impact on people's lives. but during 2011 the media spotlight on hendra returned to unanswered questions surrounding the health and management of other species. even as scientists and veterinarians were described as being "baffled" by hendra virus, they were required to speculate publicly on two issues: the implications of finding a pet dog with a naturally acquired infection, and why there were so many equine cases that year. that scientists interviewed by the media appeared to be taken aback by the discovery of the canine case drew this response from the queensland premier at the time, anna bligh: "the scientists themselves are being very honest with us. they're telling us that science does not fully understand every part of this disease. we, as human beings, have had to cope with that in our past, we've conquered diseases before, i believe we will conquer this one." (miles, macdonald, & helbig, 2011, p.2) throughout the same period members of the public were increasingly portrayed as being forced to deal with the effects and risks of hendra outbreaks with inadequate information about the disease.
reflecting the statements of the experts, the concerns of the public expressed in the media were, for the most part, articulated around what people should do to protect themselves and their animals from hendra events, particularly as pet dogs now appeared to catch the disease, and therefore might be a risk to humans. as reports emerged that the infected dog had been destroyed, and as the media sought to highlight existing scientific literature showing that dogs, cats and guinea pigs could be infected with the virus under "controlled" laboratory conditions, repeated assurances by health authorities that there was minimal risk of catching hendra from animals other than horses began to wear thin. with this single canine infection, in the words of one reporter, "the bats' presence has become all the more sinister" (macdonald, 2011, p.5). despite increasing attention to the link between hendra virus and flying fox populations, for the most part media reports of the hendra outbreak in 2011 infrequently gave any consideration to non-proximal factors for disease emergence. while earlier reports, in 2007 and 2010, regularly pointed to land clearing and environmental events such as prolonged drought, floods and cyclones as explanations for flying foxes congregating in residential and agricultural areas, and approximately one in four articles described habitat loss as inevitably increasing the risk of hendra jumping species, there is a clear and precipitous decline in such reports in 2011. of the 440 reports analysed from 2011, only one in 10 has any mention of these types of ecological drivers for disease emergence or the increasing prevalence of hendra outbreaks (table 1). indeed, by 2011 the media focus is almost always on events 'as they happen': the vast majority of reports do little more than point to the presence of flying foxes and horses in the same locality as the most important characteristic of a causal story of disease or of public health risk.
finally, cross-coding the media sample revealed that issues surrounding the risk of hendra virus to human health became a dominant feature of reports that specifically address, or at least in some way refer to, the continuing controversy about the control of flying fox populations (fig. 2). while the frequency with which media reports describe the level of dissatisfaction with the government's policy of flying fox protection remains much the same from 2010 onwards, in 2011 heightened uncertainty about who and what was at risk from hendra virus increasingly became a factor of some importance to assessments of the queensland government's response to the unfolding situation. as the lack of knowledge about hendra virus became an increasingly prominent feature of debates about what should be done about the encroachment of flying fox populations on human settlement and agriculture, so did assertions in the media by members of the public and politicians that concern for the health and welfare of people affected by flying foxes was being ignored in deference to the interests of another species. in 2011 key features of the justification for the policy of not moving or culling flying foxes increasingly became contested in the media, particularly the claim that flying fox numbers were in decline and the claim that attempts to disperse flying fox colonies would spread the disease. for those wanting to repeal protective legislation, assertions that flying fox numbers were not falling could be backed up by simple first-hand observation. people just had to come to affected areas and look for themselves. what they would find, according to the mayor of charters towers, was: "these bats aren't endangered, they are in plague numbers and more and more communities are feeling the effects of bat infestation."
(gilham, 2011, p.5) proponents of policy change to protect the public against bat-borne diseases also dismissed in the media claims that culling or dispersing flying foxes would spread hendra virus. even as the premier or government spokespersons reiterated publicly that they would "respond to the outbreak of hendra virus on the basis of the best-informed science that we can access" (barrett, 2011b, p.7), what constituted evidence became a point of contention. those seeking to remove protections told the media that current policies were based on a set of (unproven or unsound) assumptions. in support of this, scientists themselves, as is their fashion, acknowledged the limits of scientific certainty, noting that they did not know how horses caught the virus, and that they lacked incontestable proof that disrupting bat colonies would lead to its spread, something admitted by the chief veterinarian of queensland (to the significant discomfort of the previous labor government). in contrast, politicians supporting the 'rights' of land-owners to government protection from flying foxes, such as the independent member of parliament, bob katter, were able to tell press conferences that fruit bats had been removed from towns for years before the current ban without causing an outbreak. and of claims by scientists, health authorities and government spokespersons that these types of interventions could heighten zoonotic risks, katter noted in the townsville bulletin: "if you were looking for a more stupid claim you couldn't find it. we've got empirical evidence on our side, all they've got is conjecture on theirs".
(galloway, 2011, p.15) a prominent advocate for bat culling known for his theatrical turn of phrase, katter had one week earlier framed the deeper issues in the following terms in the courier-mail: "if it comes to a choice of our children dying or us going out there and killing flying foxes, then i have a very grave moral problem about not going out there and killing the flying foxes," mr katter said. (miles, helbig, & michael, 2011, p.11) pointedly, a number of other politicians even claimed that calls by scientists and the government for more research on hendra were further evidence that current policies put the needs of flying foxes above the health and welfare of 'real' people: "why don't we, as an immediate first response, reduce flying fox numbers? ... why is the government throwing money at research, which will not protect families now?" ("hendra virus claims another victim," 2011, p.3) the newsworthiness of the hendra outbreak in 2011 and the growing sense of uncertainty expressed by those charged with managing the situation created a media forum in which previously distinct health and environmental issues related to flying foxes became redefined as one and the same problem. consequently, even as a large pool of research funding was announced for projects related to protecting people and communities from hendra virus, this only served to highlight the absence of direct government action.

this study employs ethnographic methods to evaluate news coverage of hendra virus and flying foxes in australia. past studies indicate the australian media have a tendency to use alarmist headlines in matters relating to infectious disease (holland, blood, imison, chapman, & fogarty, 2012). nonetheless our impression is that the australian news media did not 'over-hype' the risks posed by hendra virus; a finding that echoes uk media coverage of the recent swine flu event (hilton & hunt, 2011).
what is noteworthy, however, was the way politicians, interest groups and other 'policy entrepreneurs' used the media to challenge the established policy community surrounding public health, animal disease control and wildlife management to put controlling flying fox populations higher on the public agenda (kingdon, 2011). as the number of hendra outbreaks increased in 2011, the values and expertise of public health officials, epidemiologists, wildlife ecologists and environmentalists were questioned, as was their current choice of actions. several features of how hendra outbreaks and debates about flying fox population control were reported in the media permitted opponents of the current policies and practices to reframe the issues, recast debate about what types of actions needed to be taken, and then prime the public as to how these actions could and should be justified in the face of imperfect scientific knowledge. media representation of hendra virus enabled the issue to become not simply a policy or public health issue but a moral one. by this we mean the discourse surrounding hendra virus and flying fox populations became a debate about what is and should be valued, what is important and worth protecting, and, more broadly, what is the 'right' thing to do. throughout the period sampled, media reports increasingly depict the encroachment of flying foxes on human settlement as being the source of the hendra problem. thus while horses remained a risk to human health, rather than being the intermediate host they become fellow victims of the disease. in contrast, the role of flying foxes in hendra transmission and disease ecology becomes increasingly prominent in media coverage, as shown by fig. 1. as a species they were stigmatized and pathologized. news media increasingly depicted flying foxes as an invasive plague, both reflecting and potentially reinforcing community sentiment that they are unhygienic, disease-carrying vermin.
and ecological drivers that bring these native wild animals into increasing contact with human settlement, such as land clearing, were increasingly backgrounded. the small number of articles that did identify distal causes for emergent disease neither suggest, nor reflect upon, the relationship between human activities and novel zoonotic events as being a point for plausible preventive interventions. in contrast to the number of reports that advocate some form of control for flying fox populations, only a handful of reports raise the possibility of completely removing horses from flying fox habitats as a possible solution. instead, by emphasising proximal causes, flying foxes become increasingly blamed for the emergence of a poorly understood and increasingly unpredictable zoonotic disease. only a handful of reports offer any attempt to describe the implications of habitat loss for planning policy and disease prevention. this pattern of media reports externalising the origins and threats posed by new infectious diseases is, of course, not unique, but is consistent with media representations of other zoonoses and emerging infectious diseases (eids) (joffe, 2011). unlike previous studies of zoonotic events such as sars or swine flu, where the strength of the human-to-human link dominates media representations of risk, in the case of hendra virus the role of nonhuman animals in disease emergence was the most salient feature. rather than being characterised and understood as a consequence of modernity and globalisation, such as is said to be the case for other eids (washer, 2010), the threat posed to human interests by another species of animal became the dominant 'frame' in the media. as was also the case in news reporting about issues surrounding other late 20th century diseases, analysis and critique of the policies, actions and inactions of the government eventually become a central part of the causal story, and, thereby, of increasing moral significance.
against this background, calls for further research, and for nuanced responses to hendra virus that took account of human, animal, and ecological factors, only seemed to reinforce the idea being put forward in the media by opponents of the current policy responses that scientists and environmentalists, rather than the concerns and plight of the very people at risk of disease, were dictating the terms of flying fox and hendra management strategies. questions in the media about the value orientations of those in authority offered an alternative framework through which their actions could be judged, potentially eroding public trust in the government and their stated commitment to evidence-based policymaking. according to this view, the risk of hendra virus was not voluntarily assumed by affected individuals, but forced on them by current government policies. such causal stories, of course, tend to have far greater political potency than discourses reflective of aetiological uncertainty, as harmful consequences are viewed as being the product of human intentions, rather than the products of chance (stone, 1997). in policy terms, the unprecedented number of outbreaks in 2011 seems to have served as a "focussing event" that raised the public visibility of both hendra virus and flying fox management and made them a pressing public health issue (birkland, 1998; kingdon, 2011). the policy rationale of seeking to protect and promote the collective interest was poorly represented in the media; the focus instead was on the immediate impacts on individuals and discrete communities. in these terms the costs of current protections for flying foxes were being borne by an increasingly visible and vocal segment of the population; the benefits were diffuse, unpredictable, unrecognised by the electorate, and therefore difficult to defend politically (oliver, 2006).
as the amount of coverage of hendra-related issues escalated throughout 2011, government spokespersons began to struggle to frame the terms of the debate. those charged with providing information to the public fell into the trap of emphasising the scientific aspects of the health threat, rather than the real-world implications. from a public health perspective, the message then became that hendra virus and flying foxes presented a serious but unquantifiable risk to human health, and, as a consequence of the government's cautious and "hands-off" approach, an imminent threat to communities. at the heart of this transition, media portrayals of scientific uncertainty, moral ambiguity and inconsistencies in descriptions of the government's policy position became linked to a wider political discourse surrounding the control of flying fox populations. opponents of current policies were able to use the media to point to how government legislation was protecting flying foxes, while discounting the implicit policy goals of limiting the environmental drivers of hendra risk exposure. in these terms opponents of the current policies were able to appeal to a set of established norms, tropes and 'rhetorics' for rescue in public health interventions, creating a moral weight for action against flying fox populations in ways that economic arguments and environmental policies had not. as novel zoonotic events such as the canine infection remained unexplained, hendra virus and the increasing presence of flying foxes in residential and horticultural areas increasingly became portrayed as a pressing moral issue that required a moral, rather than a scientific, solution. against the government's scientific uncertainty and apparent inaction in deference to what they thought was likely to happen in the future, their political opposition offered moral certainty and direct, immediate action.
people's livelihoods and their 'way of life' were being sacrificed for the good of another species of animal: one that posed a threat to human health. the safety and wellbeing of humans, particularly those being forced to live in and around flying foxes, had to be prioritised over the welfare of another species of animal, irrespective of whether interventions intended to resolve their health risks and promote their wellbeing actually did so.

this research has several limitations. first, given our focus on reactions in selected news media to zoonotic events, health communications and public policy, we were not able to capture the nuance that a broader analysis of press releases, policy documents and the grey literature on hendra virus and flying fox population control would illuminate. second, we did not seek to include the political leanings, funding relationships and commercial affiliations of the news organisations examined in the analysis. in this regard australia, and especially queensland, does not have a competitive and diverse media market. ownership of news media is highly concentrated. aside from the news provided by the government-funded national broadcaster, the abc, many major towns and cities are served by only one newspaper. this may have coloured how statements by members of the public, politicians, scientists and other experts were framed and presented in the media. finally, our focus was on hendra virus in australia. without further research it is doubtful our findings can be generalised to the coverage of other bat-borne emergent zoonotic diseases such as nipah virus in malaysia and bangladesh or sars-like coronaviruses in south-east asia. what this study reveals is the extent to which the media can be used to construct the risks of hendra virus not as a scientific problem but as a moral question; that is, a problem that requires moral solutions.
given that any government measure aimed at protecting public health involves moral judgements that are legitimated through political processes (leichter, 2003; oliver, 2006), the infectious risk posed by wild animals to agricultural production and human populations is undoubtedly both a political and a moral issue. but the moral dimensions of this issue, and the development of policy responses to it, are deeply contested and heavily influenced by media representations of the link between human health, animals and the environment. our findings illustrate the potential for health communications around emerging infectious disease risks to become entangled in other political agendas and conflict with widely held human values, with implications for the public's likelihood of supporting public policy and risk management strategies that require behavioural change or seek to address the ecological drivers of incidence. more broadly, our research illustrates the value of methodologies from the social sciences to expand the relevance of the one world one health public health agenda to reflect lived realities and the needs of communities.

references

one health in nsw: coordination of human and animal sector management of zoonoses of public health significance
deadliest hendra closer to capital. the australian. canberra: act news limited
row over bat culling as another horse dies. the australian. canberra
focusing events, mobilization, and agenda setting
health news and the american public
assessing the democratic debate: how the news media frame elite policy discourse. political communication
representations of mdr and xdr-tb in south african newspapers
framing: toward clarification of a fractured paradigm
ecological aspects of hendra virus
katter's threat to sue state
cyclone yasi: the recovery
victim feels sorry for bat
the polarizing effect of news media messages about the social determinants of health
health in the news: risk, reporting and media influence
hendra virus claims another victim
uk newspapers' representations of the 2009–10 outbreak of swine flu: one health scare not over-hyped by the media
risk, expert uncertainty, and australian news media: public and private faces of expert opinion during the 2009 swine flu pandemic
public apprehension of emerging infectious diseases: are changes afoot?
global trends in emerging infectious diseases
agendas, alternatives, and public policies
analyzing and interpreting ethnographic data
"evil habits" and "personal choices": assigning responsibility for health in the 20th century
bats out of hell: amid the stench, noise and threat of disease, these residents say they're living a nightmare. the courier-mail
deadly virus could spread here. the sydney morning herald
'relax and take a deep breath': print media coverage of asthma and air pollution in the united states
family's hendra heartache goes on: son's pet dog faces a death sentence. the courier-mail
national snapshot: almost 50 waiting for hendra results (2011)
a conceptual template for integrative human–environment research
the politics of public health policy
cross-species virus transmission and the emergence of new epidemic diseases
the bse inquiry. london: stationery office
reproduction and nutritional stress are risk factors for hendra virus infection in little red flying foxes (pteropus scapulatus)
urban habituation, ecological connectivity and epidemic dampening: the emergence of hendra virus from flying foxes (pteropus spp.)
to cull or not to cull is the burning question. the courier-mail
agenda-setting, priming, and framing revisited: another look at cognitive effects of political communication
policy paradox: the art of political decision making
hendra dog case sparks crisis meeting. abc news
interest groups, the media, and policy debate formation: an analysis of message structure, rhetoric, and source cues
representations of sars in the british newspapers
emerging infectious diseases and society
climate change and the effects of temperature extremes on australian flying-foxes
bats as a continuing source of emerging infections in humans. reviews in medical virology, 17, 67–91
world health organization
serologic evidence for the presence in pteropus bats of a paramyxovirus related to equine morbillivirus

funding for this research was provided by the nhmrc centre for research excellence in critical and emerging infectious disease. chris degeling's position at the centre for values, ethics and the law in medicine at sydney university is part funded by a grant from alberta innovates - health solutions awarded to melanie rock. sources of funding had no involvement in design, data collection, analysis or drafting of this article.

key: cord-342360-d7qc20i4 authors: mohamad, siti mazidah title: creative production of 'covid‐19 social distancing' narratives on social media date: 2020-06-03 journal: tijdschr econ soc geogr doi: 10.1111/tesg.12430 sha: doc_id: 342360 cord_uid: d7qc20i4

this paper offers an insight into the role of young people in shifting risk perception of the current global pandemic, covid‐19, via social distancing narratives on social media. young people are creatively and affectively supporting the social distancing initiatives in brunei darussalam through the use of social media platforms such as instagram, twitter, and tik tok.
using qualitative content analysis (qca) data of social media content by bruneian youth, this paper reveals the localised and contextualised creative production of five 'social distancing' narratives as a response to the national and global concerns in times of a global pandemic: narrative of fear; narrative of responsibility; narrative of annoyance; narrative of fun; and narrative of resistance. this paper reflects on three key socio‐cultural reconfigurations that have broader implications beyond the covid‐19 crisis: new youth spatialities and social engagements; youth leadership in development; and consideration of social participation and reach in risk communication.

this paper is motivated by the socio-cultural implications and reconfiguration of everyday life amidst and beyond the covid-19 pandemic in the period of intense social media use. the introduction of social media to the public in the mid-2000s and its development in recent years have created new youth spatialities and socio-spatial engagements that have significantly altered the way audiences consume information, participate in content creation, and engage with the content circulated on the social media platforms. with a social mediascape that is characterised by participatory and networked culture and user-generated content (jenkins et al. 2013), the creation, circulation, and consumption of information and contents are increasingly contextualised and socio-culturally and politically shaped. the ontological nature of our communication culture (the intense and expected users' self-disclosure) and the current social media practices, in the context of risk communication, are not making it easier for relevant stakeholders, especially public health practitioners, to disseminate health risk information and to understand the communicative health practices and risk perception of the population.
social media can be effectively utilised to communicate information on covid-19 for public health awareness and interventions, while at the same time posing risks due to the confusion and uncertainties among the public from the misinformation, disinformation, and malinformation apparent in the growing infodemic that accompanies a contemporary epidemic or a pandemic (cinelli et al. 2020; zarocostas 2020). consider, for instance, the upsurge of false information (or myths) related to covid-19 circulated on social media (a commonly seen myth: eating garlic helps prevent infection with the new coronavirus). the world health organisation (who) and relevant government agencies (brunei's ministry of health included) took action by posting on their official instagram to highlight and debunk false information. equally important in this risk communication is risk perception of the population. risk perception cannot be generalised to the whole population as it is known to be based on a 'diverse array of information that (individuals) have processed on risk factors … and technologies, as well as on their benefits and contexts' (who 2002). an individual assesses risk according to their own knowledge, experience, and socio-cultural environment. hence the need to look into context and localities in this study. unlike during the time of the sars outbreak in 2003 when social media was uncommon, digital technology and new social media are now ubiquitously used by who and government bodies to spread awareness of the health risk, to share the latest information, and to influence risk perception of the public on the severity of this pandemic. the intense internet and social media penetration create digital landscapes where information is widely available; from information, data, and advice on the one hand to misinformation, speculation and even conspiracy theories on the other.
to add to this, the audience of a few of the recent social media sites such as instagram, snapchat, and tik tok are predominantly younger people (ortiz-ospina 2019). this makes it imperative to study young people's responses to the pandemic in today's social media age. in brunei, young people are observed to be using social media in playing their part as community members in their own locality and as global citizens. in the context of the covid-19 pandemic, considering that there is still not much knowledge on how risk is communicated, understood, and acted upon (smith 2006), including risk communication on social media platforms (kass-hout & alhinnawi 2013), this paper aims to reflect on how young people as an active audience on social mediascapes are playing a key role in communicating risk to a fellow (young) audience and changing risk perception of a global pandemic, covid-19, via social media. of equal importance here are the potential socio-cultural transformations these young people's social media engagements could create beyond this current crisis, which are revealed in this paper. in the following section, a description of the qualitative content analysis (qca) of social distancing initiatives on social media is offered. this is followed by a section that demonstrates young people's localised and contextualised creative responses to covid-19 through the five social distancing narratives: the narrative of fear; the narrative of responsibility; the narrative of annoyance; the narrative of fun; and the narrative of resistance. one particular action worth highlighting is their effort in making social distancing contents accessible and readable by other users, such as by translating official documents into everyday social media language.
the penultimate section reveals three key socio-cultural implications and configurations (new youth spatialities and social engagements, youth leadership in development, and consideration of social participation and reach in risk communication) that have broader implications beyond covid-19.

this paper draws from the researcher's preliminary study for an ongoing research project, 'social media, risk perception, and risk communication of covid-19 in brunei darussalam', a research collaboration between universiti brunei darussalam and the health promotion centre, ministry of health, brunei darussalam. as there is not much information known on audiences' social media consumption in risk communication and their individualised, as well as contextualised, risk perception, preliminary research on how the audience deliver and circulate covid-19 related content on social media was conducted, leading to this preliminary finding on the active involvement of young people in highlighting the significance of social distancing in flattening the curve in the country. according to kemp (2020), social media penetration in brunei was 94 per cent (410,000 people) by january 2020. the growth in the size of the digitally connected group consuming social media content in the nation justifies this interest in looking into social media use in risk communication in brunei darussalam. given the intensive digital transaction through social media, this research examined the affective consumption and transaction of social media content on covid-19 among the public in brunei darussalam and the impact of their social media transaction on their risk perception of covid-19.
it seeks to investigate: one, the official social media content on covid-19 circulated by health practitioners and health organisations; two, the social media content on covid-19 consumed by the public in brunei; three, their risk perception and understanding of covid-19 based on the social media contents transacted and consumed; four, their own appropriation, framing, and circulation of covid-19 on their social media platforms; and five, their health and behavioural practices as a response to their risk perception of covid-19. this research, using qualitative content analysis (qca) on social media content between early march and the end of april 2020, is part of the fourth objective, which is to investigate audience appropriation, framing, and circulation of social distancing initiatives using the multimodal features of the sites, such as captions, images, videos, and hashtags. prior to finalising the research objectives, the author conducted a pilot discussion with her undergraduate students to seek their views of covid-19. at that point in time, the students were not really concerned about this crisis. the general consensus was that this new coronavirus is only risky for those with underlying health conditions and older people. one student said that 'if i get covid, i'm going to recover', while another student claimed that the crisis was 'sensationalised' by the media. interestingly, a number of the students pointed out that the covid-19 memes (mostly humorous contents) circulated on social media helped in changing the perception of covid-19 from high risk to low risk. their views, although not representative of the young people in the country, point to this group's low risk perception. it is safe to say that this could also be the reason for the lack of discussion among the social media users on covid-19 in the country prior to the announcement of the first case on 9 march 2020.
however, the social media landscape changed drastically right after, justifying the qca conducted on bruneians' social media from march 2020 onwards. to achieve the objective set, and taking the above findings and observation into consideration, two data collection strategies were employed. one, the researcher followed the latest covid-19 cases and issues in the country via the daily press conferences hosted by the ministry of health since 9 march 2020. social distancing-related cases highlighted by the minister of health and invited ministers, the issues the public brought to the fore via media personnel through the question and answer session at the press conference, and issues mentioned by the audience of the press conference via the instagram live (comment section) of the invited media personnel were used to guide the second data collection strategy. this first strategy was employed to obtain the key concerns and issues that are considered important to bruneians in the context of covid-19 and social distancing measures. two, qca was conducted on the instagram and twitter content of randomly and purposively selected young bruneians (the latter being young people in the author's social media network who share social distancing content), based on the information obtained from the first data collection strategy. this step-by-step approach in the data collection allowed the author to follow the issues and the young people's individual social media sharing on social distancing issues. in total, over 30 individual profiles from instagram and twitter combined were observed for social distancing contents.
specific social distancing content observed includes social distancing initiatives conducted by individuals, groups, and companies; key incidents happening in the country related to social distancing; discussion of covid-19 statistics on the numbers infected, recovered, and deceased, to gauge the public's thoughts on the effectiveness of social distancing initiatives in the country; the growing body of creative social distancing content on instagram, for instance that accessible via the #artcovidbn hashtag; and related viral cases in the country. content on other sites such as youtube and tik tok was checked when it appeared in the young people's instagram and twitter posts. the cross-platform integration functionality of social media allows the same media content to be shared simultaneously (bossetta 2018), and the spreadability of social media enables content on youtube to be retweeted on twitter via url sharing. the young people randomly and purposively chosen are between the ages of 18 and 36. a number of them are known in the country as pro-active youth who are keen to support the country's development. they are also currently volunteering as front liners to support the ministry of health. a few of these young people are not youth leaders and are not directly involved in supporting the country's effort in curbing the crisis. there is a mixture of students, employed, and currently unemployed young people in the group. these young people's identities as bruneians were cross-checked against the details provided in their biographies, their mutual followers, and their content specific to brunei. the findings point to the role of young people in pushing the idea and practice of social distancing, apparent in the social distancing narratives (narratives of fear, social responsibility, annoyance, and fun) affectively created, reproduced, and circulated online. the findings suggest that social distancing initiatives are supported more by the pro-active youth.
the content shared on the pro-active youth's social media includes their volunteering activities as front liners. there are only limited findings that point to the existence of resistance among the young people. these young people's active engagements on social media sites in the context of social distancing initiatives reveal two interrelated factors. the first could lead us to reconsider how risk is contextually, spatially, and individually perceived, practised, and communicated by the audience, as both producers and consumers of digital content. the second, equally significant and at a more macro-scale level, is the issue of access and power in relation to social media and digital content in this era of connectivity and media spreadability. when the first covid-19 case was confirmed in the nation on 9 march 2020, the brunei government was quick to take action. social distancing initiatives were disseminated to the public a few days after the first case was announced at the nation's first covid-19 press conference. a school holiday previously set for 16 march 2020 started three days earlier. within two weeks of the first case, restaurants and gyms were closed, travel restrictions into and out of the country were imposed, a few supermarkets started to implement physical gaps at counters and to limit the number of customers entering their premises, places of worship were closed temporarily, and working from home (wfh) and digital learning were quickly introduced. physical mobility has not been restricted, owing to the relatively small number of infected cases (139 cases as of 6 may 2020) and the low rate of infection in the country, unlike neighbouring countries with their lockdown measures - singapore's circuit breaker and malaysia's movement control order. while physical movement is allowed, the public has been consistently advised to maintain social distancing, including physical distancing of at least 1 metre.
despite the nationwide social distancing initiatives, confining the public to their homes and getting them to maintain social distancing were not easy tasks, as experienced globally. mass gatherings were still seen in some parts of the country despite the government's effort to halt virus transmission, suggesting a low risk perception among members of the public. one critical incident that sparked public outrage was the irresponsible act of a large group of locals visiting a night market in temburong district on the day of the opening of the temburong bridge, which connects brunei-muara with temburong district after 130 years of physical separation. the next day, the brunei government restricted the opening hours of the bridge and closed the night market to prevent potential community transmission of covid-19. social media sites were swiftly flooded by users reprimanding the public for going to the market en masse, disregarding the government's social distancing initiatives. apart from the government's circulation of social distancing reminders on its official channels (mass and new media) after this incident, social distancing messages were affectively circulated, exchanged, and reproduced by the public on social media. from this localised and contextualised, creatively and affectively produced and circulated content on covid-19, a combination of five narratives of social distancing initiatives is apparent: the narrative of fear, the narrative of responsibility, the narrative of annoyance, the narrative of fun, and the narrative of resistance. the narrative of fear is visible in content that stresses danger and risk to older people and loved ones. detachment from family members due to isolation and quarantine of undetermined duration feeds into the narrative of fear. the narrative of responsibility is visible in content that calls on the community to play their role as responsible citizens and community members to flatten the curve.
this responsibility includes stressing the unselfish acts of medical health professionals in looking after the public in the isolation and quarantine centres. members of the general public sang the praises of other front liners, including youth volunteers who dedicated their energy and time to support the ministry of health in handling the pandemic. this narrative of responsibility is further instilled via the circulation of a video created by the ministry of health: a video of a medical health professional, captioned hargai pengorbanan mereka (english translation - appreciate their sacrifices), warmly (with teary eyes) requesting members of the public to stay at home and together be responsible in preventing local transmission. in a personal communication, a public health officer from the ministry of health confirmed that the ministry wanted the public to hear the voices of the health care workers: being apart from their family members while caring for the infected patients, and their gratefulness towards the community for its support. the video, which emphasises the need for the community to be equally responsible and to support the front liners in their effort to curb the spread of the disease and to treat infected individuals, was reposted and affectively appropriated on sites such as whatsapp, instagram, and twitter. through this affective content, it was hoped that the audience would be able to empathise with the front liners and adjust their views, actions, and habits (pedwell 2017). living in a country with a small population (under 440,000 as of may 2020) and in a collectivist society, bruneians imagine themselves related to each other either by blood or marriage. such an emotive video emphasising communal responsibility would be more effective in evoking the emotions of the general public and in creating and sustaining a sense of shared responsibility, and a sense of community and togetherness, in a time of crisis and physical separation.
a local hip hop duo, guardian of rhythm, created a music video titled 'don't push it', dedicated to the front liners - a title they took from the minister of health's famous statement 'don't push it', which reminds the public to be vigilant and responsible and not to push the country's limits in health provision during this pandemic (figure 1). this music video, uploaded on youtube, is one of many creative pieces of content appropriating the minister of health's advice at the daily press conferences to press for social distancing. stickers, gifs, songs, and appropriated hashtags such as #dontpushit and #teranahsajadirumah (english translation - stay at home) were produced and circulated. to reach certain pockets of the population that might not be familiar with the english language, one user took the initiative to translate the social distancing poster circulated via social media and mass media into colloquial malay (figure 2). the narrative of annoyance is apparent in the deliberate sharing of one's frustration towards members of the public who insisted on leaving home, travelling overseas, and possibly contracting covid-19 through their international travel. there were also a few pieces of content on social media highlighting cases of members of the public under mandatory self-isolation and self-quarantine leaving home. the rising number of cases in the country was used strategically by the audience to highlight the severity (and potential risk) of covid-19. new terms such as 'covaval' (derived from the word covid and babal, a local term for unrepentant individuals, particularly those who went to temburong and those who insisted on travelling amid covid-19) were created and used to stress annoyance and to push the public to be more responsible (figure 3). the narrative of fun among the young people was evident in the growing number of videos showing their coping strategies while on mandatory self-isolation and 'stay at home', uploaded on tik tok and reshared on twitter, instagram, and whatsapp.
one example is a tik tok video made by bruneian international students who were isolated in their homes and in a few hotels turned isolation centres for 14 days upon their return to the country, as part of the country's precautionary measures to avoid potential community transmission. these students created individual videos of themselves dancing from one end of their room to the other. when combined, the video creatively demonstrates 'mobilities' while in isolation. this is one of the isolation/quarantine videos, like the pass the brush and don't rush challenges seen on tik tok globally during this period, exemplifying the spreadability of social media content and the upsurge in the use of one particular social media site, tik tok (emarketer 2018; crowley 2020; johnston 2020; leslie 2020), as a coping mechanism and a mode of connecting socially and creatively with others. interestingly, the narrative of resistance is less prominent in brunei's context. as previously mentioned, pro-social distancing narratives are more apparent in local social media content than those demonstrating anti-social distancing, which could be due to the fewer restrictions imposed on physical mobility in the country. perhaps there is content demonstrating resistance to the social distancing initiatives in the country that has not surfaced or been made known to the public, for a number of possible reasons, such as the author's limited access to the content of this group of social media users. to the best of the author's knowledge, there are only two pieces of content (in video format) known to the public that fit the narrative of annoyance and/or resistance to social distancing. one video directly addressed the 'stay at home' instruction and was created and uploaded by a 22-year-old male on his instagram, urging those who fear death to stay at home while criticising the public for fearing the coronavirus, a virus he claimed was human-made.
he was charged with causing a breach of the peace under section 19 of the minor offences act, chapter 30 (faisal 2020). after this event, the public has been consistently reminded that any act involving the publishing, forwarding, or creating of fake news and misinformation about covid-19 may be regarded as an offence. another video, made and shared on social media by a male in his 30s, did not directly address the social distancing initiatives but rather the after-effect of 'stay at home' on parents with schoolkids. in his video, he expressed his dissatisfaction with the current e-learning arrangements, in which the role of educators is transferred to parents. his video was uploaded on the local reddit community page (r/brunei) and received backlash from the r/brunei community. other narratives of annoyance demonstrated by the young people were not specifically about staying at home but rather focused on the slow internet connection in the country, which affected their ability to continue their studies online, and on the impact social distancing brought to their livelihoods. these narratives emerging from the creative production of social media content demonstrate local youth responses to the current situation by contextualising social distancing practice in the country. this is supported and made possible by the growing digital creative youth via their (digital) affective practices (wetherell 2012), which went beyond content sharing to transmitting and recreating a discourse: it was civic engagement. both creatives and non-creatives take part in the social distancing initiatives as active youth citizens who are involved in the community and strive for change (adler & goggin 2005). annoyance towards and resistance to social distancing are also present, as demonstrated above.
social media sites, acting as new youth spatialities, highlight how the spreadability of digital content made possible the affective production and consumption of the social distancing narratives (the narratives of fear and responsibility in particular) and helped initiate and sustain the call for a change in social practices, mobilising youth action. this initiative needs 'mobile' and active youth taking the helm in creating and pushing new subjectivities and taking collective action to improve and change society, albeit temporarily due to this pandemic. three key socio-cultural implications and reconfigurations could be observed in the country and may become common practices after the covid-19 crisis: one, new youth spatialities and social engagements; two, youth leadership in development; and three, consideration of social participation and reach in risk communication. these reconfigurations of everyday life due to the crisis could open up new avenues and research foci in geography, intersecting the geographies of young people and the geographies of digital media and communication, and, beyond geography, those relevant to risk communication strategies and public health. new youth spatialities and social engagements - as previously mentioned, the use of social media by young people is not new; this group of users has been known to dominate online spaces, particularly social media sites. we could observe new online platforms offering youth spaces for negotiating their current immobilities: the stay at home/social distancing measures. new socio-technological adoptions in the country, possibly becoming the new normal in the community post crisis, are observed, such as poetry club activities, open mics, and youth mentorship sessions conducted online by young people in the country, pointing to the creation of creative spaces online and new ways of conducting social activities.
such technological adaptation and the creation of new social spaces are not limited to the young demographic but have also been adopted by the older generation, for instance the group recital of the al-quran via zoom, exemplifying the rising digitalisation in the country. as a matter of fact, digital infrastructures (applications, websites, and the internet of things, to name a few) were already in place in the country prior to the crisis but were slow to be taken up by relevant agencies and individuals for a number of reasons, including lack of access due to financial constraints, low knowledge of technology, lack of motivation, and possibly fear of technology. the current pandemic that reconfigured our day-to-day operations and social practice, however, left these agencies and individuals with no option but to adopt new digital technologies. digital technologies and platforms have transformed the relationship between media and the geographies of everyday life (ash 2019), and young people, as the main users of social media, play a key role in this transformation, as illustrated in this paper via the young people's creative content creation on social media. these intense socio-spatial engagements of young people on social media amidst covid-19 demonstrate the interplay between media and young people in reconfiguring the micro-geographies of young people, notwithstanding the growing literature on young people and social media use within the geography discipline. youth leadership in development - in the context of brunei, a young developing country, youth engagement with the country's progress and development is relatively low. the country has only recently seen its young people's active involvement in addressing key issues and concerns in the country. government agencies in the country have been providing young people with offline platforms for engagement, in particular the ministry of culture, youth, and sports (mcys), the key supporter of young people's progress.
in this time of crisis, mcys plays a huge role in expanding and intensifying youth communal engagement, as can be seen in the establishment of the covid-19 youth volunteer group to support the ministry of health. other youth involvement includes key youth leaders actively involved in creating and distributing personal protective equipment (ppe) to medical health workers. moreover, the availability of social media platforms offers these young people spaces to engage, leading to a growing number of active and creative youth supporting and sustaining progress and development in the country. social media sites, as seen in the previous section, are affectively used by the young people to promote and support social distancing initiatives. such active engagement by the young people during this time of crisis signals the already existing youth leadership in the country. online and offline spaces are effectively utilised in pushing forward their agenda and concerns and for personal development, expanding individual agency on social media. young people's engagement with the media (content) and through the media (platforms) has potential implications for development planning and execution and could help in creating better futures (cupples 2015). the growing social media content on social distancing via the co-creation of narratives highlights community engagement and collaboration based on the collective concerns of young people and their interest in keeping the community safe and healthy. at this point, despite this active participation and the youth's role in the context of social distancing initiatives, we need to question the reach and success of these narratives and their implications for the risk perception of the population. the first question concerns the issue of digital divide and access. who are these young people reaching, and what about those who are not part of this social media community?
despite high social media penetration in the country, public health information and risk communication on social media remain exclusive and do not reach the socially 'disconnected' population. the narratives and initiatives by the young people only reach the social media users with access to their content. the asynchronicity of content delivery and consumption somewhat limits access to the information. for example, instagram story posts, which are only visible for twenty-four hours, reach only the audience able to access the story within the time allowed. second, the social distancing effort relies on individuals' perception of the risk of covid-19 to their own health and their family members. connected to the first point, risk communication that does not reach the population could lead to low risk perception, rendering the initiative difficult or impossible to achieve. notwithstanding the efforts made by the young people in creating and adapting to this new social distancing practice, if information reach is limited, the risk perception of the population will remain low. the audience (young people) and relevant government and non-government agencies have worked collectively and informally to create content that is affective and impactful. however, such efforts would not reach the majority of the population if access is limited. closely tied to this access to content is the language used in communicating risk. using terms such as 'social distancing' and 'social responsibility' may not work well with some segments of the population. those without access to social media, where the social distancing term and practices are commonly demonstrated via captions, images, and videos, would not be able to fully understand what is meant by 'social distancing'. official information in the country is commonly disseminated in the official language, malay, and in a second language, english.
evidently, there are people in the country who are not well versed in these two languages and would need the information to be translated into their colloquial language or dialect. one noteworthy act of contextual appropriation ensuring that the social distancing measures reach other segments of the population is the translation of official documents into lay people's everyday language in the form of captions and hashtags (such as #teranahsajadirumah) by social media users. perhaps this is one of the best times for the country to rethink and restructure its risk communication strategy by learning from the current socio-spatial practices and the loopholes in our (risk) communication system and strategy. beyond the covid-19 crisis, considering media convergence (media accessible through various platforms), the spreadability of social media, differential access, and the language preferences of the population in communication would not only improve public health communication but would also have broader developmental implications. to conclude, this paper offers preliminary findings on the localised and contextualised creative production of social distancing narratives in brunei and the role of young people in emphasising the importance of social distancing in this crisis via social media platforms such as instagram, twitter, and tik tok. using qca on young people's instagram and twitter content, five narratives of local responses to social distancing practices were apparent: the narrative of fear, the narrative of responsibility, the narrative of annoyance, the narrative of fun, and the narrative of resistance. fascinatingly, the research data demonstrated more pro-social distancing narratives than narratives of resistance. technological affordances such as the participatory culture and spreadability of social media content supported the creative production of social distancing initiatives, which could bring community and civic impact.
through individuals' collective actions on social media, they accentuate the role and social responsibility of each member of the public. this is not a single person's task; everyone plays an equal role in keeping the community safe and healthy. in this time of crisis, young people in the country played a huge role in supporting the social distancing measures through their everyday creative self-disclosure on social media and, as such, have brought three key socio-cultural reconfigurations to everyday life that have broader academic and developmental implications. the creation of new youth spatialities and intense social engagements was observed, which is of importance to geographers in understanding youth social realities and could open up academic discussions relevant to the geographies of young people and the geographies of media and communication. the expansion of already existing youth leadership in the country offers a reconsideration of the significance of youth's active social media engagement in the context of development. social participation and reach in risk communication bring our attention to the issues of access to information, differential use of communication platforms, and language preference for the betterment of public health communication. specific to risk communication during this pandemic, what is shared in this paper offers a rethinking of risk communication strategy that includes consideration of socio-culturally and politically appropriate and relevant approaches, and a strategy that is inclusive of all segments of the population. as observed globally, the covid-19 pandemic has forced us to alter our socio-cultural practices and adapt to our drastically reconfigured everyday life, creating the new normal in our everyday practices. what is clear at this point in time is our dependence on digital interconnectivity.
with the uncertainties surrounding covid-19, we foresee other significant socio-cultural implications of covid-19, which could either favourably or unfavourably impact communities in their respective localities.

references:
adler & goggin (2005). what do we mean by 'civic engagement'?
physical and virtual public spaces for youth: the importance of claiming spaces in media and popular culture.
ash (2019). digital geographies.
new spaces, blurred boundaries, and embodied performances on facebook.
bossetta (2018). the digital architectures of social media: comparing political campaigning on facebook, twitter, instagram, and snapchat in the 2016 u.s. election. journalism & mass communication quarterly, 95.
the covid-19 social media infodemic.
cupples (2015). development communication, popular pleasure and media convergence.
emarketer (2018). teens aren't using facebook as much as millennials and gen xers - here's the social platform each generation uses the most.
social distancing via tiktok: using humor and facts to educate during covid-19. medscape, 16 april.
everyday lived islam: malaysian muslim women's performance of religiosity online.
pedwell (2017). mediated habits: images, networked affect and social change.
establishing geographies of children and young people.
responding to global infectious disease outbreaks: lessons from sars on the role of risk perception, communication and management.
wetherell (2012). affect and emotion: a new social science understanding.

[figure caption residue - social media post in colloquial malay: 'simple saja permintaan yang berhormat dato, social responsibilities, teranah saja di rumah kalau nada keperluan penting untuk bejalan-jalan. 🤦🏻♂️' (english translation - the honourable dato's request is simple: social responsibility, just stay at home if there is no important need to go out) #covid19 #socialdistancing #brunei #flattenthecurve #teranahdirumah]

graphical abstract: in times of a global crisis, young people play a huge role in supporting the covid-19 social distancing measures through their everyday creative self-disclosure on social media.
this paper demonstrates young people's collaborative, participatory, creative, and affective curation of five social distancing narratives on instagram and twitter: the narratives of fear, responsibility, annoyance, fun, and resistance. via these narratives, they are shifting the risk perception of covid-19 amongst their followers. beyond these micro-scale activities, the intense social media engagement and extensive digital connectivity during this covid-19 pandemic brought socio-cultural implications that reconfigure everyday practices and led to the creation of new youth spatialities and social engagement, the enhancement of youth leadership in development, and the rethinking of social participation and reach in risk communication. key: cord-258389-1u05w7r4 authors: verma, anju; verma, megha; singh, anchal title: animal tissue culture principles and applications date: 2020-06-26 journal: animal biotechnology doi: 10.1016/b978-0-12-811710-1.00012-4 sha: doc_id: 258389 cord_uid: 1u05w7r4 animal cell culture technology has today become indispensable in the field of life sciences, providing a basis for studying regulation, proliferation, and differentiation and for performing genetic manipulation. it requires specific technical skills to carry out successfully. this chapter describes the essential techniques of animal cell culture as well as its applications. cell culture is the process by which human, animal, or insect cells are grown in a favorable artificial environment. the cells may be derived from multicellular eukaryotes, from already established cell lines, or from established cell strains. in the mid-1900s, animal cell culture became a common laboratory technique, but the concept of maintaining live cell lines separated from their original tissue source was discovered in the 19th century. animal cell culture is now one of the major tools used in the life sciences, in areas of research that have potential for economic value and commercialization.
the development of basic culture media has enabled scientists to work with a wide variety of cells under controlled conditions; this has played an important role in advancing our understanding of cell growth and differentiation, the identification of growth factors, and the mechanisms underlying the normal functions of various cell types. new technologies have also been applied to investigate high cell density bioreactors and culture conditions. many products of biotechnology (such as viral vaccines) are fundamentally dependent on mass culturing of animal cell lines. although many simpler proteins are being produced using rdna in bacterial cultures, more complex proteins that are glycosylated (carbohydrate-modified) currently have to be made in animal cells. at present, cell culture research is aimed at investigating the influence of culture conditions on viability, productivity, and the constancy of posttranslational modifications such as glycosylation, which are important for the biological activity of recombinant proteins. biologicals produced by recombinant dna (rdna) technology in animal cell cultures include anticancer agents, enzymes, immunobiologicals [interleukins, lymphokines, monoclonal antibodies (mabs)], and hormones. animal cell culture has found use in diverse areas, from basic to advanced research. it has provided a model system for various research efforts:
1. the study of basic cell biology, cell cycle mechanisms, specialized cell function, and cell-cell and cell-matrix interactions.
2. toxicity testing to study the effects of new drugs.
3. gene therapy for replacing nonfunctional genes with functional gene-carrying cells.
4. the characterization of cancer cells and the role of various chemicals, viruses, and radiation in cancer.
5. production of vaccines, mabs, and pharmaceutical drugs.
6. production of viruses for use in vaccine production (e.g., chicken pox, polio, rabies, hepatitis b, and measles).
today, mammalian cell culture is a prerequisite for manufacturing biological therapeutics such as hormones, antibodies, interferons, clotting factors, and vaccines. the first mammalian cell cultures date back to the early 20th century. the cultures were originally created to study the development of cell cultures and normal physiological events such as nerve development. ross harrison in 1907 showed the first nerve fiber growth in vitro. however, it was in the 1950s that animal cell culture was performed at an industrial scale. it was with the major epidemics of polio in the 1940s and 1950s, and the accompanying requirement for viral vaccines, that the need for cell cultures on a large scale became apparent. the polio vaccine, made from a deactivated virus, became one of the first commercial products developed from cultured animal cells (table 14.1). tissue culture is the in vitro maintenance and propagation of isolated cells, tissues, or organs in an appropriate artificial environment. many animal cells can be induced to grow outside of their organ or tissue of origin under defined conditions when supplemented with a medium containing nutrients and growth factors. for in vitro growth of cells, the culture conditions must be controlled to mimic in vivo conditions with respect to temperature, ph, co2, o2, osmolality, and nutrition. in addition, the cultured cells require sterile conditions, a steady supply of nutrients for growth, and sophisticated incubation conditions. an important factor influencing the growth of cells in culture is the medium itself. at present, animal cells are cultured in natural media or artificial media depending on the needs of the experiment. the choice of culture medium is the most important and essential step in animal tissue culture and depends on the type of cells to be cultured for the purpose of cell growth, differentiation, or production of designed pharmaceutical products.
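the paragraph above notes that cultured cells need controlled temperature, ph, co2, o2, and osmolality. as an illustrative sketch only (the numeric ranges below are common textbook values for mammalian culture - roughly 37°c, ph 7.2-7.4, 5% co2 - and the field names are invented for this example, not taken from the chapter), a simple check of logged incubator readings against such ranges might look like:

```python
# illustrative sketch: flag logged incubator readings that fall outside
# typical ranges for mammalian cell culture. the ranges and reading
# field names are assumptions for this example, not values from the text.

TYPICAL_RANGES = {
    "temperature_c": (36.5, 37.5),  # around 37 degrees c
    "ph": (7.2, 7.4),               # physiological ph window
    "co2_percent": (4.5, 5.5),      # around 5% co2
}

def out_of_range(reading):
    """Return the names of parameters whose logged value lies outside
    the typical range. All parameters are expected to be present."""
    return [name for name, (lo, hi) in TYPICAL_RANGES.items()
            if not lo <= reading[name] <= hi]

# example: a reading where only the ph has drifted below its window
print(out_of_range({"temperature_c": 37.0, "ph": 7.1, "co2_percent": 5.0}))
# -> ['ph']
```

in practice such checks are done by the incubator's own monitoring, but the sketch shows why each parameter has a narrow acceptable window rather than a single target value.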
In addition, serum-containing and serum-free media are now available that offer varying degrees of advantage to the cell culture. Sterile conditions are important in the development of cell lines. Cells from a wide range of different tissues and organisms are now grown in the lab. Earlier, the major purpose of cell culture was to study growth, the requirements for growth, the cell cycle, and the cell itself. At present, homogeneous cultures obtained from primary cell cultures are useful tools for studying the origin and biology of cells, while organotypic and histotypic cultures that mimic the respective organs/tissues have been useful for the production of artificial tissues. There are three methods commonly used to initiate a culture from animals. Whole organs from embryos, or partial adult organs, are used to initiate organ culture in vitro. Cells in organ culture maintain their differentiated character and functional activity, and also retain their in vivo architecture. They do not grow rapidly, and cell proliferation is limited to the periphery of the explant. As these cultures cannot be propagated for long periods, a fresh explant is required for every experiment, which leads to interexperimental variation in reproducibility and homogeneity. Organ culture is useful for studying the functional properties of cells (e.g., production of hormones) and for examining the effects of external agents (such as drugs and other micro- or macromolecules) and products on organs that are anatomically separated in vivo. Fragments excised from animal tissue may be maintained in a number of different ways. The tissue adheres to the surface, aided by an extracellular matrix (ECM) constituent such as collagen or a plasma clot, and attachment can even happen spontaneously. This gives rise to cells migrating from the periphery of the explant. Such a culture is known as a primary explant, and the migrating cells are known as outgrowth.
This has been used to analyze the growth characteristics of cancer cells in comparison with their normal counterparts, especially with reference to altered growth patterns and cell morphology. Cell culture is the most commonly used method of tissue culture and is generated by collecting the cells growing out of explants or dispersed cell suspensions (floating free in culture medium). Cells obtained either by enzymatic treatment or by mechanical means are cultured as adherent monolayers on a solid substrate. Cell culture is of three types: (1) precursor cell culture, which is undifferentiated cells committed to differentiate; (2) differentiated cell culture, which is completely differentiated cells that have lost the capacity to differentiate further; and (3) stem cell culture, which is undifferentiated cells that can develop into any type of cell. When cells with a defined cell type and characteristics are selected from a culture by cloning or by other methods, the cell line becomes a cell strain. The monolayer culture is an anchorage-dependent culture, usually one cell in thickness, with a continuous layer of cells at the bottom of the culture vessel. Some cells are nonadhesive and can be mechanically kept in suspension, unlike most cells, which grow as monolayers (e.g., cells of leukemia); this offers numerous advantages in the propagation of cells. Passaging is the process of subculturing cells in order to produce a large number of cells from preexisting ones. Subculturing produces a more homogeneous cell line and avoids the senescence associated with prolonged high cell density. Splitting cells involves transferring a small number of cells into each new vessel. After subculturing, cells may be propagated, characterized, and stored. Adherent cell cultures need to be detached from the surface of the tissue culture flasks or dishes by digesting their attachment proteins, since proteins secreted by the cells form a tight bridge between the cell and the surface.
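The arithmetic behind splitting can be made explicit. The sketch below is illustrative only; the seeding density, flask area, and suspension concentration are hypothetical example values, not figures from this chapter:

```python
def seeding_volume_ml(seed_density_per_cm2, flask_area_cm2, suspension_conc_per_ml):
    """Volume of cell suspension to transfer so a vessel is seeded
    at a target density (remaining volume is made up with fresh medium)."""
    cells_needed = seed_density_per_cm2 * flask_area_cm2
    return cells_needed / suspension_conc_per_ml

# Example: seed a 25 cm^2 flask at 1e4 cells/cm^2 from a 1e6 cells/mL suspension
vol = seeding_volume_ml(1e4, 25, 1e6)
print(vol)  # 0.25 (mL of suspension)
```

The same calculation, run in reverse, tells you what split ratio a given harvest supports before the next passage is due.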
A mixture of trypsin-EDTA is used to break these proteins at specific places. Trypsin is a proteolytic (protein-degrading) enzyme; it cleaves the attachment proteins by hydrolysis of peptide bonds. EDTA sequesters certain metal ions that can inhibit trypsin activity, and thus enhances the efficacy of trypsin. The trypsinization process and the procedure to remove adherent cells are given in Flowchart 14.1. Quantitation is carried out to characterize cell growth and to establish reproducible culture conditions. Cell counts are important for monitoring growth rates as well as for setting up new cultures with known cell numbers. The most widely used type of counting chamber is the hemocytometer, which is used to estimate cell number. The concentration of cells in suspension is determined by placing the cells in an optically clear chamber of known depth under a microscope; the cell number within a defined area is counted, and the cell concentration is calculated from the count. For high-throughput work, electronic cell counters are used to determine the concentration of each sample. In some cases, the DNA content or the protein concentration needs to be determined instead of the number of cells. Cells propagated as a cell suspension or monolayer offer many advantages but lack the potential for the cell-to-cell and cell–matrix interactions seen in organ cultures. For this reason, many culture methods that start with a dispersed population of cells encourage the arrangement of these cells into organ-like structures. These types of cultures can be divided into two basic types. Cell–cell interactions at tissue-like densities can be attained by the use of an appropriate ECM and soluble factors and by growing cell cultures to high cell densities.
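The hemocytometer calculation reduces to a simple volume conversion. A minimal sketch, assuming a standard Neubauer chamber (each large square is 1 mm² at 0.1 mm chamber depth, i.e., 10⁻⁴ mL) and hypothetical example counts:

```python
def hemocytometer_conc(total_cells, squares_counted, dilution_factor=1):
    """Cells per mL from a Neubauer hemocytometer count.

    Each large square holds 1e-4 mL, so the mean count per square
    is multiplied by 1e4 (and by any dilution applied before loading).
    """
    mean_per_square = total_cells / squares_counted
    return mean_per_square * dilution_factor * 1e4

# Example: 200 cells counted over 4 large squares of a 1:2 diluted sample
print(hemocytometer_conc(200, 4, dilution_factor=2))  # 1000000.0 cells/mL
```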
This can be achieved by (a) growing cells in a relatively large reservoir, with adequate medium, fitted with a filter behind which the cells are crowded; (b) growing the cells at high concentrations on agar or agarose, or as stirred aggregates (spheroids); and (c) growing cells on the outer surface of hollow fibers, where the cells are seeded on the outer surface and medium is pumped through the fibers from a reservoir. To simulate heterotypic cell interactions in addition to homotypic ones, cells of differentiated lineages are recombined. Co-culturing epithelial and fibroblast cell clones from the mammary gland allows the cells to differentiate functionally under the correct hormonal environment, thus producing milk proteins. Primary cell cultures are obtained directly from tissues and organs by mechanical or chemical disintegration or by enzymatic digestion, and the cells are induced to grow in suitable glass or plastic containers with complex media. These cultures usually have a low growth rate and are heterogeneous; however, they are still preferred over cell lines because they are more representative of the cell types in the tissues from which they are derived. The morphology of cells in culture is of various types: (1) epithelium type, which are polygonal in shape, appear flattened as they attach to a substrate, and form a continuous thin layer (i.e., a monolayer) on solid surfaces; (2) epithelioid type, which have a round outline, do not form sheets like epithelial cells, and do not attach to the substrate; (3) fibroblast type, which are angular and elongated, form an open network of cells rather than tightly packed cells, are bipolar or multipolar, and attach to the substrate; and (4) connective tissue type, which are derived from fibrous tissue, cartilage, and bone and are characterized by a large amount of fibrous and amorphous extracellular material.
These cultures represent the best experimental models for in vivo studies. They share the same karyotype as the parent tissue and express characteristics that are not seen in long-cultured cells. However, they are difficult to obtain and have limited lifespans, and potential contamination by viruses and bacteria is a major disadvantage. Depending on the kind of cells in culture, primary cell culture can also be divided into two types. Anchorage-dependent cells require a stable, nontoxic, and biologically inert surface for attachment and growth and are difficult to grow as cell suspensions; mouse fibroblast STO cells are anchorage-dependent cells. Suspension cells do not require a solid surface for attachment or growth and can be grown continuously in liquid media. The source of the cells is the governing factor for suspension culture: blood cells, which are naturally suspended in plasma, can be very easily established in suspension cultures. When primary cell cultures are passaged or subcultured and grown for a long period in fresh medium, they form secondary cultures, which are long-lasting (unlike primary cell cultures) due to the availability of fresh nutrients at regular intervals. Passaging or subculturing is carried out by enzymatic digestion of adherent cells, followed by washing and resuspending the required number of cells in an appropriate volume of growth medium. Secondary cell cultures are preferred as they are easy to grow and readily available; they have been useful in virological, immunological, and toxicological research. This type of culture is useful for obtaining a large population of similar cells, can be transformed to grow indefinitely, and maintains its cellular characteristics. The major disadvantage of this system is that the cells have a tendency to differentiate over time in culture and to generate aberrant cells.
A primary culture, when subcultured, becomes a cell line or cell strain that can be finite or continuous, depending on its lifespan in culture; cell lines are grouped into two types on this basis. Cell lines with a limited number of cell generations and limited growth are called finite cell lines. These cells are slow growing (doubling time 24–96 hours) and are characterized by anchorage dependence and density limitation. Cell lines obtained from in vitro transformed cells or cancerous cells are indefinite (continuous) cell lines and can be grown in monolayer or suspension form. These cells divide rapidly, with a generation time of 12–14 hours, and have the potential to be subcultured indefinitely. Such cell lines may exhibit aneuploidy (Bhat, 2011) or heteroploidy due to an altered chromosome number. Immortalized cell lines are transformed cells with altered growth properties. HeLa cells are an example of an immortal cell line: these are human epithelial cells obtained from a fatal cervical carcinoma, transformed by human papillomavirus 18 (HPV18). Indefinite cell lines are easy to manipulate and maintain; however, they have a tendency to change over time. Nowadays, mammalian cell culture is a prerequisite for the production of biologically active substances on an industrial scale. With advancements in animal cell culture technology, a number of cell lines have evolved and are used for the production of vaccines, therapeutic proteins, pharmaceutical agents, and anticancer agents. For the production of cell lines, human, animal, or insect cells may be used. Cell lines that are able to grow in suspension are preferred as they have a faster growth rate. Chinese hamster ovary (CHO) is the most commonly used mammalian cell line.
When selecting a cell line, a number of general parameters must be considered, such as growth characteristics, population doubling time, saturation density, plating efficiency, growth fraction, and the ability to grow in suspension. Advantages of continuous cell lines:

1. Continuous cell lines show faster cell growth and achieve higher cell densities in culture.
2. Serum-free and protein-free media for widely used cell lines may be commercially available.
3. These cell lines have the potential to be cultured in suspension in large-scale bioreactors.

The major disadvantages of these cultures are chromosomal instability, phenotypic variation in relation to the donor tissue, and changes in specific and characteristic tissue markers (Freshney, 1994). Cells in culture show a characteristic growth pattern: a lag phase and an exponential or log phase, followed by a plateau phase. The population doubling time of the cells can be calculated during the log phase. This is critical and can be used to quantify the response of the cells to different culture conditions, such as changes in nutrient concentration and the effects of hormonal or toxic components. The population doubling time describes the cell division rate within the culture and is influenced by nongrowing and dying cells. The population doubling time, lag time, and saturation density of a particular cell line can be established and characterized for a particular cell type. The growth curve of a normal culture can be divided into a lag phase, log phase, and plateau phase. The lag phase is the initial growth phase after subculture and re-seeding, during which the cell population takes time to recover: the cell number remains relatively constant prior to rapid growth. During this phase, the cell replaces elements of the glycocalyx lost during trypsinization, attaches to the substrate, and spreads out.
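The population doubling time follows directly from two log-phase counts: if a culture grows from N₁ to N₂ cells in time Δt, the number of doublings is log₂(N₂/N₁), and the doubling time is Δt divided by that. A minimal sketch with hypothetical example counts:

```python
import math

def doubling_time_hours(n_start, n_end, elapsed_hours):
    """Population doubling time from two log-phase cell counts."""
    doublings = math.log2(n_end / n_start)
    return elapsed_hours / doublings

# Example: 2e5 cells grow to 1.6e6 in 72 h -> 3 doublings -> 24 h per doubling
print(doubling_time_hours(2e5, 1.6e6, 72))  # 24.0
```

Because the estimate is an average over the interval, counts taken outside the log phase (lag or plateau) will inflate the apparent doubling time.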
During the spreading process, the cytoskeleton reappears; its reappearance is probably an integral part of the process. The log phase is a period of exponential increase in cell number due to continuous division. The length of the log phase depends on the initial seeding density, the growth rate of the cells, and the density at which cell proliferation becomes density-inhibited. This phase represents the most reproducible form of the culture, as the growth fraction and viability are high (usually 90%–100%) and the population is at its most uniform. However, the culture may not be synchronized, and the cells can be randomly distributed across the cell cycle. The culture becomes confluent at the end of the log phase; growth rates are then reduced, and cell proliferation can cease in some cases due to exhaustion of the medium. The cells are in contact with surrounding cells, and the growth surface is fully occupied. At this stage, the culture enters the stationary phase, and the growth fraction falls to between 0% and 10%. The constitution and charge of the cell surface may also change, and there may be a relative increase in the synthesis of specialized versus structural proteins. Animal cell cultures can be grown for a wide variety of cell-based assays to investigate morphology, protein expression, cell growth, differentiation, apoptosis, and toxicity in different environments. Product yields can be increased if cell growth is monitored properly. A number of factors affect the maximum growth of cells in a batch reactor. Regular observation of cells in culture helps monitor cell health and the stage of growth; small changes in pH, temperature, humidity, O2, CO2, dissolved nutrients, etc., can have an impact on cell growth. Continuous monitoring of the growth rate also provides a record that the cells have reached their maximum density within a given time frame.
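During the log phase, exponential growth N(t) = N₀·2^(t/t_d) lets you predict when a culture will reach a target density, and hence when the next passage is due. A sketch under that idealized assumption (real cultures leave the log phase near confluence); the counts and doubling time below are hypothetical:

```python
import math

def hours_to_reach(n_now, n_target, doubling_time_h):
    """Hours of log-phase growth needed to go from n_now to n_target,
    assuming pure exponential growth N(t) = n_now * 2**(t / doubling_time)."""
    return doubling_time_h * math.log2(n_target / n_now)

# Example: 2.5e5 cells, 24 h doubling time, confluence expected near 2e6 cells
print(hours_to_reach(2.5e5, 2e6, 24))  # 72.0
```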
Animal cell cultures show specific characteristics and differ from microbial cultures. The important characteristics of animal cells are a slow growth rate, the requirement of a solid substratum for anchorage-dependent cells, the lack of a cell wall (which makes the cells fragile), and sensitivity to physiochemical conditions such as pH, CO2 levels, etc. Some of the fundamental bioprocess variables are as follows. Temperature is one of the most fundamental variables, as it directly interferes with the growth and production processes. On a small scale, thermostatically controlled incubators can be used to control temperature; however, cell cultures grown on a large scale in bioreactors require more sensitive temperature control. Different bioreactors use different methods to maintain the temperature of the cell culture; typically, temperature in a bioreactor is maintained by a heating blanket and water jacket with a temperature sensor. The pH of the culture medium can be controlled by adding alkali (NaOH, KOH) or acid (HCl) solutions. Addition of CO2 gas to the bioreactor, buffering with sodium bicarbonate, or the use of naturally buffering solutes helps maintain the pH of the culture. A silver chloride electrochemical pH electrode is the most commonly used electrode in bioreactors. Dissolved oxygen is another fundamental variable that must be continuously supplied to the cell culture medium; it is consumed along with a carbon source in aerobic cultures (Moore et al., 1995). Diffusion through the liquid surface or through membranes is one method of providing dissolved oxygen to the medium. The number of viable cells in the culture provides an accurate indication of the health of the cell culture (Stacey and Davis, 2007). Trypan blue and erythrosin B determine cell viability through the loss of cellular membrane integrity.
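The interplay between CO2 gassing and bicarbonate buffering mentioned above is governed by the Henderson-Hasselbalch relation for the carbonic acid system, pH = pKa + log₁₀([HCO₃⁻]/(s·pCO₂)). This is a textbook physical-chemistry relation rather than anything specific to this chapter; the pKa (6.1) and CO2 solubility (0.03 mmol/L per mmHg at 37°C) are standard values, and the concentrations below are illustrative:

```python
import math

def bicarbonate_ph(hco3_mmol_per_l, pco2_mmhg, pka=6.1, co2_solubility=0.03):
    """Henderson-Hasselbalch pH of a CO2/bicarbonate-buffered medium.

    pka and co2_solubility (mmol/L per mmHg) are standard constants
    for the carbonic acid system at 37 degrees C.
    """
    return pka + math.log10(hco3_mmol_per_l / (co2_solubility * pco2_mmhg))

# Example: 24 mmol/L bicarbonate equilibrated with a pCO2 of 40 mmHg
print(round(bicarbonate_ph(24, 40), 2))  # 7.4
```

This is why media formulated with more bicarbonate are paired with higher incubator CO2 settings: raising either term alone shifts the pH.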
Both of these dyes are unable to penetrate the cell membrane when the membrane is intact but are taken up and retained by dead cells (which lack an intact membrane). Erythrosin B stain is preferred over trypan blue as it generates more accurate results with fewer false negatives and false positives. Toxic chemicals in the culture medium affect the basic functions of cells; the cytotoxic effect can lead to cell death or to alterations in metabolism. Methods to assess viable cell number and cell proliferation rapidly and accurately are an important requirement in many experimental situations involving in vitro and in vivo studies. Cell number determination can be useful for determining growth factor activity, the concentration of a toxic compound, drug screening, duration of exposure, changes in colony size, the carcinogenic effects of chemical compounds, and the effects of solvents (such as ethanol, propylene, etc.). The assays to measure viable cells (viability assays) are as follows:

1. Tetrazolium [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] (MTT)/MTS/resazurin assay.
2. Protease marker assay.
3. ATP assay.

The MTT assay allows simple, accurate, and reliable counting of metabolically active cells based on the conversion of the pale yellow tetrazolium salt MTT. Nicotinamide adenine dinucleotide in metabolically active viable cells reduces tetrazolium compounds into brightly colored formazan products, or reduces resazurin into fluorescent resorufin (Fig. 14.1). MTT and resazurin assays are widely used, as they are inexpensive and can be used with all cell types. The protease marker assay utilizes the cell-permeant protease substrate glycylphenylalanyl-aminofluorocoumarin (GF-AFC). The substrate, which lacks an amino-terminal blocking moiety, is processed by aminopeptidases within the cytoplasm to release AFC. The amount of AFC released is proportional to the viable cell number. This assay has better sensitivity than resazurin, and the cells remain viable; thus, multiplexing is possible.
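The dye-exclusion readout reduces to a ratio: unstained cells are scored live, stained cells dead. A minimal sketch with hypothetical example counts:

```python
def dye_exclusion_viability(live_count, dead_count):
    """Percent viability from a trypan blue (or erythrosin B) exclusion count:
    unstained cells are scored as live, stained cells as dead."""
    total = live_count + dead_count
    return 100.0 * live_count / total

# Example: 180 unstained (live) and 20 stained (dead) cells counted
print(dye_exclusion_viability(180, 20))  # 90.0 (percent viable)
```

Multiplying the viability fraction by the total concentration from the hemocytometer gives the viable-cell concentration used when seeding new cultures.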
The ATP assay is the most sensitive cell viability assay; ATP is measured using the beetle luciferase reaction to generate light (Fig. 14.1 gives a schematic summary of the biochemical events in the different viability assays). The MTT assay and procedure are given in Flowchart 14.2. Assays to detect dead cells are as follows. Viable cells in culture have intact outer membranes, and loss of membrane integrity defines a "dead" cell. Dead cells can be detected by measuring the activity of marker enzymes that leak out of dead cells into the culture medium, or by staining the cytoplasmic or nuclear content with vital dyes that can only enter dead cells. LDH is an enzyme that is present in all cell types. It catalyzes the oxidation of lactate to pyruvate in the presence of the coenzyme NAD+. In damaged cells, LDH is rapidly released, and the amount of released LDH is used to assess cell death (Fig. 14.2). This assay is widely used but has limited sensitivity, as the half-life of LDH at 37°C is 9 hours. The protease release assay is based on the release of intracellular proteases from dead or membrane-compromised cells into the culture medium. The released proteases cleave a substrate to liberate aminoluciferin, which serves as a substrate for luciferase (Fig. 14.3) and leads to the production of a "glow-type" signal (Cho et al., 2008). The Hayflick limit, or Hayflick's phenomenon, is defined as the number of times a normal cell population divides before entering the senescence phase. Macfarlane Burnet coined the term "Hayflick limit" in 1974. Hayflick and Moorhead (1961) demonstrated that a population of normal human fetal cells divides in culture between 40 and 60 times before stopping. There appears to be a correlation between the maximum number of passages and aging. This phenomenon is related to telomere length: repeated mitosis leads to shortening of the telomeres on the DNA of the cell. Telomere shortening in humans eventually makes cell division impossible and correlates with aging.
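Progress toward the Hayflick limit is tracked as cumulative population doublings rather than passage number, since a 1:8 split costs three doublings while a 1:2 split costs only one. A sketch of that bookkeeping, with hypothetical per-passage counts:

```python
import math

def cumulative_population_doublings(passages):
    """Total population doublings over a series of passages.

    Each passage is a (seeded, harvested) cell-count pair; doublings
    per passage = log2(harvested / seeded).
    """
    return sum(math.log2(harvested / seeded) for seeded, harvested in passages)

# Example: 15 identical passages, each expanding 2e5 cells to 1.6e6 (3 doublings each)
cpd = cumulative_population_doublings([(2e5, 1.6e6)] * 15)
print(cpd)  # 45.0 -- within the 40-60 doubling range Hayflick and Moorhead reported
```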
This explains the decrease in the passaging capacity of cells harvested from older individuals. One of the most important factors in animal cell culture is the composition of the medium. In vitro growth and maintenance of animal cells require appropriate nutritional, hormonal, and stromal factors that resemble their in vivo milieu as closely as possible. Important environmental factors are the medium surrounding the cells, the substratum upon which the cells grow, temperature, oxygen and carbon dioxide concentrations, pH, and osmolality. In addition, the cells require chemical substances that they cannot synthesize themselves. Any successful medium is composed of isotonic, low-molecular-weight compounds (known as the basal medium) and provides inorganic salts, an energy source, amino acids, and various supplements. The 10 basic components that make up most animal cell culture media are as follows: inorganic salts (Ca2+, Mg2+, Na+, K+), a nitrogen source (amino acids), energy sources (glucose, fructose), vitamins, fats and fat-soluble components (fatty acids, cholesterol), nucleic acid precursors, growth factors and hormones, antibiotics, pH and buffering systems, and oxygen and carbon dioxide concentrations. The complete formulation of a medium that supports the growth and maintenance of a mammalian cell culture is very complex. For this reason, the first culture media used for cell culture were based on biological fluids such as plasma, lymph, serum, and embryonic extracts. The nutritional requirements of cells can vary at different stages of the culture cycle. Different cell types have highly specific requirements, and the most suitable medium for each cell type must be determined experimentally. Media may be classified into two categories: (1) natural media and (2) artificial media. Natural media consist of naturally occurring biological fluids sufficient for the growth and proliferation of animal cells and tissues.
Natural media useful for promoting cell growth are of the following three types:

1. Coagulants or clots: plasma separated from the heparinized blood of chickens or other animals, commercially available in the form of liquid plasma.
2. Biological fluids: body fluids such as plasma, serum, lymph, amniotic fluid, pleural fluid, insect hemolymph, and fetal calf serum, used as cell culture media after testing for toxicity and sterility.
3. Tissue extracts: extracts of liver, spleen, bone marrow, and leucocytes; chicken embryo extract is the most common tissue extract used in culture media.

Artificial media contain partly or fully defined components that are prepared by adding several organic and inorganic nutrients. They contain a balanced salt solution with a specific pH and osmotic pressure designed for the immediate survival of cells. Artificial media supplemented with serum, or with suitable formulations of organic compounds, support prolonged survival of the cell culture. Artificial media may be grouped into four classes: serum-containing media, serum-free media, chemically defined media, and protein-free media. The clear yellowish fluid obtained after fibrin and cells are removed from blood is known as serum. It is an undefined media supplement: an extremely complex mixture of small and large molecules containing amino acids, growth factors, vitamins, proteins, hormones, lipids, and minerals, among other components (Table 14.3). Advantages of serum in cell culture medium:

1. It provides basic nutrients, present either in soluble or in protein-bound form.
2. It contains hormones and transport proteins such as insulin and transferrin: insulin is essential for the growth of nearly all cells in culture, and transferrin acts as an iron binder.
3. It contains numerous growth factors such as platelet-derived growth factor (PDGF), transforming growth factor beta (TGF-β), epidermal growth factor (EGF), and chondronectin.
These factors stimulate cell growth and support specialized functions of cells.

4. It supplies proteins that help in the attachment of cells to the culture surface (e.g., fibronectin).
5. It provides binding proteins such as albumin and transferrin, which help transport molecules into cells.
6. It provides minerals such as Ca2+, Mg2+, Fe2+, K+, Na+, Zn2+, etc., which promote cell attachment.
7. It increases the viscosity of the medium, which provides protection against mechanical damage during the agitation and aeration of suspension cultures.
8. It provides appropriate osmotic pressure.

Disadvantages of serum-containing medium:

1. Expense: fetal calf serum is expensive and difficult to obtain in large quantities.
2. Variability: there is no uniformity in the composition of serum, which can affect growth and yields and give inconsistent results.
3. Contamination: serum carries a high risk of contamination with viruses, fungi, and mycoplasma.
4. Cytotoxicity: the serum itself may be cytotoxic and may contain inhibiting factors that suppress the growth and proliferation of cultured cells; for example, the enzyme polyamine oxidase in serum reacts with polyamines such as spermine and spermidine to form cytotoxic polyaminoaldehydes.
5. Downstream processing: the presence of serum in culture media may interfere with the isolation and purification of culture products, and additional steps may be required to isolate them.

The use of serum in culture media also presents a safety hazard and a source of unwanted contamination in the production of biopharmaceuticals. As a number of cell lines can be grown in serum-free media supplemented with certain components of bovine fetal serum, the development of this type of medium with a defined composition has intensified in the last few decades. Eagle (1959) developed a "minimal essential medium" composed of balanced salts, glucose, amino acids, and vitamins.
In the last 50 years, considerable work has been carried out to develop more efficient culture media that meet the specific requirements of particular cell lines.

Advantages of serum-free culture media:

1. Serum-free media are simplified, and their composition is better defined.
2. They can be designed specifically for a cell type. It is possible to create different media and to switch from growth-enhancing media to differentiation-inducing media by altering the combination and types of growth factors and inducers.
3. They decrease batch-to-batch variability and improve reproducibility between cultures.
4. Downstream processing of products from cell cultures in serum-free media is easier.
5. They reduce the risk of microbial contamination (mycoplasma, viruses, and prions).
6. Serum-free media are easily available and ready to use, and can be cost-effective compared with serum-containing media.

Disadvantages of serum-free media:

1. The growth rate and saturation density attained are lower than those in serum-containing media.
2. Serum-free media can prove more expensive, as supplementation with hormones and growth factors increases the cost enormously.
3. Different media are required for different cell types, as each has its own characteristic requirements.
4. Critical control of pH and temperature, and ultrapurity of reagents and water, are required compared with serum-containing media.

Chemically defined media contain pure inorganic and organic constituents along with protein additions such as EGF, insulin, vitamins, amino acids, fatty acids, and cholesterol; protein-free media contain only the nonprotein constituents necessary for cell culture. DME, MEM, RPMI-1640, ProCHO™, and CDM-HD are examples of such formulations. The characterization of cell lines is important to ensure the quality of cell-derived biopharmaceutical products. It helps in determining the cell source with regard to its identity and the presence of other cell lines, molecular contaminants, and endogenous agents.
The characterization of mammalian cell lines is species-specific and can vary depending on the history of the cell line and the type of media components used for culturing. Mammalian cell line characterization can be done in four ways. Identity testing can be carried out by isoenzyme analysis: the banding pattern of intracellular enzymes (which is species-specific) can be determined using agarose gels. DNA fingerprinting, karyotyping, and DNA and RNA sequencing are alternative methods of identity testing. Karyotyping is important as it detects any gross chromosomal changes in the cell line; the growth conditions and subculturing of a cell line may lead to alterations in the karyotype. For example, HeLa cells were the first human epithelial cancer cell line established in long-term culture, and they have a hypertriploid chromosome number (3n+). Bacterial and fungal contamination of cell lines occurs due to imperfect technique and contaminated source material. The occurrence of contaminants can be tested by a direct inoculation method on two different media. Mycoplasma infection, the contamination of cell cultures or cell lines with mycoplasmas, represents a serious problem. Detection by microscopy is not adequate; it requires additional testing by fluorescent staining, PCR, ELISA, autoradiography, immunostaining, or microbiological assay. Characterization and testing of the cell substrate (the cell line derived from a human or animal source) is one of the most important components in the control of biological products. It helps to confirm the identity, purity, and suitability of the cell substrate for manufacturing use. Substrate stability should be examined at a minimum of two time points during cultivation for production. In addition, genetic stability can be tested by genomic or transcript sequencing, restriction map analysis, and copy number determination (FDA guidelines, 2012). Virus testing of the cell substrate should be designed to detect a spectrum of viruses.
Appropriate screening tests should be carried out based on the cultivation history of the cell lines. The development of a characteristic cytopathogenic effect (CPE) provides an early indication of viral contamination. Some viruses of special concern in cell production work are human immunodeficiency virus, human papillomavirus, hepatitis virus, human herpes virus, hantavirus, simian virus, Sendai virus, and bovine viral diarrhea virus. For the detection of viruses causing immunodeficiency diseases and hepatitis, detection of sequences by PCR testing is adequate. Cells exposed to bovine serum or bovine serum albumin require a bovine virus test. Some of the viral testing assays are the XC plaque assay, the S+L− focus assay, and reverse transcriptase assays. The XC plaque assay is utilized to detect infectious ecotropic murine retroviruses. The S+L− focus assay is used to test cells for the presence of infectious xenotropic and amphotropic murine retroviruses that are capable of interacting with both murine and nonmurine cells. Reverse transcriptase (RT) assays, such as the fluorescent product-enhanced reverse transcriptase (F-PERT) assay and the quantitative fluorescent product-enhanced reverse transcriptase (Q-PERT) assay, detect the conversion of an RNA template to cDNA due to the presence of RT when a retrovirus infection is present in the cell line. Among the advantages of animal cell culture is control of the physiochemical and physiological environment: the role and effect of pH, temperature, O2/CO2 concentration, and osmotic pressure of the culture media can be altered to study their effects on the cell culture (Freshney, 2010). On the other hand, this system cannot replace the complex live animal for testing the response of chemicals or the impact of vaccines or toxins.
despite considerable progress in the development of cell culture techniques, the potential biohazards of working with animal and human tissues present a number of ethical problems, including issues of procurement, handling, and ultimate use of the material. in most countries, biomedical research is strictly regulated, although legislation varies considerably between countries. research ethics committees, animal ethics committees for animal-based research, and institutional review boards for human subjects have a major role in research governance. some guidelines for the use of experimental or donor animals include assurances of proper conditions for housing animals and of minimal pain or discomfort to any animal that is put to death or operated upon. these guidelines apply to higher vertebrates and not to lower vertebrates such as fish or to invertebrates. fetal bovine serum (fbs)-supplemented media are commonly used in animal cell cultures. in recent years, fbs production methods have come under scrutiny because of animal welfare concerns. fbs is harvested from bovine fetuses taken from pregnant cows during slaughter. the common method of harvesting is cardiac puncture of the fetus without any anesthesia. this practice is considered inhumane as it exposes the fetus to pain and/or discomfort. in addition to these moral concerns, numerous scientific and technical problems exist with regard to the use of fbs in cell culture. efforts are now being made to reduce the use of fbs and replace it with synthetic alternatives. in the case of human tissues, additional ethical considerations need to be addressed (freshney, 2010). in biomedical research, the use of animal and human cell cultures has become beneficial for diverse applications. it provides indispensable tools for producing a number of products, including biopharmaceuticals, mabs, and products for gene therapy. 
in addition, animal cell cultures provide adequate test systems for studying biochemical pathways, intra- and intercellular responses, pathological mechanisms, and virus production. some of the applications of animal cell culture are discussed below. animal cell culture technology has played an important role in the development of viral vaccine production. the establishment of cell culture technology in the 1950s and the consequent replacement of live animals for the production of antigens have led to considerable progress in bioprocess technology. with the advent of recombinant dna technology, molecular manipulation of viruses has led to the development of a recombinant vaccine against hepatitis b virus (hbv), and several other potential vaccines are in the final phase of clinical trials. viral particle production by cell culture differs from the production of molecules such as proteins, enzymes, and toxins by bacteria or animal cells. for such molecules, product formation may not be related to the development or growth of a cell and may occur through secondary metabolic pathways. virus production, by contrast, does not result from a secondary metabolic pathway; it occurs after viral infection directs the cell machinery to produce viral particles. two stages are involved in viral production: 1. cell culture: this requires the development of an efficient system for conversion of the culture medium substrate into cell mass. 2. virus production: this phase differs from the cell growth phase and has different nutritional and metabolic requirements. a number of immortalized cell lines are used for the industrial production of viral vaccines; table 14.5 gives the cell lines used for vaccines. most of the existing classical vaccines for viral diseases are either attenuated or chemically inactivated live viruses. 
however, incomplete inactivation of a virus or reversion of an attenuated strain can risk infection in vaccinated individuals. viruses with segmented genomes and a high degree of genetic exchange can undergo reassortment or recombination of genetic material with viruses of different serotypes in the vaccinated host, which can result in the production of new variants of the virus. moreover, some live virus vaccines are teratogenic; examples are the smithburn neurotropic strain (sns) (smithburn, 1949) and mp-12 attenuated (caplen et al., 1985) vaccine strains of the rift valley fever virus. a new type of vaccine that does not present the typical side effects of an attenuated or inactivated viral vaccine has been made possible with the development of rdna technology. virus-like particles (vlps) are highly effective as they mimic the overall structure of the virus while lacking the infectious genetic material. capsid proteins can aggregate to form core-like particles in the absence of nucleic acids. these spontaneously assembled particles are structurally similar to authentic viruses and are able to stimulate b-cell-mediated immune responses. in addition, vlps stimulate a cd4+ t-cell proliferative response and a cytotoxic t-lymphocyte response (jeoung et al., 2011). vlps resemble and mimic the virus structure and are able to elicit a strong immune response without causing harm. the major advantages of vlps are their simplicity and nonpathogenic nature. they are replication-deficient as they lack any viral genetic information, thus eliminating the need for inactivation of the virus. this is important because inactivation treatments can lead to epitope modifications (cruz et al., 2002). as the structural morphology of vlps is similar to that of the virus, the conformational epitopes presented to the immune system are the same as for native virus particles. 
the immune response and antibody reactivity in the case of vlps are significantly improved, as vlps present conformational epitopes more similar to those of the native virus. vlps also induce a strong b-cell response. for broader and more efficient protection, it is possible to attach one or more antigens to the multimeric protein structure. another advantage offered by vlps is that they significantly reduce vaccine costs, as they can elicit a protective response at lower doses of antigen. the fda has approved vlp-based vaccines for hbv and hpv: the hbv vaccine was approved in 1986 and the hpv vaccine in 2006 (justin et al., 2011). to generate immunogenic vlps, the s gene is cloned and expressed in a eukaryotic expression host such as yeast or mammalian cells (e.g., the cho cell line). mammalian cell culture allows easy recovery because the cells are able to secrete the antigen hbsag. the two companies producing cho-based vaccines are the french-based pasteur-merieux aventis (genhevac b) and the israeli-based scigen (sci-b-vac). the genhevac b vaccine contains the hbsag s protein and m protein, whereas sci-b-vac contains the m and l proteins. viruses of the papillomaviridae family are known to induce lesions and warts and also cause cervical cancer. fifteen strains of papillomaviridae are known to cause cervical cancer. hpv-16 is considered a high-risk hpv type, as its associated risk of cancer is higher than that of the other high-risk hpv types. the two virally encoded capsid proteins of hpv are l1 and l2. l1 is the main capsid protein that forms the outer shell of the virus. l2 is found in the interior of the viral particle and is less abundant. recombinant l1 vlps are able to induce neutralizing antibodies in animals. gardasil (the first hpv vaccine), manufactured by merck and co., inc., was approved by the fda in 2006. cervarix, another hpv vaccine (manufactured by glaxosmithkline), was approved by the fda in 2009. 
cervarix uses the trichoplusia ni (hi-5) insect cell line infected with an l1 recombinant baculovirus (jiang et al., 1998; wang et al., 2000). a number of other vlp-based vaccines are in clinical trials. these include the anti-influenza a m2-hbcag vlp vaccine (clarke et al., 1987), antimalarial vaccines, a nicotine-qβ vlp vaccine (maurer et al., 2005), and an anti-angii qβ vlp vaccine. vlp production in mammalian and baculovirus-infected insect cell lines for viruses infecting humans and other animals is summarized in table 14.6. proteins play a major role in carrying out biochemical reactions, transporting small molecules within a cell or from one organ to another, forming receptors and channels in membranes, and providing frameworks for scaffolding. the number of functionally distinct proteins in humans far exceeds the number of genes as a result of post-translational modifications. these modifications include glycosylation, phosphorylation, ubiquitination, nitrosylation, methylation, acetylation, and lipidation. changes in protein structure as a result of mutation or other abnormalities often lead to a disease condition. protein therapeutics therefore offer tremendous opportunities for alleviating disease. the first therapeutic from recombinant mammalian cells was human tissue plasminogen activator, which obtained market approval in 1986. at present, 60%–70% of all recombinant therapeutic proteins are produced in mammalian cells. the main therapeutic proteins can be divided into seven groups (walsh, 2003): 1. cytokines, 2. hematopoietic growth factors, 3. growth factors, 4. hormones, 5. blood products, 6. enzymes, and 7. antibodies. most of these proteins have complex structures and undergo chemical modification to ensure full biological activity. protein post-translational modification (ptm) can happen in several ways. the most widely recognized form of ptm is glycosylation, which involves extensive sequence processing and trimming in the endoplasmic reticulum and golgi apparatus. 
eukaryotic cells are capable of carrying out this type of modification and are thus preferred in biopharmaceutical processes. baby hamster kidney (bhk) and chinese hamster ovary (cho) cells are often the host cells of choice, as the glycosylation patterns generated by these cells are similar to human patterns. table 14.7 lists various therapeutic proteins produced in animal cell lines. cytokines are proteins of the immune system that play a central role in the immune response. cytokines are produced by various white blood cells in response to an immune stimulus. interferons (ifns) were the first family of cytokines to be discovered and used as biopharmaceuticals. ifnα is used for the treatment of hepatitis, and more recently it has been approved for leukemia and other types of cancer. ifnβ is used for the treatment of multiple sclerosis and is marketed under the names avonex, betaseron, and rebif. ifnγ is used for the treatment of chronic granulomatous disease. interleukins are another kind of cytokine; they help regulate cell growth, differentiation, and motility, and are used as biopharmaceuticals. the recombinant form of il-2 is used for the treatment of renal cell carcinoma. growth factors are proteins that bind to receptors on the surface of cells to activate the cells for proliferation and/or differentiation. the different types of growth factors include transforming growth factor (tgf), insulin-like growth factor (igf), epidermal growth factor (egf), and platelet-derived growth factor (pdgf). the primary sources of pdgf are platelets, endothelial cells, and the placenta. two isoforms of this protein are present in the human body, and both of them have one glycosylation site and three disulfide bonds. examples of growth factors used as biopharmaceuticals are the following: 1. osigraft/eptotermin alfa (a bone morphogenetic protein) is used for the treatment of tibia fractures; it is grown commercially in cho cells and was first approved in europe in 2001. 2. a related bone morphogenetic protein product, used in bone surgery, is also commercially grown in cho cells; it was first approved in europe in 2002. 
insulin, glucagon, gonadotropins, and growth hormones are the most well-known therapeutic hormones. the first biopharmaceuticals that obtained approval from regulatory agencies were insulin and recombinant human growth hormone; these were produced in microbial cells. the commercial recombinant forms of the gonadotropin family of hormones are gonal-f, luveris, puregon, and ovitrelle. all of these are produced using cho cells and are used for treating female infertility. a number of recombinant therapeutic enzymes are expressed in mammalian cells. tissue plasminogen activator (tpa) is a thrombolytic agent involved in dissolving blood clots. recombinant tpa is commercially known as alteplase and tenecteplase, which are used for the treatment of acute myocardial infarction. fabry disease, a genetic metabolic disorder, is characterized by a lack of the enzyme α-galactosidase a; recombinant α-galactosidase a is produced by genetically modified cho cells. hemophilia a is caused by a lack of blood-clotting factor viii, hemophilia b by a deficiency of factor ix, and hemophilia c by a lack of factor xi. factors viii and ix are proteins. the first recombinant factor viii products were recombinate and kogenate, which were expressed in cho and bhk cells, respectively. recombinant factor ix is commercially sold as benefix and is produced in recombinant cho cells. therapeutic antibodies are used in the treatment of cancer, cardiovascular disease, infections, and autoimmune diseases. in 2004, the antibody avastin (bevacizumab) was approved for the treatment of metastatic colorectal cancer. this antibody acts as an inhibitor of vascular endothelial growth factor. zenapax, another commercially available antibody, is used prophylactically for preventing the rejection of transplanted organs. it is commercially grown in the ns0 cell line and was approved for human use in 1997. 
gene therapy involves the insertion, removal, or alteration of a therapeutic or working gene copy to cure a disease or defect or to slow the progression of a disease, thereby improving the quality of life. the human genome map was the first major step toward a new way of addressing human health and illness. gene therapy holds great promise; however, the task of transferring genetic material into the cell remains an enormous technical challenge and requires ex vivo cell cultivation and adaptation from the lab to a clinically relevant state. the development of animal cell culture technology is therefore imperative for advances in gene therapy. monogenic diseases caused by single-gene defects (such as cystic fibrosis, hemophilia, muscular dystrophy, and sickle cell anemia) are the primary targets of human gene therapy. the first step in gene therapy is to identify the faulty gene. this is followed by gene isolation and the generation of a construct for correct expression. integration of the gene, followed by delivery of the genetic material in vivo or ex vivo, is crucial to the success of gene therapy. in in vivo therapy, the genetic material is introduced directly into the individual at a specific site. in ex vivo treatment, the target cells are cultured and treated outside the patient's body, expanded, and then transferred back to the individual at the targeted tissue. a number of clinical studies and trials for gene therapy have been approved and are being conducted worldwide. since 1989, about 500 clinical studies have been reported; 70% of these studies are intended for cancer treatment. the first product designed for gene therapy was gendicine, a medication produced by shenzhen sibiono genetech, china. gendicine is used for the treatment of head and neck carcinoma. 
the tumor-suppressor gene p53, carried in a recombinant adenovirus, expresses protein p53, which leads to tumor control and elimination. sbn-cel, a cell line subcloned from the human embryonic kidney (hek) 293 cell line, has been used for the production of gendicine. in recent years, biopesticides have gained importance due to increased concerns about agrochemicals and their residues in the environment and food. biopesticides provide an effective and environmentally safe means for the control of insects and plant disease. the biological control of insect pests by another living organism (in order to reduce the use of chemical pesticides) is an age-old practice, and a number of biological controls are currently used as biopesticides. with the high cost of chemical pesticides and the development of resistance to multiple chemical pesticides, baculoviruses are one of the most promising biocontrol agents for insect pests and have been used increasingly and effectively against caterpillars worldwide. however, the major impediments to the development of baculoviruses as biopesticides are the high cost and small volumes of in vitro production methods. the development of an in vitro production process for large quantities of baculoviruses, at costs comparable to chemical pesticides, would provide insect control that is safe, efficacious, cost-effective, and environmentally sound. a number of factors are important for the successful commercial production of baculovirus bioinsecticides in animal cell culture: 1. large-scale production of viruses at competitive costs. 2. economic production of viruses (i.e., low costs for the media and for running the culture). 3. an effective cell line with high per-cell virus productivity. 4. avoidance of the loss of virulence and the increased risk of mutant formation that occur with serial passage of the virus in cells. 5. quality of the polyhedra produced in cell culture comparable to those obtained from caterpillars. 
the insect cell-baculovirus system offers a number of advantages. it produces recombinant proteins that are functional and immunologically active, as it is able to make post-translational modifications. the recombinant system uses the powerful polyhedrin promoter. the most commonly used cell lines in biopesticide production are the sf21 and sf9 cell lines, which are derived from ovarian tissue of the fall armyworm (spodoptera frugiperda). sf9 cells show a faster growth rate and higher cell density than sf21 cells and are preferred. the high five cell line (designated bti-tn-5b1-4), established from trichoplusia ni embryonic tissue, is also used. the continuous culturing of cells for virus production leads to virus instability, the so-called passage effect. this can result in a decrease in virulence and polyhedra production and in a variety of mutations, all of which affect commercial production in vitro. two types of mutants are commonly seen in continuous passaging of cell cultures for virus production: (1) defective interfering particles (dips) and (2) few polyhedra (fp) mutants. fp mutants are characterized by (1) reduced polyhedra production, (2) enhanced production of budded virus (bv), and (3) a lack of occluded virions in the polyhedra. all these factors reduce infectivity for the target pest. spontaneously generated fp mutants have been reported in autographa californica nucleopolyhedrovirus (acmnpv) (wood, 1980), galleria mellonella nucleopolyhedrovirus (gmmnpv) (fraser and hink, 1982), and helicoverpa armigera nucleopolyhedrovirus (hasnpv) (chakraborty and reid, 1999). dips arise due to serial passaging for long periods, which results in a decrease in the titer of infectious virus. dips have been reported in a number of animal virus systems as well as in baculovirus systems. dip formation can be avoided by using a low multiplicity of infection (moi). 
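a minimal sketch of why a low moi limits dip accumulation, under the standard assumption (not stated in this chapter) that virions distribute over cells according to a poisson distribution: the fraction of cells receiving two or more virions, i.e., potential co-infections of a defective particle with an intact helper virion, falls sharply as moi decreases.

```python
import math


def coinfection_fraction(moi: float) -> float:
    """fraction of cells infected by two or more virions,
    assuming virion-per-cell counts follow poisson(moi)."""
    p_zero = math.exp(-moi)          # p(0 virions)
    p_one = moi * math.exp(-moi)     # p(exactly 1 virion)
    return 1.0 - p_zero - p_one


# illustrative moi values (hypothetical, not from this chapter):
# at moi 0.1, fewer than 0.5% of cells are multiply infected;
# at moi 5, almost every infected cell is multiply infected
low = coinfection_fraction(0.1)
high = coinfection_fraction(5.0)
```

this is why serial passaging protocols that aim to suppress the passage effect infect at a low moi, at the cost of needing more passaging rounds to amplify the virus stock.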
this minimizes the probability of a defective virus entering a cell together with an intact helper virion. the majority of antibodies available on the market today are produced in animal cell cultures (van dijk and van de winkel, 2001). animal cells are preferred because they are capable of glycosylation and correct structural conformation, which are essential for a drug to be effective. hybridoma technology has been the most widely used method for small- and large-scale production of mabs. however, such antibodies have limited therapeutic applications, since they provoke an adverse immune response on repeated use. a number of cell lines are now being used for the production of recombinant antibodies; the cho lines are the most commonly used. other cell lines used are the murine myelomas ns0 and sp2/0, hek-293, and bhk. a number of factors influence the production of mabs. to achieve a high concentration of mab, the selected cell line should have high specific productivity, which avoids large reaction volumes and the high cost of protein purification. cell lines with the capacity to grow without anchorage offer an advantage in terms of scaling up, as the process is much simpler than for anchorage-dependent cells. sp2/0 and ns0 cell lines can grow naturally in suspension; other cells, such as cho and bhk, can easily be adapted to this form of cultivation. stem cells are unspecialized cells that have the potential to differentiate into other kinds of cells or tissues and become specialized cells. the two characteristics that define stem cells are their ability to self-renew and to differentiate into other cell or tissue types. these cells have the capability to renew themselves and to form cells of more specialized function. in recent years, stem cell research has been hailed as a major breakthrough in the field of medicine. 
this property of being able to turn into any specialized cell type has led researchers to believe that stem cells could be utilized to make fully functional, healthy organs to replace damaged or diseased ones. human embryonic stem cells (hescs) are grown in nutrient media. these cells are traditionally cultured on mouse embryonic fibroblast feeder layers, which allow continuous growth in an undifferentiated state. the mouse cells at the bottom of the culture dish provide a sticky surface to which the cells can attach; in addition, the feeder cells release nutrients into the culture medium. researchers have now devised animal-free culture systems for hescs, using human embryonic fibroblasts and adult fallopian tube epithelial cells as feeder layers (in addition to serum-free media). more recently, methods to subculture embryonic cells without a feeder layer have been developed. matrigel from bd biosciences, a gelatinous protein mixture isolated from mouse tumor cells, has been used to coat the culture plate (hassan et al., 2012) for effective attachment and differentiation of both normal and transformed anchorage-dependent epithelioid and other cell types. a major milestone in the biological sciences was the establishment of the tissue culture technique, which can both maintain and propagate the growth of living cells under sterile in vitro conditions. traditional cell cultures, which are two-dimensional (2d), are grown as monolayers on a flat and rigid surface. since their development, several advancements have been made in cell culture media as well as in the biological materials used for culturing. these improvements, combined with modern analytical techniques such as fluorescence, electrochemistry, and mass spectrometry, have proven valuable for cell-based studies. 
2d cell culture does not provide an adequate model of the in vivo environment, where cells are surrounded by other cells in a three-dimensional (3d) extracellular matrix (ecm) (edmondson et al., 2014). cells under in vivo conditions continuously produce and consume oxygen, nutrients, and other molecules, and such dynamic distributions are not mimicked in conventional 2d cell cultures. moreover, 2d cell cultures fail to recapitulate the highly complex 3d environment, function, and physiology of living tissues, the multitudinous regulatory interactions from surrounding tissue cells, the ecm, and other systemic factors, which leads to nonpredictive data on in vivo responses (li et al., 2012). the limitations of 2d cell culture systems have recently become more evident. advances in the fields of quantitative and systems biology and in imaging technology have allowed the analysis of individual cells and the observation of live individual cells growing in a natural physiological 3d environment. cells cultured in a 3d model system more closely mimic in vivo conditions. thus, unlike 2d cell cultures, which can sometimes yield misleading and nonpredictive data on in vivo responses, 3d systems are more realistic for translating study findings. compared to the 2d cell culture system, the 3d cell culture system provides a physiologically relevant and closer biomimetic environment, promotes better cell differentiation, and improves cell function (edmondson et al., 2014). the 3d culture system holds great promise for applications in various fields, such as cancer cell biology, stem cell research, drug discovery, and various cell-based analyses and devices. while this culturing model offers state-of-the-art technology for facilitating drug development and numerous other applications, several hurdles remain before a universal, standardized, and validated system can be established (sung et al., 2014). 
recent developments in the transition from 2d to 3d cell cultures indicate promising applications for many industries; however, the cost of automation and the need for easy-to-use readout systems are still key concerns. the 3d cell culture system provides a powerful tool that mimics the highly complex and dynamic in vivo environment, and it has gained greater momentum with the integration of microfluidic technology. microfluidics is the science and technology of manipulating, mixing, monitoring, and analyzing minute volumes of fluids or gases at the micron scale, within small capillaries or microchannels on the surfaces of chips; it was developed to improve diagnostics and cell culture research. this technology is ideal because it recreates the microenvironment of the vasculature, and it has become a powerful tool in cell culture research. it encompasses knowledge from the biological sciences, chemistry, physics, and engineering (xu and attinger, 2008). the microfluidic 3d cell culture model also allows precise spatial control over gradients and medium exchange. it not only mimics but also promotes several biologically relevant functions not seen in 2d cell culture. furthermore, it has been increasingly used to generate high-throughput cell culture models and has shown considerable promise for improving diagnostics and biological research (el-ali et al., 2006). notably, microfluidic cell cultures are potential candidates for next-generation cell analysis systems. several 3d-based cell culture approaches have been created to provide a better biomimetic microenvironment for cells than 2d cultures. in addition, crucial liquid-handling steps, including cell loading, nutrient supply, and waste removal, can be performed under physiologically relevant conditions with real-time microscopy (xu et al., 2014). 
numerous microfluidic devices have been developed not only to provide nutrients and oxygen continuously for cell proliferation but also to investigate several characteristics of a dynamic 3d cell culture, such as the effects of concentration differences, temperature gradients, and shear force conditions on cell transport and cultivation. based on the substrates used for microdevice fabrication, the microfluidic platforms developed for 3d cell culturing include glass/silicon-based, polymer-based, and paper-based platforms. polydimethylsiloxane (pdms)-based microdevices are the predominant form of microfluidic 3d cell culture system because they are economical and permeable to o2, which is vital for cell proliferation. to provide an in vivo-like environment that resembles living tissues, several natural polymers, such as collagen, fibrin, and agarose, have been used to fabricate microfluidic devices (li et al., 2012). microfluidics technology has emerged as a viable and robust platform for tissue engineering, a multidisciplinary field aimed at replacing and repairing damaged and diseased tissues and/or organs and at developing in vitro models that mimic physiological conditions. successful applications include the development of organ-on-a-chip technology, a microfluidic perfusion device for regenerative medicine, and chip-based platforms for the culture of cells and for toxicological studies. scientists currently rely on in vitro cell culture platforms and in vivo animal models to study biological processes and develop therapeutic strategies; although informative, both have significant shortcomings (ziółkowska et al., 2011). in vitro platforms may not simulate the intricate cell-cell and cell-matrix interactions that are vital to regulating cell behavior in vivo (guillouzo & guguen-guillouzo, 2008). organ-on-a-chip devices could offer biological relevance and meet the requirements of high-throughput applications. 
an organ-on-a-chip is a microfluidic cell culture device comprising a microchip with continuously perfused chambers that are seeded with living cells arranged to mimic the 3d tissue microenvironment and physiology (ghaemmaghami et al., 2012). these chips have the potential to significantly impact drug discovery and toxicity testing (ghaemmaghami et al., 2012). the simplest functional unit of an organ-on-a-chip device consists of a single perfused microfluidic chamber containing a single type of cultured cell. these systems are utilized for studying organ-specific responses to chemical stimuli, such as drugs or toxins, and to physical stimuli. in more complex systems, two or more independently perfused parallel microchannels are connected by porous membranes to recreate the interfaces between different tissues. numerous tissue models have been developed to mimic in vivo biological processes. on-chip tissue models include those for the liver, kidney, lungs, intestines, muscle, fat, and blood vessels, as well as models of tumors. various chemicals and drugs, when administered over a long period, can result in adverse effects and acute liver toxicity, known as hepatotoxicity (gershell & atkins, 2003). the in vitro models used for identifying drug-induced liver toxicity have drastically limited utility; therefore, efficient and reliable tools for testing liver toxicity are required. microfluidic devices for liver tissue and cells that can maintain metabolic activity have shown great potential for solving this problem in drug discovery and toxicity studies. domansky et al. (2010) developed bioreactors with a perfused multiwell plate device to recapitulate both the physiological and mechanical microenvironments of hepatocytes, supporting growth and functional integrity for up to 1 week. 
khetani and bhatia (2008) developed microscale cultures of human liver cells in a multiwell micropatterned co-culture system that can maintain the phenotypic functions of liver cells for up to several weeks. a significant challenge for cancer research is early detection and the development of in vitro strategies for studying the role of drug-carrier design in tumor transport, as well as therapies that target rapidly dividing cancer cells while leaving normal, healthy cells untouched. the microfluidic tumor-on-a-chip platform can be used for detecting circulating tumor cells (ctcs) in blood flow, which may lead to early diagnosis of cancer (millner et al., 2013). a variety of designs for studying the microenvironment of microfluidic devices that culture solid and liquid tumors were reviewed by young (2013). tatosian and shuler (2009) developed a novel microfluidic system to study the multidrug resistance of cancer cells to chemotherapeutic combinations. jang et al. (2011) fabricated a microfluidic device with an active injection system that produced combinations of different chemical solutions at various concentrations and stored them in isolated chambers. to optimize system parameters for various types of cancer cells while requiring minute amounts of reagents and cells, jedrych et al. (2011) generated a microfluidic system for photodynamic therapy-based measurements. this system allows light-induced photosensitizers to be delivered to carcinoma cells, which, on reaction with oxygen, produce a chemical toxin that is lethal to tumor cells. 
http://amgenscholars.com/images/uploads/contentimages/biotechnology-timeline.pdf: amgen scholars provides hundreds of undergraduate students with the opportunity to engage in a hands-on summer research experience at some of the world's leading institutions. 
http://monographs.iarc.fr
http://monographs.iarc.fr/eng/monographs/vol90/mono90-6.pdf
the iarc monographs identify environmental factors that can increase the risk of human cancer. these include chemicals, complex mixtures, occupational exposures, physical agents, biological agents, and lifestyle factors.

6. www.iptonline.com
http://www.iptonline.com/articles/public/iptfive76np.pdf
iptonline publishes "the pharmaceutical technology journal," which is designed to provide information on the latest ideas, cutting-edge technologies, and innovations shaping the future of pharmaceutical research, development, and manufacturing.

7. http://www.aceabio.com
http://www.aceabio.com/userfiles/doc/literature/xcell_appnotes/rtca_appnote07_acea_lores.pdf
acea biosciences, inc. (acea) is a privately owned biotechnology company. acea's mission is to transform cell-based assays by providing innovative and cutting-edge products and solutions to the research and drug discovery community.

the food and drug administration (fda or usfda) protects and promotes public health through the regulation of all foods (except meats and poultry), the nation's blood supply, and other biologics (such as vaccines and transplant tissues). drugs must be tested and manufactured …

promega manufactures enzymes and other products for biotechnology and molecular biology.

the world health organization (who) is a specialized agency that is concerned with international public health. it is affiliated with the united nations and headquartered in geneva, switzerland. who ensures that more people, especially those living in dire poverty …

• virus-like particle and viral vector production using the baculovirus expression vector system/insect cell system
• animal cell culture concept and application.
(alpha science international limited)
• mutagen-directed attenuation of rift valley fever virus as a method for vaccine development
• serial passage of a helicoverpa armigera nucleopolyhedrovirus in helicoverpa zea cell cultures
• recombinant hemagglutinin produced from chinese hamster ovary (cho) stable cell clones and a pelc/cpg combination adjuvant for h7n9 subunit vaccine development
• a bioluminescent cytotoxicity assay for assessment of membrane integrity using a proteolytic biomarker
• improved immunogenicity of a peptide epitope after fusion to hepatitis b core protein
• integrated process optimisation: lessons from retrovirus and virus like production
• perfused multiwell plate for 3d liver tissue engineering
• amino acid metabolism in mammalian cell cultures
• three-dimensional cell culture systems and their applications in drug discovery and cell-based biosensors
• cells on chips
• the isolation and characterization of the mp and fp plaque variants of galleria mellonella nuclear polyhedrosis virus
• culture of animal cells: a manual of basic technique
• culture of animal cells: a manual of basic technique and specialized applications
• culture of animal cells: a manual of basic technique and specialized applications
• a brief history of novel drug discovery technologies
• evolving concepts in liver tissue modeling and implications for in vitro toxicology
• role of c-jun n-terminal protein kinase 1/2 (jnk1/2) in macrophage-mediated mmp-9 production in response to moraxella catarrhalis lipooligosaccharide (los)
• the serial cultivation of human diploid cell strains
• assembly of human severe acute respiratory syndrome coronavirus-like particles
• an integrated microfluidic device for two-dimensional combinatorial dilution
• evaluation of photodynamic therapy (pdt) procedures using microfluidic system
• immunogenicity and safety of the virus-like particle of the porcine encephalomyocarditis virus in pig
• synthesis of rotavirus-like particles in insect cells: comparative and quantitative analysis
• indian
vaccine innovation: the case of shantha biotechnics
• microscale culture of human liver cells for drug development
• microfluidic 3d cell culture: potential application for tissue-based bioassays
• contribution of ebola virus glycoprotein, nucleoprotein, and vp24 to budding of vp40 virus-like particles
• a therapeutic vaccine for nicotine dependence: preclinical efficacy, and phase i safety and immunogenicity
• viral vaccines: concepts, principles, and bioprocesses
• circulating tumor cells: a review of present methods and the need to identify heterogeneous phenotypes
• apoptosis in cho cell batch cultures: examination by flow cytometry
• efficient assembly and release of sars coronavirus-like particles by a heterologous expression system
• the vp6 protein of rotavirus interacts with a large fraction of human naive b cells via surface immunoglobulins
• rift valley fever: the neurotropic adaption of virus and experimental use of this modified virus as a vaccine
• medicines from animal cell culture
• using physiologically-based pharmacokinetic-guided "body-on-a-chip" systems to predict mammalian response to drug and chemical exposure
• virus-like particles exhibit potential as a pan-filovirus vaccine for both ebola and marburg viral infections
• a novel system for evaluation of drug mixtures for potential efficacy in treating multidrug resistant cancers
• mers-cov virus-like particles produced in insect cells induce specific humoural and cellular immunity in rhesus macaques
• self-assembly of the infectious bursal disease virus capsid protein, rvp2, expressed in insect cells and purification of immunogenic chimeric rvp2h particles by immobilized metal-ion affinity chromatography
• isolation and replication of an occlusion body-deficient mutant of the autographa californica nuclear polyhedrosis virus
• drop on demand in a microfluidic chip
• three-dimensional in vitro tumor models for cancer research and drug evaluation
• assembly of siv virus-like particles containing envelope proteins using a baculovirus
expression system
• cells, tissues, and organs on chips: challenges and opportunities for the cancer tumor microenvironment
• microfluidic devices as tools for mimicking the in vivo environment
• from biopharmaceuticals to gene therapy
• culture of animal cells: a manual of basic techniques and specialized applications
• prospects for the use of animal cell cultures in screening of pharmaceutical substances
• microfluidic single-cell manipulation and analysis: methods and applications
• membrane protein of human coronavirus nl63 is responsible for interaction with the adhesion receptor
• towards single-cell lc-ms phosphoproteomics
• medicines from animal cell culture
• animal biotechnology: models in discovery and translation
• engineering microfluidic organoid-on-a-chip platforms

• cytotoxicity: the degree to which an agent has specific destructive action on certain cells
• differentiation: a change in a cell causing an increase in morphological or chemical heterogeneity
• immortalized: changing a cell type with limited lifespan in vitro into a cell type with unlimited capacity to proliferate; sometimes achieved by animal cells in vitro or by tumor cells
• in vitro: cell growth outside the body, in glass, as in a test tube

short answer questions
1. what is the hayflick effect?
2. what is the source of cells for primary monolayer cell culture?
3. serum is one of the basic components of cell culture media (true/false)?
4. what was the first recombinant human protein?
5. what are the different phases of the growth curve?
6. is the vlp-based hpv vaccine approved by the fda?

answers to short answer questions
• limited replication capacity of cells in culture medium
• lag phase, log phase, and plateau phase
• gardasil (the first hpv vaccine) was approved by the fda in …

yes/no type questions
1. are cells obtained directly from organs and tissues in primary cell culture?
2. is secondary culture used for studying transformed cells?
3. is identity testing a way to determine purity of culture?
4. is ifn-α used for the treatment of multiple sclerosis?
5. is bevacizumab approved for the treatment of colorectal cancer?
6. does the passage effect lead to an increase in the virulence of cultured viruses?
7. are stem cells unable to differentiate into other kinds of cells?
8. do microfluidic devices provide nutrients and oxygen for cell proliferation?
9. are living cells used in organ-on-a-chip microfluidic cell culture?
10. can embryonic cells be cultured without any feeder layer?

answers to yes/no type questions
1. yes: mechanical, chemical, or enzymatic disintegration of tissues and organs is required in primary cell culture.
2. yes: secondary cultures are used in the study of transformed cells, as these cultures maintain their cellular characteristics.
3. no: for testing the purity, one should use fluorescent staining, pcr, or elisa.
4. no: ifn-β is used in the treatment of multiple sclerosis.
5. yes: it is an inhibitor of vascular endothelial growth factor.
6. no: the passage effect leads to viral instability.
7. no: stem cells can differentiate into other kinds of cell types.
8. yes: microfluidic devices also help in investigating characteristics of 3d cell culture.
9. yes: chambers of organ-on-a-chip devices are continuously infused with living cells.
10. yes: matrigel from bd biosciences can be used to coat the culture plate.

key: cord-162326-z7ta3pp9 authors: shahi, gautam kishore title: amused: an annotation framework of multi-modal social media data date: 2020-10-01 journal: nan doi: nan sha: doc_id: 162326 cord_uid: z7ta3pp9

in this paper, we present a semi-automated framework called amused for gathering multi-modal annotated data from multiple social media platforms. the framework is designed to mitigate the issues of collecting and annotating social media data by cohesively combining machine and human effort in the data collection process.
from a given list of articles from professional news media or blogs, amused detects links to social media posts in the news articles and then downloads the content of each post from the respective social media platform to gather details about that specific post. the framework is capable of fetching annotated data from multiple platforms like twitter, youtube, and reddit. the framework aims to reduce the workload and problems involved in data annotation from social media platforms. amused can be applied in multiple application domains; as a use case, we have implemented the framework for collecting covid-19 misinformation data from different social media platforms.

with the growth in the number of users on different social media platforms, social media have become part of our lives. they play an essential role in making communication easier and accessible. people and organisations use social media for sharing and browsing information; especially during the time of a pandemic, social media platforms get massive attention from users talwar et al. (2019). braun and gillespie conducted a study to analyse the public discourse on social media platforms and news organisations. the design of social media platforms allows users to get more attention for sharing news or user-generated content. several statistical and computational studies have been conducted using social media data braun and gillespie (2011). but data gathering and its annotation are time-consuming and financially costly. in this study, we resolve the complications of data annotation from social media platforms for studying the problems of misinformation and hate speech. usually, researchers encounter several problems while conducting research using social media data, like data collection, data sampling, data annotation, quality of the data,
and the bias in the data grant-muller et al. (2014). data annotation is the process of labelling data available in various formats like text, video, or images. researchers annotate social media data for research on hate speech, misinformation, online mental health, etc. for supervised machine learning, labelled data sets are required so that machines can quickly and clearly learn the input patterns. to build a supervised or semi-supervised model on social media data, researchers face two challenges: timely data collection and data annotation shu et al. (2017). on-time data collection is essential because some platforms either restrict data collection or, often, the post itself is deleted by the social media platform or by the user. for instance, twitter allows data crawling of only the past seven days (from the date of data crawling) using the standard apis stieglitz et al. (2018). moreover, it is not possible to collect deleted posts from social media platforms. another problem stands with data annotation; it is conducted either in an in-house fashion (within a lab or organisation) or by using a crowd-based tool (like amazon mechanical turk (amt)) aroyo and welty (2015). both approaches to data annotation require a considerable amount of effort to write the annotation guidelines, along with expert annotators. in the end, we are often not able to get quality annotated data, which makes a reliable statistical or artificial-intelligence-based analysis challenging. there is also always a chance of wrongly labelled data leading to bias in the data cook and stevenson (2010). currently, professional news media and blogs also cover social media posts in their articles. the inclusion of social media posts in news and blog articles creates an opportunity to gather labelled social media data.
journalists cover a wide range of social issues such as misinformation, propaganda, rumours during elections, disasters, pandemics, mob lynching, and other similar events. journalists link social media posts in the content of news articles or blogs to explain incidents carlson (2016). to solve the problems of on-time data collection and data annotation, we propose a semi-automatic framework for data annotation from social media platforms. the proposed framework is capable of getting annotated data on social issues like misinformation, hate speech, or other critical social scenarios. the key contributions of the paper are listed below:
• we present a semi-automatic approach for gathering annotated data from social media platforms. amused gathers labelled data from different social media platforms in multiple formats (text, image, video).
• amused reduces the workload, time, and cost involved in traditional data annotation techniques.
• amused resolves the issue of bias in the data (wrong labels assigned by annotators) because the data gathered will be labelled by professional news editors or journalists.
• amused can be applied in many domains, like fake news or propaganda in elections, mob lynching, etc., for which it is hard to gather data.
to present a use case, we apply the proposed framework to gather data on covid-19 misinformation on multiple social media platforms. in the following sections, we discuss the related work, the different types of data circulated and their restrictions on social media platforms, and current annotation techniques, followed by the proposed methodology and possible application domains; then we discuss the implementation and results. we also highlight some of the findings in the discussion, and finally, we present the conclusion and ideas for future work.

much research has been published using social media data, but it is limited to a few social media platforms or languages in a single work.
also, results are published with a limited amount of data. there are multiple reasons for the limited work; one of the key reasons is the availability of annotated data for the research thorson et al. (2013); ahmed, pasquier, and qadah (2013). chapman et al. highlight the problem of getting labelled data for nlp-related problems chapman et al. (2011). researchers are dependent on in-house or crowd-based data annotation. recently, alam et al. used a crowd-based annotation technique and asked people to volunteer for data annotation, but there was no significant success in getting a large amount of labelled data alam et al. (2020). the current annotation technique depends on the background expertise of the annotators. on the other hand, finding past data on incidents like mob lynching or disasters is challenging because of data restrictions by social media platforms. it requires looking at massive numbers of posts and news articles, with an intensive amount of manual work. billions of social media posts are sampled down to a few thousand posts for data annotation, either by random sampling or keyword sampling, which brings a sampling bias into the data. with in-house data annotation, forbush et al. mention that it is challenging to hire annotators with background expertise in a domain. another issue is the development of a codebook with a proper explanation forbush et al. (2013). the entire process is financially costly and time-consuming duchenne et al. (2009). the problem with crowd-based annotation tools like amt is that the low cost may result in wrong labelling of data: many annotators cheat, not performing the job but using bots or answering randomly fort, adda, and cohen (2011); sabou et al. (2014). with the emergence of social media as a news resource caumont (2013), many people or groups of people use it for different purposes like news sharing, personal opinion, and social crime in the form of hate speech and cyberbullying.
nowadays, journalists cover some of the common issues like misinformation, mob lynching, and hate speech, and they also link the social media posts in the news articles cui and liu (2017). in the proposed framework, we use news articles from professional news websites. we only collect news articles or blogs from credible sources that do not compromise on news quality meyer (1988). in the next section, we discuss the proposed methodology for the amused framework.

social media platforms allow users to create and view posts in multiple formats. every day, billions of posts containing images, text, and videos are shared on social media sites such as facebook, twitter, youtube, or instagram aggarwal (2011). people use a combination of image, text, and video for more creative and expressive forms of communication. data are available in different formats, and each social media platform applies restrictions on data crawling. for instance, facebook allows crawling of data related only to public posts and groups. giglietto, rossi, and bennato discuss the requirement of multi-modal data for the study of social phenomena giglietto, rossi, and bennato (2012). in the following paragraphs, we highlight the data formats and restrictions on several social media platforms.

text: almost every social media platform allows users to create or respond to a social media post in text, but each social media platform has a different restriction on the size of the text. twitter has a limit of 280 characters, while on youtube, users are allowed to comment up to a limit of 10,000 characters. reddit allows 40,000 characters; facebook has a limit of 63,206 characters; wikipedia has no limit; and so on. the content and the writing style change with the character limit of the different social media platforms.

image: like text, images are also a standard format of data sharing across different social media platforms.
these platforms also have restrictions on the size of the image: twitter has a limit of 5 megabytes, facebook and instagram have a limit of 30 megabytes, and reddit has a limit of 20 megabytes. images are commonly used across different platforms and are central to social media platforms like instagram and pinterest.

video: some platforms, like youtube, are primarily focused on video, while other platforms are multi-modal, allowing video, text, and image. for video, there are also restrictions in terms of duration: youtube has a capacity of 12 hours, twitter allows 140 seconds, instagram has a limit of 120 seconds, and facebook allows videos up to 240 minutes. the restriction on video duration on different platforms attracts different users. for instance, on twitter and instagram, users post videos of shorter duration, whereas youtube has users from media organisations, vlog writers, educational institutions, etc., who post longer videos.

in the current annotation scenario, researchers collect data from social media platforms for a particular issue with different search criteria. there are several problems with the current annotation approaches; some of them are highlighted below.
• first, social media platforms restrict users from fetching old data; for example, twitter allows us to gather data only from the past seven days using the standard apis. we need to start on-time crawling; otherwise, we lose a reasonable amount of data, which also contains valuable content.
• second, if the volume of data is high, it requires filtering based on several criteria like keyword, date, location, etc. this filtering further degrades the data quality by excluding a major portion of the data. for example, for hate speech, if we sample the data using hateful keywords, then we might lose many tweets which are hate speech but do not contain any hateful word.
• third, getting a good annotator is a difficult task.
annotation quality depends on the background expertise of the person. even if we hire annotators in our own organisation, we have to train them using test data. for crowdsourcing, maintaining annotation quality is more complicated. moreover, maintaining good agreement between multiple annotators is also a tedious job.
• fourth, the development of annotation guidelines is a problem in itself. we have to build descriptive guidelines for data annotation, which handle different kinds of contradictions. writing a good codebook requires domain knowledge and consultation with experts.
• fifth, overall, data annotation is a financially costly and time-consuming process. sorokin and forsyth highlighted the issue of cost while using a crowd-based annotation technique sorokin and forsyth (2008).
• sixth, social media is available in multiple languages, but much research is limited to english. data annotation in other languages, especially under-resourced languages, is difficult due to the lack of experienced annotators. this difficulty adversely affects the data quality and brings some bias into the data.
in this work, we propose a framework to solve the above problems by crawling the embedded social media posts from news articles; a detailed description is given in the proposed method section.

in this section, we discuss the proposed methodology of the annotation framework. our method consists of nine steps, which are discussed below.

step 1: domain identification. the first step is the identification of the domain in which we want to gather the data. a domain could focus on a particular public discourse; for example, a domain could be fake news in the us election, or hate speech in trending hashtags on twitter like #blacklivesmatter, #riotsinsweden, etc. domain selection helps to focus on the relevant data sources.

step 2: data source. after domain identification, the next step is the identification of data sources.
data sources may consist of professional news websites, blogs that talk about the particular topic, or both. for example, many professional websites have a separate section which discusses the election or other ongoing issues. in this step, we collect the news websites or blogs which discuss the chosen domain.

step 3: web scraping. in the next step, we crawl all news articles which discuss the domain from each data source; a data source could be snopes snopes (2020) or poynter institute (2020). we fetch all the necessary details like the published date, author, location, and news content.

step 4: language identification. after getting the details from the news articles, we check the language of the news articles. we use iso 639-1 codes wikipedia (2020) for naming the languages. based on the language, we can further filter the group of news articles by the spoken language of a country and apply a language-specific model for finding meaningful insights.

step 5: social media link. from the crawled data, we fetch the anchor tags (<a>) mentioned in the news content; then we filter the hyperlinks to identify social media platforms like twitter and youtube. from the filtered links, we fetch unique identifiers of the social media posts; for instance, for a hyperlink containing a tweet id, we fetch the tweet id from the hyperlink. similarly, we fetch the unique social media id for each platform. we also remove the links which do not fulfil the filtering criteria.

step 6: social media data crawling. in this step, we fetch the data from the respective social media platform. we build a crawler for each social media platform and crawl the details using the unique identifiers or uniform resource locators (urls) obtained from the previous step. due to data restrictions, we use crowdtangle team (2020) to fetch facebook and instagram posts.
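the link-detection logic of steps 3 and 5 can be sketched in a few lines of python. this is a minimal, stdlib-only illustration of the idea, not the authors' implementation; the `social_media_links` helper, the regex, and the keyword filters are assumptions based on the description above:

```python
import re

# minimal sketch of steps 3 and 5: pull anchor hrefs out of crawled article
# html and keep only links that point to social media posts. the regex and
# keyword filters below are illustrative assumptions, not the authors' code.
HREF_RE = re.compile(r'<a\s[^>]*href="([^"]+)"', re.IGNORECASE)

# a platform's link is kept only when every keyword occurs in the url
PLATFORM_KEYWORDS = {
    "twitter": ("twitter.com", "status"),
    "youtube": ("youtube.com", "watch"),
    "reddit": ("reddit.com", "/r/"),
}

def social_media_links(article_html):
    """return {platform: [urls]} for social media posts linked in an article."""
    found = {platform: [] for platform in PLATFORM_KEYWORDS}
    for url in HREF_RE.findall(article_html):
        for platform, keywords in PLATFORM_KEYWORDS.items():
            if all(keyword in url for keyword in keywords):
                found[platform].append(url)
    return found

html = ('<p>see <a href="https://twitter.com/user/status/123">this tweet</a>'
        ' and <a href="https://example.com/about">our page</a>.</p>')
print(social_media_links(html)["twitter"])  # prints ['https://twitter.com/user/status/123']
```

a production crawler would parse the dom with a proper html parser such as beautiful soup rather than a regex, but the keyword-filtering idea is the same.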
for example, for twitter, using the tweet id (unique identifier), our twitter crawler collects the details about the tweet.

step 7: data labelling. in this step, we assign labels to the social media data based on the labels assigned to the news articles by journalists. often news articles describe a social media post as hate speech, fake news, or propaganda. we assign to the social media post mentioned in the news article the class described by the journalist. for example, if a news article a containing social media post s has been published by a journalist j, and journalist j has described the social media post s as fake news, we label the social media post s as fake news. usually, the news article is published by a domain expert, which assures that the social media post embedded or linked in the news article is correctly labelled.

step 8: human verification. in the next step, to check correctness, a human verifies that the label assigned to the social media post matches the label mentioned in the news article. if the label is wrongly assigned, the data point is removed from the corpus. this step assures that the collected social media posts contain relevant content and are correctly labelled. a human can verify the labels of randomly selected news articles.

step 9: data enrichment. in this step, we merge the social media data with the details from the news articles. this helps to accumulate extra information which might allow for further analysis. data merging provides the journalist's analysis and also explains the label assigned to the social media post.

in this section, we consider the possible application domains of the proposed framework. nevertheless, the framework is a general one, and it can be tailored to suit other, unmentioned domains as well, wherever professional news websites or blogs cover incidents like elections, fake news, etc.

"fake news is information that is intentionally and verifiably false and could mislead readers" allcott and gentzkow (2017).
misinformation is a part of fake news which is deliberately created with the intent to deceive. there is an increasing amount of misinformation in the media, social media, and other web sources. in recent years, much research has been done on fake news detection and the debunking of fake news zhou and zafarani (2018). in the last two decades, there has been a significant increase in the spread of misinformation. nowadays, more than 100 fact-checking websites are working to tackle the problem of misinformation cherubini and graves (2016). fact-checking websites can help to investigate claims and assist citizens in determining whether the information used in an article is true or not. in a real-world scenario, people spread a vast amount of misinformation during a pandemic, an election, or a disaster gupta et al. (2013). fake news poses a 3v problem: volume (a large number of fake news items), velocity (during peaks, the speed of propagation intensifies), and variety (different formats of data like images, text, and videos are used in fake news). still, fake news detection requires considerable effort to verify claims. one of the most effective strategies for tackling this problem is to use computational methods to detect false news. misinformation has attracted significant attention in recent years, as evidenced in recent publications li et al. (2012); li, meng, and yu (2011); li et al. (2016); popat et al. (2016). additionally, misinformation crosses language borders and consequently often spreads around the globe. for example, one fake news story, "russia released lions to implement the lockdown during covid-19", was publicised across multiple countries in different languages like italian and tamil poynter (2020).

mob lynching is a violent human behaviour in which a group of people executes punishment without a legal trial, ending in significant injury to or the death of a person apel (2004).
it is a worldwide problem: the first case was recorded in the 15th century in ireland, and it was widespread in the usa during the 18th and 19th centuries. often, mob lynching is initiated by rumours or fake news which get triggered on social media by a group of people arun (2019). the preventive measures taken by governments to overcome all obstacles and prevent further deaths have not been successful in their entirety. getting data for the analysis of mob lynching is difficult because of the unexpected events occurring throughout the year, mainly in remote areas. there is no common search term or keyword that helps to crawl social media for such incidents. so, if we fetch the specific social media posts from news articles which cover analyses of mob lynching arun (2019), we can use them for several studies. it will also help to analyse causes and patterns from previous incidents griffin (1993).

online abuse is any kind of harassment, racism, personal attack, or other type of abuse on online social media platforms. the psychological effects of online abuse on individuals can be extreme and lasting mishra, yannakoudakis, and shutova (2019). online abuse in the form of hate speech, cyberbullying, and personal attacks is a common issue mishra, yannakoudakis, and shutova (2019). much research has been done in english and other widely spoken languages, but under-resourced languages like hindi and tamil (and many more) are not well explored. gathering data in these languages is still a big challenge, so our annotation framework can easily be applied to collect data on online abuse in multiple languages.

in this section, we discuss the implementation of our proposed framework. as a case study, we apply amused to data annotation for covid-19 misinformation in the following way.

step 1: domain identification. out of several possible application domains, we consider the spread of misinformation in the context of covid-19. we chose this topic because, since december 2019, when covid-19 was first officially reported, misinformation has been spreading over the web shahi and nandini (2020).
we choose this the topic since because, december 2019, the first official report of covid-19, misinformation spreading over the web shahi and nandini (2020). the increase of misinformation is one of the big problems during the covid-19 problems. the director of the world health organization(who), considers that with covid, we are fighting with both pandemic and infodemic the guardian (2020). infodemic is a word coined by world health organization (who) to describe the misinformation of virus, and it makes hard for users to find trustworthy sources for any claim made on the covid-19 pandemic, either on the news or social media world health organization and others (2020); zarocostas (2020). one of the fundamental problems is the lack of sufficient corpus related to pandemic shahi, dirkson, and majchrzak (2020) . content of the misinformation depends on the domain; for example, during the election, we have a different set of misinformation compared to a pandemic like covid-19, so domain identification helps to focus on specif topic. step 2: data sources for data source, we looked for 25 fact-checking websites(like politifact, boomlive) and decided to use the poynter and snopes. we choose poynter figure 1 : amused: an annotation framework for multi-modal social media data because poynter has a central data hub which collects data from more 98 fact-checking websites while snopes is not integrated with poynter but having more than 300 fact-checked articles on covid-19. we describe the two data sources as follow-snopes-snopes snopes (2020) is an independent news house owned by snopes media group. snopes verifies the correctness of misinformation spread across several topics like election, covid-19. as for the fact-checking process, they manually verify the authenticity of the news article and performs a contextual analysis. 
in response to the covid-19 infodemic, snopes provides a collection of fact-checked news articles in different categories based on the domain of the news article. poynter-poynter is a non-profit institute of journalists institute (2020). in the covid-19 crisis, poynter came forward to inform and educate the public to curb the circulation of fake news. poynter maintains the international fact-checking network (ifcn); the institute also started the hashtags #coronavirusfacts and #datoscoronavirus to gather misinformation about covid-19. poynter maintains a database which collects fact-checked news from 98 fact-checking organisations in 40 languages. step 3: web scraping in this step, we developed a python-based crawler using beautiful soup richardson (2007) to fetch all the news articles from poynter and snopes. our crawler collects important information like the title of the news article, the name of the fact-checking website, the date of publication, the text of the news article, and the class of the news article. we assigned a unique identifier to each article, denoted by fcid. a short description of each element is given in table 1. step 4: language detection we collected data in multiple languages like english, german, hindi, etc. to identify the language of a news article, we used langdetect shuyo (2014), a python-based library, applied to the textual content of the article. our dataset is categorised into 40 different languages. step 5: social media link in the next step, while doing the html crawling, we filter the urls from the parsed tree of the dom (document object model). we analysed the url patterns of different social media platforms and applied keyword-based filtering to all hyperlinks in the dom. we store those urls in a separate column as the social media link. the entire process of finding social media links is shown in figure 2.
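as an illustration of steps 3-5, the sketch below extracts the article title and all outgoing hyperlinks from a fact-check page using only python's standard-library html.parser as a lightweight stand-in for beautiful soup; the class name and the sample html are hypothetical, not taken from the amused codebase.

```python
from html.parser import HTMLParser

class FactCheckArticleParser(HTMLParser):
    """collect the page title and every hyperlink found in the html."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# hypothetical fact-check page for illustration
page = """<html><head><title>Fact check: claim X</title></head>
<body><a href="https://twitter.com/user/status/123">tweet</a>
<a href="https://example.com/other">other</a></body></html>"""

parser = FactCheckArticleParser()
parser.feed(page)
print(parser.title)   # Fact check: claim X
print(parser.links)
```

the extracted links can then be passed through the keyword-based social media filter described in step 5.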
some of the url patterns are discussed below. twitter-for each tweet, twitter follows the pattern twitter.com/user_name/status/tweet_id. so, among the collected hyperlinks, we searched for the keywords "twitter.com" and "status", which assures that we have collected hyperlinks referring to tweets. youtube-for each youtube video, youtube follows the pattern www.youtube.com/watch?v=video_id. so, among the collected hyperlinks, we searched for the keywords "youtube.com" and "watch"; these keywords assure that we have collected hyperlinks referring to particular youtube videos. reddit-for each subreddit, reddit follows the pattern www.reddit.com/r/subreddit_topic/. so, among the collected hyperlinks, we searched for the keyword "reddit.com" and used a regex to detect "reddit.com/r/", which confirms that we have collected hyperlinks referring to particular subreddits. table 1: name, definition and an example of elements collected from news articles.
news id - a unique identifying id for each news article; we use an acronym for the news source and a number to identify the article.
newssource url - a unique identifier pointing to the news article. example: https://factcheck.afp.com/video-actually-shows-anti-government-protest-belarus
news title - the title of the news article. example: a video shows a rally against coronavirus restrictions in the british capital of london.
published date - the date on which the fact-check article was published.
news class - the class assigned in the fact-check article, like false, true, misleading; we store it in the class column. example: false
published-by - the name of the professional news website or blog, for example, afp, quint, etc.
country - the country of the news article.
language - the language of the news article. example: english
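the keyword and regex rules above can be sketched as follows; the exact patterns used in the paper are not published, so these regexes and the helper name classify_link are illustrative assumptions.

```python
import re

# hypothetical patterns mirroring the keyword rules described in the text
PATTERNS = {
    "twitter": re.compile(r"twitter\.com/[^/]+/status/(\d+)"),
    "youtube": re.compile(r"youtube\.com/watch\?v=([\w-]+)"),
    "reddit":  re.compile(r"reddit\.com/r/([^/\s]+)"),
}

def classify_link(url):
    """return (platform, post_id) if the url matches a known pattern, else None."""
    for platform, pattern in PATTERNS.items():
        m = pattern.search(url)
        if m:
            return platform, m.group(1)
    return None

links = [
    "https://twitter.com/someuser/status/1234567890",
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://www.reddit.com/r/worldnews/comments/abc/",
    "https://example.com/not-social-media",
]
for url in links:
    print(url, "->", classify_link(url))
```

links that return None are discarded; the captured id feeds the platform-specific crawler of step 6.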
similarly, we followed this approach for other social media platforms like facebook, instagram, wikipedia, pinterest, and gab. in the next step, we used regex code to extract the unique id of each social media post, like the tweet id for twitter or the video id for youtube. step 6: social media data crawling after web scraping, we have the unique identifier of each social media post, like the tweet id for twitter or the video id for videos. we built a python-based program for crawling the data from the respective social media platform. we describe some of the crawling tools and details about the collected data. twitter-we used a python crawler based on tweepy roesslein (2020), which crawls all details about a tweet. we collect the text, time, likes, retweets, and user details such as name, location, and follower count. youtube-for youtube, we built a python-based crawler which collects the textual details about a video, like the title, channel name, date of upload, likes, and dislikes. we also crawled the comments on the respective videos. similarly, we built our crawlers for the other platforms, but for instagram and facebook we used crowdtangle team (2020) for data crawling; there, data are limited to posts from public pages and groups. step 7: data labelling for data labelling, we used the label assigned in the news article: we map each social media post to its respective news article and assign the article's label to the post. for example, a tweet extracted from a news article is mapped to the class of that news article. the entire data annotation process is shown in figure 3. step 8: human verification in the next step, we manually inspect each social media post to check the correctness of the process. we provided the annotators with all necessary information about the class mapping and asked them to verify it. for example, in figure 3, a human opens the news article using the newssource url and verifies the label assigned to the tweet.
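step 7's label propagation is essentially a join between articles and posts on the article identifier; the sketch below uses field names loosely modelled on table 1 (fcid, news class), with made-up records for illustration.

```python
# each fact-checked news article carries a verdict ("news_class"); every
# social media post extracted from that article inherits the same label.
articles = [
    {"fcid": "afp_001", "news_class": "false"},
    {"fcid": "quint_002", "news_class": "misleading"},
]
posts = [
    {"post_id": "1234567890", "platform": "twitter", "fcid": "afp_001"},
    {"post_id": "dQw4w9WgXcQ", "platform": "youtube", "fcid": "quint_002"},
]

# build an fcid -> label lookup, then stamp each post with its article's class
label_by_fcid = {a["fcid"]: a["news_class"] for a in articles}
for post in posts:
    post["label"] = label_by_fcid[post["fcid"]]

print(posts)
```

the human-verification pass of step 8 then spot-checks these inherited labels against the source articles.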
for covid-19 misinformation, a human checked a randomly sampled 10% of the social media posts from each platform and verified that the label assigned to the post matched the label mentioned in the news article. with these random checks, we found that all the assigned labels were correct. this helps to make sure the assigned labels are correct and reduces bias from wrongly assigned labels. we further normalise the data labels into false, partially false, true and other using the definitions mentioned in shahi, dirkson, and majchrzak (2020). the number of social media posts found in each of the four categories is shown in table 3. step 9: data enrichment in this step, we enrich the data by providing extra information about the social media post. the first step is merging the social media post with the respective news article, which adds information like textual content, news source, and author. the detailed analysis of the collected data is discussed in the result section. based on the results, we also discuss some of the findings in the discussion section. a snapshot of the labelled data from twitter is shown in figure 4. we will release the data as open source for further study. for the use case of misinformation on covid-19, we identified ifcn as the data source, and we collected data from different social media platforms. we found that around 51% of news articles link their content to social media websites. overall, we collected 8077 fact-checked news articles from 105 countries in 40 languages. a detailed description of the social media data extracted using the amused framework is presented in table 2. we cleaned the hyperlinks collected using the amused framework and filtered the social media posts by removing duplicates using the unique identifier of each post. we present a timeline plot of the data collected from different social media platforms in figure 5.
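the normalisation into false, partially false, true and other can be sketched as a lookup table; the synonym sets below are assumptions for illustration, not the published mapping of shahi, dirkson, and majchrzak (2020).

```python
# illustrative synonym lists; the actual mapping in the paper may differ
NORMALISED = {
    "false": {"false", "fake", "incorrect", "pants on fire"},
    "partially false": {"partially false", "misleading", "half true", "mostly false"},
    "true": {"true", "correct"},
}

def normalise(raw_label):
    """map a fact-checker's raw verdict onto one of four canonical classes."""
    label = raw_label.strip().lower()
    for canonical, synonyms in NORMALISED.items():
        if label in synonyms:
            return canonical
    return "other"   # anything unrecognised falls into the catch-all class

print(normalise("Misleading"))     # partially false
print(normalise("FALSE"))          # false
print(normalise("no evidence"))    # other
```

collapsing the dozens of verdict strings used by individual fact-checkers into four classes is what makes the per-class counts of table 3 comparable across platforms.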
we plotted the data from those social media platforms that have more than 25 unique posts, because including platforms with fewer posts depreciates the plot distribution; we therefore dropped pinterest (3), whatsapp (23), tiktok (25), and reddit (43) from the plot. figure 5: a timeline distribution of data collected from a number of different social media platforms from january 2020 to august 2020; we present the platforms having a data count of more than 25. table 3: summary of covid-19 misinformation data collected from different social media platforms, with deleted and duplicate posts excluded from the counts; the four columns correspond to the classes false, partially false, true, and other.
platform    false  partially false  true  other
facebook     2776       325           94     6
instagram     166        28            2     1
pinterest       3         0            0     0
reddit         21         9            2     1
tiktok          9         0            0     0
twitter      1318       234           50    13
wikipedia     154        18            3     1
youtube       739       164           13     0
the plot shows that most of the social media posts are from facebook and twitter, followed by youtube, then wikipedia and instagram. we also present the class distribution of these social media posts in table 3. figure 5 shows that the overall number of social media posts was at its maximum from mid-march to mid-may 2020. misinformation also follows the trend of the covid-19 situation in many countries, because the number of social media posts decreased after june 2020. the possible reason could be either that the spread of misinformation has been reduced, or that fact-checking websites are not focusing on this issue as much as during the early stage. from our study, we highlight some useful points. usually, a fact-checking website links social media posts from multiple social media platforms. we tried to gather data from various social media platforms, but we found the maximum number of links on facebook, twitter, and youtube. there are a few unique posts from reddit (21) and tiktok (9), but fewer than we were expecting brennen et al. (2020). surprisingly, there are only three unique posts from pinterest, and there are no data available from gab, sharechat, and snapchat.
however, gab is well known for harmful content, and sharechat is used by people in their regional languages. many people use wikipedia as a reliable source of information, yet there are 393 links from wikipedia. hence, overall, fact-checking websites are limited to a few trending social media platforms like twitter or facebook, while platforms like gab and tiktok are well known for malinformation and misinformation brennen et al. (2020). whatsapp is an instant messaging app used among friends or groups of people, so we only found some hyperlinks that link to public whatsapp groups. to increase the visibility of fact-checked articles, a journalist can also use the schema.org vocabulary along with the microdata, rdfa, or json-ld formats to add details about misinformation to news articles shahi, nandini, and kumari (2019). another aspect is the diversity of social media posts on the different platforms. news articles more often mention facebook, twitter, and youtube, with a smaller number of posts from instagram and pinterest and no posts from gab or tiktok. it might be that some platforms actively involve fact-checking websites in monitoring the content on their platform, or that journalists are more focused on these platforms only. but it would be interesting to study the proportion of fake news on different platforms like tiktok and gab. we also analysed the multi-modality of the data on the social media platforms. in the case of misinformation on covid-19, the amount of misinformation in text is greater than in video or image form. but, in table 3, we show that apart from text, fake news is also shared as images, videos, or mixed formats like image+text. it would also be beneficial to detect fake news on different platforms. this raises the open question of cross-platform studies on a particular topic like misinformation on covid-19. one can also build a classification model shahi et al.
(2018); nandini et al. (2018) to classify fake news into the true, false, partially false, or other categories of news articles. while applying the amused framework to misinformation on covid-19, we found misinformation across multiple platforms, but it mainly circulated on facebook, twitter, and youtube. our finding raises the concern of mitigating misinformation on these platforms. in this paper, we presented a semi-automatic framework for social media data annotation. the framework can be applied to several domains like misinformation, mob lynching, and online abuse. as part of the framework, we also used python-based crawlers for different social media websites. after data labelling, the labels are cross-checked by a human, which ensures a two-step verification of the data annotation for the social media posts. we also enrich each social media post by mapping it to its news article to enable further analysis. the data enrichment provides additional information about the social media post. we have implemented the proposed framework for collecting misinformation posts related to covid-19. as future work, the framework can be extended to obtain annotated data on other topics like hate speech, mob lynching, etc. amused will decrease the labour cost and time of the data annotation process. amused will also increase the quality of the data annotation because we crawl the data from news articles which are published by expert journalists.
an introduction to social network data analytics
key issues in conducting sentiment analysis on arabic social media text
fighting the covid-19 infodemic in social media: a holistic perspective and a call to arms
social media and fake news in the 2016 election
imagery of lynching: black men, white women, and the mob
truth is a lie: crowd truth and the seven myths of human annotation
on whatsapp, rumours, and lynchings
hosting the public discourse, hosting the public: when online news and social media converge
types, sources, and claims of covid-19 misinformation
embedded links, embedded meanings: social media commentary and news sharing as mundane media criticism
12 trends shaping digital news
overcoming barriers to nlp for clinical text: the role of shared tasks and the need for additional creative solutions
the rise of fact-checking sites in europe
automatically identifying changes in the semantic orientation of words
how does online news curate linked sources? a content analysis of three online news media
automatic annotation of human actions in video
was coronavirus predicted in a 1981 dean koontz novel?
what a catch! traits that define good annotators
amazon mechanical turk: gold mine or coal mine?
the open laboratory: limits and possibilities of using facebook, twitter, and youtube as a research data source
enhancing transport data collection through social media sources: methods, challenges and opportunities for textual data
narrative, event-structure analysis, and causal interpretation in historical sociology
faking sandy: characterizing and identifying fake images on twitter during hurricane sandy
a dean koontz novel
the international fact-checking network
truth finding on the deep web: is the problem solved?
a survey on truth discovery
t-verifier: verifying truthfulness of fact statements
defining and measuring credibility of newspapers: developing an index
tackling online abuse: a survey of automated abuse detection methods
modelling and analysis of temporal gene expression data using spiking neural networks
credibility assessment of textual claims on the web
russia released 500 lions
beautiful soup documentation
corpus annotation through crowdsourcing: towards best practice guidelines
fakecovid - a multilingual cross-domain fact check news dataset for covid-19
analysis, classification and marker discovery of gene expression data with evolving spiking neural networks
an exploratory study of covid-19 misinformation on twitter
inducing schema.org markup from natural language context
fake news detection on social media: a data mining perspective
language-detection library
snopes. 2020. collections archive
utility data annotation with amazon mechanical turk
social media analytics-challenges in topic discovery, data collection, and data preparation
why do people share fake news? associations between the dark side of social media use and fake news sharing behavior
crowdtangle. facebook, menlo park, california
the guardian. 2020. the who v coronavirus: why it can't handle the pandemic
youtube, twitter and the occupy movement: connecting content and circulation practices
list of iso 639-1 codes
world health organization and others
world report: how to fight an infodemic
fake news: a survey of research, detection methods, and opportunities
key: cord-353041-qmpatq8m authors: han, ruixia; cheng, yali title: the influence of norm perception on pro-environmental behavior: a comparison between the moderating roles of traditional media and social media date: 2020-09-30 journal: int j environ res public health doi: 10.3390/ijerph17197164 sha: doc_id: 353041 cord_uid: qmpatq8m
the activation of norm perception can promote pro-environmental behavior.
how do media, as important variables in activating norm perception, affect pro-environmental behavior? through an online survey of 550 randomly selected chinese citizens, this study examines the roles of traditional media and social media in influencing the relationship between norm perception and pro-environmental behavior. based on multi-level regression analysis of the data, this study found that (1) compared with traditional media, social media play a more significant role in moderating the relationship between norm perception and pro-environmental behavior; (2) the promotion of the perception of injunctive norms by traditional media has a negative relationship with pro-environmental behaviors; (3) the activation of subjective norm perception by social media will promote pro-environmental behaviors. according to this research, in the current media environment, we should carefully release pro-environmental information on social media and encourage relevant discussions, and appropriately reduce environment-relevant injunctive normative information on traditional media. the study also discusses the role of media in regulating norm perception and pro-environmental behavior in different cultural contexts. the global spread of covid-19 has brought issues of health-related risk prevention back to public attention, of which environmental protection is one. according to the data released by the environmental performance index (epi) in 2020, china ranks 120th overall among 180 countries. regarding china's performance on individual indicators, it ranks 96th in health, 96th in air quality, and 156th in ecosystem vitality. this indicates the importance of improving environmental quality in china. china proposed the "ecological civilization system reform" in the nineteenth national congress report of 2017. however, the launch and implementation of any policy is inseparable from the cooperation and implementation of individual citizens.
how to promote environmental protection on a personal level is very important to the success of various environmental protection movements. various studies have explored factors influencing personal pro-environmental behaviors. gifford and nilsson [1] summarize 18 major personal and social factors affecting pro-environmental behaviors. personal factors include knowledge and experience, personality and self-construal, felt responsibility, cognitive biases, etc. social factors include urban-rural differences; religious, cultural and ethnic variations; norms, etc. these variables work together to influence pro-environmental behaviors, and it is important to discover the moderating and mediating mechanisms between these factors. through a meta-analysis of 57 articles, bamberg and möser [2] demonstrate that eight core psycho-social variables influence pro-environmental behaviors. they are perceived behavior control, feelings of guilt, problem awareness, social norms, internal attribution, attitude, ethics, and intention. they believe that pro-environmental behavioral intention mediates the effect of the other variables on pro-environmental behaviors. these researchers prove that there are complex correlations between the various influencing factors of pro-environmental behavior. it is very important to understand the relationship between the different factors, as some of them play a more important role than others. then, what are these factors, and what are the mechanisms affecting their role in pro-environmental behavior? there are four theoretical paradigms for analyzing factors influencing pro-environmental behavior: rational behavior theory [3], planned behavior theory [4], normative behavior theory [5], and value-belief-norm theory (vbn) [6], the last of which is based on the theory of normative activation. each theory has a different focus.
for example, rational behavior theory emphasizes the importance of self-interest and holds that people conduct pro-environmental behaviors to reduce personal health risks. the theory of planned behavior emphasizes the importance of behavioral intention. it holds that behavioral intention is the key variable affecting pro-environmental behaviors, while the influencing factors of individual behavioral intention may be multifaceted. the theory of normative activation focuses on activating internal norms, such as expectations, stress, and subjective perception. value-belief-norm theory incorporates the belief in ecological values into the theory of normative activation. generally speaking, rational behavior theory and planned behavior theory are often integrated, and the various theoretical frameworks all attach great importance to norms. for example, in the theory of planned behavior, "norm" is considered an important variable affecting individual behavioral intention. in normative activation theory, different types of "norm" become the core variables of discussion. recent scholarship tends to advocate the integration of these theoretical orientations to form integrated models [2, 7, 8]. "norm" occupies an important position in virtually all of these models. there are various types of norms. according to their source, they can be divided into descriptive norms and injunctive norms [9]. some scholars have also proposed subjective norms. the different influences of each norm on various environmental behaviors have also become an important part of recent research. for example, park and smith [10] study the effect of subjective norms, personal descriptive and injunctive norms, and social descriptive and injunctive norms on organ donation intentions and behaviors. however, we tend to neglect the fact that the formation of norms is also affected by external information and the environment.
social media play an increasingly significant role as one source of information that ultimately affects pro-environmental behaviors by influencing the formation of norms. for example, based on a survey of 173 households in hong kong, chan [11] finds that mass media can influence residents' pro-environmental behavior by affecting their subjective norms. the research by hynes and wilson [12] indicates that social media can effectively activate people's social comparison psychology and promote people's pro-environmental behavior by improving people's normative cognition. these studies indicate that the media environment does affect people's social normative cognition and pro-environmental behaviors. in mainland china, the development of social media in recent years has been very rapid. social media have infiltrated the daily lives of ordinary people. take the most widely used service, wechat, as an example: according to wechat's official report, the number of wechat users has exceeded 1.1 billion [13]. what is the impact of social media on pro-environmental behavior in comparison to traditional media? what is the difference between them in changing people's social norms? what is the specific logic by which media use influences social norms and then affects the occurrence of pro-environmental behavior? the theory of social norms derives from perkins and berkowitz's [14] study of adolescent alcohol abuse. they discovered that providing alcoholic adolescents with information about the attitudes of peer groups is far more effective than providing information about the negative consequences of alcoholism itself. therefore, they advocate that interventions should focus more on improving the health awareness and behaviors of the general public. they discovered the existence and great influence of social norms and further developed this into an important theoretical path in the study of behavioral and attitude changes [15] [16] [17] [18].
in practice, social normative intervention has also become an important path of behavioral intervention; interventions on issues such as the use of seat belts [19], prevention of drunk driving [20], and prevention of sexual assault [21] [22] [23] have achieved significant results. pro-environmental behavior is one of these behaviors. pro-environmental behavior refers to environmentally beneficial behaviors that people exhibit in their daily lives, that is, behaviors that tend to protect the environment [24]. in general, pro-environmental behavior is influenced by a variety of self-states and external perceptions, such as individual age, gender, knowledge and education, values, politics and worldview, goals, responsibility, childhood experiences, perceived environmental risk, environmental knowledge, etc. normative cognition concerns the intrinsic, specific path to pro-environmental behavior. in this regard, the norm focus theory provides theoretical support. social psychologists cialdini et al. [25] suggest that people perform many good behaviors not because they have a good sense, attitude or purpose, but because they are constrained by social norms, other people's behaviors, and typical practices. taking energy-saving behaviors as an example, they are conducted not so much out of consideration for environmental protection, social benefit, and money saving as under the influence of the surrounding environment [26]. that is to say, social norms actually affect people's pro-environmental behavior. different social norms affect pro-environmental behavior differently. social norms are generally divided into two categories, namely descriptive norms and injunctive norms. descriptive norms refer to the popularity of a certain act, whereas injunctive norms refer to social approval of the act [9].
reno, cialdini, and kallgren [27] argue that injunctive norms can lead people to perform certain behaviors beyond specific socio-cultural contexts and provide stronger behavioral guidance. many scholars have carried out empirical research on different aspects of garbage disposal behavior, energy conservation behavior, and resource conservation behavior, by which they are able to demonstrate the impact of the two norms on various pro-environmental behaviors. however, social norms are not just descriptive norms and injunctive norms. park and smith [10] indicate that subjective norms, personal descriptive norms, personal injunctive norms, societal descriptive norms and societal injunctive norms all affect organ donation behavior. they pay special attention to the perceptibility of norms and distinguish the individual from the social aspects. the personal level mainly refers to the influence of the family and friends with whom people are in daily contact, while the social level mainly refers to the socio-cultural environment. because this study focuses on groups in the same cultural context, it will focus on perceived descriptive norms and perceived injunctive norms at the individual level. in addition, park and smith [10] also point out that descriptive and injunctive norms are derived only from the division within normative theory; in fact, from the perspective of planned behavior theory, subjective norms should also be included. subjective norms are designed to capture descriptive norms, i.e., whether important others themselves perform the behavior [28]. we have reason to believe that norm perception in these three aspects will affect pro-environmental behavior.
thus, we assume that:
h1: social norm perception has a positive correlation with pro-environmental behavior;
h1-1: subjective norm perception has a positive correlation with pro-environmental behavior;
h1-2: descriptive norm perception has a positive correlation with pro-environmental behavior;
h1-3: injunctive norm perception has a positive correlation with pro-environmental behavior.
the influence of media on pro-environmental behavior has been confirmed by much research. the impact mechanisms can be roughly divided into three categories. first, from the perspective of risk communication, media have an amplifying effect on public environmental risk perception, which affects people's pro-environmental behavior by affecting their attitudes. agha [29] indicates that exposure to information about aids in the mass media amplifies people's perception of risk and causes them to change behavior. mileti [30] finds that media communication plays a mediating role between risk perception and pro-environmental behavior. zeng [31] believes that new media have a greater amplifying effect on environmental risk perception. understanding of the role of media in affecting people's risk perception is multi-dimensional. for example, wahlberg and sjoberg [32] believe that media do have an impact on risk perception, but the impact is not as strong as that of interpersonal communication, and its relationship with behavioral change is not certain. meanwhile, fischhoff [33] points out that the relationship between media and risk perception has gone through several stages. in sum, these studies all demonstrate that media influence pro-environmental behavior by affecting people's risk perception. the second category, from the perspective of uses and gratifications theory, points out that media can raise people's environmental concern, provide environment-related knowledge, and thereby affect pro-environmental behavior.
for example, huang [34] discovers that the global warming information obtained by taiwanese residents from television, newspapers, and the internet did influence them to behave in a more pro-environmental manner. the works of holbert et al. [35] and trivedi, patel and acharya [36] both examine the impact of media use on pro-environmental behavior from the perspective of promoting environmental concern. the third category analyzes the role of media within the frameworks of planned behavior theory, subjective norm theory, media dependence theory and other theories. among them, chan [11] combines the influence of mass media on subjective norms with the analysis model of planned behavior theory to analyze the pro-environmental behavior of hong kong residents. lee [37] integrates media exposure into an attitude-intention-behavior model of pro-environmental behavior, and ho [38] combines planned behavior theory and media dependence theory. liao et al. [39] test the mediating role of perceived media influence between perceived media exposure of others and perceived social norms. among these studies, the most typical and basic is the ipmi (influence of presumed media influence) model [40]. the above research leads us to propose the following hypothesis: h2: media usage for environmental information acquisition has a positive correlation with pro-environmental behavior. by further analysis, we find that the influence of media on pro-environmental behavior shows different characteristics at different stages in the history of media development. for example, in the early days, people mainly focused on the influence of such traditional media types as television, radio, and newspapers on pro-environmental behaviors. then the internet gradually displayed its influence. in recent years, however, social media have become a new force exerting influence on pro-environmental behaviors.
While Holbert et al. [35] mainly focus on TV, Huang [34] compares TV, newspapers, and the Internet, and Ho [38] goes further by grouping other media as traditional media to be compared with the Internet. By comparison, the impact of social media on pro-environmental behavior deserves further exploration. In fact, social media have multiple potential ways of influencing pro-environmental behavior. For example, Oakley et al. [41] and Mankoff et al. [42] point out that the display function of social media reminds the public to follow and promote environmental behavior. At the same time, the recording function of social media gives the public an intuitive sense of the effects of their pro-environmental behaviors, thereby strengthening their engagement in such behavior. Social media also stimulate social comparison, activate normative perception, and improve pro-environmental behavior [12]. A further advantage of social media is that the information they provide attracts more public trust than that from official media, which helps build online communities and promotes participation in pro-environmental behaviors [43]. We therefore argue that social media not only promote pro-environmental behaviors but also combine the characteristics of mass communication with those of interpersonal communication, achieving more than traditional media can. Accordingly, we assume the following:

H2-1: Traditional media have a positive correlation with pro-environmental behavior; H2-2: social media have a positive correlation with pro-environmental behavior; H2-3: the degrees of influence of social media and traditional media on pro-environmental behavior differ.

Previous studies have indicated that the perception of social norms can significantly affect pro-environmental behavior. Norm activation theory provides one theoretical support.
It is also supported by the focus theory of normative conduct. Behavioral studies have confirmed that manipulating people's norm perception does affect their pro-environmental behavior: for example, when households that consume more electricity than their community's average are informed of that average, they reduce their consumption [44]. This finding is especially significant for promoting pro-environmental behavior in countries like China, where it is increasingly difficult to raise people's environmental awareness and environmental behavior further [45,46]. When the influencing factors identified by the theory of planned behavior become less influential, effectively activating public norm perception becomes particularly important. Research also finds that perceived descriptive norms and perceived injunctive norms have different effects on people's pro-environmental behavior. Taking "discarding garbage" and "zoo feeding" as examples, descriptive norms describe what most people do, whereas injunctive norms concern the social consequences of behaviors in different contexts [27,47]. It is therefore particularly important to discover how to arouse people's normative focus. Subjective norms, a construct from the theory of planned behavior, also affect people's pro-environmental behavior. Our study pays particular attention to how these three types of social norms relate to pro-environmental behavior in the Chinese context. Social norms can only affect individual behavior when they are perceived and activated. Researchers have explored the connection between social norms and pro-environmental behaviors by examining the mechanisms through which social norms are produced and activated. Miller and Prentice [48] conclude that there are three sources of normative beliefs: direct observation of the behavior of others; communication between people or via media; and inference of others' behavior based on personal knowledge.
Media, particularly social media, have become more and more important in our lives. Social media combine the functions of traditional mass media with those of interpersonal communication. While shaping our lives, social media also help adjust our understanding of norms. Existing research demonstrates that media exposure does affect people's perception of social norms and, in turn, their pro-environmental behavior. For example, Chan [11] argues that mass media influence pro-environmental behavior by affecting people's subjective norms. Hynes [12] likewise indicates that social media indirectly enhance people's normative cognition by activating social-comparison psychology. Whether traditional media and social media activate the perception of social norms in the same way, and how any difference affects their influence on pro-environmental behavior, are the focus of our research. We therefore propose the following:

H3: Media exposure moderates the relationship between norm perception and pro-environmental behavior; H3-1: traditional media moderate the relationship between norm perception and pro-environmental behavior; H3-2: social media moderate the relationship between norm perception and pro-environmental behavior.

Previous studies demonstrate that different norm perceptions have different effects on pro-environmental behavior. For example, injunctive norm perception is more likely than descriptive norm perception to make people step outside specific situations, making the norm more effective [27]. This finding is particularly important for guiding public behavior in countries with a low level of pro-environmental behavior. At the same time, the mechanism by which media exposure affects pro-environmental behavior also differs across media: Ho [38], for instance, finds that traditional media attention and Internet media attention have different effects on green buying behavior.
Chan [11] indicates that television, newspapers, and magazines have different effects on subjective norms. The influence of social media on norm perception, and its mechanism, are even more likely to differ. For example, social media have been found to affect pro-environmental behavior through social display, social comparison, and public environmental-protection efficacy; because their content is user-generated, they carry higher influence than official media. The impact of social media on different norm perceptions may therefore also differ and ultimately affect environmental behavior, so we assume the following:

H3-1-1: Traditional media moderate the relationship between subjective norm perception and pro-environmental behavior; H3-1-2: traditional media moderate the relationship between descriptive norm perception and pro-environmental behavior; H3-1-3: traditional media moderate the relationship between injunctive norm perception and pro-environmental behavior; H3-2-1: social media moderate the relationship between subjective norm perception and pro-environmental behavior; H3-2-2: social media moderate the relationship between descriptive norm perception and pro-environmental behavior; H3-2-3: social media moderate the relationship between injunctive norm perception and pro-environmental behavior.

Summarizing these hypotheses, this study focuses on research questions at two levels: RQ1: How do social norm perception and media exposure affect pro-environmental behavior? RQ2: Does media exposure moderate the relationship between social norm perception and pro-environmental behavior, and if so, how do social media and traditional media moderate the relationship between the different types of norm perception and pro-environmental behavior in the Chinese social context?
This research is based on 550 responses collected through Wenjuanxing (Changsha Ranxing Information Technology Co., Ltd., Changsha, Hunan Province, China), a Chinese online questionnaire platform whose sample database comprises 2.6 million people across 30 provincial units in China (excluding Xinjiang). The survey controlled quality by setting IP-address and user restrictions and logic questions, distributed questionnaire links randomly, and set the minimum effective sample size at 550. The survey was conducted between August 20 and 27, 2019. The demographic distribution of the sample is as follows: 44.7% male and 55.3% female. Taking into account that China's actual population has a sex ratio of 104.64 males per 100 females, we apply weighting in the subsequent data processing. The educational distribution is 83.3% universities and colleges, 11.8% master's degree and above, and 2.8% high school and below. In terms of income, annual incomes of CNY 50,000-100,000 account for 33.8%, CNY 100,000-200,000 for 30.5%, less than CNY 50,000 for 28.1%, and more than CNY 200,000 for 7.5%. The average age of the sample is 30.5; the full demographic distribution can be found in our other study [49]. In addition, 83.9% of respondents came from cities, 10.49% from counties and towns, and 5.61% from rural areas. Public pro-environmental behavior is the core variable of this study. There are many measurement approaches, such as those of Bratt [50], Gatersleben et al. [51], and Dono et al. [52]. Taking into account China's specific social and cultural background, we adopt the question series used by Hong et al. (2014) [53] in the CGSS (China General Social Survey) of 2003, 2010, and 2015. This group of questions consists of 10 measurement items, including recycling and discussing environmental issues with relatives and friends [49].
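The sex-ratio adjustment described above can be sketched as a simple two-cell post-stratification weighting. This is only an illustration of the arithmetic implied by the reported figures (44.7% male in the sample, 104.64 males per 100 females in the population); the authors performed the actual weighting in SPSS, and their exact procedure is not specified.

```python
# Post-stratification weights for a two-cell (male/female) adjustment.
# A minimal sketch; the paper's exact weighting procedure is assumed.

def sex_weights(sample_male_share, pop_ratio_m_per_100f):
    """Return (male_weight, female_weight) so that the weighted sample
    matches the population sex composition."""
    pop_male_share = pop_ratio_m_per_100f / (pop_ratio_m_per_100f + 100.0)
    w_male = pop_male_share / sample_male_share
    w_female = (1.0 - pop_male_share) / (1.0 - sample_male_share)
    return w_male, w_female

# Figures reported in the text: 44.7% male sample, ratio 104.64:100.
w_m, w_f = sex_weights(0.447, 104.64)

# The weighted male share should reproduce the population composition.
weighted_male_share = 0.447 * w_m / (0.447 * w_m + 0.553 * w_f)
```

Men are up-weighted (w_m > 1) and women down-weighted (w_f < 1), since men are under-represented in the sample relative to the population.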
The answer options are never, seldom, sometimes, and always, to which we assigned 0, 1, 2, and 3, respectively. The value range of the entire scale is 0-30, Cronbach's alpha is 0.741, the mean is 13.51, and the standard deviation is 3.89; see Table 1. There are various measurement scales for the perception of social norms. This study uses the measures of subjective norm perception, perceived descriptive norms, and perceived injunctive norms from Park and Smith [10], operationalized for our research theme; see Table 1. Considering the classification of media types in existing studies [38] and the actual media use of Chinese residents, traditional media in this study mainly include magazines, newspapers, television, radio, and the Internet. In particular, given the history of the Internet, this study separates social media from the Internet as a parallel category. The social media used by Chinese residents in this study mainly include Weibo, WeChat, TikTok, Kuaishou, Zhihu, QQ, Douban, Baidu Tieba, Facebook, Twitter, and Instagram. The measure records whether people obtain four specified kinds of information through the above media; the specific items follow our other research [54]. One point is scored for each item, so the final weighted-average score ranges from 0 to 4. For the traditional media usage for environment information acquisition (TME) scale, the mean is 1.8, the standard deviation is 0.80, and Cronbach's alpha is 0.777; for the social media usage for environment information acquisition (SME) scale, the mean is 1.04, the standard deviation is 0.60, and Cronbach's alpha is 0.807. See Table 1. Other variables in this study include demographic variables and important variables related to pro-environmental behavior: age, gender, community participation (CP), environmental knowledge (EK), and environmental risk perception (ERP).
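Cronbach's alpha, reported above for each scale, can be computed directly from item-level responses. The sketch below uses hypothetical 0-3 responses from five respondents on three items, not the study's data; the study's scales have 10 or more items.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# A minimal pure-Python sketch with made-up data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one inner list of scores per scale item, respondents aligned."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 0-3 (never/seldom/sometimes/always) responses:
items = [[0, 1, 2, 3, 2], [1, 1, 2, 3, 2], [0, 2, 2, 3, 1]]
alpha = cronbach_alpha(items)
```

Values around 0.7 or higher, like the 0.741 and 0.777-0.807 reported in the text, are conventionally read as acceptable internal consistency.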
Gender, age, and community participation are operationalized as follows. Gender: 1 = male, 0 = female, with female as the reference category. Age: calculated as 2019 minus the year of birth. Community participation covers membership in (1) churches and religious groups, (2) sports and fitness groups, (3) cultural and educational groups, (4) professional associations (such as educational associations and business associations), (5) school-related groups (e.g., alumni associations), (6) owners' committees, and (7) clan or family associations, with 1 point for each type of participation, for a range of 0-7; the internal consistency coefficient of the scale is 0.754 (Cronbach's alpha) [49]. Environmental knowledge adopts the CGSS2013 version, consisting of 10 measurement items; the internal consistency coefficient is 0.773, and the mean and standard deviation are 9.3 and 1.23, respectively. Environmental risk perception also uses the CGSS2013 version; the internal consistency coefficient of the 12-item scale is 0.818, and the mean and standard deviation are 3.74 and 0.93, respectively. For the measurement of these control variables, please refer to our related research [54]. See Table 1. This study uses SPSS 19.0 to test the research hypotheses via hierarchical regression: pro-environmental behavior is set as the dependent variable, and five blocks of predictor variables are entered into the regression model in turn.
These five blocks of predictors are (1) demographic variables: gender, age, and community participation; (2) environment-related variables: environmental knowledge and environmental risk perception; (3) norm perception variables, with three secondary indicators: subjective norm perception, descriptive norm perception, and injunctive norm perception; (4) traditional media usage for environment information acquisition and social media usage for environment information acquisition; and (5) interaction variables, computed by multiplying the relevant norm-perception and media-exposure variables. Pearson tests of the relevant variables show that, among the general variables affecting pro-environmental behavior, age (r = 0.152, p < 0.01), environmental knowledge (r = −0.116, p < 0.01), and environmental risk perception (r = 0.152, p < 0.01) are significantly correlated with pro-environmental behavior. Of particular note, environmental knowledge and pro-environmental behavior show a significant negative correlation. Regarding norm perception, the overall norm perception score (r = 0.482, p < 0.01) and its three constituent variables, subjective norm perception (PSN) (r = 0.431, p < 0.01), descriptive norm perception (PDN) (r = 0.401, p < 0.01), and injunctive norm perception (PIN) (r = 0.381, p < 0.01), are all significantly related to pro-environmental behavior. As for media exposure, the correlation for traditional media usage for environment information acquisition (TME) is not significant, while that for social media usage for environment information acquisition (SME) is (r = 0.235, p < 0.01). These results provide a basis for the subsequent regression analysis.
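The hierarchical-regression logic described above, entering predictor blocks in turn and examining the gain in explained variance when the interaction term is added, can be sketched with ordinary least squares on simulated data. Everything below is hypothetical: the variable names, the 0.5/0.3/0.25 coefficients in the data-generating step, and the sample size are illustrative assumptions, not the study's data (the authors used SPSS 19.0).

```python
import numpy as np

# Simulated standardized predictors (assumed, not the study's sample).
rng = np.random.default_rng(0)
n = 200
norm = rng.normal(size=n)    # norm perception
media = rng.normal(size=n)   # social-media exposure
# Data-generating process with a genuine norm x media interaction:
behavior = 0.5 * norm + 0.3 * media + 0.25 * norm * media + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: main effects only; Step 2: add the product (interaction) term.
r2_main = r_squared(np.column_stack([norm, media]), behavior)
r2_full = r_squared(np.column_stack([norm, media, norm * media]), behavior)
delta_r2 = r2_full - r2_main  # variance gained by the interaction step
```

In the hierarchical approach, a nontrivial `delta_r2` at the interaction step is what signals moderation; in practice one would also mean-center the predictors before forming the product term to reduce collinearity.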
It should be noted that because social norm perception is a composite variable made up of subjective norm perception, descriptive norm perception, and injunctive norm perception, its correlation coefficients with these three variables exceed 0.8 (r > 0.8); see Table 2. Table 3 shows the results of regression analyses using the demographic variables, environment-related variables, norm perception, traditional media usage for environment information acquisition (TME), social media usage for environment information acquisition (SME), and the norm perception and media interaction terms to predict pro-environmental behavior. We report the coefficients for the final model and the final R-squared value. The results show no significant relationship between gender and pro-environmental behavior; that is, men and women do not differ significantly in pro-environmental behavior. Age has a significant influence on pro-environmental behavior (b = 0.073, p = 0.000), with older people engaging in more pro-environmental behavior, as does community participation (b = 0.500, p = 0.000): those more involved in community organization activities engage in more pro-environmental behavior. This model accounts for 1.84% of the variance in pro-environmental behavior. The relationship between the environment-related variables and pro-environmental behavior is shown in M3-2. Both have a significant relationship with pro-environmental behavior, but the impact of environmental risk perception is stronger (b = 1.466, p = 0.000), while that of environmental knowledge is weaker (b = −0.288, p < 0.05) and, in the current sample, negative.
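The r > 0.8 correlations between the composite norm-perception score and its own components are expected by construction, since an average of correlated components shares much of each component's variance. A purely simulated illustration (all numbers assumed, not the study's data):

```python
import random

# Three norm-perception components sharing one latent factor, plus noise.
# Hypothetical simulation to show why a composite correlates > 0.8
# with its own components.
random.seed(2)
latent = [random.gauss(0, 1) for _ in range(500)]
psn = [l + random.gauss(0, 0.8) for l in latent]  # subjective
pdn = [l + random.gauss(0, 0.8) for l in latent]  # descriptive
pin = [l + random.gauss(0, 0.8) for l in latent]  # injunctive
composite = [(a + b + c) / 3 for a, b, c in zip(psn, pdn, pin)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

r = corr(composite, psn)  # high by construction
```

This built-in overlap is why the composite and its components should not be entered into the same regression model, as the analyses in Tables 3 and 4 keep them in separate models.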
M3-3 examines the effect of adding the norm perception variable on pro-environmental behavior. Norm perception has a significant positive effect on pro-environmental behavior (b = 2.498, p = 0.000), supporting H1. The explanatory power of the model improves substantially, with the adjusted R-squared reaching 34.6%. M3-4 examines the impact of traditional media usage for environment information acquisition (TME) and social media usage for environment information acquisition (SME) on pro-environmental behavior. The effect of TME approaches significance (b = −0.379, p = 0.053), while the effect of SME is significant (b = 0.875, p = 0.001). This partially supports H2 and H2-1 and fully supports H2-2 and H2-3: media exposure positively affects pro-environmental behavior overall; TME has a near-significant but negative association with pro-environmental behavior; SME has a significant positive association; and the impact of social media on pro-environmental behavior is greater than that of traditional media. M3-5 examines the joint impact of norm perception and media exposure on pro-environmental behavior.
The interaction of norm perception with traditional media usage for environment information acquisition (TME) has no significant effect on pro-environmental behavior (b = −0.358, p = 0.237), whereas the interaction of norm perception with social media usage for environment information acquisition (SME) shows a marginally significant positive association with pro-environmental behavior (b = 0.783, p = 0.055). This partially supports H3: H3-1 is not verified, while H3-2 is. Table 3. Multiple regression analysis of the influence of norm perception on pro-environmental behavior. To further test the robustness of the results obtained by the causal-steps regression above, we use the bootstrap method to test the main relationships with the PROCESS macro in SPSS. The results show that, without controlling for other variables, the moderating effect of social media on the relationship between norm perception and pro-environmental behavior is significant (r = 0.49, p = 0.00), with LLCI and ULCI of 0.0053 and 0.0386, respectively; this interval does not contain 0, which suggests that the result is robust. The moderating effect of traditional media is also significant (r = 0.48, p = 0.00), but the result is not robust, with LLCI and ULCI of −0.0146 and 0.0064, respectively. Together with the pairwise Pearson correlations in Table 2, these results suggest a strong moderating effect of social media and, at most, a marginal moderating effect of traditional media when other variables are not controlled. To examine in more detail the moderating effect of media environmental information exposure on the relationship between the different types of norm perception and pro-environmental behavior, we conduct further regression analyses, reported in Table 4.
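The robustness logic described above (an effect is treated as robust when its 95% bootstrap confidence interval, LLCI to ULCI, excludes zero) can be sketched with a percentile bootstrap of a regression slope. The data, the 0.4 slope, and the sample size below are hypothetical; the authors used the PROCESS macro in SPSS rather than hand-rolled code.

```python
import random

# Percentile-bootstrap confidence interval for a simple OLS slope.
# A sketch of the "CI excludes zero -> robust" decision rule; the data
# are simulated and only illustrate the mechanics.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(150)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]  # assumed true slope 0.4

def slope(xs, ys):
    """OLS slope of ys on xs: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

B = 2000
boots = []
for _ in range(B):
    idx = [random.randrange(len(x)) for _ in range(len(x))]  # resample rows
    boots.append(slope([x[i] for i in idx], [y[i] for i in idx]))
boots.sort()
llci, ulci = boots[int(0.025 * B)], boots[int(0.975 * B) - 1]
robust = not (llci <= 0.0 <= ulci)  # interval excludes zero -> robust
```

This mirrors how the [0.0053, 0.0386] interval supports the social-media effect while the [−0.0146, 0.0064] interval, which straddles zero, does not support the traditional-media effect.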
M4-1 and M4-2 mainly examine the relationships of the demographic and environment-related variables with pro-environmental behavior, and the results are consistent with M3-1 and M3-2. M4-3 examines the relationship between the three types of norm perception and pro-environmental behavior: subjective norm perception (b = 1.197, p = 0.000), descriptive norm perception (b = 0.744, p = 0.003), and injunctive norm perception (b = 0.586, p = 0.014) are all significantly positively correlated with pro-environmental behavior, supporting H1-1, H1-2, and H1-3. M4-4 examines the effects of traditional media usage for environment information acquisition (TME) and social media usage for environment information acquisition (SME) on pro-environmental behavior. Traditional media have no significant effect (b = 0.349, p = 0.076), while social media have a significant positive effect (b = 0.846, p = 0.002), once again negating H2-1 and supporting H2-2 and H2-3. The results of M4-5 indicate that the interaction between injunctive norm perception and TME is significantly negatively associated with pro-environmental behavior (b = −0.735, p = 0.027), whereas the interactions between subjective norm perception and TME (b = −0.040, p = 0.903) and between descriptive norm perception and TME (b = 0.467, p = 0.217) show no significant association with pro-environmental behavior.
The interaction between subjective norm perception and social media usage for environment information acquisition (SME) is significantly positively associated with pro-environmental behavior (b = 0.964, p = 0.048), while the interactions between descriptive norm perception and SME (b = −0.168, p = 0.757) and between injunctive norm perception and SME (b = −0.192, p = 0.676) show no significant association. These results support H3-1-3 and H3-2-1; H3-1-1, H3-1-2, H3-2-2, and H3-2-3 are not verified. That is, traditional media moderate the relationship between injunctive norm perception and pro-environmental behavior, and social media moderate the relationship between subjective norm perception and pro-environmental behavior; for the other combinations of norm perception and media, no significant moderation is found. This result shows more clearly the specific ways in which TME and SME influence pro-environmental behavior. Table 4. Multiple regression analysis of the influence of different types of norm perception on pro-environmental behavior. This article aims to examine how norm perception and media exposure influence pro-environmental behavior, and whether media exposure moderates the relationship between norm perception and pro-environmental behavior. To respond to the research questions and report the results clearly, we summarize the hypothesis-test results in Table 5. The table demonstrates that: (1) norm perception does affect pro-environmental behavior, and different types of norm perception have different effects.
(2) The influence of media exposure on pro-environmental behavior is confirmed again, although the effect of traditional media deserves further exploration. (3) The influence of media exposure on the relationship between norm perception and pro-environmental behavior is differentiated: traditional media play a negative moderating role in the relationship between injunctive norm perception and pro-environmental behavior, while social media play a positive moderating role in the relationship between subjective norm perception and pro-environmental behavior. This study mainly discusses the role of traditional media and social media in activating public norm perception and influencing pro-environmental behavior in the current media society. The results show that traditional media usage for environment information acquisition (TME) activates norm perception and affects pro-environmental behavior less than social media usage for environment information acquisition (SME). Further detailed analysis shows that deepening public perception of injunctive norms through traditional media actually reduces people's pro-environmental behavior, whereas social media mainly help activate people's subjective norm perceptions and thereby promote pro-environmental behavior.
The influence of traditional media environmental knowledge acquisition on subjective norm perception and descriptive norm perception shows no significant association with pro-environmental behavior, and the influence of social media usage for environment information acquisition (SME) on descriptive norm perception and injunctive norm perception likewise shows no significant association with pro-environmental behavior. These results are instructive for promoting pro-environmental behavior in the age of social media. The influence of media on pro-environmental behavior, and its underlying mechanisms, have been confirmed by many studies. As discussed in the literature review, media promote pro-environmental behavior by amplifying environmental risks [29,31], by increasing people's environmental concern and environmental knowledge [34-36], and by enhancing people's value orientations or activating their norm perceptions [11]. In short, media can indirectly affect people's pro-environmental behavior in various ways. Relevant research has drawn on risk perception theory, uses and gratifications theory, cultivation theory, the theory of planned behavior, and norm activation theory, and has been centrally presented in media dependency theory. However, existing studies have mainly focused on the impact of traditional media, such as television, radio, newspapers, and the Internet, on pro-environmental behavior. Research on the impact of social media on pro-environmental behavior has likewise concentrated on uncovering influence mechanisms: for example, social media exert normative pressure through their social-comparison function [12] and improve self-efficacy to promote pro-environmental behavior [41,42]. Few studies, however, directly compare the influence of social media and traditional media on pro-environmental behavior.
Our research indicates that social media play a more significant role than traditional media in influencing pro-environmental behavior. In the current media environment, social media have become the main channel of media influence on pro-environmental behavior, while the influence of traditional media is declining. The influence of interpersonal communication on pro-environmental behavior has long been a focus of research. Ho [38] compares the respective roles of traditional media and interpersonal communication in media dependency and green purchasing behavior and, like many other studies, finds that interpersonal communication plays a greater role than traditional media. Nevertheless, few studies have compared interpersonal communication with social media. Social media blend the characteristics of interpersonal and mass communication. Interpreting interpersonal communication in terms of its demonstration effect, our research indicates that social media influence pro-environmental behavior by strengthening that demonstration effect. Social media affect pro-environmental behavior by displaying it publicly: Oakley et al. [41] and Mankoff et al. [42] indicate that social media put small everyday behaviors under public scrutiny and encourage pro-environmental behavior by improving public understanding of one's own behavior and that of others. Social media also affect pro-environmental behavior through their recording function, which helps the public form an awareness of their deeds and their effectiveness, and by stimulating social comparison and improving people's recognition of social norms [12]. Han et al.
[43] point out that user-generated content (UGC) is more likely to gain public trust than official information, activating pro-environmental norms, creating environmentally friendly online communities, and increasing public pro-environmental participation. Compared with studies that focus exclusively on social media, this study offers a comparative analysis of the mechanisms by which interpersonal communication and social media influence pro-environmental behavior. Our research shows that in the current era of social media, the influence of traditional media on pro-environmental behavior is weakening, while that of social media is increasing. Taking the activation of norm perception as an example, the influence of traditional media on pro-environmental behavior is far less significant than that of social media. Moreover, among the specific types of norm activation, the activation of injunctive norms by traditional media negatively affects pro-environmental behavior: excessively diffusing pro-environmental information through traditional media may make people feel pressured and reduce pro-environmental behavior, which closely parallels the negative correlation found in this sample between environmental knowledge and pro-environmental behavior. In comparison, disseminating pro-environmental information on social media helps improve people's normative cognition and promotes pro-environmental behavior. In other words, we should pay attention to the diffusion of pro-environmental information on social media, guide people to discuss pro-environmental behavior there, and promote public pro-environmental behavior by stimulating social comparison, norm perception, and self-efficacy. The focus of media promotion of pro-environmental behavior should shift from traditional media to social media.
This study indicates a complex relationship between the activation of norm perception and media usage; understanding this relationship is a prerequisite for effectively promoting pro-environmental behavior. The results show that the dissemination of injunctive normative information through traditional media should be appropriately reduced. This resembles the effect of excessive "persuasion" in communication research, especially in countries with a strong collectivist cultural tradition: an overemphasis on injunctive normative information, or on the evaluations of others, is likely to provoke psychological reactance and reduce pro-environmental behavior. The activation of subjective norms by social media shows that people's sense of freedom on social media can stimulate their self-efficacy and subjective norms, thereby promoting pro-environmental behavior; this demonstrates the flexible effects of social media on personal and social integration. As the relationship between media and people becomes ever more interactive, attending to how social media activate people's subjective norms will be an effective way to improve pro-environmental behavior. The focus of this study is the moderating effect of traditional media and social media on the relationship between norm perception and pro-environmental behavior; we therefore concentrate on the demographic and environment-related variables that have significant relationships with it. Many factors affect pro-environmental behavior: Gifford and Nilsson [1] summarize 18 major personal and social factors, but to focus on the core themes we select only the variables most important to this study. Regarding the conceptual configuration of norm perception, we divide it into subjective norm perception, descriptive norm perception, and injunctive norm perception.
here we mainly draw on the classification proposed by park and smith [28]; other, possibly more reliable operationalizations exist and are worthy of further research. this study examines the relationship between media's activation of norm perception and its impact on pro-environmental behavior. in fact, media contact, whether through traditional media or social media, comprises multiple specific media types. how does activation differ across specific platforms such as wechat and facebook? answering this would be very meaningful for studying the influence mechanisms of specific social media in different cultural contexts, and it is worth exploring in future research. in terms of method, this research uses traditional multi-level regression to compare different models; given the many types of media and norm perception involved, structural equation modeling could represent this complexity more clearly. in addition, for practical reasons, some hypotheses with significance levels slightly above 0.05 are marked as significant, which requires special note. this study is also based on a chinese sample. would the role of media contact in linking norm activation and pro-environmental behavior differ in other cultural contexts? if the activation of injunctive norms by traditional media reduces pro-environmental behaviors while the activation of subjective norms by social media promotes them, would traditional media's activation of injunctive norms influence pro-environmental behavior positively in non-collectivist cultures? this needs to be tested through comparative studies. to conclude, this study indicates that in the current media society, social media play a more important role than traditional media in regulating and promoting pro-environmental behavior.
understanding this role of social media is of great significance for regulating people's cognition and behavior in an era of constantly changing media environments. the promotion of pro-environmental behavior must rely on the media environment.

the authors declare no conflict of interest.

references:
personal and social factors that influence pro-environmental concern and behaviour: a review
twenty years after hines, hungerford, and tomera: a new meta-analysis of psycho-social determinants of pro-environmental behaviour
ajzen, i. belief, attitude, intention, and behavior: an introduction to theory and research
the theory of planned behavior
normative influences on altruism
a value-belief-norm theory of support for social movements: the case of environmentalism
travelers' pro-environmental behavior in a green lodging context: converging value-belief-norm theory and the theory of planned behavior
a review of pro-environmental behaviors from the perspective of two decision-making systems
a focus theory of normative conduct: recycling the concept of norms to reduce littering in public places
distinctiveness and influence of subjective norms, personal descriptive and injunctive norms, and societal descriptive and injunctive norms on behavioral intent: a case of two behaviors critical to organ donation
mass communication and pro-environmental behaviour: waste recycling in hong kong
i do it, but don't tell anyone! personal values, personal and social norms: can social media play a role in changing pro-environmental behaviours?
perceiving the community norms of alcohol use among students: some research implications for campus alcohol education programming
from reactive to proactive prevention: promoting an ecology of health on campus
working with men to prevent violence against women: program modalities and formats (part two)
college student misperceptions of alcohol and other drug norms among peers: exploring causes, consequences, and implications for prevention programs. in designing alcohol and other drug prevention programs in higher education: bringing theory into practice
the emergence and evolution of the social norms approach to substance abuse prevention. in the social norms approach to preventing school and college age substance abuse: a handbook for educators, counselors, and clinicians
normative misperceptions of peer seat belt use among high school students and their relationship to personal seat belt use
state of minnesota department of public safety project: the prevention collaborative's positive social norming campaign
campaign: marketing social norms to men to prevent sexual assault
a social norm intervention for men to prevent sexual assault
a social norms intervention to reduce coercive behaviors among deaf and hard-of-hearing college students
shades of green: linking environmental locus of control and pro-environmental behaviors
a focus theory of normative conduct: a theoretical refinement and reevaluation of the role of norms in human behavior
the transsituational influence of social norms
perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior
the impact of a mass media campaign on personal risk perception, perceived self-efficacy and on other behavioural predictors
the causal sequence of risk communication in the parkfield earthquake prediction experiment
technical risk vs. perceived risk: communication process and risk society amplification
risk perception and the media
risk perception and communication unplugged: twenty years of process. risk anal.
media use, environmental beliefs, self-efficacy, and pro-environmental behavior
environmental concern, patterns of television viewing, and pro-environmental behaviors: integrating models of media consumption and effects
causality analysis of media influence on environmental attitude, intention and behaviors leading to green purchasing
the role of media exposure, social exposure and biospheric value orientation in the environmental attitude-intention-behavior model in adolescents
applying the theory of planned behavior and media dependency theory: predictors of public pro-environmental behavioral intentions in singapore
motivators of pro-environmental behavior: examining the underlying processes in the influence of presumed media influence model
the influence of presumed influence
motivating sustainable behavior. ubiquitous comput.
org: increasing energy saving behaviors via social networks
online travel ugc as persuasive communication: explore its informational and normative influence on pro-environmental personal norms and behaviour
griskevicius, v. the constructive, destructive, and reconstructive power of social norms
trends in public environmental awareness in china
national environmental awareness in the high-speed economic growth period: based on the data of beijing municipal environmental awareness sampling survey
a focus theory of normative conduct: when norms do and do not affect behavior
the construction of social norms and standards
the influence of place attachment on pro-environmental behaviors: the moderating effect of social media
environmental behavior: generalized, sector-based, or compensatory?
measurement and determinants of environmentally significant consumer behavior
the relationship between environmental activism, pro-environmental behaviour and social identity
re-examining the measurement quality of the chinese new environmental paradigm (cnep) scale: an analysis based on the cgss 2010 data
a comparative study of the role of interpersonal communication, traditional media and social media in pro-environmental behavior: a china-based study

this article is an open access article distributed under the terms and conditions of the creative commons attribution (cc by) license.

key: cord-356353-e6jb0sex
authors: fourcade, marion; johns, fleur
title: loops, ladders and links: the recursivity of social and machine learning
date: 2020-08-26
journal: theory soc
doi: 10.1007/s11186-020-09409-x
sha:
doc_id: 356353
cord_uid: e6jb0sex

machine learning algorithms reshape how people communicate, exchange, and associate; how institutions sort them and slot them into social positions; and how they experience life, down to the most ordinary and intimate aspects. in this article, we draw on examples from the field of social media to review the commonalities, interactions, and contradictions between the dispositions of people and those of machines as they learn from and make sense of each other.

a fundamental intuition of actor-network theory holds that what we call "the social" is assembled from heterogeneous collections of human and non-human "actants." this may include human-made physical objects (e.g., a seat belt), mathematical formulas (e.g., financial derivatives), or elements from the natural world-such as plants, microbes, or scallops (callon 1984; latour 1989, 1993). in the words of bruno latour (2005, p. 5), sociology is nothing but the "tracing of associations."
"tracing," however, is a rather capacious concept: socio-technical associations, including those involving non-human "actants," always crystallize in concrete places, structural positions, or social collectives. for instance, men are more likely to "associate" with video games than women (bulut 2020) . furthermore, since the connection between men and video games is known, men, women and institutions might develop strategies around, against, and through it. in other words, techno-social mediations are always both objective and subjective. they "exist … in things and in minds … outside and inside of agents" (wacquant 1989, p. 43) . this is why people think, relate, and fight over them, with them, and through them. all of this makes digital technologies a particularly rich terrain for sociologists to study. what, we may wonder, is the glue that holds things together at the automated interface of online and offline lives? what kind of subjectivities and relations manifest on and around social network sites, for instance? and how do the specific mediations these sites rely upon-be it hardware, software, human labor-concretely matter for the nature and shape of associations, including the most mundane? in this article, we are especially concerned with one particular kind of associative practice: a branch of artificial intelligence called machine learning. machine learning is ubiquitous on social media platforms and applications, where it is routinely deployed to automate, predict, and intervene in human and non-human behavior. generally speaking, machine learning refers to the practice of automating the discovery of rules and patterns from data, however dispersed and heterogeneous it may be, and drawing inferences from those patterns, without explicit programming. 
1 using examples drawn from social media, we seek to understand the kinds of social dispositions that machine learning techniques tend to elicit or reinforce; how these social dispositions, in turn, help to support machine learning implementations; and what kinds of social formations these interactions give rise to-all of these, indicatively rather than exhaustively. our arguments are fourfold. in the first two sections below, we argue that the accretive effects of social and machine learning are fostering an ever-more-prevalent hunger for data, and searching dispositions responsive to this hunger-"loops" in this paper's title. we then show how interactions between those so disposed and machine learning systems are producing new orders of stratification and association, or "ladders" and "links", and new stakes in the struggle in and around these orders. the penultimate section contends that such interactions, through social and mechanical infrastructures of machine learning, tend to engineer competition and psycho-social and economic dependencies conducive to ever-more intensive data production, and hence to the redoubling of machine-learned stratification. finally, before concluding, we argue that machine learning implementations are inclined, in many respects, towards the degradation of sociality. consequently, new implementations are being called upon to judge and test the kind of solidaristic associations that machine-learned systems have themselves produced, as a sort of second-order learning process. our conclusion is a call to action: to renew, at the social and machine learning interface, fundamental questions of how to live and act together.

1 according to pedro domingos's account, approaches to machine learning may be broken down into five "tribes." symbolists proceed through inverse deduction, starting with received premises or known facts and working backwards from those to identify rules that would allow those premises or facts to be inferred. the algorithm of choice for the symbolist is the decision tree. connectionists model machine learning on the brain, devising multilayered neural networks. their preferred algorithm is backpropagation, or the iterative adjustment of network parameters (initially set randomly) to try to bring that network's output closer and closer to a desired result (that is, towards satisfactory performance of an assigned task). evolutionaries canvass entire "populations" of hypotheses and devise computer programs to combine and swap these randomly, repeatedly assessing these combinations' "fitness" by comparing output to training data. their preferred kind of algorithm is the so-called genetic algorithm, designed to simulate the biological process of evolution. bayesians are concerned with navigating uncertainty, which they do through probabilistic inference. bayesian models start with an estimate of the probability of certain outcomes (or a series of such estimates comprising one or more hypothetical bayesian networks) and then update these estimates as they encounter and process more data. analogizers focus on recognizing similarities within data and inferring other similarities on that basis. two of their go-to algorithms are the nearest-neighbor classifier and the support vector machine. the first makes predictions about how to classify unseen data by finding labeled data most similar to that unseen data (pattern matching). the second classifies unseen data into sets by plotting the coordinates of available or observed data according to their similarity to one another and inferring a decision boundary that would enable their distinction.

the things that feel natural to us are not natural at all. they are the result of long processes of inculcation, exposure, and training that fall under the broad concept of "socialization" or "social learning."
because the term "social learning" helps us better draw the parallel with "machine learning," we use it here to refer to the range of processes by which societies and their constituent elements (individuals, institutions, and so on) iteratively and interactively take on certain characteristics, and exhibit change-or not-over time. historically, the concept is perhaps most strongly associated with theories of how individuals, and specifically children, learn to feel, act, think, and relate to the world and to each other. theories of social learning and socialization have explained how people come to assume behaviors and attitudes in ways not well captured by a focus on internal motivation or conscious deliberation (miller and dollard 1941; bandura 1962; mauss 1979; elias 2000) . empirical studies have explored, for instance, how children learn speech and social grammar through a combination of direct experience (trying things out and experiencing rewarding or punishing consequences) and modeling (observing and imitating others, especially primary associates) (gopnik 2009 ). berger and luckmann (1966) , relying on the work of george herbert mead, discuss the learning process of socialization as one involving two stages: in the primary stage, children form a self by internalizing the attitudes of those others with whom they entertain an emotional relationship (typically their parents); in the secondary stage, persons-in-becoming learn to play appropriate roles in institutionalized subworlds, such as work or school. pierre bourdieu offers a similar approach to the formation of habitus. as a system of dispositions that "generates meaningful practices and meaning-giving perceptions," habitus takes shape through at least two types of social learning: "early, imperceptible learning" (as in the family) and "scholastic...methodical learning" (within educational and other institutions) (bourdieu 1984, pp. 170, 66) . organizations and collective entities also learn. 
for instance, scholars have used the concept of social learning to understand how states, institutions, and communities (at various scales) acquire distinguishing characteristics and assemble what appear to be convictions-in-common. ludwik fleck (2012 [1935]) and later thomas kuhn (1970) famously argued that science normally works through adherence to common ways of thinking about and puzzling over problems. relying explicitly on kuhn, hall (1993) makes a similar argument about elites and experts being socialized into long-lasting political and policy positions. collective socialization into policy paradigms is one of the main drivers of institutional path dependency, as it makes it difficult for people to imagine alternatives. for our purposes, social learning encapsulates all those social processes-material, institutional, embodied, and symbolic-through which particular ways of knowing, acting, and relating to one another as aggregate and individuated actants are encoded and reproduced, or by which "[e]ach society [gains and sustains] its own special habits" (mauss 1979, p. 99). "learning" in this context implies much more than the acquisition of skills and knowledge. it extends to adoption through imitation, stylistic borrowing, riffing, meme-making, sampling, acculturation, identification, modeling, prioritization, valuation, and the propagation and practice of informal pedagogies of many kinds. understood in this way, "learning" does not hinge decisively upon the embodied capacities and needs of human individuals because those capacities and needs are only ever realized relationally or through "ecological interaction," including through interaction with machines (foster 2018). it is not hard to see why digital domains, online interactions, and social media networks have become a privileged site of observation for such processes (e.g., greenhow and robelia 2009), all the more so since socialization there often starts in childhood.
this suggests that (contra dreyfus 1974) social and machine learning must be analyzed as co-productive of, rather than antithetical to, one another. machine learning is, similarly, a catch-all term-one encompassing a range of ways of programming computers or computing systems to undertake certain tasks (and satisfy certain performance thresholds) without explicitly directing the machines in question how to do so. instead, machine learning is aimed at having computers learn (more or less autonomously) from preexisting data, including the data output from prior attempts to undertake the tasks in question, and devise their own ways of both tackling those tasks and iteratively improving at them (alpaydin 2014). implementations of machine learning now span all areas of social and economic life. machine learning "has been turning up everywhere, driven by exponentially growing mountains of [digital] data" (domingos 2015). in this article, we take social media as one domain in which machine learning has been widely implemented. we do so recognizing that not all data analysis in which social media platforms engage is automated, and that those aspects that are automated do not necessarily involve machine learning. two points are important for our purposes: most "machines" must be trained, cleaned, and tested by humans in order to "learn." in implementations of machine learning on social media platforms, for instance, humans are everywhere "in the loop"-an immense, poorly paid, and crowdsourced workforce that relentlessly labels, rates, and expunges the "content" to be consumed (gillespie 2018; gray and suri 2019). and yet, both supervised and unsupervised machines generate new patterns of interpretation, new ways of reading the social world and of intervening in it. any reference to machine learning throughout this article should be taken to encapsulate these "more-than-human" and "more-than-machine" qualities of machine learning.
cybernetic feedback, data hunger, and meaning accretion
analogies between human (or social) learning and machine-based learning are at least as old as artificial intelligence itself. the transdisciplinary search for common properties among physical systems, biological systems, and social systems, for instance, was an impetus for the macy foundation conferences on "circular causal and feedback mechanisms in biology and social systems" in the early days of cybernetics (1946-1953). in the analytical model developed by norbert wiener, "the concept of feedback provides the basis for the theoretical elimination of the frontier between the living and the non-living" (lafontaine 2007, p. 31). just as the knowing and feeling person is dynamically produced through communication and interactions with others, the ideal cybernetic system continuously enriches itself from the reactions it causes. in both cases, life is irrelevant: what matters, for both living and inanimate objects, is that information circulates in an ever-renewed loop. put another way, information/computation are "substrate independent" (tegmark 2017, p. 67). wiener's ambitions (and even more, the exaggerated claims of his posthumanist descendants, see, e.g., kurzweil 2000) were immediately met with criticism. starting in the 1960s, philosopher hubert dreyfus arose as one of the main critics of the claim that artificial intelligence would ever approach its human equivalent. likening the field to "alchemy" (1965), he argued that machines would never be able to replicate the unconscious processes necessary for the understanding of context and the acquisition of tacit skills (1974, 1979)-the fact that, to quote michael polanyi (1966), "we know more than we can tell." in other words, machines cannot develop anything like the embodied intuition that characterizes humans.
furthermore, machines are poorly equipped to deal with the fact that all human learning is cultural, that is, anchored not in individual psyches but in collective systems of meaning and in sedimented, relational histories (vygotsky 1980; bourdieu 1990; durkheim 2001; hasse 2019) . is this starting to change today when machines successfully recognize images, translate texts, answer the phone, and write news briefs? some social and computational scientists believe that we are on the verge of a real revolution, where machine learning tools will help decode tacit knowledge, make sense of cultural repertoires, and understand micro-dynamics at the individual level (foster 2018 ). our concern is not, however, with confirming or refuting predictive claims about what computation can and cannot do to advance scholars' understanding of social life. rather, we are interested in how social and computational learning already interact. not only may social and machine learning usefully be compared, but they are reinforcing and shaping one another in practice. in those jurisdictions in which a large proportion of the population is interacting, communicating, and transacting ubiquitously online, social learning and machine learning share certain tendencies and dependencies. both practices rely upon and reinforce a pervasive appetite for digital input or feedback that we characterize as "data hunger." they also share a propensity to assemble insight and make meaning accretively-a propensity that we denote here as "world or meaning accretion." throughout this article, we probe the dynamic interaction of social and machine learning by drawing examples from one genre of online social contention and connection in which the pervasive influence of machine learning is evident: namely, that which occurs across social media channels and platforms. 
below we explain first how data hunger is fostered by both social and computing systems and techniques, and then how world or meaning accretion manifests in social and machine learning practices. these explanations set the stage for our subsequent discussion of how these interlocking dynamics operate to constitute and distribute power.
data hunger: searching as a natural attitude
as suggested earlier, the human person is the product of a long, dynamic, and never settled process of socialization. it is through this process of sustained exposure that the self (or the habitus, in pierre bourdieu's vocabulary) becomes adjusted to its specific social world. as bourdieu puts it, "when habitus encounters a social world of which it is the product, it is like a 'fish in water': it does not feel the weight of the water, and it takes the world about itself for granted" (bourdieu in wacquant 1989, p. 43). the socialized self is a constantly learning self. the richer the process-the more varied and intense the interactions-the more "information" about different parts of the social world will be internalized and the more socially versatile-and socially effective, possibly-the outcome. (this is why, for instance, parents with means often seek to offer "all-round" training to their offspring [lareau 2011].) machine learning, like social learning, is data hungry. "learning" in this context entails a computing system acquiring capacity to generalize beyond the range of data with which it has been presented in the training phase. "learning" is therefore contingent upon continuous access to data-which, in the kinds of cases that preoccupy us, means continuous access to output from individuals, groups, and "bots" designed to mimic individuals and groups. at the outset, access to data in enough volume and variety must be ensured to enable a particular learner-model combination to attain desired accuracy and confidence levels.
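the dependence of accuracy on data volume can be sketched with a deliberately trivial learner. everything below (the hidden rule, the data, the function names) is invented for illustration and is not part of the article's argument.

```python
# illustrative sketch (invented example): a trivial learner that
# estimates a decision threshold from labeled 1-d data. its error
# shrinks as the volume of training data grows, a toy version of
# machine learning's "data hunger."

TRUE_THRESHOLD = 0.6  # the hidden rule the learner tries to recover

def make_data(n):
    """n evenly spaced points, each labeled by the hidden rule."""
    xs = [i / n for i in range(n)]
    return [(x, x >= TRUE_THRESHOLD) for x in xs]

def fit_threshold(data):
    """place the boundary midway between the largest negative
    example and the smallest positive example seen so far."""
    neg = [x for x, label in data if not label]
    pos = [x for x, label in data if label]
    return (max(neg) + min(pos)) / 2

for n in (10, 100, 10_000):
    error = abs(fit_threshold(make_data(n)) - TRUE_THRESHOLD)
    print(n, error)  # error shrinks as n grows
```

only with enough data does the learned boundary approximate the hidden rule to a desired accuracy, which is the sense in which volume and variety are preconditions of "learning."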
thereafter, data of even greater volume and variety is typically (though not universally) required if machine learning is to deliver continuous improvement, or at least maintain performance, on assigned tasks. the data hunger of machine learning interacts with that of social learning in important ways. engineers, particularly in the social media sector, have structured machine learning technologies not only to take advantage of vast quantities of behavioral traces that people leave behind when they interact with digital artefacts, but also to solicit more through playful or addictive designs and cybernetic feedback loops. the machine-learning self is not only encouraged to respond more, interact more, and volunteer more, but also primed to develop a new attitude toward the acquisition of information (andrejevic 2019, p.53). with the world's knowledge at her fingertips, she understands that she must "do her own research" about everything-be it religion, politics, vaccines, or cooking. her responsibility as a citizen is not only to learn the collective norms, but also to know how to search and learn so as to make her own opinion "for herself," or figure out where she belongs, or gain new skills. the development of searching as a "natural attitude" (schutz 1970 ) is an eminently social process of course: it often means finding the right people to follow or emulate (pariser 2011) , using the right keywords so that the search process yields results consistent with expectations (tripodi 2018) , or implicitly soliciting feedback from others in the form of likes and comments. the social media user also must extend this searching disposition to her own person: through cybernetic feedback, algorithms habituate her to search for herself in the data. this involves looking reflexively at her own past behavior so as to inform her future behavior. 
surrounded by digital devices, some of which she owns, she internalizes the all-seeing eye and learns to watch herself and respond to algorithmic demands (brubaker 2020 ). data hunger transmutes into self-hunger: an imperative to be digitally discernible in order to be present as a subject. this, of course, exacts a kind of self-producing discipline that may be eerily familiar to those populations that have always been under heavy institutional surveillance, such as the poor, felons, migrants, racial minorities (browne 2015; benjamin 2019) , or the citizens of authoritarian countries. it may also be increasingly familiar to users of health or car insurance, people living in a smart home, or anyone being "tracked" by their employer or school by virtue of simply using institutionally licensed it infrastructure. but the productive nature of the process is not a simple extension of what michel foucault called "disciplinary power" nor of the self-governance characteristic of "governmentality." rather than simply adjusting herself to algorithmic demands, the user internalizes the injunction to produce herself through the machine-learning-driven process itself. in that sense the machine-learnable self is altogether different from the socially learning, self-surveilling, or self-improving self. the point for her is not simply to track herself so she can conform or become a better version of herself; it is, instead, about the productive reorganization of her own experience and self-understanding. as such, it is generative of a new sense of selfhood-a sense of discovering and crafting oneself through digital means that is quite different from the "analog" means of self-cultivation through training and introspection. when one is learning from a machine, and in the process making oneself learnable by it, mundane activities undergo a subtle redefinition. hydrating regularly or taking a stroll are not only imperatives to be followed or coerced into. 
their actual phenomenology morphs into the practice of feeding or assembling longitudinal databases and keeping track of one's performance: "step counting" and its counterparts (schüll 2016; adams 2019) . likewise, what makes friendships real and defines their true nature is what the machine sees: usually, frequency of online interaction. for instance, snapchat has perfected the art of classifying-and ranking-relationships that way, so people are constantly presented with an ever-changing picture of their own dyadic connections, ranked from most to least important. no longer, contra foucault (1988) , is "permanent self-examination" crucial to self-crafting so much as attention to data-productive practices capable of making the self learnable and sustaining its searching process. to ensure one's learnability-and thereby one's selfhood-one must both feed and reproduce a hunger for data on and around the self. human learning is not only about constant, dynamic social exposure and world hunger, it is also about what we might call world or meaning accretion. 2 the self is constantly both unsettled (by new experiences) and settling (as a result of past experiences). people take on well institutionalized social roles (berger and luckmann 1966) . they develop habits, styles, personalities-a "system of dispositions" in bourdieu's vocabulary-by which they become adjusted to their social world. this system is made accretively, through the conscious and unconscious sedimentation of social experiences and interactions that are specific to the individual, and variable in quality and form. accretion here refers to a process, like the incremental build-up of sediment on a riverbank, involving the gradual accumulation of additional layers or matter. 
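the arithmetic of such sedimentation can be put in a few lines. in a running mean, each successive datum shifts the accumulated "impression" by a smaller and smaller increment; the values below are invented purely for illustration.

```python
# illustrative sketch (invented values): a running-mean "impression
# score" showing how the marginal influence of each new datum shrinks
# as data accumulates, the accretive dynamic described above.

def update_impression(mean, n, new_datum):
    """incrementally fold one new observation into an accumulated mean
    of n prior observations."""
    return mean + (new_datum - mean) / (n + 1)

mean, n = 0.0, 0
for datum in [1.0] * 99:   # 99 accumulated signals of one kind
    mean = update_impression(mean, n, datum)
    n += 1

shift = update_impression(mean, n, 0.0) - mean  # one contrary datum
print(round(shift, 4))  # → -0.01
```

after 99 consistent observations, a single contrary datum moves the accumulated score by only one percent: the more sediment, the less reconfigurative force any new layer carries.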
even when change occurs rapidly and unexpectedly, the ongoing process of learning how to constitute and comport oneself and perform as a social agent requires one to grapple with and mobilize social legacies, social memory, and pre-established social norms (goffman 1983) . the habitus, bourdieu would say, is both structured and structuring, historical and generative. social processes of impression formation offer a good illustration of how social learning depends upon accreting data at volume, irrespective of the value of any particular datum. the popular insight that first impressions matter and tend to endure is broadly supported by research in social psychology and social cognition (uleman and kressel 2013) . it is clear that impressions are formed cumulatively and that early-acquired information tends to structure and inform the interpretation of information later acquired about persons and groups encountered in social life (hamilton and sherman 1996) . this has also been shown to be the case in online environments (marlow et al. 2013) . in other words, social impressions are constituted by the incremental build-up of a variegated mass of data. machine learning produces insight in a somewhat comparable way-that is, accretively. insofar as machine learning yields outputs that may be regarded as meaningful (which is often taken to mean "useful" for the task assigned), then that "meaning" is assembled through the accumulation of "experience" or from iterative exposure to available data in sufficient volume, whether in the form of a stream or in a succession of batches. machine learning, like social learning, never produces insight entirely ab initio or independently of preexisting data. to say that meaning is made accretively in machine learning is not to say that machine learning programs are inflexible or inattentive to the unpredictable; far from it. 
all machine learning provides for the handling of the unforeseen; indeed, capacity to extend from the known to the unknown is what qualifies machine learning as "learning." moreover, a number of techniques are available to make machine learning systems robust in the face of "unknown unknowns" (that is, rare events not manifest in training data). nonetheless, machine learning does entail giving far greater weight to experience than to the event. 3 the more data that has been ingested by a machine learning system, the less revolutionary, reconfigurative force might be borne by any adventitious datum that it encounters. if, paraphrasing marx, one considers that people make their own history, not in circumstances they choose for themselves, but rather in present circumstances given and inherited, then the social-machine learning interface emphasizes the preponderance of the "given and inherited" in present circumstances, far more than the potentiality for "mak[ing]" that may lie within them (marx 1996 [1852]). one example of the compound effect of social and automated meaning accretion in the exemplary setting to which we return throughout this article-social media-is the durability of negative reputation across interlocking platforms. for instance, people experience considerable difficulty in countering the effects of "revenge porn" online, reversing the harms of identity theft, or managing spoiled identities once they are digitally archived (lageson and maruna 2018). as langlois and slane have observed, "[w]hen somebody is publicly shamed online, that shaming becomes a live archive, stored on servers and circulating through information networks via search, instant messaging, sharing, liking, copying, and pasting" (langlois and slane 2017).
in such settings, the data accretion upon which machine learning depends for the development of granular insights-and, on social media platforms, associated auctioning and targeting of advertising-compounds the cumulative, sedimentary effect of social data, making negative impressions generated by "revenge porn," or by one's online identity having been fraudulently coopted, hard to displace or renew. the truth value of later, positive data may be irrelevant if enough negative data has accumulated in the meantime. data hunger and the accretive making of meaning are two aspects of the embedded sociality of machine learning and of the "mechanical" dimensions of social learning. together, they suggest modes of social relation, conflict, and action that machine learning systems may nourish among people on whom those systems bear, knowingly or unknowingly. this has significant implications for social and economic inequality, as we explore below. what are the social consequences of machine learning's signature hunger for diverse, continuous, ever more detailed and "meaningful" data and the tendency of many automated systems to hoard historic data from which to learn? in this section, we discuss three observable consequences of data hunger and meaning accretion. we show how these establish certain non-negotiable preconditions for social inclusion; we highlight how they fuel the production of digitally-based forms of social stratification and association; and we specify some recurrent modes of relation fostered thereby. all three ordering effects entail the uneven distribution of power and resources and all three play a role in sustaining intersecting hierarchies of race, class, gender, and other modes of domination and axes of inequality. machine learning's data appetite and the "digestive" or computational abilities that attend it are often sold as tools for the increased organizational efficiency, responsiveness, and inclusiveness of societies and social institutions. 
with the help of machine learning, the argument goes, governments and non-governmental organizations develop an ability to render visible and classify populations that are traditionally unseen by standard data infrastructures. moreover, those who have historically been seen may be seen at a greater resolution, or in a more finely-grained, timely, and difference-attentive way. among international organizations, too, there is much hope that enhanced learning along these lines might result from the further utilization of machine learning capacities (johns 2019). for instance, machine learning-deployed in fingerprint, iris, or facial recognition, or to nourish sophisticated forms of online identification-is increasingly replacing older, document-based modes of identification (torpey 2018)-and transforming the very concept of citizenship in the process (cheney-lippold 2016). whatever the pluses and minuses of "inclusiveness" in this mode, it entails a major infrastructural shift in the way that social learning takes place at the state and inter-state level, or how governments come to "know" their polities. governments around the world are exploring possibilities for gathering and analysing digital data algorithmically, to supplement-and eventually, perhaps, supersede-household surveys, telephone surveys, field site visits, and other traditional data collection methods. this devolves the process of assembling and representing a polity, and understanding its social and economic condition, down to agents outside the scope of public administration: commercial satellite operators (capturing satellite image data used to assess a range of conditions, including agricultural yield and poverty), supermarkets (gathering scanner data, now widely used in generating the consumer price index), and social media platforms.
if official statistics (and associated data gathering infrastructures and labor forces) have been key to producing the modern polity, governmental embrace of machine learning capacities signals a change in ownership of that means of production. social media has become a key site for public and private parties-police departments, immigration agencies, schools, employers and insurers among others-to gather intelligence about the social networks of individuals, their health habits, their propensity to take risks or the danger they might represent to the public, to an organization's bottom line or to its reputation (trottier 2012; omand 2017; bousquet 2018; amoore 2020; stark 2020) . informational and power asymmetries characteristic of these institutions are often intensified in the process. this is notwithstanding the fact that automated systems' effects may be tempered by manual work-arounds and other modes of resistance within bureaucracies, such as the practices of frontline welfare workers intervening in automated systems in the interests of their clients, and strategies of foot-dragging and data obfuscation by legal professionals confronting predictive technologies in criminal justice (raso 2017; brayne and christin 2020) . the deployment of machine learning to the ends outlined in the foregoing paragraph furthers the centrality of data hungry social media platforms to the distribution of all sorts of economic and social opportunities and scarce public resources. at every scale, machine-learning-powered corporations are becoming indispensable mediators of relations between the governing and the governed (a transition process sharply accelerated by the covid-19 pandemic). this invests them with power of a specific sort: the power of "translating the images and concerns of one world into that of another, and then disciplining or maintaining that translation in order to stabilize a powerful network" and their own influential position within it (star 1990, p. 32) . 
the "powerful network" in question is society, but it is heterogeneous, comprising living and non-living, automated and organic elements: a composite to which we can give the name "society" only with impropriety (that is, without adherence to conventional, anthropocentric understandings of the term). for all practical purposes, much of social life already is digital. this insertion of new translators, or repositioning of old translators, within the circuits of society is an important socio-economic transformation in its own right. and the social consequences of this new "inclusion" are uneven in ways commonly conceived in terms of bias, but not well captured by that term. socially disadvantaged populations are most at risk of being surveilled in this way and profiled into new kinds of "measurable types" (cheney-lippold 2017). in addition, social media user samples are known to be non-representative, which might further unbalance the burden of surveillant attention. (twitter users, for instance, are skewed towards young, urban, minority individuals (murthy et al. 2016 ).) consequently, satisfaction of data hunger and practices of automated meaning accretion may come at the cost of increased social distrust, fostering strategies of posturing, evasion, and resistance among those targeted by such practices. these reactions, in turn, may undermine the capacity of state agents to tap into social data-gathering practices, further compounding existing power and information asymmetries (harkin 2015) . for instance, sarah brayne (2014) finds that government surveillance via social media and other means encourages marginalized communities to engage in "system avoidance," jeopardizing their access to valuable social services in the process. 
finally, people accustomed to being surveilled will not hesitate to instrumentalize social media to reverse-monitor their relationships with surveilling institutions, for instance by taping public interactions with police officers or with social workers and sharing them online (byrne et al. 2019). while this kind of resistance might further drive a wedge between vulnerable populations and those formally in charge of assisting and protecting them, it has also become a powerful aspect of grassroots mobilization in and around machine learning and techno-social approaches to institutional reform (benjamin 2019). in all the foregoing settings, aspirations for greater inclusiveness, timeliness, and accuracy of data representation-upon which machine learning is predicated and which underlie its data hunger-produce newly actionable social divisions. the remainder of this article analyzes some recurrent types of social division that machine learning generates, and types of social action and experience elicited thereby. there is, of course, no society without ordering-and no computing either. social order, like computing order, comes in many shapes and varieties, but generally "the gap between computation and human problem solving may be much smaller than we think" (foster 2018, p. 152). in what follows, we cut through the complexity of this social-computational interface by distinguishing between two main ideal types of classification: ordinal (organized by judgments of positionality, priority, probability, or value along one particular dimension) and nominal (organized by judgments of difference and similarity) (fourcade 2016). social processes of ordinalization in the analog world might include exams, tests, or sports competitions: every level allows one to compete for the next level and be ranked accordingly.
in the digital world, ordinal scoring might take the form of predictive analytics-which, in the case of social media, typically means the algorithmic optimization of online verification and visibility. by contrast, processes of nominalization include, in the analog world, various forms of homophily (the tendency of people to associate with others who are similar to them in various ways) and institutional sorting by category. translated for the digital world, these find an echo in clustering technologies-for instance a recommendation algorithm that works by finding the "nearest neighbors" whose taste is similar to one's own, or one that matches people based on some physical characteristic or career trajectory. the difference between ordinal systems and nominal systems maps well onto the difference between bayesian and analogical approaches to machine learning, to reference pedro domingos's (2015) useful typology. it is, however, only at the output or interface stage that these socially ubiquitous machine learning orderings become accessible to experience. what does it mean, and what does it feel like, to live in a society that is regulated through machine learning systems-or rather, where machine learning systems are interacting productively with social ordering systems of an ordinal and nominal kind? in this section, we identify some new, or newly manifest, drivers of social structure that emerge in machine learning-dominated environments. let us begin with the ordinal effects of these technologies (remembering that machine learning systems comprise human as well as non-human elements). as machine learning systems become more universal, the benefits of inclusion now depend less on access itself, and more on one's performance within each system and according to its rules. for instance, visibility on social media depends on "engagement," or how important each individual is to the activity of the platform. 
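the contrast between ordinal scoring and nominal matching sketched above can be made concrete in a few lines of code. this is a minimal illustration with invented users and an invented "engagement" score; it reflects no platform's actual algorithm:

```python
# Minimal illustration of ordinal vs. nominal ordering (hypothetical data).
# Ordinal: rank users along one dimension (an "engagement" score).
# Nominal: group users by similarity (nearest neighbors in a taste space).
import math

users = {
    "ana":   {"score": 93, "tastes": (0.9, 0.1, 0.3)},
    "ben":   {"score": 41, "tastes": (0.8, 0.2, 0.4)},
    "chloe": {"score": 77, "tastes": (0.1, 0.9, 0.7)},
    "dev":   {"score": 58, "tastes": (0.2, 0.8, 0.6)},
}

# Ordinal ordering: a single ranked hierarchy, highest score first.
ranking = sorted(users, key=lambda u: users[u]["score"], reverse=True)

def distance(a, b):
    """Euclidean distance between two taste vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(name):
    """Nominal ordering: the most similar other user, not the 'best' one."""
    others = (u for u in users if u != name)
    return min(others, key=lambda u: distance(users[name]["tastes"], users[u]["tastes"]))

print(ranking)                  # ordinal: ana first, ben last
print(nearest_neighbor("ana"))  # nominal: ben, despite his low rank
```

the same data here sustain two different orderings: a hierarchy (who ranks above whom) and a similarity grouping (who belongs with whom), mirroring the ordinal/nominal distinction.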
if one does not post frequently and consistently, comment or message others on facebook or instagram, or if others do not interact with one's posts, one's visibility to them diminishes quickly. if one is not active on the dating app tinder, one cannot expect one's profile to be shown to prospective suitors. similarly, uber drivers and riders rank one another on punctuality, friendliness, and the like, but uber (the company) ranks both drivers and riders on their behavior within the system, from canceling too many rides to failing to provide feedback. uber egypt states on its website: "the rating system is designed to give mutual feedback. if you never rate your drivers, you may see your own rating fall." even for those willing to incur the social costs of disengagement, opting out of machine learning may not be an option. fail to respond to someone's tag, to like their photo, or otherwise to maintain data productivity, and one might be dropped from their network, consciously or unconsciously-a dangerous proposition in a world where self-worth has become closely associated with measures of network centrality or social influence. as bucher has observed, "abstaining from using a digital device for one week does not result in disconnection, or less data production, but more digital data points … to an algorithm, … absence provides important pieces of information" (bucher 2020, p. 2). engagement can also be forced on non-participants by the actions of other users-through tagging, rating, commenting, and endorsing, for instance (casemajor et al. 2015). note that none of this is a scandal or a gross misuse of the technology. on the contrary, this is what any system looking for efficiency and relevance is bound to look like. but any ordering system that acts on people will generate social learning, including action directed at itself in return.
engagement, to feed data hunger and enable the accretion of "meaningful" data from noise, is not neutral, socially or psychologically. the constant monitoring and management of one's social connections, interactions, and interpellations places a nontrivial burden on one's life. the first strategy of engagement is simply massive time investment, to manage the seemingly ever-growing myriad of online relationships (boyd 2015). to help with the process, social media platforms now bombard their users constantly with notifications, making it difficult to stay away and orienting users' behavior toward mindless and unproductive "grinding" (for instance, repetitively "liking" every post in their feed). but even this intensive "nudging" is often not enough. otherwise, how can we explain the fact that a whole industry of social media derivatives has popped up, to help people optimize their behavior vis-à-vis the algorithm, manage their following, and gain an edge so that they can climb the priority order over other, less savvy users? now users need to manage two systems (if not more): the primary one and the (often multiple) analytics apps that help improve and adjust their conduct in it. in these ways, interaction with machine learning systems tends to encourage continuous effort towards ordinal self-optimization. however, efforts of ordinal optimization, too, may soon become useless: as marilyn strathern (citing british economist charles goodhart) put it, "when a measure becomes a target, it ceases to be a good measure" (strathern 1997, p. 308). machine learning systems do not reward time spent on engagement without regard to the impact of that engagement across the network as a whole. now, in desperation, those with the disposable income to do so may turn to money as the next saving grace to satisfy the imperative to produce "good" data at volume and without interruption, and reap social rewards for doing so.
the demand for maximizing one's data productivity and machine learning measurability is there, so the market is happy to oblige. with a monthly subscription to a social media platform, or even a social media marketing service, users can render themselves more visible. this possibility, and the payoffs of visibility, are learned socially, both through the observation and mimicry of models (influencers, for instance) or through explicit instruction (from the numerous online and offline guides to maximizing "personal brand"). one can buy oneself instagram or twitter followers. social media scheduling tools, such as tweetdeck and post planner, help one to plan ahead to try to maximize engagement with one's postings, including by strategically managing their release across time zones. a paying account on linkedin dramatically improves a user's chance of being seen by other users. the same is true of tinder. if a user cannot afford the premium subscription, the site still offers them one-off "boosts" for $1.99 that will send their profile near the top of their potential matches' swiping queue for 30 min. finally, wealthier users can completely outsource the process of online profile management to someone else (perhaps recruiting a freelance social media manager through an online platform like upwork, the interface of which exhibits ordinal features like client ratings and job success scores). in all the foregoing ways, the inclusionary promise of machine learning has shifted toward more familiar sociological terrain, where money and other vectors of domination determine outcomes. in addition to economic capital, distributions of social and cultural capital, as well as traditional ascriptive characteristics, such as race or gender, play an outsized role in determining likeability and other outcomes of socially learned modes of engagement with machine learning systems. 
for instance, experiments with mechanical turkers have shown that being attractive increases the likelihood of appearing trustworthy on twitter, but being black has a countervailing negative effect (groggel et al. 2019). in another example, empirical studies of social media use among those bilingual in hindi and english have observed that positive modes of social media engagement tend to be expressed in english, with negative emotions and profanity more commonly voiced in hindi. one speculative explanation for this is that english is the language of "aspiration" in india or offers greater prospects for accumulating social and cultural capital on social media than hindi (rudra et al. 2016). in short, well-established off-platform distinctions and social hierarchies shape the extent to which on-platform identities and forms of materialized labor will be defined as valuable and value-generating in the field of social media. in summary, ordinality is a necessary feature of all online socio-technical systems, and it demands a relentless catering to one's digital doppelgängers' interactions with others and with algorithms. to be sure, design features tend to make systems addictive and feed this sentiment of oppression (boyd 2015). what really fuels both, however, is the work of social ordering and the generation of ordinal salience by the algorithm. in the social world, any type of scoring, whether implicit or explicit, produces tremendous amounts of status anxiety and often leads to productive resources (time and money) being diverted in an effort to better one's odds (espeland and sauder 2016; mau 2019). 4 those who are short on both presumably fare worse, not only because that makes them less desirable in the real world, but also because they cannot afford the effort and expense needed to overcome their disadvantage in the online world. the very act of ranking thus both recycles old forms of social inequality and creates new categories of undesirables.
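one canonical mechanism for this kind of ranking is pairwise rating of the elo family, a version of which tinder reportedly once used to score desirability. a minimal sketch of the standard elo update rule follows; the ratings and k-factor are illustrative, not any platform's actual parameters:

```python
# Sketch of an Elo-style score (illustrative parameters only; this is the
# standard chess Elo update rule, not any platform's actual formula).
def expected(r_a, r_b):
    """Probability that A 'wins' the pairwise comparison, given current ratings."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, a_won, k=32):
    """Return both users' new ratings after one pairwise outcome (e.g., a swipe)."""
    e_a = expected(r_a, r_b)
    new_a = r_a + k * ((1 if a_won else 0) - e_a)
    new_b = r_b + k * ((0 if a_won else 1) - (1 - e_a))
    return new_a, new_b

# An upset (low-rated profile preferred over a high-rated one) moves both
# scores a lot; an expected outcome barely moves them.
print(elo_update(1400, 1600, a_won=True))   # upset: large gain for A
print(elo_update(1600, 1400, a_won=True))   # expected result: small gain for A
```

note that the update is zero-sum: every gain in rank comes at another user's expense, a formal expression of the competitive, positional character of ordinal ordering.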
as every teenager knows, those who have a high ratio of following to followers exhibit low social status or look "desperate." in this light, jeff bezos may be the perfect illustration of intertwining between asymmetries of real world and virtual world power: the founder and ceo of amazon and currently the richest man in the world has 1.4 million followers on twitter, but follows only one person: his ex-wife. ordinalization has implications not just for hierarchical positioning, but also for belonging-an important dimension of all social systems (simmel 1910) . ordinal stigma (the shame of being perceived as inferior) often translates into nominal stigma, or the shame of non-belonging. not obtaining recognition (in the form of "likes" or "followers"), in return for one's appreciation of other people, can be a painful experience, all the more since it is public. concern to lessen the sting of this kind of algorithmic cruelty is indeed why, presumably, tinder has moved from a simple elo or desirability score (which depends on who has swiped to indicate liking for the person in question, and their own scores, an ordinal measure) to a system that relies more heavily on type matching (a nominal logic), where people are connected based on taste similarity as expressed through swiping, sound, and image features (carman 2019) . in addition to employing machine learning to rank users, most social media platforms also use forms of clustering and type matching, which allow them to group users according to some underlying similarity (analogical machine learning in domingos's terms). this kind of computing is just as hungry for data as those we discuss above, but its social consequences are different. now the aim is trying to figure a person out or at least to amplify and reinforce a version of that person that appears in some confluence of data exhaust within the system in question. 
that is, in part, the aim of the algorithm (or rather, of the socio-technical system from which the algorithm emanates) behind facebook's news feed (cooper 2020). typically, the more data one feeds the algorithm, the better its prediction, the more focused the offering, and the more homogeneous the network of associations forged through receipt and onward sharing of similar offerings. homogeneous networks may, in turn, nourish better-and more saleable-machine learning programs. the more predictable one is, the better the chances that one will be seen-and engaged-by relevant audiences. being inconsistent or too frequently turning against type in data-generative behaviors can make it harder for a machine learning system to place and connect a person associatively. in both offline and online social worlds (not that the two can easily be disentangled), deviations from those expectations that data correlations tend to yield are often harshly punished by verbal abuse, dis-association, or both. experiences of being so punished, alongside experiences of being rewarded by a machine learning interface for having found a comfortable group (or a group within which one has strong correlations), can lead to some form of social closure, a desire to "play to type." as one heavy social media user told us, "you want to mimic the behavior [and the style] of the people who are worthy of your likes" in the hope that they will like you in return. that's why social media have been variously accused of generating "online echo chambers" and "filter bubbles," and of fueling polarization (e.g., pariser 2011). on the other hand, being visible to the wrong group is often a recipe for being ostracized, "woke-shamed," "called-out," or even "canceled" (yar and bromwich 2019). in these and other ways, implementations of machine learning in social media complement and reinforce certain predilections widely learned socially.
in many physical, familial, political, legal, cultural, and institutional environments, people learn socially to feel suspicious of those they experience as unfamiliar or fundamentally different from themselves. there is an extensive body of scholarly work investigating social rules and procedures through which people learn to recognize, deal with, and distance themselves from bodies that they read as strange and ultimately align themselves with and against pre-existing nominal social groupings and identities (ahmed 2013; goffman 1963) . this is vital to the operation of the genre of algorithm known as a recommendation algorithm, a feature of all social media platforms. on facebook, such an algorithm generates a list of "people you may know" and on twitter, a "who to follow" list. recommendation algorithms derive value from this social learning of homophily (mcpherson et al. 2001) . for one, it makes reactions to automated recommendations more predictable. recommendation algorithms also reinforce this social learning by minimizing social media encounters with identities likely to be read as strange or nonassimilable, which in turn improves the likelihood of their recommendations being actioned. accordingly, it has been observed that the profile pictures of accounts recommended on tiktok tend to exhibit similarities-physical and racial-to the profile image of the initial account holder to whom those recommendations are presented (heilweil 2020) . in that sense, part of what digital technologies do is organize the online migration of existing offline associations. but it would be an error to think that machine learning only reinforces patterns that exist otherwise in the social world. first, growing awareness that extreme type consistency may lead to online boredom, claustrophobia, and insularity (crawford 2009 ) has led platforms to experiment with and implement various kinds of exploratory features. 
second, people willfully sort themselves online in all sorts of non-overlapping ways: through twitter hashtags, group signups, click and purchasing behavior, social networks, and much more. the abundance of data, which is a product of the sheer compulsion that people feel to self-index and classify others (harcourt 2015; brubaker 2020) , might be repurposed to revisit common off-line classifications. categories like marriage or citizenship can now be algorithmically parsed and tested in ways that wield power over people. for instance, advertisers' appetite for information about major life events has spurred the application of predictive analytics to personal relationships. speech recognition, browsing patterns, and email and text messages can be mined for information about, for instance, the likelihood of relationships enduring or breaking up (dickson 2019) . similarly, the us national security agency measures people's national allegiance from how they search on the internet, redefining rights in the process (cheney-lippold 2016). even age-virtual rather than chronological-can be calculated according to standards of mental and physical fitness and vary widely depending on daily performance (cheney-lippold 2017, p. 97). quantitatively measured identities-algorithmic gender, ethnicity, or sexuality-do not have to correspond to discrete nominal types anymore. they can be fully ordinalized along a continuum of intensity (fourcade 2016) . the question now is: how much of a us citizen are you, really? how latinx? how gay? 5 in a machine learning world, where each individual can be represented as a bundle of vectors, everyone is ultimately a unique combination, a category of one, however "precisely inaccurate" that category's digital content may be (mcfarland and mcfarland 2015) . 
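the ordinalization of nominal identities described above-"how much of a us citizen are you, really?"-can be sketched as a model that outputs a continuous degree of category membership rather than a discrete label. the features and weights below are entirely hypothetical, for illustration only:

```python
# Sketch of "ordinalizing" a nominal category: instead of a yes/no label,
# the model emits a continuous degree of membership. All features and
# weights are hypothetical, chosen for illustration only.
import math

def sigmoid(z):
    """Squash a raw score into the 0..1 interval."""
    return 1 / (1 + math.exp(-z))

# Hypothetical behavioral features and learned weights for one
# algorithmically measured "identity" score.
WEIGHTS = {"search_pattern": 1.8, "language_mix": 0.9, "network_overlap": 1.4}
BIAS = -2.0

def membership_score(features):
    """Map behavioral features to a 0..1 'degree of membership'."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return sigmoid(z)

alice = {"search_pattern": 0.9, "language_mix": 0.8, "network_overlap": 0.7}
bob   = {"search_pattern": 0.2, "language_mix": 0.1, "network_overlap": 0.3}

# The nominal question "is X a member?" becomes the ordinal question
# "how much of a member is X?"
print(round(membership_score(alice), 2))
print(round(membership_score(bob), 2))
```

a thresholded version of this score would reproduce the old nominal category; left continuous, it ranks people by intensity of membership, fusing nominal and ordinal logics.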
changes in market research from the 1970s to the 1990s, aimed at tracking consumer mobility and aspiration through attention to "psychographic variables," constitute a pre-history, of sorts, for contemporary machine learning practices in commercial settings (arvidsson 2004; gandy 1993; fourcade and healy 2017; lauer 2017). however, the volume and variety of variables now digitally discernible mean that the latter have outstripped the former exponentially. machine learning techniques have the potential to reveal unlikely associations, no matter how small, that may have been invisible, or muted, in the physically constraining geography of the offline world. repurposed for intervention, disparate data can be assembled to form new, meaningful types and social entities. paraphrasing donald mackenzie (2006), machine learning is an "engine, not a camera." christopher wylie, a former lead scientist at the defunct firm cambridge analytica-which famously matched fraudulently obtained facebook data with consumer data bought from us data brokers and weaponized them in the context of the 2016 us presidential election-recalls the experience of searching for-and discovering-incongruous social universes: "[we] spent hours exploring random and weird combinations of attributes.… one day we found ourselves wondering whether there were donors to anti-gay churches who also shopped at organic food stores. we did a search of the consumer data sets we had acquired for the pilot and i found a handful of people whose data showed that they did both. i instantly wanted to meet one of these mythical creatures." after identifying a potential target in fairfax county, he discovered a real person who wore yoga pants, drank kombucha, and held fire-and-brimstone views on religion and sexuality. "how the hell would a pollster classify this woman?"
only with the benefit of machine learning-and associated predictive analytics-could wylie and his colleagues claim capacity to microtarget such anomalous, alloyed types, and monetize that capacity (wylie 2019, pp. 72-74). to summarize, optimization makes social hierarchies, including new ones, and pattern recognition makes measurable types and social groupings, including new ones. in practice, ordinality and nominality often work in concert, both in the offline and in the online worlds (fourcade 2016). as we have seen, old categories (e.g., race and gender) may reassert themselves through new, machine-learned hierarchies, and new, machine-learned categories may gain purchase in all sorts of offline hierarchies (micheli et al. 2018; madden et al. 2017). this is why people strive to raise their digital profiles and to belong to those categories that are most valued there (for instance "verified" badges or recognition as a social media "influencer"). conversely, pattern-matching can be a strategy of optimization, too: people will carefully manage their affiliations, for instance, so as to raise their score-aligning themselves with the visible and disassociating themselves from the underperforming. we examine these complex interconnections below and discuss the dispositions and sentiments that they foster and nourish. it should be clear by now that, paraphrasing latour (2013, p. 307), we can expect little from the "social explanation" of machine learning; machine learning is "its own explanation." the social does not lie "behind" it, any more than machine learning algorithms lie "behind" contemporary social life. social relations fostered by the automated instantiation of stratification and association-including in social media-are diverse, algorithmic predictability notwithstanding. also, they are continually shifting and unfolding. just as latour (2013, p.
221) reminds us not to confuse technology with the objects it leaves in its wake, it is important not to presume the "social" of social media to be fixed by its automated operations. we can, nevertheless, observe certain modes of social relation and patterns of experience that tend to be engineered into the ordinal and nominal orders that machine learning (re)produces. in this section, we specify some of these modes of relation, before showing how machine learning can both reify and ramify them. our argument here is with accounts of machine learning that envisage social and political stakes and conflicts as exogenous to the practice-considerations to be addressed through ex ante ethics-by-design initiatives or ex post audits or certifications-rather than fundamental to machine learning structures and operations. machine learning is social learning, as we highlighted above. in this section, we examine further the kinds of sociality that machine learning makes-specifically those of competitive struggle and dependency-before turning to prospects for their change. social scientists' accounts of modes of sociality online are often rendered in terms of the antagonism between competition and cooperation immanent in capitalism (e.g., fuchs 2007). this is not without justification. after all, social media platforms are sites of social struggle, where people seek recognition: to be seen, first and foremost, but also to see-to be a voyeur of themselves and of others (harcourt 2015; brubaker 2020). in that sense, platforms may be likened to fields in the bourdieusian sense, where people who invest in platform-specific stakes and rules of the game 6 are best positioned to accumulate platform-specific forms of capital (e.g., likes, followers, views, retweets, etc.) (levina and arriaga 2014).
some of this capital may transfer to other platforms through built-in technological bridges (e.g., between facebook and instagram), or undergo a process of "conversion" when made efficacious and profitable in other fields (bourdieu 2011; fourcade and healy 2017). for instance, as social status built online becomes a path to economic accumulation in its own right (by allowing payment in the form of advertising, sponsorships, or fans' gifts), new career aspirations are attached to social media platforms. according to a recent and well-publicized survey, "vlogger/youtuber" has replaced "astronaut" as the most enviable job for american and british children (berger 2019). in a more mundane manner, college admissions offices or prospective employers increasingly expect one's presentation of self to include the careful management of one's online personality-often referred to as one's "brand" (e.g., sweetwood 2017). similarly, private services will aggregate and score any potentially relevant information (and highlight "red flags") about individuals across platforms and throughout the web, for a fee. in this real-life competition, digitally produced ordinal positions (e.g., popularity, visibility, influence, social network location) and nominal associations (e.g., matches to advertised products, educational institutions, jobs) may be relevant. machine learning algorithms within social media both depend on and reinforce competitive striving within ordinal registers of the kind highlighted above-or, in bourdieu's terms, competitive struggles over field-specific forms of capital. as georg simmel observed, the practice of competing socializes people to compete; it "compels the competitor" (simmel 2008 [1903]). socially learned habits of competition are essential to maintain data-productive engagement with social media platforms.
for instance, empirical studies suggest that motives for "friending" and following others on social media include upward and downward social comparison (ouwerkerk and johnson 2016; vogel et al. 2014). social media platforms' interfaces then reinforce these social habits of comparison by making visible and comparable public tallies of the affirmative attention that particular profiles and posts have garnered: "[b]eing social in social media means accumulating accolades: likes, comments, and above all, friends or followers" (gehl 2015, p. 7). in this competitive "[l]ike economy," "user interactions are instantly transformed into comparable forms of data and presented to other users in a way that generates more traffic and engagement" (gerlitz and helmond 2013, p. 1349)-engagement from which algorithms can continuously learn in order to enhance their own predictive capacity and its monetization through sales of advertising. at the same time, the distributed structure of social media (that is, its multinodal and cumulative composition) also fosters forms of cooperation, gift exchange, redistribution, and reciprocity. redistributive behavior on social media platforms manifests primarily in a philanthropic mode rather than in the equity-promoting mode characteristic of, for instance, progressive taxation. 7 examples include practices like the #followfriday or #ff hashtag on twitter, a spontaneous form of redistributive behavior that emerged in 2009 whereby "micro-influencers" started actively encouraging their own followers to follow others. 8 insofar as those so recommended are themselves able to monetize their growing follower base through product endorsement and content creation for advertisers, this redistribution of social capital serves, at least potentially, as a redistribution of economic capital.
even so, to the extent that purportedly "free" gifts, in the digital economy and elsewhere, tend to be reciprocated (fourcade and kluttz 2020), such generosity might amount to little more than an effective strategy of burnishing one's social media "brand," enlarging one's follower base, and thereby increasing one's store of accumulated social (and potentially economic) capital. 9 far from being antithetical to competitive relations on social media, redistributive practices in a gift-giving mode often complement them (mauss 1990). social media cooperation can also be explicitly anti-social, even violent (e.g., patton et al. 2019). in these and other ways, digitized sociality is often at once competitive and cooperative, connective and divisive (zukin and papadantonakis 2017). whether it is enacted in competitive, redistributive or other modes, sociality on social media is nonetheless emergent and dynamic. no wonder that bruno latour was the social theorist of choice when we started this investigation. but-as latour (2012) himself pointed out-gabriel tarde might have been a better choice. what makes social forms cohere are behaviors of imitation, counter-imitation, and influence (tarde 1903).
7 an exception to this observation would be social media campaigns directed at equitable goals, such as campaigns to increase the prominence and influence of previously under-represented groups-the womenalsoknowstuff and pocalsoknowstuff twitter handles, hashtags, and feeds, for example.
8 recommendation in this mode has been shown to increase recommended users' chance of being followed by a factor of roughly two or three compared to a recommendation-free scenario (garcia gavilanes et al. 2013).
9 for instance, lewis (2018, p. 5) reports that "how-to manuals for building influence on youtube often list collaborations as one of the most effective strategies."
social media, powered by trends and virality, mimicry and applause, parody and mockery, mindless "grinding" and tagging, looks quintessentially tardian. even so, social media does not simply transfer online the practices of imitation that occur naturally offline. the properties of machine learning highlighted above-cybernetic feedback; data hunger; accretive meaning-making; ordinal and nominal ordering-lend social media platforms and interfaces a distinctive, compulsive, and calculating quality-engineering a relentlessly "participatory subjectivity" (bucher 2020, p. 88; boyd 2015). how one feels and how one acts when on social media is not just an effect of subjective perceptions and predispositions. it is also an effect of the software and hardware that mediate the imitative (or counter-imitative) process itself-and of the economic rationale behind their implementation. we cannot understand the structural features and phenomenological nature of digital technologies in general, and of social media in particular, if we do not understand the purposes for which they were designed. the simple answer, of course, is that data hunger and meaning accretion are essential to the generation of profit (zuboff 2019), whether that profit accrues from a saleable power to target advertising, from commercializable developments in artificial intelligence, or by other comparable means. strategies for producing continuous and usable data flows to profit-making ends vary, but tend to leverage precisely the social-machine learning interface that we highlighted above. social media interfaces tend to exhibit design features at both the back- and front-end that support user dependency and enable its monetization.
for example, the "infinite scroll," which allows users to swipe down a page endlessly (without clicking or refreshing), rapidly became a staple of social media apps after its invention in 2006, giving them an almost hypnotic feel and maximizing the "time on device" and hence users' availability to advertisers (andersson 2018). similarly, youtube's recommendation algorithm was famously optimized to maximize users' time on site, so as to serve them more advertisements (levin 2017; roose 2019). social media platforms also employ psycho-social strategies to this end, including campaigns to draw people in by drumming up reciprocity and participation-the notifications, the singling out of trends, the introduction of "challenges"-and, more generally, the formation of habits through gamification. prominent critics of social media, such as tristan harris (originally from google) and sandy parakilas (originally from facebook), have denounced apps that look like "slot machines" and use a wide range of intermittent rewards to keep users hooked and in the (instagram, tiktok, facebook, …) zone, addicted "by design" (schüll 2012; fourcade 2017). importantly, this dependency has broader social ramifications than may be captured by a focus on individual unfreedom. worries about the "psychic numbing" of the liberal subject (zuboff 2019), or the demise of the sovereign consumer, do not preoccupy us so much as the ongoing immiseration of the many who "toil on the invisible margins of the social factory" (morozov 2019) or whose data traces make them the targets of particularly punitive extractive processes. dependencies engineered into social media interfaces help, in combination with a range of other structural factors, to sustain broader economic dependencies, the burdens and benefits of which land very differently across the globe (see, e.g., taylor and broeders 2015). in this light, the question of how amenable these dynamics may be to social change becomes salient for many.
recent advances in digital technology are often characterized as revolutionary. however, addictive though it may be, the combined effect of machine learning and social learning may be as conducive to social inertia as to social change. the data hunger of both social learning and machine learning, together with their dependence on data accretion to make meaning, encourages the replication of interface features and usage practices known to foster continuous, data-productive engagement. significant shifts in interface design-and in the social learning that has accreted around use of a particular interface-risk negatively impacting data-productive engagement. one study of users' reactions to changes in the facebook timeline suggested that "major interface changes induce psychological stress as well as technology-related stress" (wisniewski et al. 2014). in recognition of these sensitivities, those responsible for social media platforms' interfaces tend to approach their redesign incrementally, so as to promote continuity rather than discontinuity in user behavior. the emphasis placed on continuity in social media platform design may foster tentativeness in other respects as well, as we discuss in the next section. at the same time, social learning and machine learning, in combination, are not necessarily inimical to social change. machine learning's associative design and propensity to virality have the potential to loosen or unsettle social orders rapidly. and much as the built environment of the new urban economy can be structured to foster otherwise unlikely encounters (hanson and hillier 1987; zukin 2020), so digital space can be structured to similar effect.
for example, the popular chinese social media platform wechat has three features, enabled by machine learning, that encourage open-ended, opportunistic interactions between random users-shake, drift bottle, and people nearby-albeit, in the case of people nearby, random users within one's immediate geographic vicinity. (these are distinct from the more narrow, instrumental range of encounters among strangers occasioned by platforms like tinder, the sexual tenor of which is clearly established in advance, with machine learning parameters set accordingly.) qualitative investigation of wechat use and its impact on chinese social practices has suggested that wechat challenges some existing social practices, while reinforcing others. it may also foster the establishment of new social practices, some defiant of prevailing social order. for instance, people report interacting with strangers via wechat in ways they normally would not, including shifting to horizontally structured interactions atypical of chinese social structures offline (wang et al. 2016). this is not necessarily unique to wechat. the kinds of ruptures and reorderings engineered through machine learning do not, however, create equal opportunities for value creation and accumulation, any more than they are inherently liberating or democratizing. social media channels have been shown to serve autocratic goals of "regime entrenchment" quite effectively (gunitsky 2015). 10 likewise, they serve economic goals of data accumulation and concentration (zuboff 2019).
machine-learned sociality lives on corporate servers and must be meticulously "programmed" (bucher 2018) to meet specific economic objectives. as such, it is both an extremely lucrative proposition for some and (we have seen) a socially dangerous one for many. it favors certain companies, their shareholders and executives, while compounding conditions of social dependency and economic precarity for most other people. finally, with its content sanitized by underground armies of ghost workers (gray and suri 2019), it is artificial in both a technical and a literal sense-"artificially artificial," in the words of jeff bezos (casilli and posada 2019). we have already suggested that machine-learned sociality, as it manifests on social media, tends to be competitive and individualizing (in its ordinal dimension) and algorithmic and emergent (in its nominal dimension). although resistance to algorithms is growing, those who are classified in ways they find detrimental (on either dimension) may be more likely to try to work on themselves or navigate algorithmic workarounds than to contest the classificatory instrument itself (ziewitz 2019). furthermore, we know that people who work under distributed, algorithmically managed conditions (e.g., mechanical turk workers, uber drivers) find it difficult to communicate amongst themselves and organize (irani and silberman 2013; lehdonvirta 2016; dubal 2017).
10 with regard to wechat in china and vkontakte in russia, as well as to government initiatives in egypt, the ukraine, and elsewhere, seva gunitsky (2015) highlights a number of reasons why, and means by which, nondemocratic regimes have proactively sought (with mixed success) to co-opt social media, rather than simply trying to suppress it, in order to try to ensure central government regimes' durability.
these features of the growing entanglement of social and machine learning may imply dire prospects for collective action-and, beyond it, for the achievement of any sort of broad-based, solidaristic project. in this section, we tentatively review possibilities for solidarity and mobilization as they present themselves in the field of social media. machine learning systems' capacity to ingest and represent immense quantities of data does increase the chances that those with common experiences will find one another, at least insofar as those experiences are shared online. machine-learned types thereby become potentially important determinants of solidarity, displacing or supplementing the traditional forces of geography, ascribed identities, and voluntary association. those dimensions of social life that social media algorithms have determined people really care about often help give rise to, or supercharge, amorphous but effective forms of offline action, if only because the broadcasting costs are close to zero. examples include the viral amplification of videos and messages, the spontaneity of flash mobs (molnár 2014), the leaderless, networked protests of the arab spring (tufekci 2016) and of the french gilets jaunes (haynes 2019), and the #metoo movement's reliance on public disclosures on social media platforms. nonetheless, the thinness, fleeting character, and relative randomness of the affiliations summoned in these ways (based on segmented versions of the self, which may or may not overlap) might make social recognition and commonality of purpose difficult to sustain in the long run. more significant, perhaps, is the emergence of modes of collective action that are specifically designed not only to fit the online medium, but also to capitalize on its technical features. many of these strategies were first implemented to stigmatize or sow division, although nothing fates them to that use alone.
examples include the anti-semitic (((echo))) tagging on twitter-originally devised to facilitate trolling by online mobs (weisman 2018) but later repurposed by non-jews as an expression of solidarity; the in-the-wild training of a microsoft chatter bot, literally "taught" by well-organized users to tweet inflammatory comments; the artificial manipulation of conversations and trends through robotic accounts; and the effective delegation, by the trump 2020 campaign, of the management of its ad-buying activities to facebook's algorithms, optimized on the likelihood that users will take certain campaign-relevant actions-"signing up for a rally, buying a hat, giving up a phone number" (bogost and madrigal 2020). 11 the exploitation of algorithms for divisive purposes often spurs its own reactions, from organized counter-mobilizations to institutional interventions by the platforms themselves. during the 2020 black lives matter protests, for instance, k-pop fans flooded right-wing hashtags on instagram and twitter with fancams and memes in order to overwhelm racist messaging. even so, the work of "civilizing" the social media public sphere is often left to algorithms, supported by human decision-makers working through rules and protocols (and replacing them in especially sensitive cases). social media companies ban millions of accounts every month for inappropriate language or astroturfing (coordinated operations on social media that masquerade as a grassroots movement): algorithms have been trained to detect and exclude certain types of coalitions on the basis of a combination of social structure and content. in 2020, the british far-right movement "britain first" moved to tiktok after being expelled from facebook, twitter, instagram, and youtube-and then over to vkontakte, or vk, a russian platform, after being banned from tiktok (usa news 2020).
chastised in the offline world for stirring discord and hate, the movement was relegated, with embarrassment, to the margins of the very economic engines that had given it a megaphone. the episode goes to show that there is nothing inherently inclusive in the kind of group solidarity that machine learning enables, and thus it has to be constantly put to the (machine learning) test. in the end, platforms' ideal of collective action may resemble the tardean, imitative but atomized crowd, nimble but lacking in endurance and capacity (tufekci 2016). mimetic expressions of solidarity, such as photo filters (e.g., rainbow), the "blacking out" of one's newsfeed, or the much-bemoaned superficiality of "clicktivism," may be effective at raising consciousness or the profile of an issue, but they may be insufficient to support broader-based social and political transformations. in fact, social media might actually crowd out other solidaristic institutions by also serving as an (often feeble) palliative for their failures. for example, crowdsourced campaigns, now commonly used to finance healthcare costs, loss of employment, or educational expenses, perform a privatized solidarity that is a far cry from the universal logic of public welfare institutions. up to this point, our emphasis has been on the kinds of sociality that machine learning implementations tend to engender in the social media field, in both vertical (ordinal) and horizontal (nominal) configurations. we have, in a sense, been "reassembling the social" afresh, with an eye, especially, to its computational components and chains of reference (latour 2005). throughout, we have stressed, nonetheless, that machine learning and other applications of artificial intelligence must be understood as forces internal to social life-both subject to and integral to its contingent properties-not forces external to it or determinative of it.
accordingly, it is just as important to engage in efforts to reassemble "the machine"-that is, to revisit and put once more into contention the associative preconditions for machine learning taking the form that it currently does, in social media platforms for instance. and if we seek to reassemble the machine, paraphrasing latour (2005, p. 233), "it's necessary, aside from the circulation and formatting of traditionally conceived [socio-technical] ties, to detect other circulating entities." so what could be some "other circulating entities" within the socio-technical complex of machine learning, or how could we envisage its elements circulating, and associating, otherwise? on some level, our analysis suggests that the world has changed very little. like every society, machine-learned society is powered by two fundamental, sometimes contradictory forces: stratification and association, vertical and horizontal difference. to be sure, preexisting social divisions and inequalities are still very much part of its operations. but the forces of ordinality and nominality have also been materialized and formatted in new ways, of which for-profit social media offer a particularly stark illustration. the machine-learnable manifestations of these forces in social media: these are among the "other circulating entities" now traceable. recursive dynamics between social and machine learning arise where social structures, economic relations, and computational systems intersect.
central to these dynamics in the social media field are the development of a searching disposition to match the searchability of the environment, the learnability of the self through quantified measurement, the role of scores in the processing of social positions and hierarchies, the decategorization and recategorization of associational identities, automated feedback that fosters compulsive habits and competitive social dispositions, and strategic interactions between users and platforms around the manipulation of algorithms. what, then, of prospects for reassembly of existing configurations? notwithstanding the lofty claims of the it industry, there is nothing inherently democratizing or solidaristic about the kinds of social inclusiveness that machine learning brings about. the effects of individuals' and groups' social lives being rendered algorithmically learnable are ambivalent and uneven. in fact, they may be as divisive and hierarchizing as they may be connective and flattening. moreover, the conditions for entry into struggle in the social media field are set by a remarkably small number of corporate entities and "great men of tech" with global reach and influence (grewal 2008). a level playing field this most definitely is not. rather, it has been carved up and crenellated by those who happen to have accumulated the greatest access to the data processing and storage capacity that machine learning systems require, together with the real property, intellectual property, and personal property rights, and the network of political and regulatory lobbyists, that ensure that exclusivity of access is maintained (cohen 2019). power in this field is, accordingly, unlikely to be reconfigured or redistributed organically, or through generalized exhortation to commit to equity or ethics (many versions of which are self-serving on the part of major players).
instead, political action aimed at building or rebuilding social solidarities across such hierarchies and among such clusters must work with and through them, in ways attentive to the specifics of their instantiation in particular techno-social settings. to open to meaningful political negotiation those allocations and configurations of power that machine learning systems help to inscribe in public and private life-this demands more than encompassing a greater proportion of people within existing practices of ruling and being ruled, and more than tinkering around the edges of existing rules. the greater the change in sociality and social relations-and machine learning is transforming both, as we have recounted-the more arrant and urgent the need for social, political and regulatory action specifically attuned to that change and to the possibility of further changes. social and political action must be organized around the inequalities and nominal embattlements axiomatic to the field of social media, and to all fields shaped in large part by machine learning. and these inequalities and embattlements must be approached not as minor deviations from a prevailing norm of equality (that is, something that can be corrected after the fact or addressed through incremental, technical fixes), but as constitutive of the field itself. this cannot, moreover, be left up to the few whose interests and investments have most shaped the field to date. it is not our aim to set out a program for this here so much as to elucidate some of the social and automated conditions under which such action may be advanced. that, we must recognize, is a task for society, in all its heterogeneity. it is up to society, in other words, to reassemble the machine. 
how the reification of merit breeds inequality: theory and experimental evidence
step-counting in the "health-society"
strange encounters: embodied others in post-coloniality
introduction to machine learning
cloud ethics: algorithms and the attributes of ourselves and others
social media apps are "deliberately" addictive to users
on the 'pre-history of the panoptic sort': mobility in market research
social learning through imitation
race after technology: abolitionist tools for the new jim code
american kids would much rather be youtubers than astronauts. ars technica
the social construction of reality: a treatise in the sociology of knowledge
how facebook works for trump
distinction: a social critique of the judgement of taste
the logic of practice
the field of cultural production
the forms of capital
mining social media data for policing, the ethical way. government technology
surveillance and system avoidance: criminal justice contact and institutional attachment
technologies of crime prediction: the reception of algorithms in policing and criminal courts
dark matters: on the surveillance of blackness
digital hyperconnectivity and the self
if ... then: algorithmic power and politics
nothing to disconnect from? being singular plural in an age of machine learning
a precarious game: the illusion of dream jobs in the video game industry
social media surveillance in social work: practice realities and ethical implications
some elements of a sociology of translation: domestication of the scallops and the fishermen of st brieuc bay
tinder says it no longer uses a "desirability" score to rank people. the verge
non-participation in digital media: toward a framework of mediated political action
the platformization of labor and society
jus algoritmi: how the national security agency remade citizenship
we are data: algorithms and the making of our digital selves
between truth and power: the legal constructions of informational capitalism
how the facebook algorithm works in 2020 and how to work with it
following you: disciplines of listening in social media
can alexa and facebook predict the end of your relationship
the master algorithm: how the quest for the ultimate learning machine will remake our world
alchemy and artificial intelligence. rand corporation
artificial intelligence
the drive to precarity: a political history of work, regulation, & labor advocacy in san francisco's taxi & uber economies
the elementary forms of religious life
the civilizing process: sociogenetic and psychogenetic investigations
engines of anxiety: academic rankings, reputation, and accountability
genesis and development of a scientific fact
culture and computation: steps to a probably approximately correct theory of culture
technologies of the self. lectures at university of vermont
the fly and the cookie: alignment and unhingement in 21st-century capitalism
seeing like a market
a maussian bargain: accumulation by gift in the digital economy
internet and society: social theory in the information age
the panoptic sort: a political economy of personal information
follow my friends this friday! an analysis of human-generated friendship recommendations
the case for alternative social media
the like economy: social buttons and the data-intensive web
custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media
stigma: notes on the management of spoiled identity
the interaction order
the philosophical baby: what children's minds tell us about truth, love, and the meaning of life
ghost work: how to stop silicon valley from building a new underclass
old communication, new literacies: social network sites as social learning resources
network power: the social dynamics of globalization
race and the beauty premium: mechanical turk workers' evaluations of twitter accounts
corrupting the cyber-commons: social media as a tool of autocratic stability
policy paradigms, social learning, and the state: the case of economic policymaking in britain
perceiving persons and groups
the architecture of community: some new proposals on the social consequences of architectural and planning decisions
exposed: desire and disobedience in the digital age
simmel, the police form and the limits of democratic policing
posthuman learning: ai from novice to expert?
gilets jaunes and the two faces of facebook
our weird behavior during the pandemic is messing with ai models
there's something strange about tiktok recommendations
turkopticon: interrupting worker invisibility in amazon mechanical turk
from planning to prototypes: new ways of seeing like a state
the structure of scientific revolutions
the age of spiritual machines: when computers exceed human intelligence
the cybernetic matrix of 'french theory'
digital degradation: stigma management in the internet age
economies of reputation: the case of revenge porn
unequal childhoods: class, race, and family life
the moral dilemmas of a safety belt
the pasteurization of france
reassembling the social: an introduction to actor-network-theory
gabriel tarde and the end of the social
an inquiry into modes of existence: an anthropology of the moderns
creditworthy: a history of consumer surveillance and financial identity in america
algorithms that divide and unite: delocalisation, identity and collective action in 'microwork'
google to hire thousands of moderators after outcry over youtube abuse videos. the guardian
distinction and status production on user-generated content platforms: using bourdieu's theory of cultural production to understand social dynamics in online fields
alternative influence: broadcasting the reactionary right on youtube
an engine, not a camera: how financial models shape markets
privacy, poverty, and big data: a matrix of vulnerabilities for poor americans
impression formation in online peer production: activity traces and personal profiles in github
the eighteenth brumaire of louis bonaparte
the metric society: on the quantification of the social
techniques of the body
the gift: the form and reason for exchange in archaic societies
big data and the danger of being precisely inaccurate
birds of a feather: homophily in social networks
digital footprints: an emerging dimension of digital inequality
social learning and imitation
reframing public space through digital mobilization: flash mobs and contemporary urban youth culture
capitalism's new clothes. the baffler
urban social media demographics: an exploration of twitter use in major american cities
the palgrave handbook of security, risk and intelligence
motives for online friending and following: the dark side of social network site connections
the filter bubble: what the internet is hiding from you
when twitter fingers turn to trigger fingers: a qualitative study of social media-related gang violence
the tacit dimension
displacement as regulation: new regulatory technologies and front-line decision-making in ontario works
the making of a youtube radical
understanding language preference for expression of opinion and sentiment: what do hindi-english speakers do on twitter?
addiction by design: machine gambling in las vegas
data for life: wearable technology and the design of self-care
alfred schutz on phenomenology and social relations
how is society possible?
sociology of competition
power, technology and the phenomenology of conventions: on being allergic to onions
testing and being tested in pandemic times
'improving ratings': audit in the british university system
10 social media tips for students to improve their college admission chances
the laws of imitation
in the name of development: power, profit and the datafication of the global south
life 3.0: being human in the age of artificial intelligence
the invention of the passport: surveillance, citizenship, and the state
analyzing scriptural inference in conservative news practices
policing social media
twitter and tear gas: the power and fragility of networked protest
a brief history of theory and research on impression formation
far-right activists tommy robinson and britain first turn to russia's vk after being banned from tiktok and every big social platform
social comparison, social media, and self-esteem
mind in society: the development of higher psychological processes
towards a reflexive sociology: a workshop with pierre bourdieu. sociological theory
space collapse: reinforcing, reconfiguring and enhancing chinese social practices through wechat. conference on web and social media (icwsm 2016)
(((semitism))): being jewish in america in the age of trump
understanding user adaptation strategies for the launching of facebook timeline
mindf*ck: cambridge analytica and the plot to break america
tales from the teenage cancel culture. the new york times
rethinking gaming: the ethical work of optimization in web search engines
the age of surveillance capitalism
the fight for a human future at the new frontier of power the innovation complex: cities, tech and the new economy hackathons as co-optation ritual: socializing workers and institutionalizing innovation in the publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations as well as numerous articles that explore and theorize national variations in political mores, valuation cultures, economic policy, and economic knowledge. more recently, she has written extensively on the political economy of digitality, looking specifically at the changing nature of inequality and stratification in the digital era since 2015, she has been conducting fieldwork on the role of digital technology and digital data in development, humanitarian aid, and disaster relief-work funded, since 2018, by the australian research council. relevant publications include "global governance through the pairing of list and algorithm" (environment and planning d: society and space 2015); "data, detection, and the redistribution of the sensible acknowledgments we are grateful to kieran healy, etienne ollion, john torpey, wayne wobcke, and sharon zukin for helpful comments and suggestions. we also thank the institute for advanced study for institutional support. an earlier version of this article was presented at the "social and ethical challenges of machine learning" workshop at the institute for advanced study, princeton, november 2019. key: cord-301525-gcls69om authors: van ewijk, bernadette j.; stubbe, astrid; gijsbrechts, els; dekimpe, marnik g. title: online display advertising for cpg brands: (when) does it work? date: 2020-08-18 journal: nan doi: 10.1016/j.ijresmar.2020.08.004 sha: doc_id: 301525 cord_uid: gcls69om abstract this study examines how online display ads, alone or in combination with more conventional media (television and print), can help drive sales in the consumer packaged goods (cpg) sector. 
it also assesses how the combined sales effect of online and offline ads depends on the volatility of their expenditures over time. we explore these relations for 154 brands across 68 dutch cpg product categories. we find that, even though display ads are not effective for the "average" cpg brand, they do have a significant impact for a sizable, and considerably larger than expected by chance, subset of brands. importantly, this impact depends on the type of product. while display ads are found to be ineffective for low-involvement utilitarian products, they can significantly enhance sales for other cpg product types. moreover, the effect depends on whether they are used in combination with other media: while display ads are best used as a stand-alone medium for high-involvement utilitarian products, it is better to combine them with traditional media for hedonic products. finally, the long-term effectiveness of display messages increases significantly when they are spread more evenly in time. advertising spending in online media is on the rise. in 2019, digital ad spending already accounted for 50.3% of total media ad spending, and is projected to grow to 56.1% in 2021 and 62.6% in 2024. 1 next to search ads, the largest portion of online media spending goes to online display advertising. online ads, and display ads in particular, have become especially popular in automotive, financial service and telecom markets. recently, cpg brands have also started to venture into the use of display advertising. display ad spending by cpg companies reached approximately $4 billion in the us in 2016, which corresponded to 68% of the industry's total online spending (emarketer, 2016). in the netherlands, brands at the forefront of this development were, among others, coca-cola, which allocated more than €500,000 to display advertising during 2016 and 2017, and nivea haircare, which stepped up their display-advertising spending to a similar amount in 2017.
still, many cpg brands, including leading brands like ariel (a well-known laundry detergent brand) and warsteiner (a globally distributed beer brand), were more hesitant to jump on the digital bandwagon. more strikingly, others have been reducing their investments in the medium. for instance, in 2017, procter & gamble cut back on its display-ad investments. through display-ad investments, proponents of the medium hope to improve their brands' performance. previous studies have suggested that online advertising may allow brands to widen their reach, generate brand awareness, and act as a purchase reminder (binet & field, 2018) in a more flexible and cost-effective way than conventional media (drèze & hussherr, 2003). however, when it comes to the actual impact of display advertising on brand sales, empirical evidence is still scant. prior meta-analytic evidence on advertising elasticities pertains to offline media only (shapiro et al., 2020; frison et al., 2014; sethuraman et al., 2011), while extant studies on display ads mostly focus on a select set of non-grocery products like apparel (dinner et al., 2014), books (breuer et al., 2011), cars (naik & peters, 2009), or health-care, beauty and non-prescription drugs (manchanda et al., 2006). given that consumers' response to ad messages is shaped by their decision-making process (draganska et al., 2014), which is bound to differ between categories, the question remains whether online-ad investments for brands in cpg categories will translate into higher sales, in the short and/or the long run. in addressing this question, it is important to note that online ads are rarely used in isolation. the combined use of display ads with more traditional media such as print or tv ads could create interaction effects. on the one hand, (positive) synergies may arise (binet & field, 2009, 2018); for example, online banner ads may complement a tv campaign by transmitting its core message to the audience in a different manner.
conversely, the use of multiple media might also lead to negative interactions, due to duplicate reach, incongruence or irritation/overload (burmester et al., 2015; taylor et al., 2013). moreover, to the extent that the impact of advertising expenditures on sales carries over to subsequent periods, such cross-media effects may depend not only on concurrent, but also on past expenditures in other media. even though the notion of sales-advertising interactions between media is well-accepted in the marketing literature (see, e.g., batra & keller, 2016) and empirically supported for traditional media (e.g., kolsarici & vakratsas, 2018; naik & raman, 2003), the dynamic interplay among online and offline media is not yet well documented. the few studies that do consider such interactions again tend to pertain mostly to non-cpg brands, and produce mixed results on the presence and direction of the effects (kolsarici & vakratsas, 2018; naik & peters, 2009; taylor et al., 2013). little is known as to whether and under what circumstances display ads and conventional media like print and tv work synergistically or counteract one another in cpg markets. are those effects more likely to occur among certain media? for certain brands? and: do these effects materialize in all cpg categories alike, or do they depend on product-category characteristics? the current research sets out to address these issues. we study how spending in online media (i.c., display ads) and offline media (i.c., print and tv) drives offline sales in the short and the long run, across a large set of (over 150) brands from a broad (68) range of cpg categories, using data from the dutch market. we assess whether synergistic or antagonistic cross-media effects occur, thereby allowing this interplay to materialize within as well as across periods.
we then document how both the stand-alone and combined sales effect of online and offline ads depend on the product category, and on the volatility of advertising spending in each medium. the effectiveness of (off-line and/or on-line) advertising spending has been studied in two broad research streams. a first set of studies has focused on the idiosyncratic characteristics of different media to identify, in an experimental setting, the underlying reasons/processes why certain media may be more or less appropriate in certain contexts. even though the insights from these studies will be instrumental in our subsequent theorizing, our work is situated more in the second research stream, where secondary data from real-life settings are used to quantify, through econometric techniques, the effectiveness of one or more advertising media. web appendix a positions our study vis-a-vis four sets of studies in that second tradition: (i) large-scale econometric studies on advertising's effectiveness in cpg markets, (ii) empirical studies focusing on the sales effectiveness of display advertising, (iii) empirical studies on the interplay between different media, and (iv) studies on category-related differences in (on- or offline) media effectiveness. for each set, it offers a (by no means exhaustive) list of representative studies. in the subsequent sections, we discuss how we draw on the various research streams, and highlight our contributions relative to these earlier studies. display advertising has already been a topic of interest to several scholars, not only because of its increased usage, but also because the medium, by virtue of its idiosyncratic characteristics, may produce different outcomes than more traditional media.
a number of prior studies have looked at the properties of this medium (relative to offline media) and the response mechanisms that it triggers in an experimental setting (dijkstra et al., 2005; drèze & hussherr, 2003). as these studies indicate, display advertising is a fairly low-cost, high-reach and targetable medium that differs from other media in terms of (i) modality (use of sensory modes, static vs. dynamic), (ii) information quantity, and (iii) pacing, where an important distinction is that between "retrieval" media, for which the consumer determines whether and when to access the information and for how long, and "delivery" media, for which the speed and sequence of information transfer is controlled much more by the sender (van raaij, 1998). display advertisements often use only visual stimuli, unlike tv, which is a multisensory medium, and the ads as such are very low on content, especially compared to print. moreover, display advertising is a retrieval medium that typically has only limited "bandwidth", i.e., it covers only a minor portion of the webpage that the consumer is viewing, leaving the larger part for website content. at the same time, display ads typically allow consumers to control the pacing (time spent looking at the ad) and to collect extra information (by clicking on the ad). these properties lead proponents of display advertising to advocate its effectiveness. the simplicity of the message allows for fluent processing. interested consumers can click on the ads to get more information about the brand if and when they see fit. but consumers who do not process the ad may also be influenced. the ad may leave a memory trace that primes them on subsequent advertising exposures: even if they do not remember having seen the ad, they may find it familiar (drèze & hussherr, 2003), and this familiarity may translate into brand liking (chatterjee, 2012).
as such, display ads could create awareness (hoban & bucklin, 2015), serve as a reminder (manchanda et al., 2006), activate consumers to buy the brand (binet & field, 2018), and even have a brand-building function and improve performance in the long run (draganska et al., 2014). finally, this medium may be able to reach a larger part of the population than traditional (offline) media. hence, adding display ads to the media mix may result in higher sales (abraham, 2008). opponents of the medium, however, emphasize the downsides. because of the low "bandwidth", consumers may overlook, or even avoid looking at, the display ads while focusing on the remaining website content (burke et al., 2005). for lack of attention and information content, consumers may not learn about or elaborate on the properties of the brand, which would make the message effect less enduring (chatterjee, 2012), especially if there is a delay between ad exposure and purchase. tv, a delivery medium that uses multiple sensory modes, is likely to affect even low- or uninvolved consumers (buchholz & smith, 1991). print and display advertising, being retrieval media, allow consumers to more easily skip the message. hence, these media are less apt to influence less involved consumers. especially for low-bandwidth display messages, this ad avoidance or lack of attention may be a problem, display-message processing in low-involvement categories being mostly pre-attentive (dijkstra et al., 2005) and click-through rates very low (drèze & hussherr, 2003). the category's hedonic vs. utilitarian nature, in turn, determines the amount and type of information that consumers respond to, and the way they process this information. hedonic categories call for (audio-)visual (rather than verbal) stimuli, which can convey brand imagery and experiential cues, and are typically processed in a more emotional fashion (del barrio-garcia et al., 2019; macinnis & jaworski, 1989).
whereas print media have limited ability to transmit such affective signals, tv has high communication power and may more readily transfer moods, feelings, and images that facilitate affective responses (chaudhuri & buck, 1995). display advertising can be situated somewhere in-between: it typically has more visual content than print, but is more static than tv and often does not include auditory information. hence, for hedonic products, display ads are anticipated to generate stronger consumer reactions than print, but weaker reactions than tv. for functional, utilitarian products, consumers are better served with verbal information on product attributes (del barrio-garcia et al., 2019; macinnis & jaworski, 1989). whether consumers actually attend to and use that information will, again, depend on their involvement with the product. while they are less inclined to use it (and stick to their routine behaviour) in low-involvement settings, they are likely to process this information analytically for high-involvement categories. because of its transient character, television does not make it easy for consumers to retain and rehearse the presented information. therefore, rather than triggering cognitive processing, tv ads are bound to entail primarily global evaluative consumer responses (pieters & van raaij, 1992). in contrast, print and display advertising allow consumers to process information at their own pace, and to retain the information for later use. even if display ads as such are not informative, they allow involved consumers to request more information. this implies that for high-involvement, utilitarian products, print and display ads may lead to more elaborate, cognitive message processing (dijkstra et al., 2005), and be more impactful than tv. display ads seldom constitute the single medium-of-choice.
as such, online advertising elasticities should not be considered in isolation: if a brand allocates its resources to more than one advertising channel, the effectiveness of each medium may well depend on the other media used (naik & raman, 2003; naik et al., 2007). this interplay may be positive or negative, and it is unclear as of yet which effects to expect (sridhar et al., 2016). synergistic, or positive, cross-media effects arise if the combined effect of two media exceeds the sum of their individual effects (naik, 2007). different media may reach consumers in a different context (time and place) or in different modes (audio, text, visual). as such, they may convey the advertising message in a complementary way, and/or operate on different outcome metrics (batra & keller, 2016; dijkstra et al., 2005). for instance, print advertising appears particularly well-suited to convey detailed brand information, tv advertising may be better at creating awareness, interest, and brand imagery, and display ads may be well-suited to generate or maintain brand salience, and serve as a reminder and activation tool (batra & keller, 2016; pfeiffer & zinnbauer, 2010). transmitting the advertising message using different media may therefore stimulate message processing and memorization throughout the purchase funnel (edell & keller, 1989) while avoiding tedium (chatterjee, 2012), and thus lead to superior brand performance. synergistic effects have initially been documented for offline advertising media. for instance, combining print and tv ads has been shown to boost sales for clothing (naik & raman, 2003). positive cross-effects may also materialize among offline and online media.
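the synergy definition above (the combined effect of two media exceeding the sum of their individual effects) can be illustrated with a small numeric check in a log-linear response model with an interaction term; all coefficient values below are made-up illustrations, not estimates from any study:

```python
def log_sales(tv_stock, disp_stock, beta_tv=0.05, beta_disp=0.02,
              beta_int=0.01, base=10.0):
    """log-linear sales response with a cross-media interaction term.
    all coefficients are made-up illustration values."""
    return (base + beta_tv * tv_stock + beta_disp * disp_stock
            + beta_int * tv_stock * disp_stock)

baseline = log_sales(0.0, 0.0)
tv_lift = log_sales(1.0, 0.0) - baseline      # lift from tv alone
disp_lift = log_sales(0.0, 1.0) - baseline    # lift from display alone
joint_lift = log_sales(1.0, 1.0) - baseline   # lift from using both

# synergy: the joint lift exceeds the sum of the individual lifts
# by exactly the interaction term, beta_int * tv_stock * disp_stock
synergy = joint_lift - (tv_lift + disp_lift)
```

with a negative interaction coefficient, the same arithmetic yields an antagonistic effect instead.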
for instance, in a study involving (four) consumer goods and (two) services, chang and thorson (2004) show that combining tv and display ads lifts the consumer's level of attention, while naik and peters (2009) find that it increases purchase consideration for a car brand. pauwels et al. (2016), in turn, observe synergistic effects on traffic and revenues for (five) services, furniture and apparel brands. print and display ads, too, may complement each other. for example, their joint use may enhance brand recall and attitude, as shown by chatterjee (2012) for credit cards and car rentals, and increase website visits, as found by rosenkrans and myers (2013) for a nonprofit service. conversely, also antagonistic, or negative, cross-media effects may arise. in such a case, the combined effect of two media is lower than the sum of their individual effects: instead of amplifying each other, the media weaken each other (burmester et al., 2015). dinner et al. (2014), for example, found that a high-fashion retailer's traditional advertising reduced its online click-through rates, which could be attributed to an information-substitution effect, in that the former already provided information that one otherwise would try to obtain by click-throughs. in an increasingly digital world, consumers' simultaneous use of several media ("multiplexing") may inhibit attention to the brand's advertising across these media (jeong et al., 2010). the exposure to multiple media may also lead to redundancies (duplicate reach; taylor et al., 2013), which could distract the consumer and create confusion (assael, 2011). moreover, multiple-media exposure may induce supersaturation and reactance (kolsarici & vakratsas, 2018). empirical evidence on such antagonistic effects has been found by taylor et al. (2013) in two cpg categories, and kolsarici and vakratsas (2018) for selected durable and packaged-goods brands.
in sum, extant studies underscore the potential for cross-effects between offline and online media. however, the few studies that empirically document this interplay (i) typically consider only a few categories and brands, (ii) do so in a non-cpg setting, (iii) often focus on intermediate metrics like attention and consideration, and (iv) have produced conflicting results. hence, it is fair to say that the presence of synergistic vs. antagonistic effects, between online and offline media, on brand sales across cpg categories is not well documented yet. on an additional note, consumers' advertising response across media can depend on both concurrent and earlier messages (batra & keller, 2016). the latter has typically been ignored in extant studies, which have mostly focused on cross-media spillovers at the same point in time. our analysis will accommodate dynamic interdependencies in consumers' cross-media response, and will assess to what extent the presence, extent, and nature (positive or negative) of these interactions varies across different product types. not only the (combined) use, but also the spending volatility may influence a medium's sales impact, i.e., the extent to which the expenditures on that medium are "evenly spread" or fluctuate over time (gijsenberg & nijs, 2019). while "even" spending levels ensure a continuous advertising presence, pulsing schedules have been said to prevent wearout or tedium (i.e., to prevent that a steady stream of media messages no longer triggers attention or interest, or even produces reactance; e.g., naik et al., 1998) or to optimally exploit the advertising dynamics (dubé et al., 2005). which of these advantages prevails may be medium-specific, i.e., depend on the medium's modality, information content and pacing, and on its carry-over effect, aspects that may well differ for offline and online media.
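the notion of spending volatility can be made concrete with a simple proxy, the coefficient of variation of weekly spend; this is a sketch of one common operationalization and not necessarily the gijsenberg and nijs (2019) measure the paper relies on:

```python
import statistics

def spending_volatility(weekly_spend):
    """coefficient of variation of weekly ad spend: a simple volatility proxy.
    (assumption: the paper's exact measure may differ from this sketch.)"""
    mean = statistics.fmean(weekly_spend)
    if mean == 0:
        return 0.0
    return statistics.pstdev(weekly_spend) / mean

even = [100.0] * 10                       # continuous, "even" schedule
pulsed = [500.0, 0.0, 0.0, 0.0, 0.0] * 2  # bursty pulsing schedule

even_cv = spending_volatility(even)
pulsed_cv = spending_volatility(pulsed)
```

both schedules spend the same total budget, yet the pulsed one has far higher volatility, which is exactly the contrast the text draws between "even" and pulsing strategies.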
for tv (which, as a multi-sensory delivery medium, is more engaging) or print (a more informative retrieval medium), advertising elaboration may enhance the duration of the message effect but also its wearout. in comparison, to the extent that display-ad processing is more inattentive, the impact of a single message is bound to be less enduring and tedium from repeated messages is expected to be lower (chatterjee, 2012), conditions that may favor a more continued stream of messages. 3 hence, we expect display advertising to be more effective when ad spending is less volatile. this may particularly hold for low-involvement hedonic items, where ad processing is predominantly preattentive (dijkstra et al., 2005) and display ads may serve to build or maintain brand salience rather than provide consumers with (new) brand information (batra & keller, 2016). to conclude this section: we expect the impact of display ads relative to print and tv to depend on the involvement level and hedonic nature of the product category. while some of these links have been discussed conceptually, a rigorous empirical verification is, to the best of our knowledge, still lacking. 4 our large-scale empirical analysis will shed light on how the use of display ads, alone or combined with other media, affects brand sales for different types of cpg products, and on the role of volatility therein. through ac nielsen, we obtained high-frequency advertising-spending data on a broad cross-section of categories and brands in the dutch cpg market. the advertising data cover a period of 117 weeks from january 2016 to march 2018, and are matched with gfk scanner panel data on household purchases and prices (also aggregated to the brand level). we considered all brands in that intersection with an average market share of at least 1% (for a similar cut-off rule, see kohli & sah, 2006). this resulted in a sample of 154 national brands across more than 60 product categories.
for these brands, we quantify the effectiveness of their spending on a key online medium (display advertising) and two popular offline media (tv and print), provided that the brand used a given medium at least seven times during the observation period. 5 given our interest in cross-media effects, we require a higher number of spending occurrences than other recent studies (see, e.g., van heerde et al., 2013), which required only two occurrences. if the medium was used less than seven times, the expenditures on the medium were added to a control variable that captures the combined spending in a variety of smaller, and only occasionally used, media such as billboard and cinema. together, our three focal media represent the bulk (91.5%) of the brands' advertising expenditures. the advertising expenditures are corrected for inflation using the dutch consumer price index (cpi). our dataset covers multiple product classes (i.e., beverages, food, household care, personal care and pet food), each involving a varied set of product categories (table 1, panel a). for instance, the "beverage" class includes categories like cola and tea, while the "food" class includes categories such as candy bars and yoghurt. these categories differ along multiple dimensions, such as national-brand concentration and private-label presence, and therefore allow us to address the effectiveness of the advertising media in quite different settings. also within a given category, we observe considerable diversity in the nature and size of the brands (table 1, panel b), in that both more and less frequently purchased brands as well as more expensive and cheaper brands are studied (e.g., clipper vs. lipton in the tea category; danone vs. zuivelhoeve in the yoghurt category).
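the seven-occurrence screening rule described above can be sketched as follows; the brand data and medium names are hypothetical:

```python
def screen_brand_media(brand_spend, min_occurrences=7):
    """apply the seven-occurrence rule: media used often enough keep their
    own effect; the rest are pooled into an 'other media' control series.
    brand_spend maps medium name -> list of weekly expenditures."""
    n_weeks = len(next(iter(brand_spend.values())))
    focal, other = {}, [0.0] * n_weeks
    for medium, spend in brand_spend.items():
        if sum(1 for s in spend if s > 0) >= min_occurrences:
            focal[medium] = spend              # estimate an own elasticity
        else:
            other = [o + s for o, s in zip(other, spend)]  # fold into control
    return focal, other

# hypothetical brand: tv active in 8 of 10 weeks, display in only 2
tv = [50.0] * 8 + [0.0, 0.0]
display = [20.0, 20.0] + [0.0] * 8
focal, other = screen_brand_media({"tv": tv, "display": display})
```

here display falls below the threshold, so its spending is absorbed by the "other media" control rather than getting its own elasticity.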
this comprehensive dataset will enable us to not only obtain generalizable insights on the effectiveness of these three key media when used in cpg markets, but to also explore the heterogeneity, if any, in their effectiveness across categories and brands. < insert tables 1 and 2 > table 2 provides descriptives on media usage. as can be seen from panel a, nearly all cpg brands invest in tv advertising (85.71% of the brands), while about half of them do so in print advertising (50.65%). tv dominates by far in terms of expenditure level (86.68%). while online display ads still get a smaller share of the advertising budget, already 104 (67.53%) brands use the medium on a fairly regular basis (28 out of 117 weeks, on average), with several of them allocating as much (or even more) of their budget to this online medium than to print. most brands use a mix of advertising media (panel b). brands that use only one medium tend to favour offline media, in particular tv. display ads are most often used in combination with one or both other media. this suggests that brand managers primarily assign a "supporting" role to this medium, and underscores the importance of studying potential synergies between the online and offline media. brands are also characterized by large differences in their volatility, as illustrated in web appendix b. while brand a (a leading coffee brand) employs tv ads quite frequently (77% of weeks) and maintains a rather constant spending level, brand b (a leading beer brand) advertises less often on tv (44% of weeks) and is characterized by extended periods of zero spending that alternate with periods of intense spending. given our research objectives, our modelling approach involves three steps. first, we obtain the sales-advertising elasticities by medium, including cross-media effects, for each brand. next, we combine these estimates to derive empirical generalizations.
in a third and final step, we perform a moderator analysis to explain the observed differences in elasticities. for each brand, we assess how effective each medium is at driving sales, and whether the joint use of the media results in synergistic or antagonistic effects. several methodological challenges must be met here. first, the model should distinguish between short- and long-term advertising effectiveness, while accounting for medium- and brand-specific carryover effects. second, the specification must accommodate the potential interplay among different media, not only in the same period, but also across periods. third, we should correct for possible endogeneity in media spending over time. to this end, we use the following sales equation:

$\ln S_{ict} = \alpha_{ic} + \sum_{k=1}^{K_{ic}} \beta_{kic}\,AdSt_{kict} + \sum_{k=1}^{K_{ic}} \sum_{l>k} \beta_{klic}\,AdSt_{kict}\,AdSt_{lict} + \gamma_{1,ic}\,Trend_t + \gamma_{2,ic}\,Seas1_t + \gamma_{3,ic}\,Seas2_t + \gamma_{4,ic} \ln Price_{ict} + \gamma_{5,ic} \ln CompPrice_{ict} + \gamma_{6,ic}\,OtherSt_{ict} + \gamma_{7,ic}\,CompAdSt_{ict} + \varepsilon_{ict}$ (1)

where $S_{ict}$ denotes the volume sales of brand i in category c during week t, and $\varepsilon_{ict}$ are normally-distributed error terms. 6 the use of a multiplicative functional form (with the natural logarithm of sales as dependent variable) facilitates interpretation of the coefficients and subsequent cross-brand comparisons. 6 as indicated before, we assess the effectiveness of a medium if the brand uses the medium during at least seven weeks. the number of media that satisfies this condition for a given brand i in category c is given by $K_{ic}$. we assess the media effectiveness using an adstock framework, and thereby allow for advertising carryover effects over time (broadbent, 1979). similar to dinner et al. (2014), we define a stock variable for medium k of brand i in category c during week t as:

$AdSt_{kict} = (1 - \lambda_{kic}) \ln(1 + A_{kict}) + \lambda_{kic}\,AdSt_{kic,t-1}$ (2)

where $A_{kict}$ denotes the expenditures on medium k by brand i in category c during week t. 7 as eq. (2) shows, the adstock variable is a geometrically-weighted average of the current- and previous-period advertising expenditures on a given medium, and depends on the value of the carryover parameter ("decay rate") $\lambda_{kic}$ ($0 \le \lambda_{kic} < 1$). we allow for a separate decay rate per medium k and per brand i (in category c).
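the adstock recursion of eq. (2) can be sketched in a few lines; the weighted-average form, the log(1 + spend) transform, and the decay value are assumptions consistent with the text's description ("geometrically-weighted average", initialized with the log-transformed first-week expenditure), not the paper's exact code:

```python
import math

def adstock(spend, decay):
    """geometrically-weighted adstock in the spirit of eq. (2): a weighted
    average of current log-transformed spend and last week's stock,
    initialized with the log-transformed first-week expenditure.
    'decay' plays the role of the carryover parameter lambda."""
    stock = [math.log(1.0 + spend[0])]
    for a in spend[1:]:
        stock.append((1.0 - decay) * math.log(1.0 + a) + decay * stock[-1])
    return stock

spend = [100.0, 0.0, 0.0, 50.0]     # hypothetical weekly expenditures
stock = adstock(spend, decay=0.5)   # decay is a made-up illustration value
```

during zero-spending weeks the stock decays geometrically, and a new burst of spending lifts it again, which is the carryover behaviour the model relies on.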
the former is needed given that advertising media differ on multiple dimensions (e.g., the way in which information is presented and/or the speed at which information is transferred by the medium; dijkstra et al., 2005) that may shape the duration of their effects. as for the latter, both characteristics of the brand (e.g., its purchase frequency) and characteristics of the category to which the brand belongs (e.g., market concentration) may influence the degree of carryover. similar to dinner et al. (2014), the stock variable is initialized by the log-transformed ad expenditure of the first week (full details on all operationalisations are provided in table 3). < insert table 3 here> like previous meta-analytic and large-scale empirical studies (e.g., frison et al., 2014; sethuraman et al., 2011), we express advertising effectiveness in terms of sales elasticities. to allow for potential cross-media effects in these elasticities, we include interaction terms. cross-media effects have traditionally (see, e.g., naik & raman, 2003) been studied through interactions between contemporaneous media spending levels (e.g., budgets spent on two different media within the same week). technically, this approach hampers the assessment of cross-effects for media with a rather sparse data structure (as is often the case with online spending in a cpg setting), and for high-frequency data in general. 8 moreover, this approach may not fully capture the dynamic interplay between media. to the extent that past spending on a medium still affects consumers' current purchase behavior, it can also influence consumers' current response to other media. in line with onishi and manchanda (2012) and dinner et al. (2014), we therefore construct the interactions among the stock variables, instead of among the advertising expenditures themselves. in this way, the effect of spending on one medium can depend not only on concurrent, but also on past spending in another medium.
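to illustrate why interacting the stock variables (rather than raw spend) preserves cross-media information for sparse schedules, consider this small sketch; the spending figures and decay rates are hypothetical:

```python
import numpy as np

def adstock(spend, decay):
    """adstock recursion: current log-spend plus decayed previous stock."""
    stock = np.empty_like(spend, dtype=float)
    stock[0] = spend[0]
    for t in range(1, len(spend)):
        stock[t] = spend[t] + decay * stock[t - 1]
    return stock

# hypothetical weekly log spends: tv is pulsed (many zero weeks),
# display is spread evenly at a low level
tv = np.log1p(np.array([200.0, 0.0, 0.0, 180.0, 0.0]))
display = np.log1p(np.array([10.0, 10.0, 10.0, 10.0, 10.0]))

tv_stock = adstock(tv, 0.84)
disp_stock = adstock(display, 0.60)

# interaction between the STOCKS, not the raw expenditures: it is nonzero
# even in weeks where one medium has zero current spend, so cross-media
# effects remain identifiable for sparse spending patterns
interaction = tv_stock * disp_stock
print(interaction)
```

with same-period spend interactions (tv * display on raw expenditures), weeks 2, 3 and 5 would contribute nothing to the cross-effect; with stock interactions, tv's carryover keeps those weeks informative.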
9 to more cleanly separate out the advertising effects, we add several control variables. we include a trend variable (T_t), constructed to range between -1 (for the first week) and +1 (for the final week) such that the media-effect estimates pertain to the mid-observation period. we control parsimoniously for seasonality by including two trigonometric terms (sin(2πt/52) and cos(2πt/52)) (hanssens et al., 2001; see table 3 for details on the operationalization). to capture the impact of price changes, we add the (natural logarithm of the) price of brand i in category c at time t (ln P_ict), as well as the (natural logarithm of the) competitors' price (ln CP_ict), obtained as a market-share weighted average of the price of other brands in the same category. 10 we account for the influence of other, smaller media by a stock variable reflecting the brand's combined expenditures on those media (OA_ict). the impact of competitive advertising is addressed by including a stock variable (CA_ict) that is also defined according to eq. (2), but where advertising refers to the combined expenditures (across all media) by brand i's competitors in category c. the parameter β_ic^k represents the long-term effect of medium k for brand i in category c, i.e. the % cumulative sales impact across time that results from a 1% change in the medium's advertising expenditures (dinner et al., 2014), absent (current and/or earlier) spending on other media. the corresponding short-term (immediate) effect is given by (1 − λ_ic^k) β_ic^k (see web appendix c). we will refer to these expressions as the "stand-alone" advertising elasticities henceforth. however, given that most brands employ multiple media, these stand-alone effects do not paint the full picture when cross-media interactions arise. accounting for cross-media effects (θ_ic^{kl}), we derive the "total" long-term (β̄_ic^{k,LT}) and short-term (β̄_ic^{k,ST}) advertising elasticity of medium k for brand i in category c at the average adstock level of the other focal media l (Ā_ic^l) that are used by brand i.
as shown in web appendix c, this (average) total long-term elasticity is given by: β̄_ic^{k,LT} = β_ic^k + Σ_{l≠k} θ_ic^{kl} Ā_ic^l, while the corresponding short-term elasticity is obtained as: β̄_ic^{k,ST} = β̄_ic^{k,LT} (1 − λ_ic^k). in assessing the advertising effectiveness for a given brand, we face possible endogeneity issues: brands may adjust their advertising spending based on their observed sales during the same period, and both sales and advertising spending may be affected by common period-specific shocks unobserved by the researcher. to avoid biased parameters, we use the instrument-free gaussian copula method introduced in the marketing field by park and gupta (2012), and recently applied, among others, by burmester et al. (2015) and datta et al. (2017). since changes in ad spending will end up in the medium's adstock representation (eq. (2)), we design the copula variables for the adstock variables. 11 importantly, the inclusion of the copula variables for the focal media also addresses potential endogeneity of the focal media interactions (papies et al., 2017). because prices, too, can easily be adjusted from period to period, we use a copula for the price variable as well. the use of gaussian copulas requires that the underlying endogenous regressor is non-normally distributed, which we confirmed (in line with datta et al., 2017) through the shapiro-wilk test. 12 estimation of the model proceeds in five steps. in a first step, eq. (1) is estimated nonlinearly to assess the parameters λ, α, β, and θ. starting values for these parameters are inspired by extant literature. 13 in the second step, copula variables are created for the adstock variables and the price variable; eq. (1) is extended with the copula variables one-at-a-time, and those that have a significant impact (p < 0.10) are retained (for a similar practice, see gielens et al., 2018; gim et al., 2018). in the third step, eq.
(1) is re-estimated nonlinearly including the set of retained copula variables. since the carryover parameter(s) may take on a (slightly) different value after the re-estimation, the adstock variable(s) may change somewhat as well. in the fourth step, we therefore re-construct the copulas to make them consistent with the adstock variables from the third step, and re-estimate eq. (1) with the set of retained copula variables. in the fifth and final step, we allow for contemporaneous correlations between the error terms of brands that belong to the same category, by re-estimating the equations of brands in the same category jointly using seemingly unrelated regressions (sur). if the correlations prove significant based on a likelihood ratio test, we retain these sur estimates as our final estimates. having obtained the total short- and long-term elasticity by brand for our three focal media, our next step will be to derive empirical generalizations. 11 for each adstock variable A_ict^k, the corresponding copula is obtained as c_ict^k = Φ^{-1}(H(A_ict^k)), where Φ^{-1} is the inverse of the normal cumulative density function, and H(.) is the empirical distribution function of the adstock variable (see, e.g., papies et al. 2017). 12 across all variables, 89.96% (91.3%) had a p-value below .05 (.10). for the focal advertising variables, these figures amounted to 96.8% (97.8%). 13 assmus et al. (1984) report a mean monthly carryover parameter of 0.462. the decay rate that corresponds with this empirical generalization at a weekly level equals 0.840. allenby and hanssens (2005) find that the short-term elasticity for established goods equals 0.010 on average. given the relation between the short- and long-term advertising elasticity (see appendix a), we set the starting value for the long-term elasticity equal to 0.063 (= 0.01/(1-0.840)). the initial value for the price parameter is set equal to -2.620, based on the meta-analysis by bijmolt et al. (2005).
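the gaussian-copula control terms described above can be sketched as follows; the rank-based empirical cdf with an (n + 1) scaling, which keeps probabilities strictly inside (0, 1), is our illustrative choice, and the function name is ours:

```python
from statistics import NormalDist

def copula_term(x):
    """gaussian-copula control for an endogenous regressor (park & gupta 2012).

    computes c_t = Phi^{-1}(H(x_t)), with H the empirical distribution
    function of x, approximated here by rank / (n + 1) so that the
    inverse-normal transform never receives 0 or 1.
    assumes distinct values; ties would need an averaged-rank treatment.
    """
    n = len(x)
    inv = NormalDist().inv_cdf
    ranks = [sorted(x).index(v) + 1 for v in x]   # 1-based rank of each obs
    return [inv(r / (n + 1)) for r in ranks]

# hypothetical adstock series for one medium of one brand
stock = [0.0, 1.2, 3.4, 0.7, 5.1]
print(copula_term(stock))
```

the resulting copula series is then simply added to eq. (1) as an extra regressor; a significant copula coefficient signals (and absorbs) endogeneity in the corresponding adstock variable.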
in so doing, because most brands only use a subset of media, we must apply a correction for possible self-selection bias. in setting up their advertising strategy, brand managers may select the advertising media they expect to be most effective. if unaccounted for, such self-selection may lead to a biased assessment of the media effectiveness across brands. to account for this potential bias, we follow frison et al. (2014) and implement heckman's two-stage method. this method starts with the estimation of a probit selection model to assess the likelihood of the adoption of a particular medium (0/1) by a specific brand. the probability that medium k is used by brand i in category c is specified as a function of category-specific instruments: P(D_ic^k = 1) = Φ(γ_k' W_ic), (3) where D_ic^k equals 1 (0) if medium k is (not) used by brand i in category c, W_ic represents the instruments, and Φ is the normal cumulative density function. we consider two sets of instruments. the first set considers, for each brand i in category c, the group of brands of the same product class (i.e., beverage, food, household care, personal care, pet food) as brand i, but excluding brand i itself and all its direct (same-category) competitors. for each medium (display advertising, print, tv), it then calculates the share of those brands within the same product class that use the medium. the second set considers, for each brand i in category c, a comparable group of brands within the same product class (i.e., beverage, food, household care, personal care, pet food) in the belgian cpg market. 14 the belgian and dutch (cpg) markets are highly similar in terms of several key characteristics, such as gdp per capita, annual inflation, online ad spending and private-label share. again, for each medium, we then calculate the fraction of those brands that use the medium. W_ic thus contains 6 instruments: 2 sets by 3 media.
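the inverse mills ratio constructed from the probit index in eq. (3) can be sketched as follows; the sign convention for non-adopters follows the standard two-sided heckman setup and is our assumption, since the exact convention is not shown in the text:

```python
from math import exp, pi, sqrt, erf

def phi(z):
    """standard normal density."""
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def Phi(z):
    """standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def inverse_mills(z, adopted):
    """heckman correction term from the probit index z = gamma' w.

    adopters:      phi(z) / Phi(z)
    non-adopters: -phi(z) / (1 - Phi(z))
    """
    if adopted:
        return phi(z) / Phi(z)
    return -phi(z) / (1 - Phi(z))

# hypothetical probit index for a brand that adopts display advertising
print(inverse_mills(0.5, True))
```

the ratio shrinks toward zero as the predicted adoption probability rises: brands whose media choice is well explained by the instruments contribute little correction in the second stage.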
by construction, because they pertain to brands in another product category (first set) and/or country (second set) (for a similar reasoning, see, e.g., germann et al., 2015; sotgiu & gielens, 2015), these variables are unlikely to influence the brands' own advertising elasticities, and thus represent valid instruments. 14 we obtained these data through gfk belgium. they also constitute strong instruments, as evidenced by the highly significant (p < 0.001 for every medium) likelihood ratio tests on their significance in eq. (3). based on the estimates of the probit model, we construct an inverse mills ratio for each medium that enables us to account for the selection bias when deriving meta-analytic effectiveness inferences. similar to frison et al. (2014), we construct meta-analytic regressions that allow us to derive the average short- and long-term advertising elasticity of medium k: β̄_ic^{k,ST} = μ_k^{ST} + Σ_l ψ_l^{ST} IMR_ic^l + u_ic^{k,ST} and β̄_ic^{k,LT} = μ_k^{LT} + Σ_l ψ_l^{LT} IMR_ic^l + u_ic^{k,LT}, (4) where IMR_ic^l denotes the inverse mills ratios for display advertising (l = d), print (l = p) and tv (l = tv), with associated coefficients ψ_l^{ST} and ψ_l^{LT}. the error terms u_ic^{k,ST} and u_ic^{k,LT} are normally distributed. eq. (4) is defined for each medium: display advertising (k = d), print (k = p) and tv (k = tv). both in the short and the long run, the equation is estimated simultaneously (pooled) across media. since the total elasticities account for cross-media effects, we need to address not only the self-selection of the focal medium k, but also that of the other media; hence, the inverse mills ratios of all three media enter eq. (4). since the dependent variable and the inverse mills ratios are estimated quantities, eq. (4) is estimated using weighted least squares, and the standard errors of the parameters are corrected using jackknife resampling. if an inverse mills ratio is not significant, it is dropped from the model and the equation is re-estimated. also for the stand-alone elasticities, short- and long-term empirical generalizations will be derived for each medium k (denoted as β̄_k^{SA,ST} and β̄_k^{SA,LT}, respectively), using models similar to eq. (4). in a final step, we will consider to what extent differences in a medium's advertising effectiveness are systematically and predictably related to the brand's category characteristics and to its spending volatility. focusing on the total long-run elasticity, we augment eq.
(4) by adding a medium-specific effect of (i) the level of involvement in the brand's product category (INV_c), the hedonic (vs utilitarian) nature of the category (HED_c), and the interaction between the two (INV_c × HED_c), and (ii) the volatility of the brand's advertising spending on that medium (VOL_ic^k). to facilitate interpretation, we mean-center these category characteristics. because previous studies show that advertising effectiveness may well differ between more or less "popular" brands (e.g., draganska et al., 2014; van heerde et al., 2013), we allow medium-specific effects of the brand's market share (MS_ic) as controls. the following model is thus estimated across the three media: β̄_ic^{k,LT} = μ_k + φ_1^k INV_c + φ_2^k HED_c + φ_3^k (INV_c × HED_c) + φ_4^k VOL_ic^k + φ_5^k MS_ic + Σ_l ψ_l^k IMR_ic^l + u_ic^k. (5) similar to eq. (4), the self-selection issue is managed by the inclusion of three sets of inverse mills ratios, derived from eq. (3). 15 since the dependent variable and the inverse mills ratios are estimated quantities, we again estimate eq. (5) using weighted least squares and correct the standard errors of the parameters using jackknife resampling. as before, if an inverse mills ratio is not significant, it is dropped from the model and the equation re-estimated. we first consider the results of the brand-sales models in eq. (1). the median vif value is 2.26 overall, 16 far below the rule-of-thumb cutoff value of 10 (hair et al., 2014), which indicates that multicollinearity is not an issue. moreover, the coefficients of the control variables are in the expected direction (see web appendix d for a summary), attesting to the validity of the estimates. our focus, though, is on the impact of the different media, which we discuss below. 15 as a robustness check, we also included the brand and category characteristics as additional explanatory variables in eq. (3), and derived the inverse mills ratios from that regression. the results were very similar. 16 median value across all brands and variables, in a model excluding interactions (following disatnik & sivan 2016).
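the weighted-least-squares estimation with jackknife-corrected standard errors used for the meta-analytic regressions can be sketched as follows; the data are simulated, and uniform weights stand in for the first-stage precision weights described above:

```python
import numpy as np

def wls(X, y, w):
    """weighted least squares: solve (X'WX) b = X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def jackknife_se(X, y, w):
    """leave-one-out jackknife standard errors for the wls coefficients."""
    n = len(y)
    full = wls(X, y, w)
    reps = np.array([wls(np.delete(X, i, 0), np.delete(y, i), np.delete(w, i))
                     for i in range(n)])
    mean_rep = reps.mean(axis=0)
    var = (n - 1) / n * ((reps - mean_rep) ** 2).sum(axis=0)
    return full, np.sqrt(var)

# simulated "second-stage" data: elasticities regressed on one moderator
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
beta_true = np.array([0.01, 0.005])
y = X @ beta_true + rng.normal(scale=0.002, size=40)
w = np.full(40, 1.0)   # in the paper, weights would reflect first-stage precision
coef, se = jackknife_se(X, y, w)
print(coef, se)
```

down-weighting observations with noisy first-stage estimates (by lowering their entries in `w`) is what lets the occasional unreliable brand-level elasticity "receive only a low weight", as noted in the next paragraph.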
the separate values for our three focal adstock variables are similar: 5.16 for banner, 3.04 for print, and 2.93 for tv. still, we want to point out that in our meta-analytic estimates in table 3, and our second stage results, we always correct for the standard errors of the first-stage estimates. this implies that, in the few instances where less-reliable estimates are obtained (e.g., in the occasional instances where the vif values exceeded 10), these estimates will receive only a low weight, and hardly influence our main findings. the estimates for the total elasticities are summarized in table 4. 17 frison et al. (2014) distinguished two types of print media (newspaper and magazine), and found an insignificant short- and long-term elasticity for the former, but a small significant effect for magazine (.002 and .004, respectively). as such, our findings for the offline media are largely in line with theirs, giving face validity to our results. as for display advertising, which was not studied in frison et al. (2014), we observe no significant meta-analytic total effects (short-term = .000, long-term = .003, p > .10). 18 this suggests that, for the typical cpg brand, using display advertising does not enhance brand sales, corroborating earlier concerns that consumers may not (or insufficiently) notice display ads (burke et al., 2005). however, these total elasticities capture the impact of display ads as they are used in practice, in combination with other media (see table 2). as indicated by dijkstra et al. (2005), too many media in a campaign may result in lower attention and reduce medium effectiveness. the question then becomes: would the use as a stand-alone medium make display ads effective for the "prototypical" cpg brand? 17 a histogram of the total elasticities is given in web appendix e, panel a. 18 given the clear directional hypothesis on the sign of advertising elasticities (hanssens, 2015), we use one-sided tests here.
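the vif diagnostic reported above can be computed by regressing each regressor on the remaining ones; a self-contained sketch with simulated collinear data (variable names and the simulated correlation structure are ours):

```python
import numpy as np

def vif(X):
    """variance inflation factor for each column of X (no intercept column).

    vif_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns plus an intercept.
    """
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ b
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=200)   # correlated with x1 (rho ~ 0.8)
x3 = rng.normal(size=200)                    # independent regressor
print(vif(np.column_stack([x1, x2, x3])))
```

the correlated pair shows inflated vifs while the independent regressor stays near 1; values well below the rule-of-thumb cutoff of 10 indicate, as in the text, that multicollinearity is not a concern.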
the results for the stand-alone elasticities are presented in panel b of table 4. 19 not accounting for cross-media effects reduces the impact of tv and print advertising in the short run (to .001, p < .01, for tv and .002, p < .10, for print), yet increases the impact of these media in the long run (to .009, p < .01, for tv and .011, p < .01, for print). for display ads, we again find no evidence of a significant sales impact in the short run (-.000, p > .10), but we do find a significant stand-alone effect in the long run (.042, p < .01). to acquire more insights into the interactions by media pair across the different brands, table 5 summarizes the distribution of the cross-media parameters (i.e., the thetas in eq. (1)), along with their signs and significance. 20 by working with the cross-period interactions (i.e., interactions between the adstock variables), we can estimate cross-media effects for all media pair users. 21 for both cross-media effects that involve display ads, we find that the share of significant positive and negative cross-media parameters is larger than chance. combining display ads with print and/or tv ads may thus yield additional sales, but it is at least as likely to detract from the effectiveness of these media, an intriguing finding that we explore in more detail in the subsequent sections. 19 a histogram of the stand-alone elasticities is given in web appendix e, panel b. 20 we used two-sided tests for the cross-media parameters, given that theoretical arguments can be given (discussed before) for both synergistic and antagonistic effects. 21 had we only considered same-period interactions (as is common in prior work), this number would have dropped from 58 to 20 for display advertising and print, from 90 to 76 for display advertising and tv, and from 66 to 27 for print and tv.
to conclude this section: whereas print and tv enhance sales for a typical cpg brand in the long run, this does not hold for display ads, the (meta-analytic) total sales elasticity of which remains insignificant. however, this does not imply that the medium cannot be effective. there is a substantial subset of brands (a proportion larger than chance) for which the total display-advertising elasticity is significantly positive, in the short (21%) and the long (16%) run. hence, the meta-analytic elasticities appear to conceal quite some variation across brands and/or categories. this is confirmed by means of the chi-square homogeneity test (rosenthal, 1991), which evaluates the significance of the variance in effect sizes for the long-term total elasticities. for each medium, we find evidence that there is significant (p < .01) heterogeneity among their estimated elasticities. importantly, the stand-alone elasticities and cross-media parameters reveal that using the media alone or in combination makes a difference, with media interactions being positive in some instances but negative in others. to explore this variability in advertising effectiveness, we proceed to our moderation analysis, by estimating eq. (5). table 6 presents the estimation results. the medium-specific coefficients of brand share indicate that tv advertising is less effective for already more popular brands (-.023, p < .01), consistent with earlier findings. interestingly, the opposite pattern holds for print: print advertising is less effective for low-share brands (.066, p < .05). the impact of display ads does not differ between high and low share brands (.012, p > .10). as for the category characteristics, category involvement and hedonicity enhance the display-advertising effect (.024, p < .05, and .010, p < .10, respectively).
print appears less impactful in higher-involvement categories (-.019, p < .10), while tv has a more positive effect in more hedonic categories, and especially those with higher involvement levels (.004, p < .10, and .013, p < .05, respectively). for volatility, we only find a significant effect for display ads (-.015, p < .10). the negative coefficient suggests that a more even, sustained spending pattern is appropriate for this medium. to get a feel for the impact of category characteristics, we conduct a spotlight analysis (as in datta et al., 2017). building on our conceptual discussion (and similar to del barrio-garcia et al. 2019), we first distinguish between four category types, based on a median split along the low vs. high involvement dimension, and the hedonic vs. utilitarian dimension. for each category type, we then use the estimates from table 6, panel c, to obtain the average total long-term elasticity for each advertising medium, across brands in that type of category that use the medium, along with its significance level. we do so for mean levels of brand share and spending volatility. as figure 1 shows, display advertising significantly enhances sales for brands in high-involvement, hedonic categories, with an average elasticity of .013 (p < .05). in those categories, its impact is similar to that of tv (.019, p < .05) and print (.010, p < .10), the differences between media being statistically insignificant (p > .10). this positive effect of display ads is consistent with the observation that for hedonic ("feel") products, consumers tend to engage in peripheral processing of mostly visual information, while a high level of category involvement reduces their tendency to deliberately avoid or be irritated by the (display) ad. in all other category types, display ads do not seem to generate a significant sales increase.
this stands in sharp contrast with print and tv, which exert a significant positive effect in low involvement categories, irrespective of their utilitarian or hedonic nature. so, based on figure 1, the "opponents" of display advertising appear to be proven right, the medium generating only a small positive impact for one particular type of cpg products, and representing spoiled arms in all other instances. however, such a conclusion would ignore managers' discretion to use display advertising either as a stand-alone medium or alongside other media. the results in figure 1 capture the impact of the medium as currently deployed, that is: often in combination with traditional media. in case of antagonistic effects, such combined use may have driven down the display-advertising elasticity. to explore this further, we re-estimate eq. (5) with the stand-alone (instead of the total) long-term display-advertising elasticity as dependent variable, and again consider the impact by category type. in low-involvement categories, as anticipated, display advertising still fails to produce a positive sales effect (while print and tv continue to generate a sales lift). in high-involvement categories, we observe an interesting reversal. for high-involvement hedonic products, display ads are not effective by themselves (.003, p > .10), but get their impact from the synergies with other media. further exploration 22 shows that these positive cross-effects especially hold for higher-share brands, where display advertising can support other media in maintaining brand salience. print and tv, also, show lower stand-alone than total elasticities (.001, p > .10, and .014, p < .05, respectively). it thus seems that in these categories, the use of different media that provide complementary processing occasions and types of stimuli is reinforcing.
the opposite occurs for high-involvement utilitarian products, where the total display-advertising elasticity was insignificant but display advertising produces a significant sales lift when used as a stand-alone medium (.026, p < .05), and where print and tv, too, show a more positive stand-alone than total elasticity (.006, p > .10, and .007, p < .05, respectively). media interactions in these categories thus appear mostly antagonistic, with consumers seeking out information in response to individual media (e.g. by clicking on the display ad), and rather becoming confused or irritated when confronted with multiple media. closer analysis (cf. footnote 22) revealed that display and print advertising, in particular, interact negatively (the meta-analytic estimate for their cross-effect is negative and significant (p < .05) for these types of products). 22 as an additional probe, we re-ran the second-stage (weighted) regressions with the same drivers as in eq. (5), but with the cross-media parameters as dependent variables, and with parameters specific to each media pair. so far, we considered the media impact for average volatility levels as observed in the data. however, our estimates in table 6 showed that for display advertising, changes in the temporal spread of expenditures may significantly alter the advertising effect. 23 to further explore this, we calculate the total display-advertising elasticity for volatility ranging from the lowest (zero) to the highest observed levels (three), along with its significance bounds. given the role of category characteristics, we do so for each category type. figure 2 portrays the results. for low-involvement utilitarian products, display advertising continues to be ineffective no matter the temporal spread. in each of the other cases, display ads can generate a significant sales lift provided that the volatility is sufficiently low (i.e.
up to 1.65 for low-involvement hedonic categories, 1.74 for high-involvement hedonic categories, and 2.05 for high-involvement utilitarian categories). hence, if one acknowledges the role of the temporal budget spread, the picture becomes much less bleak, and rather proves the proponents of display advertising right. continued (even) exposure enhances the impact of display advertising (in line with the experimental findings of drèze and hussherr 2003), and turns it into an effective medium for three out of the four category types. 23 we also estimated a version of eq. (5) in which we added interactions between volatility and the category characteristics, for each medium. this did not improve fit, and the interaction coefficients were not significant, indicating that the impact of volatility does not differ between category types. display advertising is becoming an increasingly important medium in brand managers' toolkit, with expenditures rising at astounding rates. as emphasized by sridhar et al. (2016, p. 39), "online advertising has moved from a peripheral to a central advertising medium". yet, when it comes to cpg brands, the role of display ads is not yet clear. not only are there huge differences in the adoption of this medium between brands, practitioners also appear to hold conflicting views on its ability to increase brand sales. while advocates proclaim that they "see the banner's influence again and again" (adage, 2014), sceptics consider it a waste of good money and describe it as "one of the most misguided and destructive technologies of the internet age" (the new york times, 2014). with very few empirical studies to go on, the jury is still out on whether, and when, display ads can improve the performance for cpg brands. importantly, to fully evaluate the impact of display ads, one should assess not only the medium's "stand-alone" effect, but also its "total" effect in interaction with other media.
yet, achieving individual-medium and combined media effectiveness in a multimedia advertising setting is far from trivial (sridhar et al. 2016). what further complicates matters is that the effects are bound to differ between product categories, depending on consumers' decision-making process in those categories. this led draganska et al. (2014, p. 589) to explicitly call for more research on the interplay between category characteristics and media effectiveness. our study addressed this call, by conducting a large-scale analysis of the impact of display ads on brand sales in different types of cpg categories, alone or in combination with other media, and with due attention to the spending volatility. below, we summarize the key findings, and highlight important managerial takeaways. we find that for the average cpg brand, unlike tv (and, to some extent, print) advertising, display ads by themselves do not exert a significant impact on brand sales, in either the short or the long run. moreover, we find that interdependencies between display ads on the one hand, and offline media on the other, are negative in as many instances as they are positive. it follows that even within a multi-media setting, display ads are not guaranteed to enhance sales for the typical cpg brand. this appears to confirm recent managerial concerns that investments in online ads for cpg are largely ineffective. still, there is a significant subset of brands for which display ads do enhance (long-term) brand sales. analysis of these heterogeneous effects generates interesting new insights. first and foremost, category characteristics matter. display ads do not lift sales for low-involvement utilitarian products.
for these products, display advertising may suffer from "double-jeopardy": as a retrieval, low-bandwidth medium it can easily be circumvented by less-involved consumers (chatterjee, 2012; dijkstra et al., 2005); as a carrier of few and mostly visual cues, it provides hardly any factual information that might be relevant for these products. conversely, display ads can lead to a significant increase in brand sales in high-involvement categories. for high-involvement products that are hedonic in nature, display advertising works best in combination with other media. this may be because combining display advertising and traditional media results in higher levels of attention (chang & thorson, 2004). it is also in line with the different functionality of the media: while display ads are mainly restricted to visual modes and cannot transfer a lot of information as such, they can reinforce the brand imagery conveyed through other media, enhancing brand salience or creating a reminder effect (batra & keller 2016; drèze & hussherr, 2003). such positive interactions prevail especially for already established brands, where display ads can help the brand stay "above the radar". interestingly, for high-involvement utilitarian products, display advertising should not be used in combination with print media. in these categories, display ads offer consumers a unique opportunity to obtain detailed verbal and visual information by clicking on the ad, and to elaborate on that information at their own pace. it appears that in such a setting, display ads are "self-sufficient", and rather than generating extra attention or conveying additional information, the combination with print messages leads to confusion and overload. moreover, much depends on how the display-advertising budget is allocated over time, i.e. on its volatility. display-advertising elasticities are higher when investments in the medium are not pulsed, but more evenly spread in time.
this appears consistent with the observation that display ads are less subject to tedium and wearout, and can play a role in multiple stages of the purchase funnel, i.e. in creating awareness, maintaining brand salience, and activating consumers to buy (batra & keller, 2016). with an even message stream, display messages can lift sales even for low-involvement hedonic items, corroborating that pre-attentive processing of the ads can accumulate into higher brand awareness, familiarity and liking (chatterjee, 2012). in all, these findings underscore that display ads can play a very diverse role in the advertising media-scape, more so than any other medium. neither the proponents nor the opponents are universally right: whether display-advertising investments are "spoiled arms" or "money well spent" critically depends on the product category they are used in as well as on the temporal spread, not only for the medium itself, but also in combination with other media. our results have important implications for managers. first, they show that display ads are not the new panacea for enhancing cpg brand sales. for the average brand, they do not generate a marked sales increase. it follows that managers should not automatically imitate their competitors in an attempt to maintain competitive parity, and rush into using this medium without careful reflection on its overall effectiveness. still, they should not prematurely dismiss it either. display-advertising investments are more rewarding for some brands than for others, and a judicious allocation of budgets over time in combination with other media can make the difference between effective and non-effective ads. do managers make the right decisions? and: (how) can their decisions be improved?
to shed more light on this issue, we consider the actual display spending patterns by brands in the four category types (table 7 gives an overview) against our findings on the effectiveness of the online medium. we find that the share of display-advertising users is largest in the high-involvement categories (>70%), with a smaller share of users in the low-involvement hedonic (65.1%) and low-involvement utilitarian (46.7%) groups. this is consistent with our finding that display advertising is mostly effective in high-involvement categories. however, when we look at expenditure shares, the situation is different: the share of display advertising in the brands' total advertising spending is highest in the low-involvement utilitarian categories (20%, compared to less than 10% for the remaining product types), although this figure is based on a small number of brands and has to be interpreted with caution. moreover, for about half of the brands in these categories, display advertising is the sole medium of choice. given that, in contrast with print and tv, display-advertising effects for low-involvement utilitarian products are never significant, these expenditures are spoiled arms. managers of these products would do well to re-allocate their display ad budgets to the traditional offline media. in the remaining three category types, most brands that adopt display advertising use it in combination with traditional media (>92%). for hedonic products, where synergies are found to prevail, such combined use is a good thing. however, only half of the low-involvement hedonic brands (46.3%) have a low-enough volatility to get a significant positive effect. the situation is worse for high-involvement hedonic items, where only 38.5% have a temporal spread for their display ads conducive to a lift in brand sales. our results suggest that for these brands, more even (sustained) spending would yield more bang for the online buck.
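the volatility criterion above can be made concrete. spending volatility is defined later in the excerpt as the coefficient of variation of a medium's weekly expenditures, and the figure 2 legend reports category-specific cutoffs below which display elasticity turns significantly positive. a minimal sketch in python (the cutoff values come from the figure 2 legend; the two spending streams are invented for illustration):

```python
from statistics import mean, pstdev

def spending_volatility(weekly_spend):
    """Coefficient of variation of a weekly spending stream."""
    m = mean(weekly_spend)
    if m == 0:
        return float("inf")  # no spend at all: volatility undefined
    return pstdev(weekly_spend) / m

# Same annual budget (5200), allocated two ways over 52 weeks:
even = [100.0] * 52             # sustained, even stream
pulsed = [650.0, 0, 0, 0] * 13  # heavy four-weekly pulses

# Category-specific cutoffs, taken from the Figure 2 legend
CUTOFFS = {
    "low-involvement hedonic": 1.65,
    "high-involvement utilitarian": 1.74,
    "high-involvement hedonic": 2.05,
}

for name, stream in [("even", even), ("pulsed", pulsed)]:
    v = spending_volatility(stream)
    below = {cat: v < cut for cat, cut in CUTOFFS.items()}
    print(name, round(v, 2), below)
```

with the same annual budget, the even stream has zero volatility and clears every cutoff, while the heavily pulsed stream exceeds the low-involvement hedonic cutoff, illustrating how pulsing can forfeit the display-advertising lift in some categories.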
turning to the high-involvement utilitarian products, we find that for only about half (21 out of 43) of the brands is display-spending volatility low enough to yield a significant positive effect with combined use. these brands, too, are advised to maintain a more even spending stream. moreover, about half of these brands use display advertising along with print, a detrimental combination. based on our findings, they would be better off using display advertising as a stand-alone medium for these products. while it offers several novel insights, our study also has a number of limitations that open up new research opportunities. first, even though we implemented an econometric correction for self-selection in the brands' use of certain media, along with an endogeneity correction on the chosen levels of media expenditures, our study did not involve a random assignment of brands to the different conditions, making any causal inference conditional on the validity of these "corrective measures". second, we focused on display advertising. future studies could look into the category-specific interactions between offline media and other online media with different characteristics, such as online video, which may be more (less) similar to tv (print) advertising than the display ads we studied. third, we considered display ads as one medium, without distinguishing the specific websites on which the ads appeared. to the extent that the ad effects are influenced by the quality of the website (e.g., the number of "real" viewers that are cpg shoppers, or the brand fit with the website content), a more refined analysis would be worthwhile. relatedly, like most previous studies on the brand-sales impact of different media, we did not have data on the consumers targeted by the brand ads, nor on the media's audience overlap.
hence, in instances where we found no significant cross-media interplay, we could not ascertain whether this was due to non-overlapping audiences, or to the intrinsic absence of synergistic or antagonistic effects even among consumers confronted with the different media. we leave this as a topic for future study. finally, it is important to emphasize that the current spending levels for online ads, while rapidly rising, are still small for most cpg brands (with an average share of 1.82%).

variable definitions for the sales-advertising relation:
- sales: total (volume) sales for brand i in category c during week t. (a)
- advertising: expenditures (in €) on medium k (display advertising, print, or tv) for brand i in category c during week t, deflated by the consumer price index (cpi).
- adstock: stock variable for medium k of brand i in category c during week t, defined by eq. (2).
- trend: trend variable (ranging from -1 to +1).
- seasonality: seas_cos_t = cos((2*π*week_t)/52) and seas_sin_t = sin((2*π*week_t)/52), where week_t is the week number (within the year) for week t.
- price: price of brand i in category c during week t. (a)
- competitive price: market-share weighted average price of competitors of brand i in category c during week t.
- other advertising (b): expenditures on smaller and less frequently used media (used < 7 times) for brand i in category c during week t.
- competitive advertising: total expenditures on display advertising, print, tv and other media by competitors of brand i within product category c during week t.
- involvement (c): level of involvement with category c. items based on steenkamp et al. (2010): "this product category is very important to me", "this product category interests me a lot".
- hedonic vs. utilitarian (c): summated scale of the hedonic vs. utilitarian nature of the category; the utilitarian scores were reverse-scaled before averaging (cronbach alpha = .81). items based on voss et al. (2003). for hedonic nature: "products in this category are fun", "products in this category are delightful", "products in this category are exciting"; for utilitarian nature: "products in this category are functional", "products in this category are necessary", "products in this category are practical".
- spending volatility: coefficient of variation of medium k for brand i in category c during the observation period.
- brand share: the 2017 market share of brand i within category c.

notes: (a) brand sales are the sum of its sku sales expressed in volume units (e.g. grams, liters); brand price is obtained as brand value sales divided by volume sales. (b) we assess the effectiveness of a focal medium if the brand uses this medium at least seven times during the observation period; if not, its expenditures are added to other advertising. (c) based on a 2017 mturk survey with responses from about 50 consumers in each category (courtesy of arjen van lin and katrijn gielens); items rated on a 7-point scale, from 1 = "strongly disagree" to 7 = "strongly agree".

table notes: "users" are brands that use the given medium at least seven times during the observation period; "common users" are brands that use each of the given media at least seven times. significance is based on one-sided tests (p < .10), on standard errors from jackknife resampling (p-values for one-sided tests), or on two-sided tests (p < .10), as indicated per table. share of display-advertising users = % of brands in the cell using display.
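the constructed variables in this table can be sketched in code. note that eq. (2) defining adstock is not reproduced in this excerpt, so the geometric-decay (koyck-style) recursion below, with an assumed carryover parameter, is a common convention rather than the authors' exact specification; the seasonality terms follow the table's definitions directly:

```python
import math

def adstock(spend, carryover=0.7):
    """Geometric-decay ad stock. Eq. (2) is not shown in this excerpt,
    so a standard Koyck-style recursion is assumed:
        A_t = spend_t + carryover * A_{t-1}
    with an illustrative carryover parameter."""
    stock, out = 0.0, []
    for s in spend:
        stock = s + carryover * stock
        out.append(stock)
    return out

def seasonality(week):
    """seas_cos and seas_sin, exactly as defined in the variable table."""
    return (math.cos(2 * math.pi * week / 52),
            math.sin(2 * math.pi * week / 52))
```

for example, `adstock([10, 0, 0], carryover=0.5)` decays a single spending burst over the following weeks, which is what lets display advertising carry long-term effects in the sales model.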
display-advertising spending share = share of display-advertising expenditures in total advertising expenditures by the brand on display advertising, print and tv, averaged across brands in the cell (%). users with low-enough volatility = out of brands in the cell that use display advertising, the % for which volatility is below the effectiveness cutoff (see the legend of figure 2). combined users = out of brands in the cell that use display advertising, the % that use it in combination with other media (with any other (print or tv) media, with print, and with tv).

figure 1: total long-term elasticity for display advertising, print and tv advertising, by type of product category. spending volatility set at the mean for each medium. *p < .10, one-sided.

figure 2: total long-term elasticity for display advertising, by product category and as a function of display volatility. elasticity is never significant within the data range for low-involvement utilitarian categories. elasticity is significant and positive (i) if volatility is below 1.65 for low-involvement hedonic categories (46.3% of brands), (ii) below 1.74 for high-involvement utilitarian categories (48.8%), and (iii) below 2.05 for high-involvement hedonic categories (38.5%).

references:
- the off-line impact of online ads
- when procter & gamble cut $200 million in digital ad spend, it increased its reach 10%
- defense of banner ads: everybody hates them, but they work
- advertising response
- from silos to synergy: a fifty-year review of cross-media research shows synergy has yet to achieve its full potential
- how advertising affects sales: meta-analysis of econometric results
- which products are best suited to mobile advertising? a field study of mobile display advertising effects on consumer attitudes and intentions
- integrating marketing communications: new findings, new lessons, and new ideas
- new empirical generalizations on the determinants of price elasticity
- empirical generalizations about advertising campaign success
- media in focus: marketing effectiveness in the digital era
- online display advertising: modeling the effects of multiple creatives and individual impression histories
- incorporating long-term effects in determining the effectiveness of different types of online advertising
- one way tv advertisements work
- a dynamic model for digital advertising: the effects of ad formats and creative content
- the role of consumer involvement in determining cognitive response to broadcast advertising
- high-cost banner blindness: ads increase perceived workload, hinder visual search, and are forgotten
- the impact of pre- and post-launch publicity and advertising on new product sales
- television and web advertising synergies
- wearout or weariness? measuring potential negative consequences of online ad volume and placement on website visits
- the role of varying information quantity in ads on immediate and enduring cross-media synergies
- media differences in rational and emotional responses to advertising
- comparing the relative effectiveness of advertising channels: a case study of a multimedia blitz campaign
- how well does consumer-based brand equity align with sales-based brand equity and marketing-mix response
- a longitudinal cross-product analysis of media-budget allocations: how economic and technological disruptions affected media choices across industries
- separate and joint effects of medium type on consumer responses: a comparison of television, print, and the internet
- driving online and offline sales: the cross-channel effects of traditional, online display, and paid search advertising
- the multicollinearity illusion in moderated regression analysis
- internet versus television advertising: a brand-building comparison
- internet advertising: is anybody watching?
- an empirical model of advertising dynamics
- the information processing of coordinated media campaigns
- us cpg & consumer products industry digital ad spending, by format, 2016 (billions and % of total)
- targeting online display ads: choosing their frequency and spacing
- billboard and cinema advertising: missed opportunity or spoiled arms?
- the chief marketing officer matters
- the new regulator in town: the effect of walmart's sustainability mandate on supplier shareholder value
- advertising spending patterns and competitor impact
- investors' evaluations of price increase preannouncements
- online display advertising: targeting and obtrusiveness
- multivariate data analysis
- empirical generalizations about marketing impact. cambridge: marketing science institute
- market response models: econometric and time series analysis
- effects of internet display advertising in the purchase funnel: model-based insights from a randomized field experiment
- effects of exposure to sexual content in the media on adolescent sexual behaviors: the moderating role of multitasking with media
- some empirical regularities in market shares
- synergistic, antagonistic, and asymmetric media interactions
- aggressive marketing strategy following equity offerings and firm value: the role of relative strategic flexibility
- attributing conversions in a multichannel online marketing environment: an empirical model and a field experiment
- what happens online stays online? segment-specific online and offline effects of banner advertisements
- information processing from advertisements: toward an integrative framework
- the effect of banner advertising on internet purchasing
- integrated marketing communications: provenance, practice and principles
- planning media schedules in the presence of dynamic advertising quality
- a hierarchical marketing communications model of online and offline media synergies
- understanding the impact of synergy in multimedia communications
- perils of using ols to estimate multimedia communications effects
- marketing activity, blogging and sales
- do mobile banner ads increase sales? yes, in the offline channel
- addressing endogeneity in marketing models
- handling endogenous regressors by joint estimation using copulas
- the impact of brand familiarity on online and offline media synergy
- can old media enhance new media? how traditional advertising pays off for an online social network
- reclamewerking (how advertising works)
- on the relationship between motives and purchase decisions: some empirical approaches
- combining online, print increases ad effectiveness
- meta-analytic procedures for social research
- how well does advertising work? generalizations from meta-analysis of brand advertising elasticities
- generalizable and robust tv advertising effects
- suppliers caught in supermarket price wars: victims or victors? insights from a dutch price war
- relating online, regional, and national advertising to firm value
- what makes consumers willing to pay a price premium for national brands over private labels
- is the multi-platform whole more powerful than its separate parts? measuring the sales effects of cross-media advertising
- fall of the banner ad: the monster that swallowed the web
- p&g cuts more than $100 million in "largely ineffective" digital ads
- price and advertising effectiveness over the business cycle
- interactive communication: consumer power and initiative
- measuring the hedonic and utilitarian dimensions of consumer attitude

note: product category characteristics mean centered across categories in the dataset; brand share and volatility mean centered by medium. p-values based on two-sided tests where indicated.

the authors gratefully acknowledge aimark for providing the performance data and thank alfred dijs for sharing his expertise. they thank arjen van lin and katrijn gielens for sharing their mturk survey data. they also thank kay peters, harald van heerde, the faculty and participants of the emac doctoral colloquium and the participants of the marketing dynamics conference in dallas for their helpful comments and suggestions.

key: cord-331867-mqqtzf8k authors: shahsavari, shadi; holur, pavan; wang, tianyi; tangherlini, timothy r.; roychowdhury, vwani title: conspiracy in the time of corona: automatic detection of emerging covid-19 conspiracy theories in social media and the news date: 2020-10-28 journal: j comput soc sci doi: 10.1007/s42001-020-00086-5 sha: doc_id: 331867 cord_uid: mqqtzf8k rumors and conspiracy theories thrive in environments of low confidence and low trust.
consequently, it is not surprising that ones related to the covid-19 pandemic are proliferating given the lack of scientific consensus on the virus’s spread and containment, or on the long-term social and economic ramifications of the pandemic. among the stories currently circulating in us-focused social media forums are ones suggesting that the 5g telecommunication network activates the virus, that the pandemic is a hoax perpetrated by a global cabal, that the virus is a bio-weapon released deliberately by the chinese, or that bill gates is using it as cover to launch a broad vaccination program to facilitate a global surveillance regime. while some may be quick to dismiss these stories as having little impact on real-world behavior, recent events including the destruction of cell phone towers, racially fueled attacks against asian americans, demonstrations espousing resistance to public health orders, and wide-scale defiance of scientifically sound public mandates such as those to wear masks and practice social distancing, countermand such conclusions. inspired by narrative theory, we crawl social media sites and news reports and, through the application of automated machine-learning methods, discover the underlying narrative frameworks supporting the generation of rumors and conspiracy theories. we show how the various narrative frameworks fueling these stories rely on the alignment of otherwise disparate domains of knowledge, and consider how they attach to the broader reporting on the pandemic. these alignments and attachments, which can be monitored in near real time, may be useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. understanding the dynamics of storytelling on social media and the narrative frameworks that provide the generative basis for these stories may also be helpful for devising methods to disrupt their spread. 
as the covid-19 pandemic continues its unrelenting global march, stories about its origins, possible cures and vaccines, and appropriate responses are tearing through social media and dominating the news cycle. while many of the stories in the news media are the product of fact-based reporting, many of those circulating on social media are anecdotal and the product of speculation, wishful thinking, or conspiratorial fantasy. given the lack of a strong scientific and governmental consensus on how to combat the virus, people are turning to informal information sources such as social media to share their thoughts and experiences, and to discuss possible responses. at the same time, the news media is reporting on the actions individuals and groups are taking across the globe, including ingesting home remedies or defying stay at home orders, and on the information motivating those actions. 1 consequently, news and social media have become closely intertwined, with informal and potentially misleading stories entangled with fact-based reporting: social media posts back up claims with links to news stories, while the news reports on stories trending on social media. to complicate matters, not all sites purporting to be news media are reputable, while reputable sites have reported unsubstantiated or inaccurate information. because of the very high volume of information circulating on and across these platforms, and the speed at which new information enters this information ecosystem, fact-checking organizations have been overwhelmed. the chief operating officer of snopes, for example, has pointed out that, "[there] are rumors and grifts and scams that are causing real catastrophic consequences for people at risk... it's the deadliest information crisis we might ever possibly have," and notes that his group and others like it are "grossly unprepared" [40] . devising computational methods for disentangling misleading stories from the actual news is a pressing need. 
such methods could be used to support fact-checking organizations, and help identify and deter the spread of misleading stories. ultimately, they may help prevent people from making potentially catastrophic decisions, such as resisting efforts at containment that require participation by an entire citizenry, or self-medicating with chloroquine phosphate, bleach or alcohol. 2 as decades of research into folklore has shown, stories such as those circulating on social media, however anecdotal, are not created from whole cloth, but rely on existing stories, story structures, and conceptual frameworks that inform the world view of individuals and their broader cultural groups [49, 62, 68, 71] . taken together, these three features (a shared world view, a reservoir of existing stories, and a shared understanding of story structure) allow people to easily generate stories acceptable to their group, for those stories to gain a foothold in the narrative exchanges of people in those groups, and for individuals to try to convince others to see the world as they do by telling and retelling those stories.

footnote 1: the broad-scale use of hydroxychloroquine as a possible treatment for the virus, which first gained a foothold in social media, is only one example of this feedback loop in action [57] . numerous other claims, despite being repeatedly shown to be false, persist in part because of this close connection between social media and the news [7] .

footnote 2: an arizona man died from using an aquarium additive that contained chloroquine phosphate as a source of chloroquine, a potential "miracle cure" touted by various sources, including the u.s. president [72] . several poison control centers had to make press releases warning people not to drink or gargle with bleach [12] . the governor of nairobi included small bottles of cognac in the covid-19 care kits distributed to citizens, erroneously indicating that who considers alcohol a "throat sanitizer" [22] .

inspired by the narratological work of algirdas greimas [28] , and the social discourse work of joshua waletzky and william labov [36] , we devise an automated pipeline that determines the frameworks that form the narrative bedrock of diverse knowledge domains, in this case those related to the covid-19 pandemic [67] . we also borrow from george boole's famous definition of a domain of discourse, recognizing that in any such domain there are informal and constantly negotiated limits on what can be said: "in every discourse, whether of the mind conversing with its own thoughts, or of the individual in his intercourse with others, there is an assumed or expressed limit within which the subjects of its operation are confined" [10] . we conceptualize a narrative framework as a network comprising the actants (people, organizations, places and things) and the interactant relationships that are expressed in any storytelling related to the pandemic, be it a journalistic account or an informal anecdote [67, 69] . in our model of storytelling, individuals usually activate only a small subset of the available actants and interactant relationships that exist in a discourse domain, thereby recognizing that individual storytelling events are often incomplete. this story incompleteness presupposes knowledge of the broader narrative framework on the part of the storyteller's interlocutors. building on folkloric work in rumor and legend, we further recognize that a large number of the stories circulating on and across social networks have a fairly straightforward "threat narrative" structure, comprising an orientation (the who, what, where and when), a complicating action: threat (identifying who or what is threatening or disrupting the in-group identified in the orientation), a complicating action: strategy (a proposed solution for averting the threat), and a result (the outcome of applying that strategy to the threat) [68] .
to determine the extent of narrative material available-the actants and their complex, content dependent interactant relationships-we aggregate all the posts or reports from a social media platform or news aggregator site. for social media in particular, we recognize that participants in an online conversation rarely recount a complete story, choosing instead to tell parts of it [38] . yet even partial stories activate some small group of actants and relationships available in the broader discourse. we conceptualize this as a weighting of a subgraph of the larger narrative framework network. by applying the narrative framework discovery pipeline to tens of thousands of english-language social media posts and news stories, primarily focused on events in the united states and all centered on conspiracy theories related to the covid-19 pandemic, we uncover five central phenomena: (i) the attempt by some conspiracy theorists to incorporate the pandemic into well-known conspiracy theories, such as q-anon; (ii) the emergence of new conspiracy theories, such as one aligning the domains of telecommunications, public health, and global trade, and suggesting that the 5g cellular network is the root cause of the pandemic; (iii) the alignment of various conspiracy theories to form larger ones, such as one suggesting that bill gates is using the virus as a cover for his desire to create a global surveillance state through the enforcement of a worldwide vaccination program, thereby aligning the conspiracy theory with anti-vaccination conspiracy theories and other conspiracy theories related to global cabals; (iv) the nucleation of potential conspiracy theories, such as #filmyourhospital, that may grow into a larger theory or be subsumed in one of the existing or emerging theories; and (v) the interaction of these conspiracy theories with the news, where certain factual events, such as the setting up of tents in central park for a field hospital to treat the overflow of patients, 
are linked to conspiracy theories. in this particular case, the tents of the field hospital are linked to central aspects of the pizzagate conspiracy theory, specifically child sex-trafficking, underground tunnels, and the involvement of highly visible public figures [34, 67] . running the pipeline on a daily basis during the time of this study allows us to capture snapshots of the dynamics of the entanglement of news and social media, revealing ebbs and flows in the overall story graph, while highlighting the parts of the news graph that are susceptible to being linked to rumors and conspiratorial thinking. conspiracy theories (along with rumors and other stories told as true) circulate rapidly when access to trustworthy information is low, when trust in accessible information and its sources is low, when high-quality information is hard to come by, or a combination of these factors [3, 6, 26, 41, 53, 54, 61] . in these situations, people exchange informal stories about what they believe is happening, and negotiate possible actions and reactions, even as events unfold around them. research into the circulation of highly believable stories in the context of natural disasters such as hurricane katrina [43] , man-made crises such as the 9/11 terrorist attacks in 2001 [24] and the boston marathon bombings in 2013 [66] , or crises with longer time horizons such as disease [4, 30, 32] , has confirmed the explanatory role storytelling plays during these events, while underscoring the impact that stories, including incomplete ones, can have on solidifying beliefs and inspiring real-world action [25, 35] . the goal of telling stories in these situations is at least in part to reach groupwide consensus on the causes of the threat or disruption, the possible strategies that are appropriate to counteract that threat, and the likely outcomes of a particular strategy [68] . 
in an unfolding crisis, stories often provide a likely cause or origin for the threat, and propose possible strategies for counteracting that threat; the implementation of those strategies can move into real-world action, with the strategy and results playing themselves out in the physical world. this pattern has repeated itself many times throughout history, including during recent events such as edgar welch's attempt to "free" children allegedly being held in a washington dc pizza parlor [45] , the genocidal eruptions that crippled rwanda with paroxysms of violence in 1994 [19] , and the global anti-vaccination movement that continues to threaten global health [13, 32] . although the hyperactive transmission of rumors often subsides once credible and authoritative information is brought to the forefront [4, 70] , the underlying narrative frameworks that act as a generative reservoir for these stories do not disappear. even in times of relative calm where people have high trust in their information sources and high confidence in the information being disseminated through those sources, stories based on the underlying narrative frameworks continue to be told, and remain circulating with much lower frequency in and across various social groups. this endemic reservoir of narrative frameworks serves multiple cultural functions. it supports the enculturation of new members, proffering a dynamic environment for the ongoing negotiation of the group's underlying cultural ideology [18] . also, it provides a ready store of explanatory communal knowledge about potential threats and disruptions, their origins and their particular targets, as well as a repertoire of potentially successful strategies for dealing with those threats [17] . when something does happen that needs explanation-and a possible response-but for which high trust or high confidence information sources do not exist, the story generation mechanism can shift into high gear. 
the endemic reservoir of narrative frameworks that exists in any population is not immutable. indeed, it is the ability of people to change and shape their stories to fit the specific information and explanatory needs of their social groups that makes them particularly fit for rapid and broad circulation in and across groups [68] . while the stability in a storytelling tradition suggests that the actants and their relationships are slow to change, their constant activation through the process of storytelling leads to dynamic changes in the weighting of those narrative framework networks; new actants and relationships can be added and, if they become the subject of frequent storytelling, can become increasingly central to the tradition. because of their explanatory power, stories can be linked into cycles to form conspiracy theories, often bringing together normally disparate domains of human interaction into a single, explanatory realm [41, 33] . although a conspiracy theory may never be recounted in its entirety, members of the groups in which such a theory circulates internalize, through repeated interactions, the "immanent" narrative that comprises the overall conspiracy theory [14] . in turn, conspiracy theories can be linked to provide a hermetic and totalizing world view redolent of monological thinking [27] , and can thereby provide explanations for otherwise seemingly disjoint events while aligning possible strategies for dealing with the event to the storyteller's cultural ideology [68] . summarizing the storytelling of thousands of storytellers and presenting these stories in an organized fashion has been an open problem in folkloristics since the beginning of the field. the representation of narratives as network graphs has been a desideratum in narrative studies at least since the formalist studies of vladimir propp [50] .
lehnert, in her work on the representation of complex narratives as graphs, notes that these representations have the ability to "reveal which concepts are central to the story" [39] . in other work specifically focused on informal storytelling, bearman and stovel point out that, "by representing complex event sequences as networks, we are easily able to observe and measure structural features of narratives that may otherwise be difficult to see" [5] . later work on diverse corpora including national security documents has shown the applicability of the approach to a broad range of data resources [46] . the automatic extraction of these graphs, however, has been elusive given the computational challenges inherent in the problem. in the context of conspiracy theories, preliminary work has successfully shown how simple s-v-o (subject-verb-object) extractions can be matched to a broader topic model of a large corpus of conspiracy theories [55] . work in our group has shown how the extraction of more complex structures and their concatenation into complex narrative graphs provides a clear overview of, for example, the narrative framework supporting the decision to seek exemptions from vaccination among "antivax" groups posting on parenting blogs [69] . recent work on rumors and conspiracies focuses specifically on the covid-19 pandemic [1, 23, 74 ]. an analysis of german facebook groups whose discussions center on the pandemic uses a similar named entity analysis to our methods, and shows a strong tendency among the facebook group members to resist the news reported by recognized journalistic sources [9] . a study of the chinese social media site weibo revealed a broad range of concerns from disease origin and progression to reactions to public health initiatives [42] . 
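to illustrate the (subject, verb, object) relationship format mentioned above, here is a deliberately naive extractor that splits a sentence around a fixed list of relation verbs. this is a toy: the actual pipelines discussed rely on full syntactic parsing and topic modeling, not a hand-written verb list:

```python
# toy list of relation verbs; a real pipeline would use a dependency
# parser rather than a fixed vocabulary
RELATION_VERBS = {"activates", "causes", "released", "funds", "spreads"}

def extract_svo(sentence):
    """Naive (subject, verb, object) split around the first known verb.
    Purely illustrative of the relationship format, not the papers' method."""
    tokens = sentence.lower().strip(" .!?").split()
    for i, tok in enumerate(tokens):
        if tok in RELATION_VERBS and 0 < i < len(tokens) - 1:
            return (" ".join(tokens[:i]), tok, " ".join(tokens[i + 1:]))
    return None

print(extract_svo("The 5g network activates the virus."))
```

even this crude format is enough to populate a narrative graph: the subject and object become actant nodes, and the verb labels the edge between them.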
an examination of 4chan that employs network analysis techniques and entity rankings traces the emergence of sino-phobic attitudes on social media; these attitudes are echoed in our narrative frameworks [58] . in earlier work, we have shown how conspiracy theories align disparate domains of human knowledge or interaction through the interpretation of various types of information not broadly accessible outside the community of conspiracy theorists [67] . we have also shown that conspiracy theories, like the rumors and legends on which they are based, are opportunistic, taking advantage of low-information environments to align the conspiracy theory to unexplained events in the actual news [4] . such an alignment provides an overarching explanation for otherwise inexplicable events, and fits neatly into the world view of the conspiracy theorists. the explanatory power of this storytelling can also entice new members to the group, ultimately leading them to subscribe to the worldview of that group. data for this study were derived from two main sources: one a concatenation of social media resources composed largely of forum discussions, and the other a concatenation of covid-19-related news reports from largely reputable journalistic sources. we devised a scraper to collect publicly available data from reddit subreddits and from 4chan threads related to the pandemic. the subreddits and threads were evaluated for relevance by three independent evaluators, and selected only if there was consensus. all of the data are available in our open science framework data repository [60] . for 4chan, we collected ∼ 200 links to threads for the term "coronavirus", resulting in a corpus of 14712 posts. the first post in our corpus was published on march 28, 2020 and the final post was published on april 17, 2020. for reddit, we accessed ∼ 100 threads on various subreddits with 4377 posts scraped from the top comments.
because these top comments are not necessarily sorted by time but rather by the process of up-voting, we did not include these timestamps in our analysis. specifically, we targeted r/coronavirus and r/covid19, along with threads from r/conspiracy concentrating on the coronavirus. we removed images, urls, advertisements, and non-english text strings from both sources to create our research corpus. after running our pipeline, we were able to extract 87079 relationships from these social media posts. for news reports, we relied on the gdelt project, an open source platform that scrapes web news (in addition to print and broadcast) from around the world (https://www.gdeltproject.org/). our search constraints through this dynamic corpus of news reports included a first-order search for conspiracy theories. the corpus was subsequently filtered to include only articles written in english (a gdelt built-in feature) from u.s. news sources. the top 100 news articles (as sorted by the gdelt engine) were aggregated daily from january 1, 2020 to april 14, 2020 (prior to filtering), and the body of each filtered news report was scraped with newspaper3k. these articles were then cleaned and staged for our pipeline to extract sentence-level relationships between key actors. we extracted ∼ 60 relationships from each report and processed ∼ 50 filtered news reports per day, yielding 324510 relationships in total. the methods used here are a refinement of those developed for an earlier study of conspiracies and conspiracy theories [67] . we estimate narrative networks that represent the underlying structure of conspiracy theories in a large social media corpus (4chan, reddit), where they are most likely to originate, and the corresponding reporting about them in the news (gdelt). this approach allows us to analyze the interplay between the two corpora and to track the time-correlation and pervasive flow of information from one corpus to the other.
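the cleaning step described above (removing images, urls, advertisements, and non-english text strings) can be sketched as follows; the specific regexes and the ascii-ratio heuristic for filtering non-english strings are our own illustrative assumptions, not the authors' exact rules:

```python
import re

def clean_post(text: str) -> str:
    """Drop URLs and image references from a raw forum post (illustrative rules)."""
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs
    # strip bare image/video filenames (common in 4chan posts)
    text = re.sub(r"\S+\.(jpg|png|gif|webm)\b", " ", text, flags=re.I)
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

def mostly_english(text: str, threshold: float = 0.9) -> bool:
    """Crude non-English filter: keep posts that are mostly ASCII characters."""
    if not text:
        return False
    ascii_chars = sum(1 for ch in text if ord(ch) < 128)
    return ascii_chars / len(text) >= threshold

post = "the virus came from a lab http://example.com/proof pic.jpg"
cleaned = clean_post(post)
```

in practice a language-identification model would replace the ascii heuristic; the sketch only shows where such a filter sits in the pipeline.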
the latent structure of the social media networks also provides features which enable the identification of key actants (people, places, and things) in conspiracies and conspiracy theories, and the detection of threat elements in these narratives. the following subsections introduce the graphical narrative model for conspiracy theories in social media as well as the pipeline for processing news reports. the end-to-end automated pipeline is summarized in fig. 1 . we model narratives as generated by an underlying graphical model [67] . the narrative graph is characterized by a set of n nodes representing the actants, a set of r relationships r = {r_1, r_2, …, r_r} defining the edges, and k contexts c = {c_1, c_2, …, c_k} providing a hierarchical structure to the network. these parameters are either given a priori or estimated from the data. a context c_i is a hidden parameter, or the 'phase', of the underlying system that defines the particular environment in which the actants operate. it expresses itself in the distributions of the relationships among the actants, and is captured by a labeled and weighted network over pairs of actants, where each such pair is labeled with a distribution over the relationship set r. each post to a thread describes relationships among a subset of actants. a user picks a context c_i and then, from the network, draws a set of actants and inter-actant relationships; the forum posts constitute the output of this generative process. from a machine learning perspective, given a text corpus, we need to estimate all the hidden parameters of the model: the actants, the contexts, the set of relationships, and the edges and their labels. in other words, it is necessary to jointly estimate all the parameters of the different layers of the model. first, each sentence in our corpus is processed to extract various syntax relationship tuples.
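a minimal data model for this narrative graph might look like the sketch below; the class and method names are hypothetical, and only the conventions (actant nodes, relationship-labeled edges, contexts, edge weight as relationship count) come from the text:

```python
from collections import defaultdict
from typing import Dict, Tuple

class NarrativeGraph:
    """Actants as nodes; each (context, actant pair) edge carries a multiset
    of relationship phrases, so the edge weight is the number of observed
    relationships, as described in the text."""

    def __init__(self):
        # (context, actant_a, actant_b) -> {relationship_phrase: count}
        self.edges: Dict[Tuple[str, str, str], Dict[str, int]] = \
            defaultdict(lambda: defaultdict(int))

    def add(self, context: str, a: str, rel: str, b: str) -> None:
        """Record one extracted (a, rel, b) tuple under a context."""
        self.edges[(context, a, b)][rel] += 1

    def weight(self, context: str, a: str, b: str) -> int:
        """Edge weight = total relationships observed between the pair."""
        return sum(self.edges[(context, a, b)].values())

g = NarrativeGraph()
g.add("public health", "5g", "causes", "virus")
g.add("public health", "5g", "spreads", "virus")
```

note that edges here are directed (subject to object); an undirected variant would canonicalize the pair order.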
some tuples are described as (arg_1, rel, arg_2), where each arg_i is a noun phrase and rel is a verb or other type of relational phrase. others include subject-verb-copula (svcop) relationships that connect a subject to a copula or subject complement. the relationship extraction framework combines dependency parse tree and semantic role labeling (srl) tools used in natural language processing (nlp) [15] . we design relationship patterns that are frequently found in these narratives, and then extract tuples matching these patterns from the parsed corpus. the phrases arg_1 and arg_2, along with entity mentions obtained using a named entity recognition tool [2] , provide information about the actant nodes and the contexts in which they appear. noun phrases are aggregated across the entire corpus into contextually relevant groups referred to as super-nodes or contextual groups (cgs). several different methods of clustering noun phrases and entity mentions into semantically meaningful groups have been proposed in the literature [51] . we accomplish the aggregation presented here by generating a list of names across the corpus, which then acts as a seed list to start the process of creating cgs from the noun phrases. we group nouns in the ner list using co-occurrence information from the noun phrases. for example, "corona" and "virus" tend to appear together in phrases in our corpus (see table 1 ). we leverage recent work on word embeddings that allows for the capture of both word and phrase semantics. in particular, the noun phrases in the same contextual group are clustered using an unsupervised k-means algorithm on their bert embeddings [20] . while the process of merging and deleting the sub-nodes detailed in prior work [67] offers flexibility over the choice of k, we chose k = 40 as a conservative parameter to preserve the resolution of cluster members.
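the seed-based aggregation of noun phrases into contextual groups can be sketched as below; this toy version uses only token co-occurrence with the seed list, while the paper further clusters the resulting groups with k-means over bert embeddings:

```python
from collections import defaultdict

def contextual_groups(seed_terms, noun_phrases):
    """Assign each noun phrase to every seed term it mentions (illustrative).

    Phrases containing 'corona' and 'virus' end up in the same groups,
    mirroring the co-occurrence aggregation described in the text.
    """
    groups = defaultdict(set)
    for phrase in noun_phrases:
        tokens = set(phrase.lower().split())
        for seed in seed_terms:
            if seed in tokens:
                groups[seed].add(phrase)
    return groups

phrases = ["corona virus", "deadly corona virus", "cell towers", "5g towers"]
cgs = contextual_groups({"corona", "virus", "towers"}, phrases)
```

a real implementation would seed from the NER list and use fuzzy rather than exact token matching; this sketch only shows the grouping step.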
in prior work [67] , we have shown that the final network comprising sub-nodes and their relationship edges is not sensitive to the exact size and content of the contextual groups derived at this stage. this cg grouping, which we undertake prior to applying k-means clustering on word embeddings, enables us to distill the noun phrases into groups of phrases that have a semantic bias. this distillation mitigates the inherent noise issues with word embeddings when they are used directly to cluster heterogeneous sets of words over a large corpus [56] . the cgs can also be viewed as defining macro-contexts in the underlying generative narrative framework. it is worth noting that the nodes are not disjoint in their contexts: a particular sub-node might have phrases that are also associated with other sub-nodes. these sub-nodes, when viewed as groups, act as different micro-contexts of the super-nodes in the overall model. the final automatically derived narrative framework graph is composed of phrase-cluster nodes, or "sub-nodes", which are representations of the underlying actants. we automatically label these sub-nodes based on the tf-idf scores of the words in each cluster resulting from the k-means clustering of the contextual groups. the edges between the sub-nodes are obtained by aggregating the relationships among all pairs of noun phrases. thus, each edge has a set of relationship phrases associated with it, and the number of relationships can serve as the weight on that edge. the relationship set defining an edge can be further partitioned into semantically homogeneous groups by clustering their bert embeddings [20, 67] . because conspiracy theories connect preexisting domains of human activity through creative speculation, often presented as being based on a theorist's access to "hidden knowledge", we expect that the narrative frameworks that we construct will have clusters of nodes and edges corresponding to the different domains.
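labeling a sub-node by the tf-idf scores of its cluster words can be sketched with a stdlib-only computation (the smoothing in the idf term is a common convention we assume here, not a detail taken from the paper):

```python
import math
from collections import Counter

def tfidf_label(cluster_phrases, all_clusters, top_n=3):
    """Label one cluster by the words with the highest tf-idf against all clusters."""
    docs = [" ".join(c).split() for c in all_clusters]   # each cluster = one "document"
    n_docs = len(docs)
    tf = Counter(" ".join(cluster_phrases).split())       # term frequency in this cluster

    def idf(word):
        df = sum(1 for d in docs if word in d)            # document frequency
        return math.log((1 + n_docs) / (1 + df)) + 1.0    # smoothed idf

    scored = sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)
    return scored[:top_n]

clusters = [["5g towers", "5g radiation", "cell towers"],
            ["wuhan lab", "lab leak", "virus lab"]]
label = tfidf_label(clusters[0], clusters)
```

words frequent in one cluster but rare across clusters dominate its label, which is how the sub-node labels in table 2 are derived.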
since these clusters are densely connected within themselves, with a sparser set of edges connecting them to other clusters, we can apply community detection algorithms to discover them. for example, the domain of "public health" will have dense connections between sub-nodes such as "doctors" and "hospitals", with relatively few connections to the domain of "telecommunications", which will in turn have dense connections between sub-nodes such as "5g" and "cell towers". traversing these different communities mimics the conspiracy theorist's cross-domain exploration in the attempt to create a conspiracy theory. given the unsettled nature of discussions on social media concerning the covid-19 pandemic, it seems likely that there are multiple, competing conspiracy theories in the corpus. therefore, one would expect to find a large number of communities in the overall network, some isolated from the rest and others with a limited number of shared sub-nodes. one would also expect that this network would have a hierarchical structure. to capture any such hierarchical structure, we compute overlapping network communities, where each community is defined by (i) a core set of nodes that constitute its backbone, and (ii) a set of peripheral nodes of varying significance that are shared with other communities. currently, to determine the communities in our network, we run the louvain greedy community detection algorithm multiple ( ∼ 1000 ) times using the default resolution parameter in networkx [8] . we define two nodes as belonging to the same core if they co-occur in the same community for almost all of the runs; here we use a threshold of 850. as in [67] , the threshold is aligned with the precipitous drop in the size of the giant connected component (gcc). 
next, a core that defines a community is a set of nodes that is closed under the co-occurrence transitive relationship: if nodes a and b belong to the same core, and nodes b and c also belong to that same core, then, by transitivity, we say nodes (a, b, c) are all in the same core. the resulting disjoint sets of core nodes (i.e., equivalence classes under the co-occurrence transitive relationship), along with their edges in the original network, define non-overlapping communities that form the multitude of narrative frameworks in the corpus. overlapping nodes are then brought into the communities by relaxing the co-occurrence threshold [67] . these interactions among core communities, and hence the respective narrative frameworks, capture the alignments among multiple knowledge domains that often underlie conspiracy theories. taken as a whole, the narrative framework comprising networks of actants and their inter-actant relationships (along with other metadata) reveals aspects of conspiracy theories, including the threatening sub-nodes identified by the conspiracy theorists and the possible strategies that they suggest for dealing with those threats. for instance, the network community consisting of the sub-node [tower, 5g, danger], along with its associated svcop relationships "[5g] is deadly" and "[tower]s should be burned", implies a threat to human well-being posed by 5g, and a strategy for dealing with that threat: burn the cell towers (strategy) to protect people from the deadly 5g (threat). because threats are often followed by strategies, we prioritize the classification of threats. to classify threats, we look for sub-nodes in the network communities that, given their associated descriptions, might be considered threatening. for example, a descriptive reference to a sub-node "vaccines" that suggests that they "can kill" would allow us to code "vaccines" as a possible threat.
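the consensus-core construction described above — repeated community-detection runs, a co-occurrence threshold, and transitive closure into disjoint cores — can be sketched as follows; the toy partitions stand in for the ∼1000 louvain runs, and the threshold plays the role of the 850-run cutoff:

```python
from collections import defaultdict
from itertools import combinations

def consensus_cores(partitions, threshold):
    """Nodes co-assigned to one community in >= `threshold` partitions form a
    co-occurrence graph; its connected components are the transitively
    closed, disjoint cores."""
    co = defaultdict(int)
    for partition in partitions:              # each partition: list of node sets
        for community in partition:
            for a, b in combinations(sorted(community), 2):
                co[(a, b)] += 1
    # adjacency over pairs that clear the threshold
    adj = defaultdict(set)
    for (a, b), count in co.items():
        if count >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    # connected components = equivalence classes under transitivity
    seen, cores = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        cores.append(comp)
    return cores

# 3 toy runs: 'a' and 'b' are always together; 'c' joins them in only 2 runs
runs = [[{"a", "b", "c"}, {"d"}],
        [{"a", "b", "c"}, {"d"}],
        [{"a", "b"}, {"c", "d"}]]
cores = consensus_cores(runs, threshold=3)
```

in practice the partitions would come from a louvain routine (e.g. networkx's `louvain_communities`); the sketch assumes they are already computed.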
we repeat this process for all the sub-nodes in the network communities, and find that strong negative opinions are associated with some subset of sub-nodes, which we identify as candidate threats. by applying a semi-supervised classification method to these candidate sub-nodes, we can confirm or reject our suspicions about their threatening nature. the threat classifier is trained on the relationships extracted from social media posts. in particular, svcop relationships (described in section 4.1) play a special role in providing information about a particular sub-node: these relationships provide important information about the first argument and are generally descriptive in nature. in such relationships, the second argument is most often a descriptive phrase with an associated to-be verb phrase. for example, (5g, is, dangerous/a threat/harmful) are svcop relationships describing the [5g] argument. we consider these relationships as self-loops for their first arguments, which are aggregated into sub-nodes. the most discussed sub-nodes tend to have a high number of such self-loop relationships, and the descriptive phrases often carry powerful characterizations of these entities. sub-node-specific aggregation of these relationships can inform us about the role of a particular actant in its community. for example, we find ∼ 350 svcop relationships describing the node "virus" as "harmful", "deadly", "dangerous", and "not real". we aggregate the entire corpus of svcop relationships ( ∼ 10000 ) and then label them in a hierarchical fashion as follows: first, each svcop phrase is encoded using a 768-dimensional bert embedding from a model fine-tuned for entailment detection between phrases [20] . next, the vectors are clustered with hdbscan [44] , resulting in a set of ∼ 1000 density-based clusters c, with an average cluster membership size of 7.
approximately 3000 of the phrase encoding vectors are grouped in a cluster labeled as −1 , indicating that they are not close to others and are best left as singletons. for the rest, each cluster represents a semantically similar group, and can be assigned a group semantic label. thus, the task of meaningfully labeling ∼ 10000 phrases as 'threat' or 'non-threat' is reduced by almost a factor of 10. we define a binary label for each cluster. a threat is a phrase that is universally recognized as threatening: [5g] is dangerous, [a tower] is a bioweapon. here, the phrases dangerous and bioweapon are clearly indicative of threats. the remaining phrases are labeled as neutral/vague comments. for every cluster c ∈ c , we assign a label l_c to c such that every descriptive phrase d ∈ c is also assigned label l_c. clearly, label quality is contingent on the manual labeling of the clusters and the semantic similarity of descriptive phrases as aggregated by the bert embeddings and density-based clustering. this is ensured by three independent reviewers labeling each cluster and, in the case of disagreement, choosing the label receiving the majority vote. we measure the inter-rater reliability with respect to the majority vote for the three different raters; our results, for a sample size of 100, are 0.745, 0.87 and 0.829. the semantic similarity in each cluster is verified by a qualitative analysis of the clusters undertaken by domain experts; for example, most of the clusters have exact phrase matches. these labeled phrases are used to train a k-nearest neighbor (knn)-based phrase classifier to identify threatening descriptions. once again, we use the fine-tuned bert embedding. many competing knn models provide useful classification results for phrases. we found that setting k = 4 results in a model that reasonably classifies most phrases. the knn classifier is binary: 0 represents the class of non-threat and 1 represents the class of threat.
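the knn phrase classifier with k = 4 and a majority vote can be sketched over precomputed embeddings; the toy 2-d vectors below stand in for the 768-dimensional bert embeddings:

```python
import math
from collections import Counter

def knn_classify(vec, labeled, k=4):
    """Majority vote over the k nearest labeled embedding vectors.

    `labeled` is a list of (vector, label) pairs; label 1 = threat, 0 = non-threat.
    """
    nearest = sorted(labeled, key=lambda item: math.dist(vec, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# toy embedding space: threatening phrases cluster near (1, 1), neutral near (0, 0)
train = [((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.1, 0.9), 1), ((1.0, 0.8), 1),
         ((0.0, 0.0), 0), ((0.1, 0.0), 0), ((0.0, 0.1), 0), ((0.2, 0.1), 0)]
pred = knn_classify((0.95, 0.95), train, k=4)
```

euclidean distance is assumed here; cosine distance over normalized bert vectors would behave similarly.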
the cross-validation is carried out at the level of the clusters: that is, when designing the training sets (for knn, the set of phrases used in performing the knn classification of a given phrase) and validation sets, we partition the phrases based on their cluster assignments. all phrases belonging to the same cluster are assigned to the same set and are not split across the training and validation sets. because the labeled phrases have duplicate second arguments, and repeated phrases occur in the same cluster, partitioning the data at the cluster level ensures that no phrase appears in both the training and validation sets. the primary purpose of designing the phrase classifier is to identify threatening sub-nodes, which appear as core nodes in the narrative framework communities. the aggregated second arguments of svcop relationships corresponding to a particular sub-node are classified with the knn phrase classifier. based on a majority vote on these second arguments, we can classify a sub-node as a potential threat. an outline of this algorithm is provided in algorithm 1. a narrative framework for a conspiracy theory, which may initially take shape as a series of loosely connected statements, rumors, and innuendo, is composed from a selection of sub-nodes from one or more of these communities and their intra- and inter-community relationships. each community represents a group of closely connected actant sub-nodes, with those connections based on the context-dependent inter-actant relationships. traversing paths based on these inter-actant relationships within and across communities highlights how members posting to the forums understand the overall discussion space, and provides insight into the negotiation process concerning the main actants and inter-actant relationships.
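the cluster-level partitioning described earlier in this section, which keeps every cluster entirely inside one fold so no phrase leaks between training and validation, can be sketched as follows (round-robin fold assignment is our simplification):

```python
from collections import defaultdict

def group_folds(phrase_clusters, n_folds=5):
    """Assign whole clusters to folds so a cluster never spans train and validation.

    `phrase_clusters` maps cluster id -> list of phrases;
    returns fold index -> list of (cluster id, phrase) pairs.
    """
    folds = defaultdict(list)
    for i, (cluster_id, phrases) in enumerate(sorted(phrase_clusters.items())):
        folds[i % n_folds].extend((cluster_id, p) for p in phrases)
    return folds

# 10 toy clusters of 3 phrases each, split into 5 folds
clusters = {c: [f"phrase-{c}-{j}" for j in range(3)] for c in range(10)}
folds = group_folds(clusters, n_folds=5)
```

scikit-learn's `GroupKFold` implements the same idea with the cluster id as the group label.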
this search across communities is guided by the extended overlapping communities (which connect the core communities), as described in section 4.1, taking into consideration the sub-nodes that are classified as threat nodes. the inter-actant relationship paths connecting the dominant threat nodes, both within and across communities, are then pieced together to create the various conspiracy theories. many conspiracy theories detected in social media are addressed in news reports. by temporally aligning the communities discovered from social media with the evolving communities emerging from news collected daily, we can situate the 4chan commentary alongside mass media discussions in the news. such a parallelism facilitates the analysis of information flow from smaller community threads in social media to the national news, and from the news back to social media. to aggregate the published news, we consider (1-day time-shifted) intervals of 5 days. this sliding window builds s = 101 segments from january 1, 2020 to april 14, 2020. we have found that a longer interval, such as the one chosen here, provides a richer backdrop of actants and their interactions than shorter intervals. in addition, narratives on consecutive days retain much of the larger context, highlighting the context-dependent emergence of new theories and key actants. we use the major actants and their mentions discovered in the social media data to filter the named entities that occur in the news corpus. a co-occurrence network of key actants in news reports (conditioned on those discovered from social media) provides a day-to-day dynamic view of the emergence of various conspiracy theories over time. in addition, we model the flow of information between social media and news reports by monitoring the frequency of occurrence of social media communities (as captured by a group of representative actants in each community) in the text of news reports (see evaluation).
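the 5-day window with a 1-day shift over january 1 to april 14, 2020 can be checked to yield the s = 101 segments stated above:

```python
from datetime import date, timedelta

def sliding_segments(start: date, end: date, window_days: int = 5):
    """All [d, d + window - 1] intervals with a 1-day shift that fit inside [start, end]."""
    segments = []
    d = start
    while d + timedelta(days=window_days - 1) <= end:
        segments.append((d, d + timedelta(days=window_days - 1)))
        d += timedelta(days=1)
    return segments

# january 1 to april 14, 2020 spans 105 days; 105 - 5 + 1 = 101 windows
segments = sliding_segments(date(2020, 1, 1), date(2020, 4, 14))
```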
with minimal supervision, a few actant mentions are grouped together, including [trump, donald]: donald trump; [coronavirus, covid19, virus]: coronavirus; and [alex, jones]: alex jones. while such groupings are not strictly required and could be done more systematically (see [59] ), this actant grouping enhances the co-occurrence graph by reducing the sparsity of the adjacency matrix representing subject-object interaction. for each 5-day segment of aggregated news reports, the corpus of extracted relationships r_i and the associated set of entities e_i are parsed with algorithm 2 to yield a co-occurrence actant network. day-to-day networks reveal the inter-actant dynamics in the news. while many metrics can be used for summarizing the dynamics within these networks, we considered the number of common neighbors (ncn) between actants. if the set of vertices adjacent to a_1 is s_{a_1} and the set adjacent to a_2 is s_{a_2}, the ncn score is defined as n_{a_1,a_2} = |s_{a_1} ∩ s_{a_2}| (eq. 1). we temporally align the conspiracy theories discussed in social media and in news reports by first capturing a group of representative actants in each social media community. let the set of keywords representing a particular community be v_i. the timestamps present in the 4chan and gdelt data make these corpora suitable for temporal analysis with respect to v_i (our reddit corpus does not contain dates). to facilitate a comparison between the two corpora conditioned on v_i, let c_j denote the raw 4chan data and d_j denote the raw gdelt news data in time-segment j. the time segments are 5-day intervals between march 28, 2020 and april 14, 2020, which is the intersection of the date ranges for which we have temporal 4chan and gdelt data. we define a coverage score m that captures the presence of actants from v_i in c_j and d_j.
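the ncn score of eq. (1) reduces to a set intersection over adjacency lists; the toy network below is ours:

```python
def ncn(adjacency, a1, a2):
    """Number of common neighbors: |S_a1 ∩ S_a2| (eq. 1 in the text)."""
    return len(adjacency.get(a1, set()) & adjacency.get(a2, set()))

# toy co-occurrence network of actants in news reports
adjacency = {
    "coronavirus": {"bill gates", "wuhan", "5g"},
    "conspiracy theory": {"bill gates", "5g", "qanon"},
}
shared = ncn(adjacency, "coronavirus", "conspiracy theory")
```

a rising ncn between "coronavirus" and "conspiracy theory" is exactly the signal reported later for fig. 4.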
to normalize the coverage scores to a baseline, we compute a relative coverage score r(v_i) = m(v_i)/m(v*), where v* is a random set of actants (of size 500). computed across all time-segments, r_c(v_i) and r_d(v_i) represent a series of relative coverage scores for 4chan and gdelt, respectively, with one sample for every time segment. this metric provides a normalized measure for the coverage of a community derived from social media in the temporal corpora of 4chan and gdelt data. the cross-correlation function of these relative coverage scores can provide interesting insight into the co-existence of conspiracy theory communities in the two corpora, where the offset τ is the number of days between the news and 4chan data (see fig. 5 ). this cross-correlation score peaks for the number of offset days that results in the maximum overlap of relative coverage scores. for example, a τ of 10 days would imply that information about a specific set of representative actants occurred in the news and 4chan data roughly 10 days apart. τ thus captures the latency or periodicity lag between communities mentioned in the news and in the 4chan data. the error bars are generated over 20 random communities used for normalizing the coverage scores before cross-correlation. we present standard metrics to further compare communities of actants derived from temporal news reports and social media. our metrics are standard measurements used for clustering evaluations based on ground-truth class labels [73] . algorithm 3 describes this evaluation process. we use average recall and average accuracy to evaluate the performance of the phrase-based threat classifier. the average is computed across the fivefold group-shuffled cross-validation of phrases.
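the relative coverage score and the offset at which the cross-correlation of the two coverage series peaks can be sketched as follows; the normalization and the toy series are illustrative, not the paper's data:

```python
def relative_coverage(m_community, m_random):
    """r(v_i) = m(v_i) / m(v*): per-segment coverage of a community's actants,
    normalized by the coverage of a random actant set."""
    return [m / r if r else 0.0 for m, r in zip(m_community, m_random)]

def best_lag(news, social, max_lag):
    """Offset (in segments) that maximizes the cross-correlation of two series."""
    def xcorr(lag):
        pairs = [(news[t], social[t - lag]) for t in range(len(news))
                 if 0 <= t - lag < len(social)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# toy series: the social media signal leads the news signal by 2 segments
social = [0, 0, 5, 9, 5, 0, 0, 0]
news   = [0, 0, 0, 0, 5, 9, 5, 0]
lag = best_lag(news, social, max_lag=3)
```

a peak at lag 0, as reported for the 5g community, would mean the two signals rise and fall together.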
here, recall is the fraction of true threat phrases that the classifier correctly identifies, and accuracy is the fraction of all phrases that are correctly classified. there are limitations with our approach, including those related to data collection, the estimation of the narrative frameworks, the labeling of threats, the validation of the extracted narrative graphs, and the use of the pipeline to support real-time analytics. data derived from social media sources tend to be very noisy, with considerable amounts of spam, extraneous and off-topic conversations, as well as numerous links and images interspersed with meaningful textual data. even with cleaning, a large number of text extractions are marred by spelling, grammatical, and punctuation errors, and poor syntax. while these problems are largely addressed by our nlp modules, they produce less accurate entity and relationship extractions for the social media corpus than for the news corpus. also, unlike news articles, which tend to be well archived, social media posts, particularly on sites such as 4chan, are unstable, with users frequently deleting or hiding posts. consequently, re-crawling a site can lead to the creation of substantively different target data sets. to address this particular challenge, we provide all of our data as an osf repository [60] . the lack of consistent time stamping across and within social media sites makes determining the dynamics of the narrative frameworks undergirding social media posts difficult. in contrast to the news data harvested from the gdelt project, the social media data are marked by a coarse, and potentially inaccurate, time frame due to inconsistent time stamps or no time stamps whatsoever. comparing a crawl from one day to the next to determine change in the social media forums may help attenuate this problem. given the potential for significant changes due to the deletion of earlier posts, or the move of entire conversations to different platforms, however, the effectiveness of this type of strategy is greatly reduced.
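the recall and accuracy used to evaluate the threat classifier reduce to the standard binary-classification definitions, sketched here over predicted and true labels:

```python
def recall(y_true, y_pred):
    """Fraction of true threats (label 1) that the classifier recovers: tp / (tp + fn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if tp + fn else 0.0

def accuracy(y_true, y_pred):
    """Fraction of all phrases classified correctly."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
```

recall is the headline metric because missing a threatening phrase is costlier here than a false alarm.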
because of the limited availability of consistently time-stamped data, our current comparison between the social media conspiracy theory narrative frameworks, and those appearing in the news, is limited to a three-week window. there appears to be a fairly active interaction between the "twittersphere" and other parts of the social media landscape, particularly facebook. many tweets, for instance, point to discussions on social media and, in particular, on facebook. yet, because of restrictions on access to facebook data for research purposes, we are unable to consider this phenomenon. future work will incorporate tweets that link to rumors and other conspiracy theories in our target social media arena. as part of this integration, we also plan to include considerations of the trustworthiness of various twitter nodes, and the amplification role that "bots" can play in the spread of these stories [16, 23] . as with a great deal of work on social media, there is no clear ground truth against which to evaluate or validate. this problem is particularly apparent in the context of folkloric genres such as rumor, legend and conspiracy theories, as there is no canonical version of any particular story. indeed, since folklore is always a dynamically negotiated process, and predicated on the concept of variation, it is not clear what the ground truth of any of these narratives might be. to address this problem, we consider the narrative frameworks emerging from social media and compare them to those arising in the news media. the validation of our results confirms that our social media graphs are accurate when compared to those derived from news media. currently, our pipeline only works with english language materials. the modular nature of the pipeline, however, allows for the inclusion of language-specific nlp tools, for parsing of languages such as italian or korean, both areas hard hit by the pandemic, and likely to harbor their own rumors and conspiracy theories. 
in addition, we believe that our semi-supervised approach to threat detection would require less human effort if we had more accurate semantic embeddings. finally, we must note that the social media threads, particularly those on 4chan, are replete with derogatory terms and abhorrent language. while we have not deleted these terms from the corpus, we have, wherever possible, masked them in our tables and visualizations, with obvious swears replaced by asterisks, and derogatory terms replaced by "dt" for derogatory term, or "rdt" for racially derogatory term, plus a qualifier identifying the target group. after running the pipeline and community detection, we find a total of two hundred and twenty-nine communities constituting the various knowledge domains in the social media corpus, from which actants and inter-actant relationships are drawn to create narrative frameworks. many of these communities consist of a very small number of nodes. it is worth noting that several of the communities are "meta-narrative" communities, and focus on aspects of communication in social media (e.g., communities 11 and 74) or on platform-specific discussions (e.g., communities 44 and 46, which focus on facebook, and 181, which focuses on youtube and twitter). other communities are "background" communities and focus on news coverage of the pandemic (e.g., communities 7 and 62), the background for the discussion itself (e.g., community 30, which connects the pandemic to death, and community 35, which focuses on hospitals, doctors, and medical equipment such as ventilators), or discussions of conspiracy theories in general (e.g., communities 108 and 109). we find that these "meta-narrative" and "background" communities, after thresholding, tend to be quite small, with an average of 3.9 sub-nodes per community.
nevertheless, several of them include sub-nodes with very high ner scores, such as community 155, with only four nodes: "use", "microwave", "hybrid protein" and "cov", all with high ner scores. this community is likely to be included as part of more elaborated conspiracy theory narrative frameworks, such as those related to 5g radiation. the five largest communities, in contrast, range in size from 66 to 172 nodes. these five communities, along with several other large communities, form the main reservoir of actants and inter-actant relationships for the creation of conspiracy theory narrative frameworks. we find thirty communities with a node count ≥ 14 (see fig. 2 ). table 2 shows the temporary labels for these communities, which are based on an aggregation of the labels of the three nodes with the highest ner scores and the node(s) with the highest degree. the relationship between the discussions occurring in social media and the reporting on conspiracy theories in the media changed over the course of our study period. in mid to late january, when the coronavirus outbreak appeared to be limited to the central chinese city of wuhan, and of little threat to the united states, news media reporting on conspiracy theories had very little connection to reporting on the coronavirus outbreak. as the outbreak continued through march 2020, the reporting on conspiracy theories gradually moved closer to the reporting on the broader outbreak. by the middle of april, reporting on the conspiracy theories being discussed in social media, such as those in our research corpus, had moved to a central position (see fig. 3 ). the connection between these two central concepts in the news, "coronavirus" and "conspiracy theory", can also be seen in the rapid increase in the shared neighbors of these sub-nodes (defined in eq. (1)) in the overall news graph during the period of study (see fig. 4 ).
since our dataset contains dated 4chan and gdelt data from march 28, 2020 to april 14, 2020, communities from the social media corpus were explored within the subset of news media between the same dates using the relative coverage scores defined in eq. (3). the cross-correlation of the ratio of coverage scores for different fixed communities to a random community is provided in fig. 5. the higher average scores for the "5g" community, including words such as {"5g", "waves", "antenna", "radio", "towers", "radiation"}, suggest that this community was matched more frequently than other communities compared to a baseline random community. a peak at zero days offset within the time period from march 28, 2020 to april 14, 2020 implies that the news reports are correlated in time to 4chan thread activity. in addition, these plots suggest that a few communities dominate conspiracy theories more than others. the viability of other communities such as {"army", "us", "bioweapon"} and {"lab", "science", "wuhan"} suggests the lack of a single dominant conspiracy theory consensus narrative. instead, it appears that numerous conspiracy theories may be vying for attention. we examine "bill gates", a key actor frequently found in the common-neighbors set between "coronavirus" and "conspiracy theory". key relationships extracted by our pipeline from the news reports provide a qualitative overview of the emergence of "bill gates" as a key actor (see table 5). finally, the evaluations based on algorithm 3 are shown in fig. 6. the plots indicate the saturation of the completeness and homogeneity scores at ∼92% and ∼82%, respectively, across time. similarly, the v-measure saturates at ∼86%. these scores, per time sample, represent the fidelity of the cluster-matching process.
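the lagged-correlation analysis behind fig. 5 can be sketched as follows. this is a minimal pure-python version that correlates two daily coverage-score series at a range of day offsets and reports the lag with the strongest correlation; the paper's exact normalisation and smoothing are not reproduced.

```python
def cross_correlation(x, y, max_lag):
    """Pearson correlation of x[t] against y[t + lag] for each lag in [-max_lag, max_lag]."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
        sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0

    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        if len(a) >= 2:
            out[lag] = pearson(a, b)
    return out

# toy series: the second series repeats the first with a two-day delay
social = [0, 0, 1, 3, 1, 0, 0, 0]
news = [0, 0, 0, 0, 1, 3, 1, 0]
cc = cross_correlation(social, news, max_lag=3)
best_lag = max(cc, key=cc.get)
```

for these toy series the correlation-maximizing offset is 2 days; in the paper's data the analogous peak sits at (or very near) zero days, which is what supports the claim of near-synchronous activity across the two channels.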
table 2: the largest thirty communities in the social media corpus, in descending order of size. the labels are derived from the sub-node labels for the semantically meaningful nodes with the highest ner scores in each community (racially derogatory terms and swears have been skipped). the label of the highest-degree node(s) not included in the community label is listed in the third column. nodes with a threat score ≥ 0.5.

the phrase classifier described in the methods was cross-validated and the recall and accuracy across the validation sets are provided in table 3. recall is used as the primary performance measure in the detection of threats, as the sensitivity to threatening phrases is the most important feature of the classifier. the phrase classifiers applied to descriptive phrases of a particular sub-node provide insight into the context of the sub-node. for the phrase classifier, fig. 8 describes a histogram of the number of sub-nodes across the percentage of associated phrases classified as threats. table 4 provides a sample set of sub-nodes with their respective threat scores based on the majority vote. a sample sub-node "ccp" has 53% of its associated descriptive phrases classified as threats. the end-to-end classification pipeline, along with sample nearest neighbors during the phrase classification task, is shown in fig. 9.

fig. 2: overview graph of the largest thirty communities in the social media corpus. nodes are colored by community, and sized by ner score. narrative frameworks are drawn from these communities, each of which describes a knowledge domain in the conversation. nodes with multiple community assignments are colored according to their highest-ranked community. an overarching narrative framework for a conspiracy theory often aligns sub-nodes from numerous domains.

fig. 3: progressive attachment of "coronavirus" to "conspiracy theory" in the co-occurrence network of news reports conditioned on entities found in social media: the orange-outlined nodes represent the two concepts, as they gravitate toward one another over time and form new simple paths. from top to bottom, 5-day intervals starting on january 20, 2020, march 15, 2020, and april 10, 2020. nodes are colored as follows: celebrities in yellow, media outlets in red, important actants in pink (manually colored), places in green and corporations/entities in black.

the lack of authoritative information about the covid-19 pandemic has allowed people to provide numerous, varied explanations for its provenance, its pathology, and both medical and social responses to it. these conversations do not occur in isolation. they not only circulate on and across various social media platforms but also interact with news reporting on the pandemic as it unfolds. similarly, journalists are keenly aware of the discussions occurring in social media, thereby creating a feedback loop between the two. the interlocking computational methods described above facilitate the discovery of a series of important features of (i) the narrative frameworks that bolster conspiracy theories and their constituent rumors circulating on and across social media, and (ii) the interaction between social media and the news.

fig. 4: number of common neighbors between "coronavirus" and "conspiracy theory" over time in the news reports: across all 101 segments of 5-day intervals, the number of simple paths empirically increases rapidly, suggesting the closer ties between the two entities across time.

the main communities and their interconnections in the aggregated social media corpus reveal the centrality of several significant conspiracy theory narrative frameworks. in particular, groupings of large communities form expansive frameworks and may well represent the dominant conspiracy theory frameworks in the corpus.
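the shared-neighbor count underlying fig. 4 (eq. (1) in the methods) amounts to a set intersection over an adjacency map. a minimal sketch, with the graph encoding and the toy neighbor sets as assumptions:

```python
def common_neighbors(adj, a, b):
    """adj: dict mapping each node to the set of its neighbours in the co-occurrence graph."""
    return adj.get(a, set()) & adj.get(b, set())

# toy co-occurrence neighbourhoods for the two central concepts
adj = {
    "coronavirus": {"wuhan", "bill gates", "5g"},
    "conspiracy theory": {"bill gates", "5g", "qanon"},
}
shared = common_neighbors(adj, "coronavirus", "conspiracy theory")
```

recomputing this count for each 5-day window of the news graph gives the rising curve described in fig. 4; here `shared` contains the two actants the toy neighbourhoods have in common.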
in other cases, coherent narrative frameworks can be discovered within a single community. these communities may have some connections or overlap with communities describing the contours of the pandemic, as well as to other small communities that provide support for aspects of the narrative framework. we find four large community groupings which present easy-to-interpret conspiracy theory frameworks. the first of these groupings is comprised of nodes from communities 5, 56, and 82 (see fig. 7). the narrative framework suggests that the corona virus is closely linked to the 5g cellular network, and bill gates's associations with both faulty research and wide-scale vaccination programs. eager to expand a global vaccination program to help prevent the explosion of the world's population, gates has contributed to the design of the corona virus, which can be characterized as a bio-weapon. potentially activated by 5g signals (a technology that is also the result of faulty research), the virus is intended to eradicate various populations throughout the world. certain key sub-nodes play key roles in connecting these communities to create the conspiracy theory narrative. for example, the sub-node "facility, faulty, practice, research" interacts with "bill gates" and his supposed obsession with exploding populations and vaccination efforts, the "virus"' origin story, and the emerging "5g" technology, thereby offering one potential route traversed by conspiracy theorists. this traversal aligns three distinct communities as the conspiracy theorists create a unifying theory. none of these key nodes are innocuous, but rather have all been classified as threats (see fig. 10).

fig. 7: communities with index 5, 56 and 82 sequentially describe the conspiracy theory surrounding "bill gates" and "5g". the words in bold are the sub-nodes present in the narrative network and the yellow-highlighted phrases are automatically extracted relationships between the sub-nodes. the blue-highlighted sub-node is a key actant that exists in all 3 communities and is one of the connecting components between "bill gates" and the conspiracy theory around "5g". community 5 describes gates's supposed obsession with population control along with his funding of faulty research. the same research is alleged to have created "5g" as a means of spreading the "virus", which is allegedly intended as a "bioweapon". community 56 takes it a step further, tying "5g" to its carrier frequency and the associated interactions of this frequency with the human body. community 82 concludes the origin story of the virus (back to the "faulty" research conducted by "gates") and mentions the cell-level interaction between the virus and the body.

coverage percentage is the fraction of actants in news report communities that are also found in social media network communities.

a second group is comprised of nodes from communities 1, 40, and 65. in this narrative framework, the limited information about the virus released by the chinese communist party is coupled to the virus's origin either in chinese wet markets selling pangolins, presumably for human consumption, or labs studying bats (or potentially both). the narrative framework is informed by bigoted discussions of chinese food practices coupled to an ongoing critique of the truthfulness of chinese researchers.

fig. 8: the histogram of threat scores across the sub-nodes from the phrase classifier. the bi-modality encourages binary classification thresholds around 0.2. in our networks, we use 0.25, which is at the 57th percentile of sub-nodes classified as threats.

table 4: sample threat scores. note the increasing threat score from the sub-nodes "china" to "chinese" to "chinese, government", which reflects the threat carried by more specific "china"-contextualized actants.
several intriguing elements of the narrative framework are the "fluoroquinolone" sub-node, an antibiotic which is also a favored medication in other narrative frameworks, and the inclusion of a bill gates sub-node. both of these suggest clear points of potential attachment with other conspiracy theory frameworks, such as the 5g one described above, and another one focused on information cover-ups and the virus-as-hoax (see fig. 11). a third group, comprised of communities 0, 23, 24, 121 and 150, presents an expansive narrative framework. here, the virus is presented as an engineered bioweapon, either deliberately or accidentally released from a lab. confirmation of the engineered nature of the virus can be provided by scientists (pulmonologists) or members of the military (researcher, soldier). the sub-nodes in the graph set up a clear dichotomy between western governments and the chinese government, and the controlling chinese communist party (ccp), all of which are classified as threats. it is worth noting that the ccp abbreviation is used by some social media contributors as a reference to the "chinese communist plague", a racially derogatory term for the virus analogous to trump's reference to the virus as the "kung flu" [47]. aspects of the framework also support discussions of the economic impact of the pandemic, as well as the role of "globalists" in promoting the danger of the virus through inaccurate reporting and inflated counts of victims across the world, including europe (see fig. 12). a fourth grouping, comprised of communities 18, 21 and 75, constitutes a narrative framework proposing that the pandemic is a hoax on the same level as the global warming "hoax". this framework includes actants such as trump, the american news commentator sean hannity, the right-wing podcaster nick fuentes and republicans writ large, who are fighting against globalists, democrats, scientists such as anthony fauci and, in keeping with the long history of anti-semitism in conspiracy theorizing, the "jews", all of whom have conspired to perpetrate this hoax, which is wreaking havoc on the economy. the british conspiracy theorist david icke appears with a direct link to a node representing the "jewish globalists". interestingly, albeit perhaps not surprisingly, bill gates appears once again in this framework, now more closely related to the mueller inquiry and democrats such as obama. while the goal of the hoax is not made explicit, the framework bolsters the erroneous suggestion that the virus presents with mild symptoms, and is no more dangerous than the flu (see fig. 13). the belief that the pandemic is a hoax inspires the "#filmyourhospital" movement as a means for publicizing the "discovery" that the virus poses no meaningful threat other than the economic threat of stay-at-home orders [31]. several related narrative frameworks intersect with the main "hoax" framework in interesting ways. for example, a grouping of communities 6, 15 and 45 reveals a discussion of the disease, its relation to sars and the flu, the testing regimen, the accuracy of the tests and the efficacy of masks. it also includes an apparent critique of media figures who often endorse conspiracy theories (see fig. 14).

fig. 9: the sub-node "ccp" has associated noun phrases shown in the gray box. the noun phrases have descriptive svcop relationships, whose descriptive phrases are sampled in the light red and green blobs. the phrases in the red blob are classified as threats by our majority classifier and the phrases in the green blob are classified as non-threats. the highlighted and bold descriptive phrases are sample phrases for which the nearest neighbors are shown. the knn classifier reasonably clusters phrases that are syntactically different but semantically similar using the bert embedding. darker nearest neighbors occur more frequently.
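the majority-vote knn classification over phrase embeddings, and the resulting per-sub-node threat score (the fraction of a sub-node's descriptive phrases voted "threat"), can be sketched as follows. this is a simplified illustration: the toy 2-d vectors stand in for bert embeddings, and the function names are assumptions.

```python
def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(query, labelled, k=3):
    """Majority vote over the k labelled embeddings most similar to the query."""
    ranked = sorted(labelled, key=lambda item: cosine(query, item[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

def threat_score(phrase_vectors, labelled, k=3):
    """Fraction of a sub-node's descriptive phrases classified as threats."""
    labels = [knn_classify(v, labelled, k) for v in phrase_vectors]
    return labels.count("threat") / len(labels)

# toy labelled embeddings standing in for annotated phrase vectors
labelled = [
    ((1.0, 0.0), "threat"),
    ((0.9, 0.1), "threat"),
    ((0.0, 1.0), "safe"),
    ((0.1, 0.9), "safe"),
]
```

with these toy vectors, a query near the first cluster, such as `(0.95, 0.05)`, is voted "threat", and a sub-node with one phrase in each cluster receives a threat score of 0.5, analogous to the 53% reported for "ccp".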
while aspects of this narrative framework can be deployed as part of the more elaborated hoax framework (which seems to be the case, particularly given the "threat" coding of "masks"), it can also be activated in the service of a counter-hoax narrative, given the inclusion of anti-conspiratorial content. in that regard, this particular grouping of communities captures the ongoing negotiation of the framework and the activation of parts of a framework as individuals come together (or move apart) to construct a totalizing narrative. there are numerous other nucleations of narrative frameworks in the overall space that are worth noting. a particularly interesting community is 51, which has strong connections to the well-known "pizzagate" conspiracy theory, as well as connections to the much broader qanon conspiracy theory [67]. the intrusion of qanon, and the alignment of the pandemic with the broader narrative of a ring of pedophile human traffickers, gained strong support in certain conversations associated with the pandemic-as-hoax frameworks. it also aligns well with the belief, noted above, that tents erected in central park were part of an operation to save children trafficked through underground tunnels, a key feature of the "pizzagate" conspiracy theory [67]. these smaller frameworks suggest that there is a lively, ongoing negotiation of community beliefs about the pandemic. as the conversations progress, many of these smaller narratives are likely to become more closely connected with larger groupings, while others are likely to fade away. community 42, for instance, describes the pandemic as a deliberate attack on the nation perpetrated by the democrats; despite the impact on the global economy, the virus is no worse than a bad flu. such a community could be easily subsumed in the broader virus-as-hoax narrative framework.
two additional examples of much smaller nucleations would be community 171, which consists of three sub-nodes: "cell phone", "ear" and "state surveillance", and community 123, with six semantically meaningful sub-nodes: "cancer", "cell phone", "cell tower", "microwave", "human cell", and "substance". one would expect, as discussions continue, that these communities would move closer to the 5g conspiracy theory narrative framework. this tendency toward monological thinking already appears to be at work in the alignment of the 5g conspiracy theory with the biological weapons conspiracy theory, with both of those frameworks sharing close connections with the narrative framework describing the pandemic as a whole. other alignments seem possible, with the 5g conspiracy and the hoax conspiracy potentially aligning through community 32, which in general focuses on italy and quarantine measures across europe. the inclusion of two peripheral sub-nodes, one labeled "5g, chemtrailz" and another that labels the quarantines "ridiculous", not only provides an opportunity to challenge the meaningfulness of quarantine measures (thus providing a potential alignment with the hoax narrative), but also provides a connection between 5g and the longstanding "chemtrailz" conspiracy theory [75].

fig. 10: a conspiracy theory narrative framework that links the virus to 5g, bill gates, and vaccination. nodes have been scaled by ner mentions; those with fewer than 250 mentions have been filtered for the sake of clarity. nodes are colored by community, and outlined with red if they represent a threat.

in earlier work on conspiracy theories, we discovered that conspiracy theorists, as part of their theorizing, tend to collaboratively negotiate a single explanatory narrative framework, often composed of a pastiche of smaller narratives, aligning otherwise unaligned domains of human interaction as they develop a totalizing narrative [27, 55, 67].
in many conspiracy theories, this coalescence of disparate stories into a single explanatory conspiracy theory relies on the conspiracy theorists' self-reported access to hidden, secret, or otherwise inaccessible information. they then use this information to generate "authoritative" links between disparate domains, engaging in what goertzel has labeled "monological thinking" [27]. for the current pandemic, however, a single unifying corpus of special or secret knowledge does not yet exist; there are no "smoking guns" to which the conspiracy theorists can point, such as the wikileaks emails on which pizzagate conspiracy theorists relied [67]. consequently, the social media space is crowded by a series of conspiracy theories. in the various forums we considered, proponents of different narratives fight for attention, while also trying to align the disparate sets of actants and interactant relationships in a manner that allows for a single narrative framework to dominate and, by extension, to provide the "winning" theorists with the bragging rights of having uncovered "what is really going on." this type of jockeying for position is also reflected in the news.

fig. 11: communities comprising the narrative framework suggesting that the virus is a result of chinese wet markets and deliberate information cover-ups. the narrative framework focuses heavily on markets, exotic animals such as pangolins, and the role of the chinese communist party in hiding information about the initial outbreak. nodes are colored by community, and outlined with red if they represent a threat. the graph is filtered to show nodes with degree ≥ 2.

fig. 12: communities comprising the covid-19-as-bioweapon narrative framework. the narrative framework focuses heavily on laboratories and the potential role of the virus as a weapon. nodes are colored by community, and outlined with red if they represent a threat. the graph is filtered to show nodes with degree ≥ 2.

fig. 13: the communities comprising the globalist hoax narrative framework: here, a globalist cabal has conspired to foist the hoax of the corona virus on the world, with the virus presenting with mild flu-like symptoms. trump and his allies are fighting against the democrats and their surrogates to stave off the economic impact of the hoax. nodes are colored by community, and outlined with red if they represent a threat. the graph is filtered to show nodes with degree ≥ 2. two nodes, "filmyourhospital" and "hoax, see global warming", have been highlighted in yellow.

importantly, there is a lively interaction between the news media and the discussions about the pandemic on social media. consequently, while the news media reports on the conspiracy theories that are evolving on social media, the social media groups point back toward reporting on the pandemic in the news media. the interaction is not, however, one of simple endorsement. rather, the conversations on social media frequently contest, poke holes in, and otherwise challenge the narratives presented in the news media. in turn, the news media explores not only the content but also the veracity (or lack thereof) of the social media discussions. unlike normal fact checking, however, the rejection in the news media of a particular social media position may be fuel for the conspiracy theorists, given their frequent suspicion of the news media. this interaction between social media and the news, modeled by the cross-correlation of relative coverage scores in fig. 5, indicates that the information flow between the two corpora is swift: the correlation-maximizing offset of days was 0 or nearly 0 for all considered actant groups. since the data is smoothed over five days, this finding implies that the major actants appearing in narrative frameworks get aligned within days of appearing in either channel.

fig. 14: a narrative framework that can be deployed by multiple groups. the framework focuses on the relationship between the virus, sars, the flu and the testing regimen. also included are nodes representing research on the virus and questions of immunity. a small disconnected component on the filtered graph provides a critique of qanon and glenn beck. nodes are colored by community, and outlined with red if they represent a threat. the graph is filtered to show nodes with degree ≥ 2.

a qualitative example expanding upon this dynamic of knowledge synchronization between the news and social media is observed in table 5, where "bill gates" was earlier highlighted as an important actant. news reports on april 4 actively mentioned gates's prediction of the covid-19 outbreak. at the same time, 4chan threads were embroiled in the discussion of "5g" causing the "coronavirus". perhaps the shock of such an accurate prediction, and bill gates's continued investment in pandemic prevention and vaccine research, helped motivate david icke, an influential conspiracy theorist, to proclaim on april 7 that "bill gates belongs in jail", echoing comments of a florida pastor, adam fannin, who believes gates is involved in a global effort to depopulate the world. in the ensuing days after icke's comments, 4chan threads began denigrating "gates", alleging him to be a part of a satanic cabal (thereby creating a direct link to "pizzagate"), labeling him the anti-christ, and accusing him of being an opportunist forcing the world into a crisis to further his alleged forced vaccination campaign. news reports, seemingly in response, summarized the conspiracy theories circulating on 4chan communities with headlines such as, "the dangerous coronavirus conspiracy theories targeting 5g technology, bill gates, and a world of fear" [65].
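the five-day smoothing applied to the time series before the lag analysis can be sketched as a simple moving average. whether the paper's window is centred or trailing is not stated, so the centred variant below is an assumption for illustration.

```python
def moving_average(series, window=5):
    """Centred moving average with shrinking windows at the edges (illustrative)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = moving_average([0, 0, 5, 0, 0], window=5)
```

a single-day spike of 5 is spread across the five-day window, so `smoothed[2]` becomes 1.0; this kind of smoothing is why a zero-day correlation peak implies alignment "within days" rather than to the exact day.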
as the global covid-19 pandemic continues to challenge societies across the globe, and as access to accurate information both about the virus itself and what lies in store for our communities continues to be limited, the generation of rumors and conspiracy theories will continue unabated. although news media have paid considerable attention to the well-known qanon conspiracy theory (perhaps the most capacious of conspiracy theories of the trump presidency), social media conversations have focused on four main conspiracy theories: (i) the virus as related to the 5g network, and bill gates's role in a global vaccination project aimed at limiting population growth; (ii) a cover-up perpetrated by the chinese communist party after the virus leaped to human populations, based largely on chinese culinary practices; (iii) the release, either accidental or deliberate, of the virus from, alternately, a chinese laboratory or an unspecified military laboratory, and its role as a bio-weapon; and (iv) the perpetration of a hoax by a globalist cabal in which the virus is no more dangerous than a mild flu or the common cold. as the conversations evolve, these conspiracy theories appear to be connecting to one another, and may eventually form a single coherent conspiracy theory that encompasses all of these actants and their relationships. at the same time, smaller nucleations of emerging conspiracy theories can be seen in the overall social media narrative framework graph. because the news cycle appears to chase social media conversations, before feeding back into them, there is a pressing need for systems that can help monitor the emergence of conspiracy theories as well as rumors that might presage real-world action. we have already seen people damage 5g infrastructure, assault people of asian heritage, deliberately violate public health directives, and ingest home remedies, all in reaction to the various rumors and conspiracy theories active in social media and the news.
we have shown that a pipeline of interlocking computational methods, based on sound narrative theory, can provide a clear overview of the underlying generative frameworks for these narratives. recognizing the structure of these narratives as they emerge on social media can assist not only in fact checking but also in averting potentially catastrophic actions. deployed properly, these methods may also be able to help counteract various dangerously fictitious narratives from gaining a foothold in social media and the news. at the very least, our methods can help to identify the emergence and connection of these complex, totalizing narratives that have, in the past, led to profoundly destructive actions.

references
covid-19 and the 5g conspiracy theory: social network analysis of twitter data
flair: an easy-to-use framework for state-of-the-art nlp
the psychology of rumor
a resistant strain: revealing the online grassroots rise of the antivaccination movement
becoming a nazi: a model for narrative networks
science vs conspiracy: collective narratives in the age of misinformation
six zombie claims about the coronavirus that just won't go away
fast unfolding of communities in large networks
pandemic populism: facebook pages of alternative news media and the corona crisis, a computational content analysis
an investigation of the laws of thought: on which are founded the mathematical theories of logic and probabilities
network visualization in the humanities (dagstuhl seminar 18482)
coronavirus cannot be cured by drinking bleach or snorting cocaine, despite social media rumors. cbs news
measles, moral regulation and the social construction of risk: media narratives of "anti-vaxxers" and the 2015 disneyland outbreak
the long prose form
natural language processing (almost) from scratch
botornot: a system to evaluate social bots
the practice of everyday life
the crack on the red goblet or truth and modern legend
leave none to tell the story: genocide in rwanda
bert: pre-training of deep bidirectional transformers for language understanding
stop the coronavirus stigma now
kenya governor under fire after putting hennessy bottles in coronavirus care packages. cnn news
#covid-19 on twitter: bots, conspiracies, and social media activism
the global grapevine: why rumors of terrorism, immigration, and trade matter
whispers on the color line: rumor and race in america
rumor mills: the social impact of rumor and legend
belief in conspiracy theories
éléments pour une théorie de l'interprétation du récit mythique
'life in the tv': the visual nature of 9/11 lore and its impact on vernacular response
rumors and realities: making sense of hiv/aids conspiracy narratives and contemporary legends
americans want to see what's happening in hospitals now. but it's hard for journalists to get inside
vaccinations and public concern in history: legend, rumor, and risk perception
conspiracy theories in american history: an encyclopedia
provoking genocide: a revised history of the rwandan patriotic front
narrative analysis: oral versions of personal experience. essays on the verbal and visual arts
'celebrating arabs': tracing legend and rumor labyrinths in post
talk about the past in a midwestern town
narrative text summarization
one of the internet's oldest fact-checking organizations is overwhelmed by coronavirus misinformation, and it could have deadly consequences
the conspiracy theory handbook
data mining and content analysis of the chinese social media platform weibo during the early covid-19 outbreak: retrospective observational infoveillance study
legends of hurricane katrina: the right to be wrong, survivor-to-survivor storytelling, and healing
hdbscan: hierarchical density based clustering
the infamous #pizzagate conspiracy theory: insight from a twitter-trails investigation
graphing the grammar of motives in national security strategies: cultural interpretation, automated text analysis and the drama of global politics
trump sparks backlash over racist language, and a rallying cry for supporters
conspiracy theories and the paranoid style(s) of mass opinion
oral repertoire and world view. an anthropological study of marina takalo's life history
the morphology of the folktale
clustype: effective entity recognition and typing by relation phrase-based clustering
v-measure: a conditional entropy-based external cluster evaluation measure
rumor and gossip: the social psychology of hearsay
psychology of rumor reconsidered
conspiracies online: user discussions in a conspiracy community following dramatic events
'the government spies using our webcams': the language of conspiracy theories in online discussions
how false hope spread about hydroxychloroquine to treat covid-19, and the consequences that followed
go eat a bat
an automated pipeline for character and relationship extraction from readers' literary book reviews on goodreads
covid-19\_conspiracy-theories
improvised news: a sociological study of rumor. ardent media
interpreting oral narrative. folklore fellows
oligrapher
naturally, we now have a cottage industry of coronavirus truther assholes. the daily beast
the dangerous coronavirus conspiracy theories targeting 5g technology, bill gates, and a world of fear
rumors, false flags, and digital vigilantes: misinformation on twitter after the 2013 boston marathon bombing. iconference 2014 proceedings
an automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: bridgegate, pizzagate and storytelling on the web
toward a generative model of legend: pizzas, bridges, vaccines, and witches
"mommy blogs" and the vaccination exemption narrative: results from a machine-learning approach for story aggregation on parenting social media sites
satanic panic: the creation of a contemporary legend
on the spread of tradition
fearing coronavirus, arizona man dies after taking a form of chloroquine used to treat aquariums
prevalence of low-credibility information on twitter during the covid-19 outbreak

key: cord-254191-5cxv9l3c authors: islam, a.k.m. najmul; laato, samuli; talukder, shamim; sutinen, erkki title: misinformation sharing and social media fatigue during covid-19: an affordance and cognitive load perspective date: 2020-07-12 journal: technol forecast soc change doi: 10.1016/j.techfore.2020.120201 sha: doc_id: 254191 cord_uid: 5cxv9l3c

social media plays a significant role during pandemics such as covid-19, as it enables people to share news as well as personal experiences and viewpoints with one another in real-time, globally. building off the affordance lens and cognitive load theory, we investigate how motivational factors and personal attributes influence social media fatigue and the sharing of unverified information during the covid-19 pandemic.
accordingly, we develop a model which we analyse using structural equation modelling and neural network techniques with data collected from young adults in bangladesh (n = 433). the results show that people who are driven by self-promotion and entertainment, and those suffering from deficient self-regulation, are more likely to share unverified information. exploration and religiosity correlated negatively with the sharing of unverified information. however, exploration also increased social media fatigue. our findings indicate that the different use purposes of social media introduce problematic consequences, in particular, increased misinformation sharing. covid-19 was not only a global pandemic but, according to the director of the who, also an "infodemic", highlighting dire issues arising from the abundance of misinformation and fake news circulating about covid-19 (laato et al., 2020a). in response to the infodemic, a significant number of resources were directed to curbing the spread of misinformation and to ensuring the availability of reliable information about covid-19 to the public (zarocostas, 2020). among the adverse effects observed during the infodemic were inefficient government (mahase, 2020) and individual-level (vaezi and javanmard, 2020) responses. another concerning observation was health issues such as cyberchondria or increased anxiety (farooq et al., 2020). previous work has highlighted that social media plays a crucial role in the spread of misinformation (allcott and gentzkow, 2017), which calls into question what the platforms could do to prevent the spread of fake news (figueira and oliveira, 2017). consequently, the three main research areas concerning fake news are divided into (1) the (technical) prevention of the spread of fake news; (2) the impacts of misinformation; and (3) the relationship between misinformation and population health (laato et al., 2020a).
to support these three areas of concern, human behaviour related to social media use and misinformation sharing needs to be understood. users of social media platforms such as facebook are reported to be driven by several motivators, such as a wish for entertainment, a wish to stay informed and a desire to know the social activities of friends (kietzmann et al., 2011; quan-haase and young, 2010). recent studies have also pointed out that social network sites are often used for self-promotion and exhibitionism purposes (islam et al., 2019). as such, social network sites differ from instant messaging, which is more personal, less self-promoting, more direct and driven by a wish to maintain and develop relationships (quan-haase and young, 2010). modern social media platforms such as facebook offer multiple ways in which people can interact, including social activities, instant messaging, photo sharing, video streaming and the sharing of news and articles. social media ecosystems can cause or reinforce the stratification of people into like-minded social sub-groups (guerra et al., 2013). this is the result of individuals' own choices, driven by psychological tendencies as well as ai-based recommendation systems that aim to provide users with content they are likely to enjoy (spohr, 2017). the lack of critique of ideas and the amplification of radical ones in the virtual echo-chambers created by social media have been claimed to contribute to the increased dissemination of misinformation (barberá et al.). during covid-19, clear communication of the severity of the situation and of the recommended health measures was needed to ensure that people took correct action and did not suffer from unnecessary anxiety (farooq et al., 2020). the abundance of unclear, ambiguous and inaccurate information during covid-19 led to information overload and accelerated health anxiety (cyberchondria) as well as misinformation sharing (laato et al., 2020a).
as social media amplifies the spread of news, and people read news through links shared on social media (allcott and gentzkow, 2017; thompson et al., 2019; ku et al., 2019), understanding the role that social media plays in the sharing of misinformation is essential. in this setting, social media fatigue (smf) and its connection to misinformation sharing may reveal further insights into human behaviour on social media and the antecedents of the spread of misinformation. thus, our research aims to examine this relationship, as well as the personal attributes and motivational factors connected to the two constructs, via the theoretical frameworks of affordances (norman, 1988) and cognitive load theory (clt) (sweller, 2011). with our paper, we expand the existing literature on misinformation sharing by connecting it to the affordances of social media and to smf. the study context of covid-19 enables us to study our research model at a time when people were faced with a potentially life-threatening disease that also had severe consequences for the economy. the rest of this study is structured as follows. in the background section, we review the extant literature on misinformation sharing as well as smf. we then present our theoretical foundation before moving on to forming the hypotheses for our research model. the methods and results follow the hypothesis section. in the discussion section, we go through our key findings, the theoretical and practical implications of our results, as well as limitations and future work. via the internet and social media, individuals have access to an ever-increasing quantity of information. however, the availability of information has reportedly not correlated with increased knowledge among individuals (pentina and tarafdar, 2014). while information is available, it might not be clearly structured, organised or even accessible.
previous studies have given four primary reasons for this: (1) a proportion of the available information is misinformation; (2) much of the available information is irrelevant; (3) information entropy: information is poorly organised and presented; and (4) information overload: there is simply too much information for humans to make sense of (pentina and tarafdar, 2014; laato et al., 2020a). the use of a few trusted online sources has been recommended in the literature (farooq et al., 2020; misra and stokols, 2012; zarocostas, 2020). this becomes especially important in situations such as the covid-19 pandemic, where the novelty, rapid development and unpredictability of the situation can give rise not only to misinformation but to poorly structured and presented information as well. resolving these issues is essential to manage and communicate with individuals about the situation and to boost their intrinsic motivation to adapt to recommended health measures (farooq et al., 2020). social media can be regarded as an amplifier of news articles, both real and fake (allcott and gentzkow, 2017). in 2019, the most popular social media platform, facebook, had over 1.5 billion registered users, 62% of whom use the platform for keeping up with news (thompson et al., 2019). allcott and gentzkow (2017) observed that during the 2016 us elections, roughly one-sixth of the population regarded social media as their primary news source. however, a recent study on adolescents' online behaviour showed that a third considered social media their primary news source, surpassing all other sources (ku et al., 2019). besides, studies often separate online websites from social media as a news source (allcott and gentzkow, 2017; ku et al., 2019); however, website news stories are typically disseminated and shared onwards specifically via social media. consequently, social media is particularly susceptible to being used as a platform for fake news dissemination.
almost half (42%) of people who share news reported having, at least at some point, shared misinformation (chadwick and vaccari, 2019). there have been reports of bots being used to increase the visibility of fake news (howard et al., 2017; wang et al., 2018), as well as attempts to algorithmically detect fake news (del vicario et al., 2016). while the bot networks spreading this news can be detected and stopped, algorithms cannot, in many cases, distinguish fake news from real news (del vicario et al., 2016). as such, attempts to algorithmically detect and censor fake news articles on social media would produce a large number of false positives and false negatives, contributing to censorship while still leaving room for fake news to be displayed. however, humans can also make mistakes in identifying fake news, whether on purpose or unintentionally (vicario et al., 2019). previous studies have identified various intrinsic predictors of fake news sharing, such as (1) smf; (2) fear of missing out (fomo); (3) inexperience using the internet; (4) lack of information verification skills; (5) laziness; (6) information overload; and (7) online trust (laato et al., 2020a; khan and idris, 2019; talwar et al., 2019). also, people are heavily affected by confirmation bias, meaning they are more likely to believe information when it aligns with their pre-existing views, regardless of whether the information is reliable (vicario et al., 2019). in the context of pandemics, physical proximity and perceived severity of the situation have been shown to increase information sharing in general (huang et al., 2015). on the other hand, a recent study during covid-19 found that perceived severity did not increase the intention to share unverified information (laato et al., 2020a). nevertheless, we maintain that rapidly emerging new situations, coupled with a large quantity of ill-structured information, may contribute to increased fake news sharing (huang et al., 2015).
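the false-positive/false-negative trade-off described above can be made concrete with a small sketch: a toy score-threshold filter applied to synthetic labelled items. the scores, labels and thresholds are invented for illustration; real detectors such as those discussed by del vicario et al. (2016) are far more sophisticated, but face the same tension between censoring real news and letting fake news through.

```python
# toy illustration of the error trade-off in automated fake-news filtering.
# all data here is synthetic; this is not any platform's actual system.

def error_rates(scores, labels, threshold):
    """flag items with score >= threshold as fake; return (fpr, fnr)."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    real = labels.count(0)
    fake = labels.count(1)
    return fp / real, fn / fake

# labels: 1 = fake, 0 = real; scores: higher = "looks more fake"
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]

strict = error_rates(scores, labels, 0.35)   # aggressive filtering
lenient = error_rates(scores, labels, 0.75)  # permissive filtering
# lowering the threshold cuts the miss rate (fnr) but censors more
# real news (fpr); raising it does the opposite
```

on this toy data the strict threshold misses no fake items but censors a quarter of the real ones, while the lenient threshold censors nothing but lets half the fake items through, which is the trade-off the text describes.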
in addition to ensuring the availability of accurate and well-structured information and directing people towards it, there have been several other recent suggestions in the academic literature on how to mitigate the negative impacts of fake news and stop humans from spreading it (nekmat, 2020). recent studies have shown that users are more critical towards online news if they have reason to suspect that the quality of the news is low. nudging people to pay attention to the source(s) of the news they are reading increases their criticality towards the information and makes them less likely to share fake news onward (nekmat, 2020). these findings show promise for how social media platforms could influence people's news sharing and reading behaviour. another recent article proposed the use of crowdsourcing to fact-check news as well as to confirm its authenticity (pennycook and rand, 2019). in a way, the crowdsourcing of news articles is already in place. wikipedia, for example, can be regarded as a crowdsourced database of information. however, news articles need to be produced and disseminated rapidly, which means that measures for detecting fake news also need to be quick. the general problem with currently available suggestions for curbing the spread of misinformation is that they involve a trade-off: while the number of fake news items shared can be minimised, other negative consequences can begin to emerge, such as limits on users' freedom (del vicario et al., 2016). smf has several, sometimes conflicting, definitions (xiao and mou, 2019), such as "persistent impulses to back away from social media due to information and communication overload" (bright et al., 2015) and "a subjective and self-evaluated feeling of tiredness from social media usage" (lee et al., 2016). the definition of bright et al. (2015) relates fatigue to cognitive overload. however, it simultaneously reduces the concept of fatigue to the two components of information and communication overload.
on the other hand, the definition of lee et al. (2016) is broader, but as a downside provides little theoretical guidance for understanding the factors which lead to smf. one argument for using the definition of lee et al. (2016) is that previous studies have identified several factors contributing to smf besides information and communication overload (bright et al., 2015) such as depression (cao et al., 2019) . according to piper et al. (1989) , fatigue can be acute or chronic. acute fatigue is temporary, normal and short while chronic fatigue is more permanent (aaronson et al., 1999) . in this study, we understand smf based on the above provided definitions (bright et al., 2015; lee et al., 2016) to be a temporary, however systematically triggered, state of fatigue caused by social media use. previous studies have shown compulsive social media use to be one of the primary predictors of smf, and have further demonstrated that it can lead to anxiety and depression (dhir et al., 2018) . another study conceptualised anxiety and depression as the antecedents of fatigue instead, adding a third impacting factor, cyberbullying as a predictor (cao et al., 2019) . this highlights an issue in the previous literature of smf where there seems to be a lack of a clear theoretical framework explaining what are the antecedents and what are the consequences of smf. furthermore, some studies have provided models studying the relationships between seemingly random factors and smf, resulting in a list of factors predicting it. for example, dhir et al. (2019) showed privacy concerns, self-disclosure, parental encouragement and parental worry to increase smf. furthermore, xiao and mou (2019) reviewed the literature on what causes smf and found 23 relevant quantitative studies, which gave a plethora of reasons that cause smf. 
these included fear of missing out (fomo), privacy concerns, technology-related factors, social media users' attitudes and personality, social overload, cognitive overload, anxiety, excessive use, cyberbullying, depression, distraction, parental influence, ubiquitous connectivity, shame, social comparison and complexity, among many others (xiao and mou, 2019). two main theories have been suggested to make sense of what causes smf: cognitive load theory (clt) (bright et al., 2015; islam et al., 2018) and the stressor-strain-outcome model (xiao and mou, 2019). both theories model smf as the dependent variable and theorise the factors influencing it. according to clt, smf can be predicted by information overload, communication overload, system feature overload, social overload and connection overload (islam et al., 2018). there are also moderating factors present, as islam et al. (2018) identified multitasking computer self-efficacy as attenuating the effect of information overload. the presence of attenuating factors, as well as other factors, has also been discussed in studies using the stressor-strain-outcome theory (xiao and mou, 2019; whelan et al., 2020b). this theory has been used to look at how social media characteristics give rise to stressors, such as privacy invasion and invasion of life, which then lead to smf (xiao and mou, 2019). from this brief look into social media fatigue, we draw three key points. first, the quantity and quality of available information have a significant impact on the development of smf (bright et al., 2015; pentina and tarafdar, 2014). second, the social media platform, the user and the interaction between the two all need to be understood to explain smf and its behavioural impacts. finally, clt (sweller, 2011) and the stressor-strain-outcome model (xiao and mou, 2019) offer promising theoretical frameworks for understanding smf (bright et al., 2015; islam et al., 2018).
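the clt account above, in which several overloads drive smf and multitasking computer self-efficacy attenuates the effect of information overload (islam et al., 2018), corresponds statistically to a regression with an interaction (moderation) term. the following is a minimal sketch on simulated standardised data; the coefficients, sample size and noise level are invented purely to show the moderation structure, not estimates from any real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated respondents (invented; not the paper's sample)

# simulated standardised predictors (islam et al., 2018 name five
# overloads; two suffice to illustrate the moderation structure)
info_overload = rng.normal(size=n)
comm_overload = rng.normal(size=n)
self_efficacy = rng.normal(size=n)  # hypothesised moderator

# smf rises with both overloads; the negative interaction coefficient
# means the information-overload effect weakens as self-efficacy grows
smf = (0.5 * info_overload + 0.3 * comm_overload
       - 0.2 * info_overload * self_efficacy
       + rng.normal(scale=0.3, size=n))

# recover the coefficients by ordinary least squares
X = np.column_stack([info_overload, comm_overload,
                     info_overload * self_efficacy])
beta, *_ = np.linalg.lstsq(X, smf, rcond=None)

# marginal effect of information overload at a given self-efficacy level:
# d(smf)/d(info_overload) = beta[0] + beta[2] * self_efficacy
effect_low = beta[0] + beta[2] * (-1.0)   # low self-efficacy
effect_high = beta[0] + beta[2] * (+1.0)  # high self-efficacy
```

the estimated marginal effect of information overload on smf is larger at low self-efficacy than at high self-efficacy, which is exactly what "attenuation" by a moderator means in this framework.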
for the current study, we adopt the affordance lens to understand how social media users interact with the platform during the covid-19 pandemic. to understand smf and the sharing of unverified information, we also draw from clt (sweller, 2011). in this section, we present these two theoretical approaches by connecting them to the topic of our study. the term affordance was introduced by the psychologist james gibson (1966), who conceptualised it to describe the potential actions that an actor can take in a specific situation. a door handle, for example, has the logical affordance of being used to open a door. it may also be used in other ways, such as for scratching one's back or as a stand for clothing. gibson (1977) stated that affordances are independent of the actor's ability to recognise them. norman (1988) thought this expansion of the concept of affordance was unnecessary, and re-defined affordances as only those actions which an individual realises to exist. in doing so, affordances were tied to the objectives, values, thoughts and capabilities of individuals (norman, 1988). not all scholars agreed with norman's conceptualisation of the term, and this gave birth to two schools of thought, one supporting norman's definition and the other following that of gibson. in this study, we adopt the definition of norman (1988) and divide affordances into (1) technical affordances, the opportunities that the technology, in our case social media platforms, provides in general; (2) individual affordances, the opportunities given to the individual; and (3) contextual affordances, the opportunities provided by the context, in our case the covid-19 pandemic. with technical affordances, our particular focus is on those technical features that afford social media users the ability to read and share news and information.
the platforms provide affordances to explore content, as well as opportunities for individuals to promote themselves or their ideologies, or simply to have fun. therefore, when looking at individual affordances, we are concerned with personality factors (i.e. capabilities). religious people might, for example, use social media to share religious news and posts. in contrast, people with low levels of self-regulation may bombard their social media network with content they have given little thought to. with regard to contextual affordances, the covid-19 pandemic gave birth to a new situation, with countless news items emerging relating to the disease, policies, recommended health measures and various other topics. accordingly, users were provided with contextual affordances to share and comment on this news. there are two main benefits of using the affordance perspective for social media research: (1) it can provide new perspectives on how social media shapes its users' interactions; and (2) it can help us understand how users' inner needs can shape and regulate social media usage. accordingly, social media affordances are concerned with human-computer interaction and can help us understand this relationship. as such, affordances can help us understand, and then minimise, the spread of misinformation. on the other hand, one of the primary outcomes that the prior literature highlights from social media use is smf (whelan et al., 2020b). in order to understand how fatigue develops, we now turn to clt. clt postulates that human working memory has a limited capacity, which may be overloaded if presented with too much information (sweller, 2011). the evolutionary reaction to such situations is to back away and retreat to safer ground (sweller, 2011). as an example, imagine our ancestors living in a jungle. there is an obvious benefit to going out and exploring, such as finding food and resources.
however, exploration also leads to unknown territory and situations where humans can no longer predict what will happen next, making the situation potentially perilous. accordingly, retreating to a familiar environment away from potential peril has been a beneficial thing to do. this evolutionary mechanism still affects human behaviour today and is at play especially when acquiring new knowledge (panksepp, 2013; sweller, 2011); this is also referred to as the human comfort zone. vygotsky theorised that learning happens right outside this zone, in the so-called zone of proximal development (shabani et al., 2010). using vygotsky's zone of proximal development and clt, information overload can be conceptualised as occurring when individuals are overwhelmed with too much novel information or are taken too far away from their comfort zone. accordingly, information overload leads to impulses to step away from the new knowledge, back to the zone of proximal development (shabani et al., 2010). consequently, in the case of information overload due to new knowledge and information coming from social media, smf emerges (bright et al., 2015). cognitive load is conceptualised to constitute intrinsic, extraneous and germane load (sweller, 2010). the extraneous load has been investigated more often (mutlu-bayraktar et al., 2019) and, in the broader context of human-computer interaction (hci), refers to the environmental stimuli to which the human brain reacts. intrinsic cognitive load, on the other hand, is the load resulting from processing this information and is affected by the individual's psychological state of mind as well as their prior knowledge (sweller, 2011). accordingly, well-structured information and prior expertise of the learner can both reduce intrinsic cognitive load (hollender et al., 2010).
germane load is a subconscious load that results from the working memory transferring information to long-term memory into so-called schemas. the three types of cognitive loads have been theorised to be linked so that reduced load of one kind releases cognitive capacity for the others (paas et al., 2003) . originally introduced as a theory for instructional science, clt has recently been integrated with hci (hollender et al., 2010) , and has been widely successful in explaining human online behaviour such as retention in online courses (mutlu-bayraktar et al., 2019) and the effects of social media use on learning (lau, 2017) . as a theory of learning, clt can also be used to understand how humans acquire knowledge through news articles. accordingly, it is relevant in the ongoing research about fake news and misinformation, especially during times when humans need to absorb new information rapidly and change their behaviour, such as the covid-19 pandemic (laato et al., 2020a) . more specifically, we look at factors, which may affect the intrinsic cognitive load (xiao and mou, 2019) of social media users. xiao and mou (2019) in their literature review found the following intrinsic cognitive load factors to be meaningful in this context: fear of missing out (fomo), privacy concerns, anxiety, and depression. they also revealed extrinsic cognitive load factors such as parental influence, cyberbullying, complexity, technology-related factors, and social overload to be relevant (xiao and mou, 2019) . in order to contribute to this body of literature, we propose that five key factors are yet to be taken into account with regards to intrinsic cognitive load factors (which are also aligned with the affordance perspective as discussed earlier) influencing smf. these are: (1) self-promotion; (2) entertainment; (3) religiosity; (4) deficient self-regulation (ds-r); and (5) exploration. 
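the research design sketched above, five factors hypothesised to influence two outcomes (smf and the sharing of unverified information), amounts to a pair of structural equations. the following is an illustrative sketch on simulated data: the coefficient signs and magnitudes are invented purely for illustration, and ordinary least squares stands in for the sem actually used in the paper, which additionally models measurement error in latent constructs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1500  # simulated sample (the actual study surveyed n = 433)

# five standardised factors from the affordance/clt framing
factors = {name: rng.normal(size=n) for name in
           ["self_promotion", "entertainment", "exploration",
            "religiosity", "deficient_self_regulation"]}
X = np.column_stack(list(factors.values()))

# invented data-generating coefficients for the two outcomes
b_smf = np.array([0.3, -0.2, 0.25, -0.2, 0.3])
b_share = np.array([-0.25, 0.3, -0.3, 0.2, 0.35])

smf = X @ b_smf + rng.normal(scale=0.5, size=n)
sharing = X @ b_share + rng.normal(scale=0.5, size=n)

# estimate each structural equation by least squares
est_smf, *_ = np.linalg.lstsq(X, smf, rcond=None)
est_share, *_ = np.linalg.lstsq(X, sharing, rcond=None)

# with adequate sample size, the recovered sign pattern mirrors the
# pattern used to generate the data
signs_match = (np.sign(est_smf) == np.sign(b_smf)).all() and \
              (np.sign(est_share) == np.sign(b_share)).all()
```

the point of the sketch is structural: each hypothesis in the following sections corresponds to the sign of one path coefficient from a factor to smf or to unverified-information sharing, and the analysis consists of estimating those coefficients from survey data.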
accordingly, we place these as our independent variables and hypothesise relationships to both smf and the sharing of unverified information. in the next section, we hypothesise these relationships in further detail. people have an inherent need to belong, which they fulfil by seeking approval and recognition from others (zhou, 2011). social media is a place where this need may be fulfilled by obtaining approval for the self in the form of favourable comments and likes. studies have found that people tend to follow different strategies to enhance their image on social media (islam et al., 2019). for example, people may share information (even private information) on social media to seek relatedness and approval from others (nesi and prinstein, 2015). using the affordance lens (norman, 1988), social media can be seen to provide social affordances. with these affordances, social media users actively create and maintain their self-image. islam et al. (2019) conceptualised this as self-promotion and showed it to lead to both subjective vitality and addiction. the relationship between self-promotion and addiction indirectly suggests that, in the long run, social media use driven by self-promotion increases fatigue (dhir et al., 2018). furthermore, when people use social media for self-promotion purposes, they need to actively balance what to share and what not to share in order to maintain a positive image of themselves. this may be increasingly difficult in situations such as the covid-19 pandemic, where it is not easy to determine which pieces of information are relevant and trustworthy. this creates additional cognitive load, which in turn can lead to smf (whelan et al., 2020b). thus, we hypothesise the following. h1. self-promotion increases social media fatigue. self-promotion on social media has been linked to narcissism (moon et al., 2016), but is most often driven by a wish to stay connected (kietzmann et al., 2011).
perhaps surprisingly, focusing on others on social media has been found to have negative impacts on psychological wellbeing, whereas focusing on one's self-image has positive outcomes (vogel and rose, 2016). social comparison on social media decreases wellbeing because people tend to share only their best aspects online, hiding the negative, thus presenting a falsified image against which others compare themselves (vogel et al., 2014). thompson et al. (2019) showed status-seeking, which is closely related to self-promotion, to have a significant positive correlation with the intention to share news and information. similar findings have also been reported in previous studies (lee and ma, 2012). in the context of intra-organisational social media platforms, the primary motivation for sharing information is helping people (vuori and okkonen, 2012). prior research also suggests that social media users gain social capital through communicating and promoting themselves on social media (de zúñiga et al., 2017). drawing on the affordance lens, islam et al. (2019) argued that social media provides the affordances to self-promote and gain social capital by creating an overly positive image of the self that appeals to other people. when individuals' reputation is on the line, they are no longer under the influence of the online disinhibition effect (suler, 2004) and are more mindful of what they are sharing. this may lead them to double-check information sources before sharing news articles. during the covid-19 pandemic, the sharing of reliable information was emphasised by the media and even by social media platforms. thus, we theorise that individuals driven by self-promotion are extra mindful not to share misinformation on covid-19, as doing so may end up ridiculing them if the news they shared turns out to be fake. thus, we hypothesise the following. h2. self-promotion decreases the sharing of unverified information.
social media has been characterised as a hedonic information system, meaning that social media use is driven at least partially by factors such as enjoyment, fun and entertainment (quan-haase and young, 2010; turel and serenko, 2012; mäntymäki and islam, 2016). the wish for fun or entertainment materialises, for example, in enjoying funny stories shared to the user's network and in making fun of celebrities and political figures (rieger and klimmt, 2019). while people wishing to inform and help others are concerned with the validity and reliability of the information they share (vuori and okkonen, 2012), people wishing to have fun may not feel a similar obligation. using the affordance lens (norman, 1988), we model social media as a multimodal venue, meaning it can be used for entertainment, but also as a place to share and read information. during the covid-19 pandemic, a proportion of information sharing and social media activity was driven by a wish to have fun, often as humour used as a coping mechanism in stressful situations (chiodo et al., 2020; lee and ma, 2012), or because humour can be a way to make sense of new information. a study observing covid-19 related tweets on twitter found that roughly 6.1% were written in a humorous tone (kouzy et al., 2020). while humour itself is a good thing, striving for entertainment as a goal involves no concern for the validity of the shared information as long as the content is funny. accordingly, it is feasible to predict that using social media for entertainment leads to increased sharing of unverified information. thus, we hypothesise the following. h3. entertainment increases the sharing of unverified information. entertainment can be a way for people to blow off steam after a long workday, and as such, it can be characterised by emotional release, escapism and anxiety relief (lee and ma, 2012).
in particular, emotional release and anxiety relief act as ways to reduce stress and fatigue. drawing on clt (sweller, 2011), we argue that entertaining information may impose less cognitive load, as fun and entertainment can relax the mind. while social media use can be characterised by several drivers (thompson et al., 2019), its entertainment aspect can be regarded as reducing fatigue. during the covid-19 pandemic, several tweets and social media posts containing humour emerged (kouzy et al., 2020). some of the content may be regarded as unsuitable and in bad taste, such as the "covid-19 is a boomer-remover" meme (brooke and jackson, 2020). on the other hand, joking even about such grave topics can be regarded as a form of coping with the ongoing situation, an attempt to find humour and lighter sides to it. entertainment or comedy is often political, and may also be otherwise inaccurate, even as information. the information that comedy provides often serves only the goal of provoking and making people think, and as such, is not concerned with being accurate. because entertainment can thus be characterised as mindless escapism, anxiety relief and emotional release (lee and ma, 2012), we propose the following hypothesis. h4. entertainment decreases social media fatigue. exploration has been defined as "appetitive strivings for novelty and challenge" (kashdan et al., 2004). in the context of social media use, exploration refers to individuals' desire to go through information, glance at novel topics and engage with new content. as such, it is linked to curiosity and courage, but also to a more precise need to dig into available information (kashdan et al., 2004). exploration as a concept is also related to novelty-seeking, a personality trait that varies between people.
most typically, novelty-seeking is classified from low to high, meaning all people possess novelty-seeking to some degree (bardo et al., 1996). the most important brain chemical for regulating exploration and novelty-seeking is dopamine (dulawa et al., 1999). exploration may also be understood via clt (sweller, 2011), in that reduced cognitive load leads to increased exploration as the primal neurofunctional seeking system activates (panksepp, 2013). while exploration itself may lead to seeing increased quantities of information, the seeking of this information is voluntary and under the regulation of the user (panksepp, 2013). building off clt (sweller, 2011), the trait of exploration functions when people are not overloaded by information and have the cognitive capacity to seek more. while information overload has been found to increase the sharing of unverified information (laato et al., 2020a), the absence of information overload should therefore reduce unverified information sharing. as exploration and tertiary-level information overload are regulated by the same primal seeking system (sweller, 2011; panksepp, 2013), we conclude that exploration is associated with the ability to process information. in practice, this manifests in the ability to process new information as well as to seek verification for news. exploration should thus have a negative impact on unverified information sharing. therefore, we hypothesise the following. h5. exploration decreases the sharing of unverified information. as exploration is connected to a primal desire to seek new content (kashdan et al., 2004), it may manifest as increased use of social media. social media, in turn, is addictive (islam et al., 2019). through addiction, exploration may impact fatigue in two ways: (1) social media users do not have sufficient time to take care of their duties related to work or family, which may increase their cognitive load.
in turn, this may increase fatigue; (2) social media users are exposed to a large quantity of information, which can cause information overload, which in turn leads to fatigue (islam et al., 2018). during the covid-19 pandemic, as people were at home more due to government-issued limitations on movement and the closure of several workplaces (farooq et al., 2020), they had more time on their hands to explore and use social media (laato et al., 2020a). furthermore, the novelty of the pandemic situation brought a plethora of information to social media, opening new doors for exploration. these circumstances may contribute to increased smf via increased social media use. accordingly, we hypothesise the following. h6. exploration increases social media fatigue. the positive effects of faith and religiosity have been controversial topics in academia in the current millennium, with famous works having been published arguing against (hitchens, 2008) and for (mcgrath, 2013) the usefulness of religion. the complexity of religion can be seen in the broad spectrum of conceptualisations of religion as a phenomenon. religion has several levels: cognitive, affective, pragmatic and social. at all these levels, religion can serve an individual, a social group, a broader community - which might be a religious community or a profane community using religion as an organising, constitutive or power structure - or a nation. thus, religion can define identity in its diverse forms, for example as a cognitively expressed confession of the vital dogma of one's faith, or as belonging to a group of believers who share the same faith. therefore, the usefulness of religion also has diverse interpretations, depending on the expected function and role of a given religion. should it serve an individual, their psychological integrity, sense of belongingness and meaning of life, or should religiosity support a religious community and the enforcement of the law in society?
following khalaf et al. (2014), we define religiosity as an intrinsic motivation to practise religion. the complexity of religiosity has consequences for the concepts of information, knowledge and truth, and their verifiability; thus, it may be understood through clt (sweller, 2011). religious people might be more sensitive to information that refers to divine intervention, something that is hard to verify. at the same time, religious truth is very often unverified per se. the confirmation bias that may result from holding strong religious viewpoints could increase the sharing of information that is regarded by the public as misinformation. accordingly, religiosity could be linked to the sharing of unverified information. thus, we hypothesise the following. h7. religiosity increases the sharing of unverified information. campbell (2012) emphasises the role of community for the expression of and living out personal faith. instead of broadcasting or streaming religious events, called "online religion", she emphasises the transformation of religion by technology, i.e., "religion online". social media provides affordances for religion online and offers a platform for synchronous and asynchronous communication. it enables religious people to interact even amidst pandemics, when meeting in real life is discouraged. religious people can form online communities on social media, where they primarily share news written from the viewpoint of their religion. being able to read information that is built on a shared core belief system can reduce cognitive load and make reading news less stressful (sweller, 2011). this may decrease smf. additionally, religiosity is often associated with discipline and weekly (or daily) routines such as prayers. this may ward against overconsumption of social media, which has been identified as the primary cause of smf (dhir et al., 2018).
this may have been particularly relevant during covid-19, when recommended social isolation measures caused people to spend an increasing amount of time at home and gave them more time to overload on social media content (laato et al., 2020a). taking these two points together, we propose the following hypothesis. h8. religiosity decreases social media fatigue. people suffering from ds-r have trouble regulating their actions. as such, they are more susceptible to acting based on impulses or habits rather than planned behaviour and cognition (a.k.m.n. islam et al., technological forecasting & social change 159 (2020) 120201). because of this, ds-r is connected to (1) irresponsible and sub-optimal behaviour; (2) decreased psychological wellbeing; and (3) internet addiction (laato et al., 2020a; larose et al., 2003; lee and perry, 2004), among other harmful outcomes. because ds-r leads to internet addiction (larose et al., 2003), it can also contribute to increased social media usage. the covid-19 pandemic forced people off their routines to adopt health measures such as social isolation (laato et al., 2020b) and, in many cases, remote working (barbieri et al., 2020). covid-19 also caused significant unemployment (coibion et al., 2020). the lack of routines hit people with poor self-regulation hard, as they have no compulsory or agreed activities guiding their time use. with social media platforms providing hedonistic instant gratification (mäntymäki and islam, 2016), we propose that the covid-19 pandemic may have amplified the effects of ds-r and even increased the experience of ds-r. the likely increase in social media use during covid-19 because of ds-r can contribute to smf via two mechanisms: (1) more time spent on social media leads to higher cognitive load in terms of information and communication overload; (2) more time spent on social media takes time away from other more meaningful activities.
furthermore, the lack of behavioural regulation will increase the probability of sharing news articles even when one really should not. accordingly, we propose two hypotheses. h9. deficient self-regulation increases the sharing of unverified information. h10. deficient self-regulation increases social media fatigue. as our last relationship, we investigate the connection between smf and the sharing of unverified information. conceptualising smf as driven by communication and information overload (bright et al., 2015), we can use clt to understand this relationship. people experiencing communication and information overload have fewer cognitive resources at their disposal, which hinders their ability to verify the information they encounter. furthermore, the positive impact of smf on fake news sharing has been empirically demonstrated in previous work (talwar et al., 2019). on the other hand, smf also leads people away from social media and its active use. these two phenomena may counter each other to an extent. however, talwar et al. (2019) argue that fatigued social media users do not disengage from using social media, but instead change their behaviour. using clt, we predict this change of behaviour to be in accordance with reducing cognitive load. accordingly, it is plausible that fatigued users do not go through extra trouble such as verifying the sources of information they encounter, which in turn may lead to an increase in sharing unverified information. thus, we postulate our final hypothesis. h11. social media fatigue increases the sharing of unverified information. the overall proposed research model is displayed in fig. 1. the five independent variables, (1) self-promotion, (2) entertainment, (3) religiosity, (4) ds-r and (5) exploration, are all shown with connections to both smf and the sharing of unverified information. the direct relationship between smf and unverified information sharing is also visible. next, we present our methodology and study context for testing the proposed model.
our study concentrates on social media users from bangladesh in april 2020, during which the covid-19 pandemic was causing severe restrictions and limitations on citizens' lives and mobility. bangladesh is a country in south asia with a population of approximately 160 million people. around 62% of the people have access to the internet, and around 20% use social media. 1 the most popular social media platforms are facebook and youtube, followed by reddit, instagram, twitter and tiktok. the covid-19 pandemic officially arrived in bangladesh on march 8th, when the first three cases were reported. this caused the government to take action, placing infected areas into quarantine and enforcing it by law. 2 furthermore, the government closed all educational institutes and public services and advised citizens to stay home and avoid social contact, i.e. to adopt personal voluntary health measures. we collected data from bangladeshi social media users during april 2020 through an online survey drafted using the online survey software webropol. we adapted validated scales from prior literature for all constructs in our theorised model. after drafting the initial questionnaire, three researchers were asked to carefully look through the survey and its items to ensure they were grammatically correct and made sense in the current study context. based on the suggestions received, we made changes to improve the understandability of the items and fixed a few grammar errors. after that, we asked nine bangladeshi social media users to comment on the overall final questionnaire in terms of how easy or difficult the questions were to understand and respond to. we received a few minor suggestions at this stage, which were taken into account. the final survey items and their sources are listed in appendix 1.
before presenting the respondents with the items measuring unverified information sharing, we provided the following clarification of the context: "via the following questions, we ask you about your information sharing on social media (e.g. facebook, twitter, linkedin, instagram) during the covid-19 pandemic". similarly, before asking about smf, self-promotion, ds-r and entertainment, we explained the following: "via the following questions, we ask you about your use of social media (e.g. facebook, twitter, linkedin, instagram) during the last two weeks". we posted the survey link to two major covid-19 related facebook groups, both of which had more than 10,000 members. furthermore, we asked students and alumni of a major private university in bangladesh to respond to the survey. the survey was opened 1305 times, 565 people started responding, and 435 respondents completed the survey in full and submitted their responses. we removed two responses due to missing data; therefore, 433 responses were used to test our research model. of the respondents, 62% were male, 37% were female, and 1% were other or preferred not to tell. the most popular social media platforms among the respondents were facebook (94%), youtube (79%) and instagram (69%), followed by snapchat, linkedin, twitter, reddit and tiktok. the majority of respondents were young, aged 18-25 (83.45%), followed by the age groups 26-35 (14.25%), 36-50 (1.84%) and 51-64 (0.23%). the proposed research model was tested via a two-stage analysis approach. at the first stage, the pls-sem technique was utilised to confirm the reliability and validity of the constructs and to test the causal relationships between the constructs. we followed this analysis with a neural network (nn) based approach. pls-sem is an analysis technique for evaluating relations between various independent and dependent variables and is commonly used for understanding relationships between constructs in cross-sectional data. however, pls-sem cannot examine non-linear relationships between constructs. to address this issue, we supplemented the pls-sem analysis with the nn approach. to summarise, pls-sem was used to evaluate the hypotheses shown in fig. 1, while in the second stage the nn was used to validate the findings of the pls-sem results and to prioritise predictors based on their relative importance in influencing smf and unverified information sharing. before moving on to the pls-sem and nn analyses, we tested multivariate assumptions and the validity and reliability of our data following the guidelines of wong et al. (2016) and fornell and larcker (1981). we first conducted several statistical tests to ensure that our data fulfil the multivariate assumptions for further statistical analysis (wong et al., 2016). to ensure normality, we first tested our data for skewness and kurtosis in spss. all values were within −2.58 to +2.58, and therefore we conclude that the data are normally distributed. next, we tested the linearity of the associations between constructs. the test results (see appendix 2) showed that the predictor and target constructs have a combination of linear and non-linear relationships. due to the existence of non-linear relationships, the nn approach is necessary to complement the pls-sem results (chong, 2013). third, the values of the variance inflation factor (vif) were calculated (hair et al., 2010). all values were less than 3, which is a widely accepted vif threshold (o'brien, 2007). it was therefore concluded that there was no issue of multicollinearity in the data (tan et al., 2014).
1 internet world stats usage and population statistics, https://www.internetworldstats.com/stats3.htm#asia (accessed on april 24th, 2020).
2 corona info bd, iedcr, https://corona.gov.bd/ (accessed on april 23rd, 2020).
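the vif screen described above can be reproduced with a few lines of numpy; the regression-based definition vif_j = 1/(1 − r²_j) is standard, and the data below are synthetic stand-ins for the survey constructs, not the study's data:

```python
import numpy as np

def vif(X):
    """variance inflation factors for the columns of X (n_samples x n_predictors).

    vif_j = 1 / (1 - r2_j), where r2_j comes from regressing column j
    on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.normal(size=(433, 5))                    # five synthetic predictors
X[:, 4] = 0.4 * X[:, 0] + rng.normal(size=433)  # introduce mild collinearity
print(vif(X))  # all values stay below the vif < 3 threshold cited above
```

with only mild collinearity among the predictors, every vif stays close to 1, mirroring the "all values less than 3" check reported in the text.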
finally, scatter plots were generated to ensure homoscedasticity (white, 1980; ooi et al., 2018). we looked at the regression standardised residuals of all our relationships and found them to be equally distanced from the regression line. it was therefore assumed that the presumption of homoscedasticity was fulfilled. before continuing to test our model using the sem technique, we checked the validity and reliability of our data. to this end, we first verified the internal consistencies and convergent validity of the data. the thresholds recommended by fornell and larcker (1981) were selected, meaning each item loading was to be above 0.7, construct composite reliability (cr) was to be above 0.8, and the average variance extracted (ave) had to be above 0.5. as shown in appendix 1, all items (except three religiosity items) had loadings higher than 0.7. we removed the three items that did not meet the criterion. furthermore, we ensured that crs were above 0.8 and aves were above 0.5 (fornell and larcker, 1981). next, we verified the discriminant validity of our data by using the correlation matrix and the square roots of the aves. table 1 shows the correlation matrix. from this table, we see that the inter-construct correlations were less than the diagonally presented square roots of the aves. furthermore, we verified the loadings and cross-loadings and observed that the loadings were consistently higher than the cross-loadings. these tests ensured that we achieved sufficient discriminant validity. common method bias (cmb) is a problem in studies that use self-reported survey data. it refers to the variance caused by the survey method (podsakoff et al., 2003). to address this issue, we conducted harman's single factor test (harman, 1976). the findings of our analysis
showed that 31.39% of the total variation was due to a single construct, which is well below the required 50% (podsakoff et al., 2003). we revalidated cmb with other methods, owing to increasing disagreement about the validity of harman's single-factor test (lowry and gaskin, 2014). following the guidelines of liang et al. (2007), we proceeded to conduct the common method factor test. we did this using the smartpls software by re-using all our items to create a common method factor. we then calculated the variances for each item as explained by the created common method factor and by our actual factors in the pls model. the variance explained by the method factor averaged 0.01, and the variance explained by the assigned factors averaged 0.48. as the method variance was minimal (0.01), we concluded that common method bias was not an issue for our operationalisation and data. machine learning methods producing a neural network have been used to support pls-sem analysis (chan and chong, 2012; chong, 2013; talukder et al., 2020). the main advantage of supplementing sem with a neural network based analysis is that it is capable of addressing non-linearity in data (chong, 2013). some studies have also reported that even with relatively small amounts of data
(factor 2) was possible by media platform change and media optimization. an impact on product quality could not be shown. the best conditions were also verified at bioreactor scale (fig. 1d). the benefits of using cell culture automation in late-stage process development were shown based on two examples of current applications. for this purpose, the experimental results of the development work of two late-stage projects using the in-house developed automated cell culture system were presented.
the first example shows the capability of the automated cell culture system by reducing trisulfides significantly in just one experiment. for the other project, the final product concentration could be increased by a factor of 2.5 through a media screening and a change to the in-house media platform. these two examples show the potential of cell culture automation as a routine tool in process development. the cell line development process has become faster and is simultaneously generating more clone- and product-related analytical data. in order to select the best producer cell line, extremely heterogeneous data types need to be systematically compared. the timely availability of all data needed to decide which cell line to pursue has become a bottleneck in the cell line development workflow. to ensure sound decision making, new integrated workflow support and data analysis methods are needed. we have developed a new end-to-end platform for bioprocess development, which includes a cell line development workflow system supporting seeding, selection, passaging, analyzing, cryo-conservation, and processing in (micro-)bioreactors. this platform, genedata bioprocess™, enables partially or fully automated cell line selection and assessment processes, and it increases process efficiency and quality. the system tracks the full history of all clones, from initial transfection all the way to their evaluation in bioreactor runs, and combines this information with analytics data on molecules, clones, and product quality. it can directly integrate with all instruments, such as pipetting robots, bioreactors, and bioanalyzers. the system is designed for a wide range of biologic molecules, including antibodies (iggs, novel formats) and other therapeutic proteins (e.g., fusion proteins).
fig. 1: a) schematic illustration of the automated cell culture system; only the core system is shown, with a robotic plate handler as the key device connecting the cultivation, processing and analytical parts. b) illustration of an example of how cell culture automation can speed up process development steps. c) application in the identification of product quality levers. d) application in titer optimization.
highlighted use cases describe the identification of top producer cell lines, decision making support, bioreactor data management, and full clone history report documentation (fig. 1). genedata bioprocess, which was developed in collaboration with top pharmaceutical companies, can flexibly support various (non-linear) workflows and structure the collected information in a way that fosters collaboration across an organization. while increasing throughput is crucial to ensure the timely availability of optimal producer cell lines, high throughput is only possible when automated processes in the laboratory and the resulting data collection and aggregation can be streamlined. genedata bioprocess helps to establish more productive processes by offering support for and integration of automation stations and measurement devices. thanks to the comprehensive workflow support and the possibility to integrate results from cell line stability experiments, product quality assessment, and bioreactor suitability tests, genedata bioprocess provides a unique way to evaluate cell lines. comprehensive analysis of all data collected in the process helps to ensure the highest possible quality and minimize the time and resources needed for data analysis and management. integration of bioreactor data analysis and visualization with other parameters measured in cell line development streamlines clone evaluation in micro-bioreactors and supports high-throughput operations. genedata bioprocess comprehensively tracks the full clone history from the origin of the host cell line to the generation of the validated monoclonal producer cell line. for promising clones, the clone history report can be generated with one click.
besides supporting cell line development, genedata bioprocess is a comprehensive platform capable of tracking the complete bioprocess development process. in 2012, 14.1 million people suffered from cancer [1], making it a major concern of our society. since common cancer treatment is limited and not effective for late-stage carcinoma, alternative methods are needed to reduce the high mortality rate of cancer patients. one alternative approach is the application of the oncolytic measles virus (omv), because omv has a natural affinity for cancer cells. the major drawback of omv is the need to produce the extremely high amount of at least 10^11 tcid50 (50% tissue culture infective dose) per dose [2]. to solve this problem, a high-titer process must be established, including an efficient downstream processing (dsp). we developed an appropriate upstream processing and are able to produce 10^10-10^11 tcid50 ml^-1 in a bioreactor with 0.5 l working volume [3]. now, we focus on the dsp part. the following study tested the application of charged depth filters for omv clarification. in contrast to common dsp schemes, a depletion of virus particles or a loss of infectivity is not desired. the aim is a reduction of protein content and dna with minimal loss of infective omv. further, we investigated the influence of the cell culture medium on the depth filtration process. to explore the influence of the surrounding cell culture medium on the depth filtration performance, omv was produced either in serum-free medium (vp-sfm) or in serum-containing medium (dmem + 10% fcs). the production was done in a str with a working volume of 0.5 l as described in [3]. cells and carriers were separated with an opticap xl 1 module (polygard-cr; 5 μm; merck). for the depth filtration, millistak+ ce50 filters (merck) were used. the filter material was autoclaved and rinsed with 25 ml of 20 mm tris-hcl (ph 7.4).
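the titers here are tcid50 endpoint estimates, and the clarification performance is later reported as log reduction values (lrv). a minimal sketch of one common form of the spearman-kärber endpoint estimator and of the lrv calculation, using hypothetical assay numbers rather than the study's data, could look like this:

```python
import math

def spearman_karber_log_tcid50(x_first, step, p_infected):
    """log10 of the 50% endpoint dilution (one common spearman-kärber form).

    x_first: log10 of the most concentrated dilution tested
             (assumed to score 100% positive),
    step: positive log10 spacing between successive dilutions,
    p_infected: fraction of positive replicates per dilution,
                most concentrated first."""
    return x_first + step / 2.0 - step * sum(p_infected)

def log_reduction_value(titer_in, titer_out):
    """lrv = log10(titer before a process step) - log10(titer after it)."""
    return math.log10(titer_in) - math.log10(titer_out)

# hypothetical endpoint assay: ten-fold dilutions 10^-1 .. 10^-8
p = [1, 1, 1, 1, 0.5, 0, 0, 0]
m = spearman_karber_log_tcid50(-1, 1, p)   # -5.0: endpoint at the 10^-5 dilution
titer = 10 ** (-m)                          # tcid50 per inoculated volume
```

the lrv figures quoted below (0.87 and 1.63 log levels) follow directly from this log10 difference of pre- and post-filtration titers.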
the virus suspension was filtered with a load of 50 l m^-2 using a peristaltic pump (ism931c; ismatec), applying a flux of 150 l m^-2 h^-1 (fig. 1). samples were collected at the beginning and end of a filtration run. the omv titer (tcid50 ml^-1) of the samples was determined according to kärber and reed [4, 5]. protein content was measured with the pierce bca protein assay kit (thermofisher scientific) according to the manufacturer's instructions. dna was measured by a microtiter assay using quant-it picogreen dsdna reagent (thermofisher scientific) according to the manufacturer's instructions. we found that positively charged depth filters were suitable for clarifying omv suspensions. the cell culture medium in which the omv was produced influenced the outcome of the depth filtration. a log reduction value (lrv) of 0.87 was determined for omv present in serum-containing medium (scm), whereas the titer of omv in serum-free medium (sfm) was reduced by 1.63 log levels. this indicates that without serum in the surrounding liquid, omv will adsorb to the filter material. however, we must evaluate whether the missing serum or other components present in sfm are responsible for this effect. total protein was not relevantly reduced by the clarification using charged depth filters. for omv present in scm, the residual protein content was slightly less compared to omv present in sfm (table 1). in contrast, host cell dna (hcdna) was bound to the filter material. we achieved a 33% reduction of hcdna for an omv suspension in sfm. after clarifying an omv suspension in sfm, the remaining hcdna content was even lower, being only 42%. conclusions: charged depth filters are suitable for the first clarification step of omv downstream processing. residual protein could pass the depth filter almost unhindered, whereas the hcdna content was already reduced to 42% at maximum. however, the omv titer was also reduced by the depth filtration. this undesired effect was stronger for the omv present in sfm. because the agencies require avoiding serum in clinical-grade production processes, this is disadvantageous. nonetheless, because sfm will soon be standard for omv production, further experiments have to be done to prevent the omv reduction during clarification. one option can be to reduce the adsorption strength of the virus to the filter material by the addition of salt. moreover, it is important to establish a standardized protocol for the upstream processing. we determined batch-to-batch variations within the clarification, indicating a strong impact of upstream processing (usp) on the outcome of the dsp. therefore, further studies must investigate the influence of usp parameters, e.g. time of harvest and ph of the harvest solution, on the omv.
fig.: scheme of the complete cell line development workflow support in genedata bioprocess, showcasing integration of data from diverse measurement instruments, data visualization for decision making support, as well as tracking of the full clone history.
fed-batch culture is commonly employed to maximize cell and product concentrations in upstream mammalian cell culture processes. typical standard platform processes rely on fixed-volume bolus feeding of concentrated feed supplements at regular intervals. however, such static approaches might result in over- or underfeeding. to mimic more closely the dynamics of a fed-batch culture, we developed a dynamic feeding strategy responsive to the actual nutrient needs of a mab-producing recombinant cho cell line. results and discussion: improvements made at different steps during fed-batch development are shown in fig. 1. at step 1, all eight cell boost supplements were added to cdm4ns0 according to a doe approach, and batch cultures were performed.
this evaluation allowed us to select only those cell boost supplements that were beneficial to the overall culture performance. non-performing cell boost supplements were removed and not considered further. at step 2, the selected cell boost supplements were added daily to the cultures at different ratios according to a doe approach, and fed-batch cultures were conducted. as expected, daily feed additions to replenish consumed nutrients substantially improved mab and peak cell concentrations as well as viable cumulative cell days (vccd) compared to batch cultivation. further, the results enabled us to fine-tune the feed ratio of the selected cell boost supplements. at step 3, we further optimized the best performing feed ratio by investigating static and dynamic feed protocols. most fed-batch protocols rely on constant feed additions on distinct days. however, these approaches often lead to substantial over- or underfeeding during bioprocessing. to improve on such "static" protocols, we investigated three different "dynamic" approaches, as shown in table 1, by applying the selected cell boost supplements with the optimized feed ratio. this investigation allowed us to further improve the bioprocess performance. the best performing approaches, constant and retrospective feed, were further investigated in fully automated bioreactors under controlled conditions. in general, constant cultivation parameters in the bioreactor slightly enhanced mab titers compared to shake flask cultivation. the retrospective feed strategy yielded 10% higher titers than the constant strategy. overall, the established methodology for fed-batch development allowed us to obtain 2.5× higher mab titers (batch mean: 1.9 g/l vs. fed-batch 4.9 g/l) in a short time and in three simple steps. in addition, the product quality was investigated.
compared to the legacy fed-batch process, fed-batches conducted with the newly selected basal and feed media altered the distribution of charge and glycan variants. the amount of aggregated product was not altered. the established methodology for fed-batch development is a rapid protocol to select well-performing feed supplements and optimize their ratio to the culture requirements. in three steps, mab titers were boosted 2.5× from 1.9 g/l to 4.9 g/l. product glycosylation and charge variants could be influenced by the newly selected basal and feed media compared to a legacy fed-batch process. the amount of aggregated product was not altered. the present study investigates the beneficial effect of spiking hyclone™ actipro™ basal medium with hyclone cell boost™ 7a and cell boost 7b feed supplements on the growth and productivity of a recombinant cho cell line. to evaluate the impact of feed-spiking compared with cultivation in basal medium only, the cell line was grown in bioreactors under controlled conditions to determine cell-specific metabolic rates, nutrient consumption, and byproduct accumulation over the process time. transcriptome analysis of the cultivated cells, using microarrays on four consecutive days to investigate differential gene expression, revealed the beneficial effect of feed-spiking compared with cells grown in basal medium. the model cell line was a mab-expressing cho dg44 (licensed from cellca gmbh) cultivated either in actipro basal medium only (ge healthcare) or in actipro basal medium feed-spiked with 7% cell boost 7a and 0.7% cell boost 7b (ge healthcare). both cultures were grown in batch mode using dasgip™ cellferm-pro™ stirred-tank bioreactors (eppendorf). the beneficial effect of feed-spiking was analysed by transcriptome analysis using microarray technology (8×60 k design, agilent). both the basal and feed-spiked processes lasted for seven days, with viabilities above 95% until day 6.
on day seven, a sharp decline in viability indicated the end of the batch process (fig. 1a). in feed-spiked medium, cells initially grew more slowly but reached almost twice as high peak cell concentrations (17.6 × 10^6 cells/ml) as in basal medium only (9.79 × 10^6 cells/ml). remarkably, the integral of the viable cell concentration over the total process time (viable cumulative cell days [vccd]) was similar between both process strategies (fig. 1c). while mab production plateaued after day 4 in basal medium only (final titer 0.8 g/l), a continuous increase to three-fold higher final titers (2.4 g/l) was observed in feed-spiked medium (fig. 1b). the higher titers could be attributed to generally higher cell-specific productivities (qp), which remained rather constant (~70 pg/cell/day) in feed-spiked cultures. in basal medium cultures, the qp continuously dropped from 70 to 10 pg/cell/day: by 20% (day 0 to 3), 50% (day 4), and >90% (day 5 to 7). on average, the qp was 70% higher in feed-spiked cultures (fig. 1d). transcriptome analysis of differentially expressed genes between cells grown in basal medium and feed-spiked medium was used to identify relevant go terms, which indicated a more active proliferative state for feed-spiked cultures (data not shown). the top go terms significantly related to cell cycle and primary metabolism, cellular division, as well as nucleobase formation or regulation. furthermore, gsea revealed several significantly enriched sets of genes related to gene transcription, dna replication and repair, cell growth and proliferation, as well as inhibition of apoptosis in feed-spiked cultures. thus, feed-spiking increased the proliferative activity of the cultivated cells. several of the identified genes appear to be promising targets for cell line engineering, but have not yet been described in relation to high-producing recombinant cell lines and will need to be evaluated in future studies.
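the vccd (the integral of viable cell concentration over culture time) and the cell-specific productivity qp used above can be sketched numerically; the time course below is hypothetical, not the study's data:

```python
def vccd(days, vcc):
    """viable cumulative cell days: trapezoidal integral of the viable cell
    concentration (in 10^6 cells/ml) over culture time (in days)."""
    total = 0.0
    for i in range(1, len(days)):
        total += 0.5 * (vcc[i] + vcc[i - 1]) * (days[i] - days[i - 1])
    return total  # units: 10^6 cell-days per ml

def qp_pg_per_cell_day(delta_titer_g_per_l, vccd_1e6):
    """cell-specific productivity: product formed per viable cell per day.

    1 g/l = 1e9 pg/ml, and vccd is in 10^6 cell-days/ml, so the ratio
    carries a net factor of 1e9 / 1e6 = 1000."""
    return delta_titer_g_per_l * 1000.0 / vccd_1e6

days = [0, 1, 2, 3, 4]            # hypothetical batch time points
vcc = [0.5, 1.0, 2.0, 4.0, 8.0]   # hypothetical 10^6 viable cells/ml
ivcc = vccd(days, vcc)            # 11.25 (10^6 cell-days/ml)
qp = qp_pg_per_cell_day(0.8, ivcc)  # ~71 pg/cell/day for 0.8 g/l gained
```

this also makes the abstract's observation concrete: with similar vccd between the two strategies, the three-fold titer difference must come from qp.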
feed-spiking of basal medium is a convenient and easy way to considerably increase product concentrations in a simple batch culture. differential gene expression revealed genes that appear important for high cell-specific production rates, and this knowledge can be leveraged into cell line engineering approaches or the design of high-producing cho cell media. biosimilarity must be demonstrated by physicochemical and functional characterization for the approval requirements of phase i and phase iii studies in terms of efficacy, safety and immunogenicity. in this study, rounds of upstream and downstream processes were run to reach the cqa limits of the originator molecule. after conducting many different development strategies, the mirror plot images of the intact deconvoluted mass were found to be identical, corresponding to similar levels of glycoforms. the uv chromatogram of reversed-phase ultra-performance liquid chromatography (rp-uplc) of tryptic peptide mapping demonstrated that the primary structure of tur01 is identical to the originator, as shown in fig. 1a. post-translational modification (ptm) levels such as oxidation, deamidation, n-terminal pyroglutamic acid and c-terminal lysine truncation were also comparable for the two products. the glycosylation site (hc-asn301) was confirmed by peptide mapping analysis, and 100% glycan site occupancy was proven for tur01 and the originator. the glycosylation patterns of the two products were highly similar in terms of major glycans (g0f, g1f, g0f-gn, etc.). the man5 level was lower in tur01 compared to the originator product, which may not have any clinical effect on the molecule. the secondary structure was determined by atr-ftir spectroscopy. the absorption bands (amide i and amide ii) overlapped completely, and the amounts of α-helix and β-sheet structures were comparable.
furthermore, size-exclusion chromatography (sec) analysis revealed that both products have the same purity (>99%) and aggregate (<1%) levels. the level of impurities was determined to be below 4% by ce-sds. the capillary isoelectric focusing (cief) experiments showed that the charge variant profiles of the two products are indistinguishable, with the isoelectric point of the main peak observed at 8.3 for both products. the association/dissociation rate constants and binding affinity of tur01 and the originator were highly similar, with a similarity score calculated to be greater than 99%, as shown in fig. 1b. in this study, state-of-the-art analytical techniques were used to assess the biosimilarity of tur01 to the originator adalimumab. head-to-head comparison data clearly demonstrated that tur01 is highly similar to the originator adalimumab in terms of physicochemical and functional characteristics. based on the analytical similarities, we believe that tur01 will have comparable pk/pd, potency, and efficacy results to the originator adalimumab.

fig. 1 process performance of basal medium (black) and feed-spiked (red) bioreactor batch cultures: a cell concentrations and viability, b viable cumulative cell days and specific growth rate, and c antibody concentrations and cell-specific productivity. error bars indicate standard deviation from three independent experiments. the black arrows on day 4 indicate the beginning of decreasing cell-specific productivities and lower cell-specific growth rates in basal medium cultures.

the expanded interest in intensified continuous bioprocessing has highlighted the need to develop a small-scale model for perfusion cell culture. the direction in the industry has been to increase target cell densities to ≥50 × 10^6 vc/ml and decrease perfusion rates to ≤3 vvd. in order to increase the throughput of our perfusion media development capabilities, we sought to develop a small-scale model of perfusion using the ambr®15 instrument (sartorius, germany).
we used a modified cell settling model, based on that previously published by kreye et al., to achieve the cell retention necessary to reach perfusion-relevant viable cell concentrations [1]. in this work, we show the application of this small-scale model for: (1) identification of specific productivity performance over a steady state for the tested media, (2) identification of the cspr_min for a specific cell line and medium combination, and (3) confirmation of consistent product quality profiles between the small-scale model and benchtop perfusion (data not shown). a chozn® cell line producing an igg1 was evaluated in several proprietary chemically defined media prototypes generated during the development of the catalog excell® advanced hd perfusion medium: "fed batch medium", "early prototype", "mid prototype", "intermediate prototype" and "late prototype" [2]. small-scale simulations of perfusion experiments were run in the ambr®15. media exchange was performed 3 times per day in equal amounts. agitation, gassing, and liquid handling were stopped for an optimized period of time to allow cells to settle to the bottom. spent media was removed in an amount proportional to one-third of the daily exchange volume. agitation, gassing, and liquid handling were then resumed and fresh media was added back to the vessels. for benchtop perfusion, cells were inoculated in 3l applikon bioreactors (applikon, netherlands). at a concentration of ~6.0 × 10^6 vc/ml, perfusion was initiated using the atf2 (repligen, massachusetts). the perfusion rate was limited to 1.2 vvd during steady state. using the cell settling method described above, we have been able to achieve ≥90% cell retention efficiency. all media tested in this work were able to reach and maintain the 30 × 10^6 vc/ml target cell density at 1 vvd (fig. 1). the performance of each media formulation was ranked based on specific productivity (table 1). using the "intermediate prototype", the minimum steady-state cspr was determined to be 33.3 pl/c/d for this cell line.
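the 33.3 pl/c/d value follows directly from the definition of the cell-specific perfusion rate: medium volume fed per cell per day. a minimal sketch (the function name is ours):

```python
def cspr_pl_per_cell_day(perfusion_rate_vvd, vcd_cells_per_ml):
    """Cell-specific perfusion rate. A rate of D vessel volumes/day equals
    D ml of medium per ml of culture per day; dividing by the viable cell
    density (cells/ml) gives ml/cell/day, i.e. 1e9 pL/cell/day."""
    return perfusion_rate_vvd / vcd_cells_per_ml * 1e9
```

at 1 vvd and a vcd of 30 × 10^6 c/ml this gives 33.3 pl/c/d, matching the reported minimum steady-state cspr.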
n-glycan analysis of ambr®15 and bioreactor samples via intact mass spectrometry displayed only slight differences in product quality profile (data not shown). our work has shown a clear distinction between the various prototype perfusion media and demonstrated a 50% increase in specific productivity over the "fed batch medium" used in perfusion. additionally, we have shown that this model can be applied to further characterize the process by determining the cspr_min for a given medium and cell line.

the transient process has been successfully operated at 500l in a sartorius biostat single-use bioreactor (sub), yielding 0.4 kg of crude product from a two-week expression culture (table 1). successful scale-up of the process to 500l creates the potential to supply transiently expressed products to support toxicology studies or even early gmp clinical supply, enabling accelerated biopharmaceutical development project timelines. the scale-up from rocking bioreactors (rbr) to sub scale identified some scalability issues: lower specific productivity, due to increased cell growth, and decreased titres were observed in the sub (fig. 1 iii & iv). to improve the predictability of scale-up, a new process was developed and evaluated in the sub vessels utilising a modified transfection method, which resulted in comparable expression levels and specific productivity between rbr and sub scales. two sets of expression vectors comprising heavy chain and light chain plasmids expressing a human igg1 kappa mab, as previously described [1, 2], were used in the process optimisation study. the cell line used for transient expression and the pei-mediated transfection method have been described previously [1]. transfected cultures were run under fed-batch conditions for 14 days in 22l ge healthcare wave bioreactors (rbr) and hyclone subs using 50l and 250l hyclone bioreactor bags (thermo scientific).
the transfection process was modified to address the reduced titres and higher viable cell density (vcd) seen in the sub cultures. shake flask cultures were used to assess the standard (a) and modified transfection processes (b and c) (fig. 1, i & ii). process c was identified as the process to be studied at sub scale, offering the potential to mitigate the high viable cell densities observed. scaling up process c to 50l and 250l subs resulted in cultures producing titres exceeding 1 g/l with the desired cell growth profiles. scale-up of process a into sub vessels resulted in decreased productivity compared to the rbr scale. after optimisation, sub process c yielded increased specific productivities and expression titres comparable to those seen at rbr scale (table 1). medimmune has completed the first known successful cho transient culture at 500l scale, producing > 800 mg/l of mab at harvest. process optimisation has subsequently demonstrated reproducible titres at 50l to 250l scale exceeding 1 g/l, with comparable glycosylation profiles between sub and rbr cultures across scales.

comprehensive analysis of the impact of trace elements in media on clone dependent process performance and product quality

background: state-of-the-art biopharmaceutical processes account concomitantly for process performance and product quality. even though high-yielding, robust processes are the cornerstones of any process development, product quality parameters such as structural integrity, charge variances and post-translational modifications are progressively becoming the focus of the developmental work. in conjunction with host cell line selection and process performance parameters, media components are crucial for the continued progress in rational modulation of product quality attributes affecting biological activity, immunogenicity, half-life or stability.
among media components, trace elements (te) are of particular interest as they play a pivotal role in various cell metabolism pathways. based on a comprehensive doe approach, extensive process performance and product quality evaluation combined with metabolic flux analysis, the impact of several trace elements on the biopharmaceutical process was assessed. in a comprehensive i-optimal doe approach (fig. 1), the effect of six te at various concentration levels and combinations in serum-free media was studied for four different cho-k1 cell lines in an ambr® 15 setup. process performance parameters such as cellular growth, productivity, and amino acid and vitamin consumption rates were scrutinized for each of the conditions. the process performance evaluation was accompanied by extensive product quality analysis including size and charge variants, glycosylation patterns, oxidation and methylation. furthermore, a metabolic flux analysis was performed based on the nitrogen balance. based on the extensive analytical data, the obtained response surface model provides clear insight into the impact of particular te and their combinations on process performance and product quality. the high model quality enables discrimination between clone-dependent and clone-independent effects. with an elevation in titer of up to 25% in the best condition, the cell lines clearly show that even state-of-the-art media can be outperformed by trace element rebalancing. analyzing specific rates in combination with metabolic flux analysis improves the understanding of the metabolic restructuring of the cell lines under distinct te levels and combinations. modulation of trace element levels had a tremendous effect on the charge heterogeneity and glycosylation structure of the different proteins. this provides a toolbox for the fine-tuning and control of product quality parameters.
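as a simplified illustration of the design space involved (the study used an i-optimal design, which selects an informative subset of runs rather than running everything), a full-factorial candidate set for six trace elements at three relative levels already contains 729 combinations; levels and factor count here are illustrative, not the study's:

```python
from itertools import product


def candidate_set(n_factors=6, levels=(0.5, 1.0, 2.0)):
    """Full-factorial candidate set of relative TE concentrations from
    which an optimal-design algorithm would pick a subset of runs."""
    return list(product(levels, repeat=n_factors))
```

an i-optimal algorithm would typically pick only a few dozen runs from such a candidate set to fit the response surface model.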
taken together, the data further pave the way to the rational fine-tuning of process performance and product quality attributes.

background: due to regulatory concerns and economic impact, ensuring product quality and consistency is now one of the main challenges faced by the biopharmaceutical industry. for monoclonal antibodies (mab), glycosylation is one of the most important quality attributes as it impacts mab structural integrity, and ultimately both clinical efficacy and safety. many factors affect mab glycosylation and its inherent heterogeneity, including the host cell, the culture medium, the mode of operation and the operating conditions. in this context, the capacity to monitor and control antibody glycosylation on-line, from early- to late-stage process development, would be of salient interest to reduce the time and cost to market. in order to address this unmet need, we have designed an improved spr biosensor assay to measure the kinetics of interaction between a mab and the extracellular domain of the fcγriiia receptor bound at the biosensor surface [1, 2]. notably, we also demonstrated that various binding kinetic signatures, especially different dissociation kinetics, could be correlated with distinct mab glycosylation patterns and with therapeutic efficacies, as deduced from mass spectrometry and a surrogate adcc assay, respectively. in parallel, we have also harnessed an spr biosensor directly to a bioreactor, which permitted the at-line determination of the concentration of antibodies produced by hybridoma cells during a bioreactor culture. we now plan on combining both approaches to determine on-line the glycosylation profile of the produced mabs. our ultimate goal is to design a unique and highly innovative bioprocess control tool that can be readily applied in an industrial bio-manufacturing setting.
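the dissociation-phase signatures mentioned above are commonly summarized by a first-order dissociation rate constant; assuming single-exponential decay r(t) = r0·exp(-kd·t), kd can be estimated by log-linear least squares. a sketch under that assumption (not the authors' fitting procedure):

```python
import math


def fit_kd(times_s, responses_ru):
    """Estimate the dissociation rate constant kd (1/s) from an SPR
    dissociation phase, assuming R(t) = R0*exp(-kd*t), via log-linear
    least squares: ln R = ln R0 - kd*t, so kd is minus the slope."""
    xs = times_s
    ys = [math.log(r) for r in responses_ru]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```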
reducing timelines and costs are key factors for bio-pharmaceutical industries to accelerate process development and drug delivery to patients. enhancing the throughput of bioprocess development has become increasingly important for the screening and optimization of cell culture processes, and this challenge requires high-throughput tools. in a previous study [1], we showed that ambr® 15, a robotically driven mini-bioreactor system developed by tap-sartorius, could be advantageous for accelerating process development. the ambr® 15 system allows us to test a large number of experimental conditions in a single experiment. consequently, the number of production samples to be characterized for product quality attributes (pqa) increases as well: the bottleneck has moved from the generation of samples at the production bioreactor step to in-process analysis. for product quality attribute analysis at lab scale, protein purification is generally carried out on >5 ml columns, which is incompatible with the size of ambr® 15 bioreactors; moreover, the applied methods are relatively low throughput. the development of a high-binding-capacity resin (up to 70 mg/ml) [2], combined with high-performing new cell lines able to produce up to 5 g/l of recombinant monoclonal antibodies, requires the development of an efficient, high-throughput (hts) purification method. the use of robotic equipment for small-scale purification purposes is a great opportunity for us to tackle this bottleneck, by enabling high-throughput sample purification at smaller scale (200 μl). recombinant monoclonal antibodies were produced by a genetically engineered dihydrofolate reductase (dhfr)-/- dg44 chinese hamster ovary (cho) cell line. clarified cell culture fluid (cccf) was obtained from 2 and 2k liter bioreactors after three filtration steps. minipurifications were performed on a tecan freedom evo® robot with predictor robocolumns® containing 200 μl mabselect sure® resin.
larger-scale purifications were executed on an aktaxpress using hitrap protein a columns. to assess monoclonal antibody purification at small scale, we first tested the repeatability of the minipurification, purifying the samples 8 times on the same columns and using different columns, focusing on the yield of the purification and the impact on product quality attributes, especially the hmws. then, we compared those results to those obtained with the aktaxpress at larger scale, comparing the purification yield and the pqa of the protein-a eluates obtained with both purification systems. finally, we assessed the capability of the robot to perform hts of buffer and purification conditions, evaluating three different buffers at different concentrations and ph values, and also testing different column loading capacities. in this study we established that the tecan can be used as a robust platform for purification at small scale. we observed similar purification yields, intra- and inter-run. the analysis of the pqa1.a level also showed very high reproducibility, and the eluate ph showed strong comparability. table 1 shows that the coefficients of variation (cv) of the yield, hmws and eluate ph are low, demonstrating the good reproducibility of the purification. the strong reproducibility obtained between the different purifications showed that the tecan and the aktaxpress are similar in terms of purification performance and pqa (fig. 1a, b). the tecan is a versatile automated liquid handler allowing the screening of a huge number of purification conditions (fig. 1c) and the purification of large numbers of samples, even when sample amounts are limited. the tecan has the potential to purify more than 150 samples/day, reducing timelines and allowing us to deliver faster to patients.
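the coefficients of variation reported in table 1 are computed in the usual way, as the sample standard deviation over the mean; a minimal sketch:

```python
def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100, as used to judge
    run-to-run reproducibility of yield, HMW level, and eluate pH."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (var ** 0.5) / mean * 100
```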
viable cell density monitoring in bioreactor with lensless imaging
geoffrey
monitoring cell density and viability in mammalian cell culture bioreactors is a necessary task that still presents a number of challenges. the traditional measurement of bioreactor cell count and viability relies on the trypan blue exclusion method applied once a day. while automatic cell counters have reduced the statistical manual error, sampling the bioreactor remains a contamination risk and prohibits process control as the sampled volume becomes significant. lensless imaging technology is a new method for accurately determining cell concentration and viability without staining. the technique directly acquires the light diffraction properties of each individual cell through its hologram image, without any objective, lens or focus settings. living and dead cells have distinct holographic patterns that can be distinguished and precisely counted. we compare cell counts and viability between the reference method and our lensless imaging device, the cytonote counter. measurements are performed once a day on samples from 12 bioreactors, from inoculation to the end of the culture. we also assessed the repeatability of our method. another lensless imaging prototype is set up as a measurement chamber directly connected to a perfusion bioreactor, continuously receiving the bioreactor broth and therefore reproducing an in situ measurement. with a concentration range up to 40 × 10^6 cells/ml (fig. 1) and a viability range of 75-100%, we obtained a correlation factor of 0.98 between the two compared methods.
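the 0.98 correlation factor between the reference counts and the lensless counts is a standard pearson correlation over paired measurements; a stdlib-only sketch (example values are ours):

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired measurements, e.g. reference
    trypan blue counts vs lensless-imaging counts of the same samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```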
the large field of view allows the analysis of several thousand cells within a single image, keeping the statistical variability of the measurement as low as 3%. our measurement chamber prototype has demonstrated its capability for continuous viable cell density and viability monitoring. we are now working on designing a steam-sterilizable probe, and we envision lensless imaging becoming the future method of choice for on-line monitoring of suspension cell cultures. lensless imaging technology is capable of accurately and precisely monitoring viable cell density and viability with a combination of significant advantages, from low sample volume, label-free detection, quick measurement and a simple device to the high number of cells analyzed, which leads us to think that it is a good candidate for very small-scale bioreactor and high-throughput measurements. its high repeatability is also a key parameter in the effort to narrow batch-to-batch deviations. in addition, we demonstrate that this technique is potentially powerful for in-line and continuous monitoring of a lab bioreactor. we envision lensless imaging becoming the future method of choice for on-line and in-situ monitoring of suspension cells and a perfect tool for process control in fed-batch or perfusion mode, in single-use bioreactors or traditional steam-sterilized vessels. it can certainly become the first vcd measurement technique to work from cell line engineering to process development, pilot scale, and up to manufacturing scale.

fig. 1 comparison between both purification systems and the ability of the system to be used as a high-throughput tool for buffer screening: a purification yield (%), b pqa1.a (normalized), c impact of the ph and buffer concentration on the pqa1.a.

time-dependent product heterogeneity in mammalian cell fermentation processes

background: a consistent product quality is a major goal in the production of biotherapeutics, especially recombinant glycoproteins.
whereas it is unlikely that the polypeptide chain changes during a production process, posttranslational modifications and protein folding are sensitive to fluctuations in parameters and conditions. here we focus on protein glycosylation as one important indicator of product quality [1]. during a batch process, conditions change continuously. at the beginning, the supply situation for the cell is excellent, but the secreted material stays a long time in the culture fluid. later during cultivation, substrate provision decreases, whereas the exposure time of the protein to the culture fluid is much shorter. altogether this leads to product heterogeneity of the secreted protein during a batch culture. four different cell lines, two of human origin and two cho clones, producing four different recombinant glycoproteins were investigated in this study. together with their respective parental cell lines, the clones were cultured in three replicates in shakers. supernatants from the cultures were harvested at four time points. the removed culture volume was replaced by culture supernatant of the identically cultured corresponding parental cell line. the product was isolated from the supernatant and the glycans were released. one part of the released glycans was labeled with 2-ab and separated by hilic-fld. the other part was permethylated and analyzed by maldi-tof mass spectrometry (fig. 1a). the investigated proteins were an antibody and antithrombin iii from the cho clones, and α1-antitrypsin and c1-inhibitor from the human clones. the antennarity of the glycans is quite stable in all production phases. the degree of core fucosylation is very high in all products. a low fucosylation degree of antibodies may be favorable for a higher adcc performance [2]. some of the products showed an antennary fucosylation, which did not seem to change much across cultivation phases. nevertheless, this might be an issue due to an antigenic impact in the patient.
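the "degree" figures discussed here (core fucosylation, galactosylation, sialylation) are typically derived from the relative abundances of the detected glycan species; for biantennary glycans, one common convention weights each species by its fraction of modified antennae (g0 → 0, g1 → 1/2, g2 → 1). a sketch under that convention (not necessarily the authors' exact formula):

```python
def antennary_degree(unmodified_frac, mono_frac, di_frac):
    """Degree of an antennary modification (e.g. galactosylation) for
    biantennary glycans: each species weighted by the fraction of its
    two antennae carrying the modification. Fractions should sum to 1."""
    return 0.5 * mono_frac + 1.0 * di_frac
```

for example, a pool of 50% g0, 40% g1 and 10% g2 has a galactosylation degree of 0.3 under this convention.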
the antennary galactosylation changed noticeably for the antibody and α1-antitrypsin; in both cases the degree is highest in the first phase. incomplete galactosylation leads to truncated glycans, which inevitably leads to undersialylated antennae, as seen for α1-antitrypsin. sialylation is highest in the early phases and decreases over the cultivation time. sialylation of a therapeutic protein is important for its half-life in the patient; therefore highly sialylated products are desired [3]. in further studies, the consistency of the galactosylation and the sialylation will be investigated for fed-batch and long-term continuous cultures in comparison to batch cultures. since the feed solution or fresh media are present during such processes, the supply situation should be excellent for the whole cultivation time. the differences between the maldi-tof and hilic-fld data originate from complex and unresolved chromatograms (fig. 1b, chromatograms not shown). for that reason, coupling of hilic-fld and ms is very much recommended.

background: novel biologics are often selected from a large library of lead candidates in the initial stage of preclinical and clinical development. for this selection, there is a demand for high-throughput production of recombinant proteins of high quality and in sufficient quantity. transient gene expression offers a rapid approach to the production of numerous recombinant proteins for the initial-stage development of biologics. mammalian cells are the major host cells for transient gene expression, but they have the disadvantages of complicated operations and a high cost of culture. insect cells are easy to handle and can be grown to a high cell density in suspension with a serum-free medium. insect cells can also produce large amounts of recombinant proteins with the post-translational processing and modifications of higher eukaryotes.
hence, insect cells have been recognized as an excellent platform for the production of functional recombinant proteins [1, 2]. in the present study, the production of an antibody fab fragment through transient gene expression in lepidopteran insect cells was examined. the dna fragments encoding the heavy chain (hc) and light chain (lc) genes of an fab fragment of mouse anti-bovine rnase a [3] were respectively cloned into the plasmid vector pihaneo, which contained the bombyx mori actin promoter downstream of the b. mori nucleopolyhedrovirus (bmnpv) ie-1 transactivator and the bmnpv hr3 enhancer for high-level expression [4]. trichoplusia ni bti-tn-5b1-4 (high five) cells were co-transfected with the resultant plasmid vectors using linear polyethyleneimine (pei; mw 40,000). before transfection, the plasmids and pei were prepared in 150 mm nacl, ph 7.0, and incubated at room temperature for 5 min. when the transfection efficiency was checked, a plasmid vector encoding the enhanced green fluorescent protein (egfp) gene was also co-transfected. transfected cells were incubated with a serum-free medium in a static or shake-flask culture. culture supernatants were analysed by western blotting and enzyme-linked immunosorbent assay (elisa). the numbers of green fluorescent cells and total cells in the culture broth were determined using a flow cytometer. western blot analysis and elisa of culture supernatants showed that transfected high five cells secreted the fab fragment with antigen-binding activity. in static cultures, transfection and culture conditions, such as the hc:lc gene ratio, serum-free medium, dna:pei ratio, and dna amount per cell, were successfully optimized by flow cytometry of egfp expression in transfected cells and the yield of the secreted fab fragment measured by elisa. the effects of culture temperature and initial cell density were also examined by comparing cell growth and fab fragment production in shake-flask cultures.
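because dna and pei are dosed per cell in this kind of optimization, the amounts needed for a given culture scale linearly with cell number; a small calculator sketch (the default per-cell doses and example volume are illustrative placeholders, not prescriptions):

```python
def transfection_amounts(culture_volume_ml, cell_density_per_ml=1e6,
                         dna_ug_per_1e6_cells=5.0, pei_ug_per_1e6_cells=10.0):
    """Scale per-cell DNA and PEI doses (ug per 1e6 cells) to a culture
    of a given volume and cell density; returns (dna_ug, pei_ug)."""
    million_cells = culture_volume_ml * cell_density_per_ml / 1e6
    return (dna_ug_per_1e6_cells * million_cells,
            pei_ug_per_1e6_cells * million_cells)
```

for example, a 100 ml culture at 1 × 10^6 cells/ml with these defaults would need 500 μg dna and 1000 μg pei.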
under optimal conditions (medium, psfm-j1 (wako pure chemical industries, japan); hc:lc gene ratio, 3:7; dna, 5 μg/(10^6 cells); pei, 10 μg/(10^6 cells); initial cell density, 1 × 10^6 cells/cm^3; temperature, 24°c), a yield of more than 100 mg/l of fab fragment was achieved in 5 days in a shake-flask culture (fig. 1). transfection did not significantly affect the growth of high five cells compared with untransfected cells. transient gene expression using insect cells may offer a promising approach to high-throughput production of candidate proteins for the development of biologics.

the increasing demand for biopharmaceuticals produced in mammalian cells has led industries to increase the volumetric productivity of bioprocesses through different strategies [1, 2, 3]. in this context, fed-batch and perfusion cultures have attracted more interest than conventional batch processes. the efficient application of such alternative processes requires the availability of reliable on-line measuring tools for the estimation of cell density and cell metabolic activity [4]. the comparison of different culture strategies for a hek293 cell line producing ifn-γ is presented below: batch, fortified batch and fed-batch. in this context, a new robust feeding strategy based on the monitoring of alkali buffer addition was applied for the estimation of nutrient requirements. this method made it possible to increase cell density and product titer compared with the other strategies assessed. the three culture strategies were carried out in a 2-litre biostat b-dcu ii bioreactor. first, a reference batch and a batch using fortified medium (nutrient-enriched medium) were run, assessed in terms of viable cell density (vcd) and product titer, and set as initial references. then, a fed-batch was performed applying a feeding strategy in which nutrient requirements were estimated by monitoring the alkali buffer addition used for ph control.
results: the vcd and product titer achieved for the different culture strategies assessed (batch, fortified batch and fed-batch) are presented in table 1. in the fortified batch, increases of 145% in vcd and 350% in product titer were obtained compared with the batch. in the fed-batch culture (fig. 1), we observed that the alkali buffer addition profile matched the vcd evolution trend. thus, the monitoring of alkali buffer addition was used to estimate the nutrient requirements (i.e. the volume of feeding medium) at any time during the fed-batch phase. the feeding strategy based on alkali buffer addition made it possible to maintain the glucose concentration within a narrow range around its set point (about 20 mm) during the fed-batch phase. as a result, a higher vcd (16.6 × 10^6 cells/ml) was obtained than in both batch references: vcd was enhanced by 241% and 39%, and product titer increased by up to 381% and 7%, with respect to the batch and fortified batch, respectively. the results prove that a fed-batch strategy based on alkali buffer addition is a robust on-line monitoring method that enables optimization of the feeding strategy in fed-batch cultures. three different culture strategies have been tested in a bioreactor with a hek293 cell line producing ifn-γ. the results show that the higher the vcd reached, the higher the product concentration achieved. therefore, from a bioprocess development point of view, it is very interesting to implement strategies with a higher vcd outcome, such as the fed-batch operation mode. in this context, a new robust method for vcd estimation in fed-batch was applied. the alkali buffer addition necessary for maintaining the ph set-point is a reliable and easily measured on-line variable that provides information about by-product formation (mainly lactic acid). the monitoring of this variable can provide information about cell concentration, activity and metabolism, and can detect changes in the culture.
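the feeding rule itself can be sketched as a simple proportionality between daily alkali consumption and daily feed volume; the ratio below is a hypothetical calibration constant (the abstract does not give the actual conversion), in practice fitted so that glucose stays near its set point:

```python
def feed_schedule(base_added_ml_per_day, feed_per_base_ratio=4.0):
    """Daily feed-medium volumes proportional to daily alkali consumption.
    The ratio is an empirically calibrated constant (illustrative here)
    linking base demand, which tracks cell activity, to nutrient demand."""
    return [b * feed_per_base_ratio for b in base_added_ml_per_day]
```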
besides that, a relationship between alkali buffer addition and vcd can be established, since the former is strongly correlated with cell growth and metabolite consumption/formation. applying the alkali buffer addition measurement to implement an optimal feeding strategy in fed-batch makes it possible to enhance vcd and product titer compared with batch strategies.

a novel approach to high throughput screening for perfusion

background: perfusion systems for suspended mammalian cells are attracting growing interest in the biomanufacturing industry. continuous manufacturing is growing in the field and is encouraged by health authorities [1, 2, 3]. this work addresses the scale-down limitations inherent to continuous media exchange and cell retention by using a semi-continuous system. data were generated with a set of different clones that had previously been studied in fed-batch mode [4]. materials and methods: 4 cho-k1 cell lines expressing the same monoclonal antibody (mab) and issued from the same transfection were used as models. 3.5l bioreactors (sartorius) were used for fed-batch and perfusion production runs. the perfusion bioreactors were run using an alternating tangential flow filtration device (repligen, xcell™ atf 2 system). the cell biomass was controlled by removing cells through a bleed line, using a biocapacitance probe (hamilton, incyte). the perfusion rate (d) was fixed at one vessel volume a day (1 vvd). the semi-continuous runs were made in 50 ml shake tubes (tpp, tubespin® bioreactor 50). once a day, the tubes were centrifuged (5 min, 200 g), the supernatant removed (to mimic a perfusion rate of 1 vvd) and replaced with fresh media, and the cells were re-suspended. the clones' growth potentials were preserved across the systems (fig. 1). clone #3 always reached the highest viable cell density (vcd), followed by clone #1. clones #2 and #4 showed similar growth characteristics.
it is interesting to note that different vcd patterns were observed in the perfusion bioreactor although the cell biomass signals were similar for all 4 runs. this reflects the fact that the capacitance measures the biomass and not the absolute cell count [5]. to estimate the minimum cell-specific perfusion rate (cspr_min) in the semi-continuous experiment, the perfusion rate was divided by the maximum viable cell density (vcd_max). this value was compared to the cspr obtained during the 4th set-point (sp4) of the perfusion runs. as expected, the bleed fraction decreased when the capacitance set-point was increased and went down to 5% or less of the total perfusion rate (data not shown). since the bleed removes the excess biomass, it is an indication of how close to a limitation the system is. therefore, the cspr calculated at sp4 was considered the minimum cspr. the cspr_min values obtained in both systems were very close (table 1). the semi-continuous system can therefore be used to identify the cspr_min before running a continuous bioreactor, which facilitates decision making early in development (to define the target cell density for a defined perfusion rate). the specific productivity (qp) of the 4 clones was quantified at the maximum vcd (semi-continuous) or at sp4 (perfusion). absolute values are not representative, since the cell environment is so different in the two systems; nevertheless, a relative ranking proved to be indicative of the respective performances (table 1). the maximum cell growth in fed-batch, semi-continuous and perfusion was also compared, and the ranking was always preserved. both indications can be used to assess a performance ranking for different clones. the performance of the 4 clones was studied in 3 different cultivation systems: fed-batch/perfusion bioreactors and semi-continuous shake tubes.
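the rank-preservation argument can be checked mechanically: rank the clones on a metric in each system and compare the rankings. a stdlib-only sketch (function names and example values are ours, not the study's measurements):

```python
def ranks(values):
    """Rank clones from best (1) to worst on a metric, highest value
    first; ties are not handled in this minimal sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r


def ranking_preserved(metric_a, metric_b):
    """True if two systems (e.g. semi-continuous vs perfusion) rank the
    clones identically on a metric such as qp or maximum VCD."""
    return ranks(metric_a) == ranks(metric_b)
```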
the semi-continuous system was able to precisely predict the cspr_min, an important process parameter for perfusion. the specific productivity and maximum cell density rankings were preserved across the systems; the scale-down experiment can therefore be used to assess a performance ranking for perfusion clone screening. modulating antibody galactosylation through cell culture medium for improved function and product quality jenny y. bang, james-kevin y. tan the production of therapeutic antibodies (abs) requires high titers and excellent product quality to ensure efficient manufacturing and potent drug efficacy. glycosylation, or the attachment of sugars to organic molecules, is a critical quality attribute that can significantly alter ab binding, function, and therapeutic effect [1]. galactose is a key sugar of interest due to its significant impact on ab function and the ability to control galactosylation through the cell culture medium. herein, irvine scientific assessed the ability of media components to modulate galactose levels on a model therapeutic ab. various media compositions were able to modulate galactosylation levels without compromising cell growth and ab titers. in addition, an in vitro assay was utilized to evaluate the functional ability of abs to bind and activate complement-dependent cytotoxicity (cdc). differences in galactosylation significantly altered the abs' ability to induce cell cytotoxicity. furthermore, design of experiment analysis determined the optimal ratio of supplements to maximize galactosylation. this "optimized supplement" was verified and evaluated against other suppliers' galactosylation supplements in terms of growth, titer, glycan analysis, and ab function. the optimized supplement outperformed all other suppliers' supplements and resulted in the best overall cell growth, titer, galactosylation, and ab function. abs against cd20 were grown in balancd® cho growth a and were fed with balancd® cho feed 4 on days 3-7 of the cultures.
viable cell density and cell viability were assessed with a beckman coulter vi-cell xr, ab titer was assessed with a pall fortébio qke, and glycan analysis was performed on a perkinelmer labchip gxii. for the functional cdc assay, abs were incubated with daudi b lymphoblast cells and normal human complement serum. cell cytotoxicity was assessed with a promega cytotox-glo kit. various supplements were evaluated in fed-batch cultures and resulted in 15-45% ab galactosylation without compromising cell growth or ab titers. design of experiment analysis determined an optimal composition, deemed the "optimized supplement," which was evaluated against a panel of galactose-modulating supplements from other suppliers. the optimized supplement resulted in a viable cell density (vcd) and cell viability similar to the fed-batch culture control, which had no supplements (fig. 1a). supplements from supplier 1 resulted in vcds ranging from similar to the control down to half of it, while supplements from supplier 2 resulted in very low vcd and percent viability. all of the supplements except those from supplier 2 resulted in ab titers similar to the control (fig. 1b). due to the poor growth and subsequently low titer from supplier 2's supplement, supplier 2 was not evaluated further. the glycan profiles were analyzed and are presented in fig. 1c. all the evaluated supplements were able to raise galactosylation; however, only the optimized supplement and the 2x supplier 1 supplement resulted in over 40% galactosylation. the function of the abs was further evaluated in a cdc assay (fig. 1d). abs from the optimized supplement were more effective than the control abs and had a significantly lower half-maximal effective concentration (ec50, 1.19 μg/ml) than the control (1.71 μg/ml). abs from the 2x supplier 1 supplement had an ec50 similar to the control, which may be due to the higher man5 percentage of those abs.
an optimized supplement was produced through fed-batch evaluation and design of experiment analysis. the optimized supplement outperformed all supplements from other suppliers and resulted in the best overall cell growth, glycan profile, and functional ab activity (table 1). industry practice for mammalian cell culture media and feed development typically employs a high-throughput screening (hts) platform along with large sets of experiments [1]. modern hts systems often include robotic liquid handlers to replace labor-intensive steps. to align with advancements in the field, a semi-automated hts platform was developed to facilitate in-house media and feed development for early-stage biologics projects. selecting appropriate instruments and integrating them into a seamless system are the keys to an hts platform. the developed hts platform uses 24 deep well plates (dwps) as culture vessels, the liquid handler of the advanced microscale bioreactor (ambr15) for media/feed formulation preparation in an aseptic environment, the nyone cell imager for viability and cell growth analysis, the tecan freedom evo's liquid handler for activity assay sample preparation, and the cedex bioht for high-throughput metabolite analysis. 24 dwps offer cell growth comparable to shake flasks and a layout compatible with the ambr15, which makes the 24 dwp an ideal candidate for the platform. in addition, the user-friendly design of experiments (doe) interface and liquid handler function of the ambr15 expedite the formulation preparation of varying doe conditions [2]. a macro program was written in excel to enable the easy import of doe designs from major statistics software packages, such as jmp and simca, into the ambr15. performance qualification of each component was performed prior to implementing the hts platform. comparable cell growth profiles and productivity were achieved between shake flasks and 24 dwps (fig. 1a), indicating a compatible cell culture environment for the cells.
cell counts using the nyone gave an identical cell growth ranking as the traditional count from the vi-cell xr (fig. 1b). the freedom evo's liquid handler was optimized to produce activity results comparable to manual operation while expediting the sample preparation with improved consistency (fig. 1c). finally, implementing the liquid handler function of the ambr15 to support media and feed formulation significantly reduced the labor for each experiment. a summary of the capability comparison between the hts platform and the traditional method is listed in table 1. a case study of a complex feed screening with a definitive screening design was completed using the semi-automated hts platform. this experiment, containing more than 60 feed formulations in duplicate, was handled by one operator and delivered a 40% improvement in productivity within a 4-week period (data not shown). in addition, implementing the hts platform for this study also resulted in an ~80% reduction in labor while improving the traceability of formulation preparation. a semi-automated hts platform was developed to support media and feed screening and development for early-stage biologics projects. the platform utilizes 24 dwps, the nyone cell imager, the ambr15, the freedom evo liquid handler system, and the bioht metabolic analyzer to accelerate the screening process. this screening platform not only improves process throughput, operational precision, and traceability of formulation preparation, but also reduces the labor for media and feed formulation preparation. background a perfusion medium requires high concentrations of specific nutrients while balancing other components to support intensified perfusion processes. using a combination of design of experiment (doe), multivariate analysis (mva), and spent media analysis, we developed a catalog "de novo" perfusion medium by working with multiple cho cell lines and proteins.
the optimization of the medium in bioreactors using alternating tangential flow (atf) cell retention devices reduced the minimum cspr from over 80 pl/cell/day to under 35 pl/cell/day for most cell lines while increasing specific productivity during 30-day steady states with stable growth rates, viability, volumetric productivity and product quality. high throughput screening (hts) was performed with seven cell lines, while four were used in bioreactors: cho-s, dg44, and two chozn® gs lines, each producing a different monoclonal antibody or, in one case, a fusion protein. for hts experiments, cells were inoculated at 2.0x10^6 vc/ml with a 30 ml working volume in 50 ml tpp® tubes and cultured for 7 days in a multitron shaken at 200 rpm, 37°c, 80% rh, and 5% co2. for benchtop perfusion, cells were inoculated at 0.4-2.0x10^6 vc/ml in 3l applikon bioreactors (applikon, netherlands) with a 2l working volume. bioreactors were operated at 350 rpm, 37°c, 40% do, and a ph of 6.9 or 7.1±0.05 depending on the cell line. oxygen was supplied through an l-sparger or microsparger as needed, and excell® antifoam (milliporesigma, germany) was added at a maximum rate of 0.25% v/v to control foam. at a cell concentration of ~6.0x10^6 vc/ml, perfusion was initiated using the atf2 (repligen, massachusetts), with a bleed set to maintain cell concentrations at 50 or 80x10^6 vc/ml. two "de novo" prototype media were developed using doe and mva in hts with tpps and an ambr®15 [1], and one was chosen for further development after comparison to a basal medium enriched with feed in bioreactors. eleven components were identified as significant effectors of critical parameters for perfusion processes across the evaluated cell lines. doe central composite experiments were run and component concentrations were optimized in the selected prototype.
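at perfusion steady state, the cell-specific consumption rate of a medium component follows from a simple mass balance, q = d·(c_feed − c_harvest)/vcd; calculations of this kind are what allow amino acid concentrations to be rebalanced against spent-media data. the sketch below uses illustrative numbers, not measurements from the study.

```python
# Illustrative steady-state perfusion mass balance (not the authors'
# actual calculation): specific consumption rate of a medium component,
# q = D * (C_feed - C_harvest) / VCD.

def q_pmol_per_cell_day(
    d_vvd: float,          # perfusion rate, reactor volumes per day
    c_feed_mm: float,      # component concentration in fresh medium, mmol/L
    c_harvest_mm: float,   # component concentration in harvest/spent medium, mmol/L
    vcd_per_ml: float,     # viable cell density, cells/mL
) -> float:
    # 1 mmol/L == 1 umol/mL, so the quotient comes out in umol/cell/day
    delta_umol_per_ml = c_feed_mm - c_harvest_mm
    return d_vvd * delta_umol_per_ml / vcd_per_ml * 1e6  # umol -> pmol

# illustrative: 1 vvd, 4 mM fed vs 2 mM left in spent medium, 50e6 cells/mL
print(q_pmol_per_cell_day(1.0, 4.0, 2.0, 50e6))  # ≈ 0.04 pmol/cell/day
```

a component whose harvest concentration stays far above zero at the target cspr is a candidate for reduction, as described for the accumulating amino acids.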
in parallel, amino acid specific consumption rates were calculated from bioreactor spent media samples and used to adjust the concentrations of amino acids to target a reduced cspr. increasing specific amino acid concentrations resulted in a significant reduction of the minimum cspr across all tested cell lines; for example, the cspr of cho-s was reduced from 60 to 39 pl/cell/day (table 1). however, even at the lower cspr, spent media analysis revealed excess concentrations of some amino acids, so specific accumulating amino acids were reduced and components were streamlined for the final medium: excell® advanced hd perfusion medium. using this medium, a cho-s and a chozn® gs cell line producing a fusion protein were cultured at a cspr of less than 40 pl/cell/day with a vcd of 50x10^6 vc/ml. the metabolic profile, productivity, and product quality were constant over the 30-day steady state. the chozn® gs cell line was also tested at 80x10^6 vc/ml with a cspr of 33 pl/cell/day (fig. 1). we have developed a catalog perfusion medium from first principles, ensuring broadness of application by using seven cell lines in scaled-down systems and four in perfusion bioreactors. the final catalog medium showed significant improvements in productivity across all cell lines, with reduced csprs when compared to enriched fed-batch medium or the initial prototypes (table 1). there is a rising demand for accelerated process development and for increased efficiency and economics of biopharmaceutical production processes. furthermore, increased process understanding has evolved from the process analytical technology (pat) initiative and the quality by design (qbd) methodology. in contrast to one-factor-at-a-time methods, statistical design of experiment (doe) methods are widely used to develop biopharmaceutical processes.
even if high-throughput systems can handle these numbers of experiments in parallel, the heuristic restriction of boundaries and the high number of factors result in stepwise iterations with multiple runs. therefore, the combination of model-based simulations with doe methods (mdoe) for the development of sophisticated cell culture processes is a novel tool for process development [1]. it is used to reduce the number of experiments during doe and the time needed for the development of more knowledge-based cell culture processes. this concept was applied to the optimization of the initial glutamine and glucose concentrations of a cho batch process. a mechanistic model was adapted and modified from [2] and used to describe the dynamics of cell metabolism and antibody production of an il-8 antibody-producing cho cell line (see abbreviations of fig. 1 for cultivation details). experiments were simulated and compared to a fully experimental doe. as can be seen from table 1, user-defined constraints were chosen to obtain a stable and reproducible process with the aim of maximizing the cell density while decreasing lactate and ammonia production. at first, the experimental space was estimated by simulating the responses for broad concentration ranges and calculating the multiple response desirability function (fig. 1a). this results in a small area (turquoise) suggested as the experimental space. experiments were planned within these boundaries and responses were either simulated (fig. 1b, 4 cultivations for fitting the model) or compared with the purely experimental responses (fig. 1c, 16 cultivations). the optimal concentrations for glutamine and glucose with respect to the constraints are in the lower right corner and similar for both methods (red frame, fig. 1). compared with the fully experimental design, mdoe results in a 75% reduction in the number of experiments (4 experiments for modelling vs. 16 experiments in the experimental doe).
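a multiple response desirability function of the kind used to score the simulated design space is typically a derringer-type construction: each response is scaled to [0, 1] according to its goal (maximize cell density, minimize lactate and ammonia) and the scaled values are combined by a geometric mean. the sketch below is a minimal illustration; the bounds and response values are invented, not the study's actual constraints.

```python
# Minimal Derringer-type desirability sketch. All targets/bounds below
# are hypothetical examples, not the constraints from table 1.
import math

def desirability_max(y, low, high):
    """1 at/above `high`, 0 at/below `low` (goal: maximize, e.g. cell density)."""
    return min(1.0, max(0.0, (y - low) / (high - low)))

def desirability_min(y, low, high):
    """1 at/below `low`, 0 at/above `high` (goal: minimize, e.g. lactate)."""
    return min(1.0, max(0.0, (high - y) / (high - low)))

def overall(desirabilities):
    """Geometric mean: any fully undesirable response drives the score to 0."""
    if any(d == 0.0 for d in desirabilities):
        return 0.0
    return math.exp(sum(math.log(d) for d in desirabilities) / len(desirabilities))

d = [
    desirability_max(8.0, low=2.0, high=10.0),  # peak VCD, 1e6 cells/mL (hypothetical)
    desirability_min(1.5, low=0.5, high=3.0),   # lactate, g/L (hypothetical)
    desirability_min(2.0, low=1.0, high=5.0),   # ammonia, mM (hypothetical)
]
print(round(overall(d), 3))  # → 0.696
```

evaluating this score over a grid of simulated glutamine/glucose conditions is what produces a map like fig. 1a, where the high-desirability region marks the suggested experimental space.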
the method is intended to optimize cultivation strategies for mammalian cell lines and to evaluate them before experiments have to be performed at laboratory scale. this results in a significant time and cost reduction during process development and process establishment. the strategy is especially intended for use in multi-single-use devices to speed up process development. (fig. 1 caption: at a target cell density of 50x10^6 vc/ml, volumetric productivity was stable for a 30-day steady state with excell® advanced hd perfusion medium. shorter steady states were tested at 80x10^6 vc/ml.) background for the large-scale production of therapeutic glycoproteins, fed-batch culture has been widely used for its operational simplicity and high titer. however, repeated feeding of medium concentrates and/or addition of a base to maintain optimal ph during fed-batch culture leads to an increase in osmolality. the hyperosmolality affects glycosylation in a protein-specific manner. however, the mechanism behind such osmolality-dependent variations in glycosylation in recombinant chinese hamster ovary (rcho) cells remains unclear. in this study, to better understand the effect of hyperosmolality on the glycosylation of a protein produced from rcho cells, we investigated the expression of 52 n-glycosylation-related genes and the n-linked glycan structure in fc-fusion protein-producing rcho cells exposed to hyperosmotic conditions. furthermore, to validate the effect of hyperosmolality on protein glycosylation, we performed hyperosmotic culture supplemented with betaine, an osmoprotectant, and then analyzed the n-linked glycan structure and mrna levels of n-glycan branching/antennary genes. after three days of hyperosmotic culture, nine genes (ugp, slc35a3, slc35d2, gcs1, manea, mgat2, mgat5b, b4galt3, and b4galt4) were differentially expressed over 1.5-fold relative to the control, and all of these genes were down-regulated.
n-linked glycan analysis by anion exchange and hydrophilic interaction hplc showed that the proportion of highly sialylated (di-, tri-, tetra-) and tetra-antennary n-linked glycans was significantly decreased upon hyperosmotic culture. addition of betaine, an osmoprotectant, to the hyperosmotic culture significantly increased the proportion of highly sialylated and tetra-antennary n-linked glycans (p ≤ 0.05), while it increased the expression of the n-glycan branching/antennary genes (mgat2 and mgat4b). thus, decreased expression of genes with roles in the n-glycan biosynthesis pathway correlated with the reduced sialic acid content of the fc-fusion protein caused by hyperosmolar conditions. conclusions taken together, the results obtained in this study provide a better understanding of the detrimental effects of hyperosmolality on n-glycosylation, especially sialylation, in rcho cells. the identified genes, particularly mgat2 and mgat4b, are potential targets for engineering in cho cells to overcome the impact of hyperosmolality on glycoprotein sialylation. disruptive cost-effective antibody manufacturing platform based on cutting-edge purification process v. medvedev, m. duyck, t. albano, j. castillo univercells sa, gosselies, belgium correspondence: v. medvedev (v.medvedev@univercells.com) bmc proceedings 2018, 12(suppl 1):p-093 background demand for high-quality monoclonal antibodies is growing exponentially, calling for new production capacities. overcoming the current limitations of conventional manufacturing strategies, namely the high capital investment and production cost, can only be achieved through innovative process designs based on the latest technologies. this study presents a process design combining fed-batch technology with continuous multi-column capture.
an advanced cell culture clarification method was introduced to simplify downstream operations and increase the overall cost-effectiveness of the process, for an optimized production of recombinant proteins. this study was performed with cho cells expressing a monoclonal antibody targeted against the coronavirus responsible for the middle east respiratory syndrome (mers), developed by organic vaccines™ and the nih, kindly provided to univercells. upstream process: -fed-batch, 12-day culture at 10l scale with cd-cho chemically defined media and feeds. harvest treatment: -precipitation of impurities in the production bioreactor using organic compounds (<1% v/v) and flocculation by electropositive organics (<0.1% w/v). -acidic ph and physiological conductivity. upstream processing and harvest treatment: the culture reached 0.5 g/l (8x10^6 cells/ml; 90% viability), and the harvest treatment was found to be very effective in terms of impurities clearance. capture: capture strategies were evaluated from the point of view of simplifying downstream operations, with hcp impurity content monitored as a key performance indicator. -protein a affinity chromatography: advanced harvest clarification enabled major improvements in affinity capture, in terms of eluate purity and reduction of host cell impurities (<35 ppm in all conditions tested) (fig. 1). -cation exchange chromatography: cex allows higher capacities (>100 g/l) than protein a, whilst being more affordable (from 2- to 6-fold cheaper). low residual hcp (<500 ppm) was observed with all cex resins tested. without harvest treatment and clarification preceding the capture studies (either affinity or cex), results showed a lower binding capacity of the resin, a higher content of hcp in the eluate (up to 2000 ppm), a higher content of hmw species in the elution fraction (up to 3-fold higher) and a significant turbidity of the neutralized eluate.
-continuous multicolumn chromatography: further options to increase cost efficiency include using a continuous multicolumn setup (table 1). two models were assessed based on two different static binding capacities (sbc), demonstrating that 4 to 6 columns of 100 ml were able to process a 200l production in less than 24h. this method provides a great opportunity for designing simplified and low-footprint mab dsp processes, while maintaining a similar or achieving a superior quality profile compared to standard approaches: -harvest treatments followed by depth filtration proved to be a cost-efficient way to obtain pretreated feed and minimize the burden on downstream operations. -protein a resins exhibited the advantage of extracting key contaminants during harvest treatment, while cex was confirmed to be a competitive capture strategy. -switching from batch to continuous multicolumn mode allowed a complete batch to be processed in less than 24 hours, requiring lower media and resin volumes. followed by a single polishing step, such a process set-up strongly supports the reduction of operations required to deliver a high-quality product. analyses of product quality of complex polymeric igm produced by cho cells background immunoglobulin m (igm) antibodies are secreted by b cells as the first defense against invading pathogens during the primary immune response. some igm antibodies have already gained orphan drug status, which shows their unique capability in the therapy of rare diseases. potential fields for applications are being discovered with increasing knowledge about these molecules. it seems that the most active forms are pentameric and hexameric igms. unfortunately, recombinant production of igms is rather difficult, as secretion and correct polymer formation result in low expression yields and mixtures of polymers.
we established stably producing chinese hamster ovary (cho) dg44 cell lines to analyze cellular and extracellular factors that influence the quantity and quality of the produced recombinant polymeric igm in future studies [1]. one quality parameter is the polymer distribution, which can be measured directly in cell culture supernatant using densitometric analyses [2]. additionally, we developed a very efficient single-step affinity purification strategy using the poros captureselect igm affinity matrix to analyze pure igms. for more precise measurements of the igm isoform distribution we separated the purified polymers by size exclusion high performance liquid chromatography (sec-hplc). our cho dg44 cell lines grow to peak cell concentrations of 4.5x10^6 cells/ml in erlenmeyer flasks and 4.0x10^6 cells/ml in bioreactors. similar productivity of approximately 50 mg/l was observed for cells cultivated in both cultivation vessels in a non-optimized batch culture using chemically defined media. analysing how cultivation conditions affect the fraction of polymers may offer clues about the assembly of polymers and the challenges of igm production. we quantified the polymeric distribution of igm directly in the supernatant using a densitometric method [2]. cultivated under standard conditions (37°c, ph 7), igm012 is produced as 90% pentamers, whereas igm012_gl only consists of approximately 80% pentamers. the purified igm012_gl was analysed with sec-hplc and contained 81% pentamer and 19% dimer, which is comparable to the results achieved with densitometry. the purification of the igm antibodies was quite challenging, as the manufacturer recommends acidic elution, which led to aggregation and inefficient elution of our model igms. therefore, we screened for different elution buffers that prevent denaturation and aggregation.
by combining high salt concentrations with a moderate ph reduction we optimized the elution conditions to 88-99% igm recovery, which corresponded to a five- to six-fold improvement compared to the manufacturer's conditions. sds-page analysis and sec-hplc showed that our elution strategy resulted in a very pure product after a single chromatographic step. the purification strategy was verified with igm103, igm104 and igm617. our model igms were produced in a ratio of approximately 4:1 pentameric to dimeric igm, measured concordantly with both analytical methods. process development on igm purification using the poros captureselect human igm affinity matrix enabled the recovery of highly pure fractions. through optimization, by combining mild ph and high salt concentrations, the relatively low elution yields were increased by a factor of 5-6. applying densitometry and sec-hplc, we will investigate how culture conditions influence polymer formation in the future. currently, no small-scale (<0.5l) cell culture system is commercially available for high cell density perfusion cultivations for use in high throughput screening studies. to increase throughput for process characterization activities at janssen vaccines and prevention, a shaker flask-based scale-down model was developed. however, the control possibilities of shaker flask cultures are technically very limited and differ from a bioreactor-controlled process. in addition, the sensitivity of the shaker flask model should allow the detection of the effects of process parameters on critical quality attributes (cqas) of the vaccine produced at large scale. iterative experiments were performed in shake flasks to evaluate the influence of cultivation parameters such as shaking speed, working volume, co2% in the incubator and daily base additions on cultivation parameters (such as cell growth, ph and do). in addition, a medium exchange was tested to mimic the perfusion mode used in the bioreactor process.
the presens shake flask reader was implemented to allow for ph and do monitoring. the conditions for which the performance, as reflected in the specific virus titer, showed the best fit were selected. at these conditions, a series of parallel shaker flask infections was conducted to demonstrate statistical equivalence of the performance parameter and cqas (such as cell-specific iu titer and vp/iu ratio) between the production-scale and reduced-scale processes, and thus to qualify the shake flask as a scale-down model. a daily medium exchange by centrifugation was implemented and cultivation parameters for shake flasks were identified. based on the performance parameter (cell-specific vp titer) and the cqas of the vaccine (cell-specific iu titer and vp/iu ratio), equivalence between the production-scale and scale-down systems was confirmed. the scale-down model data fall into the 95% prediction intervals calculated on manufacturing data, whereas scale-down model data from batch mode experiments (using non-optimized cultivation conditions) do not. the shaker flask as a scale-down model for the 10l bioreactor perfusion process was thus qualified. this model is a tool to screen a subset of process parameters at a higher throughput, thereby reducing process characterization timelines. background to date, the market for therapeutic proteins, especially monoclonal antibodies, has gained more and more importance in the pharmaceutical field. to meet the increasing demand for these products, the industry has made tremendous efforts to generate highly efficient production systems. one of the pharmaceutical industry's research focuses is the improvement of the secretion process in eukaryotic cells. in mammalian cells, the efficiency of protein transportation strongly depends on the translocation of the nascent protein into the er, which is mostly conducted by the signal peptide (sp) coupled to the n-terminus.
through the interchangeability of signal peptides between products and even species, a large variety can be used to enhance protein expression in already existing production systems. materials and methods at first, the influence of four different natural sps (sp(7), sp(8), sp(9) and sp(10)) on the secreted amount of an igg4 model antibody (product a) was compared in fed-batches using a cho dg44 host cell line. in the second part, one promising sp candidate showing improved secretion (sp(9)) was identified and the influence of this sp on four additional antibody products, which varied in their expressibility from good to mid/bad, was investigated. in both approaches, the standard sp was included for comparative reasons. in the first approach, the four signal peptides sp(7), sp(8), sp(9) and sp(10) were screened for their potential to improve the product secretion of cho dg44 cells expressing a model antibody (product a). the results revealed a 2.4-fold increase in the average final fed-batch antibody titer of sp(9) when compared to the standard sp approach (standard sp = 0.44 g/l; sp(9) = 1.50 g/l). in the second approach, the enhancing capacity of sp(9) on the secretion of four other igg products (named products b to e, table 1) was further evaluated. an improved performance was observed for all products when comparing sp(9) and the standard sp in a fed-batch process (fig. 1), with an increase in average final fed-batch titers ranging from 28 to 354% and up to 290% in cell-specific productivities. taken together, with a positive influence on the final concentrations of all tested products, the results obtained with sp(9) can be summarized as follows: -signal peptide sp(9) was identified as a promising candidate with an average 2.4-fold titer increase during screening of four signal peptides. -sp(9) was able to improve production titers by up to 354% compared to the standard sp. -sp(9) was able to improve cell-specific productivities by up to 290% compared to the standard sp.
-future usage of sp(9) contributes to the further optimization of sartorius stedim cellca's standard cell line development process. new platform for the integrated analysis of bioreactor online and offline data lukasz gricman 1, milan ganguly 2, amanda fitzgerald 3, hans peter more and more experiments are used to assess bioreactor suitability and the stability of clones, to evaluate media composition and other process parameters, and to start upscaling campaigns. this has resulted in a major bottleneck due to the increase in data capturing, processing, aggregation, visualization, and statistical analysis. in addition, the association of the data with the experimental context (e.g., fermentation protocols, media recipes, bioreactor control parameters) is not easily accomplished in high throughput. the data generated in the process must not only be analyzed, but also managed and stored to enable easy tracking and relating to historical records. furthermore, the processes are often developed by global teams interacting in complex enterprise it ecosystems. therefore, new and high-performing systems for data capture, processing, and analysis need to be integrated in order to enable storage and correlation of experimental context information and various types of time-course analytics data. we have developed genedata bioprocess™, a new enterprise platform for bioprocess development. the platform enables automatic capture and visualization of all online and offline data (e.g., ph, o2, metabolic data), auto-calculations and aggregations (e.g., ivcd, qp, consumption rates) and multi-parametric assessment of any type of time-series bioreactor data in the context of experimental protocol data (e.g., process parameters, feeds). genedata bioprocess comes with dedicated interfaces for integrating with relevant laboratory instruments, control systems, statistical analysis software packages and custom enterprise solutions.
it enables the modeling and tracking of complex nonlinear workflows and supports decision making in bioprocess development. the data can be analyzed in the context of upstream process development, and can also be correlated to other unit operations. automation support assists with the ever-increasing throughput of bioprocess development operations, and with the analysis of experimental data and process parameters across unit operations or even different projects. this overall integration enhances process development workflows. highlighted use cases describe the selection of the best producer clones (fig. 1a), the identification of optimized media feeding strategies (fig. 1b), and the comparison of clone performance across different fermentation scales (fig. 1c). a special focus is on the analysis of data from micro- and bench-top bioreactors (such as the ambr15™ and dasgip™ systems) operated in parallel. these bioreactors allow for an increased throughput of clone selection and process optimization studies, which in turn leads to an increase in data generation. genedata bioprocess supports integration with such systems and enables a comparison of data regardless of the instrument provider or scale. automated bioreactor data analysis allows development groups to take advantage of even richer datasets and, as data management is built into the system, the data can be easily tracked and associated with historical records. another focus is on cross-reactor-scale comparisons. data coming from different bioreactor scales can be easily imported into the platform and analyzed to establish the best conditions for upscaling. genedata bioprocess enables the correlation of process parameters (e.g., fermentation protocols, media recipes, bioreactor control parameters) with key performance indicators of the processes (e.g., titer, qp) and the product quality attributes (e.g., aggregation, glycosylation profiles).
finally, bioreactor time course data can be tracked together with clone analytics and product quality parameters, which makes the platform uniquely able to support end-to-end biopharma development. upstream bioprocesses are at particular risk of contamination from adventitious agents. the typical 0.1 μm filters used at this step protect bioreactors from bacteria and mycoplasma but offer no protection from viral contaminations. a new polyethersulfone (pes) upstream virus filter, viresolve® barrier, has demonstrated high levels of microorganism retention: full retention for bacteria and mycoplasma (>8.0 lrv, log reduction value) and ~5 lrv for small viruses, such as parvoviruses. it also has improved flow and capacity as compared to virus removal filters designed for monoclonal antibody purification. given the small pore size of virus-retentive filters, implementing a virus filter upstream of the bioreactor raises the question of whether critical cell culture media components are removed. therefore, it is important to evaluate cell culture performance and protein quality attributes using virus-filtered media to ensure that filtration does not negatively impact the process. materials and methods ex-cell® cho media and corresponding feeds were processed through either viresolve® barrier filters or 0.22 μm filters (control). media composition post-filtration was evaluated by high performance liquid chromatography (hplc), inductively coupled plasma/optical emission spectrometry (icp-oes), and nuclear magnetic resonance (nmr). recombinant cho cells were cultured in fed-batch culture. cell density and viability were measured by vi-cell™ cell viability analyzer, while metabolites were analyzed by bioprofile® flex analyzer. shake flasks and bioreactors were utilized to verify that surfactants, such as poloxamer (which are essential for shear protection in stirred tank bioreactors and can be difficult to filter), had not been removed during filtration.
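the retention figures quoted above are log reduction values (lrv), the base-10 logarithm of the ratio of challenge to filtrate concentration; a minimal sketch of the standard calculation (the titer units, e.g. pfu/ml, are illustrative):

```python
import math

def lrv(challenge_titer, filtrate_titer):
    """Log reduction value: log10(input concentration / output concentration).
    A filter retaining 99.999% of a virus challenge gives an LRV of 5."""
    return math.log10(challenge_titer / filtrate_titer)
```

for instance, reducing a challenge of 1e8 pfu/ml to 1e3 pfu/ml corresponds to an lrv of 5, the order of parvovirus retention quoted above.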
monoclonal antibody titer was quantitated by protein a hplc. characterization of the antibody product quality was assessed via weak cation-exchange chromatography (charge heterogeneity), size exclusion chromatography (aggregate profile), and 2-ab fluorescent labeling with np-uplc (glycan species). media and feed compositions were unaffected by filtration through the virus barrier filter. no significant differences in concentrations were observed with icp-oes (trace metals) or hplc (amino acids and water-soluble vitamins). nmr showed no change in the organic composition of the media, including poloxamer. the aromatic region with vitamin and amino acid signals is shown (fig. 1a). cell cultures showed no differences in cell growth or titer, in either shake flasks or bioreactors (fig. 1b). cell viability was unaffected, metabolite levels were within limits, and titer was consistent. the protein quality of the secreted antibodies showed no differences in the glycosylation pattern (fig. 1c), amount of aggregates, or charge variants. the risk of virus contamination in the bioreactor remains a concern for biotherapeutic manufacturers, as there is no universal technology that provides a reliable, cost-effective solution for virus removal that can be applied to all components of cell culture media. this study evaluated the viresolve® barrier filter, which provides an efficient and easy way to protect bioprocesses from adventitious virus contamination. study results demonstrated that media and feed compositions, cell culture performance, and product quality were unaffected by filtration through the viresolve® barrier filter. implementation of viresolve® barrier filters provides efficient filtration performance, high virus retention, and minimal cell culture impact, and offers a viable option to improve the overall virus risk mitigation strategy for the manufacture of biotherapeutics. b tracking of process conditions together with online and offline performance analytics.
the system allows users to flexibly define tracked parameters and select optimal process conditions. c comparison of process performance across different reactor scales. the open architecture makes genedata bioprocess a provider-agnostic system that allows data to be aggregated and compared regardless of provider background bi- and multi-specific antibodies, antibody-cytokine fusion proteins, nonimmunoglobulin scaffolds, chimeric antigen receptors (cars), engineered t-cell receptors (tcrs), and tcr-based bispecific constructs can provide significant advantages for use in cancer immunotherapy. however, as highly engineered molecules they pose new challenges in design, engineering, cloning, expression, purification, and analytics. we have thus implemented an infrastructure that addresses these challenges and enables the industrialization of these various novel therapeutic platforms. in close collaboration with leading biopharmaceutical companies, we implemented a workflow, data management, and analysis support system, genedata biologics™, enabling the automated design, screening, and expression of large panels of therapeutic candidates using these novel technologies. we have also built tools for developability and manufacturability assessments of these complex molecules. we have ensured that there is a seamless integration of all data generated and that functionalities such as bulk protein and vector generation using our in silico cloning engine, configurable library of template vectors and cloning strategies, fully annotated in silico protein molecules and dna constructs, and dna synthesis verification support, can be used for the newest protein formats and molecule topologies. we implemented data structures and data handling systems, which mirror how these complex next-generation biologics molecules and cell lines are being designed, screened, and analyzed. the result successfully addresses workflows for tcr optimization and engineering.
we exemplified this with the generation and evaluation of a panel of engineered tcrs with an alpha chain cdr3 randomization and successfully supported the analysis and selection of beneficial mutations. the system also successfully supported workflows for the design and generation of a panel of tcr-based bispecifics (tcr coupled with anti-cd3) using automated molecule registration and in silico cloning tools and subsequent capture of expression, purification, and functional and analytical characterization data. on the car-t cell front, the system is able to provide traceability of the work from antibody generation, optimization, and car engineering (e.g., fusion of the scfv with cd3-zeta and co-stimulatory domains to mimic the natural tcr complex) to the engineering of the t-cell. the genedata biologics platform successfully enabled automation, increased data integrity and traceability during research and development work, and will contribute towards the industrialization of these very exciting novel approaches for cancer immunotherapy. optimal selection of therapeutic antibodies and production cell lines by assessment of critical quality attributes and developability background the increasing cost of bringing a new drug to the market has put significant pressure on biopharma organizations. to increase efficiency in r&d processes and reduce costs, organizations need to evaluate potential drug candidates earlier in the r&d process, eliminate those with undesirable characteristics, and focus on the most promising candidates. after design and thorough testing of successful candidates, efficient production of new biological entities in mammalian cell lines is necessary. the main goal here is to find a suitable cell line and optimal upstream and downstream processing conditions that not only lead to a satisfactory product yield, but also to a product with the desired biochemical properties.
the evaluation of production cell lines, processes, and product quality attributes is performed earlier and in higher throughput for an increasing number of drug candidates. in addition, new methods in molecular and cell biology (e.g., novel genome engineering approaches such as crispr/cas9), in analytics [e.g., process analytical techniques (pats)], in process miniaturization, and in automation promise to make process development more efficient. however, the management and analysis of the increasing amount of experimental data during candidate selection and cell line and process development has become a bottleneck. in addition, quality-compromising steps in biopharma organizations can negatively impact the cost of goods and substantially prolong the drug candidate's time to market. therefore, systems for integrated management and analysis of well-structured and curated data that comprehensively integrate molecule and sample information, manufacturing process parameters, and process and product quality attributes are needed. critical quality attribute (cqa) assessment should be enabled along the whole bioprocess development workflow, including cell line development, upstream and downstream process development, as well as analytical and formulation development. we have developed a comprehensive platform, genedata bioprocess™, which supports drug candidate developability and manufacturability assessment and bioprocess development. the platform captures and structures the cell line and process parameters together with analytical data for cell lines, processes, and protein products. the protein analytical data being tracked include biological data (such as bioactivity and immunogenicity) and physicochemical properties. these properties include glycosylation, chemical liabilities (such as deamidation and oxidation), aggregation, stability under different conditions (low ph, low and high temperature), solubility, and impurities.
genedata bioprocess™ simplifies and streamlines laborious manual processes and supports tools for molecule, clone, and process selection. furthermore, the platform allows for seamless integration with laboratory instruments, statistical software packages, and custom solutions. here, we present use cases showing how to identify and annotate liability sites prone to chemical modifications (fig. 1a) and how to monitor cqas of molecules, allowing developability to be assessed more efficiently. we show how the analytical data generated in the course of a developability assessment are compiled to select the best drug candidate (fig. 1b). implemented traffic-light systems indicate where molecules harbor issues, such as in the case of the antibody tpp-86, which is compromised by low temperature and repeated freeze-thaw operations. the same assessment views can also be applied to batches and cell lines. the underlying data can be visualized graphically. as an example, we show glycan types of products obtained from different cell line clones generated in a cell line development campaign for the molecule tpp-86 (fig. 1c). even though the selected clone cli-35 meets the glycosylation criteria (e.g., <13% afucosylation, <40% galactosylation, <2% sialylation), the produced molecule harbors some stability issues as mentioned above. therefore, more attempts would be needed either in formulation or in reengineering of the complementarity-determining regions (cdrs) in order to provide a developable tpp-86-like drug candidate. next-generation biologics molecules are composed of a number of specific subdomains. each type of molecule is composed of a specific set of domains, which must be mirrored in the registration and further research and development workflow. molecule registration and hit-selection using data from a number of assays is shown here using the example of car-t cells. the image is a screenshot from the genedata biologics™ software.
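a traffic-light assessment like the one described can be sketched as a simple threshold check; the upper limits below are the example criteria quoted for clone cli-35, while the 10% "amber" margin is a hypothetical choice added for illustration:

```python
# Upper limits (%) taken from the text's example criteria; the amber margin
# below each limit is an illustrative assumption, not from the platform.
GLYCAN_LIMITS = {"afucosylation": 13.0, "galactosylation": 40.0, "sialylation": 2.0}

def traffic_light(measured, limits=GLYCAN_LIMITS, margin=0.1):
    """'red' if any attribute is at or over its limit, 'amber' if any is
    within `margin` of its limit, otherwise 'green'."""
    status = "green"
    for attr, limit in limits.items():
        value = measured[attr]
        if value >= limit:
            return "red"
        if value >= (1.0 - margin) * limit:
            status = "amber"
    return status
```

the same check can be applied per batch or per cell line by passing each sample's measured attributes in turn.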
background environmental process variables are often used as tools to optimize the performance of mammalian cell cultures to achieve higher cell densities and high productivities of r-proteins (q p ). the manipulation of culture temperature in the range of mild hypothermia (mh) (35-30°c) [1, 2], as well as different glucose availability scenarios [3, 4], has been shown to improve productivity in different cell lines. however, the manipulation of these variables, individually or together, has a concomitant effect on the rate at which cells grow, masking the net response exhibited by the cells. in order to identify the effects of these variables, we have taken advantage of the use of the chemostat culture. chemostat cultures were performed at two dilution rates (d) (0.010 or 0.018 h -1 ), two temperatures (33 or 37°c), and three feed glucose concentrations (20, 30 or 40 mm). the response was analysed considering r-protein production, cell growth, and key metabolites. r-tpa protein concentration was determined by immunoassay (trinilize tpa kit); cells were counted using a hemocytometer and cell viability was determined by the trypan blue exclusion method (t8154, sigma, usa); glucose, lactate, and glutamate were determined by enzymatic assay using a ysi biochemical analyser (yellow springs instruments). statistical analysis of the results was performed by anova (design-expert 7 for windows). a decrease in cell density was observed in response to an increase of the glucose feed concentration, regardless of the temperature or specific growth rate (in this case μ = d) evaluated. the maximum cell densities were reached at 20 mm, achieving 1.65 and 1.50 x 10 6 cells/ml at 37/33°c and 0.018 h -1 , and 1.10 and 1.33 x 10 6 cells/ml at 37/33°c and 0.010 h -1 , respectively (fig. 1a). the increase in glucose concentration from 20 to 40 mm resulted in a q p increase of 3 and 3.3 fold at 33°c/0.018 h -1 and 37°c/0.018 h -1 , respectively.
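the chemostat decouples these effects because at steady state μ equals the dilution rate d, so the specific rates follow directly from steady-state mass balances; a minimal sketch (units follow the inputs; the variable names are illustrative):

```python
def chemostat_rates(D, X, P, S_feed, S_residual):
    """Steady-state chemostat balances:
    mu = D (specific growth rate, 1/h);
    qp = D * P / X (specific production rate, per cell per hour);
    qs = D * (S_feed - S_residual) / X (specific substrate consumption rate)."""
    mu = D
    qp = D * P / X
    qs = D * (S_feed - S_residual) / X
    return mu, qp, qs
```

with measured steady-state cell density, product, and residual glucose, the fold changes in q p reported here follow by computing qp at each condition and taking ratios.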
a lower increase, of 2.4 and 1.8 fold, was reached at 33°c/0.010 h -1 and 37°c/0.010 h -1 , respectively (fig. 1b). the highest q p s were reached at 37°c and 0.010 h -1 . however, a positive effect of mh was not observed, in contrast to that observed in batch culture [1, 2, 3]. this behaviour suggests that low μ is a main factor in the increased r-protein production of batch cultures exposed to the mh condition. the specific consumption rate of glucose was significantly increased by the glucose increase from 20 to 40 mm and reduced by mh (fig. 1c). at 0.010 h -1 , the specific production rate of lactate (q lac ) was increased by the glucose increase, independent of the culture temperature used, while at 33°c/0.018 h -1 the q lac decreased with increasing glucose concentration and at 37°c/0.018 h -1 a maximum consumption was observed at 30 mm glucose (fig. 1d). the lactate-glucose yield (fig. 1e) did not show relevant changes at 0.010 h -1 , while at 0.018 h -1 this yield showed a more efficient utilization of glucose as the glucose concentration was increased. however, this last behaviour was not reflected in an increase of r-protein production. the concentration of glucose had the greatest impact on the behaviour of the culture, and its increase positively affected protein productivity. mh did not improve the protein productivity of cho cells producing tpa under the different conditions evaluated; a low dilution rate and a high glucose concentration positively impacted the protein productivity and the metabolism exhibited by the cells. background mammalian cell cultures are the most commonly used bioprocess for the production of therapeutic recombinant proteins such as monoclonal antibodies (mabs).
facing the increasing demand for these biopharmaceuticals, the fda has initiated the process analytical technology (pat) framework in order to encourage pharmaceutical industries to use innovative technologies to monitor critical process parameters (cpps) in real time, and to ensure the final product quality [1]. one of the most important cpps for cell culture bioprocesses is the specific growth rate (μ), which is a direct indicator of cellular physiological state. indeed, μ is sensitive to culture conditions and its value decreases when cells are in an unfavourable environment for growth [2], which may greatly influence mab production and quality. however, to this day, the online monitoring of μ remains a great challenge for mammalian cell culture bioprocesses. igg-producing cho cells were cultured in 2 l stirred bioreactors equipped with in situ dielectric spectroscopy (hamilton). operating conditions were fixed at 90 rpm, 50% of air saturation, ph 7.2, and 37°c. the permittivity of the cell culture was measured every 12 min, which allowed the vcd to be calculated in real time by using a previously established linear correlation. then, a model for online estimation of μ was developed based on vcd prediction and cell mass balance equations. several signal noise filters and various calculation methods were evaluated to reach better model stability. cell cultures were performed in both batch and feed-harvest modes. feed-harvest cultures consisted of sequential renewals of 2/3 of the volume of the culture medium following different strategies. this study proposed an innovative methodology based on dielectric spectroscopy to monitor the cellular physiological state in real time, by online estimation of the specific growth rate (μ) of cells. the model for online estimation of μ was developed from cultures in batch mode, and was validated by comparing the online estimated μ with the experimental values calculated at the end of the culture.
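the mass-balance idea behind such an estimator can be sketched simply: in batch culture dX/dt = μX, so μ is the slope of ln(vcd) versus time over a sliding window. the window length and the least-squares filter below are illustrative stand-ins for the signal filters and calculation methods evaluated in the study:

```python
import math

def estimate_mu(t, vcd, window=5):
    """Sliding-window least-squares slope of ln(VCD) vs time (1/h).
    t: hours; vcd: viable cell density; returns one mu per window position."""
    mus = []
    for i in range(len(t) - window + 1):
        ts = t[i:i + window]
        ys = [math.log(x) for x in vcd[i:i + window]]
        t_mean = sum(ts) / window
        y_mean = sum(ys) / window
        num = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(ts, ys))
        den = sum((ti - t_mean) ** 2 for ti in ts)
        mus.append(num / den)
    return mus
```

a sustained drop in the estimated μ below its exponential-phase plateau is then the "critical moment" signal used to trigger a medium renewal.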
with this model, the moment when μ started to decrease significantly, indicating that cells were no longer in the exponential growth phase, was identified as the critical moment. to demonstrate the interest of online estimation of μ, the developed model was applied to a feed-harvest culture, where the medium renewals were performed at the critical moments indicated by the model. this culture was then compared with the traditional feed-harvest culture, where medium renewals were performed following offline measurements of glucose and glutamine. we found that the online strategy made it possible to maintain the value of μ by renewing the medium at the right time, while the values of μ varied considerably when using the offline strategy. moreover, by using the online estimation of μ, the glycosylation of igg was kept at a high level (about 95%) throughout the whole culture. however, for the culture using the offline strategy, the glycosylation level decreased progressively and was only about 75% at the end of the culture (fig. 1). the model for online estimation of μ was developed by using dielectric spectroscopy, which allowed the physiological state of cells to be monitored in cell culture bioprocesses. implementation of this model in feed-harvest cell culture led to better mab glycosylation, which clearly demonstrates the potential of this methodology in mab production bioprocesses. background monoclonal antibodies are normally synthesised by transfected mammalian cells as heterogeneous mixtures of glycoforms [1]. however, clinical efficacy may depend upon single glycoforms, which have been difficult to isolate [2]. we have now developed an efficient method for generating single glycoforms by solid-phase remodelling, which is superior to previous methods because it allows a sequential series of enzymatic changes without the need for intermediate purification of the antibody. solid-phase binding exposes the antibody glycans to enable easier access of the transforming glycosylation enzyme.
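glycan profiles in this kind of work are often summarized by a galactosylation index computed from the relative abundances of the g0/g1/g2 species; the formula below (occupied galactose sites over the two available antennae of a biantennary glycan) is one common definition, stated as an assumption since the abstract does not give its exact formula:

```python
def galactosylation_index(g0, g1, g2):
    """Fraction of occupied galactose sites on a biantennary glycan:
    (G1 + 2*G2) / (2 * (G0 + G1 + G2)), from relative abundances
    (fucosylated or not). 0.0 = fully agalactosylated, 1.0 = fully G2."""
    total = g0 + g1 + g2
    return (g1 + 2 * g2) / (2 * total)
```

under this definition a pure g2 profile scores 1.0 and a pure g0 profile scores 0.0, bracketing index values like those reported for the antibodies studied here.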
the antibodies subjected to modification were a chimeric human/camelid monoclonal antibody (eg2), a humanized monoclonal antibody (il8), a full-size chimeric antibody (cetuximab), and polyclonal antibodies obtained from pooled human serum. the antibodies were bound to a protein a column using conditions typical of mab purification (fig. 1). after washing out non-bound impurities with a neutral-ph buffer, each antibody was subjected to enzymatic modification directed to a targeted glycan profile (table 1). the antibodies were then eluted with a low-ph buffer and neutralized. the glycan profiles were analysed following glycan removal with pngase f, labelling with 2-aminobenzamide, and separation on a hilic-hplc column [3]. prior to enzymatic modification, glycan analysis of all 4 antibodies showed variable galactosylation and sialylation typical of human antibodies. this included a distribution of fg0, fg1, fg2, fs1 and fs2, with galactosylation indices ranging from 0.22 for il8 to 0.64 for eg2. there was minimal sialylation in il8 but up to 11% in eg2. glycan modifications were made while each antibody was held on a protein a column, in accordance with the procedures shown in table 1. agalactosylated glycans were enriched by a single treatment with galactosidase and neuraminidase. this resulted in 83-95% agalactosylated structures in the mabs and 65% in the polyclonal antibody. galactosylated antibodies (>95% yield) were produced by a single-stage reaction involving sialidase and galactosyltransferase with udp-gal. breakdown of the glycans to a trimannosyl core was accomplished by treatment of the agalactosylated structures with hexosaminidase. this produced a yield of 76-80% of the fm3 structure with a small remainder of fa1. sialylated antibodies (>95%) were produced by a two-stage reaction involving sialidase, galactosyltransferase, and finally treatment with 2,6-sialyltransferase in the presence of cmp-nana.
the latter reaction produced equimolar quantities of monosialylated and disialylated cetuximab and polyclonal antibodies. the results suggest that for human antibodies (150 kda) there may be a limitation for sialylation given the steric constraints between the two ch2 domains of the dimeric structure. the ability to sialylate the smaller camelid antibody (80 kda) was greater, resulting in a high (>90%) level of disialylated glycans. this suggests that the steric constraints for glycosylation may be lower. these sialylated antibodies have significant potential clinical importance for their anti-inflammatory activities. we have modified the glycans of antibodies following immobilization on an affinity ligand column. this allows enzymatic transformation in a solid state that has a distinct advantage over the equivalent transformation in solution, because the enzymes and buffers can be washed out on completion of the modification, leaving the antibody still attached to the affinity ligand. this enables repeated rounds of an enzymatic reaction or sequential reaction steps without the need for intermediate antibody purification. the antibody can eventually be removed from the column by application of an elution buffer once all desired glycan modifications have been made. since affinity ligand purification of antibodies is performed routinely as an initial step of purification after cell culture, the glycan modification can easily be incorporated into this process. the enrichment of the resulting antibody for a targeted glycoform can enhance the potential therapeutic efficacy, as it is known that specific glycoforms are required for certain biological effects. [1, 2]. this is mainly because microvesicles can be enriched/deprived for specific proteins, based on their functional purpose and their cellular origin.
recently, microvesicles purified from the supernatant of t24 bladder cancer cells were reported to be enriched for bcl-2 and cyclin d1 (anti-apoptotic proteins) but deprived of bax and caspase-3 proteins (pro-apoptotic proteins), contributing towards immunity against programmed cell death [2]. however, the impact of microvesicles on cho-based bioprocesses has not yet been evaluated. therefore, in this investigation, we aimed to evaluate their impact on cell growth and recombinant protein production from cho cells. materials and methods cho-k1 cells were grown in chemically defined protein-free culture medium (life technologies-1835273) in shake flasks (gx-00125p). the different fractions of spent media (microvesicles and microvesicle-free spent media) were collected using an ultracentrifugation method [1, 3]. the quality of the different fractions was ensured using western blotting for the exosomal marker cd63 (sc-15363) and a coomassie-stained gel for loading control (fig. 1a). to evaluate the impact on cell growth, cells were seeded with microvesicles or the microvesicle-free fraction collected from the log phase of culture, and cell counts were performed by vi-cell using the trypan blue dye exclusion method. for the impact on productivity, cell-free supernatant, collected from a microvesicle-treated human igg-secreting cho culture in the stationary phase of culture with the respective control, was evaluated using elisa (ab100547). microvesicles collected from 10% of media (by volume) from routine maintenance cultures, relative to the working volume for microvesicle supplementation, were used in each experiment. the microvesicle-supplemented cultures had a shorter lag phase, achieved a 1.2-fold higher maximum cell density (1.46 x 10 6 viable cells/ml) compared to the untreated standard culture (1.21 x 10 6 viable cells/ml), and remained higher for the remaining period of the batch culture (fig. 1b). however, the microvesicle-free fraction did not have a significant impact on growth.
the viability of the microvesicle-supplemented cultures, similar to those supplemented with microvesicle-free media, was also higher compared to the standard culture, suggesting a potential use of microvesicles for regulating cho growth in production cultures. this could possibly be because microvesicles have already been reported to be enriched with cell growth/death-regulating proteins, hence facilitating cell growth [2, 3]. we have also observed an abundance of cell cycle regulators, including cyclin d1, in the microvesicle fraction compared to microvesicle-free spent media in our laboratory (data not shown); however, further investigations are required to prove the hypothesis. the overall productivity of human igg-secreting cho cells was also observed to increase by ~4 fold following supplementation of microvesicles to the culture, without significantly affecting per-cell productivity. since microvesicle supplementation facilitates cell growth, an increased number of viable producer cells in the culture could be expected to be the basis of the observed increase in the overall productivity of the culture [2, 3]. further work is ongoing to explore in depth the potential of microvesicles for improving recombinant protein production from cho cells. the data indicate that microvesicles secreted from cho cells can improve cell growth and hence recombinant protein production in culture. therefore, strategies need to be developed for the sterile isolation of cho microvesicles from routine maintenance cultures and their supplementation into the production culture to improve the performance of cho-based production processes. the glycosylation profile of a recombinant protein is one of the most important attributes when defining product quality. producing a protein with desired characteristics requires the ability to modify and target specific glycosylation profiles.
traditionally, the approach to modify the glycosylation profile of a protein involves supplementing a culture with components that can improve galactosylation. experimentation using this supplemental approach resulted in a dramatic increase in terminal galactosylation, but lacked the ability to easily and repeatedly target specific glycosylation profiles. using novel and proprietary technology, we have developed a feed (glycantune™) and a unique feeding process that maximize growth and titer while being able to modulate glycan profiles. this new feed can be added as a standalone process, resulting in a significant shift from g0f to g1f and g2f (maximum galactosylation). using a unique fed-batch process, glycantune can also be used with a standard feed to dial in targeted glycosylation profiles. through process development, we created a method where a transition point is used to switch from a standard feed to a glycan-modulating feed. the timing of the transition point determines the specificity of the glycan profile. n-linked glycans were digested with pngase f and quantified using 100 pmole maltohexaose/maltopentaose internal standards labeled with 8-aminopyrene-1,3,6-trisulfonic acid (apts), as described by laroy et al [1] or the user guide for the glycan labeling and analysis kit (glycanassure™ user guide, thermo fisher scientific). all ce separations were performed using the applied biosystems™ 3500xl. the timing of the transition from efc+ to gtc+ made it possible to target specific glycosylation profiles, modulating g0f from 75% down to 32% while increasing g1f and g2f (fig. 1). transitioning to gtc+ early in culture resulted in a greater shift from g0f to g1f and g2f. transitioning midway or late in culture resulted in a greater proportion of g0f compared to g1f and g2f. supplementation-based approaches using glycosylation-modulating media components to modify and target specific glycosylation profiles proved to be difficult.
these approaches were able to increase terminal galactosylation (g1f and g2f), but lacked the ability to fine-tune glycan profiles. this could result in numerous rounds of titration experiments to target specific glycan profiles that would likely remain inconsistent between cell lines, culture media and feeds, and process scales. the development of a unique process made it possible to predictably target specific glycosylation profiles. transition from standard feeding to glycantune allowed for precise targeting of glycan profiles. transition to glycantune early in culture resulted in an increased shift from g0f to g1f and g2f. a transition late in culture resulted in increased g0f and decreased g1f and g2f. growth performance during precultures and batch curves in plain shaking flasks did not show any differences among the tested surfactants or lots thereof, and cell densities reached 10-12·10 6 cells/ml (fig. 1a and b). experiments with hek 293-f cells at elevated power input in baffled shaking flasks revealed distinct differences between pluronic® f-68, f-127, and kolliphor® p188, with f-127 showing the best performance. peak viable cell densities reached with lots a and b of pluronic® f-68 and f-127 were comparable to those in plain shaking flasks, while those for kolliphor® p188 and lots c and d of pluronic® f-68 were significantly lower. peak viable cell densities were 2-12·10 6 cells/ml (fig. 1c). similar transient transfection efficiencies and mean fluorescence of transfected cells, independent of the applied surfactant and lot thereof, indicated no major impact of the respective poloxamer (fig. 1d). interestingly, experiments using fluorescein-labelled pluronic® showed a time-dependent uptake into hek cells. visual tracking revealed an endocytic uptake of poloxamers by the cells (>10-fold increase in signal after 96 h) and its co-localisation with the cell membrane and lysosomes. sec (fig. 1e) analyses showed differences between the tested poloxamers.
especially the tested lots of pluronic® f-68 revealed notable deviations in the low-molecular-weight fraction (peak 2, fig. 1e) compared to the other poloxamers. cultures subjected to varying levels of shear stress showed distinct growth differences depending on the poloxamer used. while experiments in plain shake flasks did not show any differences in growth, cultivations under elevated shear stress in baffled shake flasks resulted in lower peak viable cell densities with kolliphor® p188 and some pluronic® f-68 lots. it remains unclear whether this can be explained by different membrane-protective activities alone, or if other mechanisms, occurring during and after cellular uptake, contribute to this effect. especially for the tested lots of pluronic® f-68, sec of the surfactants showed differences in the low-molecular-weight fraction. this fraction mainly represents polyethylene oxide (peo) (revealed by nmr), which is likely to be a remnant from synthesis. these observations indicate that the use of different poloxamers and lots thereof should be carefully evaluated, especially under elevated shear stress. further experiments will focus on investigating distinct sec fractions of poloxamers. overcoming (fig. 1). aurintricarboxylic acid (an endonuclease inhibitor; an enhancer used in, e.g., salivary gland transfection) and polyvinylpyrrolidone (a polymer; beneficial in electroporation) were both found to negatively impact pei-mediated transfection of cho cells, while another tested polymer enhanced growth as well as transfection efficiency. the use of a strong chelator led to a high transfection efficiency, but impaired cell growth. based on the results of the independent substance testings, the medium formulation was modified by the addition of a weak chelator and further components, including vitamins.
different osmolalities between 280 mosmol/kg and 340 mosmol/kg were tested for the final formulation, but no major impact was seen on either transfection efficiency or viability 2 days post-transfection. the final cho tf medium formulation supported high cell growth of the finally tested cho cell lines 2 and 3, with peak viable cell densities above 10·10⁶ cells/ml in batch cultivations with an overall cultivation time of 7-8 days (fig. 1). further improvements of the process might be achieved by adapting the protocol, as the results shown are based on a simple precomplexing of dna-pei. moreover, product yields could potentially be increased by using feeds, temperature shifts or commonly used enhancers (e.g. valproic acid).
scaling of a cell culture process is an essential part of its development. in a typical approach, scaling [1] is performed by keeping a (critical) process parameter constant throughout the complete bioreactor range. this can lead to non-beneficial results at either the high or the low end of the range. for instance, a specific power input (p/v) of 30 w/m³ might result in good agitation at production scale, whereas it leads to non-turbulent mixing behavior at process development scale. to overcome this issue, a new approach for an easy scaling procedure was developed. this "utility function" approach for agitation scaling is based on individual functions with a value-based mapping independent of bioreactor scale. process insight (established either from doe process investigation or existing experience with a process platform) is directly formalized into a set of mappings which transform bioprocess values into perceived benefits (0 to 1). at each bioreactor scale, parameters (e.g. stirring and gassing) are then chosen to maximize the product of the resulting utility functions. the model cho fed-batch process in this trial comprised a cho dg44 cell line that was transfected to produce a humanized igg1 antibody.
a chemically defined media system was used. the process, including cell line, medium and feeding strategy, was designed and developed by sartorius stedim cellca. the aim of the gassing scale-up was to achieve similar cell densities at the point when the addition of pure oxygen starts. for all flexsafe str® bags, oxygen was sparged via the micro-sparger part of the combi sparger; all other systems used a ring sparger with holes facing up. the initial air flow rate was set to an oxygen transfer rate (kla) of 8 h⁻¹ at the corresponding agitation rate and volume. all process engineering characterization parameters were determined according to dechema guidelines [2]. with the use of the utility functions, the discrete agitation rates were determined (table 1). the utility functions led to discrete agitation rates at which not only homogeneous mixing but also a turbulent flow pattern and a suitable specific power input were guaranteed. the initial gassing rate of air supplied enough oxygen for 5 x 10⁶ cells/ml in all bioreactors. due to the scaling methods used, the growth patterns in all bioreactor scales were comparable. peak viable cell densities (vcd) of 20-26 x 10⁶ cells/ml were achieved, and viability at the point of harvest was above 80% in all scales. the final product concentration was in an acceptable range of 2.9-3.6 g/l. product quality attributes show comparability over the complete bioreactor range (fig. 1). the harvest criterion of 12 days gave a combination of viability and product concentration that made it easy to process the cell broth during cell removal and other downstream steps. the process implementation of the cho production system expressing mab 2 was successfully performed with the use of utility functions. cell growth, productivity and product quality are comparable over the complete bioreactor range.
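the "utility function" scaling described above can be sketched in code; the mapping shapes, the power-number correlation and all numeric values below are hypothetical assumptions for illustration, not the authors' actual functions or the values behind their table 1.

```python
# illustrative sketch of "utility function" agitation scaling: each candidate
# agitation rate is scored by the product of value-based mappings (0..1), and
# the discrete rate with the highest product is chosen. all mappings and
# numbers here are hypothetical assumptions, not values from the abstract.

def utility_power_input(p_v):
    # perceived benefit of specific power input p/v [w/m^3]: zero outside a
    # workable window, peaking near an assumed optimum of 30 w/m^3
    if p_v < 10.0 or p_v > 200.0:
        return 0.0
    return 1.0 - abs(p_v - 30.0) / 170.0

def utility_turbulence(re):
    # benefit of a turbulent flow pattern: ramps from 0 (laminar) to 1
    return min(max((re - 1e3) / (1e4 - 1e3), 0.0), 1.0)

def specific_power_input(n, d, volume, rho=1000.0, n_p=1.5):
    # stirrer power draw p = n_p * rho * n^3 * d^5 (assumed power number n_p)
    return n_p * rho * n**3 * d**5 / volume

def reynolds(n, d, rho=1000.0, mu=1e-3):
    # impeller reynolds number re = rho * n * d^2 / mu
    return rho * n * d**2 / mu

def best_agitation(d, volume, candidates):
    # maximize the product of all utility functions over discrete rates [1/s]
    scored = [(utility_power_input(specific_power_input(n, d, volume))
               * utility_turbulence(reynolds(n, d)), n) for n in candidates]
    return max(scored)

# hypothetical 2 l development-scale vessel with a 60 mm impeller
rates = [1.0 + 0.25 * k for k in range(37)]  # candidate rates 1.0 .. 10.0 1/s
u_best, n_best = best_agitation(d=0.06, volume=0.002, candidates=rates)
```

with these assumed mappings, the product of utilities selects the discrete agitation rate whose p/v sits near the assumed optimum while the flow is already turbulent; applying the same mappings at another vessel scale would yield a different discrete rate, which is the point of the value-based approach.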
background the endoplasmic reticulum (er), the central part of the secretory pathway in eukaryotic cells, is responsible for controlling the quality of secreted and resident proteins through the regulation of protein translocation, protein folding, and early post-translational modifications [1]. a number of physiological conditions such as oxidative stress, hypoglycemia, acidosis, and thermal instability can disturb er functions, which triggers er stress [2]. prolonged er stress induces apoptotic cell death [3]. oxidative stress, which naturally accumulates in the er as a result of mitochondrial energy metabolism and protein synthesis, can disturb er function [4]. because the er is responsible for protein synthesis and the quality control of secreted proteins, er homeostasis has to be well maintained. when h2o2, an oxidative stress inducer, was added to recombinant chinese hamster ovary (rcho) cell cultures, it reduced cell growth, monoclonal antibody (mab) production, and the galactosylated form of the mab in a dose-dependent manner. antioxidants can reduce the oxidative stress level and suppress apoptotic cell death by scavenging oxygen free radicals, inhibiting the chain reaction of oxidation, and detoxifying peroxides [5]. however, despite the importance of mass production of mabs, the effects of antioxidants on the production and quality of mabs in rcho cell cultures have not been fully characterized. to find a more effective antioxidant for rcho cell cultures, six different antioxidants including baicalein, which have been widely used in mammalian cell cultures, were evaluated as chemical supplements with two different rcho cell lines producing the same mab in 6-well plates. then, batch and fed-batch cultures were performed in shake flasks with the supplementation of baicalein, which showed the best effect on culture performance among the 6 antioxidants.
the reactive oxygen species (ros) and er stress levels were measured to study the effect of baicalein on mab production and quality. among these antioxidants, baicalein showed the best mab production performance. the addition of baicalein significantly reduced the expression levels of bip and chop along with a reduced ros level, suggesting that oxidative stress accumulated in the cells can be relieved using baicalein. as a result, the addition of baicalein in batch cultures resulted in a 1.7-1.8-fold increase in the maximum mab concentration (mmc), while maintaining the galactosylation of the mab (fig. 1 and table 1). likewise, the addition of baicalein in fed-batch culture resulted in a 1.6-fold increase in the mmc while maintaining the galactosylation of the mab. oxidative stress negatively affected the production and galactosylation of the mab in rcho cell cultures. among the various antioxidants tested in this study, baicalein showed the best mab production performance in both batch and fed-batch cultures of rcho cells. baicalein addition significantly enhanced mab production while maintaining the galactosylated forms of the mab. thus, baicalein is an effective antioxidant for use in rcho cell cultures for improved mab production.
background the production of many biopharmaceuticals (e.g. antibodies and proteins for diagnostic and therapeutic purposes) requires the cultivation of mammalian cell lines, which is demanding with respect to various aspects such as complex cell metabolism, variability in cell behavior, scale dependencies, influences of changes in cultivation conditions, medium composition etc. although an increasing number of measurement parameters is available, only a part of them is routinely utilized in industrial cell culture processes and their corresponding seed trains. nevertheless, the database grows, statistical investigation of data gains importance, and process data are more easily accessible in the context of industry 4.0.
cell cultivation has to consider these complex requirements, e.g. for fed-batch control and seed train design. furthermore, cultivation strategies have to be adapted to new products, cell lines and clones as well as to different production plants when transferring processes. one approach to counter the variabilities and to include current information from the process and from data analysis is adaptive model-assisted control [1]. two software tools enabling adaptive model-assisted control applying unstructured, unsegregated models have been developed and implemented using matlab©, winers and fortran: one tool for fed-batch control and another for seed train simulation and optimization. one key element of adaptive model-assisted control is the underlying process model. in order to provide an adaptive character, model parameters should be easily identifiable from routine cultivation data, which is available during seed train and fed-batch without additional sophisticated measurements. therefore, the use of unstructured, unsegregated models is recommended.
a) example of an unstructured, unsegregated cell culture model (for adaptive model-assisted control): one example, describing cell growth, cell death, uptake of substrates and production of metabolites via a first-order system of ordinary differential equations and monod-type kinetics, is shown in table 1. this mathematical model includes 13 cell-specific model parameters [2].
b) open-loop control sequence for seed train simulation and optimization [3]: using the model, a priori identified model parameters and starting concentration values, the temporal concentration courses can be predicted for the first scale. subsequently, points in time for passaging and starting values for the next scale can be computed by adding a passaging strategy, seed train conditions and medium concentrations. predictions for the following scales can be obtained iteratively.
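an unstructured, unsegregated model of the kind recommended here can be sketched as a small ode system with monod-type kinetics; the parameter values, feed settings and the simple explicit-euler integration below are illustrative assumptions, not the 13 identified parameters of the authors' model.

```python
# minimal sketch of an unstructured, unsegregated fed-batch model with
# monod-type kinetics (viable cells, glucose, glutamine, lactate, ammonia,
# volume). all parameter values are illustrative assumptions only.

MU_MAX, MU_D = 0.04, 0.002            # 1/h: max growth rate, death rate
K_GLC, K_GLN = 0.1, 0.2               # mM: monod constants
Q_GLC_MAX, Q_GLN_MAX = 2e-10, 5e-11   # mmol/cell/h: max uptake rates
Y_LAC_GLC, Y_AMM_GLN = 1.6, 0.8       # yield coefficients

def step(state, dt, feed_rate=0.0, c_glc_feed=300.0):
    """one explicit-euler step of the balance equations, fed-batch terms included."""
    xv, c_glc, c_gln, c_lac, c_amm, v = state
    mu = MU_MAX * c_glc / (K_GLC + c_glc) * c_gln / (K_GLN + c_gln)
    q_glc = Q_GLC_MAX * c_glc / (K_GLC + c_glc)
    q_gln = Q_GLN_MAX * c_gln / (K_GLN + c_gln)
    d = feed_rate / v                  # dilution rate caused by feeding
    return [xv + dt * ((mu - MU_D) * xv - d * xv),
            c_glc + dt * (-q_glc * xv + d * (c_glc_feed - c_glc)),
            c_gln + dt * (-q_gln * xv - d * c_gln),
            c_lac + dt * (Y_LAC_GLC * q_glc * xv - d * c_lac),
            c_amm + dt * (Y_AMM_GLN * q_gln * xv - d * c_amm),
            v + dt * feed_rate]

# xv [cells/l], concentrations [mM], volume [l]; simulate 96 h at dt = 0.1 h
state = [3e8, 30.0, 4.0, 0.0, 0.0, 1.0]
for _ in range(96 * 10):
    state = step(state, 0.1, feed_rate=0.001)
```

such a model is cheap enough to re-fit from routine viable-cell and metabolite measurements, which is exactly what makes it suitable for the adaptive, open-loop seed train prediction the abstract describes.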
integrating feedback from the process in terms of cultivation data enables increasing prediction accuracy and responding to possible changes in cell behaviour. process design and optimization, e.g. regarding seed train and fed-batch, is realized by adaptive model-assisted software tools using unstructured, unsegregated models. they enable feedback from the process via routine cultivation data and allow adaptation to diverse circumstances such as different cell lines, products, cultivation conditions, plant configurations etc.
significant advantages, such as great mechanical stability, good biocompatibility and good mass transfer properties, characterize capsules based on sodium cellulose sulfate/poly(diallyldimethyl) ammonium chloride (scs/pdadmac) [1, 2]. here, we present the possibility to cultivate human t cells, freshly isolated from blood, to high densities in similar semipermeable polyelectrolyte microcapsules within less than 10 days. cells were encapsulated in semipermeable scs/pdadmac polyelectrolyte microcapsules or confined in 1.5% alginate/poly-l-lysine (pll) beads, a standard approach for cell immobilization. the permeability of the microcapsules was estimated using dextran-based molecular weight standards (10 and 20 kda) and vitamin b12 (1.6 kda). gentle digestion with endocellulase allows an easy release of the cells out of the capsules. cell growth, cytokine production and phenotype were measured in non-encapsulated and encapsulated cells grown under standard culture conditions. moreover, we analyzed the interplay between the secreted cytokines and the scs within the capsules and its putative influence on cell growth. cells mixed into the cellulose sulfate solution under physiological conditions can be safely trapped within a liquid core during capsule formation.
encapsulated cells can reach cell densities of up to 40 x 10⁶ cells per ml capsule, whereas cells confined in alginate/pll beads and non-encapsulated cells reached 11.3 x 10⁶ cells per ml bead and 2.4 x 10⁶ cells/ml, respectively. one major advantage of these polyelectrolyte microcapsules (<1 mm) is the low mwco (<10 kda) (fig. 1a-b). this restricted permeability allows conditioning of the capsule core by autocrine factors, which in turn permits the use of basal cell culture medium instead of expensive t cell-specialized media; hence it does not necessitate high amounts of rhil-2 and reduces cultivation costs. moreover, co-encapsulation of rhil-2 had a beneficial effect on the growth kinetics in most cases (fig. 1c). some evidence is presented that the scs used to form the polyelectrolyte microcapsules specifically adsorbs il-2 (table 1), a cytokine which provides an essential signal for t-cell proliferation and differentiation [3]. therefore, we postulate that the scs used for encapsulation has biomimetic properties, creating an artificial extracellular matrix mimicking heparan sulfate, which in turn positively affects t cell proliferation via trans-presentation of il-2 (fig. 1d) [4]. primary t lymphocytes can be expanded under appropriate conditions outside the body. in vivo, t cells grow/expand in specific environments where the cells are tightly packed, leading to multiple cell-cell contacts and manifold interactions with the extracellular matrix. ex vivo suspension cultures of diluted cells cannot provide such a microenvironment. in the microcapsule-based cultivation system presented, the cells are suspended in a viscous scs solution. the low molecular weight cut-off of the surrounding polyelectrolyte membrane assures that typical signaling molecules produced by the cells are retained, thus facilitating the "conditioning" of the cellular microenvironment, while nutrients and metabolites can pass.
expensive additives, such as interleukin-2 (il-2), can be co-encapsulated. expansion then no longer requires specialized t-cell media. moreover, the scs seems to have biomimetic properties, representing an artificial extracellular matrix mimicking heparan sulfate. we consider that the described method may be an appropriate alternative to expand t cells while creating a local microenvironment mimicking in vivo conditions.
table 1 (abstract p-175). equations of balances and kinetics of the employed process model. nomenclature: x_v viable cell density, x_t total cell density, μ cell-specific growth rate, μ_d cell-specific death rate, t time, k_s and k monod kinetic constant and monod constant for uptake, k_lys cell lysis constant, q cell-specific uptake or production rate, y kinetic production constant, c concentration, glc glucose, gln glutamine, lac lactate, amm ammonia, f feed rate, v volume. the balances include fed-batch terms; the kinetics include the switches:
q_lac,uptake = 0 if c_glc ≥ 0.5 mm
q_lac,uptake = q_lac,uptake,max if c_glc < 0.5 mm
q_amm = y_amm/gln · q_gln
background digital manufacturing (dm) is heightening the productivity and robustness of existing processes and facilities. it also enables the efficient development of previously unmanageable products or processes and provides the basis for a wave of innovations. dm is a resident, on-line source of continuous optimization of process performance. it relies upon the comprehensive, real-time interfacing of both human- and machine-sourced information through one centralized system. more than legacy distributed control systems (dcs) and supervisory control and data acquisition (scada), it is an integral interconnection of real-time access to divergent sources of information. as such, it promises deep analysis and predictions leading to shortened product cycles and advanced process control.
this comprehensive analysis is extending beyond operations performance data from the production floor to data driving such activities as raw materials security of supply (sos) and business continuity management systems (bcms). digital biomanufacturing (db) can be viewed as yet another, larger embodiment of digital biotechnology. db is similar to digital manufacturing in that it promotes innovations in the manufacturing of biologicals by using such things as computer-aided design, manufacture, verification and deep process analysis using software sensors (fig. 1). however, the fact that there are living components (cells) involved in the processes puts a distinctly different flavor on the systems employed. it is desirable to use a distinct term here because, as with the terms bioproduction and biopharmacology, db addresses many unique aspects of biologically based activities. the reasons why the biotech and biopharma industry lags behind other sectors, such as the automotive industry, regarding the transformation to digital manufacturing are (i) the complexity and dissipative nature of biological systems, (ii) distributed heterogeneous data and (iii) limited at-line or on-line data sources. however, the costs of genomic sequencing, omics data generation, and computing resources are decreasing rapidly, and at the same time process analytical technologies, computational power and predictive modeling as well as data management infrastructures are greatly improving (table 1). by removing roadblocks that used to limit approaches, these changes have paved the way to transforming the bioeconomy into an industry that is based on digital knowledge. such new and optimized manufacturing technologies as continuous biomanufacturing and 3d bioprinting can actually demand the interfacing of many sources of information, deep data analysis including software sensors for metabolic fluxes, and model-based predictions of digital biomanufacturing. the application of predictive models for bioprocess optimization greatly improves established platforms and finally leads to a massively increased mechanistic process understanding. four essential benefits result from the increased bioprocess understanding, development, and control of db. first, personnel are relieved of many manual and repetitive tasks. second, strategic planning and operational efficiency are improved. third, we see real-time optimization of end-to-end manufacturing based on such high-value criteria as projected product quality and profitability. fourth, it enables previously unmanageable operations and creates innovative solutions.
table 1 (t-cell encapsulation abstract). prior to elisa, the various proteins were incubated at 37°c in scs prepared as for encapsulation. as a control, the scs was replaced by pbs. shown are mean values ± sd, n = 3.
monitoring between-batch behavior of real-time adjusted cell-culture parameters
xavier lories, jean-françois michiels
arlenda, mont-saint-guibert, 1435, belgium
correspondence: xavier lories (xavier.lories@arlenda.com)
bmc proceedings 2018, 12(suppl 1):p-192
background cell-culture parameters (ccp), such as ph, may be continuously measured online and subject to real-time automated adjustment (e.g. automated addition of a base to prevent the ph from dropping too low). this is an efficient method to maintain the parameter within specified limits. this type of control constrains the variability within the predefined limits and does not provide any information on the between-batch variability of the process. online measurements of ccp provide time-dependent curves presenting one or more transitions. different types of transition can be observed:
-the process can shift from a state in which adjustment is needed to keep the ccp in range to a state in which it is not. typically, the ccp drifts away from a limit.
-the process shifts from a state in which adjustment is not needed to one in which it is. for instance, a drifting ccp reaches the lower or upper limit of the accepted range.
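the two transition types just described can be identified with simple rules on the adjustment signal itself, e.g. on the rate of base addition; the threshold, signal names and toy trace below are illustrative assumptions.

```python
# hedged sketch of rule-based changepoint identification on a real-time
# adjusted cell-culture parameter: a changepoint is declared whenever the
# process switches between "adjustment active" and "no adjustment".
def changepoints(times, base_added, eps=1e-6):
    """find transitions between adjustment-active and adjustment-inactive
    states from the cumulative volume of base added to hold ph in range."""
    cps = []
    active = False
    for i in range(1, len(times)):
        # rate of base addition over the last sampling interval
        rate = (base_added[i] - base_added[i - 1]) / (times[i] - times[i - 1])
        now_active = rate > eps
        if now_active != active:      # state transition -> changepoint
            cps.append(times[i])
            active = now_active
    return cps

# toy trace: no base needed until t=3, addition from t=3 to t=7, none afterwards
t = list(range(11))
base = [0, 0, 0, 0, 0.5, 1.0, 1.5, 2.0, 2.0, 2.0, 2.0]
cps = changepoints(t, base)  # -> [4, 8]
```

the changepoint times collected per batch can then feed the multivariate bayesian model and the prediction-region control limits described next; the rule stays deliberately simple, in the spirit of the abstract's "simple rules rather than complex statistical modeling".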
the timepoints at which those transitions take place are here called changepoints. those are aspects of the process and, as such, should be controlled. in the multiple-changepoint case, the approach allows the early termination of runs showing a very early or very late first changepoint. the identification of the changepoint positions is based on simple rules rather than complex statistical modeling, to keep the identification methodology simple. once the changepoints are identified, a multivariate bayesian model is adjusted on the appropriately transformed data. prediction regions are obtained and used as control limits [1]. results obtained for a 2-changepoint case are shown in fig. 1. points on the right-hand graph represent new batches. the red triangle represents a failed batch. it appears that the control strategy fails to identify the failed batch. two reasons can be considered:
-the limits of the prediction region have been established based on 9 points; such a small sample size is likely to be insufficient for the definition of such a control chart.
-the tested batches were produced out of set point. a control chart should be used on a stable process, run under the same conditions, in order to be really relevant. this work was based on available historical data, which is never an ideal situation.
the suggested strategy offers a simple approach to the monitoring of between-batch behavior for cell culture. once the limits have been defined, the approach is quite straightforward and usable by non-statisticians. however, such a strategy, as any other of this type, must be based on a sufficient number of batches for the definition of the control limits in order to obtain a good estimation of the batch-to-batch variability.
fig. 1 (abstract p-190). intelligent software applications support digital biomanufacturing process development and control:
• databases using data collected online, at-line, and offline from bioprocesses operating worldwide.
• process data are used to generate metabolic network models that represent a specific host cell line in a bioprocess.
• model-based computational simulations improve process understanding and reduce experimental efforts for media design, clone selection, and metabolic engineering.
• automated data import and processing allow for a streamlined and standardized metabolic process analysis.
• identification of critical metabolic parameters is used for proactive steering and control of production processes.
background rabies is a zoonotic viral disease with a mortality close to 100% [1]. as no efficacious treatment is available, post-exposure vaccination is recommended for individuals in contact with the virus. on the other hand, the most common source of virus transmission is the saliva of infected animals, mostly dogs, so mass vaccination of pets is the most cost-effective way to reduce human infections. in this context, the availability of both human and veterinary vaccines is critical [2, 3]. our group had previously developed an effective vlp-based rabies vaccine candidate produced in high-density hek293 cell cultures with serum-free medium (sfm) [4, 5]. one of the aims in a vaccine production process is the achievement of good productivity at a low cost per dose, mainly in the case of vaccines for animal use, for which the sfm is one of the principal expenses. in this work, we show the adaptation of the producer clone to an inexpensive in-house developed culture medium, in order to reduce the global cost of the process and therefore the price per dose. experimental approach first, we compared a direct and a sequential adaptation protocol for our hek293 rv-vlp producer clone, from 100% of the commercial sfm (ex-cell293, safc) to a new formulation with only 50% of the sfm plus a minimum essential medium (p2g) developed in our laboratory specifically for rv-vlp production. this new formulation was called rvpm (rabies vaccine production medium).
the specific productivity of rv-vlps in culture supernatants was measured by sandwich elisa, using the 6th international standard for rabies vaccine, which quantifies the glycoprotein content (nibsc; expressed in elisa units per ml). further, we evaluated both media for the production of the rabies vaccine, using stirred tank bioreactors operated in continuous mode (biostat qplus, sartorius). the production of rv-vlps was evaluated daily by elisa, and the obtained harvests were analysed by the nih potency test for rabies vaccine. after the adaptation process, suspension cultures without aggregates or clumps were obtained, with the same specific growth rate. a lower maximum cell density was reached with the rvpm, achieving 5x10⁶ cells/ml, compared with the sfm, which reached cell densities between 8 and 9x10⁶ cells/ml in batch mode. the specific rv-vlp productivity per cell was maintained, with values of 0.88 and 0.90 eu per 10⁶ cells per day for the clone cultured in sfm and rvpm, respectively. taking into account that this producer clone can be changed directly from one medium to the other without a lag phase or cell damage, and that in rvpm the maximum cell density reached was lower, this medium was proposed to be analysed at high cell density in perfusion mode for a continuous culture in a bioreactor. therefore, we performed two cultures in parallel to compare the efficacy of each medium formulation in perfusion. as shown in fig. 1, we obtained very similar culture performances in both bioreactors: 14.4 eu/ml and 16.1 eu/ml of rv-vlps for the commercial sfm and rvpm, respectively. after that, the harvests were evaluated by the nih potency test, obtaining a rabies vaccine potency of 1.2 iu/ml for both cultures (1 iu/ml being the minimum potency required for an animal vaccine).
thus, the results obtained represent an interesting advance in the optimization of this vaccine production process, since the use of the new medium formulation represents a reduction of 40% of the total cost, which will be reflected in a considerable reduction of the price per vaccine dose.
background vaccines are one of the most powerful and effective health inventions ever developed, providing tremendous economic and societal value; yet several factors hinder comprehensive immunization coverage. traditional methods of biologics production, based on stainless steel bioreactors, allow pharmaceutical companies to achieve economies of scale, but are limited by high capital expenditures. such approaches stifle manufacturing innovation and lack long-term cost-effectiveness and sustainability. current innovations can cut biologics' production costs to revolutionize the mainstream use of biologic treatments, focusing on developing fast, potent and cost-effective vaccine production. univercells' mission to make biologics affordable to all initiated a paradigm shift, targeting an innovative single-use manufacturing platform incorporating the bioprocess into continuous operations. univercells employs process intensification, using high-volumetric-productivity bioreactors, and unit-step integration, coupling usp and dsp into continuous operations. the objective is a down-scaled, high-productivity process for a cost-effective manufacturing solution. the resulting micro-facilities are easily deployable in developing countries, breaking entry barriers to biomanufacturing (fig. 1). manufacturing and distribution advancements, from centralized to distributed, foresee the obtainability of affordable treatments via supplying local populations with local production units.
-bench-scale fixed-bed bioreactor;
-carriers made of 100% pure non-woven hydrophilized pet fibers;
-vero cells grown in serum-free and serum-containing media;
-attenuated polio strains;
-cell nuclei on carriers counted by the crystal violet method;
-polio virus production estimated by elisa assay (d-antigen content).
cultivation of vero cells in medium with serum and in serum-free medium was carried out in bench-scale compact fixed-bed bioreactors, to determine which culture conditions result in the highest growth rate, the highest cell biomass on carriers and the highest virus production. cells were inoculated at 0.05x10⁶ cells/cm² and infected during the mid-exponential phase, following a complete media exchange. viral infection took place in serum-free media. in-line clarification and purification is targeted to be performed in only a few steps (one or two at most) without intermediary diafiltration. in such a configuration, we measured that vero cells can reach a cell density of 300-350x10³ cells/cm² with a pdl/day of 1.0-1.2 in serum-containing media. this new facility is expected to manufacture any type of viral vaccine at a very low cost and could be deployed at the site of the manufacturer in emerging countries, addressing the two challenges of manufacturing cost and distribution at once. the presentation will feature the description of the engineering development, but also the preliminary results of cell growth, infections, and product quality, as well as a description of the cogs calculation.
univercells developed a disruptive polio vaccine manufacturing technology exceeding expectations when compared to traditional methods, achieving a superior result via its all-in-one solution of a simple, scalable, and fully disposable vaccine production platform resulting in long-term cost-effectiveness, flexibility and sustainability:
-all upstream, downstream and inactivation steps take place within a closed system, with all the equipment contained in a low-footprint isolator, creating a confined area for polio virus handling that facilitates the deployment of micro-facilities.
-this leads to a dramatic reduction in capital investment and time required for development, and increases production capacity.
-in conclusion, this is a simple and elegant solution for the industrial production of human vaccines at a low cost in micro-facilities, making polio vaccines available to all.
fig. 1 (rabies vaccine abstract). comparison of media formulations for vaccine production in 1 l stirred tank bioreactors operated in continuous mode. both cultures were performed in parallel using the corresponding medium for the perfusion. a: feeding was performed with the commercial sfm. b: for the first two days of perfusion, feeding was performed with sfm until the cell density reached 10⁷ cells/ml; after that, the bioreactor was fed using the rvpm formulation. (↓) on day 10, 20% of the reactor volume was bled while maintaining the working volume.
background vectored vaccines based on modified vaccinia virus ankara (mva) are reported to stably maintain large transgenes, and to be safe, immunogenic and tolerant to pre-existing immunity. mva is usually produced on primary chicken embryo fibroblasts, but continuous cell lines are being investigated as more versatile substrates. we have previously reported the development of a continuous suspension cell line (cr.pix) derived from the muscovy duck and an efficient production process for mva in chemically defined media [1, 2].
this process allowed the isolation of a hitherto undescribed genotype (mva-cr19) that induced fewer syncytia in adherent cultures and replicated to higher infectious titers in the extracellular volume of suspension cultures [3]. replication of mva-cr19 remained restricted predominantly to avian cells, an important property of mva vectors. homologous recombination in cr.pix cells was used to generate viruses with various expression cassettes in deletion site iii [4] and combinations of the differentiating point mutations of mva-cr19 in a backbone of wildtype virus. all recombinant viruses were plaque-purified. successful introduction of the mutations was confirmed by sequencing and specifically designed restriction fragment length polymorphisms (rflps). viruses were analyzed by serial passaging, diagnostic pcrs across deletion sites [4], replication kinetics, plaque phenotype and electron microscopy. the genome was further investigated by anchored pcr and long pcr. the efficiency of spread of recombinant viruses (fig. 1a) could be mapped to a point mutation in one of the genes, a34r. however, although mva-cr19 carries mutations in three structural proteins, we detected no obvious differences to wildtype by electron microscopy (fig. 1b). the replacement of the left viral telomere by its right counterpart was the most surprising result of our new study (fig. 1c). this extensive rearrangement affects 15% of the viral genome and has also increased the area of complementarity between the two telomeres. the recombination site was precisely located and shown, via analysis of earlier and subsequent passages, to be a stable property of mva-cr19. various viruses, including those with larger dual (dsred1 and gfp) expression cassettes, were serially passaged at least 20-fold.
although the genotype of mva-cr19 is advantageous for replication, all genomic and genetic markers of wildtype and mva-cr19 were stably maintained in all passages of the recombinant viruses, independent of wildtype or mva-cr19 backbone. we confirmed our previous results suggesting that mva-cr19 replicates efficiently in single-cell suspensions and were able to connect this property with the d86y mutation in a34, a structural protein on the surface of the virions. mva-cr19 was also found to differ from wildtype mva by a recombination between the left and right viral telomeres. due to this event, several genes encoded at the left terminus have been deleted, whereas the gene dose of those originally encoded only at the right terminus may have increased. we do not currently know to what extent the various point mutations and changes in genomic structure combine to explain the improved replication of mva-cr19. as several of the affected genes have been reported in the literature to impact the interaction of mva with the host, we expect that in vivo studies may reveal additional novel properties of mva-cr19. an extremely important distinction between our earlier study [3] and this one concerns the source of the viruses. here, we investigated plaque-purified viruses and confirm the high genetic and genomic stability of mva. different expression cassettes inserted into deletion site iii, all diagnostic rflps and pcrs over various sites of the genome and within the viral telomeres remained unchanged throughout at least 20 serial passages, independent of whether recombinant viruses with wildtype or cr19-derived backbones were characterized. fig. 1 (abstract p-225) . (a) one hallmark of mva-cr19 is a significantly reduced tendency to induce syncytia and an increased dispersion of plaques in cr.pix cell monolayers. this property appears to be supported by the mutation in a34r. (b) electron microscopy reveals no obvious differences between the novel genotype and wildtype.
background transient gene expression systems using polyethylenimine (pei) are considered to be fast, flexible and cost-efficient for recombinant protein production [1] . transfection efficiency depends on several factors, one of which is the type of medium. production media support cell growth and protein production but not high transfection efficiency (te) mediated by pei [2] . therefore, media were selected for transfection followed by feeding with production media [3] to improve te and protein production. two different transfection strategies are compared: conventional transfection, in which a polyplex is formed by pre-incubating plasmid dna (pdna) with pei before transfection, and in situ transfection, in which both are added directly to the cell suspension and the polyplex forms spontaneously [4] . cells were seeded in chomacs cd medium 24 h before transfection. at the transfection time point an equal number of cells was resuspended in each medium type. transfection was applied either in situ or conventionally (polyplex prepared in 100 μl of 150 mm nacl and incubated for 20 min); medium addition was performed 5 hours post-transfection (hpt). media types and transfection conditions are listed in table 1 . the media screen showed the highest transfection efficiency, around 50% transfected cells, with opti-mem medium, but accompanied by low cell growth and viability. to improve the transfection efficiency, basic parameters including cell density, pdna and pei concentrations were varied; higher transfection efficiency was reached by reducing the medium volume or, correspondingly, increasing the cell density, pei and pdna concentrations for transfection. further optimization showed that transfecting cho-k1 cells in opti-mem (transfection medium) for 5 hours followed by addition of chomacs cd (production medium) further enhanced the transfection, cell count, and cell viability.
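the abstract does not give the pei:pdna amounts used; a common way to dose pei against plasmid dna is via the n/p ratio (moles of pei nitrogen per mole of dna phosphate). a hedged sketch, assuming linear pei at ~43.1 g/mol per protonatable nitrogen and ~330 g/mol per nucleotide (the function name and example values are illustrative, not taken from the abstract):

```python
PEI_N_MW = 43.1   # g/mol per protonatable nitrogen in a linear PEI repeat unit
DNA_P_MW = 330.0  # g/mol per nucleotide (one phosphate group)

def pei_mass_for_np_ratio(dna_ug, np_ratio):
    """Mass of PEI (ug) needed to reach a target N/P ratio for a given
    mass of plasmid DNA (ug)."""
    phosphate_nmol = dna_ug / DNA_P_MW * 1000.0   # nmol of DNA phosphate
    nitrogen_nmol = np_ratio * phosphate_nmol     # nmol of PEI nitrogen needed
    return nitrogen_nmol * PEI_N_MW / 1000.0      # convert back to ug of PEI

# e.g. 1 ug of pDNA at N/P = 10 calls for ~1.3 ug of PEI
print(round(pei_mass_for_np_ratio(1.0, 10), 2))  # → 1.31
```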
the transfection efficiency (te) increased up to 85 ± 2.6%, coinciding with an increase in viable cell concentration (vcc) in comparison to transfection and cultivation in opti-mem medium alone (fig. 1a) . both the conventional and in situ methods successfully transfected cho-k1 to a similarly high te, as shown in the fluorescence microscope images of fig. 1b . in situ transfection is particularly attractive for suspension cell transfection because it reduces handling to one step compared with the two steps of the conventional approach. in situ transfection avoids optimizing the incubation period required to prepare the transfection polyplex, but requires a higher amount of pdna and pei than the conventional approach, as shown in table 1 . in order to deal with the growing demand for large quantities of therapeutic proteins in a timely fashion, expression systems are being optimized to reduce the time needed to generate stable clones as well as to increase the levels of protein secretion. this can be achieved by a combination of expression cassette optimization, cell engineering and selection process. we have previously developed the cumate gene-switch, which is a very efficient expression system for protein production [1] . we have shown that the cumate-inducible promoter (cr5) was the strongest promoter we had tested so far in chinese hamster ovary (cho) cells. with this promoter, we were able to generate stable cho pools capable of producing high levels of an fc fusion protein (900 mg/l), outperforming by 3- to 4-fold those generated with cmv5 and hybrid ef1α-htlv constitutive promoters. besides the strength of the cr5 promoter, we demonstrated that the ability to control both the time and the level of expression during pool generation and maintenance gave a real advantage to the inducible expression system.
indeed, we observed that keeping the expression off during selection enabled the generation of pools with superior productivity compared with pools whose expression was maintained on. moreover, preliminary results suggest that keeping recombinant protein expression down increases the frequency of high-producer clones [2] . knowing that one of the main bottlenecks in the successful bioprocessing of recombinant proteins using cho cells is the rapid isolation of a high producer, our data suggest that the cumate gene-switch system could be a valuable platform for the generation of stable clones. the main regulatory authorities and organizations demand proof of monoclonality for biotechnological producer cells. with increasing pressure to shorten timelines and to improve drug safety, technologically advanced methods have to be established to ensure that production cell lines are derived from a single progenitor cell. sartorius stedim cellca's single cell cloning approach is based on one round of fluorescence-activated cell sorting (facs) using a becton dickinson (bd) facsaria™ fusion cell sorter combined with photodocumentation by the synentec cellavista microscopic imaging system. for this approach, critical process parameters such as different cell lines, viability and cell aggregation levels were investigated separately to assess their contribution to the probability of monoclonality. immediately after single cell cloning into 384-well plates (1 cell/well) the plates were centrifuged and then imaged using the cellavista (day 0). further cellavista images were taken on day 1, day 2 and on one day between days 5 and 7. outgrowth was defined at day 14.
8 cell lines expressing different recombinant products were investigated to calculate the probability of having ≥ 2 cells/well after facs sorting, p(d), the apparent probability p(i) of having ghostcells (cells that are out-of-focus and, thus, are not visible during initial microscopic imaging), and the apparent probability p(k) of having ghostcells that outgrow the 384-well stage (fig. 1 ). using these results, the probability of obtaining a monoclonal cell with sartorius stedim cellca's single cell cloning approach was determined (table 1 ) by conservative examination, p(monoclonal, conservative) = 1 - (p(d) x p(i)), and by realistic examination, p(monoclonal, realistic) = 1 - (p(d) x p(k)). cell pools with low viability can theoretically impact the probability of monoclonality, e.g. by diminishing microscopic imaging quality (cell debris). therefore, pool cell line 1 with very low viability (36 %) was used to demonstrate that the probability of monoclonality is still 99.9 % even with low viability on the day of sorting: p(monoclonal, conservative) = 1 - (p(d) x p(i)) = 99.9 %; p(monoclonal, realistic) = 1 - (p(d) x p(k)) = 99.9 %. furthermore, cell pools with high aggregation levels can theoretically impact the probability of monoclonality because aggregates stick together during facs sorting and therefore increase the probability p(d) of having ≥ 2 cells/droplet. therefore, pool cell line 8 with a high aggregation level (11.1 %) was used to demonstrate that the probability of monoclonality is still ≥ 99.9 % for highly aggregated cell pools on the day of sorting: p(monoclonal, conservative) = 1 - (p(d) x p(i)) > 99.9 %; p(monoclonal, realistic) = 1 - (p(d) x p(k)) > 99.9 %. conclusions in summary, there is no obvious correlation between protein product type and the determined probabilities of monoclonality. furthermore, pools with a viability as low as 36 % and pools with an aggregation level as high as 11.1 % can be used for scc, resulting in acceptable probabilities of monoclonality.
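the two probability estimates above reduce to one formula; a minimal sketch in python, with illustrative (hypothetical) input values rather than the measured p(d), p(i) and p(k) from the study:

```python
def p_monoclonal(p_d, p_ghost):
    """Probability of monoclonality given p_d, the probability of depositing
    >= 2 cells per well, and p_ghost, the probability attached to ghost cells
    (conservative: any ghost cell present, p(i); realistic: only ghost cells
    that outgrow, p(k)):  p(monoclonal) = 1 - p_d * p_ghost."""
    return 1 - p_d * p_ghost

# Hypothetical values: 2% chance of >= 2 cells/well, 5% ghost-cell rate.
print(round(p_monoclonal(0.02, 0.05), 3))  # → 0.999
```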
background ich guidance [1] requires that any cell line used to produce biopharmaceuticals originates from a single progenitor cell. recently, there has been increased scrutiny of the method(s) used to achieve this requirement. here, we review the suitability of the legacy capillary aided cell cloning (cacc) method in light of this changing landscape of expectations. the cacc method is based on the 'spotting' technique [2] and relies on independent visual confirmation by two scientists of the presence of a single cell in a 1 μl droplet. this method achieves a high probability of monoclonality in one cloning round. although the method has since been replaced by facs single cell deposition for routine use, it remains a viable cloning method. -performed by trained scientists. -dilute culture to 1500 ± 500 cells/ml with ≤2% doublets. -draw cell suspension into a pipette tip by capillary action; tap the tip against the centre of the base of each well of a 48-well plate. -size of resulting droplet ≈ 1 μl (fig. 1a ). -two scientists independently view all wells using a microscope (initially at 40x magnification with the entire rim of the droplet visible within the field-of-view; next, examine particles at 100x or 200x magnification to confirm they are cells) and individually record the number of cells present in each well's droplet (fig. 1b to d) . -exclude a droplet from further analysis if full visualisation is hindered (fig. 1e to h) . -add growth medium, and incubate plates. record all wells containing colonies; only progress colonies from wells that both scientists agree contained only one cell.
-data analysis: each scientist's observations categorised as 0 cells, 1 cell or >1 cell; observed outcome for each well: growth or no growth; probability of monoclonality estimated from the data using a statistical model. comparison with limiting dilution cloning (ldc): increased accuracy of p(monoclonality) with cacc. -ldc weakness: no visualisation after seeding (to check that both well seeding and subsequent growth of colonies are well described by the poisson distribution), potentially overestimating p(monoclonality). -addressed by cacc: visual examination, with colonies arising from wells seeded with 1 cell distinguished from those seeded with >1 cell. -the visualisation step is further strengthened by: using controls for the exclusion of wells; measuring errors based on the presence or absence of colonies in wells where two scientists independently reported 0 cells; and formally analysing the data using a suitable statistical model. decreased time and resource requirements with cacc: a high p(monoclonality) is possible in a single round because each well is examined individually, with only those containing a single cell progressed, and because the error rate for incorrect scoring is considered to be low. potential scoring errors include both scientists missing a cell, and one cell sitting on top of another so that the two appear as one. an experiment was performed to estimate the error frequency [3] .
conclusion -scientists miss a cell infrequently (in the range 0.4% to 1.3% [3] ). -this error frequency does not invalidate the use of direct observation methods for cell cloning. -a single cell seen by both scientists is highly likely to be monoclonal. -during method development, strategies were established to control potential sources of error ( table 1 ). use of a contemporaneous visualisation approach, a strict control strategy, and a suitable statistical model (which takes potential errors into account) results in: -the cacc method being at least as robust as the ldc method. -the cacc method being a reliable, single-step cloning method that achieves a high p(monoclonality). background vector design is a key step in cell line development for the expression of therapeutic biologics. it is essential that the vector design results in high, stable expression of the encoded protein. other considerations include ease of cloning, stability for propagation in e. coli as well as in the mammalian host cell line, and ease of sequence amplification for verification of vector construction and for detection of insertion site and copy number in stably expressing cells. for these reasons, use of the same promoters and polya tails in dual cassette vectors, as is common for expression of the heavy and light chains of monoclonal antibodies, can be problematic. in order to minimize sequence similarities between the two expression cassettes, we have modified the promoters, introns, and polya tails of the light chain and heavy chain expression cassettes in the dual expression vector commonly used for the expression of therapeutic antibodies in the chozn® gs-/- cell line development platform. gene synthesis and vector construction of igg1 and fluorophore-expressing vectors was done by atum. vectors were transfected into chozn® gs-/- cells via electroporation. analysis of gfp and rfp expression was performed using a macsquant instrument.
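the poisson assumption underlying ldc, mentioned above, can be made concrete: under poisson seeding with mean λ cells/well, the chance that an occupied well received exactly one cell is λe^(−λ)/(1 − e^(−λ)). a minimal sketch (the seeding density used in the example is illustrative):

```python
import math

def p_single_given_seeded(lam):
    """Under Poisson seeding with mean lam cells/well, the probability that
    a well containing at least one cell contains exactly one:
        P(k=1 | k>=1) = lam * exp(-lam) / (1 - exp(-lam))."""
    if lam <= 0:
        raise ValueError("mean seeding density must be positive")
    return lam * math.exp(-lam) / (1 - math.exp(-lam))

# Seeding at 0.5 cells/well: ~77% of occupied wells hold a single cell.
print(round(p_single_given_seeded(0.5), 3))  # → 0.771
```

this is why ldc needs very dilute seeding (and usually several cloning rounds) to approach the p(monoclonality) that a visually confirmed method reaches in one round.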
selection and generation of stable pools and single cell clones from transfections with igg1-encoding vectors was performed as described in the chozn® platform technical bulletin. titer analysis was performed in static culture (96-well plate), in a 7-day tpp assay and in a 14-day fed-batch assay using a fortebio qk instrument. initial screening experiments identified a lead vector, #39, and a vector, #37, which produced very low titers and relatively few minipools expressing detectable levels of igg1. analysis of gfp and rfp expression from the modified vectors indicated relatively high expression from the rfp/hc expression cassette of vector #37. a stronger promoter resulting in an overabundance of hc, known to be toxic to cells, provides a possible explanation for the poor results with this vector. interestingly, swapping the positions of the lc and hc in #37 resulted in a vector, #77, that outperformed the initially identified lead vector (fig. 1 ). this same change was made to vector #39 without any resulting improvement in titers (vector #78, fig. 1 ). interestingly, vector #39 had a smaller difference in relative promoter strengths, based on the mean channel fluorescence ratio of gfp to rfp, suggesting that overabundance of hc was not an impediment to igg1 expression from #39. poor titers were also seen with a modified version of vector #39 (vector #79, fig. 1 ) in which the glutamine synthetase selection cassette was in the reverse orientation. this second screen identified vector #77 as the lead vector design (fig. 1) . a full comparative study of vector #77 and the control vector was performed, culminating in the generation and comparison of single-cell clones from each. these studies demonstrated the equivalence of these vectors in terms of igg1 titer.
this work has resulted in the identification and characterization of a dual expression vector with minimized similarity between the two expression cassettes, easing the cloning, propagation and analysis of vector integration in stable cell lines while maintaining the high, stable expression of the encoded protein seen with the original vector design. background traditional cell line engineering strategies mainly rely on antibiotic resistance selection. in this process, cells are transfected with the goi (gene of interest) together with an antibiotic resistance gene, and those cells are selected that survive treatment with the respective antibiotic [1] . although the gene responsible for the survival of the cell is transfected together with the goi, resistance is not necessarily linked to high goi expression. thus, a significant proportion of resistant cells may not express the goi at all, necessitating the search for alternative, more closely linked selection systems. sirnas (small interfering rnas) are short, non-coding rnas that can bind to complementary mrna and inhibit its translation. this function has been used in many approaches to silence the expression of certain genes [2] . owing to their short length, sirnas can be hidden in the introns (non-translated regions) of genes, making it possible to couple the expression of an sirna to a gene. this way a cell produces a correlating amount of sirna when transcribing the gene, without adding any further translational burden on the cell. the co-expression of the sirna can be used as a selective marker by one of the following methods: (1) knock-down of a suicide gene to enable a cell's survival after suicide gene mrna transfection, (2) down-regulation of a surface marker which is used in macs (magnetic cell separation) to filter out wanted or unwanted cells, and (3) inhibition of a fluorophore marker for selection using facs without product-specific antibodies.
for sirna-based cell selection systems, sirnas replace the commonly used antibiotic resistance gene as a marker. cells that produce the goi will also produce the sirna that protects the cell from a suicide gene. the selection protein (suicide gene, fluorophore, surface marker, etc.) is transfected as mrna and is only expressed during selection. the general process is outlined in fig. 1 . (a) the traditional antibiotic resistance marker is replaced by an sirna, which is co-transcribed with the goi. unlike an antibiotic resistance gene, the marker here is not a protein, reducing the translational burden and providing more resources for goi production [3] . transfection with the suicide gene proved to be 100% lethal within 2 days, with no outgrowth over two weeks. protection by expression of the sirna was shown to be efficient. a comparison of stable cell line development programs based on sirna selection and neomycin selection is currently ongoing. conclusions the novel selection system should speed up cell line development, as the system kills rapidly and directly selects for cells transcribing the product gene at a high level. we expect to see more high producers earlier in the process, which will allow for an easier and faster selection in the following steps. sirna-based selection offers great opportunities. by directly selecting on goi transcription rather than a proxy marker, we expect more relevant cells at the pool level. in addition, the elimination of the antibiotic resistance gene frees up cellular resources for goi production. the system offers multiple modes of application, either by enriching wanted or depleting unwanted cells. background single-cell cloning is an essential step in the upstream development of transformed cell lines for therapeutic protein production.
while single-cell clones are typically used to ensure product consistency, such low cell density cultures present a survival challenge; cells grow more slowly or may not survive at all at low densities in protein-free media, costing the industry time and money and limiting the pool of candidate colonies for the choice of production clones [1, 2] . to address this problem, we aimed to develop a highly efficient serum-free medium suitable for optimising single-cell cloning efficiency by studying a range of conditioned media (cm) samples isolated from different chinese hamster ovary (cho) cell lines. materials and methods cho-s, dg44 and cho-k1 were adapted to cho-s sfm-ii (gibco) medium for a minimum of three passages. conditioned media were then collected when the cultures reached a cell density of 1×10^6 cells/ml (typically day 2 to day 3, depending on the growth profile of each cell line and whether the cells grew in suspension or attached conditions). samples were then centrifuged twice to remove cell pellet/debris and stored at -20°c. the ability of conditioned media to support cho colony formation was then assayed using 96-well plates, seeding the cells at low cell density (1-10 cells/well) by diluting down cho cultures in media/conditioned media. after incubation at 37°c for 10 days, cloning efficiency was assayed using a standard xtt assay. initial screening of the nine cm samples was performed using cho-k1 cells due to their widespread use in industrial antibody production. successful media candidates were subsequently screened using additional cho cell lines (table 1) . the k1-sfmii-cm product improved cell cloning efficiency for dg44 cells (avg. increase >1.5-fold) and cho-s cells (avg. increase >3-fold) ( fig. 1) and also for the adherent cho-k1 cell line growing in atcc medium + 5% fbs. the ability of conditioned media to support cho growth in limiting-dilution conditions (1, 6 and 10 cells/ml) was investigated.
from the range of nine conditioned media samples, four compelling products were identified that improve the low-cell-density growth of cho-k1 cells compared to the sfm-ii control medium. we feel that these early-stage conditioned media products may increase cloning efficiencies during upstream cho cell line development, resulting in financial savings for industry and increasing the possibilities of identifying particularly high-performing transformed clones. the main rate-limiting step in the upstream stages of protein biomanufacture is the isolation of stable, high-producing cell clones. ubiquitous chromatin opening elements (ucoe®s) consist of at least one promoter region with an associated methylation-free cpg island from housekeeping genes; they possess a dominant chromatin opening capability and thus confer stable transgene expression. ucoe®-viral promoter (e.g. cmv) based plasmid vectors markedly reduce the time it takes to isolate high, stably producing cell clones. although some ucoe®-viral promoter combinations have been tested, they have not been thoroughly evaluated in chinese hamster ovary (cho) cells. plasmid vectors containing combinations of either the human hnrpa2b1-cbx3 ucoe® (a2ucoe®) or murine rps3 ucoe® linked to different viral promoters (hcmv, gpcmv, sffv) driving expression of an egfp reporter gene were functionally analysed by stable transfection into cho-k1 cells; expression was analysed by flow cytometry and qpcr was used to determine vector copy number. the results at 21 days post-transfection and selection clearly indicate that the rps3 ucoe®-gpcmv and -hcmv combinations give the highest transgene expression, as shown in fig. 1 . the a2ucoe®-hcmv/gpcmv constructs were the next most efficacious but 2-fold lower than the rps3 ucoe® vectors. the sffv promoter linked with either of the two ucoe®s was the least effective, with expression levels 17-fold lower than the rps3-cmv constructs.
the rps3 ucoe®-gpcmv/hcmv constructs are now being further modified to include elements that will provide optimal post-transcriptional pre-mrna processing (splicing, polyadenylation, transcription termination, mrna stability), thereby maximising stable cytoplasmic transgene mrna levels and protein production. in the last 20 years, a growing number of innovator biologics and biosimilars have created a competitive environment in which the speed and efficiency of generating robust and highly productive cell lines need to be improved continuously. through various advances, especially in media development and process optimization, product titers as high as 10 g/l have been achieved in the pharmaceutical industry (kim et al., 2012) for standard products such as monoclonal antibodies. nevertheless, other proteins, e.g. bispecific antibodies, fc-fusion proteins or fab-related products, are difficult to express (dte) in chinese hamster ovary (cho) cells and may result in delays or even in termination of the cell line development process. we developed a new robust pool generation approach (cld 2.0) addressing both easy- and difficult-to-express molecules, while reducing timelines down to 5 months (cld standard = 6 months), improving the reliability of cell line development as well as clearly increasing the obtained titers. in order to create stable cell lines, we transfected our cho dg44 host cells by electroporation. cells processed using the standard approach were cultivated in selective medium or medium containing an additional 2.5 nm methotrexate (mtx) for three weeks. after an amplification step with 30 nm mtx for three weeks, stable individual cell pools were expanded and clones were generated by facs-sorting. clones were analyzed for growth performance and product concentration in fed-batch studies. in our new cld 2.0 approach, we increased mtx concentrations (2.5 nm, 5 nm and 10 nm mtx) during the first selection phase of three weeks. afterwards we omitted the 30 nm mtx amplification step.
thereby, pool generation finished four weeks earlier than in the standard approach. to evaluate the stability of cell clones derived from mini pools (mps) generated according to the cld 2.0 approach, stability studies were performed for eight weeks, including stability fed-batches at t = 2 weeks and t = 8 weeks. altogether three different proteins of interest, with six cell clones each, were tested. we adapted our cell line development process by increasing the initial selection stringency during the first selection phase, thereby allowing the omission of the 30 nm mtx amplification step. we observed that the capacity for amplification varied between products. cell lines with a protein titer ranging from >1 g/l to 1.5 g/l (dte) in shake flask fed-batch were more susceptible to increased initial mtx levels and were thus not amplifiable with 30 nm mtx. in contrast, cell lines with a high protein titer >1.5 g/l adapted to 30 nm mtx easily and were amplifiable. final shake flask fed-batch data with cld 2.0 clones of high-expressing products showed titers comparable to clones from the standard approach. cld 2.0 clone titers for dte proteins revealed on average a 2.0-fold increase compared to clones generated with the standard approach. titers of the top producing clones were in a range of 1.8 g/l to 2.7 g/l (fig. 1) . furthermore, stability data of cld 2.0 cell clones from different dte products showed a stable specific productivity within a range of +/-15 % over eight weeks of cultivation. fed-batch titers from t = 2 weeks and t = 8 weeks were in the normal range of +/-20% of the standard 30 nm projects. our results demonstrate that cld 2.0 is a robust and reliable process for standard products (mab) and dte proteins. with our new process, we were able to increase the titer of difficult-to-express proteins up to 200%. by omitting the amplification step (30 nm mtx), 96 % of generated clones were stable over eight weeks of cultivation.
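the ±15 % stability criterion described above can be expressed as a simple check of each specific-productivity measurement against the initial value; a minimal sketch with hypothetical qp values (the function name and data are illustrative, not from the study):

```python
def is_stable(qp_values, tolerance=0.15):
    """True if every specific-productivity measurement over the stability
    study stays within +/- tolerance of the initial (week-0) value."""
    baseline = qp_values[0]
    return all(abs(q - baseline) / baseline <= tolerance for q in qp_values)

# Hypothetical qP time courses (e.g. pg/cell/day) over a stability study:
print(is_stable([20.0, 21.5, 18.0, 19.2]))  # True: all within +/-15% of 20.0
print(is_stable([20.0, 16.0]))              # False: 16.0 is a 20% drop
```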
additionally, using the cld 2.0 approach, the timeline from dna to rcb was reduced to 5 months. background cho cells have become the most popular platform for production of therapeutic proteins [1] . however, the generation of high-producer cells is a time-consuming and labor-intensive process that requires the screening of large numbers of cells to obtain a clone of high titer and stability. since the expression titer and stability of a clone are highly dependent on the site of integration, we demonstrate a new cell line development strategy that uses ngs to identify integration sites and crispr/cas9 to generate target-integrated high-producing cell lines [1, 2] . to identify high-expression sites in cho cells, we employed ngs to analyze the integration sites of a high-producing cell line (titer > 3 g/l). pair-end reads with one read mapped to the vector and the other read mapped to the cho reference genome were extracted to identify the integration sites. to test the expression activity of the integration sites, we employed crispr/cas9 to specifically integrate the antibody gene into the cho genome for expression. our data showed 4 integration sites in the high-producing cell line. among the 4 integration sites, the is1 site was tested by crispr/cas9 for targeted integration of the antibody gene for expression. the is1-targeted integrated cell pool presented a higher expression titer than cell pools generated by targeted integration into the other integration sites (fig. 1a) . the single cell clones derived from the is1-targeted integrated cell pool had a low copy number of the goi (fig. 1b) . after normalization by copy number, the single cell clones derived from the is1-targeted integrated cell pool showed a high titer per copy (123~583 mg/l/copy) (fig. 1c) . this study demonstrated the generation of high-producing cell lines by crispr/cas9-mediated targeted integration. this approach will cost less time and labor than the traditional method.
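the read-pair filter described above (one mate mapped to the vector, the other to the cho genome) can be sketched without an alignment library by treating each pair as a tuple of reference names and positions; this is an illustrative simplification of the idea, not the authors' pipeline:

```python
def find_integration_sites(read_pairs, vector_name="vector"):
    """Given paired-end alignments as (mate1_ref, mate1_pos, mate2_ref,
    mate2_pos) tuples, return the genomic (ref, pos) coordinates of mates
    whose partner mapped to the vector sequence -- candidate integration
    sites.  Pairs with both mates on the genome (or both on the vector)
    are ignored."""
    sites = []
    for r1_ref, r1_pos, r2_ref, r2_pos in read_pairs:
        if r1_ref == vector_name and r2_ref != vector_name:
            sites.append((r2_ref, r2_pos))
        elif r2_ref == vector_name and r1_ref != vector_name:
            sites.append((r1_ref, r1_pos))
    return sites

pairs = [
    ("vector", 120, "chr3", 55000),   # chimeric pair -> candidate site
    ("chr1", 900, "chr1", 1100),      # ordinary genomic pair -> ignored
    ("chr8", 42000, "vector", 310),   # chimeric pair -> candidate site
]
print(find_integration_sites(pairs))  # → [('chr3', 55000), ('chr8', 42000)]
```

in practice the candidate positions would then be clustered, since many chimeric pairs pile up at a true integration site.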
the active integration site will serve as a platform, like a cassette player, for therapeutic antibody production. background cho, hek and sp2/0 are the dominant host cells for biologic drug production. achieving high levels of recombinant protein production with these cell lines still remains a challenge. in order to understand the potential roles of lipids in protein production, secretion, vesicular transport and energy metabolism, we coupled high-throughput transcriptomics and lipidomics technologies. quantitative lipidomics is an emerging 'omics technology which can help us understand the physiological limitations of each cell line. the two major lipid groups in cells are non-polar and polar lipids. polar lipids such as glycerophospholipids (pls) include phosphatidylethanolamine (pe), phosphatidylcholine (pc), phosphatidylinositol (pi), phosphatidylserine (ps), phosphatidylglycerol (pg), and phosphatidic acid (pa). in this study, we integrated two-dimensional high performance thin layer chromatography (2d-hptlc) and mass spectrometry (ms) lipid analysis of the sp2/0, cho, and hek cell lines to understand the major differences in the lipid content of these hosts. the bligh-dyer method was used to extract the lipids and the extracts were analyzed by hp-tlc and ms. the polar lipids were separated into different categories by 2-d hp-tlc using a chcl3-meoh-h2o (71:25:2.5, v/v/v) solvent system in the first dimension and a chcl3-meoh-acetic acid-h2o (76:9:12:2, v/v/v/v) solvent system in the second dimension. non-polar lipids were separated by 1-d hp-tlc using hexane-diethyl ether-acetic acid. 2,7-dichlorofluorescein dye was used to visualize both polar and non-polar lipids. further detailed analysis was performed on a qqq mass spectrometer (thermo tsq vantage, san jose, ca) using negative-ion and positive-ion esi modes as well as negative-ion esi mode in the presence of lithium hydroxide.
in this study, quantitative lipidomics was coupled with transcriptomics to further understand the physiological pathways of hek, cho-m and sp2/0 cells. initial hp-tlc analysis indicated that the major lipids in these industrial cell lines were pe and pc. other polar lipids such as pi, ps, pg, pa, and sm were lower compared to pc and pe in the exponential and stationary phases of each cell line. figure 1 presents the 2d hp-tlc results for hek with the relative quantitation of polar lipids. in order to investigate the lipid subgroups, shotgun ms analysis was conducted for both the exponential and stationary growth phases of the three cell lines. ms analysis indicated that lyso-phosphatidylethanolamine (lpe) and lyso-phosphatidylcholine (lpc) amounts were 4- to 10-fold and 2- to 4-fold higher, respectively, in hek cells compared to the sp2/0 and cho cell lines. sphingomyelin (sm) was another lipid subgroup that showed a major difference between sp2/0 and the other mammalian cell lines: sm was 30- to 65-fold lower in the sp2/0 cell line compared to cho and hek. to understand these metabolic differences, transcriptomics analysis using illumina hiseq and the gene expression omnibus was conducted on these mammalian cells. the kyoto encyclopedia of genes and genomes (kegg) database was used to map the transcriptomics data to the lipid synthetic pathways. this mapping demonstrated that the differences in the lpe and lpc pathways correlate with the expression profiles of secretory phospholipase a2 (spla2), lysophospholipid acyltransferase (lpeat), lysophosphatidylcholine acyltransferase (lpcat), and lysophospholipase (lypla) [1] . the hp-tlc and lc/ms findings demonstrated that high levels of lpe and lpc existed in the hek cell line and low levels of sm were observed in the sp2/0 cell line.
coupling lipidomics with transcriptomics provides an improved understanding of the physiological differences across the sp2/0, cho, and hek cell lines that could be used to guide cell engineering efforts aimed at increasing the recombinant protein expression capabilities of these three cell lines. biopharmaceuticals are a class of biological macromolecules, including antibodies and antibody derivatives, generally produced from cultured mammalian cell lines via secretion directly into the media. manufacturing at medimmune requires the generation of chinese hamster ovary (cho) clonal cell lines capable of producing the biopharmaceutical product at commercially relevant quantities with optimal product quality. the isolation of cell clones based on random single cell deposition via fluorescence activated cell sorting (facs) provides a heterogeneous panel of expressers. we hypothesize that applying facs in an additional sorting step based on desirable cell attributes that correlate with productivity, product quality or growth could lead to the isolation of higher-producing cell lines with enhanced product quality attributes. a panel of 20 cell lines expressing a model recombinant monoclonal antibody was characterised in terms of growth, productivity, and intracellular recombinant protein and mrna amounts. assays were also developed to investigate cell attributes using the commercially available imagestream instrument, an imaging flow cytometer, which enables the investigation of cellular characteristics that correlate with cell productivity at the single cell level. characterisation revealed that the cell lines exhibited a range of values for productivity, growth, and intracellular (ic) antibody mrna and protein expression, ideal for further imagestream characterisation.
western blot and qrt-pcr analysis demonstrated that final titre correlated with both ic heavy chain (hc) protein and mrna amounts (pearson correlation coefficient (pcc) = 0.70 and pcc = 0.80, respectively). to assess productivity at the single cell level, assays multiplexing ic hc protein and mrna with cell attributes were therefore developed. initial assay development focusing on hc mrna and protein amounts revealed interesting results: four cell lines displayed two distinct populations, one producing the antibody and another non-expressing. the ratio of these populations varied amongst the cell lines. images obtained from the imagestream have shown the cellular localization and expression of hc and lc message and protein (fig. 1). for both message and protein, hc and lc colocalize in the cell. whether there is any relationship between ic hc protein and cell attributes at the single cell level was then also investigated, as well as correlations with cell culture parameters at the population level. at the population level, correlations were found between titre and ic hc protein and mrna (pcc = 0.84 and pcc = 0.79, respectively), confirming the data obtained by western blot and qrt-pcr analysis. a panel of 20 cell lines has been characterised at the population level and shows a wide range of antibody expression profiles at both the mrna and protein levels. in parallel, assays have been developed for the imagestream to measure hc and lc message and protein amounts at the single cell level. protein and message quantification with the imagestream is consistent with more traditional approaches, such as western blots and qrt-pcr, that operate at the population level. the developed assays are now being used to investigate single cell productivity attributes and for the isolation of more productive clones. background productivity and stability are key factors in the selection of cell lines for protein drug production.
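the pearson correlation coefficients reported above (pcc = 0.70-0.84) measure the linear association between titre and intracellular hc levels. a minimal self-contained sketch of the computation, on hypothetical values rather than the study's measurements:

```python
from math import sqrt

def pearson(x, y):
    """pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical final titres (g/l) and ic hc mrna levels (arbitrary units)
titre = [1.2, 2.5, 3.1, 0.8, 4.0, 2.2]
ic_hc_mrna = [10.0, 22.0, 30.0, 8.0, 41.0, 19.0]
print(f"pcc = {pearson(titre, ic_hc_mrna):.2f}")
```

a pcc close to 1 indicates that ic hc levels track titre almost linearly, which is the rationale for sorting on intracellular protein as a proxy for productivity.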
a large amount of target gene integrated into the cell genome can lead to instability of production. therefore, cells with low copies of the target gene integrated at high-yield sites could be ideal production cells for manufacturing. since it is known that the transposon system can control the integrated copy number of the target gene and can generate high-yield producing cells, it could be a great approach to generate stable high-yield producing cell lines carrying low copies of the target gene. we intended to develop a platform to generate high-yield producing cell lines carrying 1-2 copies of the integrated target gene using the transposon system. two cho cell lines, cho-s cells and dxb11 cells, were used. cells were co-transfected with transposon and target gene expression plasmids. after drug selection, the cell pool with the highest productivity per target gene copy was applied to single cell cloning. the productivity and copy number of cell clones were determined, and the stability of cell clones was analysed after culture for about 60 generations. in the stable pools of cho-s and dxb11 cells, the productivities per integrated target gene copy were about 11-13 mg/l/copy and 68-75 mg/l/copy in a batch culture, respectively. after single cell cloning, the integrated copy numbers in most cell clones were less than three copies per cell. in cho-s and dxb11 cell clones, the productivities per integrated target gene copy were 20-60 mg/l/copy and 60-150 mg/l/copy in a batch culture, respectively. the productivity per integrated target gene in cell clones developed by the transposon system was much higher than that in cell clones developed by random integration (fig. 1a and b). to evaluate the productivity stability of cell clones developed by the transposon system, ten cell clones at generations 0, 30, 60, and 100 were analysed.
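the copy-normalized productivities above (mg/l/copy) are the batch titer divided by the integrated copy number, which is what makes clones with different copy numbers comparable. a trivial sketch with a hypothetical clone:

```python
def productivity_per_copy(titer_mg_per_l, integrated_copies):
    """batch titer normalized to the number of integrated gene copies."""
    return titer_mg_per_l / integrated_copies

# hypothetical dxb11 clone: 150 mg/l batch titer from 2 integrated copies
print(productivity_per_copy(150, 2), "mg/l/copy")  # prints: 75.0 mg/l/copy
```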
of interest, about 80% of cell clones were stable at generation 60 but lost productivity at generation 100 (fig. 1c), implying that most cell clones could maintain stability for about 2 months. using the optimized conditions of the transposon system to develop stable gene expression cells, the productivity per integrated target gene was higher than with random integration. these results suggested that our platform is capable of developing high-yield producing cells with 1-2 copies of the integrated target antibody gene and can be applied to identify high-yield integration sites. background mammalian cells show an inefficient metabolism characterized by high glucose uptake and the production of large amounts of lactate, a widely known growth-inhibiting by-product [1]. recently, we have observed a different glucose-lactate metabolism in some cell lines. while some cell lines are unable to metabolize lactate, others can co-metabolize glucose and lactate simultaneously under certain culture conditions, even during the exponential growth phase [2]. these metabolic differences between mammalian cell lines (cho, hek293 and hybridoma) have been studied by means of flux balance analysis (fba). three cell lines were cultured in a 2-liter bioreactor: cho-s, hek293sf and hybridoma kb-26.5. for the fba, two adapted genome-scale metabolic models were used: a reconstruction of mus musculus for cho and hybridoma [3], and a reconstruction of the human metabolic model (recon 2) for hek293 [4]. in cultures where ph was not controlled, two different metabolic phases were observed for cho and hek293 cells. during the first phase both cell lines produced large amounts of lactate as a consequence of their high glucose consumption rates. interestingly, when ph dropped below 6.8 due to lactic acid secretion and accumulation, a second metabolic phase was identified, in which concomitant consumption of glucose and lactate was observed even during the exponential growth phase.
conversely, hybridoma cells were unable to co-consume lactate and glucose simultaneously even under non-controlled ph conditions. therefore, the hybridoma physiological data used for the fba corresponded only to phase 1 of ph-controlled cultures. a summary of the main cell growth and metabolic parameters obtained from the different experiments performed is presented in table 1. fba shows (fig. 1 for the hek293 cell culture) that lactate is produced in phase 1 because pyruvate has to be converted to lactate to fulfill nadh regeneration in the cytoplasm, and only a small amount of pyruvate can be transported into the tca cycle through acetyl-coa. cell metabolism in phase 1 is highly inefficient, as the majority of the carbon source is used for neither energy generation nor biomass. in phase 2, in which mitochondrial ldh was considered, tca fluxes could be maintained as in phase 1 at the maximal rate encountered; hence, the energy available for cells to grow was similar in both phases, resulting in similar growth rates. two different glucose and lactate metabolic behaviors have been observed in cho and hek293 cultures depending on the culture conditions: phase 1) glucose consumption and lactate production, and phase 2) simultaneous glucose and lactate consumption. in contrast, only phase 1 was observed in hybridoma cultures even when ph was not controlled. fba showed that tca fluxes in phase 1 and phase 2 were similar, giving similar cell growth rates, but the glucose uptake rate was much lower in phase 2 due to lactate co-consumption. some authors hypothesize that cells metabolize extracellular lactate as a strategy for ph detoxification [2]. glucose and lactate co-metabolization resulted in a better-balanced cell metabolism, as can be seen from the metabolic fluxes calculated, with minor effects on cell growth.
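fba poses exactly this kind of question as a linear program: maximize an objective flux subject to steady-state mass balances and capacity bounds. the following is a deliberately tiny three-reaction toy network, invented here to illustrate phase-1 overflow metabolism; the study itself used genome-scale reconstructions (mouse model, recon 2), not this sketch.

```python
from scipy.optimize import linprog

# toy fluxes: v_glc (glucose -> 2 pyruvate), v_lac (pyruvate -> lactate),
#             v_tca (pyruvate -> tca/biomass, capacity-limited)
# steady state on pyruvate: 2*v_glc - v_lac - v_tca = 0
c = [0, 0, -1]            # maximize v_tca (linprog minimizes, hence -1)
A_eq = [[2, -1, -1]]      # pyruvate mass balance
b_eq = [0]
bounds = [(10, 10),       # measured glucose uptake fixed at 10 (hypothetical units)
          (0, None),      # lactate secretion unbounded
          (0, 6)]         # limited tca capacity

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
v_glc, v_lac, v_tca = res.x
print(f"tca flux = {v_tca:.1f}, lactate secretion = {v_lac:.1f}")
# the pyruvate exceeding tca capacity (2*10 - 6 = 14) is secreted as lactate,
# reproducing the inefficient phase-1 overflow behavior described above
```

in phase 2, relaxing the constraint that forces excess pyruvate to lactate (e.g. allowing lactate uptake) lowers the glucose requirement for the same tca flux, which is the balance the authors observed.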
the observation of glucose and lactate co-consumption metabolic behavior, and its deeper study and characterization, could open the door to novel culturing strategies aimed at increasing bioprocess productivity. background transient protein expression in mammalian cell lines has gained increasing relevance as it enables fast and flexible production of high-quality eukaryotic protein. considerable efforts have thus been made to overcome existing limitations of transient gene expression systems, in terms of cell lines, cell culture-based systems, and cost-effective protein production. milligram amounts of protein per liter can be produced within several days, allowing a significant shortening of the bioproduction process in comparison to protein production from stable clones. to ensure the robustness of the process, it is essential to have a reliable and easy-to-use transfection method. to address the need for a reliable transfection reagent, we developed peipro®, the only commercially available pei optimised for mid- to large-scale transient protein production during process development. peipro® is a non-polydisperse and fully characterised polymer that has become the gold pei standard due to its reliability and reproducibility in high dna delivery efficiency and the ensuing high protein production yields. here, we present experimental data showing the benefits of using peipro® for protein production in comparison to other peis. we further demonstrate the compatibility of peipro® with recombinant protein production in the most commonly used chemically defined media. materials and methods suspension hek-293 and cho cells were cultured in shaker flasks in various synthetic media, as listed in table 1. hek-293 and cho cells were resuspended at 1×10^6 cells/ml of serum-free medium on the day before transfection.
cells were transfected with 0.5-1 mg of plasmid dna encoding the luciferase reporter gene using peipro®, pei "max" and l-pei 25 kda (polysciences, warrington, pa) resuspended at 1 mg/ml according to the manufacturer's recommendations. protein expression of the luciferase reporter gene was assayed 48 hours post-transfection by affinity chromatography using protein g (hplc). peipro® was compared to other commercially available peis by transfecting suspension hek-293 and cho cells with plasmid dna encoding the luciferase reporter gene. luciferase production yields obtained in hek-293 and cho cells were at least 5-fold and 10-fold higher, respectively, when using a similar amount of peipro® in comparison to the other peis (fig. 1). furthermore, peipro® was the only pei that led to similar luciferase production yields when decreasing the amount of plasmid dna per liter of cell culture. conversely, at least 1 mg of plasmid dna and 4-fold more of pei "max" and l-pei 25 kda were needed to obtain a similar luciferase expression range in both hek-293 and cho cells. we further assessed the compatibility and versatility of peipro® by measuring protein production yields obtained in the most commonly used animal-free synthetic media. as shown in fig. 2, peipro® leads to high protein production yields in several commercially available media formulations for hek-293 and cho cell lines. peipro® is the only fully characterised pei transfection reagent that is suitable for reliable and reproducible recombinant protein production, irrespective of the scale of production and of the type of adherent or suspension cell culture system. fig. 1 (abstract p-274). peipro® requires less reagent and a similar to lower dna amount compared to other peis. suspension hek-293 and cho cells were seeded at 1×10^6 cells/ml in serum-free medium and transfected with peipro®, pei "max" and l-pei 25 kda (polysciences, warrington, pa) resuspended at 1 mg/ml.
luciferase expression was assayed 48 h after transfection using a conventional luciferase assay. fig. 2 (abstract p-274). peipro® is optimized for transfection of hek-293 and cho cells in several specific synthetic culture media. suspension hek-293 and cho cells were seeded following the recommended protocol in serum-free media and transfected with peipro® using the standard conditions. igg3-fc production was assayed 48 h after transfection using protein g affinity quantification (hplc). monoclonal antibodies (mabs), which are widely used in anticancer therapies, are mainly produced by mammalian cell lines. mab conjugation to biological molecules for enhancing their antitumor activity offers a powerful new tool for anticancer therapies. we have assessed the production of the commercially approved anti-her2 therapeutic antibody trastuzumab (tzmb) [1] and also its fusion with interferon-α2b (ifnα2b). two cloning strategies, consisting of transfecting cho-s and hek293 cell lines with two bicistronic plasmids or with a single tricistronic plasmid, have been assessed. the in vitro efficacy of both antibodies has been tested and compared side by side. tzmb heavy and light chains were cloned in two bicistronic plasmids (pirespuro3 and piresneo3, clontech) and in a tricistronic plasmid derived from pirespuro3. ifnα2b was spliced to the tzmb heavy chain by overlap extension pcr, and the resulting tzmb-ifnα2b fusion protein was also cloned in the expression vectors in the same way as non-modified tzmb. selected cell pools were cultured in 125 ml shake flasks containing sfmtransfx supplemented with 10% v/v of cell boost 5 (hyclone), 4 mm of glutamax (gibco) and 2 μg/ml of puromycin, and also with 700 μg/ml neomycin in the case of the cells transfected with piresneo3. cells were cultivated under the same conditions as described elsewhere [2]. purified products (using protein a chromatography (hitrap mabselect sure, äkta avant 150)) were quantified by both elisa and sds-page.
an antigen binding test was performed on the sk-br-3 breast cancer cell line by means of flow cytometry analysis. the biological activity of the different candidates was tested with an mtt assay. both tzmb and the fusion protein tzmb-ifnα2b have been successfully expressed in cho-s and hek293, whose use for heterologous protein expression had previously been optimized in prior work [3]. the tricistronic strategy proved the most efficient, showing a 3.5-fold increase in productivity with respect to the bicistronic double-transfection for tzmb in cho-s cells and a 5-fold increase in hek293 cells (fig. 1a). in the case of tzmb-ifnα2b, the tricistronic strategy also achieved higher productivities than the bicistronic one (fig. 1b). regarding the differences in specific productivity between the two cell lines tested, hek293 emerged as the best production host candidate for the two tested strategies (tricistronic and bicistronic) and for the two produced proteins, showing a 1.5-fold increase in productivity with respect to cho-s cells for tzmb using the tricistronic strategy. tzmb and tzmb-ifnα2b were analysed in terms of their antigen binding capacity, and both were found to bind efficiently to her2+ sk-br-3 cells (fig. 1c). thus, the antibody's affinity for the her2 antigen was not affected when fused to ifn-α2b. finally, the antiproliferative activities of tzmb and tzmb-ifnα2b were assessed on the same sk-br-3 cells. at a concentration of 500 nm of tzmb, and after a 72-hour incubation, sk-br-3 cells presented 83% growth with respect to the untreated control. however, no antiproliferative effect was observed for tzmb-ifnα2b (fig. 1d). the tricistronic strategy provides higher productivity yields in hek293 and cho-s cell lines for both recombinant proteins (trastuzumab and tzmb-ifnα2b). regarding which cell line is the best production host candidate, hek293 achieved higher productivity than cho-s cells for the two proteins tested.
all constructs preserved binding affinity to the antigen: both trastuzumab and tzmb-ifnα2b bind efficiently to the her2 antigen present on sk-br-3 cells. finally, tzmb-ifnα2b does not present an improved antiproliferative effect with respect to trastuzumab when compared by means of an in vitro assay. fig. 1 (abstract p-276). expression of trastuzumab (a) and trastuzumab-ifnα2b (b) from the bicistronic strategy (bc) and tricistronic strategy (tri) with cho-s and hek293 cells; relative specific productivity units are used for comparing the different strategies. (c) antigen binding analysis of trastuzumab and trastuzumab-ifnα2b. (d) antiproliferative activity of trastuzumab and trastuzumab-ifnα2b on sk-br-3 cells. the genetic engineering of patient-specific t cells with lentiviral vectors (lvv) expressing chimeric antigen receptors (car) for late phase clinical trials requires the large-scale manufacture of high-titer vector stocks. the state-of-the-art production of lvv is based on 10- to 40-layer cell factories transiently transfected in the presence of serum. this manufacturing process is extremely limited by its labor intensity, open-system handling operations, requirements for significant incubator space, costs, and patient risk due to the presence of serum. to circumvent these limitations, this study aims to develop a stable and serum-free process to produce lvv with pei-mediated transfection. in addition, this study also focuses on the development of a production system using not only a gfp marker but also a therapeutically relevant transgene (cd20-car) [1]. therefore, three different cell lines (hek 293, 293t, 293ft) were investigated concerning their productivity of lvv and their growth behavior in the in-house serum-free medium transmacs. as part of this, design of experiments was used to investigate the optimal conditions for pei/dna transfection.
furthermore, this statistical approach was used to identify an ideal ratio between the 3rd-generation plasmids (transfer plasmid cd20-car or gfp, envelope plasmid, packaging plasmids). in addition, different enhancers (sodium butyrate, lithium acetate, caffeine, trichostatin a, cholesterol, hydroxyurea, valproic acid) were investigated concerning their effects on productivity, comparing hek cultures producing lvv encoding the gfp marker or cd20-car. concerning productivity and growth behavior, hek 293t was the favored cell line for our serum-free lvv manufacturing process. in addition, an additive screen revealed that sodium butyrate alone had the most promising effect on both gfp-lvv and cd20-car-lvv production. after pei/dna titration, we could finally increase lvv productivity by lowering the pei/dna amount at higher cell densities compared to our standard transfection protocol. furthermore, the titration for the optimal plasmid ratio revealed that for large transfer constructs, higher amounts of transfer plasmid are required than for smaller constructs to achieve high productivity (fig. 1). the outcome of these experiments enabled the development of a robust hek293t-based process to produce clinically relevant lvv under serum-free conditions. furthermore, it provides insight into how therapeutic genes and the expression of their transgene can influence cell productivity. although advances have led to a vast increase in productivity, cho cells yield less than other expression systems like yeast or bacteria [1]. to improve yields and find beneficial bioprocess phenotypes, genetic engineering plays an essential role in recent research. the mir-23 cluster with its genomic paralogues (mir-23a and mir-23b) was first identified as differentially expressed during temperature shift, suggesting its role in proliferation and productivity [2]. the common approach to depleting mirnas is the use of a sponge decoy, which requires the introduction of reporter genes.
as an alternative, this work aims to knock down mirna expression using the recently developed crispr/cas9 system, which does not require a reporter transcript. this system consists of two main components: the single guide rna (sgrna) and an endonuclease (cas9) which induces double strand breaks (dsbs). these dsbs can result in insertions or deletions (indels) of base pairs which can disrupt mirna function and processing [3]. a cho-k1 cell line stably expressing an igg was used for knockdown experiments. sgrnas were designed to target the seed region of each mirna member, and stable mixed populations were generated (fig. 1a). total rna from each mixed population was reverse transcribed into cdna using mirna-specific stem-loop primers. the expression was quantified by rt-qpcr. to further analyse the range of indels, the mir-23a and mir-23b clusters were amplified by a standard high-fidelity pcr. amplicons were cloned into the pcr™-topo® vector and positive clones were analysed by sanger sequencing. cell growth was monitored using viacount™ viability stain on a guava™ benchtop flow cytometer. productivity was assessed by elisa. student's t-test was used for statistical analysis. it was shown that mirna expression was significantly reduced in the mixed populations. a knockdown of up to 95% was achieved for mir-23a, mir-23b and mir-24. the knockdown of mir-27a and mir-27b expression was considerably less, between 70-90% (n=3, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001) (fig. 1b). furthermore, it was shown that various sizes of indels were generated by targeting the seed region. smaller indels (+1/+2/-1/-2 bp) seemed to be more common, but larger deletions were detected as well (fig. 1c). depletion of mir-23a, mir-23b and mir-27b showed increased viability in late stages of the culture. depletion of mir-27a reduced growth significantly, whereas knockdown of mir-24 showed increased proliferation as well as boosting igg titers (table 1).
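knockdown percentages from rt-qpcr data like those above are commonly derived with the 2^-ΔΔct method. the abstract does not state its exact normalization, so the following is a hedged sketch with hypothetical ct values and a hypothetical reference gene:

```python
def knockdown_percent(ct_target_ko, ct_ref_ko, ct_target_wt, ct_ref_wt):
    """percent knockdown from ct values via the 2^-ddct method.

    ct_target/ct_ref: target mirna and reference gene ct values in the
    knockdown (ko) and wild-type (wt) samples.
    """
    ddct = (ct_target_ko - ct_ref_ko) - (ct_target_wt - ct_ref_wt)
    relative_expression = 2 ** (-ddct)
    return (1 - relative_expression) * 100

# hypothetical: target mirna ct rises by ~4.3 cycles after crispr targeting
print(f"{knockdown_percent(28.3, 16.0, 24.0, 16.0):.0f}% knockdown")
```

each extra cycle of delay in the target's ct corresponds to a halving of expression, which is why modest ct shifts translate into large knockdown percentages.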
in this work, we have shown that crispr/cas9 can be successfully applied as a tool to knock down mirna expression in cho cells. the data were generated using mixed pools, and it remains to be established whether both alleles can be successfully targeted, e.g. using next-generation sequencing of individual clones. background chinese hamster ovary (cho) cells are the most widely used host cell line for the production of therapeutic antibodies. pre- and post-translational modifications and optimization of culture methods have contributed to increased productivity, resulting in very high titres [1, 2]. however, it has been pointed out that the intracellular secretion process is a bottleneck in the production of therapeutic antibodies [3]. in addition, the details of the process of secretion of humanized recombinant antibodies from cho cells have not been well investigated. in this study, we thus analysed the detailed process of secretion of therapeutic antibodies using cho cell lines that have already been established as high producers, with the aim of obtaining information for the more rational and efficient establishment of high-producer cells. we performed 1) a chase assay, 2) immunofluorescence microscopy, and 3) size exclusion chromatography (sec) analysis to investigate the duration of secretion, the bottleneck position, and the formation of recombinant igg, respectively. high-producer cho cells expressing humanized igg1 [4] and igg3 were used. for the chase assay, cells were cultivated in shake flasks with serum-free medium containing 50 μg/ml cycloheximide (chx) to stop nascent peptide synthesis. the amounts of igg both remaining in the cells and secreted into the medium at each time point were measured by quantitative western blotting. for immunofluorescence microscopy, cells were cultivated on coverslips with chx for 4 h.
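from the chase assay's western blot quantification, the secreted and retained fractions at each time point follow directly from the two band intensities. a minimal sketch with hypothetical densitometry values (not the study's data):

```python
def secreted_fraction(intracellular, secreted):
    """fraction of total igg signal found in the medium at one time point."""
    return secreted / (intracellular + secreted)

# hypothetical densitometry at 6 h of chase: 40 units retained, 60 secreted
frac = secreted_fraction(40, 60)
print(f"secreted: {frac:.0%}, retained: {1 - frac:.0%}")
```

tracking this fraction over the chase time course is what reveals a secretion plateau while a portion of igg remains intracellular.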
immunofluorescent staining against the recombinant igg, endoplasmic reticulum (er), and golgi apparatus was performed after chemical fixation. for sec, cells cultured with chx were resuspended in a buffer containing triton x-100 and injected into a column. the amount of igg in each fraction was measured by quantitative western blotting. the amount of igg3 in the supernatant increased until 4-6 h after the inhibition of protein synthesis by chx; however, it hardly changed thereafter (fig. 1, upper panel). at this point in time, however, around 40% of igg still remained in the cells (fig. 1, lower panel), meaning that not all of the synthesized igg could be secreted into the medium; some remained in the cells for several hours. this result was almost the same as that of studies using igg1-expressing cells [5, 6]. the localization of igg in the cells was checked before and after the addition of chx, with the results showing that igg1 remained in the er and was hardly seen in the golgi apparatus [5-7]; igg did not seem to be efficiently transported to the golgi apparatus. the sec experiment showed that most of the igg1 remaining in the cells seemed to form full-sized antibodies [5, 6], but it could not be secreted despite this. the high-producer cells could not secrete all of the synthesized igg, and around 40% of igg remained in the cells for several hours. this incomplete secretion is a common phenomenon among cho cells producing different types of recombinant igg. the igg could not be transported from the er to the golgi despite its formation of full-sized antibodies. solving this bottleneck in the transportation of igg from the er to the golgi and/or achieving more efficient glycosylation of igg after the formation of full-sized antibodies might be the next target to improve productivity. background humanized monoclonal antibodies (mabs) are among the most promising drugs, but defined strategies for their modification are still not available.
our work deals with the humanization of murine mab 2/3h6. the superhumanization approach leads to a loss of binding affinity, which was partially restored by a single human-to-mouse backmutation (t98hr) [1]. this residue was selected by a synergistic combination of sequence analyses of antibody framework regions and structural information from novel in silico simulations. for structural stabilization, a conglomeration of tyrosine residues surrounding t98hr was identified, the so-called "tyrosine cage" [2]. analysis of the "tyrosine cage" was done by alanine scanning mutations with a double mutation variant t98hr + y27ha (bm09) and a triple mutation variant t98hr + y27ha + y32ha (bm10). in a recent series of experiments we tried to enhance binding affinity with three new variants carrying backmutations in the variable light chain (vl). originating from t98hr, residues in the vl were selected based on their spatial proximity to the cdr3 loop of the variable heavy chain. affinity improvement of t98hr was evaluated with the vl double backmutation variants t98hr + f46ll (su01) and t98hr + q49ls (su02) and the triple backmutation variant t98hr + f46ll + q49ls (su03). all five variants were expressed transiently in hek293-6e cells and binding affinities were investigated in two individual settings by bio-layer interferometry. in the first approach, concentrated cell culture supernatants were applied directly; mabs were captured on protein a tips, blocked with 3d6scfv-fc, and the association and dissociation of 2f5 igg was measured. for the second approach, the culture supernatants were purified and the affinity was determined with streptavidin biosensors. first, biotinylated 2f5 igg was bound, and then the association/dissociation of the purified 3h6 variants was measured. affinity evaluation of concentrated culture supernatants with protein a sensor tips showed a decrease in binding affinity for bm09 and a loss of binding for bm10.
the protein a measurement showed increased binding strength of su01, su02 and su03 compared to su3h6 and bm07. su01 and su03 resulted in higher binding affinity than su02. these results were confirmed with purified variants by the streptavidin bio-assay (fig. 1). alanine scanning of the tyrosine cage demonstrated a reduction in binding affinity (bm09) and a severe loss of binding (bm10), indicating that the tyrosine cage plays an important role in supporting a correct cdr loop conformation. further affinity improvement of the single mutation variant t98hr could be reached via mutations in the vl. this demonstrates the underestimated role of the vl in the interaction with its binding partner. although cho cells are a major expression system for the production of recombinant biopharmaceuticals, the molecular and cellular background characterizing a high producer is largely unknown. it has been observed that important signaling pathways, such as akt signaling, are altered in characteristic ways in producer cell lines. thus, analyzing the corresponding signaling events should lead to the identification of key elements characterizing high-producer cells. to investigate this, our emphasis lies on the phosphorylation status of the proteins involved, as reversible switches in all signaling pathways. we aimed to establish a workflow for cho-specific phosphoproteomics and focused on igf signaling, as cell culture media are often supplemented with this growth factor. two producer cell lines and the corresponding parental cells were cultivated in a stable isotope labeling with amino acids in cell culture (silac) experiment, followed by quantitative ms phosphoproteomic analysis including cho-specific data evaluation. the chosen cho cell lines were cultivated in triplicate in silac media containing isotopically labeled lysine/arginine (hlys/harg) and, in parallel, in identical standard media (llys/larg, tcx10d, xell).
cell density, viability, metabolism and cell cycle distribution were monitored during 50 ml batch culture for 7-8 days. at day 3.25, igf was added to the hlys/harg cultures. 5 min later, a portion of the cells was harvested. for ms analysis, igf-treated (hlys/harg) and control (llys/larg) cultures were combined. the subsequent ms sample preparation workflow included digestion of whole protein lysate and phosphopeptide enrichment via tio2 beads. nanolc-esi-orbitrap ms (q exactive plus, thermo fisher scientific) of phosphopeptides was executed with subsequent identification and quantification in maxquant [1]. in addition to silac quantification of h/l ratios for investigation of igf effects, the acquired data were also used to perform label-free quantification (lfq) in maxquant [1] for comparison of cell lines. statistical significance was calculated via t-test (p<0.05) or anova (permutation-based fdr<0.05) in perseus [2]. results: igf effects on growth and production. the igf treatment resulted in prolonged viability for all cell lines. however, an increased vcd was only observed for producer cell line 1, yielding an enhanced integral of vcd (ivcd). for the parental cells, growth was inhibited by igf, although s-phase cells were at least temporarily enriched (fig. 1a). regarding antibody production, igf led to a decreased qp and product titer, concomitant with an increase in s-phase cells (fig. 1a). this inverse correlation of proliferation and cell-specific productivity is known from different productivity-enhancing molecules, like butyrate [3]. ms investigation of signaling events: the phosphoproteomic experiment resulted in the identification of 10,485 class i phosphorylation sites. statistical evaluation of phosphopeptide abundances in perseus revealed 144 significant differences between the cell lines and led to producer vs. parental classifications (fig. 1b).
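the producer vs. parental comparisons above rest on per-phosphosite t-tests (p < 0.05), which the authors ran in perseus. an equivalent test for a single site can be sketched with scipy on hypothetical log2 lfq intensities (the values below are invented for illustration):

```python
from scipy.stats import ttest_ind

# hypothetical log2 lfq intensities of one phosphosite, triplicate cultures
producer = [25.1, 25.4, 24.9]
parental = [22.0, 22.3, 21.8]

t_stat, p_value = ttest_ind(producer, parental)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("phosphosite differs significantly between producer and parental")
```

in the real workflow this test is repeated over thousands of sites, which is why the abstract also applies a permutation-based fdr correction rather than raw p-values alone.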
the quantitative evaluation via silac yielded about 2,408 phosphosites quantifiable in at least 6 biological replicates. rapid phosphorylation changes after growth factor treatment indicated signaling towards protein synthesis, cell cycle and regulation of the actin cytoskeleton, among others. for 201 phosphosites, significantly different h/l ratios were calculated between the two groups parental vs. producer; four of them are listed (table 1). the workflow to study phosphorylation states revealed differences in the related cell lines and gave insights into signal transduction in response to igf. on the one hand, igf treatment resulted in a fast and widespread upregulation of phosphorylation sites within akt- and mapk-signaling. on the other hand, a different phosphorylation status of producer compared to parental cell lines uncovered distinctions in biological processes such as rna- and dna-binding and regulation of the cytoskeleton. in sum, our successfully established phosphoproteomic approach allows the detection of important signaling key players in cho cells that can subsequently be targeted through cell engineering or small-molecule treatment. to improve antibody production in the cho cell expression system, it appears useful to up- or downregulate gene expression involved in antibody folding, secretion, and cell metabolism. many cell engineering approaches, including gene introduction, knockout and knockdown, have been employed to enhance recombinant antibody production [1]. however, identifying production enhancer genes is the rate-limiting step for cho cell engineering, because the conventional method requires a series of experiments including genomic integration of the tested genes, selection of stable cell clones and cell culture experiments with several clones. in this study, we propose an approach for rapid evaluation of production enhancer genes based on an episomal expression system.
a plasmid vector carrying the epstein-barr virus (ebv)-encoded nuclear antigen 1 (ebna1) was transfected into a cho cell line producing an igg1 antibody. after g418 selection and single-colony isolation, ebna1 expression was checked with the capillary electrophoresis system wes (proteinsimple), using the ebv ebna1 antibody (1eb12) as the primary antibody for detection. the expression vector for the gene of interest was prepared by inserting 1508 bp of an orip dna sequence into a plasmid vector carrying the cag promoter, resulting in the potc vector. pei max (polysciences, inc.) and balancd transfectory cho (irvine scientific) were used for the transfection. the numbers of viable cells and gfp-positive cells were counted using a countess ii fl automated cell counter (thermo fisher scientific). the transfected cells were cultured in cellstar cellreactor tubes, incubated in a climo-shaker isf1-x (kuhner). antibody production was measured using biolayer interferometry with an octet qk system (fortebio). we constructed four cho cell lines stably expressing ebna1, termed igg1-eb01 to eb04. in capillary electrophoresis analysis, we observed a clear peak corresponding to ebna1 expression in all four cell lines. we tested the transfection efficiency with potc-gfp plasmids. in the best transfection condition, a pei/dna ratio of 1/1, igg1-eb01 cells showed the highest gfp-positive cell number (1.07 × 10^7 cells/ml) and transfection efficiency (95%) among the four cell lines. therefore, the igg1-eb01 cell line was selected for further study. after the transfection, the number of gfp-positive cells continued to increase even after passaging (fig. 1), suggesting that the potc-gfp plasmid was stably retained and replicated by the ebna1/orip system in igg1-eb01 cells. in preliminary experiments, we introduced three genes, mdh2, gss and gclm, into igg1-eb01 cells.
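the transfection efficiency reported above (95% gfp-positive) is simply the fraction of gfp-positive cells among the counted viable cells. a trivial sketch of that calculation; the viable-cell count below is hypothetical, chosen only to illustrate the arithmetic:

```python
# sketch: transfection efficiency as % gfp-positive of viable cells.
# the gfp-positive count matches the text; the viable count is assumed.

def transfection_efficiency(gfp_positive, viable):
    """percent of viable cells that are gfp-positive."""
    return 100.0 * gfp_positive / viable

eff = transfection_efficiency(1.07e7, 1.13e7)  # hypothetical denominator
```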
cotransfection of these three genes led to an increase in igg1 production from 287±18 mg/l (control) to 334±21 mg/l at day 8 (p<0.05, t-test, n=3), suggesting that these three genes work as production enhancer genes. conventional methods based on stable cells take up to 6 months to determine whether a gene of interest is beneficial for recombinant igg1 production. in contrast, identification of production enhancer genes is achievable within 10 days by our method based on the ebna1/orip system. the proposed method thus makes it possible to evaluate production enhancer genes rapidly and is a promising approach to identify genes enhancing recombinant antibody production. background: 2g unic™ (2gun) technology comprises a set of protected genetic elements that improve protein production by acting on transcription as well as on translation. the elements can either be inserted into existing (platform) vectors or be provided as complete ready-to-use vectors. the technology can be used in stable and in transient transfection to boost protein production for product development and is being applied in cld for pharmaceutical proteins. in combination with antibiotic selection or dhfr selection, 2gun technology routinely results in a 2-3-fold increase in expression of client antibodies or fusion proteins, both in pools and after clonal selection. previously, we successfully combined 2gun technology with glutamine synthetase (gs) selection and the cho gs null cells of horizon discovery, resulting in clonal cell lines producing >6 g/l of a biosimilar mab in a fed-batch assay. here we present data on the successful application of the 2gun technology for the enhanced expression of a large (>300 kda) human heterotrimeric glycoprotein, a renowned difficult-to-express (dte) protein.
all expression vectors comprised an hcmv promoter and bgh polyadenylation sequence in the expression cassettes for the gene of interest, and a selection marker gene with sv40 promoter and sv40 polyadenylation sequence. 2g unic™ vectors additionally contained genetic elements (2g unic™ technology, proteonic). cho gs null cells (horizon discovery) were transfected in duplicate with reference or 2gun expression vectors and selected in media lacking glutamine and containing the appropriate antibiotics. the bulk pools were seeded at equal viable cell density after reaching maximum viability and cultured for 9 days without feeding (batch). expression of the target protein in cell culture supernatants of stable bulk pools was measured by elisa. the three protein subunit genes were expressed from vectors with different selection markers. in the reference constructs (without 2gun), the α, β, and γ chains were expressed from vectors with marker genes for zeocin, blasticidin, and gs, respectively. a similar 3-vector combination was also generated with 2gun elements integrated into each vector. in addition, 2 vectors with 2 subunits (γ-α and α-γ), each with a separate 2gun element, promoter and polyadenylation signal, were generated with a gs marker gene. cho gs null cells were transfected with the 4 appropriate vector combinations in equimolar ratios and selected in bulk in medium lacking glutamine and containing 1 or 2 antibiotics. the 2-vector transfected cell pools recovered first, due to the presence of only 1 antibiotic in the medium (fig. 1a). the pools transfected with three 2gun vectors recovered to maximum viability just a few days after the 2-vector 2gun pools; recovery of the reference pools took up to a week longer than the 2gun pools. production of each pool was assessed in a batch production run in shaker flasks. all 2-vector 2gun pools, which recovered first, produced titers around 0.1 g/l, almost 10-fold higher than the production by the reference pools (fig. 1b).
the highest titers of 0.5 g/l were obtained with the 3-vector 2gun pools. these data show that the 2g unic™ genetic elements can be successfully used to obtain a significant increase in the titer of difficult-to-express proteins. similar results have been obtained with other dte proteins, including fc-fusion proteins and bi-specific antibodies (not shown). the expression of a large, glycosylated, multimeric difficult-to-express protein can be increased more than ten-fold in cho gs pools by application of 2g unic™ genetic elements; the highest expression is obtained using a separate vector for each subunit. characterization of antibody-producing cho cells with chromosome aneuploidy (noriko yamano, sho tanaka). background: chinese hamster ovary (cho) cells are commonly used as host cells to produce biopharmaceuticals. however, the number of chromosomes in cho cells varies. previously, dg44-sc20 and dg44-sc39 cell lines with modal chromosome numbers of 20 and 39 were isolated from parental cho-dg44 cells, from which igg3-expressing cell lines named igg3-sc20 and igg3-sc39 were established, respectively. the igg3-sc39 cell pool showed a higher specific igg3 production rate than the igg3-sc20 cell pool [1]. even though all of the igg3-sc20 clones and half of the igg3-sc39 clones contained the same number of vector integration sites (a single integration site), igg3-sc39 cell clones produced more igg3 following single-cell cloning than any of the igg3-sc20 clones [1]. in this study, we performed transcriptome analysis to investigate the characteristics of high-producer cells with chromosome aneuploidy. transcriptome analyses using amplified fragment length polymorphism (aflp)-based high-coverage expression profiling (hicep) and de novo mrna-seq were performed on dg44-sc20, dg44-sc39, igg3-sc20 and igg3-sc39.
to compare cell lines with different numbers of chromosomes, transcriptome data from mrna-seq were adjusted for cell number using rna reference materials (nmij crm 6204-a; national institute of advanced industrial science and technology) mixed in equal amounts per cell. pathways related to differentially expressed genes were searched using keymolnet (km data). high-chromosome-number cho cells showed larger cell diameters, as determined by vi-cell (beckman coulter) measurement. the predicted volume ratios based on these diameters are 2.24 (dg44-sc39:dg44-sc20) and 1.59 (igg3-sc39:igg3-sc20). the levels of β-actin and the products of most other genes detected by mrna-seq differed by approximately 20% between sc39 and sc20 (sc39 > sc20). based on the analysis of gene expression levels per cell volume, approximately 90% of detected genes showed lower expression in both dg44-sc39 and igg3-sc39 compared with the levels in dg44-sc20 and igg3-sc20, respectively. in addition, the number of genes whose expression level was decreased in igg3-sc39 compared with dg44-sc39 was larger than the number showing the opposite pattern. the comparisons between igg3-sc20 and igg3-sc39 indicate that the differentially expressed genes were mainly related to cell growth (e.g. myc, smad), apoptosis (e.g. caspase), lipid metabolism (e.g. srebp, pparγ) and epigenetic histone modification (e.g. brca, hat) pathways. the mrna levels of myc-, smad-, caspase-, brca- and hat-related genes were lower in igg3-sc39, while those of srebp- and pparγ-related genes were higher in igg3-sc39. the effects of these pathways on antibody production should be examined in the future. in this study, we found that high-chromosome-number cho cells have lower amounts of mrna relative to their volume. a reduction per unit volume in the expression of genes required for survival might free additional energy for recombinant protein production in high-chromosome-number cells.
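the predicted volume ratios above follow from the measured diameters if cells are treated as spheres, where volume scales with the cube of the diameter. a sketch of that calculation; the diameters used below are hypothetical, chosen only to show that a ~31% larger diameter already more than doubles the volume:

```python
# sketch: spherical-cell volume ratio from two diameters.
# v = (pi/6) * d**3, so v2/v1 = (d2/d1)**3. diameters are assumed.

def volume_ratio(d1, d2):
    """ratio of spherical volumes computed from two diameters."""
    return (d2 / d1) ** 3

r = volume_ratio(15.0, 19.6)  # hypothetical diameters in micrometers
```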
from an evolutionary perspective, an increased set of chromosomes underlies rapid evolutionary adaptation. although there are issues to be considered, such as stability, there may also be advantages to using high-chromosome-number aneuploid cho cells as production host cells for recombinant proteins. background: human growth factors have enormous therapeutic potential. among them, bone morphogenetic protein-2 (bmp-2) can induce de novo bone formation, endowing the protein with high therapeutic potential. however, finding a suitable recombinant production system for such a protein remains a challenge. recombinant expression of hbmp2 was investigated in transiently transfected hek-293 cells and in stable clones established in cho-k1 cells, cultivated in excell and pro-cho5 medium, respectively. protein stability and the interaction of hbmp2 with the producer cells were investigated in vitro using commercially available rhbmp2. in addition, we investigated a cell-free protein synthesis system harboring translocationally active microsomal structures, hence having the potential to perform post-translational modifications, as an alternative production method. we showed that growth rates and viabilities of the rhbmp2-producing cells were similar to those of the parent cell line, while entry into the death phase was delayed for the recombinant cells. the maximum rhbmp2 concentration detected in the culture supernatant was low for stable clones but could be greatly improved by combining the hek-293 transient expression system with batch reactor cultivation, reflecting a better compatibility of the codon usage with the human cells (table 1). the hbmp2 protein is sensitive to slightly acidic ph and, to a lesser extent, to proteases (fig. 1a), and binds to both producer cell lines (fig. 1b); all of this could contribute to the low product titers. cell-free protein synthesis has been proposed as an alternative for "difficult-to-express" proteins.
since native hbmp2 is glycosylated, a cell-free system based on eukaryotic cell lysates is required for its production. cho cell lysates were chosen, since they had previously been established as the most productive eukaryotic system in our hands [1], while concomitantly enabling a direct comparison to the production of hbmp2 in stable clones established in cho-k1. the ability to perform post-translational modifications is a major advantage of eukaryotic systems. the cho lysates prepared by the protocol used here have previously been shown to contain significant amounts of endogenous microsomes derived from the endoplasmic reticulum during lysis [2]. to enforce translocation of the target protein into the microsomal structures, a melittin signal peptide was fused to the hbmp2 cdna. the glycosylation of the protein was assessed by enzymatic treatment (pngase, endoh) and confirmed using 14c-mannose during de novo protein synthesis. upon cell-free protein synthesis, the hbmp2 yield was 100-fold higher than the best yield in hek-293 cells. the difference becomes even more dramatic when productivities are considered (table 1), i.e. the fact that maximum product titers are reached within 3 h in the cell-free system compared to 120 h in the cell-based ones. this demonstrates that the cell-free expression system is more suitable than mammalian cell expression methods for the production of glycosylated human bmp2 (table 1) [3]. human growth factors are complex molecules, which makes their production in mammalian cells desirable. however, low product titers caused by a variety of both cell- and process-related effects may hinder the development of highly productive processes. in such cases, cell-free protein production using cho cell lysates containing endogenous microsomes for post-translational processing may eventually present an attractive alternative.
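the productivity argument above can be made explicit: at a 100-fold higher yield reached in 3 h instead of 120 h, volumetric productivity (titer per unit time) differs by a factor of 100 × 120/3 = 4000. the numbers come from the text; the calculation itself is generic, with the titer normalized to the cell-based value:

```python
# sketch: volumetric productivity comparison for the cell-free vs.
# cell-based systems described above (titer normalized to cell-based).

def volumetric_productivity(titer, hours):
    """product formed per unit volume and time (titer units per hour)."""
    return titer / hours

cell_based = volumetric_productivity(1.0, 120)   # normalized titer, 120 h
cell_free = volumetric_productivity(100.0, 3)    # 100-fold yield, 3 h
advantage = cell_free / cell_based
```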
in particular, these lysates can be used under tightly controlled conditions, assuring a higher degree of reproducibility than, e.g., transient transfection systems. cell-free systems are known to circumvent typical bottlenecks of cell-based ones, e.g. metabolic regulation and cell maintenance mechanisms. in consequence, the production of a recombinant protein is neither inhibited by its accumulation nor by any interaction with the cells, e.g. through the activation of inhibitory signaling pathways. core. preliminary studies showed that the corresponding polyplexes, but also some of the cells that came into contact with them, became magnetic and were manageable by magnetic fields [1] [2] [3]. here, we present a characterization of the influence of structure and composition on the function of these polymers using a library of highly homogeneous, paramagnetic nano-stars with varied arm lengths and densities [4]. the paramagnetic nano-star library was synthesized by coating maghemite nanoparticles (γ-fe2o3) with a thin silica shell functionalized with an atom transfer radical polymerization (atrp) initiator. pdmaema arms were grown from the core particles via atrp. in one case, the pdmaema arms were end-capped with pdegma blocks produced during a second atrp step. all nano-stars were characterized by size exclusion chromatography and thermogravimetry to calculate the number and length of the pdmaema arms. the core diameter was determined by transmission electron microscopy and dynamic light scattering (dls). the different variants (table 1) were analyzed for their ability to complex pdna (pegfp-n1) using various physicochemical methods (dls, zeta sizer). transfection efficiency and cytotoxicity in cho-k1 cells were determined by flow cytometry. transfected cells were placed in a magnetic field and the influence of the polymer architecture on the magnetic separation was investigated.
nonparametric spearman analysis was used to correlate arm length/arm density, magnetic properties of the cells and transfection efficiency. based on the hydrodynamic radii of the polyplexes, the investigated nano-stars could be divided into three subgroups (table 1). medium- but also high-arm-density nano-stars formed smaller polyplexes with hydrodynamic radii ≤ 300 nm, a size considered suitable for endocytosis and transfection. transfection efficiencies and cytotoxicities varied systematically with the nano-star architecture, with viability showing a more pronounced dependency on the characteristics of the transfection agent than the transfection efficiency itself. the arm density was particularly important, with values of approximately 0.06 arms/nm² yielding the best results (fig. 1a). end-capping the polycationic arms with pdegma significantly improved the serum compatibility (fig. 1b). the gene delivery potential of a given nano-star and its ability to render the cells magnetic did not correlate. however, compared to the non-separated cells, egfp-expressing cells were consistently more frequent in the magnetic cell fraction, while the non-magnetic fraction was slightly depleted. when the egfp-expressing cells were further divided into low, middle and high producers, a statistically significant shift towards the high producers was observed in the magnetic cell fraction (fig. 1c). a nonparametric spearman correlation analysis was used to statistically evaluate possible links between the molecular characteristics of the nano-stars, the physicochemical properties of the corresponding polyplexes, the transfection conditions, and the cellular reactions; the resulting correlogram is shown in fig. 1d. transfection agents with magnetic properties enlarge the toolbox for studying non-viral gene delivery, since cellular magnetism is added as a new parameter.
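the spearman analysis above correlates ranks rather than raw values, which makes it robust to the nonlinear dependencies seen here. a minimal sketch assuming no tied values; the arm-density/efficiency numbers are hypothetical (they only mimic the peak near 0.06 arms/nm² described in the text):

```python
# sketch: spearman rank correlation, rho = 1 - 6*sum(d^2)/(n*(n^2-1)),
# valid when there are no ties. data below are illustrative.

def rank(xs):
    """1-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

arm_density = [0.02, 0.04, 0.06, 0.08, 0.10]  # arms/nm^2 (hypothetical)
efficiency = [5, 20, 45, 30, 15]              # % gfp+ (hypothetical)
rho = spearman_rho(arm_density, efficiency)
```

the weak rho for a peaked (non-monotonic) relationship illustrates why the authors report the full correlogram rather than a single coefficient.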
this allows, inter alia, a distinction between mere cellular interaction and actual uptake, which is otherwise difficult. viability showed a much more pronounced dependency on the characteristics of the transfection agent/polyplex than the transfection efficiency itself, which should be taken into account during method optimization. end-capping the polycationic pdmaema arms with pdegma blocks improved the compatibility of the polycationic nano-stars with serum components. in the future, optimized blood-compatible nano-stars, which can be retained/directed by magnetic fields, could become an option for non-viral gene delivery in vivo. the increasing demand for monoclonal antibodies has created a need to increase the productivity of current industrial cell lines. in our earlier study [1], we showed that treatment with the er-stress inducer tunicamycin significantly increased the titers and productivity of recombinant cho cell lines, with a simultaneous upregulation of many genes of the unfolded protein response (upr) pathway. however, the loss in cell viability prevented a sustained increase in titers. in the current study we explore the effect of varying tunicamycin concentrations and treatment times, so as to increase protein folding capacity while preventing induction of apoptosis. anti-rhesus igg-secreting cho cells [2] were cultured in sf-cdm in 125 ml shake flasks. the cells were treated with varying concentrations (30-500 ng/ml) of tunicamycin in batch culture; the effect of treatment for short periods of time (24 h) was also evaluated. igg titers and mrna expression levels were quantified using elisa and qrt-pcr (illumina), respectively. results: cho cells were treated with different concentrations of tunicamycin and cultured in batch for 8 days (referred to as continuous treatment/cte). figure 1a presents the maximum vcd and the % drop in viability under treatment.
a dose-dependent inhibitory effect on the growth and viability of cells is observed in cte cultures, with minimal inhibition at lower concentrations. in contrast, igg titers (fig. 1b) were higher in treated cultures than in the control in the initial phase of the cultures at all tunicamycin concentrations. the per-cell productivity (fig. 1c) also showed a significant increase over the control at all tunicamycin concentrations. however, the increased productivity due to tunicamycin was not sustained, and levels became similar to the control after day 3 (data not shown). to prevent loss of viability due to tunicamycin, the effect of short-term treatment (ste) with tunicamycin was explored. cells treated with tunicamycin for 24 hours were harvested (corresponding to day 2 of cte cultures) and inoculated into fresh media. the ste cultures showed improved viability and a higher maximum vcd compared to cte cultures (fig. 1a). the fold increase in igg titers was not sustained beyond day 1-2 in ste cultures (fig. 1d), but a significant increase in productivity was seen in the initial phase (fig. 1e). further, the cells were adapted over 25 continuous generations under 30 ng/ml tunicamycin. the adapted cells had an overall 1.3-fold higher productivity compared to the control (fig. 1f) in batch culture. to understand the molecular basis of the increase in productivity, the mrna expression levels of key genes were determined. xbp1s is a transcription factor involved in the activation of chaperones (like grp78 and calreticulin) and apoptotic genes (such as chop). a significant increase in the levels of calreticulin was seen on treatment with tunicamycin (fig. 1g). both xbp1s and grp78 were marginally induced when treated with 30 ng/ml tunicamycin in both cte and ste cultures (fig. 1h), and significantly up-regulated when treated with 500 ng/ml tunicamycin. the chop mrna levels also increase with increasing tunicamycin concentrations, with levels in ste cultures lower than in cte cultures (fig.
1h). these results suggest that upr induction may be important for increasing productivity in these cte/ste cultures. note that tunicamycin had no effect on the expression levels of the igg heavy chain, eliminating the involvement of igg-hc mrna in the increased productivity (fig. 1i). tunicamycin-induced er stress increased productivity in the initial phase of the culture, and enhanced upr-mediated folding capacity is one likely reason. at lower tunicamycin concentrations, a fine balance between optimal upr induction and apoptosis can be achieved, as seen in the 30 ng/ml tunicamycin ste cultures. in summary, this study demonstrates an alternative approach to enhance the productivity of current industrial cell lines. background: chinese hamster ovary (cho) cells have been widely used for the large-scale production of biopharmaceuticals [1]. to construct antibody-producing cho cells, exogenous genes encoding antibodies are usually integrated into unspecified regions of chromosomes (random integration). however, the chromatin structure differs depending on the location of the chromosomal region, which affects the expression level of the gene of interest [2]. recently, gene-targeting methods that enable site-specific integration of expression vectors have been developed; however, the regions that are most efficient for exogenous gene expression have not been clarified. we previously constructed a cho genomic bacterial artificial chromosome (bac) library generated from the recombinant cho-dg44 cell line, expected to cover the entire cho genome five times. the 20 chromosomes in cho-dg44 cells were aligned in decreasing order of size and assigned letters from a to t [3]. three hundred and four bac clones were mapped to every chromosome of cho-dg44. among the karyotypes of cho-dg44, cho-k1 and primary chinese hamster cells, chromosomes a and b are considered the sole paired chromosomes corresponding to chromosome 1 in primary chinese hamster cells.
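the per-cell (specific) productivity compared in the tunicamycin study above is commonly computed as the titer increase divided by the integral of viable cell density (ivcd, trapezoidal rule). a sketch under that common convention, not necessarily the authors' exact formula; all values are hypothetical:

```python
# sketch: specific productivity qP = delta(titer) / IVCD, with IVCD
# as the trapezoidal integral of viable cell density over time.
# times/densities below are illustrative only.

def ivcd(times_h, vcd):
    """trapezoidal integral of viable cell density (cell*h/mL)."""
    return sum((t2 - t1) * (v1 + v2) / 2
               for (t1, v1), (t2, v2) in zip(zip(times_h, vcd),
                                             zip(times_h[1:], vcd[1:])))

def qp(titer_start, titer_end, times_h, vcd):
    """specific productivity: product per cell per hour."""
    return (titer_end - titer_start) / ivcd(times_h, vcd)

times = [0, 24, 48, 72]        # hours
vcd = [1e6, 2e6, 4e6, 6e6]     # viable cells/mL (hypothetical)
q = qp(0.0, 120.0, times, vcd) # titer change in mg/L (hypothetical)
```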
hence, chromosomes a and b are considered to be stable [4]. in this study, we constructed antibody-producing cells using a gene-targeting method focused on these stable chromosomes. a gene map of chromosome 1 was constructed by combining the bac-fluorescence in situ hybridization (fish)-based chromosome physical map and the sequence data of mapped bac clones. the sequences of bac clones were searched by blast against the ncbi and chogenome.org databases. three different regions on chromosomes a and b were selected as target sites based on cho genomic bac library sequences. cho-k1 cells were stably transfected by lipofection. the target sequences were cleaved using the clustered regularly interspaced short palindromic repeats (crispr)/crispr-associated protein 9 (cas9) system and humanized igg1 genes were integrated by non-homologous end joining recombination. transfection without the crispr/cas9 system was also performed. these cell pools were cultivated for six days in serum-supplemented medium, and their antibody productivity was evaluated by elisa. copy number analysis was also performed using real-time pcr. results and discussion: construction of the gene map of chromosome 1: eighty-three bac clones were mapped onto chromosomes a and b (each clone contained 100-150 kb of the cho genome sequence). as a result of annotating the 83 bac clone sequences, 91 genes were mapped on chromosome 1. investigation of the differences in productivity among antibody-producing cells constructed by chromosome 1 targeting and/or random integration: cell growth was not affected by the gene-targeting site. the specific production rates of antibody-producing cell pools constructed by gene targeting of chromosome 1 were higher than those of the cell pool constructed by random integration.
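the copy number analysis by real-time pcr mentioned above is typically evaluated with the 2^-ΔΔct method against a single-copy reference gene and a calibrator sample; the sketch below assumes that standard method (the abstract does not state it) and uses hypothetical ct values:

```python
# sketch: relative copy number via 2^-ddCt. target vs. a single-copy
# reference gene, normalized to a calibrator sample. Ct values assumed.

def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt relative quantity of the target sequence."""
    dct_sample = ct_target - ct_ref
    dct_cal = ct_target_cal - ct_ref_cal
    return 2 ** -(dct_sample - dct_cal)

# targeted pool vs. random-integration calibrator (illustrative Cts):
# a relative value < 1 would indicate fewer copies in the targeted pool.
n = relative_copy_number(24.0, 20.0, 21.0, 20.0)
```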
all cell pools constructed by gene targeting showed lower copy numbers of heavy chain and light chain in genomic dna than the cell pool constructed by random integration, despite showing high productivity. our results indicate that the high productivity of the cells constructed by gene targeting of chromosome 1 does not depend on an increase in antibody gene copy number, and that the environments around these target regions are suitable for exogenous gene expression. the approach of gene targeting to chromosome 1 may thus be promising for constructing antibody-producing cells. retroviral vectors have been widely used as gene delivery tools in various biotechnology fields. however, the random-integration feature of retroviral vectors can cause problems such as insertional mutagenesis and gene silencing. we previously demonstrated cre-mediated retroviral transgene insertion into a pre-determined site of founder cells using integrase-defective retroviral vectors (idrvs), where a cre expression plasmid was transfected into the cells prior to retroviral transduction [1]. recently, we reported novel hybrid idrvs (cre-idrvs) incorporating bioactive cre recombinase protein, and validated site-specific gene integration of an scfv-fc antibody expression unit into the chinese hamster ovary (cho) cell genome [2]. we also developed an accumulative site-specific gene integration system, which enables repeated integration of multiple transgenes into a pre-determined locus of the cell genome [3]. here, we attempted repeated integration of transgenes using cre-idrvs. a viral vector plasmid (pqmscv/hd[scfv-fc]) encoding reporter genes and an scfv-fc expression unit flanked by wild-type and mutant loxps was constructed for the production of idrvs. cre-idrvs were produced as described previously [2]. results and discussion: figure 1a shows a schematic drawing of each round of targeted transgene integration using cre-idrvs harboring an scfv-fc expression unit. (fig.
1b). genomic dna extracted from the cells was subjected to pcr using specific primer pairs α/β and γ/δ to confirm site-specific integration. dna fragments of the expected sizes were amplified in each cell clone (fig. 1c). these results indicate that site-specific repeated integration was achieved using cre-idrvs. in contrast, scfv-fc productivity in cho/hd[scfv-fc]×2 cells was slightly decreased compared with that of cho/ne[scfv-fc]×1 (data not shown). although the reason remains unclear, repeat-induced gene silencing might occur due to the tandem repeat structure of the expression units. we previously reported improved recombinant antibody production using a production enhancer element [4]; such a cis-regulatory element might be a feasible approach to enhance productivity. we demonstrated site-specific repeated transgene integration into a pre-determined chromosomal locus using cre-idrvs for the production of an scfv-fc antibody. while the role of lipids in the cell was long reduced to cell membrane formation, it is now understood that lipids also play roles in energy metabolism, vesicular transport, membrane structure, dynamics and signaling. however, the exact mechanism of how compositional complexity affects cell homeostasis remains unclear. thanks to recent advances in mass spectrometry, it is now possible to study a wide range of lipids, providing a better understanding of lipid homeostasis in high-performance cell culture processes. the purpose of this work was to develop a robust lipidomics method applied to mammalian cell cultures in a three-step workflow: extraction, separation and detection (fig. 1). both the matyash [1] and folch [2] extraction methods were performed on our cells to determine which gave the highest yield. two separation techniques were also tested: hydrophilic interaction liquid chromatography (hilic) and reverse-phase chromatography.
finally, lipid class identification was achieved by tandem mass spectrometry analysis using structure-specific fragment ions. the yield obtained with the matyash extraction method was higher than with the folch method for each lipid class tested. the matyash method also has the advantages of being less toxic and suitable for high-throughput analysis, since the organic layer lies above the aqueous layer. lipid separation by hilic is based on the polar head group; since lipid classes are defined by their polar heads, the lipids elute class by class, making identification easier. the separation of lipids by reverse phase was adequate, but the method takes longer and we observed massive carryover of triglycerides on the column. finally, each lipid class was screened in ms/ms parent-ion mode; the target daughter ion was set according to the lipid class structure and fragmentation pattern. this detection technique enabled the identification of 50 different lipids. to ensure absolute quantification of the detected lipids and to guarantee comparable results between batches, labeled internal standards were added prior to extraction. the method was optimized in a stepwise process to ensure sensitive and selective measurement of the lipids: lipids were extracted by the matyash method, separated by hilic and detected by tandem mass spectrometry. the method is suitable both for in-process samples, providing information on the cell lipid content, and for harvest samples, enabling the lipid release during the different harvest steps to be followed. this non-targeted lipidomic quantitation method will enable us to better control lipid synthesis during biopharmaceutical fed-batch production through clone selection, metabolomics studies and harvest development. background: human mesenchymal stem/stromal cells (hmsc) can easily be isolated from e.g.
bone marrow, fat tissue, or umbilical cord blood, and are therefore central players in regenerative medicine, gene therapy, and cell therapy [1-3]. The necessary gene shuttles are mainly derived from disease-associated viruses such as retroviruses or adenoviruses [4-7]. These potentially pathogenic viruses demand high safety standards; they are also prone to genomic alterations, and virus inactivation may be triggered by pre-existing immunity in the patient [8-10]. In this context, the Autographa californica multicapsid nucleopolyhedrovirus (AcMNPV) is a safe alternative. Virus replication is host-specific for insects [11], but it has been known since the mid-1990s that transient transduction of mammalian cells is possible [12]. Several modifications of the virus have increased its applicability in stem cells: pseudotyping with the vesicular stomatitis virus glycoprotein (VSV-G) broadened the range of transducible cells [13, 14], and integration of the woodchuck hepatitis virus post-transcriptional regulatory element (WPRE) prolonged recombinant protein expression [15, 16]. For baculovirus-induced differentiation of hMSCs, the promoter and the expression strength of the recombinant protein are crucial factors, yet comparative promoter studies remain scarce [17, 18]. However, successful virus uptake is the prerequisite for successful protein expression. We therefore investigated factors significantly influencing the transduction process by applying design of experiments (see Fig. 1a). The experimental design comprised a two-level factorial screening, set up using Design-Expert v9. For transduction, 60,000 cells/cm² were seeded in 24-well plates with DMEM + 10% FCS and incubated overnight at 37 °C, 8% CO₂ in a humidified atmosphere.
The recombinant baculovirus, which uses an integrated EF1α promoter to control GFP expression and is described elsewhere [18], was diluted to the respective concentrations in the different surrounding fluids. After discarding the cultivation medium of the hMSC-TERT cells, 1 ml of virus-containing solution was added to the cells. The duration of the following incubation was varied before the virus solution was replaced with growth medium and the cells were incubated overnight. At 24 h post transduction (hpt) the cells were washed with PBS, trypsinized with 100 μl trypsin/EDTA, and incubated for 5 min at 37 °C. Trypsinization was then stopped by adding 100 μl soybean trypsin inhibitor, and the cells were analyzed by flow cytometry. As shown in Fig. 1a, virus concentration and incubation time exert the strongest influence on transduction efficiency: a higher concentration of viral particles and a longer incubation of cells with virus increase the probability of contact between cells and virus particles. Additionally, the surrounding fluid can have a negative impact on transduction, owing to the interaction of medium components with the baculovirus; PBS containing Ca²⁺ and Mg²⁺ is therefore recommended as the surrounding fluid for transduction experiments. Figure 1b displays the transduction conditions that resulted in the highest percentage of GFP+ cells: 150 virus particles per cell (ppc) and an incubation time of 5 h with hMSC-TERT. The experiments show that virus concentration and the incubation time of cells with virus, in particular, influence transduction efficiency. Based on the screening results, the transduction conditions will be further optimized using a face-centered central composite design, with PBS containing Ca²⁺ and Mg²⁺ as the surrounding fluid and an incubation temperature of 37 °C.
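The two-level factorial screening described above can be illustrated with a minimal sketch. The factor names mirror the study (virus concentration, incubation time, surrounding fluid), but the design matrix ordering and the response values are synthetic; the actual screening was set up and analyzed in Design-Expert v9.

```python
# Minimal 2^3 full-factorial screening sketch. Coded units: -1 = low level,
# +1 = high level. The response values are illustrative, not measured data.
from itertools import product

factors = ["virus_conc", "incubation_time", "surrounding_fluid"]

# All 8 runs of the full two-level design, in itertools.product order
design = list(product([-1, 1], repeat=len(factors)))

# Hypothetical transduction efficiencies (%) for the 8 runs, ordered as above
response = [8, 2, 28, 22, 38, 32, 58, 52]

def main_effect(design, response, i):
    """Average response at the +1 level minus average at the -1 level."""
    hi = [y for run, y in zip(design, response) if run[i] == 1]
    lo = [y for run, y in zip(design, response) if run[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(design, response, i) for i, f in enumerate(factors)}
# With these toy numbers, virus concentration and incubation time dominate,
# and the surrounding fluid shows a small negative effect.
```

In a real screening the effect estimates would be tested against replicate error before declaring a factor significant; the sketch only shows the main-effect arithmetic.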
Background: Breast cancer is the second leading cause of cancer-related deaths in women worldwide, and among its subtypes the triple-negative subtype (TNBC) represents a clinical challenge, being associated with high mortality and lacking effective therapies [1], [2]. Accordingly, there is an urgent need to design new and more effective drugs to treat breast cancer. Notch signaling is an evolutionarily conserved cell-to-cell communication pathway crucial during embryonic and breast development and for tissue homeostasis. This pathway is often hyperactivated by overexpression of Notch receptors and/or their ligands in several types of cancer, including breast cancer (TNBC included), where it contributes to development, progression, and drug resistance [3], [4], [5]. Our aim is to generate a function-blocking antibody against the Notch Delta-like-1 (DLL1) ligand with therapeutic efficacy against breast cancer. Materials and methods: DNA encoding the human DLL1 full-length extracellular domain (DLL1-ECD) and a truncated version containing the minimal Notch-receptor-binding region (DLL1-EGF3) were cloned into pFUSE-Fc1-IgG1 and expressed in HEK293E6 cells. Recombinant proteins were purified from culture media by protein A affinity and size-exclusion chromatography. The human scFv phage-display Tomlinson I+J library was used to select scFv specific for peptides covering the DLL1 regions that bind Notch. The binding ability and specificity of the selected scFv clones were evaluated by scFv-on-phage ELISA. Our strategy yielded 20 mg of pure (>95%) and stable DLL1-ECD-Fc, as confirmed by SDS-PAGE and a Thermofluor assay. The DLL1-EGF3-Fc yield was very low, and buffer screenings are ongoing to optimize protein stability. Functional studies in human breast cancer MCF7 cells showed that both ligands are biologically active, as they increased the expression of the Notch-dependent genes HES-1, HEY-L, and HEY-1.
Recombinant DLL1 and peptides were used to select monoclonal antibodies by phage display. After three rounds of panning with DLL1 peptides we identified 13 scFv-positive clones, 2 of which showed high affinity for DLL1-ECD-Fc. We are currently performing additional phage-display selections to increase the number of positive clones. scFv with higher affinities will be reformatted into IgGs, and their ability to inhibit the Notch pathway will be evaluated. The anti-oncogenic effects of anti-DLL1 IgGs will be assessed in breast cancer cells in viability/apoptosis, proliferation, migration, and invasion assays. An anti-DLL1 IgG with therapeutic efficacy against breast cancer would demonstrate that targeting DLL1 could be a key strategy for successfully treating breast cancer.

Recombinant adeno-associated virus (rAAV) vectors have an outstanding reputation in gene therapy and are being evaluated for cancer therapy [1]. Advantages include long-term gene expression, targeting of both dividing and non-dividing cells, and low immunogenicity. Established rAAV production relies on triple transfection of adherent HEK 293 cells, which hardly meets the product yield requirements of clinical applications. We therefore transferred the AAV production system to HEK 293-F suspension cells. This process is scalable and uses serum-free media, streamlining downstream procedures. After optimizing transfection efficiencies and shaker cultivations, we produced titers of 1×10⁵ viral genomes per cell in a 2 L bioreactor. The suspension-adapted HEK FreeStyle 293-F cell line was used for the experiments in chemically defined, animal-component-free media (HEK-TF and HEK-GM (Xell AG); FreeStyle F17 (Thermo Fisher Scientific)). Samples for viable cell density and viability were taken daily and analyzed with an automated cell counting system (Cedex, Roche Diagnostics). Transient transfection of 3×10⁶ cells/ml was carried out with polyethylenimine MAX at a 1:4 DNA:PEI ratio (w/w) with 2 μg DNA.
The three plasmids (pGOI, pRepCap, pHelper) were applied at a 1:1:1 molar ratio (Fig. 1a). Pretests were performed in orbitally shaken TubeSpin bioreactors. For scale-up, batch processes were carried out in 125 ml shake flasks as well as in 2 L stirred bioreactors at 30% air saturation and pH 7.1. Transfection efficiencies and rAAV production were quantified by flow cytometry, using a GOI coding for a fluorescent protein, and by qPCR of genomic copies, respectively. By optimizing the DNA amount for transfection of 293-F cells, more than 90% of the cells were reproducibly transfected. Batch cultivations in shake flasks revealed that rAAV were produced in the first 24-96 h after transfection. Figure 1b shows viable cell densities and viabilities in relation to the genomic titer. Genomic titers were determined from raw cell extracts, and up to 10⁹ copies/ml were reproducibly achieved. A decrease in viability marked the decline in genomic copies per ml, indicating that prolonging the process, e.g. by adding a feed, would probably not increase yield. In a first scale-up, rAAV production was transferred to a 2 L bioreactor (Fig. 1c). Transfection efficiencies in bioreactors of up to 55% were comparable to those obtained in a simultaneous shake-flask experiment; they were lower than in prior experiments owing to the controlled conditions in the bioreactor. Nonetheless, the titer of up to 1×10⁵ genomic copies per cell was higher than in shake flasks. These first experiments with 293-F cells in HEK-TF medium showed promising results for transferring rAAV production from the adherent system to suspension. After transfections were improved by adjusting DNA amounts in small-scale experiments, AAV production was analyzed in shake flasks. The batch process showed the expected increase in cell density, with low variability between biological replicates (Fig. 1b).
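The transfection and titer arithmetic above can be sketched in a few lines. The 1:4 DNA:PEI (w/w) ratio and the 2 μg DNA amount are taken from the text; the cell density and volumetric titer used in the per-cell conversion are illustrative values, not the study's measurements.

```python
# Back-of-the-envelope helpers for PEI-based transfection and titer figures.
# All worked numbers below are assumptions for illustration only.

def pei_mass_ug(dna_ug, dna_to_pei=(1, 4)):
    """PEI mass (ug) for a given DNA mass at a w/w DNA:PEI ratio."""
    dna_part, pei_part = dna_to_pei
    return dna_ug * pei_part / dna_part

def gc_per_cell(gc_per_ml, viable_cells_per_ml):
    """Convert a volumetric genomic titer (GC/ml) into a per-cell titer."""
    return gc_per_ml / viable_cells_per_ml

pei = pei_mass_ug(2.0)             # 2 ug DNA at 1:4 w/w -> 8 ug PEI
per_cell = gc_per_cell(1e9, 3e6)   # hypothetical density: ~333 GC per cell
```

The per-cell conversion is the same bookkeeping used in the abstract's Fig. 1c, where genomic copies are reported per cell for comparability between shaker and bioreactor runs.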
The genomic titer increased with the viable cell density until day four, when a sudden drop began. This observation was made for AAV productions in HEK-TF, HEK-GM, and FreeStyle F17 media. For optimal yields, we assume that a slight decrease in viability marks the time to harvest. Using the optimized protocols, a batch process was carried out in a 2 L bioreactor. Interestingly, the bioreactor cultivation resulted in lower overall viable cell densities but higher genomic copies per cell compared with shake flasks (Fig. 1c). These results are comparable to previously published data for suspension cells [2]. Subsequent optimization of the bioreactor protocol will further increase rAAV yield.

Genethon and Pall have collaborated to assess Pall's single-use iCELLis fixed-bed bioreactor for viral vector production. Clinical use of gene therapies to treat formerly incurable genetic diseases is advancing rapidly, and viral vectors are an important tool for introducing genes into target cells. Many gene therapies have been developed using adherent cells in 2-dimensional flatware or roller bottles, but reaching commercial-scale production with these technologies represents a significant challenge. The iCELLis bioreactor enables large-scale viral vector production by providing a 3-dimensional matrix for cell growth in a compact configuration (Fig. 1). Up to 500 m² of surface area is available in a compact bioreactor measuring 88 mm in diameter in a total volume of 75 L, with pH, DO, and temperature control. A key feature of the iCELLis bioreactor is that it scales by increasing the diameter of the fixed bed while keeping the height constant, with no change in aspect ratios. The height of the fixed bed can be varied (2, 4, and 10 cm), as can the density of carrier packing (96 g/L or 144 g/L). The iCELLis system comes in two formats, the iCELLis Nano bioreactor (0.53-4.0 m²) and the iCELLis 500 bioreactor (66-500 m²).
Processes developed in the benchtop iCELLis Nano bioreactor can be transferred directly to the corresponding iCELLis 500 system; the iCELLis Nano thus provides an efficient platform for process optimization. The Genethon rAAV-8 process was transferred to a 0.8 m² iCELLis Nano bioreactor (2 cm bed height, 144 g/L packing density) using FreeStyle media. The initial iCELLis Nano process was established as (1) seed on day 1, (2) transfect on day 5, (3) harvest on day 8, and yielded <1×10⁹ vp/cm² (n=3). Media exchange, cell density at transfection, pDNA/cell ratio, and lysis method were then varied to determine their effect on productivity. The modified process was then scaled from a 0.8 m² to a 4.0 m² (10 cm bed height, 144 g/L packing density) iCELLis Nano bioreactor.
- Media: a media exchange at 5 hours post transfection, with DMEM substituted for FreeStyle medium, resulted in an 8x increase in specific productivity.
Fig. 1 (abstract P-349). a Schematic overview of rAAV production in HEK293 cells with the triple-transfection system. b Viable cell densities (VCD), viabilities, and genomic copies per ml (GC) of an rAAV production with 293-F batch cultivations in shake flasks; genomic copies per ml refer to the titer determined in 1 ml culture volume; error bars represent biological and technical duplicate measurements of samples. c Viable cell densities and genomic copies per cell of an rAAV production with a 293-F batch cultivation in a 2 L bioreactor; for comparability between shaker and bioreactor data, genomic copies are given per cell; error bars represent technical duplicate measurements of samples.
- Cell density at transfection: cells were seeded at 6,000 cells/cm² and reached 200,000 cells/cm² on day 5, which was determined to be the optimal cell density for transfection.
- pDNA/cell ratio: reducing pDNA by 50% had no significant effect on productivity.
- Lysis: use of Triton X-100 at 0.5% with 100 mM NaCl at pH 8 resulted in >100% virus recovery compared with sampled carriers.
- Scaling: specific productivity was maintained as the system was scaled from 0.8 m² to 4.0 m².
- Overall, an average yield of 4×10¹³ vg/m² was achieved.
The iCELLis technology is being widely adopted for viral vector production. Transferring a process to the iCELLis Nano bioreactor is easily achieved, and once in place the process can be optimized to provide significant productivity increases and cost savings, such as reduced pDNA. The iCELLis Nano bioreactor is an efficient benchtop system whose results can be readily scaled to the iCELLis 500 system.

Background: The TissUse Multi-Organ-Chip (MOC) platform contributes to the ongoing advancement of systemic substance testing in vitro. Current in vitro and animal tests for drug development fail to emulate the systemic organ complexity of the human body and therefore often do not accurately predict drug toxicity. Cardiotoxicity, in particular, is one of the main reasons new compounds fail in clinical trials. We therefore aimed to establish an autologous dynamic multi-organ device integrating cardiomyocytes for substance testing. Generic 2D monolayer and 3D suspension iPSC-derived cardiomyocyte differentiation protocols were established. Beating cardiomyocytes were first seen on day 8 in monolayer as well as in spheroid culture. The cultures contained up to 64% cardiac troponin T-positive cells and 44% myosin heavy chain-positive cells by flow cytometry (Fig. 1g, h). Expression of myosin II heavy chain, α-actinin, myosin 9/10, myosin 11, and caldesmon was shown by immunohistochemistry (Fig. 1a-d). Because lactate-based enrichment of cardiomyocytes was omitted, cardiac fibroblasts are also present in the spheroids, as shown by vimentin staining. These cardiac fibroblasts yield a physiologically heterogeneous cell population similar to the human heart.
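The marker-positivity percentages above come from gated flow cytometry event counts. A trivial bookkeeping sketch; the event counts below are hypothetical, not the study's data.

```python
# Percent-positive from flow cytometry gating: positive events over total
# gated events, times 100. Counts are illustrative assumptions only.

def percent_positive(positive_events, total_events):
    """Percentage of gated events positive for a marker."""
    return 100.0 * positive_events / total_events

ctnt = percent_positive(6400, 10000)   # e.g. 64.0 % cardiac troponin T+
myh = percent_positive(4400, 10000)    # e.g. 44.0 % myosin heavy chain+
```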
Beating spheroids were cultivated for 7 days under dynamic culture conditions in the Multi-Organ-Chip. The integrated on-chip micropump provides physiological-like pulsatile circulation at microliter scale, leading to better nutrient and oxygen supply. The next significant step is to combine multiple autologous 3D organ equivalents in our Multi-Organ-Chip using iPSC differentiation technology; differentiating all cell types from one iPSC donor is crucial to overcoming sourcing and rejection problems. Combining our Multi-Organ-Chip platform with iPSC differentiation technology will eventually lead to a personalized system for drug and substance testing.

Lab as a Service - automated cell-based assays. Lena Schober, Moriz Walter, Andrea Traube; Laboratory Automation and Biomanufacturing Engineering, Fraunhofer IPA, Stuttgart, Germany. Correspondence: Lena Schober (lena.schober@ipa.fraunhofer.de). BMC Proceedings 2018, 12(Suppl 1):P-365.
Background: The use of cell-based assays in the pharmaceutical industry and academic research is a growing trend and a driving force for reducing drug development costs. Academic research gains information about intracellular targets and functional mechanisms through a variety of assays. These benefits can be exploited in preclinical studies, and costly late-stage drug failures may be reduced by the use of cell-based assays. Automated systems are also in great demand and will change substance testing and research activities. Nevertheless, many barriers currently limit the successful application of automated systems in this field; lack of flexibility and the demand for skilled computer scientists and engineers are just two of the main aspects cited by experts. Our strong background in automated cell culture technologies, and the expertise gained in several projects, led us to rethink the overall process chain and move beyond established principles.
A new service-oriented platform for the execution of commonly used cell-based assays will be introduced. The main idea is to give academic research groups and spin-offs that cannot afford such specialized infrastructure access to automated infrastructure.

It is now known that the development of inhibitory antibodies in hemophilia patients is closely related to immunogenic epitopes present in the coagulation factors. These proteins are produced in hamster cells [1-4], which impart a post-translational modification profile different from the human one. Patients with high-titer/high-responding inhibitors must be treated with bypassing agents that can achieve hemostasis. Activated factor VII (FVIIa) is an attractive candidate for hemostasis independent of FVIII/FIX, making this coagulation factor an alternative for hemophilia patients with inhibitory antibodies. However, recombinant factor VII is produced in BHK-21 cells (baby hamster kidney cells) and, like the other coagulation factors, may contain immunogenic epitopes [5-7]. In this context, it becomes extremely important to produce recombinant proteins with complex post-translational modifications in a cell line not yet used for this purpose [8-10]. We have been using the SK-HEP-1 human cell line for the production of recombinant FVII. To generate the recombinant cell line we used a bicistronic lentiviral vector, 1054-GFP, containing the FVII gene and the GFP selection marker gene. A master cell bank and a working cell bank were generated under GMP conditions. rFVII was analyzed by ELISA, western blot, gene expression quantification, and biological activity using the prothrombin time (PT) assay. rFVII was purified by affinity chromatography on a VIISelect (GE) column. After purification, the rFVII was formulated and freeze-dried for use in in vivo experiments.
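A minimal sketch of how step recovery and purity are typically computed when evaluating a capture step such as the VIISelect chromatography described above. All numbers below are hypothetical, used only to show the arithmetic.

```python
# Purification bookkeeping: recovery tracks activity (or mass) across a
# step; purity tracks the target's share of total protein. Illustrative only.

def step_recovery(loaded_iu, eluted_iu):
    """Activity recovery of a purification step, as a percentage."""
    return 100.0 * eluted_iu / loaded_iu

def purity(target_mg, total_protein_mg):
    """Target fraction of total protein, as a percentage."""
    return 100.0 * target_mg / total_protein_mg

rec = step_recovery(loaded_iu=200.0, eluted_iu=130.0)   # 65.0 %
pur = purity(target_mg=1.9, total_protein_mg=2.0)       # ~95.0 %
```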
Under static conditions, SK-HEP-1 cells showed stable FVII production over a period of 6 months, with an average of 8.03 IU/ml FVII, 83% cell viability, and 77% of cells expressing the GFP gene. After purification on the VIISelect column, a recovery of 65% of the purified protein with 95% purity was observed (Fig. 1). This purified recombinant FVII is being used in in vivo experiments to determine pharmacokinetic parameters and to evaluate the post-translational modification profile. In conclusion, this study reports the use of the SK-HEP-1 cell line for high-level production of recombinant factor VII. These cells have proven effective for recombinant protein production and can be used as a new platform for the production of recombinant proteins.
Fig. 1 (abstract P-366). a Determination of the protein yield of EGFR (epidermal growth factor receptor) synthesized in a CHO CECF system: analysis of the EGFR protein yield obtained in various batches of the CECF-formatted reaction. CECF synthesis was performed in the presence of ¹⁴C-leucine for radiolabeling of target proteins; radiolabeled proteins were precipitated using TCA, followed by scintillation measurement. b Detection of radiolabeled EGFR by autoradiography; a no-template control (NTC) was prepared containing no EGFR DNA template.

Background: The emergence of stem cell-based regenerative medicine has recently led to the need for sustained production of such cells [1]. Hence, new bioreactors and carriers have been designed for cell expansion. However, to meet this increasing demand, improvements in both the quality and the quantity of stem cells remain necessary. Soft biocompatible microcarriers mimicking the extracellular matrix in terms of structure and stiffness should be of great utility, as substrate stiffness strongly influences in vitro stem cell fate and differentiation [2, 3].
Our expertise in microbead design using JetCutting technology [4] enabled us to engineer alginate beads of approximately 200 μm with various G/M monomer ratios. We used a JetCutter (geniaLab GmbH) with a 100 μm nozzle at a maximum speed of 12,000 rpm. Alginate solutions at concentrations of 2% to 4% were gelled in a 2% CaCl₂ / 50% EtOH solution. Alginates with estimated viscosities (at 1%) from 30 to 720 mPa·s were tested. A further surface treatment with gelatin (0.1%, 1%) and poly-L-lysine (0.1%) was carried out to achieve optimal anchoring of human adipose-derived mesenchymal stem cells (ATCC PCS-500-011) in MesenPRO RS medium (Gibco). The JetCutter technology allowed us to obtain alginate microcarriers with good size homogeneity around 200 μm and sphericity comparable to commercial carriers (Table 1). The best adhesion of human adipose-derived mesenchymal stem cells was obtained on 0.1% gelatin-coated alginate carriers (Fig. 1). We observed limited apoptosis, and the stemness of the human adipose-derived mesenchymal cells was conserved after 14 days in culture (data not shown).

Cellular bioassays developed with functionally immortalized cell lines. Aileen Bleisch 1, Aleksandra Velkova 3, Tom Wahlicht 2, Dagmar Wirth 2, Tobias May 1. 1 InSCREENeX GmbH, Braunschweig, Germany; 2 MSYS, Helmholtz Centre for Infection Research, Braunschweig, Germany; 3 Greiner Bio-One GmbH, Frickenhausen, Germany. BMC Proceedings 2018, 12(Suppl 1):P-381.
Background: A major challenge of current research is the limited availability of physiologically relevant cells [1], which hinders the development of relevant cellular bioassays that are robust, reproducible, and scalable. To overcome these limitations we developed an immortalization strategy allowing the efficient and reproducible establishment of novel cell lines with an in vivo-like phenotype.
The main feature of our CI-SCREEN technology is its ability to combine the advantage of cell lines (unlimited cell supply) with the advantage of primary cells (physiological relevance). Using this technology we have immortalized, amongst others, a human osteoblast cell line (CI-huOB) [2]. In the present study, the in vivo-like phenotype and functionality of the novel CI-huOB line were examined. To this end, CI-huOB cells were used to develop a 3D cell culture model using magnetic 3D bioprinting technology (Nano3D Biosciences, Houston, TX, USA) [3]. The CI-huOB cell line was recently described and is cultivated in huOB maintenance medium (InSCREENeX, Germany). For spheroid creation, CI-huOBs were grown in a monolayer and magnetized by adding a magnetic nanoparticle assembly (NanoShuttle, NS; Nano3D Biosciences, Houston, TX, USA) at a concentration of 4 μl NS/cm² growth area. After overnight incubation, magnetized CI-huOB cells were detached and seeded into CELLSTAR® cell-repellent 96-well plates (Greiner Bio-One, Frickenhausen, Germany). With the help of mild magnetic forces, the cells were printed into spheroids within 2 h. The spheroids consist of 1,000-50,000 cells and were cultured for up to 50 days. Cell viability was analyzed by propidium iodide (PI) and calcein-AM staining. To improve spheroid functionality, spheroids were cultivated in huOB differentiation medium (InSCREENeX, Germany). "Mini-bone" tissue functionality, i.e. mineralization, was analyzed by an alkaline phosphatase activity assay and Alizarin Red S staining (Ca²⁺ deposits). Combining CI-huOB cells with magnetic 3D bioprinting enabled the establishment of reproducible and consistent 3D spheroids. A single spheroid formed per well, independent of the number of cells (1,000-50,000) (Fig. 1a), and the formed spheroids were stable for a culture period of up to 50 days (Fig. 1b).
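The NanoShuttle dose above is specified per growth area (4 μl NS/cm²), so it scales linearly with the vessel. A small dosing helper; the vessel areas used in the examples are typical nominal values and are assumptions, not figures from the abstract.

```python
# NanoShuttle dosing helper: dose scales linearly with growth area at the
# 4 ul/cm^2 concentration stated in the protocol. Vessel areas are nominal
# illustrative values.

def nanoshuttle_ul(growth_area_cm2, dose_ul_per_cm2=4.0):
    """NanoShuttle volume (ul) for a culture vessel of the given area."""
    return growth_area_cm2 * dose_ul_per_cm2

t75_dose = nanoshuttle_ul(75.0)    # 300 ul for a nominal 75 cm^2 flask
well6_dose = nanoshuttle_ul(9.6)   # ~38 ul for a nominal 9.6 cm^2 well
```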
Neither cell death nor cell proliferation was observed in the bioprinted spheroids, as indicated by their stable size throughout cultivation (Fig. 1c). After treatment with a differentiation stimulus, the 3D bioprinted spheroids became fully functional "mini bones", as highlighted by the alkaline phosphatase activity and the Ca²⁺ deposits within them (Fig. 1d, e). Taken together, these results demonstrate that the functional immortalization technology provides physiologically relevant cells in sufficient numbers and that magnetic 3D bioprinting enables fast, consistent cell aggregation and the formation of stable, uniform spheroids. Importantly, these immortalized cells are capable of differentiating when a suitable stimulus is provided; for differentiation into mini bones, 3D spheroid cultivation and additional stimulation by small molecules are required. Combining physiologically relevant cell systems with three-dimensional culturing will help generate in vitro test systems that closely resemble in vivo physiology, thereby supporting future drug discovery.
Fig. 1 (abstract P-381). Characterization of spheroid "mini bones". a Different numbers (1,000-50,000) of CI-huOB cells were printed into spheroids. b 20,000 CI-huOBs were printed into spheroids and cultivated for the indicated time points. c For analysis of spheroid sizes, pictures were taken and quantified with ImageJ. d/e 20,000 CI-huOB cells were printed into spheroids and cultivated with (huOB differentiation medium) or without a differentiation stimulus for two weeks.
Afterwards, the bioprinted spheroids were sectioned on a cryomicrotome and stained d for Ca²⁺ deposits (Alizarin Red S) or e for alkaline phosphatase activity.
annual review of biophysics ribosome profiling-guided depletion of an mrna increases cell growth rate and protein secretion ubiquitous chromatin-opening elements (ucoes): applications in biomanufacturing and gene therapy the art of cho cell engineering: a comprehensive retrospect and future perspectives a novel bxb1 integrase rmce system for high fidelity site-specific integration of mab expression cassette in cho cells high-troughput lipidomic and transcriptomic analysis to compare sp2/0, cho, and hek-293 mammalian cell lines single cell characterisation of chinese hamster ovary (cho) cells eva pekle 1,2 , guglielmo rosignoli 1 generation of stable chinese hamster ovary pools yielding antibody titers of up to 7.6 g/l using the piggybac transposon system comparison of three transposons for the generation of highly productive recombinant cho cell pools and cell lines effects of ammonia and lactate on hybridoma growth, metabolism, and antibody production lactate and glucose concomitant consumption as a self-regulated ph detoxification mechanism in hek293 cell cultures flux balance analysis of cho cells before and after a metabolic switch from lactate production to consumption reducing recon 2 for steady-state flux analysis of hek cell culture trastuzumab -mechanism of action and use in clinical practice hek293 cell culture media study towards bioprocess optimization: animal derived component free and animal derived component containing platforms enhancing heterologous protein expression and secretion in hek293 cells by means of combination of cmv promoter and ifnα2 signal peptide production of lentiviral vectors references 1. 
wurm f m: production of recombinant protein therapeutics in cultivated mamammalian cells initial identification of low temperature and culture stage induction of mirna expression in suspension cho-k1 cells small indels induced by crispr/cas9 in the 5' region of microrna lead to its depletion and drosha processing retardance production of recombinant protein therapeutics in cultivated mammalian cells improved antibody production in chinese hamster ovary cells by atf4 overexpression the vesicle-trafficking protein munc18b increases the secretory capacity of mammalian cells rapid evaluation of n-glycosylation status of antibodies with chemiluminescent lectin-binding assay intracellular secretion pathway analysis for constructing highly producible engineered cho cells. 16th annual peptalk intracellular secretion analysis of therapeutic antibodies in engineered high-producible cho cells analysis of intracellular recombinant igg secretion in engineered cho cells. the 29th annual and international meeting of the japanese association for animal cell technology humanization strategies for an anti-idiotypic antibody mimicking hiv-i gp41 antibody humanization by molecular dynamics simulations-in-silico guided selection of critical backmutations maxquant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification the perseus computational platform for comprehensive analysis of (prote)omics data hoffrogge: label-free protein quantification of sodium butyrate treated cho cells by esi-uhr-tof-ms the art of cho cell engineering: a comprehensive retrospect and future perspectives gs system for increased expression of difficult-to-express proteins the netherlands correspondence: maurice van der heijden (heijden@proteonic.nl) bmc proceedings increased recombinant protein production owing to expanded opportunities for vector integration in high chromosome number chinese hamster ovary cells ires-mediated translation of 
membrane proteins and glycoproteins in eukaryotic cell-free systems cell-free protein expression based on extracts from cho cells comparison of cell-based vs. cell-free mammalian systems for the production of a recombinant human bone morphogenic growth factor. eng dual-responsive magnetic core-shell nanoparticles for nonviral gene delivery and cell separation pdmaema-grafted core-shell-corona particles for nonviral gene delivery and magnetic cell separation influence of polyplex formation on the performance of starshaped polycationic transfection agents for mammalian cells systematic study of a library of pdmaema-based, superparamagnetic nano-stars for the transfection of cho-k1 cells systems biology of unfolded protein response in recombinant cho cells dynamics of unfolded protein response in recombinant cho cells cells were transfected with np@(pdmaema 1037 ) 46 (n/p 15), separated 24 h post transfection (t = 0) by magnetically-assisted cell sorting and placed into separated cultures. the bars represent the overall transfection efficiency. distribution of low (light green), middle (green), and high (dark green) producers within the egfp-expressing cell fraction. data represent one experiment carried out in duplicate, with random experimental error shown. 
d correlogram between the molecular characteristics of the nano-stars (core diameter, arm density, arm length, number of monomeric units per nano-star), the physicochemical properties of the corresponding polyplexes (hydrodynamic radius, zeta potential), the transfection conditions (n/p ratio, amount of polymer), and the cellular reactions (transfection efficiency, magnetism, viability) production of recombinant protein therapeutics in cultivated mammalian cells position effects on eukaryotic gene expression bacterial artificial chromosome library for genome-wide analysis of chinese hamster ovary cells construction of bac-based physical map and analysis of chromosome rearrangement in chinese hamster ovary cell lines suguru imanishi 1 , akira ito 1 , masamichi kamihira 1,2 1 department of chemical engineering cre recombinase-mediated sitespecific modification of a cellular genome using an integrasedefective retroviral vector targeted transgene insertion into the cho cell genome using cre recombinase-incorporating integrase-defective retroviral vectors an accumulative site-specific gene integration system using cre recombinase-mediated cassette exchange improved recombinant antibody production by cho cells using a production enhancer dna element with repeated transgene integration at a predetermined chromosomal site lipid extraction by methyl-tert-butyl ether for high-throughput lipidomics a simple method for the isolation and purification of total lipids from animal tissues using baculovirus as a gene shuttle in hmsc: optimization of transduction efficacy gundula sprick 1 clinical applications of mesenchymal stem cells concise review: mesenchymal stem cell treatment of the complications of diabetes mellitus wharton's jelly-derived mesenchymal stromal cells as a promising cellular therapeutic strategy for the management of graft-versus-host disease gene therapy: twenty-first century medicine state-of-the-art gene-based therapies: the road ahead viral vectors: a look 
back and ahead on gene transfer technology basic biology of adeno-associated virus (aav) vectors used in gene therapy biosafety challenges for use of lentiviral vectors in gene therapy adenoviral vector-mediated gene therapy for gliomas: coming of age manufacturing of viral vectors for gene therapy: part i. upstream processing the complete dna sequence of autographa californica nuclear polyhedrosis virus efficient gene transfer into human hepatocytes by baculovirus vectors efficient transduction of mammalian cells by a recombinant baculovirus having the vesicular stomatitis virus g glycoprotein recombinant baculoviruses as mammalian cell gene-delivery vectors post-transcriptional regulatory element boosts baculovirusmediated gene expression in vertebrate cells baculoviral vector-mediated transient and stable transgene expression in human embryonic stem cells systematic comparison of constitutive promoters and the doxycycline-inducible promoter baculovirus-induced recombinant protein expression in human mesenchymal stromal stem cells: a promoter study triple-negative breast cancer: an unmet medical need the therapeutic monoclonal antibody market. 
mabs a monoclonal antibody against human notch1 ligand-binding domain depletes subpopulation of putative breast cancer stem-like cells notch activation stimulates migration of breast cancer cells and promotes tumor growth notch-out for breast cancer therapies aav production in suspension: evaluation of different cell culture media and scale-up potential modular adeno-associated virus (raav) vectors used for cellular virusdirected enzyme prodrug therapy production of recombinant adenoassociated virus vectors using suspension hek293 cells and continuous harvest of vector from the culture media for gmp fix and flt1 clinical vector development of a cost-efficient scalable production process for raav-8 based gene therapy by transfection of hek-293 cells simon arias 2 , mustapha hohoud 1 , roel lievrouw 1 , fabien moncaubeig 1 b-1120 brussels, belgium; 2 généthon, rue de l'internationale 1 cell-free systems based on cho cell lysates: optimization strategies, synthesis of "difficult-to-express" proteins and future perspectives cell-free protein expression based on extracts from cho cells comparison of cell-based vs. 
cell-free mammalian systems for the production of a recombinant human bone morphogenic growth factor ires-mediated translation of membrane proteins and glycoproteins in production of recombinant factor vii in sk-hep-1 human cell line zip 14049-900, brazil; 3 department of clinical, toxicological and food science analysis, faculty of pharmaceutical sciences of ribeirão preto human cell lines for the production of recombinant proteins: on the horizon production of recombinant protein therapeutics in cultivated mammalian cells recombinant protein therapeutics from cho cells -20 years and counting establishment of a cell line expressing recombinant factor vii and its subsequent conversion to active form fviia through hepsin by genetic engineering method expression and fast preparation of biologically active recombinant human coagulation factor vii in cho-k1 cells implications of the presence of n-glycolylneuraminic acid in recombinant therapeutic glycoproteins uniquely human evolution of sialic acid genetics and biology production platforms for biotherapeutic glycoproteins. 
occurrence, impact, and challenges of non-human sialylation human cells: new platform for recombinant therapeutic protein production therapeutic glycoprotein production in mammalian cells masthering industrialization of cell therapy products tissue cells feel and respond to the stiffness of their substrate matrix elasticity directs stem cell lineage specification continuous cider fermentation with co-immobilized yeast and leuconostoc oenos cells eternity and functionality -rational access to physiologically relevant cell lines generation and characterization of two immortalized human osteoblastic cell lines useful for epigenetic studies biocompatibility of nanoshuttletm and the magnetic field in magnetic 3d bioprinting publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations we accept pre-submission inquiries • our selector tool helps you to find the most relevant journal • we provide round the clock customer support • convenient online submission • thorough peer review • inclusion in pubmed and all major indexing services • maximum visibility for your research submit your manuscript at www submit your next manuscript to biomed central and we will help you at every step authors thankfully acknowledge the biotechnology and biological sciences research council for funding this research work. sns thanks esact 2017 for providing her with the opportunity to present her work at the meeting. we would like to thank moritz frei for his support for the generation of the ngs transcriptomics data. many thanks to valentine chevallier for her precious advices, to stefanos grammatikos for his support and to the whole upstream process sciences team. we thank david bruehlmann and thomas vuillemin from merck (vevey, switzerland) for providing the igg glycan variants and the 2-ab-uplc glycan data. 
The infrastructure, which was built up modularly, consists of automated liquid-handling robots, plate- and tube-handling robots, as well as incubators, refrigerators, and analysis systems such as an imaging system. The aim is to address the need for reproducibility and reliability of results and to offer access to a maximally controlled and automated environment. With the help of a web-based configurator, assay selection as well as parameterization of the assays can be done easily. After the order process, test items can be shipped to the lab, and the assays are executed on the fully automated platform. By capturing in-process data as well as environmental conditions, a truly complete data set leads to comprehensive results. As soon as results become available during the process, they can be viewed and analyzed in a secure cloud. The service can be used for single experiments in low-throughput applications and is therefore a benefit for labs that cannot afford automated infrastructure or the staff to maintain such platforms. Extensive monitoring and data capture during the run lead to a gapless data trail and the possibility of detailed result analysis. Automated processing increases reproducibility, with a direct reduction of cost and time. The centralized service, paired with specific know-how, allows up-scaling of processes at any time. The web-based interface provides flexible guidance for the user, and online ordering gives 24/7 access to the infrastructure, leading to fast and reliable result generation. Furthermore, secure interaction with additional services, e.g. other specific data-analysis tools, is possible. This dynamic access to automation offers high flexibility for low-throughput experiments and will push high-quality research and drug development at an early stage.
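As a minimal sketch of the workflow described above, the order placed through the web-based configurator could be represented as a small data structure carrying assay selection and parameterization. All field names here are illustrative assumptions, not the platform's actual schema.

```python
# Hypothetical sketch of a configurator order payload: assay selection plus
# parameterization, as in the automated-platform description above.
# Field names are assumptions for illustration only.
from dataclasses import dataclass, field, asdict

@dataclass
class AssayOrder:
    assay_name: str                                  # chosen from the assay catalog
    parameters: dict = field(default_factory=dict)   # per-assay settings
    replicates: int = 2                              # automated runs per test item
    capture_environment: bool = True                 # also log incubator conditions

order = AssayOrder(
    assay_name="cell_viability",
    parameters={"incubation_h": 24, "temperature_c": 37.0},
)
payload = asdict(order)  # serializable form, e.g. for the online order step
print(payload["assay_name"], payload["replicates"])
```

Such a payload would then travel with the shipped test items, so that the captured in-process and environmental data can be linked back to the exact assay configuration.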
Development of alternative animal cell technology platforms: CHO-based cell-free protein synthesis systems for the production of "difficult-to-express" proteins. Lena Thoring 1,2.

Background: Animal cell technologies are now commonly used for a broad range of medical and pharmaceutical applications. One main topic of these technologies is the production of proteins for therapeutic purposes. These in vivo production processes are often time-consuming and limited in the production of so-called "difficult-to-express" proteins, including the pharmaceutically relevant class of membrane proteins. To overcome these issues, novel cell-free protein synthesis platforms were developed based on the industrial workhorse, CHO cells [1]. Cell lysates provide the basis for this technology: they include all components of the translational machinery and enable protein production within a few hours. Microsomal structures present in CHO cell lysates enable post-translational modification of target proteins and insertion of membrane proteins into a lipid bilayer. In this study, a cell-free protein synthesis platform was developed based on the combination of CHO cell lysates and a continuous-exchange reaction format. The continuous-exchange reactor consists of a two-chamber system, a reaction chamber and a feeding chamber, separated by a semipermeable membrane. Driven by concentration gradients, energy components diffuse into the reaction chamber, while inhibitory byproducts are continuously removed. Different classes of proteins were selected to evaluate the quality of the CHO CECF system, including a transmembrane receptor, a single-chain variable fragment, and an ion channel. Cell-free protein synthesis was performed in the presence of 14C-leucine for radiolabeling of the synthesized proteins. Protein yield was quantified by TCA precipitation of the radiolabeled proteins followed by scintillation measurement, and molecular mass was detected by autoradiography.
Post-translational modifications and activities of the proteins were assessed by kinase assays, ELISA, endoglycosidase treatment, and electrophysiological measurements. The results showed protein production of up to around 1 g/L, with correct molecular weights detected by autoradiography. Analysis of productivity across different lysate batches, using production of the membrane protein EGFR, revealed only minimal batch-to-batch variation (Fig. 1a). Post-translational modifications of the proteins, including phosphorylation and glycosylation, were detected using Western blot and autoradiography (Fig. 1b). Evaluation of the localization of membrane-embedded eYFP fusion proteins by confocal laser scanning microscopy detected the proteins in the microsomal fraction of the CHO cell lysate. The produced single-chain variable fragments showed binding specificity in ELISA experiments, and the activity of the synthesized ion channels was confirmed by electrophysiological measurements with detected single-channel activities. In summary, a cell-free system based on CHO cell lysates was developed for high-yield protein production, providing a platform for the efficient production of "difficult-to-express" proteins. The combination of a CHO lysate-based cell-free system with a continuous-exchange format proves to be a highly efficient production system for various classes of "difficult-to-express" proteins. This approach opens up a fast and cost-effective process pipeline for the production of such proteins and shows high potential for industrial applications, including screening technologies, protein structure determination, and just-in-time protein production.
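As a toy illustration of the batch-to-batch comparison above, the variation across lysate batches might be summarized as a coefficient of variation. The yield values below are hypothetical numbers around the ~1 g/L level mentioned in the abstract, not measured data from the study.

```python
# Hedged sketch: quantify batch-to-batch variation of cell-free protein
# yield via the coefficient of variation (CV). Yield values are
# hypothetical, not data from the abstract.
from statistics import mean, stdev

def coefficient_of_variation(yields_g_per_l):
    """Return CV in percent: sample standard deviation / mean * 100."""
    return stdev(yields_g_per_l) / mean(yields_g_per_l) * 100.0

# Hypothetical EGFR yields (g/L) from three independent lysate batches.
batch_yields = [0.98, 1.02, 1.00]
cv = coefficient_of_variation(batch_yields)
print(f"Batch-to-batch CV: {cv:.1f}%")  # a small CV indicates minimal variation
```

A CV of a few percent or less across batches would be consistent with the "minimal batch-to-batch variation" reported for the system.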
key: cord-008777-i2reanan authors: nan title: ECB12: 12th European Congress on Biotechnology date: 2005-07-19 journal: J Biotechnol doi: 10.1016/j.jbiotec.2005.06.005 sha: doc_id: 8777 cord_uid: i2reanan

In the last 20 years, biotechnology has made tremendous progress in its different application fields. Red biotechnology, the use of biological methods for medical purposes, is firmly established in the development of new drugs. The use of plant, or green, biotechnology is under controversial discussion in politics and the public; nevertheless, genetically modified herbicide- and insect-resistant crops are cultivated to a large extent. Industrial biotechnology, now often called white biotechnology, seems widely underestimated in public perception. It includes all industrial processes for the production of chemical products and enzymes that rely fully or partly on the biological toolbox of nature. White-biotechnology processes are carried out in a contained environment, typically in a bioreactor in a dedicated industrial plant. Well-known examples are the fermentative production of antibiotics, amino acids, vitamins, and enzymes, products related to medical, food, and feed applications. Many products, such as the amino acids glutamic acid, lysine, threonine, and tryptophan, are produced exclusively with microbes in large-scale industrial processes. In other cases, such as the water-soluble vitamin B2, biotechnological processes have successfully replaced chemical production owing to lower costs and improved eco-efficiency. In contrast, most industrial chemicals and polymers are produced by chemical synthesis based on oil and gas. There are, however, some examples of bioproducts among industrial chemicals: the solvents acetone and butanol, for instance, were manufactured by fermentation for several decades in the last century. Since the 1950s, these fermentations have been replaced by more efficient and cheaper chemical syntheses.
Recently, new pilot and production processes for biopolymers such as PHA, or biomonomers such as 1,3-propanediol and lactic acid, have been announced by different companies. Currently, ethanol is by far the largest white-biotechnology product by volume. In Brazil, where bioethanol is used as a liquid transportation fuel, annual production is in the range of 15 million m3, and bioethanol is of growing importance in the United States as well. Business consultants predict tremendous growth of biotechnological products within the chemical industry. High prices for crude oil, dropping prices for renewable resources, and scientific progress nourish the expectation that industrial biotechnology will replace many bulk chemicals. Is this realistic? Will we switch from a petrochemical industry to a bio-based chemistry within the next few years? Based on economic considerations, this is a long-term goal. To achieve it, making renewable raw materials available for the competitive bioproduction of bulk chemicals at low cost remains a scientific challenge. Conversion of lignocellulosic material to fermentation sugar may be one solution, and green biotechnology can also contribute to the supply of cheap fermentation raw materials. Innovative ideas for downstream processing or further chemical conversion of fermentation products are required to enter the chemical value chains. Furthermore, the identification of new higher-value bioproducts offers a chance for short-term successes in white biotechnology. Enzyme and protein engineering has the potential to create new biomolecules, and metabolic engineering can contribute to developing new metabolic pathways, perhaps even for unnatural compounds.

By continuously increasing the efficiency and throughput of DNA sequencing, we, together with colleagues, have sequenced the human genome and the genomes of all the major model organisms. The challenge now centers on understanding these vast instruction sets.
Our ability to read these instructions must be enhanced through the collection of key additional data sets. One productive path for delineating the functional sequences and inferring their function is comparative sequencing. The mouse genome sequence, for example, led to estimates that only 5% of the human genome is functional. Sequencing an extensive set of additional mammalian genomes promises to define these functional sequences with a resolution of less than 10 base pairs. On a different course, we have sequenced the chimpanzee genome to learn what has changed in the evolution of humans. Beyond providing for the first time a catalog of the differences between the two genomes, the comparison of the chimpanzee and human genomes reveals the patterns of neutral mutation and the regions that deviate from them. The talk will summarize these and related findings. The sequences of additional primate genomes will help delineate what has changed specifically in humans and add power to the analysis. Ultimately, capturing human sequence variation and correlating it with phenotypic variation will be required to understand function. But learning what these functional elements do requires new sets of experimental data. For this, we have turned to the nematode C. elegans. In this simple system, most of the ~20,000 genes have been defined and experimentally confirmed. Beyond the hundreds of genes with already-known mutants, two centers are systematically producing gene knockouts, and RNAi can be used to inhibit any gene temporarily. Sequences of three Caenorhabditis species are already available, and two more are underway. Expression data have been collected for all the genes under many conditions and at many time points through development. To enhance the resolution of the expression data and to simplify phenotypic analysis of embryonic mutants, we are developing a system that will automatically trace the cell lineage and assign gene expression to precise cells with high temporal resolution.
The latest results with the system will be described. In the longer term, this and similar data sets should provide an understanding of how the genome specifies the form and behavior of the worm.

Uhlen, Department of Biotechnology, AlbaNova University Center, Royal Institute of Technology (KTH), Stockholm, Sweden. Here, we present a new protein atlas database (www.proteinatlas.org) showing the expression and localization of human proteins in normal and cancer tissues. The atlas is based on the use of antibodies (Agaton et al., 2003) to generate high-resolution immunohistochemistry images representing 48 normal tissues and 20 different cancer types (Uhlen and Ponten, 2005). Each antibody is used to generate more than 500 individual images, and each image has been annotated by a pathologist (Kampf et al., in press). The database has been created by the Swedish Human Proteome Resource (HPR), and the program has been set up to allow exploration of the human proteome with antibody-based proteomics (Nilsson et al., in press). The basic concept is to generate, in a systematic and high-throughput manner (Uhlen and Ponten, 2005), specific antibodies to all human proteins, and subsequently to use these for functional analysis of the corresponding proteins in a wide range of assay platforms, including (i) a protein atlas for tissue profiles (Kampf et al., in press), (ii) specific probes to evaluate the functional role of individual proteins, and (iii) affinity reagents for purification of the specific proteins and their associated complexes for structural and biochemical analyses.

ments, the most effective source of variation was perturbation of the growth medium, followed by perturbation of the growth rate. The effect of gene deletion on data variation was found to be less apparent when compared with the other perturbations.
A significant similarity in the variation of metabolome and mRNA data was observed, which may be used as the key point for integration of these two sets of data in functional analysis of genes. Projection to latent structures (partial least squares, PLS) is used for integration of transcriptome and metabolome data. Comparison of PCA and PLS shows that the linear model constructed via PLS to predict the metabolome data does not make use of all the variation in the transcriptome data. Thus, PLS allows the discrimination between the portion of gene expression change that affects the metabolome profile and the portion that is not directly effective on the metabolome. Both PCA and PLS can be used to detect the open reading frames (ORFs) which are the main sources of variation in transcriptome data and/or effective on the metabolome profile.

Extracellular metabolomics to accelerate the discovery of key genes involved in fibre degradation. Silas G. Villas-Bôas, Geoffrey Lane, Graeme Attwood, Adrian Cookson; AgResearch Limited, Grasslands Research Centre, Tennent Drive, Private Bag 11008, Palmerston North, New Zealand. E-mail: silas.villas-boas@agresearch.co.nz (S.G. Villas-Bôas). The genome of the hemicellulose-degrading microbe Clostridium proteoclasticum is being sequenced, and an array of candidate genes with diverse activity relevant to fibre degradation has been identified by automated gene annotation methods. C. proteoclasticum falls within the Butyrivibrio-Pseudobutyrivibrio assemblage of rumen bacteria, which are thought to play an important role in the degradation of the plant hemicellulose-lignin complexes that limit fibre degradation in the rumen. For New Zealand it makes strategic sense to invest in microbial genomics efforts applied to agriculture, where the country holds a strong competitive advantage and where ruminants constitute the vast majority of farmed animals.
In conjunction with DNA sequencing, proteomics, and transcriptomics (microarray analysis), we are using metabolomics as an additional functional genomics tool for gene discovery. We have established a footprinting approach for microbial metabolome analysis, focused mainly on metabolic intermediates of polysaccharide degradation, to provide quantitative information on the end products of fibre-degrading enzymes. A GC-MS method has been developed that is able to resolve complex biological mixtures containing mono-, di-, and oligosaccharides, in addition to a series of organic acids. We are currently phenotyping a series of C. proteoclasticum mutants to validate our analytical methodology, and we will fully characterize the fibrolytic ability of C. proteoclasticum for comparison with other fibre-degrading microbes. We believe that our metabolomics data will complement current proteomic analysis of fibre-degrading enzymes and microarray analysis of gene expression from a series of mutants by providing direct evidence of the metabolic function of key genes involved in fibre-degradation processes.

Many Gram-negative bacteria utilize cell-to-cell communication systems that rely on diffusible N-acyl homoserine lactone (AHL) signal molecules to monitor the size of the population, in a process known as quorum sensing (QS). In human pathogens this form of gene regulation ensures that the cells remain invisible to the immune system of the host until the pathogen has reached a critical population density sufficient to overwhelm host defenses and to establish the infection. The QS regulon of Pseudomonas aeruginosa and Burkholderia cepacia, two important pathogens for patients suffering from cystic fibrosis, has been studied by proteome analyses.
Comparative two-dimensional gel electrophoresis of pre-fractionated protein mixtures (extra-, surface-, and intracellular proteins), coupled to mass spectrometry analysis or N-terminal sequencing, has been employed to recognize and identify QS-controlled proteins. Our findings strongly support the importance of AHL-mediated cell-cell communication as a global regulatory system and suggest that QS control also operates via post-translational mechanisms. As QS has been proven to be a central regulator of the expression of pathogenic traits and biofilm formation in opportunistic human pathogens, it represents a highly attractive target for the development of novel anti-infective compounds. Functional genomics technologies (transcriptomics and proteomics) have been exploited to validate the target specificity of natural and synthetic QS inhibitors, which thus have great potential as alternative therapeutics for the treatment of bacterial infections.

Modeling cell cycle complex formation from high-throughput data sets. Lars Juhl Jensen, European Molecular Biology Laboratory, Meyerhofstrasse 1, 69117 Heidelberg, Germany. E-mail: jensen@embl.de. To analyze the dynamics of protein complexes during the mitotic cell cycle, we integrated data on protein interactions and gene expression. The resulting time-dependent interaction network for the first time places both periodically and constitutively expressed proteins in a temporal cell cycle context, thereby revealing novel components and modules. We discover that most complexes consist of both periodically and constitutively expressed subunits, suggesting that the former control complex activity by a mechanism of just-in-time assembly. Consistent with this, we show that additional regulation through targeted degradation and phosphorylation by Cdk1 (Cdc28p) specifically affects the periodically expressed proteins.
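The just-in-time assembly idea above can be sketched by annotating each interaction edge with the time points at which both partners are expressed. The gene names, profiles, and threshold below are invented for illustration, not taken from the study's data:

```python
# Sketch: a time-dependent interaction network keeps an edge active
# only when both partners are expressed at that time point.
# Toy expression profiles; real data would come from cell-cycle arrays.

# expression[gene] = expression values across four time points
expression = {
    "CDC28": [5, 5, 5, 5],   # constitutive subunit
    "CLN2":  [9, 1, 1, 8],   # periodic subunit (toy G1 peak)
    "CLB2":  [1, 1, 9, 1],   # periodic subunit (toy M peak)
}
interactions = [("CDC28", "CLN2"), ("CDC28", "CLB2")]
THRESHOLD = 3  # "expressed" cutoff, an arbitrary illustration

def active_times(a, b):
    """Time points at which the interaction a-b can form."""
    return [t for t, (xa, xb) in enumerate(zip(expression[a], expression[b]))
            if xa > THRESHOLD and xb > THRESHOLD]

network = {edge: active_times(*edge) for edge in interactions}
print(network)
# The constitutive kinase pairs with different periodic partners at
# different phases: just-in-time assembly.
```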
Alessandra Luchini, Andrea Callegaro, Silvio Bicciato; Department of Chemical Engineering Processes, University of Padova, Padova, Italy. E-mail: alessandra.luchini@unipd.it (A. Luchini). Since transcriptional control is the result of complex networks, analyzing dynamical states of gene expression is of paramount importance to detect the multivariate nature of biological mechanisms. Although hundreds of studies have fully demonstrated the relevance of microarrays in describing different physiological conditions, reconstructing complex interaction pathways requires analyzing the temporal evolution of transcriptional states. However, a robust experimental design for identifying differentially expressed genes over a temporal window would require large numbers of microarrays. Unfortunately, replicates for each time point and experimental condition are not always available, because of cost limitations and/or biological sample scarcity. In addition, common data analysis tools, like ANOVA, require replicates and disregard the correlation structure among time points. We present a method for the identification of differentially expressed genes in un-replicated time-course experiments. The procedure does not assume any model or distribution function, takes into account the correlation of the data, and does not require sample replicates at the various time points, other than the presence of an initial time point for all analyzed conditions. The identification of differentially expressed genes as the result of a system perturbation is formally stated as a hypothesis-testing problem in which a defined statistic is used to rank transcripts in order of evidence against the null hypothesis. Specifically, (i) the data are structured so that measurements are correlated in time within the same biological condition; (ii) the null hypothesis is formulated so that changes in expression levels at different time points are equivalent; (iii) time point t0 represents the system before the perturbation.
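A toy sketch of a ranking statistic built along the lines of (i)-(iii), with an empirical null from label permutations. The published procedure differs in detail; the statistic, the permutation scheme, and all data below are illustrative assumptions:

```python
# Sketch (toy illustration, not the published algorithm): rank genes in
# an un-replicated two-condition time course by a t0-corrected difference
# statistic, against an empirical null built from label permutations.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_times = 100, 5

control = rng.normal(size=(n_genes, n_times))
treated = rng.normal(size=(n_genes, n_times))
treated[:5, 1:] += 5.0   # first 5 genes truly modulated after t0

def statistic(a, b):
    """Sum over t>0 of |condition difference, corrected by t0 difference|."""
    diff = (b - a) - (b[:, [0]] - a[:, [0]])
    return np.abs(diff[:, 1:]).sum(axis=1)

observed = statistic(control, treated)

# Empirical null: swap condition labels independently per gene/time point.
n_perm = 200
null = np.empty((n_perm, n_genes))
for i in range(n_perm):
    swap = rng.random((n_genes, n_times)) < 0.5
    a = np.where(swap, treated, control)
    b = np.where(swap, control, treated)
    null[i] = statistic(a, b)

# Pooled one-sided empirical p-values.
p_values = (null.reshape(-1)[:, None] >= observed[None, :]).mean(axis=0)
print("median p, modulated genes:  ", np.median(p_values[:5]))
print("median p, unmodulated genes:", np.median(p_values[5:]))
```

In this toy setup the modulated genes should tend toward the top of the ranking; the actual method additionally converts such rankings into q-values.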
Therefore, modulated genes are detected by testing the statistical significance of expression differences between physiological states at each time point, corrected by the variability at t0, against an empirical null distribution constructed using permutations. Statistical significance is assessed by the q-value. The method has been tested on time-course microarray experiments aimed at studying the temporal changes of gene expression in (i) skeletal muscle cells treated with a histone deacetylase inhibitor (Iezzi et al., 2004) and (ii) immature mouse dendritic cells (DC) exposed to larval and egg stages of S. mansoni (Trottein et al., 2004). Differentially expressed genes identified using the proposed algorithm have been compared with results obtained from an ANOVA model and the SAM paired test. The biological significance and soundness of the selected transcripts were also verified using global functional profiling by means of Onto-Tools. The results demonstrate that this novel procedure allows the identification of biologically relevant genes using half of the replicates required by standard model-based approaches.

The calcium-dependent antibiotic (CDA) is a lipopeptide synthesised non-ribosomally and produced by Streptomyces coelicolor A3(2). CDA contains several non-proteinogenic amino acid residues. Hydroxyphenylglycine (4-HPG) is one of the unusual amino acids in the structure of the CDA and vancomycin groups of antibiotics. For the members of the vancomycin group of antibiotics, the 4-HPG residue plays crucial roles in the structure and function of the final glycopeptide antibiotic.
to reveal the putative biosynthetic pathway of this amino acid in cda, a standard "double crossover replacement strategy" was used to delete 4-hydroxymandelic acid synthase (4-hmas, encoded by hpd) from s. coelicolor mt1110 and 2377, using the delivery plasmid pzmh3. there was no cda production in the disrupted strains. plates containing a gradient of hydroxymandelic acid were used to restore cda production in both s. coelicolor mt1110 hpd and 2377 hpd. exogenous supply of 4-hydroxyl phenylglyoxylate and 4-hydroxyphenylglycine reestablished cda production by the hpd mutant. feeding analogs of these precursors to the mutant resulted in the directed biosynthesis of novel lipopeptides with modified arylglycine residues (mutasynthesis). a cxcl2 tandem repeat promoter polymorphism is associated with susceptibility to severe sepsis in the spanish population n. maca-meyer 1 , c. flores 1 , l. pérez-méndez 1 , r. sangüesa 2 , e. espinosa 2 , j. villar 1 : 1 research institute, hospital universitario n.s. de candelaria, s/c tenerife 38010, spain; 2 department of anesthesiology, hospital universitario n. s. de candelaria, s/c tenerife 38010, spain. e-mail: nmacame@ull.es (n. maca-meyer) sepsis describes a complex clinical syndrome resulting from a systemic inflammatory response to bacteria, and remains an important cause of mortality in the intensive care unit. cxcl2 chemokine (or mip-2) exhibits a pivotal role in the immune response, and several functional studies in animal models of sepsis have catalogued cxcl2 as a candidate gene for the development of sepsis. we have performed a case-control association study of cxcl2 gene variants and susceptibility to severe sepsis in 179 hospitalised patients and 364 healthy individuals. after the examination of linkage disequilibrium in the region, we analysed whether two promoter polymorphisms (snp rs3806792 and a newly described polymorphic short tandem repeat d4s3454) were associated with the syndrome. 
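For readers less familiar with the statistics reported in case-control studies like this one, an odds ratio and its 95% confidence interval can be computed from a 2x2 carrier-by-status table. The counts below are invented for illustration and are not the study's data:

```python
# Sketch: odds ratio with a 95% CI (Woolf's log-odds method) for a
# carrier-vs-non-carrier 2x2 table. Counts are made up for illustration.
import math

# rows: cases, controls; columns: carriers, non-carriers
cases_carrier, cases_noncarrier = 60, 119
ctrl_carrier, ctrl_noncarrier = 80, 284

odds_ratio = (cases_carrier * ctrl_noncarrier) / (cases_noncarrier * ctrl_carrier)

# Standard error of log(OR); CI is computed on the log scale.
se_log_or = math.sqrt(1 / cases_carrier + 1 / cases_noncarrier
                      + 1 / ctrl_carrier + 1 / ctrl_noncarrier)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> OR = 1.79 (95% CI 1.20-2.66)
```

A CI excluding 1.0, as in the study's reported intervals, indicates a statistically significant association at the 5% level.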
We found a significant association of common variants at D4S3454 with the development of severe sepsis (heterozygote carriers, OR 2.82, 95% CI 1.10-7.24; homozygote carriers, OR 3.65, 95% CI 1.41-9.43; Mantel-Haenszel χ2 test for linear trend, p = 0.0006). The risks remained significant even after a genomic-control adjustment based on 20 additional genotyped polymorphisms not linked to the candidate gene. These preliminary results suggest that CXCL2 gene variants may contribute to the development of severe sepsis.

Kasper Møller 1, Ana Paula Oliveira 1, Jens Nielsen 2, Mark Johnston 1: 1 Center for Microbial Biotechnology, BioCentrum, Technical University of Denmark, Denmark; 2 Department of Genetics, School of Medicine, Washington University, St. Louis, USA. Glucose is the preferred carbon and energy source for most cells. In Saccharomyces cerevisiae, a complex regulatory network ensures that S. cerevisiae ferments glucose to ethanol even in the presence of oxygen. To obtain a better understanding of this Crabtree effect and the logic of the glucose signalling network in S. cerevisiae, we are analyzing glucose sensing and signalling in the related species Saccharomyces kluyveri, which exhibits much less of a Crabtree effect (it prefers not to ferment glucose when oxygen is available). We show that there are only two major glucose transporters in S. kluyveri, and that these are regulated in response to the availability of glucose via a glucose sensor and a signalling pathway similar to the glucose induction (Rgt2/Snf3-Rgt1) pathway in S. cerevisiae. We have used DNA microarrays for S. kluyveri to find targets of the S. kluyveri glucose induction pathway, as well as to evaluate the global response to a change in environment from growth on ethanol to growth on glucose. This study identifies a number of differences in the regulation of glucose uptake and global responses to glucose between S. kluyveri and S.
cerevisiae, which may contribute to their different glucose metabolism.

Detection and analysis of microRNA using LNA probes. Nana Jacobsen, Christian Lomholt, Peter Mouritzen, Peter Stein Nielsen, Mikkel Noerholm; Exiqon A/S, Bygstubben 9, DK-2950 Vedbaek, Denmark. E-mail: mouritzen@exiqon.com (P. Mouritzen). MicroRNAs are a class of short endogenous RNAs that act as post-transcriptional modulators of gene expression. Growing evidence suggests that microRNAs exhibit a wide variety of regulatory functions and exert significant effects on cell growth, development, and differentiation. Recent studies have shown that human microRNA genes are frequently located in cancer-associated genomic regions, and perturbed microRNA expression patterns have been observed in many malignant tumors. We have exploited the significantly improved hybridization properties of LNA oligonucleotides against RNA targets to design LNA-modified DNA probes for the detection of different microRNAs in animals and plants by Northern blot analysis, microarray hybridization, and in situ hybridization. We will describe the results obtained from the detection and analysis of different microRNAs in C. elegans, zebrafish, mouse, and plants. In addition, we will describe a novel LNA-based method for expression profiling of mature microRNAs by quantitative RT-PCR.

Expression profile of the sty and paa genes in Pseudomonas sp. Y2 by means of DNA microarrays. David Bartolomé-Martín 1, David Juck 2, Ma Teresa del Peso-Santos 1, Charles W. Greer 2, Julián Perera 1: 1 Departamento de Bioquímica y Biología Molecular I, Facultad de Ciencias Biológicas, Universidad Complutense de Madrid, 28040 Madrid, Spain; 2 Environmental Microbiology Group, Biotechnology Research Institute, National Research Council Canada, Montréal, Que., Canada H4P 2R2. E-mail: perera@bio.ucm.es (J. Perera). DNA microarrays are a new and powerful tool to study gene expression in very diverse systems.
Environmental biotechnology and biodegradation are among the fields of research where this technology may be very promising. Pseudomonas sp. Y2 is a bacterium able to grow in minimal medium plus either styrene (sty) or phenylacetic acid (paa) as the sole carbon and energy source. This bacterium is the only organism in which the genes coding for both the upper (sty genes) and the lower (paa genes) catabolic pathways of styrene degradation have been described to date. It is unique in having two active copies of the genes encoding the lower pathway (the paa1 and paa2 gene clusters). We have designed a DNA microarray with the sty and paa genes in order to analyse their expression in the wild-type Pseudomonas sp. Y2, in P. sp. Y2 T2 (a paa2 deletion mutant), and in P. sp. Y2 C1 (a crc gene mutant). This analysis has been performed on bacterial cultures grown in media with different carbon sources. Interesting data on the expression profile of the sty and paa genes in Pseudomonas sp. Y2 have been obtained, which have raised new questions about styrene and paa degradation by this bacterium.

Dynamics in induced repression of the phosphomannose isomerase PMI40 gene of Saccharomyces cerevisiae. Anssi Törmä 1,2, Juha-Pekka Pitkänen 1,2, Laura Huopaniemi 2, Risto Renkonen 2: 1 Medicel Ltd., Haartmaninkatu 8, 00290 Helsinki, Finland; 2 Rational Drug Design Program, Department of Bacteriology and Immunology, Haartman Institute and Biomedicum, University of Helsinki, P.O. Box 63, 00014 Helsinki, Finland. E-mail: juhapekka.pitkanen@medicel.com (J.-P. Pitkänen). GDP-mannose is the precursor of cell wall biosynthesis in S. cerevisiae. To understand the system-level role of GDP-mannose, we studied a conditional knock-out strain of the key enzyme in its synthesis, PMI40. The experimental procedure allowed us to study the order of the mechanisms the cells launch in order to adjust to a sudden malfunction in the metabolic machinery.
We collected 100 samples from continuous cultivations over 80 h and measured genome-wide gene expression levels, 10 enzyme activities, and the concentrations of 30 intracellular metabolites. For sampling we built a sample robot, which automatically takes and preserves the samples. In order to carry out experimentation of this magnitude and handle the generated data, we constructed a proprietary software platform covering all phases, from project management in the wet lab to workflow and pathway management in silico. After normalization and clustering, significantly changed genes and metabolites were searched for enrichment in biological processes and molecular complexes. Further, gene expression levels, metabolite concentrations, and enzyme activities were searched against each other for causality over time. Overall, we focused on thorough analysis of our own data and known database data in order to reward our efforts with knowledge. At the transcriptome level, repression of PMI40 led to two major types of activation profiles, one peaking at the time when Pmi40p activity and GDP-mannose were depleted and the other later, during recovery from the perturbation. The primary response was most enriched with genes known to play roles in mating and filamentous growth and associated with the transcription factors Ste12p, Tec1p, Dig1p, and Mcm1p, whereas the secondary response consisted of genes involved in carbon metabolism and associated with the general stress response regulators Msn2p and Msn4p. Skn7p, a high-level transcription factor, was associated with both the primary and the secondary response, consistent with its suggested role of coordinating environmental responses and developmental processes.

Transcriptome of pig ovarian cells: discriminant genes involved in follicular development. Bonnet A., Le Cao K.A., Low-So G., San Cristobal M., Tosser-Klopp G., Hatey F.
Laboratoire de Génétique Cellulaire, Centre INRA de Toulouse, Castanet-Tolosan 31326, France. In order to identify genes and gene networks involved in pig ovarian follicular development, we built suppression subtractive hybridization (SSH) libraries from granulosa cells of healthy follicles (small, medium, or large). The RNA isolated from these cells was used to hybridize cDNA nylon microarrays. Data analysis using a Gaussian linear mixed model showed that 83% of the variability is due to the genes. Two hundred and fifty-one regulated genes (of the 956 expressed) were identified and clustered into three groups according to follicle size. Moreover, we found previously identified genes such as aromatase and IGFBP2, which supported the validity of our experimental model. Random forest analysis identified the 11 genes most discriminant between the three follicle classes. This study highlighted gene sets such as those involved in cell modeling, regulation of transcription, and apoptosis during follicle growth. The next step will be to describe more precisely the spatio-temporal expression patterns, at the mRNA level, of the genes identified by these experiments.

Microalgae constitute a significant source of valuable natural products, e.g. sulfated polysaccharides, polyunsaturated fatty acids, and phycobiliproteins, that find applications in a wide range of industries, including food, pharmaceutical, agricultural, and cosmetics. However, genomic and molecular genetic studies of microalgae lag far behind those of higher plants. In order to accelerate red microalgal genomic studies by taking advantage of current genomics and post-genomic technologies, we have generated expressed sequence tag (EST) databases of two red microalgae, Porphyridium sp. and Dixoniella grisea, grown under various physiological conditions. To date we have sequenced 7210 and 6231 ESTs of Porphyridium sp. and D. grisea, respectively. The sequence assembly resulted in ca.
2000 non-redundant unigenes for each microalga, only 40% of which were identified by similarity to sequences in the public databases. Porphyridium sp. and D. grisea unigenes were compared with the whole-genome predicted proteomes of three microalgae and those of representative eukaryotic and prokaryotic organisms. Both microalgae show the highest similarity to the red microalga Cyanidioschyzon merolae. The order of sequence similarity to the other organisms examined was Arabidopsis thaliana, Oryza sativa, Chlamydomonas reinhardtii, Thalassiosira pseudonana (diatom), Saccharomyces cerevisiae, Caenorhabditis elegans, Archaea, and cyanobacteria. Although red microalgae are considered a phylogenetic bridge between prokaryotes and eukaryotes, our data show that the red microalgae have strong similarity to eukaryotes and only distant similarity to prokaryotes. Gene expression profiles were studied by analyzing cDNA and subtraction libraries constructed from algae grown under various physiological conditions. We observed that the three most abundant ESTs in the stationary phase of Porphyridium sp. were ADP-ribosylation factor-like 1, flavohemoglobin, and ADP-ribosylation factor 1. In addition, we have identified several genes which were specific to nitrate and sulfate starvation.

The sarco(endo)plasmic reticulum Ca2+-ATPase (SERCA, "the calcium pump") is responsible for pumping the Ca2+ released into the cytoplasm during muscle contraction back into the sarcoplasmic reticulum store, while protons are pumped the opposite way as counter-cations. These transport processes go against the concentration gradients and are therefore energy-consuming. The energy is derived from ATP hydrolysis via the formation and break-down of a phospho-enzyme intermediate. Over the last year a number of new crystal structures have been published which have added to our understanding of how this task is accomplished, providing an impressive insight into the mechanism of a molecular pump.
Rhomboids are a family of intramembrane serine proteases that are widely conserved throughout evolution. Among the diverse functions discovered so far, rhomboids participate in intercellular signalling, parasite invasion, membrane dynamics, and bacterial quorum sensing, making them potentially valuable therapeutic targets. The identification of physiological substrates and of selective inhibitors will be key towards their evaluation as drug targets. We have developed an in vitro cleavage assay to monitor rhomboid activity in the detergent-solubilised state, enabling the first isolation of a highly pure rhomboid with catalytic activity. Analysis of purified mutant proteins suggests that rhomboids use a serine protease catalytic dyad instead of the previously proposed triad, and gives insights into subsidiary functions like ligand binding and water supply. This work was supported principally by EMBO and the MRC.

Structure and target-specificity of thioredoxin h. Kenji Maeda 1, Anette Henriksen 2, Per Hägglund 1, Christine Finnie 1, Birte Svensson 1: 1 Biochemistry and Nutrition Group, BioCentrum-DTU, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark; 2 Biostructure Group, Carlsberg Laboratory, DK-2500 Valby, Denmark. E-mail: kenji@biocentrum.dtu.dk (K. Maeda). Thioredoxins are ubiquitous small proteins with protein disulphide reductase activity. Thioredoxins can alter the structures and activities of various target proteins by reducing their disulphide bonds. Seeds of several plants are abundant in cytosolic thioredoxins, referred to as h-type. In barley, two thioredoxin h isoforms, HvTrxh1 and HvTrxh2, which share 51% sequence identity but differ in temporal and spatial distribution, were previously identified and characterised. In the present study, the relationship between the structures and target-specificities of h-type thioredoxins is analysed.
The 3D structures of HvTrxh1 and HvTrxh2 were determined by X-ray crystallography as the first crystal structures of thioredoxin h. Comparison of the structures shows that the majority of solvent-exposed residues near the active sites are conserved between the two isoforms. This is in agreement with the previously observed similarity in target-specificity of the two isoforms. Thioredoxins from organisms distantly related to barley, such as E. coli, have highly similar folds but different surface charge distributions compared to barley thioredoxins. A comparison of the target-specificities of HvTrxh1, HvTrxh2, E. coli thioredoxin, and several thioredoxin mutants will be attempted, to reveal the structural features that influence the specificity of barley thioredoxin h isoforms.

Enbrel is a dimeric fusion protein consisting of the extracellular ligand-binding portion of the human 75 kDa (p75) tumor necrosis factor receptor (TNFR) linked to the Fc portion of human IgG1. The CHO-expressed molecule contains both N- and O-linked oligosaccharides, with a total carbohydrate content of 20% by mass. The O-linked oligosaccharides were released by hydrazinolysis and their structures determined by exoglycosidase sequencing and MALDI-TOF mass spectrometry. To locate the O-linked sites precisely, the glycosylation heterogeneity of TNFR:Fc was simplified by treatment with N-acetylneuraminidase and N-glycanase. The remaining molecule, which carries only core O-linked glycan structures, was cleaved by trypsin and analyzed by LC-MS. Precise localization of the O-glycosylation sites was determined based on the concept of a specific modification of O-glycosylated serine into 2-aminopropenoic acid and O-glycosylated threonine into 2-amino-2-butenoic acid. The deficit in mass resulting from this transformation was the marker used to localize the modified residues on the peptides by tandem mass spectrometry sequencing (MS-MS).
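The mass deficit used as the localization marker can be checked with a short calculation: converting Ser to 2-aminopropenoic acid (dehydroalanine) or Thr to 2-amino-2-butenoic acid (dehydrobutyrine) corresponds to the loss of one water from the residue mass. A small sketch using standard monoisotopic residue masses:

```python
# Monoisotopic amino-acid residue masses (Da), standard values.
SER = 87.03203   # serine residue
THR = 101.04768  # threonine residue
H2O = 18.01056   # water

# The converted residues lack one water relative to Ser/Thr:
dehydroalanine = SER - H2O    # 2-aminopropenoic acid
dehydrobutyrine = THR - H2O   # 2-amino-2-butenoic acid

# The marker used to localize modified residues in MS-MS is exactly
# this ~ -18.011 Da shift relative to unmodified Ser/Thr.
print(f"Ser -> Dha: {dehydroalanine:.5f} Da (shift {dehydroalanine - SER:+.5f})")
print(f"Thr -> Dhb: {dehydrobutyrine:.5f} Da (shift {dehydrobutyrine - THR:+.5f})")
```

Any fragment ion series spanning a converted residue therefore shows this fixed deficit, pinpointing the glycosylation site.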
MS-MS spectra of Enbrel glycopeptides were interpreted based on the presence of 2-aminopropenoic acid and 2-amino-2-butenoic acid, resulting in a complete map of O-linked glycans precisely located at 10 different sites.

Anu Mursula, Beatrix Fahnert, Sari Krapu, Eija-Riitta Hämäläinen, Ritva Isomäki, Peter Neubauer; Bioprocess Engineering Laboratory and Biocenter Oulu, University of Oulu, Oulu, Finland. E-mail: anu.mursula@oulu.fi (A. Mursula). Wnt proteins form a highly conserved family of secreted glycoproteins important in cell-cell signaling events during embryogenesis and adult tissue maintenance. Impairments within this complex signaling pathway can lead, for example, to developmental defects in embryos, degenerative diseases, and cancer. Accordingly, Wnt proteins can be used as tools in basic research concerning Wnt function, developmental biology, and screening for interacting compounds, and for medical applications (e.g. therapeutics, stem cells). Hence, recombinant Wnts provide a valuable basis for these purposes. However, production of recombinant Wnt proteins is challenging, because they contain multiple disulfide bonds, making the folding very difficult. A process for the production of murine Wnt-1 in E. coli has been developed in our laboratory. The knowledge obtained from this research has also been applied to the expression of other Wnts, namely Wnt-4 and Wnt-6, and can be used to approach other cysteine-rich proteins as well. Since the expression level of the Wnt proteins is so far rather low, tools for monitoring and optimizing the production process have been established. By means of this sandwich hybridization method, the level of target (Wnt) mRNA can be measured. The technique has already been applied to analyzing Wnt-1 mRNA levels; probes for Wnt-4 and Wnt-6 have also been generated. Thus, transcription of Wnt genes in all kinds of cells (e.g. tissues, recombinant hosts), as well as the kinetics of transcription, can be studied using these tools.
In growth factor signaling, stimulation of cell-surface receptors first triggers activation of the receptor itself and then of a large number of intracellular effector molecules. The stimulus is integrated with a host of other cellular processes, leading to cytoskeletal changes, activating transcriptional programs in the nucleus, and ultimately resulting in cell proliferation, differentiation, or motility. Classical signaling pathways and networks depict potential protein-protein interactions only in a static form. In the cell, these interactions are dynamic and occur in an ordered fashion. Here, we apply a mass spectrometric method that converts temporal changes to differences in peptide isotopic abundance in order to study the global dynamics of signaling events. Briefly, three cell populations are metabolically labeled with either normal arginine, a 13C-substituted form, or a 13C15N variant (stable isotope labeling by amino acids in cell culture, SILAC). Each population was then stimulated with EGF for a different time period, and tyrosine-phosphorylated proteins were affinity-purified with anti-phosphotyrosine antibodies. The proteins from the precipitated complexes were quantitatively analyzed and identified using LC-MS/MS. Arginine-containing peptides occurred in three forms, directly indicating protein activation at the corresponding time point. Combining two experiments sharing one common time point of activation then generated five-point dynamic profiles. From the 202 proteins quantified, we identified 81 signaling proteins, including virtually all known EGFR substrates and 31 novel effectors, and the time course of their activation upon EGF stimulation. Discriminating proteins involved in the signaling network from unspecific binders was straightforward, as the former presented an activation profile.
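The stitching of two triple-label experiments into a five-point profile works because the shared time point lets one experiment's ratios be rescaled onto the other's. A toy sketch of that bookkeeping (the time points and ratio values below are invented, not data from the study):

```python
# Sketch: combine two triple-SILAC experiments sharing one time point
# into a single five-point activation profile for one protein.
# Each experiment reports ratios relative to its own reference channel.

exp_a = {0: 1.0, 1: 3.0, 5: 6.0}     # ratios vs. the t=0 channel
exp_b = {5: 1.0, 10: 0.8, 20: 0.4}   # ratios vs. the t=5 channel

shared_t = 5
# Rescale experiment B onto experiment A's scale via the shared point.
scale = exp_a[shared_t] / exp_b[shared_t]

profile = dict(exp_a)
for t, ratio in exp_b.items():
    profile.setdefault(t, ratio * scale)  # keep A's value at the shared point

print(sorted(profile.items()))
# e.g. the t=10 point becomes 0.8 * 6.0 relative to t=0
```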
We have now further extended this study by directly measuring in vivo phosphorylation sites in response to growth factor stimulation and monitoring the time evolution of the phosphorylation events. Finally, we determined and quantitatively compared the global EGF and PDGF tyrosine phosphoproteomes in human mesenchymal stem cells and revealed a control point in their differentiation into bone-forming cells. Such global activation profiles provide a novel perspective on cell signaling and will be crucial for modeling the highly dynamic signaling networks in a systems biology approach.

Klaus Schneider 1, Dave G. Smith 2, Steven Skaper 2, Alastair D. Reith 1: 1 Discovery Research, GlaxoSmithKline, Coldharbour Road, Harlow, Essex, UK; 2 Neurology & Gastrointestinal CEDD, GlaxoSmithKline, Coldharbour Road, Harlow, Essex, UK. Over the last 15 years, progress in signal transduction research has revealed an astonishing degree of complexity in cell signalling, which is manifested in positive and negative regulation and feedback loops within signalling pathways and in cross-talk between pathways, all of which are highly cell-type dependent. It has become evident that protein phosphorylation by protein kinases plays a major role in this complex regulation of cell signalling (Hunter, 2000). Given the importance of signal transduction in disease processes, many protein kinases may constitute key targets for disease intervention. Yet the lack of a full understanding of the regulation, the activation, and, importantly, the downstream substrates of particular protein kinases often requires more detailed studies before initiating resource-intensive efforts to find disease-modifying molecules. Technologies for the study of protein kinase signalling include 32P labelling, mutational and knock-out studies, and more recently RNA interference. These tools are complemented by approaches based on proteomic technologies developed over the course of the last 10 years.
in this presentation, the scope of proteomics technologies to contribute to an understanding of kinase signalling will be discussed. an overview of available technologies will be given and results will be presented from a proteomic study of glycogen synthase kinase 3 (gsk3) signalling (coghlan et al., 2000). novel findings will be presented from a study of gsk3 inhibition in a primary neuronal cell line by differential 2d gel electrophoresis, which resulted in the identification of more than 40 proteins that were significantly regulated. proteome analysis is typically done by nanospray lc/ms in order to achieve higher sensitivity and thus a greater number of protein identifications. however, nano-scale lc systems can be more challenging to use and maintain. to obtain the best chromatographic performance, connections must be made carefully to minimize band broadening. improved chromatographic performance can enhance the mass spectrometric results by tandem ms, as a greater number of peptides can be detected. a microfluidic chip-based system has been developed (yin et al., 2004) that minimizes the number of connections and the delay volumes. this work evaluates the performance of this device against the traditional nanospray approach. a yeast extract sample was separated by sds-page and bands were excised from the gel for further analysis. after in-gel digestion, the sample was analyzed by both traditional nanospray and the microfluidic-based chip device. after protein database searching, the identified proteins and the protein sequence coverages were compared for the two approaches. the microfluidic device was demonstrated to be equivalent or better compared to the traditional approach. yin, h., killeen, k., brennen, r., et al., 2004. anal. chem. 77, 527-533. plant cytochromes p450 (p450s) play key roles in the biosynthesis of most bioactive compounds with agronomic and therapeutic applications. a collection of about 120 plant p450s was expressed in yeast. 
the cdnas were isolated from the model plant arabidopsis thaliana, which has a sequenced genome, and some others from wheat, helianthus tuberosus or vicia sativa. they were expressed under the control of a galactose-inducible promoter in an engineered strain of saccharomyces cerevisiae in which the gene of the native p450 reductase was replaced with the gene of a p450 reductase from a. thaliana under the control of the same galactose-inducible promoter, in order to provide an optimal context for plant p450 expression and activity (pompon et al., 1996). an original procedure was designed for the high-throughput functional screening of this enzyme collection. it is based on the detection of oxygen consumed during the catalytic reaction by a fluorochrome embedded in the bottom of the microwell plates. this method was validated using several recombinant p450s of known activity. it also allows for a very efficient screening for enzyme inhibitors. the advantages and limits of the method will be discussed. this work was carried out with the support of the génoplante programme (no 2001004). reference: pompon, d., louerat, b., bronine, a., urban, p., 1996. methods enzymol. 272, 51-64. folding of a bacterial membrane protein studied by protein engineering. daniel e. otzen, pankaj sehgal, peter a. christensen, department of life sciences, aalborg university, sohngaardsholmsvej 49, dk-9000 aalborg. e-mail: dao@bio.aau.dk (d.e. otzen). we have carried out a kinetic analysis of the folding of the 4-helix transmembrane protein dsbb in a mixed micelle system consisting of varying molar ratios of sodium dodecyl sulfate and dodecyl maltoside. this analysis incorporates both folding and unfolding rates, making it possible to determine both the stability of the native state and the process by which the protein folds. the analysis also takes into account the composition of the mixed micelles, which is different from the bulk detergent composition. 
refolding and unfolding are consistent with a three-state folding scheme involving the sds-denatured state, the native state and an unfolding intermediate that accumulates only under unfolding conditions at high mole fractions of sds. the temperature-dependence of the folding reaction displays an unusual decrease in heat capacity accompanying unfolding, which probably reflects the amphiphilic environment of the membrane protein. destabilization of dsbb by different short-chain alcohols correlates very well with the alcohols' respective hydrophobicities. data from a series of ala-scanning mutants tentatively identify a nucleus for folding, which is relatively diffuse and involves all four helices. we are currently complementing this work with studies of the association of peptides corresponding to individual transmembrane segments of dsbb. the reca protein of e. coli plays a crucial role in homologous recombination and dna repair. the recombination process takes place in a filamentous complex, in which the protein monomers are arranged in a helical manner around a single-stranded dna (ss-dna). in the presence of atp the filament can accommodate a second, double-stranded dna (ds-dna) and the strand exchange reaction can occur. the three-dimensional structure of reca itself and its complex with adp have been determined by x-ray crystallography. the active nucleoprotein filament, however, has only been studied at low resolution. both electron microscopy (em) and small-angle neutron scattering (sans) indicate significant differences between the structures of the active nucleoprotein filament and the compressed, inactive filament of reca alone. 
we have presented a structural model of the reca protein in its active filament with ss-dna, using data obtained by linear dichroism (ld) polarized-light spectroscopy, based on a technique we call "site-specific linear dichroism", which allows the orientation of a set of amino acids to be determined from ld data by systematic modification of the protein. here, we show that ld data of the nucleoprotein filament with ds-dna are overall similar to the data of the complex with ss-dna, indicating that the orientation as well as the internal structure of reca in the active filament is not significantly altered when the bound dna is changed from single-stranded to double-stranded. this result supports the idea that the strand exchange reaction occurs without large conformational change of the reca protein. the choline-binding modules: a powerful biotechnological tool. jesús m. sanz, instituto de biología molecular y celular, universidad miguel hernández, elche, spain. choline-binding modules (chbms) are present in some virulence factors of streptococcus pneumoniae (pneumococcus). the most extensively studied chbm is c-lyta, the carboxy-terminal domain of the pneumococcal cell-wall amidase lyta. the three-dimensional structure of choline-ligated c-lyta is built up from six loop-hairpin structures ("choline binding repeats", chbrs) forming a left-handed β-solenoid with four choline binding sites. although the structure of the ligand-free form is not yet known, our folding studies suggest that it is more loosely packed, with a partially unfolded amino-terminal region and a stable carboxy-terminal moiety that is extremely resistant to chemical denaturation (maestro and sanz, 2005). the affinity of c-lyta for choline and other structural analogues allows its use as an efficient affinity tag for overexpression, immobilization and single-step purification of proteins of biomedical interest (c-lytag fusion protein purification system). 
this system presents many advantages when compared to current commercial methods, namely simplicity, compatibility with buffers and robustness. the availability of multiple supports that specifically bind chbms (such as multiwell plates) has recently allowed the development of a new procedure for the immobilization of c-lyta-containing hybrid proteins that may be used in proteomics, diagnostics and peptide display. in this communication, we present our latest results on the stability, folding and engineering of c-lyta, together with a compendium of the current biotechnological potential of this protein, and highlight the productive link between basic molecular studies and their application. many of the modern approaches for studying disease compare steady-state functions, such as repair, growth, and regulated gene expression, within the various biological compartments organised by specialized function, be it mitochondria or blood vessels. the assignment of protein identities that are linked to key biological mechanisms associated with disease processes and disease progression is an important area of this work (marko-varga and fehniger, 2004). today, the technology available for studying proteome expression and resolving exact protein and peptide identities in complex mixtures of biological samples allows global protein expression within cells, fluids, and tissue to be approached with confidence. this confidence is due in part to reproducible repetitive sampling and analysis technologies, including robotic data acquisition and high-level mass spectrometry with both laser-desorption and electrospray ionisation. the precision in defining differences between normal and diseased steady states is aided by the creation of compiled reference and master data sets and by new methods for multiplexing the analysis of samples in groups. 
the establishment of key representative reference proteome systems representing the dynamic changes in protein expression during disease will be vital to the interpretation of changes observed in specific samplings of disease states and specific cells obtained from these samples. the creation of reference databases of proteins linked to disease pathways will play an important role in furthering our understanding of the "proteome of disease". examples will be given where protein expression patterns have been generated from compartments within tissue sections. marko-varga, g., fehniger, t.e., 2004. j. proteome res. 3, 167178. 2 adaptation of the saccharomyces cerevisiae proteome to nutrient limitations studied by metabolic stable isotope labeling and mass spectrometry albert j.r. heck netherlands proteomics centre and utrecht university, the netherlands. e-mail: heck@npc.genomics.nl. url: www.netherlandsproteomicscentre.nl one of the major aims of proteomics is to provide quantitative data on differential protein expression levels. recently, mass spectrometry-based methods have been introduced that can provide quantitative data on differential protein expression, mostly using stable isotope labeling (goshe and smith, 2003) . we opted for metabolic labeling as this provides efficient means to quantify differential protein expression, and has the advantage that all proteins are labeled universally (romijn et al., 2003) . in their natural habitat microorganisms encounter non-optimal growth conditions and often growth is limited by one nutrient. microorganisms need to respond rapidly to changes in the environment in order to survive. in the present study, we investigate the proteome response of chemostat cultivated wildtype saccharomyces cerevisiae to two different nutrient limitations, namely carbon and nitrogen limitation. yeast was metabolically labeled in well-controlled chemostat cultures. 
14n- and 15n-labeled proteins were separated using 1d gel electrophoresis followed by rp-lc-esi-ms on an lc-q. relative quantification was performed using the relex software (maccoss et al., 2003). we quantified 759 proteins, using on average 8 peptide peak pairs per protein. this analysis revealed that 419 proteins showed a significant increase/decrease in expression level. the functional annotation of these proteins revealed that the yeast cells change expression levels of enzymes involved in metabolism of the growth-limiting compound. the protein expression ratios were compared with corresponding transcript levels. moreover, we compared the accuracy of quantification. profiles mainly reflected differences in cellular origins in addition to different functional roles. mass spectrometric analysis identified 82 proteins pertaining to several functional classes, i.e. acute phase proteins, antioxidant proteins and proteins involved in protein synthesis/maturation/degradation, cytoskeletal (re)organization and lipid metabolism. several proteins not previously implicated in nerve regeneration were identified, e.g. translationally-controlled tumor protein, annexin a9/31, vitamin d-binding protein, α-crystallin b, α-synuclein, dimethylargininases and reticulocalbin. real-time pcr analysis of selected genes showed which genes were expressed in the nerve versus the dorsal root ganglion neurons. in conclusion, this study highlights the complexity and temporal aspect of the molecular process underlying nerve regeneration and points to the importance of glial and inflammatory determinants. yeast plasma membrane macromolecular components involved in stress resistance. paola branduardi, paola paganoni, danilo porro, dipartimento di biotecnologie e bioscienze, università degli studi di milano-bicocca, piazza della scienza, 2-20126 milano, italy. e-mail: paola.branduardi@unimib.it (p. 
branduardi). the plasma membrane is a universal structure of living cells, constituting an essential barrier dividing and defining the intracellular from the extracellular environment. it is consequently easy to deduce the crucial role played by this structure for any cell of any living organism, and especially for unicellular organisms, since all the information deriving from the external environment as well as many of the consequent cellular responses have to pass through this barrier. unicellular organisms, thanks to easy manipulation and cultivation techniques, can represent a very useful model for studying plasma membrane function and response. in addition, microorganisms, and among them yeasts, can be considered advantageous cell factories for recombinant production. in this context, the implementation of any production process has to take into account, among others, the response and the tolerance of the host to the external environment. from these considerations derives the interest of our group in analysing the main macromolecular components of yeast plasma membranes (proteins, lipoproteins and lipids), isolated from cells grown under different stress conditions, with particular attention to acidic environments. here, we present our recent data on the separation and identification (by sequencing analyses) of lipoproteins isolated from the conventional yeast saccharomyces cerevisiae as well as from the non-conventional and acid-tolerant yeast zygosaccharomyces bailii cell cultures grown in different conditions. in parallel, the protein fraction is under evaluation through a differential 2d proteomic approach and consequent analyses. 
effect of fungal polysaccharides on the expression of pancreatic proteins in streptozotocin-induced diabetic rats. sang woo kim 1 , hye jin hwang 1 , kwang bon koo 2 , jang won choi 2 , jong won yun 1* : 1 department of biotechnology; 2 department of bioindustry, daegu university, kyungsan. in an attempt to search for novel biomarkers for monitoring diabetes prognosis, we examined the influence of hypoglycemic fungal extracellular polysaccharides (eps) on the differential expression of pancreatic proteins in streptozotocin-induced diabetic rats. the results of the diabetic study revealed that orally administered eps exhibited an excellent hypoglycemic effect, lowering the average plasma glucose level of the diabetic rats to 52.3%. the pancreatic proteome was analyzed by a 2-de system, which separated more than 2000 individual spots. the 2-de analysis demonstrated that thirty-four proteins from a total of about 500 matched spots were differentially expressed, of which 26 spots were identified as proteins whose expression has previously been associated with diabetes. twenty-two overexpressed and twelve underexpressed proteins differed significantly (p < 0.05) between the healthy and diabetic rats, and the altered proteins were restored (p < 0.05) upon eps treatment. it was first found that carbonyl reductase (18.6-fold, p < 0.001) and mawdbp (31.4-fold, p < 0.01) were surprisingly upregulated upon diabetes induction, and that these two protein concentrations were completely restored by eps treatment. moreover, we obtained eight unidentified proteins that have not been reported to be related to diabetes mellitus. these results demonstrate the value of fungal eps in the search for potential markers for the diagnosis and therapeutic manipulation of diabetes mellitus. 
although the molecular basis of protein modulation after eps administration in diabetic rats was not verified in this study, the results of the proteomic analysis provide impetus for further studies mining biomarkers for diabetic therapy. the model established in our experiment is expected to mimic the human diabetic status, which will help us to interpret the roles of biomarkers in the diabetic state. the use of polyol-responsive monoclonal antibodies in immunoaffinity chromatography and as a probe for unfolding of wild-type and altered (t103i) amidase from pseudomonas aeruginosa. s. martins 1 , j. andrade 1 , a. karmali 1 , a.i. custódio 2 , m.l. since immunoaffinity chromatography is a powerful protein purification technique of interest in proteomics, monoclonal antibodies (mabs) against mutant (t103i) amidase from p. aeruginosa were raised by hybridoma technology. in order to identify mabs that bind t103i amidase tightly but release it under gentle conditions, hybridoma clones secreting polyol-responsive mabs (pr-mabs) were screened. nearly 10% of elisa-positive hybridomas produced clones secreting pr-mabs with potential application as ligands for immunoaffinity chromatography. to select the optimal conditions for amidase elution, an elisa-elution assay was carried out with two of these clones (f6g7 and e2a6). the dissociation of the ag-ab complex required 10% propylene glycol and either 0.25 m (nh4)2so4 or 0.25 m nacl. the binding of the purified mab of igm class (e2a6) to wild-type and mutant amidases was investigated by direct elisa, which revealed that it specifically recognised a common epitope on both amidases. conformational changes in the antigen molecule were studied. mab e2a6 showed a higher affinity for heat-denatured forms than for native forms, as revealed by affinity constants, suggesting that the mab recognizes a cryptic epitope. the effect of mab e2a6 on amidase activity was also investigated. 
the binding of the mab to wild-type and mutant amidases exhibited an inhibition and an activation of 60% as a function of time, respectively. this pr-mab is useful as a probe to detect conformational changes in native and denatured amidases as well as a ligand in immunoaffinity chromatography, which is of great interest in protein purification and proteomics. fragility and solubility of non-classical inclusion bodies. š. peternel 1 , a. ristič 1,2 , v. gaberc-porekar 1 , v. menart 1,2 : 1 national institute of chemistry, ljubljana, si-1000; 2 lek pharmaceuticals d.d., ljubljana, si-1000. e-mail: spela.peternel@ki.si (š. peternel). human granulocyte colony stimulating factor (g-csf) is a pharmaceutically important cytokine. when overexpressed in escherichia coli, it usually accumulates in the form of inclusion bodies (ibs). when produced at 42 °c, classical insoluble ibs are formed, while at 25 °c non-classical ibs containing a high amount of correctly folded g-csf are formed. as higher fragility and solubility of non-classical ibs were noticed, we decided to check whether the bacterial cell disruption method has any influence on their mechanical stability and solubility. enzymatic lysis, sonication and homogenization, methods often used for disruption of bacterial cells during the isolation of ibs, were compared. lysozyme treatment of bacterial cells appears to be a disruption method mild enough not to influence the integrity of ibs. homogenization of bacterial cells at high pressure (100,000-120,000 kpa) shows no impact on classical ibs, while some loss of target protein from the non-classical ibs is observed. sonication seems to be the most harmful: even at rather low sonication amplitudes, noticeable disassembly and solubilization of non-classical ibs occurs, while no effect on classical ibs is perceived. our studies show that non-classical ibs are much more fragile and soluble than classical ones. 
therefore, the method for cell disruption should be chosen extremely carefully to avoid undesirable loss of the target protein. the danish tick ixodes ricinus parasitizes three different hosts, both mammals and birds, during its 3-year life cycle. the aim of this study was to identify the last blood host, i.e. the host which the nymph had parasitized before molting to the adult instar. the reason for the study was to reveal the origin of the host contributing the most to the life cycle of the tick and thereby to the maintenance of tick-borne diseases in denmark. the most common tick-borne diseases are lyme borreliosis and tick-borne encephalitis (tbe), causing illness in both animals and humans. we analyzed adult ticks that were collected from known hosts. the analysis was performed on different heat-stable proteins, which could be detected during the off-host period by elisa. we found that heat-stable proteins could be used as identification markers for host recognition. mushroom polysaccharides alter the expression of diabetes-associated proteins in the liver of streptozotocin-induced diabetic rats. hye-jin hwang 1 , sang-woo kim 1 , kwang-bon koo 2 , jang-won choi 2 , jong-won yun 1* : 1 department of biotechnology, daegu university, kyungsan, kyungbuk 712-714, korea; 2 department of bioindustry, daegu university, kyungsan, kyungbuk 712-714, korea. e-mail: jwyun@daegu.ac.kr (j.-w. yun). in the present study, we investigated the influence of hypoglycemic fungal extracellular polysaccharides (eps) on the differential expression of liver proteins in streptozotocin (stz)-induced diabetic rats. the results of the diabetic study revealed that orally administered eps exhibited an excellent hypoglycemic effect, lowering the average plasma glucose level in eps-fed rats to 52.3%. in the next step, we analyzed the differential expression patterns of rat liver proteins from each group to discover potent candidates for diabetes-associated proteins. 
a total of 69 proteins on the 2-de gel were expressed differentially between diabetic and healthy rats. among them, 34 proteins were upregulated and 35 proteins were downregulated upon diabetes induction. many of these changes were in accordance with observations in previously published studies. surprisingly, the altered levels of most proteins in the diabetic group were fully or partially restored to those of the non-diabetic control group by eps treatment. moreover, we obtained 13 unidentified proteins that have not been reported to be related to diabetes mellitus. although the molecular basis of protein modulation after eps administration in diabetic rats was not verified in this study, the results of the proteomic analysis provide impetus for further studies mining biomarkers for diabetic therapy. potential is still limited by the differences observed in the structure of plant and mammalian n-glycans. indeed, these differences, and particularly the presence of the β1,2-xylose and α1,3-fucose glycoepitopes, are responsible for the immunogenicity of plant n-glycans. in order to reduce the structural differences between plant and mammalian n-glycans, current strategies are to knock out plant-specific glycosyltransferases or to humanize plant n-glycans by expression of mammalian glycosyltransferases in plants. in the present study, we have expressed a human β1,4-galactosyltransferase in alfalfa. in order to further increase the efficiency of the human β1,4-galactosyltransferase in the plant golgi apparatus, we have exchanged the endogenous targeting signal of this human glycosyltransferase for the ones from plant glycosyltransferases recently characterized in our laboratory. we will illustrate this approach of targeted expression with the results obtained by fusion of the catalytic domain of human β1,4-galactosyltransferase with the n-terminal sequence of a plant glycosyltransferase that targets the fusion to the very early compartments of the golgi apparatus. 
the efficiency of natural versus targeted expression of human β1,4-galactosyltransferase in alfalfa will be compared in terms of n-glycan humanization. altogether, our results clearly illustrate that we are now on the way to obtaining perfect copies of mammalian glycoproteins in alfalfa plants. construction of a recb-recd gene fusion and analysis of fusion enzyme activities. oytun portakal 1 , gerald r. smith 2 , pakize dogan 1 : 1 department of biochemistry, hacettepe university medical school, 06100, ankara, turkey; 2 divisions of basic sciences, fred hutchinson cancer research center, seattle, wa 98109-1024, usa. e-mail: oytun@hacettepe.edu.tr (o. portakal). protein folding is a fundamental process for gaining protein function. in an oligomeric protein, the interaction between polypeptides affects the folding process and assembly into the holoenzyme. recbcd is a heterotrimeric and multifunctional enzyme that plays an essential role in the major pathway of homologous recombination in e. coli. it is composed of the recb, recc and recd gene products. recd is the fast motor unit of the recbcd enzyme. recd also plays a role in high-affinity dsdna binding, nuclease activity and chi-dependent regulation. this study was designed to test the hypothesis that the recd polypeptide regulates the essential reca loading activity. the approach of the study was to fuse the recd gene to the recb gene and to observe the changes in enzyme activity and structure. for this purpose, two genetic fusion mutations, a two-nucleotide deletion and a three-codon substitution, were created at the overlap sites (ta) of the recb and recd genes. the fusion mutations were constructed by a phage-mediated recombination system called recombineering. this technology requires red function but not host reca protein function. here, we show the recbd fusion polypeptides in crude extracts. genetic characterization tests revealed that both fusion enzymes are recombination-proficient and have a wild-type phenotype. 
biochemical assays demonstrated that the recbdc fusion heterotrimers have dsdna exonuclease, unwinding and chi-cutting activities. they were also resistant to dna-damaging agents. western blot analysis also detected a wild-type-length recd polypeptide together with the recbd fusion polypeptides and a decreased level of heterotrimer compared to wild type. our findings suggest that recb-recd genetic fusions may affect recd assembly into the heterotrimer, but not its native folding. sandwich immunoassay: a simple strategy for enhancement of the sensitivity and the specificity of prostate-specific antigen detection based on surface plasmon resonance. cuong cao, sang jun sim, department of chemical engineering, sungkyunkwan university, 300 chunchun-dong, jangan-gu, suwon. prostate cancer is a deadly disease in men. prostate-specific antigen (psa) has been proven to be the most reliable and specific biomarker in preoperative diagnosis, monitoring and follow-up of patients with prostate cancer. in this study, a biochip based on surface plasmon resonance was fabricated to detect psa at concentrations ranging from 1 to 1000 ng/ml. to reduce nonspecific binding, the chemical surface of the sensor was constructed using various ethylene glycol mixtures of different molar ratios of hs(ch2)11(och2ch2)6cooh and hs(ch2)11(och2ch2)3oh. we also biotinylated the sam surface to enhance the orientation of protein immobilization. using this surface, spr-based psa detection gave a positive ru value at the first response over the whole range of psa concentrations. however, this ru value could be made larger and more reliable by simply applying a secondary interactant, the psa polyclonal antibody, in a sandwich immunoassay. the results show that this approach could satisfy our purpose without modifying the secondary interactant, as has usually been done in other reports. 
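Reading an unknown PSA concentration back from SPR response units requires a calibration over the 1-1000 ng/ml range mentioned above. As a hedged sketch (the abstract does not describe its calibration model), a toy log-linear fit can illustrate the idea; real SPR immunoassays typically use a four-parameter logistic fit instead, and all RU values below are invented.

```python
# Hypothetical sketch: log-linear calibration of SPR response (RU) vs PSA
# concentration, then inverting it to estimate an unknown. The RU values
# are made up for illustration; they are not data from the study.
import math

def fit_loglinear(concs_ng_ml, responses_ru):
    """Least-squares fit of RU = a*log10(conc) + b."""
    xs = [math.log10(c) for c in concs_ng_ml]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(responses_ru) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, responses_ru)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def conc_from_ru(ru, a, b):
    """Invert the calibration to estimate concentration in ng/ml."""
    return 10 ** ((ru - b) / a)

# Toy calibration points spanning the 1-1000 ng/ml range
concs = [1, 10, 100, 1000]
rus = [50, 150, 250, 350]  # perfectly log-linear toy data
a, b = fit_loglinear(concs, rus)
```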
expression of epitopic domains of human coagulation factor viii in escherichia coli. amir amiri yekta 1,2 , naser amirizadeh 3 , alireza zomorodipour 1* , fariba ataei 1 : 1 department of mol genet, national institute for genet eng & biotechnol, tehran, iran, p.o. box: 14155-634; 2 islamic azad university of jahrom, jahrom, iran; 3 department of hematol, faculty of med, tarbiat modarres university, tehran, iran. e-mails: amir amiriyekta@yahoo.com (a.a. yekta), * zomorodi@nrcgeb.ac.ir (a. zomorodipour). human factor viii (hfviii) plays a major role in the intrinsic pathway of blood coagulation and is used to treat individuals with hemophilia a for bleeding episodes. much research has focused on the molecular aspects of this protein. in this regard, epitopes of hfviii as well as their corresponding antibodies have many important applications. bacterially produced fviii epitopes are capable of neutralizing the alloantibodies that inhibit hfviii activity. the purpose of the present study was to over-express two epitope-containing fragments of fviii in e. coli under the t7 promoter (novagen). two dna fragments from the light and heavy chains of hfviii (942 bp c1c2 and 1644 bp a1a2, respectively) were subcloned in the expression vector. the use of a his6-tagged tail was also considered for detection and purification purposes. in each of the examined clones, a protein of the expected size was detectable. in the c1c2-expressing clone, the specificity of the over-expressed protein was confirmed by its reaction with rabbit serum directed against native hfviii as well as an anti-his-tag antibody. in the heavy chain-related expressing clone, the expression level was low, but it was detectable by immunoblotting experiments. manipulation of the growth as well as induction conditions may be required. the over-expression of the other epitopes reported in the heavy chain may be achievable by the expression of (a) sub-fragment(s) of this region. 
the over-expressed his-tagged c1c2-related protein appeared to be trapped in the cell as non-soluble inclusion bodies. therefore, after homogenization of the induced recombinant cells, the non-soluble fraction was dissolved in a solution of denaturant (guanidine hydrochloride) and subjected to purification using a ni-nta resin (qiagen), followed by protein measurement. accordingly, an expression level of 5 mg/l (of culture) of the purified c1c2-related peptide was obtained. the recombinant hfviii c1c2-derived peptide has provided a useful means for further experimental and medical applications. isotachophoresis has almost exclusively been applied for contracting and stacking sample ions before zone electrophoretic separation of proteins. this study attempts to apply microfluidic isotachophoresis (itp) as a high-resolution analytical method for proteins. beta-lactoglobulin and other milk proteins with slightly different pi were labelled with fluorescent red 646 and analysed by the micralyne tk system using microfluidic glass chips, either with simple cross (sc) or double cross (tt) injection or designed 2d-itp-cze chips with double tt injection and sc for transfer to the second-dimension cze channel, efficiently non-covalently coated with 0.6% w/v epoxy-polydimethylacrylamide to lower electroendosmosis. capillary zone electrophoresis (cze) in borate or phosphate buffer was reproducibly performed for more than 75 consecutive runs using upchurch reservoirs glued to the wells to enable larger buffer volumes and greater run-to-run stability. finally, isotachophoretic anionic separation of the proteins was performed using phosphate (ph 8.1) or chloride (ph 8.5) as the leading ion and ε-aminocaproic acid (ph 8.9) as the terminating ion. the effect of narrow-cut ampholytes as spacers needs further investigation. the perspective aim is to combine the migrating itp-separated zones with second-dimension capillary zone electrophoresis as a new microfluidic proteomic 2d analysis. 
Development of strategies for heterologous expression of glucose dehydrogenase from the halophilic archaeon Halobacterium sp. NRC-1. Juan Carlos Cruz-Jiménez 1, Lorenzo Saliceti-Piazza 1, Rafael Montalvo 2: 1 Chemical Engineering, University of Puerto Rico-Mayagüez Campus, Mayagüez 00680, Puerto Rico; 2 Biology, University of Puerto Rico-Mayagüez Campus, Mayagüez 00680, Puerto Rico. E-mail: juancruzj@hotmail.com (J.C. Cruz-Jiménez). Halophilic archaea are excellent model organisms and are valuable for biotechnology applications; they are easy to culture in the lab, genetically tractable, and exhibit a variety of interesting and useful characteristics. Most halophilic archaea require 1.5 M NaCl to sustain growth and structural integrity. Among the 2,630 genes in Halobacterium, we are studying the gene encoding a glucose dehydrogenase (gene ID 446, located between bases 345205 and 346272; Halobacterium sp. NRC-1 genome project). This extremozyme is bioengineerable, and its use as a model for studying biocatalysis in aqueous/organic and non-aqueous media has not been explored to date. The utilization of enzymes in organic solvents has several potential advantages over aqueous systems; a major benefit is the increased solubility of many substrates, resulting in higher concentrations of reactants and products, hence reducing purification costs and simplifying recovery protocols. Cells were grown aerobically for seven days at 37 °C in a complex medium and harvested by centrifugation, and their genomic DNA was extracted. For cloning of the gene, primers were designed based on the sequence recently published by the Halobacterium sp. NRC-1 genome project. The forward (5′-CCGCATGCGCCCACAGTCCC-3′) and reverse (5′-CCGGCCTCTAGAACGGCCTGG-3′) primers were designed to incorporate restriction sites for SphI and XbaI, respectively.
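As an aside, the restriction-site design described above can be checked programmatically. The sketch below is illustrative only and not part of the reported workflow; it verifies that the quoted primers contain the standard SphI (GCATGC) and XbaI (TCTAGA) recognition sequences.

```python
# Illustrative sketch: locate restriction-enzyme recognition sites in primers.
# Recognition sequences are the standard ones; the primers are from the abstract.
SITES = {"SphI": "GCATGC", "XbaI": "TCTAGA"}

def find_site(primer: str, enzyme: str) -> int:
    """Return the 0-based position of the enzyme's recognition site, or -1 if absent."""
    return primer.upper().find(SITES[enzyme])

forward = "CCGCATGCGCCCACAGTCCC"   # designed to carry an SphI site
reverse = "CCGGCCTCTAGAACGGCCTGG"  # designed to carry an XbaI site
```

A quick check confirms each primer carries its intended site and not the other enzyme's.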
We are PCR-amplifying the genomic DNA and developing methods for heterologous expression in the mesophilic Escherichia coli, as well as purifying the enzyme. The purification procedure will be carried out using high-resolution methods based on the protein's halophilicity. Bioinformatics methods will be used to facilitate confirmation of protein function and for comparison with the native enzyme.

Quantitative measurements by mass spectrometry of hundreds of proteins simultaneously using the new ProteinChip System Series 4000. P. Iversen, E. Fernvik, Ciphergen Biosystems Inc., Symbion Research Park, Fruebjergvej 3, DK-2100 Copenhagen, Denmark. E-mail: piversen@ciphergen.com (P. Iversen). Most mass spectrometry methods used in proteomics allow for the identification of multiple proteins in a limited number of complex samples, but lack the ability to assess the quantity of the proteins and their modifications. However, mounting evidence points to specific cleavages of well-known proteins as strong candidates for specific biomarkers, and in order to discover these biomarkers one has to be able to monitor the quantity and mass of hundreds of proteins from hundreds of complex samples reproducibly. The new Series 4000 instrument, in connection with ProteinChip Arrays® from Ciphergen Biosystems, enables this. The new 4000-series instrument is optimized for sensitivity, reproducibility and quantitation. New ion optics allow the use of higher acceleration voltages, increasing the sensitivity without lowering the resolution. A new method of blanking the detector, in connection with a non-linear detector gain, also increases the sensitivity, to the effect that IgG can be detected down to 0.2 fmol. Furthermore, the unique design of the instrument permits the detection of proteins with great variation in both mass and concentration, making it ideal for proteomics studies.
A unique feature of the 4000-series instrument is the possibility to normalize the output by controlling the laser and detector, so that results can be read with equal precision on different instruments; this is not often possible in mass spectrometry, where individual instruments yield different results. This feature is vital for validating research results beyond individual laboratories.

The coupling of liquid chromatography with mass spectrometry is now firmly established as a routine method for the identification of proteins that have been subjected to enzymatic digestion. In an on-line LC-MS experiment, the column eluent is coupled to the electrospray source via an emitter, and any tryptic peptides present in the mixture are mass-analysed as they elute from the HPLC column. Should there be any co-eluting species in the eluent, these will be separated in the mass analyser by their mass-to-charge ratio. It has become increasingly clear that relative quantification of protein-expression changes is important in modern biology and medicine. Several current approaches utilise stable-isotope labelling of samples in combination with separation and subsequent analysis by mass spectrometry. However, we have recently described an LC-MS strategy in which quantification is achieved via normalisation of the MS datasets followed by comparison of the intensities of the observed tryptic peptides across samples. In this case, it is desirable to perform replicate injections and hence reduce statistical errors. This approach places a requirement upon good chromatography, especially in terms of retention-time reproducibility. In addition, exact mass measurement of the eluting ions is required, as well as the ability to generate reproducible and reliable peak-intensity, or area, calculations for the eluting tryptic peptides.
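The label-free quantification idea described above — normalising the MS datasets, then comparing matched peptide intensities across samples — can be sketched as follows. The peptide names and intensity values are invented for illustration, and median scaling stands in for whatever normalisation the presented strategy actually uses.

```python
# Minimal sketch of label-free LC-MS quantification: scale each run's peptide
# intensities to a common level, then compare matched peptides across runs.
from statistics import median

def normalise(run):
    """Scale a run's intensities so the median intensity equals 1.0."""
    m = median(run.values())
    return {pep: i / m for pep, i in run.items()}

# Hypothetical intensities for the same three tryptic peptides in two runs:
run_a = {"PEPTIDEK": 2.0e5, "SAMPLERK": 4.0e5, "ANOTHERK": 8.0e5}
run_b = {"PEPTIDEK": 1.0e5, "SAMPLERK": 2.0e5, "ANOTHERK": 8.0e5}

norm_a, norm_b = normalise(run_a), normalise(run_b)
# Fold change of each matched peptide after normalisation:
fold = {p: norm_b[p] / norm_a[p] for p in norm_a}
```

After normalisation, the uniform halving between the runs cancels out, and only the genuinely changed peptide shows a fold change different from 1.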
The ability to measure the mass-to-charge ratios of ions accurately, across injections and across samples, increases confidence that the same ions have been matched from each sample injection. In this presentation, our current strategy for the relative quantification of proteins will be discussed using, as examples, complex protein mixtures from Salmonella enterica, Escherichia coli and human serum.

Proteomic analysis for the production of rhCTLA4Ig in transgenic rice cell cultures using DIGE. Ji-Suk Cho, Song-Jae Lee, Inha University. Difference in-gel electrophoresis (DIGE) using fluorescent dyes is a novel technology that simplifies the process of detecting and matching proteins between multiple gels by allowing the separation of up to three protein samples in the same gel. It provides accurate, quantitative and reproducible differential-expression values for proteins in several samples. Recombinant human cytotoxic T lymphocyte-associated antigen 4-immunoglobulin (rhCTLA4Ig) was produced in transgenic rice suspension cell cultures using an α-amylase promoter system. This system is efficient for the production of recombinant proteins, as it secretes target proteins into the culture medium under sugar-depleted conditions. In this study, the intracellular proteins expressed at both the growth and induction stages of culture were separated and analyzed using 2-D DIGE. Each sample from the different conditions, together with an internal standard, was labeled with N-hydroxysuccinimidyl ester derivatives of the Cy2, Cy3 and Cy5 dyes and run within a single DIGE gel. Using DeCyder™ software, 2218 spots were detected at a two-fold threshold with 95% confidence, and 60 proteins were found to undergo significant change during the production of rhCTLA4Ig, with the normalization method improving the data distribution. A pooled sample mixture for the picking gel was prepared with Deep Purple staining and analyzed by mass spectrometry.
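The accurate-mass matching mentioned above is commonly implemented as a ppm-tolerance comparison of m/z values between injections. A minimal sketch follows; the tolerance and m/z values are assumptions for illustration, not figures from the presentation.

```python
# Illustrative sketch: two ions observed in different injections are treated as
# the same species if their m/z values agree within a parts-per-million tolerance.
def ppm_match(mz1: float, mz2: float, tol_ppm: float = 10.0) -> bool:
    """True if mz1 and mz2 agree within tol_ppm parts per million."""
    return abs(mz1 - mz2) / mz1 * 1e6 <= tol_ppm

# Hypothetical m/z readings of the same tryptic peptide in two injections:
same = ppm_match(785.8421, 785.8426)      # sub-ppm difference
different = ppm_match(785.8421, 785.9421) # ~127 ppm apart
```

Tighter mass accuracy allows a smaller tolerance, which reduces false matches between genuinely different co-eluting species.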
In addition, the intracellular rhCTLA4Ig spots were identified by Western blot analysis using a goat anti-human IgG (Fc) antibody after the DIGE gel was transferred to a PVDF membrane.

Study of the substrate specificity of RNR exoribonucleases using hybrid proteins. Ana Barbas, Mónica Amblar, Cecília M. Arraiano, Instituto de Tecnologia Química e Biológica, EAN, 2784-505 Oeiras, Portugal. E-mail: ab@itqb.unl.pt (A. Barbas). Ribonucleases are essential enzymes responsible for the regulation of gene expression and have been shown to be important for biotechnology purposes; for instance, commercial mutants deficient in ribonucleases have been quite relevant for the over-production of recombinant proteins. Escherichia coli RNase II is a processive 3′-5′ exoribonuclease, the prototype of the RNR family, which has homologues widespread in the majority of sequenced genomes. On the basis of sequence alignment, three different domains have been proposed for RNR-type proteins: an N-terminal cold-shock nucleotide-binding domain (CSD), an RNB catalytic domain, and a C-terminal S1 nucleotide-binding domain. We have constructed several RNase II deletion mutants to enable the characterization of each domain. These studies have allowed us to determine that both the CSD and S1 domains are involved in the binding of the enzyme to the RNA substrate, the S1 domain being the most important. In RNA-binding proteins the conformation of the S1 domain has been shown to be highly conserved; however, it is not known whether substrate specificity is S1-dependent. In order to characterize the S1 domain and verify whether it is directly related to substrate specificity, we have constructed RNase II hybrid proteins in which the S1 domain was substituted by the S1 of two other exoribonucleases, RNase R (RNII-RNR) and PNPase (RNII-PNP). Preliminary results have demonstrated that both chimeric proteins are capable of binding and degrading various RNA substrates.
In addition, studies are currently being carried out to verify the possibility that the S1 domain of PNPase in the hybrid protein might be involved in multimerization and/or interaction with other proteins.

The murine monoclonal antibody IgG1 anti-digoxin was produced in a rolling-bottle fermentor. Purification was performed on a protein G column. CD spectra were recorded on a Jasco-810 spectropolarimeter. Protein concentrations of 20-50 µg/mL and a path length of 1 cm were used for measurements in the far-UV region. All measurements were performed in a thermostatted cell holder with an accuracy of ±0.2 °C. At 25 °C the predominance of β-strands is indicated. Large conformational changes occur at 78 °C; at this temperature the spectra tend towards irregular β-strands and unordered structures. These observations confirm the temperature-dependent conformational changes of the protein, as well as the high thermal stability of this monoclonal antibody.

Generation of monoclonal antibodies for the assessment of protein purification by recombinant ribosomal coupling. Janni Kristensen, Kim Kusk Mortensen, Hans Peter Sørensen, Laboratory of Biodesign, Department of Molecular Biology, Aarhus University, Gustav Wieds Vej 10 C, DK-8000 Aarhus C, Denmark. E-mail: hans.peter.sorensen@teknologisk.dk (H.P. Sørensen). We recently described a conceptually novel method for the purification of recombinant proteins with a propensity to form inclusion bodies in the cytoplasm of Escherichia coli. Recombinant proteins were covalently coupled to the E. coli ribosome by fusing them to ribosomal protein L23 (RPL23), followed by expression in an RPL23-deficient strain of E. coli. This allowed the isolation of ribosomes with covalently coupled target proteins, which could be efficiently purified by centrifugation after in vitro proteolysis at a specific site incorporated between RPL23 and the target protein.
To assess the efficiency of separation of target protein from ribosomes by site-specific proteolysis, we required monoclonal antibodies directed against RPL23 and GFP. We therefore purified RPL23-GFP-His, RPL23-His and GFP from E. coli recombinants using affinity, ion-exchange and hydrophobic-interaction chromatography. These proteins could be purified with yields of 150, 150 and 1500 µg per gram cellular wet weight, respectively. However, RPL23-GFP-His could only be expressed in a soluble form, and subsequently purified, when the cells were cultivated at reduced temperatures. The purified RPL23-GFP-His fusion protein was used to immunize BALB/c mice, and the hybridoma cell lines resulting from in vitro cell fusion were screened by ELISA against RPL23-His and GFP to select monoclonal antibodies specific for each protein. This resulted in 20 antibodies directed against RPL23 and 3 antibodies directed against GFP. The antibodies were screened for isotype and for their efficiency in Western immunoblots. The most efficient antibodies against RPL23 and GFP were purified by protein G Sepharose affinity chromatography. The purified antibodies were used to evaluate the separation of ribosomes from GFP, streptavidin, murine interleukin-6, a phage-display antibody and yeast elongation factor 1A by centrifugation, when ribosomes with covalently coupled target protein were cleaved at the specific proteolytic cleavage sites.

Similarity searches and multiple alignment of the S1 and S2 proteins of SARS-CoV for modeling the 3D structure and its evolution (origin). Mohammad Soltany Rezaee Rad, Iman Tavassoly, Negar Mottaghi, Banafsheh Rezaee. E-mail: mohammad.soltany@gmail.com (M.S.R. Rad). Aims: The exact origin of the cause of severe acute respiratory syndrome (SARS) is still an open question; to date, 8 recombinant origins for this virus have been proposed. The S1 and S2 subunits of the spike protein of this virus are the most important proteins responsible for severe acute respiratory syndrome. They are glycoproteins that exist on the surface of the virus and are responsible for mediating fusion of the viral and cellular membranes.
The classification and 3D-structure modeling of this virus can help us to suggest new ideas about its characteristics and function, which may lead to new therapeutic and preventive modalities. Methods: We used the nucleotide sequences of the S1 and S2 subunits of the S (spike) protein for multiple alignments. Multiple alignments were performed with different bioinformatics software (ClustalX, Entrez) to compare the sequence with those of other viruses, and WebLab Viewer software was used for modeling and identifying the 3D structures of these proteins. Findings: Similarity searches of the nucleotide sequence of this protein against 30 single-stranded RNA (ssRNA) viruses show that the virus belongs to a known family, the Coronaviridae. The 3D structures show the role of the S protein in this syndrome. Another finding based on these alignments is an important similarity between these subunits and the genome of HIV-1, suggesting a shared mechanism of pathogenesis. Discussion: Multiple alignment is a powerful tool for the classification of new recombinant viruses and emerging infections. A 3D structural model of this virus is an important guide to understanding its mechanism. The shape of the glycoprotein modeled with bioinformatics software can help us understand the mechanism of binding of this virus to human cells; this can be used in designing drugs and vaccines to cure and prevent SARS. Blocking these origins and sites leads to inhibition of virus attachment. The similarity between this virus and HIV-1 also suggests that both carry similar proteins that cause their pathogenesis.

The simulated moving bed (SMB) technology is a continuous countercurrent chromatographic separation technique that has been applied successfully in recent years to a number of significant problems. An SMB consists of a series of fixed-bed chromatographic columns connected in a loop, and outperforms column chromatography in terms of productivity and solvent consumption.
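The similarity searches described in the SARS-CoV methods above ultimately reduce to identity scoring over aligned sequences. A toy sketch follows; the two short "aligned" sequences are fabricated for illustration and have nothing to do with the actual spike-gene data.

```python
# Illustrative sketch: percent identity of two pre-aligned, equal-length
# nucleotide sequences, with '-' marking alignment gaps (gaps never count
# as matches).
def percent_identity(a: str, b: str) -> float:
    matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
    return 100.0 * matches / len(a)

s1 = "ATGGCGT-ACGT"  # made-up aligned sequence 1
s2 = "ATGGCTTAAC-T"  # made-up aligned sequence 2
```

Real multiple-alignment tools such as ClustalX compute more sophisticated scores, but pairwise identity of this kind is the quantity usually reported when ranking similarity-search hits.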
The use of SMB instead of batch processes for bioseparations, i.e. separations involving large and rather complex molecules with multiple 3D configurations depending on parameters such as pH, temperature, etc., is becoming of greater and greater interest; examples are therapeutic proteins, antibodies and plasmid DNA, among others. For all chromatographic processes in this field, one of the most crucial issues is the cleaning of the chromatographic media with a special solvent system, an operation usually referred to as cleaning-in-place (CIP). In single-column chromatography this is easily done off-line, but that is not compatible with standard SMB operation. In order to overcome this limitation, the standard SMB configuration has been modified by adding a dedicated CIP section plus an additional section for re-equilibration of the freshly cleaned column with the working solvent before it is re-inserted into the SMB loop. In this way, CIP according to GMP criteria can be incorporated into the SMB unit and its operation, which is then called CIP-SMB. This new SMB configuration is also related to the three-fraction separation unit called 3F-SMB, which has recently been introduced and applied to the separation of nucleosides. In this work we apply CIP-SMB using a size-exclusion stationary phase to the separation of plasmid DNA from filtered cell lysate. Plasmid purification has become a key issue in recent years as a result of advances in gene therapy, whereas traditional laboratory methods are not always suitable for therapeutic purposes. We report the separation performance, which is then discussed in the light of SMB design criteria and compared with column-chromatography performance.

Computer-guided optimization of adsorptive bioseparation processes. Bernt Nilsson, Department of Chemical Engineering, Lund University, 221 00 Lund, Sweden.
E-mail: bernt.nilsson@chemeng.lth.se. Separation processes such as chromatography can be highly nonlinear, and their behavior can sometimes be hard to predict. Optimization of preparative chromatography is often done experimentally, which is both time-consuming and expensive. A model-based approach to optimization is therefore an attractive and challenging way to overcome some drawbacks of the traditional working procedure in the biotechnical industry. Efficient model-based optimization for industrial needs requires three parts: models, methods and tools. A model-based methodology requires a set of chromatography column model structures that can capture the phenomena of interest; for instance, they have to capture column-load variations, elution-profile changes, operating-condition disturbances, column configurations and stationary-phase properties. To derive a reliable model for optimization, it has to be calibrated and validated against experimental data, which requires an efficient calibration procedure; different calibration procedures are discussed and compared. After validation, the model can be used in the design of a separation step. For a robust design, a set of requirements has to be fulfilled. The column size and operating conditions are used to optimize the performance of the step, which requires a constrained nonlinear optimization method. The choice of objective function for the optimization, and of the corresponding constraints, is not obvious, and the resulting operating conditions are often not robust; additional methods therefore have to be available for analysis of the performance, such as sensitivity and robustness analysis. Optimization of the purification of antibodies is discussed and exemplified. Work with mathematical models and numerical methods has to be supported by a set of computer tools of different kinds in order to solve industrial problems effectively.
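The constrained optimization step described above can be illustrated with a deliberately simplified toy model. Everything here is invented: a one-parameter "load factor" controls a made-up productivity and yield, and a grid search stands in for the real constrained nonlinear optimiser discussed in the talk.

```python
# Toy illustration of constrained optimisation of a separation step:
# maximise productivity over the column load, subject to a minimum yield.
def productivity(load):
    # Invented model: throughput grows linearly with load.
    return load * 10.0

def yield_fraction(load):
    # Invented model: product yield degrades as the column is overloaded.
    return 1.0 - load / 2.0

# Grid search over load factors 0.01 .. 1.00; infeasible points score -1.
best = max(
    (l / 100 for l in range(1, 101)),
    key=lambda l: productivity(l) if yield_fraction(l) >= 0.75 else -1.0,
)
```

The optimum sits exactly on the yield constraint, which is the typical outcome the abstract alludes to: unconstrained maximisation of productivity alone would push operating conditions into a non-robust region, so the constraint (and a subsequent sensitivity analysis) does the real work.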
There is a need for different kinds of tools: customized tools that let a non-expert user solve specific problems, and a general toolbox for the expert. An example of such a toolbox is presented.

Tina Tarmann, Alois Jungbauer, Department of Biotechnology, University of Natural Resources and Applied Life Sciences, Vienna, Austria. Plasmids and viruses are the contemporary vehicles for gene therapy and genetic vaccination, and extremely promising results have been reported from in vitro, in vivo and clinical studies. Currently, many of these compounds are manufactured with technology that has been transferred directly from laboratory to pilot scale without further engineering. Membrane-based separations, chromatographic separations and precipitation have been employed for the separation of plasmids and viruses. The chromatographic separations were designed with protein separation in mind; thus such processes suffer from either mass-transfer limitations or low capacity. Monoliths without intraskeleton mesopores, and chromatography particles with giga-pores, are excellently suited for the adsorption and separation of plasmids and viruses: low mass-transfer resistance and high capacity can be observed compared with conventional beaded materials. Adsorption kinetics were derived from infinite- and finite-bath methods, and isotherms were constructed. These data also suggest that a conformational change of the plasmids takes place upon adsorption. The mass-transfer properties will be discussed, and an example of the scale-up of a chromatographic separation process using these novel materials will be shown and discussed with respect to existing processes.

The recent developments in molecular therapies such as non-viral gene therapy and DNA vaccination have fostered the development of efficient plasmid DNA (pDNA) purification processes. The separation of supercoiled (SC) and open-circular (OC) isoforms is one of the key steps in the large-scale purification of pDNA vectors intended for therapeutic use.
Although Escherichia coli produces mainly the more compact SC pDNA isoform, OC, linear and denatured pDNA isoforms are usually present and are likely to be less efficient in transferring gene expression. For this reason, regulatory agencies specify that more than 90% of the pDNA in a therapeutic product must be in the SC isoform. In this work, histidine-based recognition is explored as a means to separate pDNA isoforms. The agarose gel used here combines the mild hydrophobic characteristics of an epoxy spacer arm with a pseudo-affinity histidine ligand. Chromatographic profiles obtained by injection of native plasmid (SC + OC) samples onto the histidine-agarose support showed an efficient, baseline separation of the two isoforms. The high resolution obtained with this support indicates that the method is potentially applicable to the separation of pDNA at preparative and analytical scale.

Affinity-ligand development with a novel encoded-bead screening technology. Ib Johannsen, VersaMatrix A/S, Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark. www.versamatrix.com. The presentation describes a new invention for the fast development of affinity ligands, in which up to 20,000 ligands can be screened on-bead and identified in a few hours. Combinatorial synthesis by the split-and-mix procedure is a powerful technique for generating vast numbers of diverse chemical compounds on polymer beads with relatively little effort. Traditionally, the technique is hampered by the laborious spectroscopic and chemical analysis needed to determine the exact structures of the ligands on selected beads; in this way, 6-12 months of analysis time could easily be spent just to analyze a tiny fraction of the library. In the Versaffin™ technology each bead is encoded, individually tracked, and identified during synthesis and screening. This decreases the whole ligand-development time from months to weeks and significantly increases the amount of information obtained.
The bead code further enables evaluation of ligand-protein binding under varying binding and elution conditions. The instrument for reading the encoded beads and for quantifying the amount of bound protein is presented. The encoded beads we use are based on functionalized cross-linked polyethylene glycol (PEG), which is compatible with water as well as with most organic solvents; thus the combinatorial synthesis can be carried out in organic solvents, and the resulting compounds can be evaluated, still bound to the parent beads, under aqueous conditions. A further advantage of using PEG-based beads for on-bead screening is that PEG is biologically inert and therefore does not interfere in a bioassay.

In the biopharmaceutical industry, pressure is mounting to shorten development times and thereby time to market; e.g. in the field of monoclonal antibodies, generic processes have been established which allow more rapid development from gene to production of pre-clinical and proof-of-concept/phase I material. For non-mAb products from various expression systems, product-specific approaches still prevail. However, for most product types, similar issues have to be dealt with in downstream process development, such as clearance of process- and product-related impurities, overall purity and yield, or manufacturing issues. Integrated and timely approaches based on process science and well-developed orthogonal analytical tools are often hampered by tight time frames and the limited resources available. On the other hand, thorough understanding and analytical characterization of product characteristics, and also of process-related impurities, are prerequisites for fully exploiting separation power and for achieving final purities well above 95% in a robust and cost-effective manner. From primary separation to polishing steps, we have made several attempts to improve the efficiency of the process steps themselves, and also of the ways in which they are developed.
The strategies applied comprise the implementation of innovative processing tools, rational streamlining and optimization of sequences of unit operations, and tech-transfer and scale-up considerations. Also in this context, the applicability of scale-down and ultra-scale-down models for process development and optimization, their potential for speeding up development, and their limitations will be discussed.

Chromatographic monoliths are rather new chromatographic stationary phases. They consist of a single piece of a highly porous material in which the pores are interconnected, forming a network of channels. Since the transport mechanism is predominantly based on convection, mass transfer between the mobile and stationary phases is significantly enhanced, resulting in short separation times. Because of this, they seem to be an ideal support for the separation and purification of extremely large molecules such as proteins, DNA or even viruses. In this talk, various features of monoliths, such as high porosity, fast mass transfer, surface accessibility and dynamic binding capacity, will be described, and the effect of each feature on separation and purification efficiency will be discussed in terms of molecular size and properties. While chromatographic monoliths are already widely accepted in microchip fluidic devices, capillary columns and analytical columns, very few reports about preparative monolithic columns can be found. The reasons for this lack of preparative chromatographic columns will be elucidated, and a preparation strategy for the construction of several-liter-volume methacrylate-based monoliths will be presented. Finally, several examples of biomolecule purification, including large proteins, plasmid and genomic DNA, and viruses, on CIM Convective Interaction Media® monolithic columns will be given. Further, their application as bioreactors and as supports for solid-state synthesis will be demonstrated.

Hubbuch, Institute of Biotechnology, Forschungszentrum Jülich, 52425 Jülich, Germany.
E-mail: m.schroeder@fz-juelich.de (M. Schröder). The intraparticle diffusion coefficient is an important parameter for the modeling of chromatographic separation processes. A new method is presented, based on dynamic measurements of intraparticle concentration profiles of proteins in process chromatographic media with a confocal laser scanning microscope. The diffusion coefficient is determined by fitting the experimental data to a spherical diffusion model; excellent agreement between experimental data and simulation results is obtained. The diffusion coefficient was measured for seven proteins in Sepharose 6 FF, spanning molecular weights from 14.3 to 160 kDa. In addition, multicomponent diffusion processes for combinations of differently sized proteins were analyzed, and the influence of adsorbed proteins on the diffusion coefficient was measured in SP or Q Sepharose FF. Taken together, the presented method allows the diffusion coefficient of proteins to be measured in process chromatographic media in a packed column.

Use of automated docking for predicting the chromatographic behavior of proteins in hydrophobic interaction chromatography. Andrea Mahn, M. Elena Lienqueo Contreras, University of Chile, Santiago, Chile. In the present work, we have extended and automated the methodology proposed by Mahn et al., 2005 for predicting protein behavior in hydrophobic interaction chromatography (HIC). This methodology is based on the good correlation between the average surface hydrophobicity of the interfacial zone (local hydrophobicity, LH) and protein retention time in HIC, previously established for only three different ribonucleases. To determine the LH, it is necessary to select the most probable protein-ligand conformation.
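The fit to a spherical diffusion model described above can be sketched as follows. The series solution for the fractional uptake of a sphere is standard textbook material, but the bead radius, time points and diffusion coefficient below are invented, and synthetic "data" plus a crude scan over candidate D values stand in for the real least-squares fit to confocal profiles.

```python
# Illustrative sketch: fit an intraparticle diffusion coefficient D by comparing
# measured fractional uptake of a spherical bead with the series solution
#   M(t)/M_inf = 1 - (6/pi^2) * sum_n (1/n^2) * exp(-n^2 pi^2 D t / R^2).
import math

def uptake(t, D, R, terms=50):
    """Fractional uptake of a sphere of radius R after time t (truncated series)."""
    s = sum(math.exp(-n**2 * math.pi**2 * D * t / R**2) / n**2
            for n in range(1, terms + 1))
    return 1.0 - 6.0 / math.pi**2 * s

R = 45e-6                    # assumed bead radius in metres
D_true = 5e-11               # "true" D (m^2/s) used only to fake the data
times = [5, 10, 20, 40, 80]  # assumed measurement times in seconds
data = [uptake(t, D_true, R) for t in times]

# Crude one-parameter fit: pick the candidate D with the smallest sum of
# squared residuals against the (here synthetic) uptake curve.
candidates = [d * 1e-12 for d in range(10, 101)]
D_fit = min(candidates,
            key=lambda D: sum((uptake(t, D, R) - y) ** 2
                              for t, y in zip(times, data)))
```

A real implementation would fit the full radial concentration profiles rather than total uptake, but the parameter-estimation logic is the same.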
In this work, we determined the most probable conformation of more than 12 proteins as follows: (i) first, the Insight II Affinity module by Accelrys was used for automated docking (grid method) of ligands (phenyl) to the proteins (100 conformations); (ii) the different probable docked protein-ligand conformations were then automatically scored using the Insight II Ludi module by Accelrys; (iii) after that, the conformations were clustered and each cluster was scored using the average score of its members; (iv) finally, the most probable conformation was selected using a function based on the number of cluster members and the average score value. Once the most probable conformation was selected, the local hydrophobicity (LH) was calculated using the Graphical Representation and Analysis of Structural Properties (GRASP) program. The results showed an acceptable correlation (r > 0.90) between LH and the experimental dimensionless retention time (DRT). In view of these results, we consider that this methodology could be used to adequately represent chromatographic behavior in HIC for all kinds of proteins (with heterogeneous or homogeneous surface-hydrophobicity distributions), without a large number of tedious experiments, using only computational simulation and an adequate scoring criterion.

Potato tuber proteins are nutritious and show potential as functional ingredients in food systems. However, present bulk-processing technology can only recover by-product protein for animal-feed use. An expanded-bed adsorption (EBA) process for isolating functional food-grade protein from crude potato-starch effluent was previously developed. The moderate capture efficiency (20-25%) of the total crude protein was most likely caused by diffusion limitations and by aggregated protein inaccessible for adsorption.
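The final validation step of the HIC methodology above amounts to computing a Pearson correlation coefficient between LH and DRT and checking r > 0.90. A minimal sketch follows; the LH and DRT values are fabricated for illustration, not the study's data.

```python
# Illustrative sketch: Pearson correlation between local hydrophobicity (LH)
# and dimensionless retention time (DRT), computed from first principles.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lh = [0.12, 0.25, 0.31, 0.44, 0.58]   # hypothetical local hydrophobicities
drt = [0.20, 0.35, 0.41, 0.55, 0.72]  # hypothetical dimensionless retention times

r = pearson(lh, drt)
```

With near-linear fabricated data the coefficient easily clears the r > 0.90 threshold; with real proteins the scatter of the docking-derived LH values is what determines whether the correlation holds.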
we employed the same adsorption ligand attached to agarose-tungsten carbide beads to create stable beds of 2.0-2.7× expansion using flow rates of 400-750 cm h−1. a pilot scale eba process was run in a commercial processing plant over a three-month campaign of starch production from potatoes of mixed variety. fresh crude effluent (150-300 L/cycle) was applied to a column (20 cm × 2 m) containing 20 L of resin. protein capture by eba was reliable in operation, producing a refined protein material which, after dewatering and gentle drying, showed improved functionality over heat-coagulated protein produced at the same plant. overall productivity increased. however, finding a robust operating window of predictable productivity is challenging since the potato fruit water is complex and deteriorates easily. from breakthrough curves, it is observed that the major bulk protein, patatin, displays non-langmuir adsorption behavior. this may indicate a range of interactions for different species of the same protein. chlorogenic acid (ca), the main polyphenolic substance in potato tuber, causes enzymatic browning and undesirable flavor changes, but polyphenols can also react with protein. assessing the effects of interacting cell components therefore applies to the bioprocessing of plant material. at present the acceptance of biochip technology for on-site use, e.g. diagnosis or environmental control, is hindered by rather expensive and complex instrumental systems. there is a need to provide reliable and cost-effective systems that can be operated with minimal training. the construction of electronic biochip microarrays using semiconductor technology enables the construction of compact systems with high integration at acceptable production costs. the key features of the fully electrical biochip technology are microarrays made in advanced si-technology, carrying several array positions with interdigitated nanometer gold electrodes on their surface.
the chips are fabricated by standard silicon fabrication methods, allowing high volume production and minimising the cost per chip. the advantage of fully electronic microarrays is the intrinsic high spatial resolution and direct signal coupling of the biosensing element and the transducer. the function of fully electronic biochips is also based on the electrochemical transduction and quantification of the formation of affinity complexes on the chip surface. a portable device for field applications and point-of-care diagnosis has been designed and manufactured. the amperometric device enables the recognition of biomolecular interactions by measuring the redox recycling products of elisa enzyme labels. the highly sensitive signal transduction is achieved with a 16-channel interdigitated ultramicroelectrode array. one major advantage of fully electronic microarrays is the direct signal coupling of the biosensing element and the resulting robustness and opportunity for miniaturisation. those electrical biochip arrays have been adapted for the detection of all types of affinity complexes, such as dna, rna, proteins and haptens. self-assembly of capture oligonucleotides via thiol-gold coupling has been used to construct the array chip nucleic acid interface. thus, e.g., pathogenic microorganisms have been identified and quantified via their genomic dna or ribosomal rna, respectively. another application based on immobilized antibodies is shown to sense extremely low concentrations of bioagent toxins. for processing the assay formats and the electrical read-out of the detection of affinity complexes a modular, fully automated measurement system has been developed. it is manufactured in industrial lines and available on the market. dynamics and self-assembly of organic molecules on surfaces revealed by high-resolution, fast-scanning stm flemming besenbacher interdisciplinary nanoscience center (inano), university of aarhus, dk-8000 aarhus c, denmark.
e-mail: fbe@inano.dk in the interdisciplinary area of nanoscience and nanotechnology, the adsorption and self-assembly of organic molecules on single-crystal surfaces have attracted much attention lately due to the potential applications in fields ranging from molecular electronics to biocompatible interfaces. the supramolecular structures formed upon deposition of molecular species on solid surfaces depend, on the one hand, on the molecular architecture and the distribution of functional groups, which determine the thermodynamically stable molecular arrangement, and, on the other hand, on kinetic factors like thermal diffusion, spontaneous rotations and conformational dynamics. i will show how the unique capabilities of our aarhus stm and time-resolved, high-resolution stm imaging can be used to obtain important new atomic-scale insight into the dynamics of molecular nanostructures. the time-resolved stm data are visualized in the form of stm movies (see www.inano.dk/spm) which can subsequently be analyzed in order to extract quantitative information on the activation energy, the prefactors and the adsorbate-promoted diffusion. i will specifically discuss: (i) the self-assembly of guanine quartets on au(111) and the influence of cooperative hydrogen bonds, and (ii) the molecular recognition in binary mixtures of dna bases. g molecules are found to self-assemble into a hydrogen-bonded network of g-quartets, whose structure corresponds perfectly with the quartet structure of telomeric dna determined by x-ray crystallography. the strong preference of g molecules to form quartets can be explained by a cooperative effect that strengthens the hydrogen bonds within the g-quartet network over the hydrogen bonds in isolated dimers.
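extracting activation energies and prefactors from time-resolved stm movies typically reduces to an arrhenius fit of hopping rates versus temperature; a minimal sketch on synthetic rates (the values of e_a, the prefactor and the temperatures below are illustrative, not data from the talk):

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_fit(temps, rates):
    """Linear fit of ln(rate) vs 1/T; returns (E_a in eV, prefactor in Hz)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx)**2 for x in xs))
    intercept = my - slope * mx
    return -slope * KB, math.exp(intercept)

# synthetic hopping rates for E_a = 0.30 eV, nu0 = 1e12 Hz
Ea_true, nu0 = 0.30, 1e12
temps = [120.0, 130.0, 140.0, 150.0]                       # K
rates = [nu0 * math.exp(-Ea_true / (KB * T)) for T in temps]

Ea, prefactor = arrhenius_fit(temps, rates)
```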
by means of a combination of stm experiments and dft calculations we compare the 2d molecular networks formed on deposition of the binary mixtures g-c (purine-pyrimidine pair of complementary bases) and a-c (purine-pyrimidine pair of non-complementary bases). we find that the non-complementary bases segregate into islands of pure a and a network of pure c, whereas the complementary bases g and c form a network that cannot be separated by annealing up to the desorption temperature for c. high-resolution stm images allow us to identify the structures responsible for the enhanced thermal stability as ones that contain g-c bonds, possibly with the same structure as the watson-crick pairs in dna molecules. kühnle, r., et al., 2002. nature 415, 891. otero, r., et al., 2004. angewandte chemie int. ed. 43, 2092. otero, r., et al., 2004. nat. mater. 3, 779. otero, r., et al., 2005. angewandte chemie int. ed. 44, 2. rosei, f., et al., 2002. science 296, 328. schunack, m., et al., 2002. biomedical and pharmaceutical companies are using an increasing number of carbohydrate polymers in the formulation of drugs. one such polymer with highly attractive features is chitosan, which can be produced from crustacean shells. chitosan is non-toxic, biocompatible and biodegradable. chitosan can be formulated as nanoparticles or membranes and has enhanced several bioprocesses. among the well-documented features are enhanced drug uptake by tight junction relaxation and enhanced in vivo uptake and protection of nucleic acid formulations. chitosan research has increased throughout this decade. research programs are addressing the potential of chitosan applications, but well-characterized preparations of defined molecular size and charge are not easily available. specifically, companies working with the development of new drugs and enhancement of drug functionality are in need of formulation technology that provides well characterized biocompatible material.
in the chitosan innovation consortium the danish companies coloplast, novozymes biopolymers, pipeline biotech, and zgene, together with the research center inano (aarhus university) and bioneer a/s, have developed a series of chitosan preparations suitable for research of biopharmaceutical applications (www.chitosan.dk). the ability to obtain functional formulations is currently being tested in both in vitro and in vivo experiments. the consortium participants have established a platform from which chitosan processing, characterization and formulation technology can be extracted. by providing specified chitosan preparations the polymer features can be adjusted to fit specialized biopharmaceutical applications. high throughput bioprocessing govind rao center for advanced sensor technology, umbc, baltimore, md 21250, usa the post-genome era holds a great deal of promise. an enormous number of new proteins await study. these will require sophisticated culture techniques, as cells will have to be grown under large numbers of environmental conditions to elucidate expression triggers. unfortunately, unlike molecular biology, bioreactor technology has changed little since its inception. the primary reason has been a lack of sensor technology that can be readily employed to monitor the cellular environment. we will take a look at the current status of the technology and report on promising optical sensor technology that permits low-cost high throughput cell culture and fermentation. noninvasive sensors that monitor ph, po2 and pco2 and high sensitivity solutions for glucose and glutamine measurements will be presented. in addition, the mixing characteristics that determine bioreactor performance will be examined at the small scale and their relevance to the large scale will be demonstrated.
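optical dissolved-oxygen sensors of the kind mentioned above are commonly read out through the stern-volmer quenching relation i0/i = 1 + ksv·[o2]; a minimal sketch, with an assumed quenching constant and unquenched intensity (both hypothetical):

```python
def oxygen_from_intensity(I0, I, Ksv):
    """Invert the Stern-Volmer relation I0/I = 1 + Ksv*[O2] to recover the
    dissolved-oxygen level from a quenched optical sensor reading."""
    return (I0 / I - 1.0) / Ksv

# hypothetical calibration: Ksv = 0.02 per %DO, unquenched intensity I0 = 1000
Ksv, I0 = 0.02, 1000.0
reading = 500.0                                   # quenched intensity
do_percent = oxygen_from_intensity(I0, reading, Ksv)   # 50 %DO
```

a real ratiometric or lifetime-based probe would calibrate ksv and i0 against known oxygen standards rather than assume them.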
on-line monitoring and fed-batch operation in shake flask and micro titre plate cultures jochen büchs 1 , frank kensy 1 , markus jeude 1 , tibor anderlei 2 , barbara dittrich 3 , doris klee 3 : 1 biochemical engineering, rwth aachen university, aachen, germany; 2 ac biotec, jülich, germany; 3 textile chemistry and macromolecular chemistry, rwth aachen university, aachen, germany although the methods of molecular biology have led to rational design of micro-organisms to suit our requirements, screening of large numbers of strains and media is still one of the most important tasks in biotechnology. batch operation of shaken bioreactors and the absence of on-line monitoring are still the general state of the art for that purpose. it is also a very common practice in screening projects that only the final product titre is measured at the end of the culture for the evaluation of the "best performers". in recent years several approaches were introduced to follow microbial cultures also in shaken bioreactors like shake flasks or micro titre plates (mtps). it became obvious that the cultures can behave quite unexpectedly, and most relevant and essential information is lost if only the final product titre at the end of the cultures is utilised for evaluation. as a result, the screening may be directed in an unknown and non-intended direction or may even fail. this is demonstrated with some examples in this contribution. new methods and techniques are introduced to measure the oxygen and carbon dioxide transfer rate and the respiratory quotient in shake flasks and the optical density, nadh fluorescence, ph and do2 in mtps. if the desired product can be fused to a fluorescent protein, like gfp or yfp, product formation can also be monitored on-line in mtps. it is of utmost importance that the operating conditions of the applied shaking bioreactors are suitable and that the shaking motion is not stopped during the measurement. otherwise, e.g.
power input, mixing and oxygen supply are interrupted and the micro-organisms will continually rearrange their metabolism to cope with these disturbances of their environmental conditions. another problem of screening is the commonly applied batch operation mode. a lot of microbial systems display an overflow metabolism, substrate or osmotic inhibition, or are characterised by catabolite-repressed product formation. in all these cases, batch operation is not the preferred operation mode and, therefore, these cultures are run in fed-batch at larger scales. in particular, it is nearly impossible to screen systems which are catabolite repressed by the carbon source in batch mode in defined mineral media. after initial growth has led to a nearly complete consumption of the carbon source, the product formation is derepressed. but then no more carbon source is available to continue with production. it is quite questionable whether suitable strains for later fed-batch operation can be selected in batch operation mode. this consideration has resulted in the development of a new technique which allows screening to be run in fed-batch operation mode. this technique is applicable in shake flasks as well as in mtps. dramatic increases in product titre of between 4- and 400-fold were observed under these conditions compared to conventional batch screenings. mikkel nordkvist, john villadsen center for microbial biotechnology, technical university of denmark, dk-2800 lyngby. e-mail: mnq@biocentrum.dtu.dk (m. nordkvist) efficient mixing and mass transfer are highly important in the chemical industry and in the fermentation industry. poor mixing can result in low yield and variable product quality in a number of cultivation processes, and mass transfer can easily become the limiting step in aerobic cultivations, especially at high cell density.
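the claim that mass transfer easily becomes limiting at high cell density follows from the oxygen balance otr = kla·(c* − c): once dissolved oxygen is driven toward zero, the supportable biomass is capped. a back-of-envelope sketch with assumed values for kla, c* and the specific uptake rate q_o2 (all hypothetical):

```python
def otr(kla, c_star, c_liquid):
    """Oxygen transfer rate = kLa * (C* - C), in mol O2 per L per h."""
    return kla * (c_star - c_liquid)

def max_biomass(kla, c_star, q_o2):
    """Biomass (g/L) supportable when OTR just balances uptake q_o2 * X,
    assuming dissolved O2 is driven to ~0 at the limit."""
    return otr(kla, c_star, 0.0) / q_o2

# assumed values: kLa = 300 1/h, C* = 0.21 mmol/L, qO2 = 3 mmol O2/(g h)
x_max = max_biomass(300.0, 0.21e-3, 3.0e-3)   # ~21 g/L dry weight
```

high-cell-density cultivations routinely exceed such a figure, which is why enhanced mass transfer (as in the rotary-jet-head system below) or oxygen enrichment becomes necessary.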
we have tested a new tank reactor system, where liquid is withdrawn from the bottom of a tank, rapidly circulated, and injected back into the bulk liquid through the nozzles of rotary jet heads. liquid feed as well as gas is added in the recirculation loop and thereby distributed via the rotary jet heads. solid feed in powder form can also advantageously be added in the loop, and heat is efficiently removed in a plate-type heat exchanger, which is part of the loop. the system has a very simple design with no internal baffles or heat exchange area, and between batches the rotary jet heads are used for cleaning in place. a number of applications ranging from dispersion of liquid and powder to mass transfer will be presented. mass transfer applications include baker's yeast cultivation and oxidation of lactose to lactobionic acid by a carbohydrate oxidase. fast and accurate analytical information that can be used to rapidly evaluate the interactions between biological systems and bioprocess operations is essential for optimization of biological production processes. we have researched and developed a multiplexed microbioreactor system for the parallel operation of multiple microbial fermentations. the microbioreactors have working volumes from 5 to 150 μl, and are instrumented for real-time monitoring of dissolved oxygen, ph and optical density. the growth profiles obtained with escherichia coli compare favorably to results obtained from conventional 500 ml batch bioreactors. we also demonstrate the use of our microbioreactors coupled to dna microarray analysis, as a tool for accelerated discovery and elucidation of metabolic pathways and gene expression profiles. the multiplexed system represents a significant step towards high-throughput data acquisition and has the potential to replace current instrumented bioreactors, which are bulky, expensive to run, and require many mechanical manipulations.
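growth profiles such as the e. coli curves mentioned above are usually summarized by the specific growth rate μ, obtained from a log-linear fit of optical density versus time; a minimal sketch on synthetic exponential-phase data (the μ and od values are illustrative):

```python
import math

def growth_rate(times, ods):
    """Specific growth rate mu (1/h) from a log-linear fit of OD vs time."""
    ys = [math.log(od) for od in ods]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    return (sum((t - mt) * (y - my) for t, y in zip(times, ys))
            / sum((t - mt)**2 for t in times))

# synthetic exponential-phase ODs: mu = 0.6 1/h, OD0 = 0.05
times = [0.0, 1.0, 2.0, 3.0, 4.0]                    # h
ods = [0.05 * math.exp(0.6 * t) for t in times]

mu = growth_rate(times, ods)
```

comparing μ (and lag/stationary transitions) between the microbioreactor and a bench-scale reference is one simple way to make the "compare favorably" claim quantitative.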
design of a laboratory scale bioreactor to study solid-state tobacco fermentation m. di giacomo, l. nappi, d. silvestro, m. paolino, d. parente r&d biology department, british american tobacco italia, naples 80126, italy italian toscano cigar production is based on the fermentation of dark fire-cured tobacco. the process starts with the rise of leaf moisture to water activity levels that assure development of the wild phylloplane microflora in the absence of free water. the intense growth of microorganisms modifies leaf characteristics (ph rise from acidic to alkaline conditions), contributing to define the typical toscano smoke profile. tobacco fermentation takes place in large bulks of 500 kg, which causes a considerable amount of heat evolution as a function of the metabolic activities of the microorganisms. this heat accumulates and temperature can rise to as high as 70 °c. a laboratory cylindrical packed-bed bioreactor was designed to work under isothermal conditions. the reactor was ideal for fermenting small quantities of tobacco (200 g) and was made up of a column aerated from the bottom with humidified air and placed in a thermoregulated room. experiments were conducted with constant temperature and air flow. moreover, bioenrichment experiments were conducted in the presence of different microbial starter cultures. fermentation courses were monitored by measuring microbial counts and chemical/physical modification of the substrate. with this laboratory scale system we obtained different kinds of information on the role and dynamics of the microorganisms involved in the fermentation process and on the influence of different environmental conditions. for the future, the design of an adiabatic device for tobacco fermentation is planned. on-line liquid chromatography as a process analytical technology for monitoring and control of biotech processes rick e.
cooley 1,2 : 1 process analytics center of excellence, dionex corporation, sunnyvale, ca, usa; 2 eli lilly and company, indianapolis, in, usa (retired) biotech processes, used to produce an active pharmaceutical ingredient (api), generally differ from small molecule api manufacturing processes in that the starting materials tend to be more variable and more complex, and the product of interest is present at lower concentration, because they originate from a biological rather than a chemical process. this complexity and low starting concentration have generated a high level of interest in developing technologies that can be utilized to increase yield and reduce variability in the initial bioreactor phase, as well as in downstream isolation and purification operations. the use of process analytics, or on-line analytical measurements, is a technology approach that can contribute to increased process understanding and control. this presentation will provide examples of how various analytical measurement technologies have been utilized to monitor typical bioprocess unit operations, leading to increased automation and control. examples of the use of on-line liquid chromatography to monitor bioreactors, process scale chromatography columns, and enzymatic reactions will be presented in more detail. application of advanced monitoring strategies for recombinant protein production karl bayer department of biotechnology, university of natural resources and applied life sciences, vienna a-1190, austria high yield in combination with the required quality is the key objective of large-scale production of recombinant proteins using different host/vector systems. however, in the past the efficient production of recombinant proteins was frequently limited due to inadequate exploitation of the cell factory and deficiencies in process design.
to achieve optimal exploitation of microbial cell factories the key requirement is to enhance the monitoring capabilities, to improve the insight into host metabolism dynamics and to cope with the limited understanding of the interaction of recombinant protein synthesis with host cell metabolism. since each protein exerts an individual influence on the host/vector system, the selection of appropriate analytical methods is even more important. in order to overcome these problems, high throughput technology platforms, such as dna microarrays for transcriptome and differential 2-d electrophoresis (dige) for proteome analysis, provide extensive data to screen for significant analytes and an appropriate basis for in silico modelling and reverse engineering of regulatory networks. taking advantage of such an iterative process, experimental design will be improved, further increasing the performance of the modelling. in addition, transcriptome data are used to screen fast stress-responsive promoters to set up gfp-based reporter gene fusions for in-situ monitoring of the metabolic load due to recombinant gene expression. moreover, chemometric methods are frequently applied to model complex data from easily obtainable on-line data sets, to overcome the limited monitoring capabilities due to the high complexity and nonlinearity of biological reactions and reaction networks. this strategy has been successfully applied to model bdm (bacterial dry matter), pcn (plasmid copy number) and the amount of recombinant protein using data sets acquired from off-gas analysis (o2 consumption, co2 evolution), alkaline consumption rate, in-situ capacitance and multi-wavelength fluorescence measurements.
at-line monitoring of bioprocess-relevant marker genes using an electric dna-chip britta jürgen 1 , daniel pioch 1 , le thi hoi 1 , jörg albers 2 , rainer hintsche 2 , stefan evers 3 , karl-heinz maurer 3 , michael hecker 4 , thomas schweder 1 : 1 institut für pharmazie, ernst-moritz-arndt-universität, 17487 greifswald, germany; 2 ebiochipsystems, 25524 itzehoe, germany; 3 vtb-enzymtechnologie, henkel kgaa, 40191 düsseldorf, germany; 4 institut für mikrobiologie, ernst-moritz-arndt-universität, 17487 greifswald, germany. e-mail: britta.juergen@uni-greifswald.de (b. jürgen) the gram-positive bacteria bacillus licheniformis and bacillus subtilis represent important industrial hosts for the production of enzymes (e.g., proteases and amylases) or antibiotics (e.g., bacitracin). both organisms are attractive for this purpose because of their apathogenicity and their classification as gras organisms (generally regarded as safe). moreover, their easy cultivation and their high natural capacity to secrete proteins into the growth medium qualify them for the industrial overproduction of homologous or heterologous proteins (simonen and palva, 1993). for the control of the physiological state and the productivity of the production cells, efficient analysis tools are of great interest. a prerequisite for the evaluation of the physiological state of cells during industrial fermentation processes is the analysis of so-called process-relevant marker genes, the expression of which indicates unfavorable growth and production conditions (schweder and hecker, 2004). by means of proteome and transcriptome analyses we have identified critical process-relevant genes of b. licheniformis and b. subtilis cells under different nutrient limitation conditions and during industrial-close bioprocesses. dna-chips with probes for such process-relevant marker genes could be valuable diagnostic tools for the monitoring of the cellular physiology during microbial bioprocesses.
in order to provide reliable tools for the monitoring of the cell physiology during microbial bioprocesses, we have developed a fast mrna analytical approach, which allows an at-line monitoring of the transcriptional activity of selected marker genes during bioprocesses. this approach is based on an easy, fast and reliable rna isolation procedure and the measurement of specific mrnas by means of an electric dna-chip (barken et al., 2004). a robust bioprocess is crucial to ensure consistent process performance and provide product of high quality for drug manufacturing in the biopharmaceutical industries. existing methodologies for bioprocess design do not involve establishing mechanisms to achieve the desirable robust bioprocesses and have low capacity for handling uncertainty in product manufacturing. also, the solutions are often obtained stepwise and do not account for interactions between the steps. despite its importance, the robustness of a bioprocess has not been properly defined, and studies carried out in a statistical sense are often retrospective. in addition, the computational cost is expensive due to using a line search algorithm for finding an optimal operating solution. finally, the existing methodologies are difficult to apply to the whole bioprocess in the biopharmaceutical industries. this paper attempts to define rigorously a measure for process robustness and presents a new methodology for evaluating the robustness of bioprocess operations and their performance. the methodology is based on the concept of 'windows of operation', which shows the whole range of possible operating regions. the methodology also establishes a lower bound for the largest variation of a design variable that still ensures the performance. these bounds are achieved by min-max optimization techniques. a direct search algorithm has been developed and its computational cost is much lower than that of the line search algorithm.
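the max-min flavour of the robustness search described above can be sketched as a direct (grid) search that scores each operating point by its worst performance over a ±delta perturbation and keeps the best worst case; the performance function and operating window below are purely illustrative, not the paper's model:

```python
def performance(x):
    """Hypothetical process performance as a function of one operating variable."""
    return -(x - 5.0)**2 + 25.0

def robust_choice(candidates, delta, n_pert=21):
    """Max-min direct search: pick the operating point whose WORST performance
    over perturbations in [x - delta, x + delta] is best."""
    def worst(x):
        return min(performance(x + delta * (2 * i / (n_pert - 1) - 1))
                   for i in range(n_pert))
    return max(candidates, key=worst)

candidates = [i * 0.5 for i in range(21)]   # operating window 0..10
best = robust_choice(candidates, delta=1.0)  # point least hurt by +/- 1 variation
```

a real window-of-operation study would do this over several design variables at once and report the feasible region, not a single point.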
results include visualization of robust operating regions and a set of indices which compare the performance of different operating strategies. the capabilities and efficiency of this methodology are illustrated by applying it to centrifuge selection for the clarification of high solids density cell broths. the research work will impact considerably upon robust bioprocess operation. when grown on methanol, pichia pastoris is able to synthesize proteins to high titres as well as secreting and glycosylating them, thereby making this organism a very interesting host for the production of recombinant drugs at large scale. the residual methanol concentration has been reported to strongly influence the specific productivity, the optimum concentration being around 3 g/l. a suitable monitoring and control technique is therefore necessary to study and improve the productivity of p. pastoris fermentations. the current research aims at showing how a mid-infrared spectrometer (atr-ftir) can be calibrated in-situ in order to monitor and control p. pastoris fermentations. this method is simple and fast, and eliminates the need for both standard preparation and off-line calibration. it is based on the observation that during fed-batch processes, only substrate and biomass concentrations vary significantly. the method therefore consists of adding a known amount of methanol at the beginning of the process, just after inoculation, and subsequently calibrating the instrument. financial support from the swiss national science foundation is gratefully acknowledged. implantable biomaterials are subjected to inflammatory responses mediated by adherent phagocytes such as monocytes and macrophages. these cellular responses and behavior have been shown to be dependent on the type of protein that adsorbs to the surface. surface modification is necessary to control and prevent protein adsorption, and thus modulate the inflammatory responses.
hydrophilic surfaces that adsorb minimal amounts of protein are considered useful for minimizing the inflammatory reactions to biomaterials. in this study we have used two routes to modify polyethylene terephthalate (pet) films: (1) a wet-chemical method for attachment of linear polyethylene glycol chains (mpeg); and (2) gas-phase plasma polymerisation of diethylene glycol vinyl ether (degve) to generate peg-like surfaces. the surface chemistry was assessed by x-ray photoelectron spectroscopy (xps), fourier transform infrared spectroscopy (ftir) and time-of-flight secondary ion mass spectrometry (tof-sims). the two pegylated surfaces were compared for their ability to minimise both fibrinogen adsorption and the adhesion and activation of macrophage-like human leukocytes. adsorbed fibrinogen has been shown to be one of the key proteins in stimulating inflammatory responses to biomaterials. adsorption was investigated quantitatively using 125i-radiolabeled human fibrinogen. in addition, the conformation of the adsorbed protein was tested using an antifibrinogen monoclonal antibody in an enzyme-linked immunosorbent assay. the results showed that pegylated surfaces adsorbed up to 90% less fibrinogen, and that unfolding of adsorbed fibrinogen was more pronounced on the linear mpeg layers than on the peg-like plasma polymer surfaces. adhesion of in-vitro differentiated macrophage-like u937 cells was reduced on both the peg-like plasma polymer surfaces and the linear mpeg layers compared to the unmodified pet surface, but cells adhering to the peg-like plasma polymer surfaces secreted less tumor necrosis factor-α (tnf-α) than cells adhering to the linear mpeg layers. thus, the linear mpeg surface is relatively efficient at reducing adhesion of macrophage-like cells, but those cells that do attach are in a more activated and proinflammatory state. analysis of ceramides from biological samples m. budvytiene 1 , j. liesiene 1 , b. niemeyer 2 , n.
ceramides in human skin play an important role in the regulation of cell growth, differentiation and apoptosis. moreover, they are involved in numerous signaling pathways. the growing interest in the investigation of the physiological functions of ceramides requires efficient methods for separating ceramides from biological sources and sensitive analytical methods. in this work some sensitive and selective methods, involving thin layer chromatography (tlc), high performance liquid chromatography (hplc), mass spectrometry (ms) and nuclear magnetic resonance spectroscopy (nmr), were employed for the separation and characterization of ceramides from human foreskin. epidermal lipids were extracted from human foreskin for 24 h at room temperature using three solvent mixtures (chloroform/ethanol/water 1:2:0.5, v/v/v; chloroform/ethanol 1:1, v/v; and chloroform/ethanol 2:1, v/v). ceramide retention characteristics in tlc and hplc were compared with the retention of commercial standards. the best separation was obtained using a normal phase column packed with 3 μm hilic silica. the elution was performed using a mixture of chloroform and ethanol 50/50 (v/v) as the eluent. two commercial standards, n-stearoyl-sphingosine cer(ns), rf = 0.51, and n-palmitoyl-sphingosine cer(np), rf = 0.50, were selectively separated with the hplc system under the above-mentioned conditions. the retention times of cer(np) and cer(ns) were 4.31 and 5.19 min, respectively. the same lipids were detected by hplc in the human foreskin extracts. the structure of the lipids from the collected fractions was confirmed by means of mass spectrometry and nmr. the physiological functions of the separated ceramides are being investigated. production of recombinant human thymosin-α1 overexpressed as an intein fusion protein in e. coli roman s. esipov, vasily n. stepanenko, anatoly i. miroshnikov, shemyakin-ovchinnikov institute of bioorganic chemistry, russian academy of sciences, ul. miklukho-maklaya 16/10, moscow, 117997 russia.
e-mail: esipov@ibch.ru (roman s. esipov) medicines based on polypeptides consisting of 30 or more amino acid residues are at present widespread on the pharmaceutical market. practically all known polypeptide medicines are prepared by general chemical synthesis, which causes the high cost of their production. the biotechnological preparation of polypeptide medicines using recombinant gene expression in bacteria therefore seems promising. during our work we designed a hybrid construct that solves the problem of specific cleavage of the target polypeptide from the hybrid protein using the protein-splicing system from new england biolabs. for thymosin-α1 production, the intein-mediated purification with an affinity chitin-binding tag (impact) system has been used, in which the modified sce vma intein from s. cerevisiae is applied. in contrast to other systems, impact allows the preparation of the target protein without the use of a serine protease or other factors that may cleave the hybrid protein. in the presence of thiol reagents, such as dithiothreitol, mercaptoethanol, or cysteine, the hybrid protein can be site-specifically cleaved to give the intein, the target protein and a small n-extein fragment. this system does not allow the target protein to be obtained with ser, cys or thr at the n-terminus, because in those cases a target protein/n-extein ligation product will be formed. ser is the n-terminal amino acid of thymosin-α1. it was recently found that some metal ions essentially affect the splicing. we tried to use zncl2 and we have found that, in the case of intein-thymosin-α1, the maximal yield of the target polypeptide and the minimal yield of splicing products are observed in the absence of dithiothreitol and in the presence of 0.5-1 mm zinc chloride in the buffers at all stages of thymosin-α1 isolation.
the structure of the recombinant human thymosin-α1 was confirmed by determination of the n- and c-terminal amino acid sequences and by maldi-tof mass spectrometry. we present a microbioreactor with thermoelectric cooling to inactivate cellular metabolism by freezing the cell culture. small-scale cultivation methods have gained increased importance, and their development has been supported by advances in bioprocess monitoring methods. yet efficient sampling methods for off-line analysis remain important where in-situ real-time measurements are difficult, for example for intracellular metabolite concentration or enzyme activity measurements, and for all methods which are invasive by nature, such as protein purification. freeze-stop measurements of metabolite levels are ideal because they are inert, i.e. they do not require the addition of a chemical. in large systems the chilling time is often the limiting factor, and alternative methods for cell metabolism inactivation are required, such as spraying the cell suspension into 60% methanol at a temperature of −40 °c, or the use of boiling buffered ethanol. due to the small thermal mass of microsystems, shorter chilling times can be expected. sample cooling to 4 °c in a microbioreactor has been presented previously. in this contribution, we investigate sample freezing to completely inactivate the cell metabolism in microbioreactor working volumes of approximately 100 µl. the use of multi-parameter flow cytometry for characterisation and monitoring of insect cell-baculovirus fermentations in a mechanically-agitated bioreactor bojan isailovic 1, alvin w. nienow 1, ian w. taylor 2, ryan hicks 2, christopher j. bacteria and mammalian cells have been traditionally used as hosts for commercial recombinant protein production. however, in recent years the insect cell-baculovirus system has emerged as a potentially attractive recombinant protein expression vehicle.
although flow cytometry has been used widely for the analysis of mammalian and microbial cells, there is very little information on applications of this powerful technique in insect cell culture. here we compared ratiometric cell counts and viability (propidium iodide and calcein am) of sf-21 cell cultures measured by flow cytometry with those determined by more traditional methods using a haemocytometer and the trypan blue exclusion dye. flow cytometry has also been used to monitor various parameters during cultures of sf-21 infected with the recombinant autographa californica nuclear polyhedrosis virus (acnpv) containing the inserted nucleic acid sequence amfp486 coding for the am-cyan coral protein, which emits natural green fluorescence. the optimization of a fermentation process requires the organism to be cultivated under desirable conditions, which depends on how well the fermentation process is controlled. inadequate mixing and mass transfer are responsible for the heterogeneous environment at large scale in terms of nutrient concentration and ph profile, resulting in lower product yields. these have been associated with inadequate control of ph and with the production of acetate or formate in response to over-feeding of glucose and oxygen deficiency. rapid analysis of substrates and indicator metabolites in a fermentation process is critical for optimal control. this can be achieved in real time with nir spectroscopy. in this study, nir spectroscopy has been applied to monitor the concentrations of glucose, acetate, formate, ammonium hydroxide and biomass in the cultivation of e. coli (w3110). a comparison of partial least squares models built using water standards, synthetic medium standards and fermentation samples has been made. the control of the brewing process is important for improving product quality and lowering costs.
infrared spectroscopy is a technique that can be used at-line and on-line to rapidly measure component concentrations of unprocessed whole broth samples in real time. in this study, both mid-infrared (mir) and near-infrared (nir) spectroscopy have been used and compared for the monitoring of ethanol, flavour components, wort sugars, biomass and specific gravity during brewing. partial least-squares regression (pls) was used to model the relationships between component concentrations and spectra. the performance of these models was evaluated in terms of the standard error of prediction (sep), the number of pls factors and the correlation coefficient (r). calibration models were constructed using spectra acquired for multi-component mixtures, intended to simulate brewing fermentation conditions, and for actual brewing fermentation samples. chemometric results indicated that nir is a powerful tool for accurately measuring sugar, ethanol and biomass concentrations. when choosing an expression host for production of a specific recombinant protein, one can essentially select from a multiplicity of different systems. while the e. coli bacterium is usually the starting point for any cloning and expression effort, there is no universal expression host that would work optimally for all proteins. practical issues to consider include, e.g., the need for post-translational modifications and the protease activity of the host. we have produced recombinant hiv-1 nef in different host cell systems: e. coli, the yeast p. pastoris and stable drosophila s2 insect cells. using strain/cell line development, production and purification data from practical experiments, we were able to conduct a techno-economical comparison of the different host cell systems. the annual production goal was set at 100 mg of high-purity nef. this was to be produced campaign-wise in 1-2 batches using laboratory-scale bioreactors and other equipment.
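the pls calibration models in the spectroscopic monitoring abstracts above are scored by the standard error of prediction (sep) and the correlation coefficient (r). a minimal numpy sketch of those two metrics follows; the concentration values are made-up illustration numbers, not the authors' data, and the sep here uses the common bias-corrected convention, which the abstracts do not spell out:

```python
import numpy as np

def sep(y_true, y_pred):
    """standard error of prediction: rms deviation of predictions
    from reference values after removing the mean bias."""
    residuals = np.asarray(y_pred) - np.asarray(y_true)
    bias = residuals.mean()
    return np.sqrt(np.sum((residuals - bias) ** 2) / (len(residuals) - 1))

def correlation(y_true, y_pred):
    """pearson correlation coefficient between reference and predicted values."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# illustration with fabricated ethanol concentrations (g/l)
reference = [1.0, 2.0, 3.0, 4.0, 5.0]
predicted = [1.1, 1.9, 3.2, 3.9, 5.1]
print(sep(reference, predicted))
print(correlation(reference, predicted))
```

a low sep combined with an r close to 1 is what the brewing study reads as evidence that the nir calibration is usable for sugar, ethanol and biomass.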
in this study it was shown that, although the production costs of the different systems were in the same range, production in e. coli was the least expensive and the s2 cell system was the most expensive. regardless of the selected host system, labour costs accounted for most of the expenses. when comparing the different stages of the work (strain/cell line development, bioreactor production and down-stream processing), strain development, the most labour-intensive stage, involved approximately half of the costs in every production system. although e. coli was the least expensive host system for producing nef, it has some definite disadvantages, e.g. the production of endotoxins and the inability to perform post-translational modifications. if these disadvantages are of importance, the production must be done using a more expensive system. modelling and control of industrial fermentation j.k. rasmussen, s.b. jørgensen capec, department of chemical engineering, technical university of denmark, dk-2800 lyngby, denmark. e-mail: jkr@kt.dtu.dk (j.k. rasmussen) fed-batch processes play a very important role in the chemical and biochemical industry. fermentations in the biochemical industry are most often carried out as fed-batch processes. present control schemes do not utilize the full potential of the production facilities and may fail to achieve uniform product quality and optimal productivity. the introduction of model-based control strategies is considered difficult because suitable models are not readily available. first-principles engineering models can be used, but the usually limited knowledge of the regulatory network in the micro-organism makes model development very time consuming. another strategy is to use a purely data-driven approach, where only limited prior knowledge of the process is required. a framework for the generation of such black-box models is used in this project.
this framework is called the "grid of linear models" (golm); it uses a large number of linear models, each of which describes the behaviour of the process within a certain time interval. the combination of these models results in a model which covers the entire time span of the fermentation and approximates the nonlinear, time-varying behaviour. a procedure for deriving golm models from operational data has been developed, and because they consist of a large set of linear models they are suitable for model predictive control implementations with iterative learning capabilities from batch to batch. iterative learning model predictive control (ilmpc) based on a golm model is being implemented on a fermentor at novozymes a/s. the results will be evaluated in terms of the controller's capability to ensure uniform product quality and to reject both intra- and inter-batch process disturbances. the model-based approach renders optimization of the process recipe possible by using the ilmpc capability. this opportunity provides a great potential for increasing productivity and reducing cost. j.m. viader-salvadó, j.a. fuentes-garibay, l.j. galán-wong, m. guerrero-olazarán institute of biotechnology, biological science school, autonomous university of nuevo león, san nicolás de los garza, n.l., méxico. the production of recombinant trypsin and trypsinogen has been reported to be difficult, probably due to toxicity towards the host or to its instability. in an effort to attain high-level production of shrimp trypsin for aquaculture applications, we have evaluated shrimp trypsinogen production by recombinant pichia pastoris strains in 5-l bioreactors. a p. pastoris gs115 mut+ strain containing in its genome the litopenaeus vannamei trypsinogen cdna fused in frame to the saccharomyces cerevisiae α-factor secretion signal, previously constructed in our laboratory, was used. four three-step fermentations (glycerol batch, then glycerol and methanol fed-batch) were carried out.
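the grid-of-linear-models idea behind golm, as described above, can be sketched as a set of local linear fits selected by time window. the data below are a toy sigmoid "biomass" trajectory, not novozymes process data, and the one-output, time-indexed form is a deliberate simplification of the multivariable golm framework:

```python
import numpy as np

def fit_golm(t, y, n_windows):
    """fit one linear model per time window; returns the window edges
    and a list of (slope, intercept) pairs."""
    edges = np.linspace(t.min(), t.max(), n_windows + 1)
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t <= hi)
        slope, intercept = np.polyfit(t[mask], y[mask], 1)
        models.append((slope, intercept))
    return edges, models

def predict_golm(t_query, edges, models):
    """evaluate the local linear model active at each query time."""
    idx = np.clip(np.searchsorted(edges, t_query, side="right") - 1,
                  0, len(models) - 1)
    return np.array([models[i][0] * tq + models[i][1]
                     for i, tq in zip(idx, t_query)])

# fabricated nonlinear batch trajectory, approximated by 4 local models
t = np.linspace(0.0, 10.0, 101)
y = 1.0 / (1.0 + np.exp(-(t - 5.0)))   # sigmoid "biomass" curve
edges, models = fit_golm(t, y, 4)
y_hat = predict_golm(t, edges, models)
```

the piecewise model tracks the nonlinear curve far better than a single global linear fit, which is the property that makes the golm representation attractive for linear model predictive control.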
the glycerol batch step (2 l of basal salts medium, 50 g/l glycerol, 8.8 ml biotin 0.02%, 8.8 ml ptm1, 250 µl antifoam 289, ph adjusted to 5 with 28% nh4oh) was carried out until glycerol was completely exhausted (21 h). the glycerol fed-batch step was carried out by feeding 50% glycerol (with 12 ml/l biotin 0.02% and ptm1) at 0.8 ml/min for 9 h, followed by 45 min of starvation. the methanol fed-batch step was carried out by feeding 100% methanol (with 12 ml/l biotin 0.02% and ptm1) for 133 h, using a methanol-concentration on/off feedback control to maintain the methanol concentration in the culture medium constant at 1 g/l. in all the fermentations the air flow rate and the agitation were set at 5 l/min and 800-1000 rpm, respectively. with the four fermentation assays, the influence of the ph and temperature of the production phase on recombinant shrimp trypsinogen production was evaluated. in the four fermentations, a biomass of 250 g/l wet weight (od600 230) was obtained at the end of the second step. surprisingly, the methanol demand in the four fermentations was not increasing: initially it was 0.37 ml/min, after 32 h it decreased 2.5-fold for 29 h, then increased to 0.49 ml/min for 23 h, and afterwards it was decreased manually to a constant value of 0.3 ml/min so that the dissolved oxygen would not fall below 20% (last 48 h). the total protein amount in the culture medium supernatant increased during the production step up to 1.6 g/l (assay at ph 6), 6.5 times more than in the worst assay (ph 3). recombinant shrimp trypsinogen production was confirmed by sds-page (about 500 mg/l), and trypsin enzymatic activity was detected using bapna as substrate after trypsinogen activation with shrimp hepatopancreas extract.
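the methanol on/off feedback control described in the trypsinogen fermentation above can be sketched as a simple bang-bang loop: the feed pump runs only while the measured concentration is below the 1 g/l set point. the process model and the feed/consumption numbers below are made-up illustration values, not the authors' bioreactor data:

```python
SETPOINT = 1.0      # g/l target methanol concentration
FEED_RATE = 0.05    # g/l added per time step while the pump is on (toy value)
CONSUMPTION = 0.03  # g/l consumed per time step by the culture (toy value)

def step(conc, pump_on):
    """advance the toy methanol mass balance by one time step."""
    if pump_on:
        conc += FEED_RATE
    return max(conc - CONSUMPTION, 0.0)

conc = 0.0
history = []
for _ in range(500):
    pump_on = conc < SETPOINT   # the on/off feedback decision
    conc = step(conc, pump_on)
    history.append(conc)
# after the start-up transient the concentration oscillates tightly
# around the set point, which is the behaviour an on/off controller gives
```

a proportional or pid controller would remove the residual oscillation, but for holding methanol near a coarse set point the on/off scheme is common because it only needs a threshold sensor and a switchable pump.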
large conformational change in giant dna induced by ascorbic acid: a novel scheme for its antioxidative activity yuko yoshikawa, emi sakai, yoshiko oda department of food and nutrition, nagoya bunri college, nagoya 451-0077, japan ascorbic acid is often regarded as an antioxidant in vivo, where it protects against dna damage by scavenging reactive oxygen species. in the present work, we show another potent scenario for the protective effect of ascorbic acid, through a significant structural change in giant dna. recently, we examined the effect of ascorbic acid on the higher-order structure of dna through single-molecule observation with fluorescence microscopy, and found that ascorbic acid generates a pearling structure in giant dna molecules, in which elongated and compact parts coexist along a molecular chain. the results of observations with atomic force microscopy indicate that the compact parts assume a loosely packed conformation. as an extension, here we study the protective effect against double-strand breaks by reactive oxygen at different concentrations of ascorbic acid, in relation to the change in the higher-order structure of giant dna. we have performed real-time observation of double-strand breaks in individual dna molecules by use of fluorescence microscopy. we have found that double-strand breaks are markedly prevented when ascorbic acid is present at over millimolar concentrations. such a protective effect of ascorbic acid corresponds well to the above-mentioned change in the higher-order structure of dna. it has been reported that human circulating immune cells, such as neutrophils, monocytes and lymphocytes, accumulate ascorbic acid at millimolar concentrations. therefore, it is expected that the ascorbic acid concentration that induces the large conformational change in dna may be of physiological significance. yoshikawa, y., et al., 2004. febs lett. 566, 39-42. yoshikawa, y., et al., 2003.
plant proteins are increasingly being used as an alternative to proteins from animal sources and substantially contribute to the human diet in several developing countries. there are many processes, both industrial and food-based, which employ protein hydrolysis, and hydrolytic products have a wide variety of applications, from industrial fermentation media to food additives. traditionally, proteins are hydrolysed by chemical means. acid hydrolysates of protein are used to produce food ingredients and flavour compounds. however, hydrolysis by chemical reagents produces potentially hazardous by-products, and these non-selective hydrolysis products cannot easily be defined. the use of enzymes allows selective hydrolysis of protein and thus produces a potentially safer and more defined material. the present investigation describes the effects of substrate, enzyme and hydrolysate concentration on the hydrolysis of corn gluten. the corn gluten was hydrolysed by a commercial protease preparation, neutrase. the protein hydrolysis reactions were carried out in 0.2 l of aqueous solution at a temperature of 50 °c and ph 7 and were monitored using the ph-stat method. the degree of hydrolysis (%) and the soluble protein concentration as functions of time were investigated using substrate concentrations of 1, 2, 4, 6, 8 and 10% (w/v); enzyme concentrations of 0.25, 0.4, 0.5, 0.6, 0.75 and 1% (v/v); and hydrolysate concentrations of 25, 50, 75 and 100% (v/v). this paper describes medium optimization for urease production by aspergillus niger ptcc 5011 by one-factor-at-a-time and orthogonal array design methods. the one-factor-at-a-time method was used to study the effects of carbon and nitrogen sources on urease production. among the various carbon and nitrogen sources used, sucrose and yeast extract were the most suitable for urease production, respectively. subsequently, the concentrations of sucrose, yeast extract and mineral sources were optimized using the orthogonal array method in two stages.
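ph-stat monitoring of a hydrolysis, as used for the corn gluten above, is commonly converted to a degree of hydrolysis with the adler-nissen expression dh = b·nb / (α·mp·h_tot) · 100. the abstract does not give its exact equation or parameter values, so the following is a generic sketch with illustrative numbers only:

```python
def degree_of_hydrolysis(base_ml, base_normality, alpha, protein_g, h_tot):
    """ph-stat degree of hydrolysis (%), adler-nissen convention.
    base_ml        volume of base consumed to hold the ph (ml)
    base_normality normality of the titrant base (eq/l)
    alpha          average dissociation degree of the alpha-amino groups
                   at the working ph and temperature
    protein_g      mass of protein substrate (g)
    h_tot          total number of peptide bonds (meq/g protein)
    """
    b_meq = base_ml / 1000.0 * base_normality * 1000.0  # base consumed, meq
    return b_meq / (alpha * protein_g * h_tot) * 100.0

# illustrative numbers, not taken from the abstract
dh = degree_of_hydrolysis(base_ml=2.0, base_normality=1.0,
                          alpha=0.44, protein_g=10.0, h_tot=9.2)
```

because α depends strongly on ph and temperature, the 50 °c / ph 7 operating point chosen above also fixes the calibration constant that turns base consumption into dh.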
the effects of the nutritional components on urease production by a. niger ptcc 5011 in the first and second stages were in the order urea > niso4 > sucrose > kh2po4 > k2hpo4 > cacl2 > yeast extract > mgso4 and yeast extract > sucrose > k2hpo4 > kh2po4 > urea > cacl2 > niso4, respectively. the optimal concentrations of the nutritional components for improved urease production were determined as 20 g/l sucrose, 0.85 g/l urea, 3.4 g/l yeast extract, 0.03 g/l niso4·6h2o, 0.5 g/l mgso4·7h2o, 0.04 g/l cacl2, 0.35 g/l kh2po4, and 0.35 g/l k2hpo4. these results showed that urea, niso4, yeast extract and sucrose had a significant effect on urease production by a. niger ptcc 5011. tween 80 and mgso4 had a negligible effect on urease production by this strain. the subsequent confirmation experiments established the validity of the models. the maximum urease activities in the media optimized by the one-factor-at-a-time and orthogonal array methods were about 1.14 and 2.74 times greater than in the basal medium, respectively. carbon sources create fingerprint fermentation characteristics pınar çalık 2, güzide çalık 1, tunçer h. özdamar 1: 1 bre lab, department of chemical engineering, ankara university, 06100 ankara, turkey; 2 ib lab, department of chemical engineering, metu, 06531 ankara, turkey. e-mail: calik@eng.ankara.edu.tr (g. çalık) this work reports a systematic investigation of the interactions between single carbon sources, i.e. glucose and citric acid, and complex-medium components, i.e. carbon and nitrogen sources, in enzyme fermentation processes, namely serine alkaline protease (sap; ec 3.4.21.62) and β-lactamase (ec 3.5.2.6), under defined oxygen-transfer and ph conditions, in order to demonstrate how carbon sources create the fingerprint fermentation characteristics, as well as their influence on product and by-product formation and on the intracellular reaction rates. the influence of the medium composition, i.e.
citric acid-, glucose-, molasses- and soybean-based media, together with the oxygen transfer (ot) and ph conditions applied, on the product and by-product distributions and on the ot characteristics was investigated in batch bioreactors. for sap, in general, under uncontrolled-ph operation the medium ph tends to increase in the sap production phase; however, depending on the carbon source used, its behaviour changes in the early stages of the fermentation as a consequence of the directed functioning of the intracellular bioreaction network. the loci of the dissolved oxygen (do) curves also depend strongly on the carbon source(s) utilised, in addition to the applied ot conditions. the complex-media profiles are significantly different from those of the defined media, as the ph and do profiles are interrelated owing to the bioreactor operation conditions affecting the metabolic reaction network. the highest volumetric oxygen uptake rates were obtained with the soybean-based medium and were ca. three-fold higher than the values reported for the citrate- and glucose-based media, and ca. 1.5-2-fold higher than the values reported for the molasses-based medium. the significant changes, and in particular the drastic change observed with the soybean-based complex medium, are due to the compositions of the fermentation media used and their influence on the intracellular bioreaction network. thus, we conclude that a change in medium composition based on the carbon source changes the fermentation characteristics under the designed bioreactor operation conditions, and these appear as the fingerprints of the bioprocess. effects of oxygen transfer on α-amylase production by b. amyloliquefaciens nurhan güngör, güzide çalık bre lab, department of chemical engineering, ankara university, 06100 ankara, turkey α-amylase (e.c.
3.2.1.1), a commercially important enzyme used in the food, textile, detergent, brewing, paper, and animal feed industries, hydrolyses α-1,4 glucosidic bonds in amylose, amylopectin, and related polysaccharides. the optimum medium composition and the influence of the bioreactor operation parameters on α-amylase production with high yield and selectivity were determined, together with the metabolic flux distributions, using b. amyloliquefaciens (nrrl b-14396), which was found to be a good producer of the enzyme. a systematic investigation of oxygen transfer in relation to the metabolic fluxes for α-amylase is not available in the literature. shake-flask experiments were conducted in 0.5 dm3 air-filtered bioreactors in orbital shakers with agitation and heating controls (b. braun, certomat-bs1). the laboratory-scale bioreactors were agitation-, heating-, foam-, dissolved-oxygen- and ph-controlled 1.0 and 3.5 dm3 systems (b. braun, biostat q; and chemap). after separation of the cells with a sorvall rc28s ultracentrifuge, α-amylase activity was measured by the dns method (bernfeld, 1955). amino acid concentrations were determined with an hplc (waters); protein and organic acid concentrations were measured with an hpce (waters, quanta 4000e) (çalık et al., 1998). the oxygen transfer characteristics in the bioreactor systems were calculated using the dynamic method (rainer, 1990). in the mass flux balance-based analyses, a pseudo-steady-state approximation for the intracellular metabolites and the accumulation rates of the extracellular metabolites measured throughout the fermentations, in consideration of the biochemical features of the system, were used to acquire the flux distributions.
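the mass flux balance-based analysis described above rests on the pseudo-steady-state assumption that intracellular metabolite pools do not accumulate, so the stoichiometric balances s·v = 0 plus the measured extracellular rates determine the unknown fluxes. a minimal numpy sketch on a toy three-metabolite network follows; it is not the authors' stoichiometric model, only an illustration of the calculation:

```python
import numpy as np

# toy network:  a_ext -v1-> a -v2-> b -v4-> b_ext
#                           a -v3-> c -v5-> c_ext
# rows: internal metabolites a, b, c; columns: fluxes v1..v5
S = np.array([[1, -1, -1,  0,  0],   # a balance
              [0,  1,  0, -1,  0],   # b balance
              [0,  0,  1,  0, -1]])  # c balance

# measured extracellular accumulation rates fix the exchange fluxes
measured = {0: 10.0, 3: 6.0, 4: 4.0}   # v1 (uptake), v4, v5 (secretion)

free = [j for j in range(S.shape[1]) if j not in measured]
# move the known-flux columns to the right-hand side:
#   S_free @ v_free = -S_meas @ v_meas
rhs = -sum(S[:, j] * rate for j, rate in measured.items())
v_free, *_ = np.linalg.lstsq(S[:, free].astype(float), rhs, rcond=None)
# the unknown intracellular fluxes v2 and v3 come out consistent with
# the measured secretion rates
```

real models (such as the 102-metabolite, 133-flux network in the bal abstract below) are underdetermined and need least-squares or linear-programming treatment, but the balance structure is the same.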
at laboratory scale, the effects of different c sources (i.e. glucose, fructose, maltose, lactose and soluble starch), n sources (i.e. (nh4)2hpo4, (nh4)2so4 and nh4cl) and/or their concentrations, and the operation parameters ph and temperature on cell growth, substrate utilization, α-amylase and by-product concentrations, and α-amylase activity were investigated. at pilot scale, the fermentation and oxygen transfer characteristics of the bioreaction system, together with the metabolic fluxes, were determined. oxygen transfer regulates benzaldehyde lyase production in e. coli pınar çalık ib lab, department of chemical engineering, metu, 06531 ankara, turkey. e-mail: pcalik@metu.edu.tr the effects of the oxygen transfer rate on benzaldehyde lyase (bal) production by puc18::bal-carrying recombinant e. coli on a defined medium with 8.0 kg/m3 glucose were investigated, in order to fine-tune the bioreactor performance, in v = 3 dm3 batch bioreactors at five different conditions, with the parameters q0/vr = 0.5 vvm and n = 250, 375, 500 and 750 min−1, and q0/vr = 0.7 vvm and n = 750 min−1. the concentrations of the product and of the by-product amino acids and organic acids were determined, in addition to the bal activities. medium oxygen transfer rate conditions and uncontrolled-ph operation at ph0 = 7.25 are optimum for maximum bal activity, i.e. 860 u/cm3 at 12 h, and for productivity and selectivity. on the basis of these data, the response of the intracellular bioreaction network of the recombinant e. coli to the oxygen transfer conditions was analysed using a mass-flux-balance-based stoichiometric model that contains 102 metabolites and 133 reaction fluxes. the results reveal that the metabolic reactions are intimately coupled with the oxygen transfer conditions. the oxygen transfer rate showed diverse effects on product formation by influencing metabolic pathways and changing metabolic fluxes.
metabolic flux analysis was helpful in describing the interactions between the cell and the bioreactor by predicting the changes in the fluxes and the rate-controlling step(s) in the metabolic pathways. therefore, knowing the distribution of the metabolic fluxes during growth and during bal and by-product formation provides new information for understanding the physiological characteristics of the recombinant e. coli, reveals important features of the regulation of the bioprocess and opens new avenues for the successful application of metabolic engineering. the detection and diagnosis of pathogenic bacteria causing many diseases of the human body is an area of research important to public welfare. food is the most important energy source for humans, but it can give rise to disease caused by pathogenic bacteria if adequate detection tests are not performed. oligonucleotide-based microarrays are becoming increasingly useful for the analysis of expression profiles and polymorphisms among genes of interest. here, we examined the possibility of developing oligonucleotide-based microarrays for the detection and diagnosis of foodborne pathogenic bacteria. the oligonucleotide chip technology was applied to one control strain and seven foodborne pathogenic bacteria strains. repeated spots of eight hyperspecific and two highly conserved (control) capture probes from 16s rdna sequences were designed. in order to validate the experimental quality and to verify the specificities of particular spots at a glance in 2d and 3d views, a quantitative visualization tool was developed. using the proposed oligonucleotide chip, we could classify and diagnose species and even subtypes of some pathogens.
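capture-probe specificity of the kind exploited by the 16s rdna chip above can be illustrated with a simple mismatch count: a species-specific probe should align with zero mismatches to its target sequence and with several mismatches to every other species. the probe and sequences below are fabricated for illustration, not the actual 16s rdna probes:

```python
def mismatches(probe, target):
    """smallest number of mismatches when the capture probe is aligned
    to any equal-length window of a target sequence; low scores indicate
    where the probe could hybridize."""
    return min(sum(p != t for p, t in zip(probe, target[i:i + len(probe)]))
               for i in range(len(target) - len(probe) + 1))

# fabricated probe and target fragments
probe = "ACGTTGCA"
target_species = "TTACGTTGCAGG"   # contains a perfect match
other_species  = "TTACGATGTAGG"   # two mismatches at the best alignment
print(mismatches(probe, target_species))
print(mismatches(probe, other_species))
```

real probe design additionally screens melting temperature and secondary structure, but the mismatch criterion is what separates the "hyperspecific" probes from the conserved control probes on the chip.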
induction of in vitro neuro-muscular junctions using neuroblastoma and fibroblast cell lines for facilitating receptor-binding studies with botulinum toxin arindam chaudhury, bal ram singh department of chemistry and biochemistry, university of massachusetts dartmouth, 285 old westport road, north dartmouth, ma 02747-2300, usa. e-mail: g achaudhury@umassd.edu (a. chaudhury) botulinum toxin (bont), the most potent biological toxin known, is responsible for botulism, a fatal paralytic disease of neuromuscular transmission. it blocks the release of acetylcholine at the neurotransmitter junction of the synapse. the objective of the current study was to induce in vitro neuro-muscular junctions through co-culturing of nerve and precursor-muscle cell lines. presently, no primary cultures or cell lines are available for nerve-muscle co-culture, which motivates the current work. the j2-3t3 fibroblast cell line was first adapted to grow in media conducive to the growth of the sh-sy5y neuroblastoma cell line. the two cell lines were then split, co-cultured and observed for junction formation. light and fluorescence microscopy revealed en plaque (twitch-type) and en grappe (bulbous nerve ending) nerve-muscle junctions. the growth rate of the j2-3t3 cells decreased substantially when the media was initially changed. structurally, they were more spindle-shaped than the normal reticular shapes of j2-3t3 cells grown in a medium tailor-made for them. the formation of nerve-muscle junctions was confirmed using markers specific for each cell type. future work is focusing on receptor identification for the botulinum toxin in the established in vitro neuro-muscular junctions, and also on the transcellular translocation of the toxin. fourier-transform infrared (ftir) spectrometers have recently enjoyed widespread popularity in bioprocess monitoring applications due to their non-invasiveness and in-situ sterilizability.
their on-line applicability creates an interesting opportunity for process control and optimization. however, the precision and accuracy of the predicted analyte concentration values depend directly on the quality of the measured signal and the robustness of the calibration model. instability and time drift in the measured spectra are currently among the main obstacles in ftir monitoring. the intensity of the detected signal is influenced both by random noise and by structural drifts and offsets. as a result, it is often necessary to scale the measured spectrum with respect to a constant reference spectrum, a technique similar to the internal-standard approach used in analytical assays such as hplc. applying this technique has led to a noticeable decrease in the standard error of prediction in the monitoring of an anaerobic fermentation of the yeast saccharomyces cerevisiae. in order to test the robustness of the calibration model and to increase its resistance to signal instability, random spikes of known amounts of analytes were introduced into the measured medium. this approach can be used to fine-tune the calibration model on-line and is currently one of the aspects investigated in this laboratory. the effect of stringent response induction on l-valine biosynthesis by corynebacterium glutamicum ilze denina 1,2, longina paegle 2, liga zala 1,2, maija ruklisha 2: 1 faculty of biology, university of latvia, kronvalda blvd. 4, riga lv-1586, latvia; 2 institute of microbiology and biotechnology, university of latvia, kronvalda blvd. 4, riga lv-1586, latvia. e-mail: ilzede@hotmail.com (i. denina) the present study focused on methods of stringent response induction and on investigation of its effect on valine overproduction by isoleucine auxotrophs of corynebacterium glutamicum.
the intracellular level of guanosine 5′-diphosphate 3′-diphosphate (ppgpp) increased and the bacterial growth rate (µ) decreased during short-term experiments in which exponentially growing cells were exposed to isoleucine-limited conditions. the induction of the cellular stringent response resulted in a drastic increase in the activity of acetohydroxy acid synthase (ahas), accompanied by a significant increase in valine production. in contrast, an increase in ahas activity and valine synthesis by c. glutamicum was not achieved when bacterial growth was down-regulated in a ppgpp-independent manner. these results demonstrate that induction of the ppgpp-mediated stringent response may be a significant means of increasing valine overproduction by c. glutamicum. infections with human cytomegalovirus (hcmv) continue to be an important health problem in certain patient populations, such as newborns, recipients of solid-organ or bone marrow grafts, and aids patients. in these groups, hcmv is a major cause of morbidity and mortality. the complex biology of hcmv necessarily begins with an initial interaction between the envelope of the infectious virus and the host cell. glycoprotein b (gb) is the major antigen on the envelope of hcmv for the induction of neutralizing antibodies. the region between aa 552 and 635 of hcmv gb (termed antigenic domain 1, ad-1) has been identified as the immunodominant target of the humoral immune response following natural infection. screening methods for the detection of neutralizing antibodies have not been used because they are costly and labor-intensive and thus far are not feasible for use on a large scale. for the development of reliable and inexpensive serodiagnostic tests, the ad-1 of hcmv glycoprotein gp58, which is known to bind neutralizing antibodies, was expressed in prokaryotic systems. in this work, one prokaryotically expressed fusion protein, encoding ad-1 fused with β-galactosidase, was used.
the influence of different process conditions on the production of the fusion protein containing ad-1, as well as of sugar addition to the fermentation medium, was investigated. in order to analyze the expression of the fusion protein, the β-galactosidase activity was followed throughout the fermentation. the lysis process was also optimized, and some final confirmation tests of protein antigenicity were performed. polyenzymic systems for the preparation of drugs based on modified nucleosides d. chuvikovsky, r. esipov, t. muravyova, a. miroshnikov shemyakin-ovchinnikov institute of bioorganic chemistry, russian academy of sciences, ul. miklukho-maklaya 16/10, moscow 117997, russia. e-mail: esipov@ibch.ru (r. esipov) considerable progress in the preparation of nucleoside analogues has been achieved by the combination of chemical and biochemical transformations. enzyme-catalyzed chemical transformation is now widely recognized as a practical alternative to traditional organic synthesis in the pharmaceutical and chemical industries. the pentofuranosyl transfer reaction catalyzed by nucleoside phosphorylases has been successfully employed for the synthesis of a variety of base- and sugar-modified nucleosides. enzymes involved in the metabolism of ribose phosphate and deoxyribose phosphate, such as ribokinase and phosphopentomutase, have been used for the preparation of sugar-modified nucleosides. nucleoside phosphorylases (thymidine phosphorylase (tp), uridine phosphorylase (up) and purine nucleoside phosphorylase (pnp)), ribokinase and phosphopentomutase from e. coli have been cloned and overexpressed in e. coli. fast and efficient methods for the purification of the nucleoside phosphorylases have been developed. the amount of purified protein was about 140 mg/l of cell culture, corresponding to 6300, 16,800 and 4200 units of the up, tp and pnp, respectively.
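the purification yields quoted above (about 140 mg of protein per litre of culture, with 6300, 16,800 and 4200 u for up, tp and pnp) translate into specific activities if one assumes, as a simplification the abstract does not state, that the 140 mg/l figure applies to each enzyme preparation:

```python
# specific activity (u/mg) = units recovered per litre / mg of purified
# protein per litre; the shared 140 mg/l figure is an assumption here
protein_mg_per_l = 140.0
units_per_l = {"up": 6300.0, "tp": 16800.0, "pnp": 4200.0}

specific_activity = {enzyme: u / protein_mg_per_l
                     for enzyme, u in units_per_l.items()}
# under that assumption: up 45 u/mg, tp 120 u/mg, pnp 30 u/mg
```

specific activity is the figure of merit for dosing the enzymes in the transglycosylation reactions described next, since it sets how much protein must be added to reach a given catalytic load.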
the synthesis of the medicinal drugs ribavirin (1-(β-d-ribofuranosyl)-1,2,4-triazole-3-carboxamide), cladribine (2-chloroadenine-9-β-d-2′-deoxyribofuranoside) and fludarabine (2-fluoroadenine-9-β-d-arabinofuranoside) using the recombinant enzymes was studied. several important factors affecting modified-nucleoside production (ph, temperature, enzyme concentration, donor/acceptor ratio) were investigated and optimized. under optimum conditions, ribavirin, cladribine and fludarabine were produced in the reaction mixture in yields of 84, 85 and 80%, relative to 1,2,4-triazole-3-carboxamide, 2-chloroadenosine and 2-fluoroadenosine, respectively. aggregation and adsorption of fibroin molecules in aqueous solution won hur school of biotechnology and bioengineering, kangwon national university, chunchon 200-701, korea. e-mail: wonhur@kangwon.ac.kr fibroin, the structural protein from bombyx mori, is composed of heavy-chain (generally called 'fibroin') and light-chain polypeptides of about 370 and 25 kda, respectively. this study investigated the aggregation of fibroin and the adsorption of fibroin onto surfaces. the variations of particle size and zeta potential were investigated by electrophoretic light scattering spectrophotometry (els). the adsorption of fibroin onto a surface was investigated in a continuous flow system with the biacore surface plasmon resonance (spr) technique. the particle size and zeta potential of aqueous fibroin were about 140 nm and ±20 mv, respectively. the isoelectric point (ph_iep) of fibroin was 4.29. the amount of fibroin adsorbed on a bare gold surface was less than 0.1 µg/ml even in the presence of a high concentration of fibroin. the modification of the gold surface was accomplished by applying chemicals known to form self-assembled monolayers, carrying nh3+, coo−, benzene rings, or a peptide with a structure similar to fibroin.
the adsorbed amount of fibroin on the self-assembled monolayers (sams) increased in the following order: nh3+ > benzene ring > coo− > peptide surface. the deposition of fibroin from aqueous solution on non-woven fabric was affected by nacl and high temperature. ph influences metabolite profiling of β-lactamase producing b. licheniformis n. i̇leri 1 , pınar çalık 1 , ali şengül 2 : 1 ib lab, department of chemical engineering, metu, 06531 ankara, turkey; 2 gülhane sch med, dept immunol, 06018 ankara, turkey. e-mail: e115715@metu.edu.tr (n. i̇leri) ph conditions in the bioreactor affect product and by-product formation by influencing metabolic pathways and changing metabolic fluxes, through effects on, e.g., dna transcription, protein synthesis, transport mechanisms, atp generation and cellular energetics. consequently, some fermentation processes favour uncontrolled-ph conditions while others favour controlled-ph conditions. depending on the interactions between the cell and the bioreactor in a process carried out at either uncontrolled- or controlled-ph conditions, the intracellular ph can be widely different and variable during the fermentation. consequently, if one aims at a quantitative understanding of cell metabolism, one has to take into account the time variations of the intracellular ph and its effects on the in-vivo kinetics of the metabolic steps involved. moreover, since the presence of dormant or dead cells in the cultivation medium has a negative effect on the synthesis of the desired product, the physiological state of the culture is of great importance. in this context, the effects of ph on the regulation of intracellular ph, transport mechanisms, and the metabolic activity of b. licheniformis during production of β-lactamase (ec 3.5.2.6), an industrial enzyme catalyzing the hydrolysis of the beta-lactam ring in beta-lactam antibiotics, were investigated.
in addition, the physiological state of the organism and its effect on production were observed. the optimal controlled-ph operation was found to be ph 6.75, with 54 u/cm3 enzyme activity. the intracellular and extracellular na+ and k+ ion concentrations increased significantly throughout the process with increasing ph. on the other hand, the intracellular nh4+ ion concentration was relatively constant. isolation and characterization of angiotensin-i converting enzyme inhibitory peptides by use of an anti-peptide antibody fida hasan 1 , megumi kitagawa 2 , yoichi kumada 1 , naoya hashimoto 1 , masami shiiba 2 , shigeo katoh 1 , masaaki terashima 2 : 1 graduate school of science and technology, kobe university, nada, kobe 657-8501, japan; 2 department of human science, kobe college, okadayama, nishinomiya 662-8505, japan inhibitory peptides against angiotensin-i converting enzyme (ace) can be promising bio-functional peptides, serving as natural alternatives to non-peptide ace inhibitory drugs. these peptides are inactive within the sequences of the parent proteins and can be released during enzymatic digestion or food processing. immunointeraction is very effective for the purification of proteins and peptides with high purity. in this study, ace inhibitory peptides from a hydrolysate of bonito meat were isolated by an anti-peptide antibody column and hplc, and the kinetics of production of these ace inhibitory peptides was studied. an anti-peptide antibody against an ace inhibitory peptide from tuna, found by kohama et al., was obtained by immunization with the antigen peptide pc-iace (kkpthikwgd). water extract of bonito meat was digested at 37 °c in a modified gastric juice, 1.76 mg/ml nacl containing 312 g/ml pepsin (ph 2). peptides in the hydrolysates were purified by use of an affinity column coupled with the anti-peptide antibody and separated by hplc equipped with a reverse-phase column (cosmosil 5c18-ms-ii, 4.6 mm × 150 mm).
amino acid sequences and ic50 values of the potent ace inhibitory peptides were determined. sds-page and western blotting experiments clarified that bonito protein contained peptides with sequences similar to the antigen peptide. a fraction with retention time 18-28 min in the hplc purification showed high inhibitory activity, and several peptides in this fraction were separated. after 48 h digestion, two major inhibitory peptides, herdpthikwgd and pthikwgd, were found to be relatively stable in the gastric juice. kluyveromyces marxianus physiology at several levels of carbon and nitrogen sources and oxygenation during inulinase production silva-santisteban yépez, o. bernardo, francisco maugeri department of food eng./unicamp, 13081-970, campinas, sp, brazil. e-mail: maugeri@fea.unicamp.br (f. maugeri) inulinase produced by yeasts is an interesting alternative to that produced by filamentous molds, as culture conditions can be better controlled. during the assays, it was observed that inulinase production levels varied with nutritional conditions in batch culture. the kluyveromyces marxianus atcc 16045 culture is described by two main phases: a first, growth phase, in which substrate consumption and basal inulinase production occur, and a second phase in which some metabolites are taken up and high inulinase production is observed. metabolic flux analysis was used to describe the cell physiology in the first phase under a variety of sucrose and ammonium sulfate concentrations and aeration conditions. the metabolic network included the main metabolic pathways such as glycolysis, the pentose phosphate pathway, the krebs cycle, oxidative phosphorylation and biomass biosynthesis. the physiology in this phase was correlated with high inulinase production in the second phase. it was also noticed that inulinase production diminished when sucrose was at high concentration, leading, additionally, to ethanol production.
in these terms, a kind of crabtree effect was revealed in this strain. forward extraction of l-aspartic acid from fermentation broths by reverse micelles and backward extraction by gas hydrate method ö. aydogan 1 , e. bayraktar 1 , ü. mehmetoglu 1 , m. parlaktuna 2 , t. mehmetoglu 2 : 1 department of chemical engineering, faculty of engineering, ankara university, tandogan, ankara 06100, turkey; 2 department of petroleum and natural gas engineering, middle east technical university, ankara 06531, turkey. e-mail: mehmet@eng.ankara.edu.tr (ü. mehmetoglu) recently, the gas hydrate method has been applied as a technique for backward extraction of amino acids from reverse micelle systems. in this study, backward extraction of l-aspartic acid by the gas hydrate method was investigated. in the first stage, production of l-aspartic acid was carried out using 50 ml of 0.5 m ammonium fumarate (ph 9.5) as substrate at 37 °c in an orbital shaker at 150 rpm for 4 h, with e. coli (atcc 11303) as biocatalyst. at the end of the reaction, excess fumaric acid was extracted into the reverse micelle phase. forward extraction of l-aspartic acid into the reverse micelle phase was then carried out by the injection method. for back extraction, co2 was used to form a crystalline gas hydrate structure. back extraction experiments were carried out between 35 and 37 bar (gauge) and at 2 °c. at the end of the back extraction, l-aspartic acid was obtained in crystalline form. the results indicate that recovery of l-aspartic acid from reverse micelles by forming gas hydrate can be achieved with a yield of 55.3%. consequently, the gas hydrate method can be used as a new technique for backward extraction of amino acids from reverse micelles.
aerobic and anaerobic cultivations of aspergillus niger on different nitrogen sources susan meijer, gianni panagiotou, lisbeth olsson and jens nielsen center for microbial biotechnology, biocentrum-dtu, technical university of denmark, dk-2800 lyngby, denmark in this study, we aim at creating a succinic acid producing strain of a. niger. a. niger is known to be a strictly aerobic organism, meaning it is not able to use the reductive part of the tca cycle to produce succinate. during aerobic growth a. niger uses oxygen as electron acceptor in the respiratory chain, thereby reoxidizing the produced nadh to nad+. however, under anaerobic conditions compounds other than oxygen, such as no3−, are required as electron acceptor (denitrification). this process consists of no3− reduction to nh4+ coupled to substrate-level phosphorylation that supports growth under anaerobic conditions. in the present study, our aim was to investigate the effect of different nitrogen sources on the physiology of a. niger during growth under aerobic and anaerobic conditions. aerobic growth experiments on three different nitrogen sources (ammonium, nitrate and nitrite) showed that ammonium and nitrate could be consumed by the filamentous fungus. nitrite, on the other hand, could not support growth, indicating the absence of a nitrite uptake system. under anaerobic conditions, however, notable growth was only observed on nitrate. these data support the hypothesis of the existence of an alternative electron acceptor that might facilitate anaerobic growth of a. niger. among the therapeutic proteins derived from mammalian cells, recombinant antibodies have received a great deal of attention as a prominent product moving through biotech pipelines toward the marketplace. they now account for about 25% of the medicines estimated to be in clinical development, and many more antibodies expected to lead the value of the market going forward are being reported.
there are various environmental factors affecting rcho cell cultures, such as medium components, temperature, ph, and byproducts (e.g. ammonia and lactate). because most mammalian cells are very sensitive to environmental changes, appropriate control of environmental parameters is a very important consideration for enhancing cell growth and production of target proteins. balanced addition of limiting medium components plays an essential role in improving cell density and product concentration. temperature and ph are easily adjustable process parameters and have been reported to influence cell growth and recombinant protein production. ammonia and lactate are well-known byproducts which have an inhibitory effect on cell growth when their concentrations exceed a specific level. in this work, the effects of various environmental factors including temperature, ph, amino acids, vitamins, hormones, and metabolic byproducts on cell growth and recombinant antibody production were investigated in the cultivation of recombinant chinese hamster ovary (rcho) cells. the most suitable value of each environmental parameter was proposed for enhancement of cell growth and recombinant antibody productivity. the present study was carried out in order to assess the protective effects of calycosin-7-o-β-d-glucopyranoside isolated from astragali radix (ar) against hyaluronidase (haase)- and recombinant human interleukin-1β (il-1β)-induced matrix degradation in human articular cartilage and chondrocytes. we isolated the active component from the n-butanol soluble fraction of ar (arbu) as a haase inhibitor and structurally identified it as calycosin-7-o-β-d-glucopyranoside by lc-ms, ir, 1h nmr, and 13c nmr analyses. the protective effect of arbu on matrix gene expression in the immortalized chondrocyte cell line c-28/i2 treated with haase was investigated using reverse transcription polymerase chain reaction (rt-pcr).
its effect on haase- and il-1β-induced matrix degradation in human articular cartilage was determined by using a staining method and calculating the amount of degraded glycosaminoglycan (gag) in the culture media. pretreatment with calycosin-7-o-β-d-glucopyranoside effectively protected against matrix degradation of the human chondrocytes and articular cartilage. therefore, it would appear that calycosin-7-o-β-d-glucopyranoside from ar is a potential natural anti-inflammatory or anti-osteoarthritis agent and can be effectively used to protect against proteoglycan (pg) degradation. catechol-o-methyltransferase (comt) is an enzyme that acts on a variety of endogenous and exogenous catechol substrates by transferring a methyl group from s-adenosylmethionine (sam) to either the meta- or the para-hydroxyl group of the catechol ring. the enzyme has a physiological role in the metabolism of the catechol estrogens, inactivation of catecholamine neurotransmitters such as dopamine and epinephrine, and detoxification of a variety of xenobiotic catechols. comt activity has been identified in various tissues; with the developments in molecular biology and gene technology, however, the production and purification of large amounts of recombinant comt is a good option for biochemical, pharmacological and structural studies. in this work, cultures of recombinant e. coli harbouring a model plasmid were grown in a 500 ml shake flask containing 125 ml of complex medium. the influence of medium composition and induction time on comt production, recovery and clarification by sonication, ammonium sulphate precipitation, and purification by hydrophobic interaction chromatography on a butyl-sepharose column will be presented and discussed.
bioactive bacterial exopolysaccharides corinne sinquin 1 , karim senni 2 , jacqueline ratiskol 1 , farida guéniche 2 , jean guézennec 1 , gaston godeau 2 , sylvia colliec-jouault 1 : 1 ifremer, 44311 nantes cedex 3, france; 2 ea2496 université rené descartes, 92120 montrouge, france. e-mail: corinne.sinquin@ifremer.fr (c. sinquin) interest in the mass culture of microorganisms from the marine environment has increased considerably, representing an innovative approach to the biotechnological use of under-exploited resources. marine bacteria associated with deep-sea hydrothermal conditions have demonstrated their ability to produce, in an aerobic carbohydrate-based medium, unusual extracellular polymers (guezennec, 2002; colliec-jouault et al., 2004). these exopolysaccharides (eps) present original structural features that can be modified to design innovative bioactive compounds and improve their specificity. with the aim of promoting biological activities, chemical modifications (depolymerization and substitution reactions) of one eps produced by vibrio diabolicus have been undertaken (raguenes et al., 1997). the structure of the native eps has been described (rougeaux et al., 1999). the potential of the eps derivatives as therapeutic agents will be presented. physiological responses of e. coli to glucose and oxygen shifts in fed-batch fermentations jaakko soini 1 , christina saarimaa 1 , arne matzen 2 , peter neubauer 1 : 1 bioprocess engineering laboratory, department of process and environmental engineering, university of oulu, oulu fi-90014, finland; 2 sanofi-aventis, germany. e-mail: jaakko.soini@oulu.fi (j. soini) in high-cell-density fermentations, e. coli cells are often subject to transient changes in the microenvironment around them. these changes can be, for example, medium component gradients or differences in oxygen availability. we have studied the physiological response of e. coli w3110 cells to simultaneous oxygen limitation and overfeeding of glucose.
the aim is to obtain more information about the physiological changes, for a better understanding of the bottlenecks in such processes. the response of the cells to glucose and oxygen shifts was studied by analyzing key metabolites and protein and mrna transcript levels. the transcript levels were measured using a sandwich hybridization technique (rautio et al., 2003); proteomic analysis was carried out by 2d electrophoresis and the metabolite analysis by hplc. the main focus of this study is to monitor the expression patterns of marker genes involved in mixed acid fermentation, the glycolytic pathway and the tricarboxylic acid cycle. rautio et al., 2003. sandwich hybridisation assay for quantitative detection of yeast rnas in crude cell lysates. microb. cell fact. influence of ner on genetic instability of (ctg/cag) tracts in the bacterial chromosome sylwia szwarocka 1 , paweł parniewski 2 : 1 department of microbiology and immunology, university of łódź, 90-237 łódź, banacha 12/16, poland; 2 centre for medical biology, polish academy of sciences, 93-232 łódź, lodowa 106, poland many human hereditary neurological diseases, including fragile x syndrome, myotonic dystrophy and friedreich's ataxia, are associated with expansions of triplet repeat sequences (trs) (cgg/ccg, ctg/cag and gaa/ttc) in or near specific genes. mechanisms that mediate the expansions and deletions of trs include dna replication, repair and recombination. many investigations suggest that the structural properties of the trs play a consequential role in their genetic instabilities. nucleotide excision repair (ner) is the major repair system in both prokaryotes and eukaryotes and recognises damage due to distortion of the dna helix. involvement of ner in the repair of hairpin loops that can form within ctg tracts has been reported. the participation of this repair system in trs instability has previously been investigated in e. coli only on multicopy plasmids.
the results showed that deficiency of some ner functions dramatically affects the stability of long (ctg/cag) inserts. in this work we present a chromosomal model to study the instability of the trs in e. coli. we introduced (ctg/cag)n tracts into the chromosome of e. coli, used strains with deficiencies in ner, and investigated the genetic stability of these tracts after multiple recultivations. in general, our results show that the (ctg/cag)n repeats are much more stable in the chromosome than in plasmids. these data may suggest that the instability of trs in plasmids is associated with interactions between repetitive tracts on different plasmid molecules inside the cell. however, mutations of ner genes may increase (uvra and uvrb mutants) or decrease (uvrc and uvrd mutants) the stability of the trs in the e. coli chromosome. this study was partially funded by the kbn grant 2 p05a 01927. performance analyses of a multi-stage integrated fermentation process for lactic acid production hsun-tung lin, feng-sheng wang department of chemical engineering, national chung cheng university, chia-yi 621-02, taiwan. e-mail: chmfsw@ccu.edu.tw (f.s. wang) in this work, we considered a multi-stage integrated continuous fermentation process for producing lactic acid. each stage consists of a mixing tank, a fermenter, a cell recycle unit and an extractor. a generalized kinetic model is first applied to formulate the integrated process. we have compared the overall productivity and conversion of the integrated process with those of two simplified processes. from the design equations, we find that the three processes have identical overall conversion; however, the proposed process has the greatest overall productivity. the specific kinetic model for lactic acid production (youssef et al., 2000) was applied to the integrated process in order to find the maximum overall productivity.
two optimization problems are considered in order to determine the optimal number of stages, operating conditions and design variables. the first problem supposes that the integrated process has an equal working volume ratio for each fermenter; such a process requires four stages to yield the maximum overall productivity and nearly complete overall conversion. however, if the working volume ratio for each stage is treated as a decision variable, as in the second optimization problem, three stages are enough to achieve the same overall productivity. youssef, c.b., guillou, v., olmos-dichara, a., 2000. contr. eng. pract. 8, 1297-1307. modelling of the binding of ligands to macromolecules jørgen m. mollerup department of chemical engineering, building 229, dtu, 2800 lyngby, denmark a variety of factors that govern the properties of proteins are utilized in the development of chromatographic processes for the recovery of biological products, including the binding and release of protons, the non-covalent association with non-polar groups (often hydrophobic interactions), the association of small ions (ion exchange) and the highly specific antigen-antibody interaction (affinity interactions). the fulcrum of understanding and modelling a chromatographic separation is the adsorption isotherm, which determines the peak shape at preparative load. to enable an efficient chromatographic process development strategy it is necessary to conduct theoretical and experimental investigations of the adsorptive behaviour of proteins. thermodynamically consistent models for ion exchange chromatography and hydrophobic interaction chromatography have been developed and can be utilised in the simulation of a chromatographic separation. besides, measurements on hic media can be utilised to determine the cohn salting-out coefficient. the ligand binding process can frequently be coupled to associated structural changes in the protein, the ligand or both.
this gives rise to nonlinear adsorptive behaviour known as cooperativity, which cannot be described by conventional models, which display convex behaviour. examples of cooperative behaviour are the reversible binding of oxygen and carbon monoxide to haemoglobins and the binding of nad+ to yeast glyceraldehyde 3-phosphate dehydrogenase. in the paper we discuss the modelling of reversible binding of mobile as well as immobilised ligands to macromolecules and compare the models with experiment. comparative analysis of the temperature policy for processes with a deactivating native enzyme i. grubecki, m. wojcik department of chemical and biochemical engineering, university of technology and agriculture, 85-326 bydgoszcz, ul. seminaryjna 3, poland a comparative analysis of the temperature policy for an enzymatic reaction with michaelis-menten kinetics in a batch reactor has been carried out. both isothermal and optimal temperature policies for processes with a deactivating native enzyme have been considered. in the model, thermal deactivation was described by a first-order reaction, and arrhenius-type dependence of the rate parameters on temperature was assumed. as an indicator for a direct comparison between the isothermal and optimal temperature policies, the quotient of conversions under identical initial and final conditions was used. a method based on analytical and numerical solutions was presented to calculate this indicator; this method can be of great importance for industrial practice. application of a changeable temperature policy could result in a significant increase in conversion when the ratio of the activation energy for deactivation to the activation energy for reaction is high. studies on the impact of mixing during brewing using near and mid-infrared spectroscopy georgina mcleod 1 , alvin w. nienow 1 , graham poulter 2 , reg wilson 3 , henri tapp 3 , christopher j.
the control of the brewing process is important for improving product quality and lowering costs. infrared spectroscopy is a technique that can be used at-line and on-line to rapidly measure component concentrations of unprocessed whole-broth samples in real time. in this study, both mid-infrared (mir) and near-infrared (nir) spectroscopy have been used and compared for the monitoring of ethanol, flavour components, wort sugars, biomass and specific gravity during brewing. partial least-squares regression (pls) was used to model relationships between component concentrations and spectra. the performance of these models was evaluated in terms of the standard error of prediction (sep), the number of pls factors and the correlation coefficient (r). calibration models were constructed using spectra acquired for multi-component mixtures, intended to simulate brewing fermentation conditions, and for actual brewing fermentation samples. chemometric results indicated that nir is a powerful tool for accurately measuring sugar, ethanol and biomass concentrations. the optimization of a fermentation process requires the organism to be cultivated under desirable conditions, which depends on how well the fermentation process is controlled. inadequate mixing and mass transfer are responsible for a heterogeneous environment at large scale in terms of nutrient concentration and ph profile, resulting in lower product yields. these effects have been associated with inadequate control of ph and the production of acetate or formate in response to over-feeding of glucose and oxygen deficiency. rapid analysis of substrates and indicator metabolites in a fermentation process is critical for optimal control, and this can be achieved in real time with nir spectroscopy. in this study, nir spectroscopy has been applied to monitor the concentrations of glucose, acetate, formate, ammonium hydroxide and biomass in the cultivation of e. coli (w3110).
a comparison of partial least squares models built using water standards, synthetic medium standards and fermentation samples has been made. template refolding utilizing biospecific interactions shigeo katoh 1 , yoichi kumada 1 , nanae maeshima 1 , daisuke nohara 2 : 1 graduate school of science and technology, kobe university, kobe 657-8501, japan; 2 department of biomolecular science, gifu university, gifu 501-1193, japan. e-mail: katoh@kobe-u.ac.jp (s. katoh) recombinant proteins over-expressed in e. coli often accumulate as insoluble particles called inclusion bodies. proteins in inclusion bodies must be solubilized by a denaturing agent, such as urea or guanidine hydrochloride, and refolded to recover their native, biologically active structures. in bioprocesses it is important to obtain high refolding efficiencies and high throughputs at high protein concentrations. in a refolding operation, a denatured protein solution is usually added batch-wise into a large volume of refolding buffer, in order to start refolding by reducing the denaturant concentration and to prevent aggregation of the renaturing molecules. thus, a large stirred-tank volume is required, and the protein concentration after renaturation is low. biointeractions between pairs of biomolecules, such as enzyme-inhibitor, antigen-antibody and hormone-receptor, are highly specific and have been used for the detection and separation of biomolecules. these interactions may be used as templates for the refolding of target molecules, which can be captured by the templates and thus prevented from aggregation and, in the case of proteases, from autoproteolysis. the specific interaction might also promote refolding of the target molecules and thereby improve the refolding efficiency. the biointeractions between antigen and antibody and between enzyme and inhibitor were used for efficient refolding in packed columns, in which the template ligands (antibody, inhibitor) were coupled to the gel support.
denatured solutions of the target molecules (carbonic anhydrase and s. griseus trypsin) were mixed with refolding buffer and supplied to the affinity column coupled with the template ligands for refolding. with refolding in the column, higher refolding efficiencies were obtained than with the batch dilution method at relatively low concentrations of denaturants. by increasing the adsorption capacity of the column, the throughput of refolding can be increased without a decrease in refolding efficiency. the use of multi-parameter flow cytometry for characterisation and monitoring of insect cell-baculovirus fermentations in a mechanically-agitated bioreactor bojan isailovic 1 , alvin w. nienow 1 , ian w. taylor 2 , ryan hicks 2 , christopher j. bacteria and mammalian cells have traditionally been used as hosts for commercial recombinant protein production. however, in recent years, the insect cell-baculovirus system has emerged as a potentially attractive recombinant protein expression vehicle. although flow cytometry has been used widely for the analysis of mammalian and microbial cells, there is very little information on applications of this powerful technique in insect cell culture. here we compared ratiometric cell counts and viability (propidium iodide and calcein am) of sf-21 cell cultures determined by flow cytometry with those determined by more traditional methods using a haemocytometer and the trypan blue exclusion dye. flow cytometry has also been used to monitor various parameters during cultures of sf-21 cells infected with recombinant autographa californica nuclear polyhedrosis virus (acnpv) containing the inserted nucleic acid sequence amfp486, coding for the am-cyan coral protein, which emits natural green fluorescence.
carbon sources create fingerprint fermentation characteristics pınar çalık 2 , güzide çalık 1 , tunçer h. özdamar 1 : 1 bre lab, department of chemical engineering, ankara university, 06100 ankara, turkey; 2 ib lab, department of chemical engineering, metu, 06531 ankara, turkey. e-mail: calik@eng.ankara.edu.tr (g. çalık) this work reports a systematic investigation of the interactions between single-carbon sources, i.e., glucose and citric acid, and complex-medium components, i.e., carbon and nitrogen sources, in enzyme fermentation processes, i.e., serine alkaline protease (sap; ec 3.4.21.62) and β-lactamase (ec 3.5.2.6), under defined oxygen-transfer and ph conditions, to demonstrate how carbon sources create fingerprint fermentation characteristics and influence product and by-product formation and the intracellular reaction rates. the influence of the medium composition (citric acid-, glucose-, molasses- and soybean-based media), together with the oxygen transfer (ot) and ph conditions applied, on the product and by-product distributions and ot characteristics was investigated in batch bioreactors. for sap, in general, under uncontrolled-ph operation the medium ph tends to increase in the sap production phase; however, depending on the carbon source used, its behaviour changes in the early stages of the fermentation as a consequence of the directed functioning of the intracellular bioreaction network. the loci of the dissolved oxygen (do) curves also depend strongly on the carbon source(s) utilised, in addition to the applied ot conditions. the complex-media profiles are significantly different from those of the defined media, as the ph and do profiles are interrelated owing to the bioreactor operation conditions affecting the metabolic reaction network. the highest volumetric oxygen uptake rates were obtained with the soybean-based medium and were ca.
three-fold higher than the values reported in citrate-based and glucose-based media, and ca. 1.5-2-fold higher than the values reported in molasses-based medium. the significant changes, and in particular the drastic change observed with the soybean-based complex medium, are due to the compositions of the fermentation media used and their influence on the intracellular bioreaction network. thus, we conclude that a change in medium composition based on the carbon source changes the fermentation characteristics under the designed bioreactor operation conditions, which appear as the fingerprints of the bioprocess. kinetic resolution of racemic benzoin with different lyophilized microorganisms ç. babaarslan 1 , ü. mehmetoglu 1 , a.s. demir 2 : 1 ankara university, faculty of engineering, department of chemical engineering, 06100 ankara, turkey; 2 middle east technical university, department of chemistry, 06531 ankara, turkey. e-mail: barslan@eng.ankara.edu.tr (ç. babaarslan) the biocatalytic resolution of racemates is a valuable tool for enantioselective synthesis and has proved to be a convenient method for obtaining optically enriched compounds from their racemic forms. in this work, enantiomerically pure benzoin, one of the 2-hydroxy ketones, was synthesized by kinetic resolution of racemic benzoin using different lyophilized microorganisms as lipase sources. the effects of lyophilized microorganism type, solvent type and acyl donor type on enantioselectivity were studied. in the kinetic resolution experiments, the lyophilized microorganism was resuspended in a medium containing solvent, racemic benzoin and acyl donor at 30 °c and 150 rpm on an orbital shaker. the reaction was followed by tlc during the experiment, and the enantiomeric excess of benzoin was determined by hplc analysis using a chiralcel ob column.
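the enantioselectivity figures reported for this resolution (ee of substrate 16%, ee of product 30%, at 35% conversion) can be cross-checked with the standard chen et al. relations for irreversible kinetic resolutions; a minimal sketch, treating those literature formulas as the model:

```python
import math

# substrate and product enantiomeric excesses reported for the
# rhizopus oryzae resolution of benzoin (as fractions, not percent)
ee_s, ee_p = 0.16, 0.30

# conversion follows from the mass balance: c = ee_s / (ee_s + ee_p)
c = ee_s / (ee_s + ee_p)

# enantiomeric ratio e from the substrate-based chen et al. expression
E = math.log((1 - c) * (1 - ee_s)) / math.log((1 - c) * (1 + ee_s))

print(f"conversion = {c:.1%}, e = {E:.1f}")
```

the computed conversion (about 34.8%) agrees with the 35% quoted in the abstract, and the low enantiomeric ratio (about 2.2) is consistent with the modest selectivity observed.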
the best enantioselectivity value was obtained with lyophilized rhizopus oryzae cbs 112-07 as ee s = 16% and ee p = 30% (conversion = 35%) using thf as solvent and vinyl acetate as acyl donor. optimizing the fermentation broth for tannase production by a newly isolated strain of paecilomyces variotii vania battestin, gláucia pastore, gabriela macedo department of food science, unicamp, p.o. box 6121, campinas, cep 13083-862, são paulo, brazil tannase is an inducible enzyme that catalyses the breakdown of ester linkages in hydrolysable tannins, resulting in gallic acid and glucose. the fermentation broth can use by-products such as wheat bran, rice or oats, with added tannic acid. the use of by-products or residues rich in carbon sources for fermentation purposes is an alternative that helps solve the pollution problems that can be caused by incorrect environmental disposal. in the present study we optimized the production of an extracellular tannase by a newly isolated paecilomyces variotii using response surface methodology. the first step was to identify the variables having a significant effect on enzyme production. the variables evaluated were temperature, residue ratio (coffee:wheat bran), concentration of tannic acid and salt solution during 3, 5 and 7 days of fermentation time. results showed that temperature, residue ratio (coffee:wheat bran) and tannic acid had significant effects on tannase production. commercial wheat bran (cwb) and coffee husk residues (cr) were used as solid substrate. for fermentation, the medium was composed of cwb:cr mixed with distilled water, transferred into 250 ml erlenmeyer flasks and autoclaved at 120 °c for 20 min. the medium was then inoculated with spores (5.0 × 10⁷) and the flasks were incubated at 32 °c. tannase was assayed according to the methodology of mondal et al. (2001).
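the reported ee values for the benzoin resolution above (ee s = 16%, ee p = 30%, conversion 35%) can be cross-checked with standard kinetic-resolution relations; a minimal sketch (the conversion and enantiomeric-ratio formulas are the usual chen et al. expressions, which the abstract does not give explicitly):

```python
import math

# kinetic-resolution bookkeeping for the benzoin data above:
# conversion from the two ee values, and the enantiomeric ratio e
# (selectivity) from conversion and ee of the remaining substrate.
def conversion(ee_s, ee_p):
    return ee_s / (ee_s + ee_p)

def enantiomeric_ratio(c, ee_s):
    return math.log((1 - c) * (1 - ee_s)) / math.log((1 - c) * (1 + ee_s))

c = conversion(0.16, 0.30)        # ~0.35, matching the reported 35%
e = enantiomeric_ratio(c, 0.16)   # ~2.2, a low-selectivity resolution
```

the computed conversion reproducing the reported 35% is a useful internal-consistency check on the ee data.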
according to the statistical analyses, the optimum conditions to produce tannase were in the range of temperature 29-34 °c; tannic acid 8.5-14%; residue ratio (coffee:wheat bran) 50:50; and 5 days fermentation time. enzyme production increased 8.6-fold compared with that obtained before this optimization. how to cope with fda's pat initiative with respect to fermentation process monitoring and control marco jenzsch 1 , andreas luebbert 1 , rimvydas simutis 2 : 1 institute of bioengineering, martin-luther-university halle-wittenberg, halle (saale), germany; 2 process control department, kaunas university of technology, kaunas, lithuania with its pat initiative, the fda forces drug manufacturers to increase their activities in innovative manufacturing techniques and, more than previously, to focus on quality assurance. the agency particularly places emphasis on making use of modern process supervision and control techniques such as up-to-date process analytics and multivariate data acquisition and analysis tools in order to improve process monitoring and control. in this contribution we show by means of practical examples how this guidance can be applied to cultivations of genetically modified microorganisms. a comparison of different multivariate state estimation techniques will be presented and compared with more knowledge-based techniques such as the extended kalman filter. the comparison was made for the model system gfp expressed in e. coli bacteria (bl21/de3/gfp), for which more than 40 full data sets are available. all these techniques have already been used during real protein formation in production-scale fermenters, with the same success. hence, the requirements expressed in the pat initiative can immediately be put into practice. feedback control of recombinant protein production processes based on such estimations is shown for several cultivation systems.
simple parameter-adaptive controllers are compared with model-supported controllers, for instance generic model controllers and model predictive controllers. the results clearly show that we have at hand a rather extensive arsenal of feedback control procedures that can be used successfully to tightly control the processes, even along set-point profiles of physiological variables such as the specific growth rate (µ). again, fda's suggestion with respect to "control in the engineering sense" can be applied immediately to reduce batch-to-batch variances and thus to increase process quality. extending life by alternative respiration? alexander kern, franz hartner, anton glieder institute of biotechnology, graz university of technology, a-8010 graz, austria. e-mail: a.kern@tugraz.at (a. kern) alternative oxidase transfers electrons directly from the ubiquinol pool in mitochondria to oxygen, allowing cell respiration in the presence of complex iii and iv inhibitors such as antimycin a or cyanide. electron transfer by alternative oxidase is not coupled with proton transfer across the mitochondrial membrane, thereby uncoupling the supply of small metabolic intermediates by the central metabolic pathway from energy production in the cell. alternative oxidase is present in mitochondria of plants, many fungi and a few, mostly crabtree-negative, yeasts, but not in p. angusta (hansenula polymorpha) or s. cerevisiae. alternative oxidase has multiple functions in different organisms. it is involved in stress responses, in programmed cell death, in maintenance of the cellular redox balance, and also in citric acid accumulation in a. niger. we isolated the alternative oxidase gene from the methylotrophic yeast p. pastoris in order to study its effects on the cellular energy content and respiratory activity, and its protective role against oxidative stress. our results indicate the importance of an exact regulation of the alternative oxidase due to its impact on many cellular functions.
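tight control of the specific growth rate, mentioned in the pat abstract above, is often approached with an exponential substrate-feeding law; a minimal open-loop sketch, in which all parameter values are hypothetical and not taken from the abstract:

```python
import math

# exponential feeding to hold the specific growth rate mu at a set-point
# (a minimal sketch; all symbols and numbers below are hypothetical).
def exp_feed_rate(t, mu_set=0.15, x0=5.0, v0=2.0, yxs=0.5, s_feed=400.0):
    """feed rate f(t) [l/h] supplying substrate exactly fast enough for
    biomass x0*v0*exp(mu_set*t) [g] to keep growing at mu_set [1/h];
    yxs = biomass yield on substrate [g/g], s_feed = feed conc. [g/l]."""
    biomass = x0 * v0 * math.exp(mu_set * t)
    return mu_set * biomass / (yxs * s_feed)

# the feed profile itself grows exponentially with time:
f0 = exp_feed_rate(0.0)
f10 = exp_feed_rate(10.0)
```

in closed-loop pat implementations the set-point profile would be corrected online from state estimates; the open-loop law above is only the backbone.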
new types of energy-efficient fermenters with better mass transport, mixing and cooling properties than the current crop of rushton-turbine-derived tank bioreactors are likely to be required in the future. such fermenters will be needed in order to meet the increasing pressure on costs for low-price commodity-type products such as single-cell protein or food- and technical-grade enzymes, and to meet the demands of the new wave of white biotech, in which bio-produced chemicals must be made at prices competitive with those of the traditional chemical industry. with this in mind, a prototype pilot-scale (500 l) u-loop fermenter has recently been commissioned at biocentrum-dtu. in this fermenter, liquid circulation is driven by a propeller pump through a vertical u-shaped pipe, which is connected at the top with a de-gassing tank. we present here a study of liquid mixing and dispersion in the prototype u-loop fermenter. subsequently we show that the results can be described with the tanks-in-series model. mixing was characterised using pulses of nacl tracer, which were detected with a conductivity probe in various parts of the fermenter. bodenstein numbers (bo) were determined for flow rates corresponding to a linear fluid velocity of 1.05 m/s in the 'legs' of the reactor and showed that the majority of the mixing occurred in the top de-gassing part (bo = 29) rather than in the u-loop section (bo = 422). it was also observed that the time for mixing to 90% homogeneity after tracer pulse addition corresponded to 3-3.5 cycles through the reactor within the range of flow velocities (u) studied (u = 0.41 m/s to u = 1.74 m/s). the mixing time to 90% homogeneity was between 44.6 s (at u = 1.74 m/s) and 181 s (at u = 0.41 m/s). today many biotechnological processes are operated at suboptimal conditions and according to best practice.
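the tanks-in-series description used for the u-loop data above can be made concrete by variance-matching against the axial dispersion model (a sketch; the matching formula is the standard closed-vessel relation, not given in the abstract): the low bo in the degassing tank maps to few tanks (strong mixing), the high bo in the loop to many tanks (near plug flow).

```python
import math

# convert a bodenstein (peclet) number from the axial dispersion model
# (closed-closed vessel) into an equivalent number of ideal tanks in
# series by matching the dimensionless rtd variance:
#   dispersion:       var = 2/bo - (2/bo**2) * (1 - exp(-bo))
#   tanks in series:  var = 1/n   ->   n = 1/var
def equivalent_tanks(bo):
    var = 2.0 / bo - (2.0 / bo**2) * (1.0 - math.exp(-bo))
    return 1.0 / var

n_top = equivalent_tanks(29)    # degassing section, bo = 29
n_loop = equivalent_tanks(422)  # u-loop section, bo = 422
```

the two section values can then be chained into one tanks-in-series flowsheet for the whole fermenter.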
however, the current industrial development is towards analysing more parameters, and in particular there is a large interest in the analysis of biological/biochemical variables. the quality of products, and also the possibility to optimize production in submerged cultivations, would be greatly enhanced if more on-line/real-time information were at hand. the present investigation was undertaken with the aim of evaluating the potential of multi-wavelength fluorescence for monitoring and control of filamentous fungal fed-batch cultivations. a recombinant a. oryzae expressing a heterologous lipase was applied as the model system. multi-wavelength fluorescence spectra were collected every five minutes with the bioview® system (delta, denmark), and both explorative and predictive models, correlating the fluorescence data with the important biological parameters cell mass and lipase activity, were built. the models will be presented; furthermore, advantages and disadvantages of multi-wavelength fluorescence for monitoring of cultivation processes will be discussed. moving from r&d to pharmaceutical development is a costly process. it is therefore of paramount importance to design a manufacturing process that combines robust and well-documented technological platforms. therapeutic recombinant proteins designed for human administration should be as close to the authentic product as possible. here, the use of a scalable process and an economically sound affinity tag can be a relevant choice. the tagzyme™ system has been designed to allow for the precise removal of amino-terminal affinity tags. the system is based on the use of recombinant aminopeptidases including dipeptidyl peptidase i (dapase™). dapase™ is currently produced under cgmp, providing a suitable strategy for its use in pharmaceutical production.
the tagzyme™ system is superior to other methods since: (1) it is based on exopeptidases, precluding, e.g., the unspecific protein cleavage reported when using so-called site-specific endoproteases. (2) it has been tested for production of more than 200 recombinant proteins. (3) it is easily scalable from lab scale to kg of processed protein. (4) it allows the use of his-tags for commercial production without patent infringement, due to our ipr position. (5) the commercial use of tagzyme™ does not require any licensing, only purchase of the enzyme(s). (6) the use of aminopeptidases for pharmaceutical production has been extensively documented for approved drugs. (7) a number of therapeutics are currently being developed using tagzyme™. (8) unizyme can assist in the optimization of the dsp to enable further cost reduction in the process. these aspects will be discussed and illustrated in the presented poster. sphingolipids are biologically active molecules involved in the regulation of a wide range of biological responses. they function as messengers in cell proliferation, survival and death (apoptosis). dysregulation of apoptosis is significant in numerous pathological conditions including cancer. several anticancer agents act by increasing the tumor cell ceramide (a kind of sphingolipid) content. thus, a novel approach to cancer therapy would be the pharmacological manipulation of sphingolipid metabolism. in this study, sphingolipid metabolism in baker's yeast s. cerevisiae is used as a model system, as many of its sphingolipid-related genes and proteins have been characterized. gepasi, a biochemical kinetics simulator, was used for metabolic control analysis (mca) of the above-specified system. the concentration control coefficients (ccc), flux control coefficients (fcc) and elasticity coefficients were calculated, and their significance for the identification of anticancer drug targets was determined.
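the flux control coefficients named above can be illustrated on a generic two-enzyme linear pathway (a toy sketch, not the sphingolipid network itself), including the summation theorem that the coefficients of a linear pathway sum to one:

```python
# flux control coefficients (fccs) for a toy two-enzyme linear pathway,
# estimated by finite differences; a generic illustration of the mca
# quantities named above, not the sphingolipid network itself.
# pathway: s0 -(e1, reversible)-> s1 -(e2)-> product, s0 held constant.
def steady_state_flux(e1, e2, k1=2.0, km1=1.0, k2=3.0, s0=1.0):
    # v1 = e1*(k1*s0 - km1*s1), v2 = e2*k2*s1; at steady state v1 = v2
    s1 = e1 * k1 * s0 / (e1 * km1 + e2 * k2)
    return e2 * k2 * s1

def fcc(which, h=1e-6):
    # c_i = (e_i / j) * dj/de_i, evaluated at e1 = e2 = 1
    base = steady_state_flux(1.0, 1.0)
    if which == 1:
        pert = steady_state_flux(1.0 + h, 1.0)
    else:
        pert = steady_state_flux(1.0, 1.0 + h)
    return (pert - base) / (h * base)

c1, c2 = fcc(1), fcc(2)
# summation theorem: c1 + c2 == 1 for a linear pathway
```

in a tool like gepasi the same coefficients are computed over the full kinetic model; the finite-difference definition is identical.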
elementary flux modes were also identified and metabolic pathway analysis (mpa) was performed. quantitatively, control effective flux (cef) values were used for potential drug target identification. the results from mca and mpa indicate almost the same potential drug targets: serine palmitoyltransferase, ceramide synthase and ceramidase. drugs against these targets are in preclinical and clinical development. for the identification of new potential drug targets, the cccs, fccs, cefs and elasticity coefficients were examined with an objective function of maximizing the cell ceramide concentration. it was found that manipulation of inositol-1-phosphate synthase and phosphoinositide kinase activities has considerable effects on ceramide concentrations. if a drug targeting the two enzymes at the same time were designed, it might give a better outcome in terms of cancer therapy. in recent years there has been growing interest in the application of airlift reactors (alrs) in biotechnological processes. nevertheless, their industrial application still remains limited because of a lack of reliable studies on transfer phenomena and mixing that would enable a suitable scale-up procedure to be suggested. the way to wider utilization of alrs in biological processes lies in experimental research (on a model medium as well as on a real fermentation medium) followed by mathematical modelling and scaling-up of the processes. this paper deals with the modelling of a glucose-gluconic acid fermentation by a. niger in an internal-loop airlift reactor. knowledge of the stoichiometric relationship in the key reaction provides a good opportunity for estimation of substrate and product concentrations. the model is based on material balance equations and has been adjusted to experimental data obtained from three internal-loop airlift reactors (10.5, 40 and 200 l). in the model, the alr is divided into ideal stirred tanks in series.
in each zone (tank) of the alr the material balance is calculated for two phases (the gas and the liquid phase). this work was supported by the slovak scientific grant agency, grant number vega 1/0066/03. alkaline phosphatase (ap, ec 3.1.3.1) is a thermolabile enzyme which is indigenous to all dairy products. it has an inactivation temperature slightly above the value that is required to destroy the most resistant pathogenic microorganism likely to be found in milk. due to that feature, this enzyme is used as an indicator of proper pasteurization. the effect of temperature treatment on the activity of ap was investigated in raw cow's and goat's milk. the stability of alkaline phosphatase in raw milk was compared with the stability of this enzyme in a 0.1 m potassium phosphate buffer at ph 6.6. the ph value of the buffer was approximately the same as that of raw milk. the inactivation curves were measured in the temperature range from 54 to 69 °c. ap in cow's milk was completely inactivated at 69 °c within 60 s, but approximately 30% of activity remained at 54 °c after 100 min of treatment. the time required for complete inactivation of the enzyme in raw cow's milk was reduced from 90 to 1 min as the temperature increased by 11 °c. heat treatment of goat's milk caused a decrease of enzyme activity in the same temperature range as in the case of cow's milk. the increase of temperature from 58 to 68 °c reduced the inactivation time from 35 min to 40 s. the study of the thermal stability of alkaline phosphatase in the buffer solution showed that the time required for inactivation of the enzyme was significantly shorter than in milk. milk thus had a protective effect on the activity of alkaline phosphatase. the experimental curves were fitted simultaneously using kinetic models in which the initial heating period was considered. this work was supported by a grant of the 6th framework program of the eu, project foodpro, no. sme-2003-1-508374.
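the inactivation-time data for alkaline phosphatase above imply a z-value (the temperature rise that shortens the inactivation time ten-fold); a sketch under the assumption that the reported complete-inactivation times scale like decimal reduction times:

```python
import math

# z-value estimated from two inactivation times a known temperature
# increment apart: z = delta_t / log10(t1 / t2).
# treating the reported complete-inactivation times as proportional to
# decimal reduction times is an assumption, not stated in the abstract.
def z_value(t1_min, t2_min, delta_t):
    return delta_t / math.log10(t1_min / t2_min)

z_cow = z_value(90.0, 1.0, 11.0)           # 90 min -> 1 min over 11 °c
z_goat = z_value(35.0, 40.0 / 60.0, 10.0)  # 35 min -> 40 s over 10 °c
```

both estimates land near 6 °c, consistent with the two milks behaving similarly over the same temperature range.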
during the separation, purification and concentration of monoclonal antibodies (mabs) at industrial scale, the chromatographic unit operations play an important role. three different protein-binding modes are employed: ion-exchange, hydrophobic and affinity binding. two adsorbent properties are of utmost importance: a high selectivity and a high adsorption capacity. in the case of ion-exchange/hydrophobic chromatography, the binding of charged proteins can be affected by ph and ionic strength. in this work, the adsorption capacity of eight commercially available adsorbents designed for separation of mabs (mabselect, rprotein a sepharose ff, poros 50a, prosep-va, fractogel emd se hicap (m), sp sepharose ff, mep hypercel, s ceramic hyperd f) was measured as a function of ph. as a model mab and contaminant proteins, human immunoglobulin (igg), human serum albumin (hsa) and horse skeletal muscle myoglobin (myo) were used. the resin properties were investigated within the range of ph 4-8. the experiments were conducted in batch mode, individually for each model protein. the results showed that ion-exchange and hydrophobic resins provided the best selectivity for igg at ph 6. the selectivity of affinity adsorbents was essentially unaffected by ph; however, the highest capacity for igg was at ph 7. another investigated aspect was the dynamics of protein binding. the solution of an individual protein in contact with the tested adsorbent was circulated through a uv spectrophotometer, which enabled the measurement of the time-dependent decrease of protein concentration. the results indicated that affinity adsorbents with a rigid matrix needed approximately four times shorter time to reach adsorption equilibrium with igg in comparison with a gel. the gels, however, provided higher adsorption capacity. with ion-exchange resins, the time necessary to adsorb 99% of the total amount of igg was about 0.5-2 h.
the affinity adsorbents were highly selective and therefore adsorbed only very small amounts of the tested contaminant proteins (hsa, myo). the adsorption capacity reached 50% saturation in less than 25 min in all cases of dynamic adsorption measurements. this work was supported by a grant of the 6th framework program of the eu, project aims, no. nmp3-ct-2004-500160. microtechnology has for several years been applied within chemical reaction engineering. the advantages of microtechnology are that it makes it possible to develop lightweight and compact systems, and the systems enable a large surface-to-volume ratio, which results in short mass-transfer distances. in addition, parameters like pressure, temperature, residence time and flow rate are more easily controlled. the use of microtechnology is also beginning to find its way into the field of biotechnology. what we are aiming at is the development of a microreactor that can be applied as a production tool in industry as an alternative to conventional enzymatic reactors. our strategy is to use a small plate of a suitable material with microchannels fabricated into its surface, and the approach is to covalently couple enzymes into the microchannels. substrate can then be pumped through the channels and the enzymatic conversion will take place within the channels. as model enzyme in the development of the microreactor we are applying celb, a thermostable β-glycosidase from pyrococcus furiosus. chromatography is one of the most important unit operations in the separation, purification and concentration of monoclonal antibodies (mabs) at industrial scale. since these proteins belong to the group of immunoglobulins, their molecular weight (about 150,000 g/mol) and hydrodynamic radius (10.7 nm) are relatively large. the adsorbents used in ion-exchange/affinity chromatography of these biomolecules should thus provide a high pore accessibility coupled with a high specific surface area in order to ensure a sufficient ligand density and a high binding capacity.
in this study, the pore accessibility of eight commercially available adsorbents designed for separation of mabs (mabselect, rprotein a sepharose ff, poros 50a, prosep-va, fractogel emd se hicap (m), sp sepharose ff, mep hypercel, s ceramic hyperd f) was investigated via size exclusion of standard-sized molecules (glucose, sucrose and dextrans with a molar mass range of 1200-40 × 10⁶ g/mol) at non-binding conditions. the experiments were conducted in batch and column modes. the batch experiments provided absolute partition coefficients, which were calculated from a mass balance and represent the fraction of pore water accessible to a solute. it was found that several adsorbents contained a small fraction of very small pores (less than 1 nm), whereas some adsorbents contained a significant fraction of pores larger than 100 nm. the column measurements provided relative partition coefficients, which were calculated from the retention volumes of the solutes and represented the relative accessibility of the pores, scaled between the accessibility of the smallest and largest solute used. when the absolute partition coefficients were recalculated into the relative form, it was found that the coefficients obtained by both methods correlated very well. the relative partition coefficients of solutes with a hydrodynamic radius of 10 nm (corresponding to mabs) were about 0.2-0.4 for ion-exchange and hydrophobic adsorbents and 0.6-0.8 for affinity adsorbents. the relation between the hydrodynamic radius of the solutes and their partition coefficients was successfully described with the giddings random plane model. this work was supported by a grant of the 6th framework program of the eu, project aims, no. nmp3-ct-2004-500160. the transfer of laboratory results to a larger scale is often a critical step in process development towards industrial application.
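the recalculation of absolute partition coefficients into the relative form described in the pore-accessibility study above can be sketched as follows; the scaling convention (0 for the largest probe, 1 for the smallest) is my reading of the abstract, and the k values below are hypothetical:

```python
# rescaling absolute partition coefficients (fraction of pore water
# accessible to a solute) into a relative form scaled between the
# smallest and largest probe solute; k values here are hypothetical.
def to_relative(k_abs, k_smallest, k_largest):
    """0 = as excluded as the largest probe, 1 = as accessible as the
    smallest probe (e.g. glucose)."""
    return (k_abs - k_largest) / (k_smallest - k_largest)

k_rel = to_relative(0.55, k_smallest=0.90, k_largest=0.05)
```

this normalisation is what allows batch-derived (absolute) and column-derived (relative) coefficients to be compared directly.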
the objective of this study was to scale up the bioreactor for fructosyltransferase production based on the results obtained in a 3 dm³ stirred bioreactor. the investigations made in this bioreactor provided a clear picture of the effect of medium composition on the obtained fructosyltransferase (ftase) activity, but the influence of mixing intensity was equivocal. the increase of agitation rate had a positive effect up to a certain level, where both fructosyltransferase and biomass production increased. the final optimal yield factor of ftase per dry cell mass obtained in the laboratory bioreactor was 10,300 u g⁻¹. we studied the effect of oxygen transfer on the process of ftase production at a larger scale, in 12 and 100 dm³ mechanically stirred bioreactors and in 20 and 60 dm³ air-lift bioreactors, whilst the medium composition was kept constant. the yield factors of ftase were comparable in both mechanically stirred bioreactors, at about 7500 u g⁻¹. this decrease compared to the laboratory bioreactor could be explained by slower cell growth. this was also confirmed by the fact that glucose was not depleted by the end of fermentation and the free fructose concentration was also lower. the yield factor of ftase was 7400 u g⁻¹ in the 60 dm³ air-lift reactor and 6100 u g⁻¹ in the 20 dm³ air-lift bioreactor. the lower yield of ftase in the 20 dm³ bioreactor was caused by better biomass growth. this work was supported by the slovak scientific grant agency, grant numbers vega 1/0066/03 and 1/0065/03, and by the science and technology assistance agency, grant number apvt-20-025704. the poster gives an overview of the objectives and achieved results of an interdisciplinary project on direct product isolation from crude feedstocks using magnetic micro-adsorbents in combination with suitable magnet technology. the project was funded by the deutsche bundesstiftung umwelt and ran between august 2002 and november 2004.
in the course of the project several milestones were met, which can be regarded as critical key points on the route towards an industrial realization of the process. among these milestones are: (i) the production of magnetic micro-adsorbents with high capacity and selectivity in batches of up to 50-100 g; (ii) the proof that the micro-adsorbents can be reused many times; (iii) the generation of a variety of recombinant tagged, active enzymes; and (iv) the design, assembly and operation of a fully automated pilot plant capable of generating approx. 5 g/h of protein (≈95% purity). the process was also simulated with the help of the software tool superpro designer, and simple mass balance and sorption equilibrium approaches were used to derive rules for estimating optimum process parameters and productivities. finally, an environmental performance evaluation was conducted externally by the german dechema. in this study the effect of fed-batch cysteine addition to a culture of a high-gsh-accumulating yeast strain on the metabolism of glutathione was investigated. it is known that cysteine is the rate-limiting amino acid in the biosynthesis of gsh. the influence of the consumption rate of cysteine on glutathione metabolism and growth of s. cerevisiae mt-32 was determined. the results show that for consumption rates below a critical value, microorganism growth is similar to a culture without cysteine feeding, but glutathione production is increased two-fold. on the other hand, if the cysteine consumption rate is above the critical value, the change in cell metabolism leads to ethanol accumulation in the extracellular medium, which diminishes biomass synthesis. the maximum specific glutathione production in this case is maintained at two-fold; however, gamma-glutamylcysteine accumulation is increased. cysteine present in the culture media directs cell metabolism towards greater synthesis of ammonia and amino acids.
hydrophobic interaction chromatography (hic) exploits the hydrophobic properties of protein surfaces for separation and purification by performing interactions with chromatographic sorbents of hydrophobic nature. in contrast to reversed-phase chromatography, this methodology is less detrimental to the protein and is therefore more commonly used at industrial scale as well as at bench scale when the conformational integrity of the protein is important. hydrophobic interactions are promoted by salt, and thus proteins are retained in the presence of a kosmotropic salt. when proteins are injected onto hic columns with increasing salt concentrations under isocratic conditions, only a fraction of the applied amount is eluted. the higher the salt concentration, the lower the amount of eluted protein. the rest can be desorbed with a buffer of low salt concentration or water. it has been proposed that the more strongly retained protein fraction has partially changed conformation upon adsorption. this has also been corroborated by physicochemical measurements. the retention data of five different model proteins and ten different stationary phases were evaluated. partial unfolding of proteins upon adsorption on the surfaces of hic media was assumed, and a model describing the adsorption of the native and partially unfolded fractions was developed. furthermore, we hypothesize that the surface acts as a catalyst for partial unfolding, since the fraction of partially unfolded protein increases with the length of the alkyl chain. stationary phases for bioseparation of glycoproteins j. aniulyte 1 , j. liesiene 1 , b. niemeyer 2 : 1 department of chemical technology, kaunas university of technology (ktu), radvilenu pl.
19, 50254 kaunas, lithuania; 2 institute of thermodynamics, helmut-schmidt-university/university of the federal armed forces hamburg, holstenhofweg 85, d-22043 hamburg, germany nowadays glycomics raises new challenges for affinity chromatography, related to the abundance of glycoconjugates in living organisms and the scaling-up of preparative processes. economics, efficiency and practicality dictate the search for novel chromatographic biospecific adsorbents that could contribute to enhancing the productivity of the affinity separation process. the purpose of this work was to prepare cellulose- and silica-based biospecific adsorbents with immobilized lectins and to evaluate them for the affinity chromatography of glycoproteins. the cellulose-based matrix granocel and silica coated with hydrophilic polymers were used as supports. the effects of support characteristics, such as pore size, the chemistry of the active groups and their density on the support's surface, on lectin immobilization and on the efficiency of the adsorbents obtained were evaluated. three different methods were used for the activation of the support: oxidation with sodium periodate, modification with pentaethylenehexamine (spacer arm of 18 atoms) and carbonyldiimidazole activation. cona and wga, two lectins of different molecular weight and shape, were selected to detect differences resulting from size and diffusion behaviour. the chromatographic performance of the adsorbents was studied using two different glycoproteins (god and fetuin) carrying specific terminal glycomoieties of mannose (god) and n-acetylglucosamine (fetuin) for specific interaction with cona and wga, respectively. the adsorbents demonstrated high affinity to glycoproteins, with a sorption capacity in the column of up to 7.4 mg per ml support and a high recovery (up to 93%). it was shown that the spacer arm affected the ligand coupling kinetics as well as the chromatographic behaviour of the adsorbents obtained.
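adsorption to lectin supports with two classes of binding sites is commonly described by a two-site langmuir isotherm; a minimal sketch, in which the q_max values are hypothetical and the two dissociation constants are of the order reported for the god-cona system:

```python
# two-site langmuir isotherm for adsorption with high- and low-affinity
# binding sites; q_max values are hypothetical, the kd values are of the
# order reported for god on cona supports (1e-6 m and 0.4e-5 m).
def two_site_langmuir(c, qmax1=4.0, kd1=1e-6, qmax2=3.4, kd2=0.4e-5):
    """bound amount q [mg/ml support] at free concentration c [m]."""
    return qmax1 * c / (kd1 + c) + qmax2 * c / (kd2 + c)

q_low = two_site_langmuir(1e-7)   # well below both kd values
q_sat = two_site_langmuir(1e-3)   # far above both kd values
```

at high concentration the bound amount approaches qmax1 + qmax2, the total site capacity, which is how such fits report an overall column capacity.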
the adsorption isotherms of god onto cona adsorbents reveal an adsorption behaviour with high- and low-affinity binding sites. the dissociation constants k d of the ligand-sorbate complex are approximately 1 × 10⁻⁶ m and 0.4 × 10⁻⁵ m, respectively. it was supposed that the second step is related to the sorption of solvated god onto already adsorbed god, forming sorbate dimers. cell disruption and chromatography are key unit operations in the downstream processing of an intracellular product. the cost involved in the extraction and purification of intracellular products can be reduced by selective release of proteins and a reduction in the number of steps involved in the purification. the extent of disruption can be varied to provide a selective release, limiting the release of the contaminant proteins. the particle size distribution of the cell debris in the resulting suspension depends on the extent of disruption. expanded bed adsorption chromatography allows for the direct capture of proteins from an unclarified suspension. this technique allows for the integration of solid-liquid separation, concentration and preliminary purification in one unit operation. a perfectly stable expanded bed can be obtained by choosing the appropriate flow conditions and a suitable adsorbent. the difference in density between the adsorbent and the cell debris in the suspension permits the cell debris to flow through the column without blocking, whilst the protein molecules in the suspension are adsorbed onto the adsorbent. after sample application, the bed is washed with buffer and the proteins are eluted from the column in the packed-bed mode. the presence of cell debris in the feedstock influences the expansion of the bed and the adsorption of protein molecules. the physical properties of the suspension obtained after cell disruption depend on the extent of disruption.
the particle size distribution of the cell debris, the viscosity and the release of soluble proteins and other intracellular components are influenced by the extent of disruption. the influence of the extent of disruption of e. coli on the expansion of the bed and the adsorption of β-galactosidase is presented in the current study. e. coli cells were disrupted at different operating pressures using a high-pressure homogenizer. the resulting crude homogenate was subjected to expanded bed adsorption chromatography using streamline deae as adsorbent. the disrupted suspension was characterised in terms of viscosity, density, particle size distribution of the cell debris and the extent of protein and β-galactosidase released. the interaction between cell debris and adsorbent was quantified as the cell transmission index (the ratio of the amount of cells present in the sample before and after passing through the bed). the expansion of the bed at a constant settled bed height and flow rate was measured. the influence of the cell debris on the extent of adsorption of β-galactosidase has been quantified in terms of the dynamic binding capacity (dbc) at 10% of the inlet concentration. the dbc of β-galactosidase released by disruption at 7500 psi (5% w/v, 1 pass) was found to be 100 u/ml of adsorbent, while the dbc of samples disrupted at 2500 psi (5% w/v, 1 pass) was 67 u/ml of adsorbent. the extent of disruption of e. coli over a wide range and its effect on the expansion and adsorption will be presented. study of dna binding during expanded bed adsorption and factors affecting adsorbent aggregation ayyoob arpanaei 1 , niels mathiasen 1 , timothy hobley 1 , owen rt thomas 1,2 : 1 center for microbial biotechnology, building 223, biocentrum-dtu, technical university of denmark, 2800 lyngby, denmark; 2 department of chemical engineering, university of birmingham, edgbaston, b15 2tt, uk. e-mail: aa@biocentrum.dtu.dk (a.
arpanaei) the adsorption of sonicated calf thymus dna (as a model dna molecule) to biosepra q hyper z adsorbents was evaluated in batch and expanded bed modes. stability of the expanded bed during feedstock loading was also studied. two batches of prototype q hyper z (batches 1 and 2) were examined, which had ionic capacities measured to be 122 and 147 mmol cl − /ml support, respectively. in all adsorption experiments a 50 mm tris-hcl ph 8 buffer was used. maximum static binding capacities of adsorbent batches 1 and 2 were determined to be 20.9 and 23.8 mg dna/ml particle, respectively. dynamic binding capacity at 10% breakthrough (dbc 10% ) was measured in a 1-cm diameter eba column containing a 6.7 ± 0.5 cm settled bed with a feed of 20 µg/ml dna. the dbc 10% of the adsorbents was 7.4 and 12.7 mg dna/ml support for batches 1 and 2, respectively, in buffer containing no salt. however, the maximum dbc 10% for batches 1 (10.1 mg dna/ml support) and 2 (18.9 mg dna/ml support) were obtained in buffers containing 0.25 and 0.35 m nacl, respectively. further increases in salt concentration led to a decrease in dbc 10% for both adsorbent batches. the bed compression during loading that was observed in experiments at high conductivities (achieved by adding salt) was less than that seen with low conductivity (2 ms/cm) solutions. aggregation of adsorbent particles and channeling of flow were not observed at salt concentrations above 0.1 m. the effect of different dna concentrations during loading in the presence of 0.15 m nacl was studied. it was found that increasing the dna concentration in the feed from 20 µg/ml to 40, 60 and 80 µg/ml resulted in a decrease of dbc 10% by 16, 24 and 30%, respectively. the bed compressed more slowly during loading of feedstock with low dna concentrations than with higher concentrations. the expanded bed showed a partly reversible compression behavior during feedstock loading.
this is attributed to the electrostatic interaction between dna adsorbed on the particle surfaces and rearrangements of dna strands as the number of free ligands on the adsorbent surfaces decreases during loading. large-scale production of plasmid dna for gene therapy and dna vaccination applications has become necessary as a result of the increasing number of approved protocols using non-viral vectors for gene delivery. a major challenge of large-scale plasmid production is to establish a robust cgmp manufacturing process capable of producing hundreds of milligrams or grams of a pharmaceutical-grade product. alcohol and salt precipitation are operations largely used in the early steps of plasmid downstream processes. however, there are few systematic studies on the influence of these precipitating agents on the final plasmid recovery and purity. in this work, the alcohol and salt precipitation steps used in a plasmid purification process developed by our group have been optimized aiming at large-scale production. the optimization of alcohol precipitation indicated that almost 100% of the pdna precipitated when 0.6 vol. of isopropanol was used. the studies also indicated that the precipitation profile was strongly influenced by the initial pdna concentration. finally, the plasmid recovery and purity after sequential alcohol and salt precipitation were strongly dependent on the concentrations of these precipitating agents. thus, a compromise between high recovery and purity level should be made during the development of the downstream process. comparison of novel and conventional processes for protein refolding and initial purification h. ferré 1,2 , u. jørgensen 1 , l. scale down of downstream processing unit operations is convenient for assessing process alternatives, particularly if feedstock is scarce.
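dynamic binding capacity at 10% breakthrough (dbc 10% ), the comparison metric used in several of the eba studies above, can be computed directly from a breakthrough curve; the sketch below is a minimal illustration with invented curve data, not measured values:

```python
# DBC at 10% breakthrough: feed applied per ml of adsorbent up to the
# point where outlet C/C0 first reaches 0.1. The curve below is
# illustrative, not measured data.

def dbc_10(volumes_ml, c_over_c0, feed_mg_per_ml, bed_volume_ml):
    """Return DBC10% [mg per ml adsorbent] from a breakthrough curve."""
    for v, ratio in zip(volumes_ml, c_over_c0):
        if ratio >= 0.10:
            return v * feed_mg_per_ml / bed_volume_ml
    raise ValueError("breakthrough never reached 10%")

volumes = [0, 10, 20, 30, 40, 50]            # loaded volume [ml]
ratios  = [0.0, 0.0, 0.01, 0.05, 0.12, 0.4]  # outlet C/C0
print(dbc_10(volumes, ratios, feed_mg_per_ml=0.02, bed_volume_ml=5.3))
```

a finer curve (or linear interpolation between points) would sharpen the estimate; the principle is unchanged.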
in this study it was imperative to use the smallest possible scale for comparison of a new system for continuous protein refolding and direct expanded bed adsorption (eba) capture with a traditional process composed of the discrete operations of batch renaturation, centrifugation, microfiltration and packed bed chromatography (pbc). minimisation of the scale was restricted by the eba step, the smallest practical scale being a 1-cm diameter column with 5-6 cm of settled bed, expanded two-fold. in order to permit a fair comparison, a similar column diameter and adsorbent volume were used in the packed bed process. in both alternatives, chelating media charged with cu 2+ and a feedstock of denatured hat-tagged human beta-2 microglobulin (hat-hβ2m) were used. following batch refolding and clarification, the performance of the packed column was severely hampered due to fouling of the top adapter. reducing the protein loaded onto the packed bed to 50% of the working dbc led to a recovery of 68.5% at a purity of 87% and a 4.6-fold concentration. the eba-based process performed unimpeded and its productivity was calculated to be 8% higher than that of the process employing a packed bed. however, due to the severe scale restrictions placed on the eba process, which limited optimisation, significant productivity improvements of eba over packed bed are expected at larger scale. high gradient magnetic filtration has the potential for rapid processing of large volumes of crude bioprocess liquors when magnetic adsorbents are employed. the binding of a protein to a superparamagnetic solid support provides a unique selective 'handle'. typically the focus is placed on using the magnetic handle for direct capture of a protein from a fermentation broth. however, magnetic adsorbents may provide solutions to a range of downstream processing problems and in this presentation we illustrate this with a number of case studies. using whey as a model system, we show that the extent of the tryptic hydrolysis (ca.
0.2 mg/ml added enzyme) of proteins could be controlled by adding benzamidine-linked magnetic adsorbents after a given period (4-15 min), followed by removal of the loaded adsorbents using a magnetic filter. hydrolysis was stopped effectively and approx. 50% of the added trypsin could be recovered. a coupled process was devised for the refolding and purification of inclusion body proteins. solubilised (in 8 m urea) inclusion bodies of recombinant histidine affinity tagged human beta-2 microglobulin (hat-hβ2m) were refolded by dilution in a pipe reactor (14 s), then captured directly on cu(ii)-charged magnetic immobilised metal affinity adsorbents in a second pipe reactor (10 s residence time). loaded adsorbents were retained in a magnetic filter, then washed and the protein eluted. a generic framework for the prediction of scale-up when using compressible chromatographic packings r. tran 1 , j. joseph 1 , a. sinclair 2 , y. zhou 1 , n. packed bed chromatography is the pre-eminent technique in the downstream purification of many biological products. the aspect ratio of a packed bed has a significant effect on the column pressure drop by virtue of wall support, which is reduced at low aspect ratios. this can result in unexpectedly high pressures during manufacturing caused by compression of the matrix via drag forces due to fluid flow through the bed. an accurate model to predict flow conditions at increasing scale is essential for the scaling-up of chromatographic processes and for avoiding bed compression during operation so that maximum throughput can be achieved. several studies have generated correlations which allow for the prediction of column pressure drops, but these have either been mathematically complex, which makes their practical use unfeasible, or they have used highly specific empirical constants and hence require a large number of experiments to be performed before they can be used.
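least-squares parameter estimation for power-law correlations of this kind is usually done in log space, where the correlation becomes linear in its constants; the sketch below is a minimal one-variable illustration fitting u crit = k·µ −c to synthetic data, and the functional form, data and constants are assumptions for illustration only, not the published correlation:

```python
# One-variable least-squares fit in log space: assuming u_crit = k * mu**(-c),
# taking logs gives ln(u) = ln(k) - c * ln(mu), a straight line fitted by
# ordinary least squares. The data points are synthetic, not measurements.

import math

mu    = [0.9, 1.2, 1.5, 1.85]          # viscosity [mPa s]
ucrit = [400.0, 300.0, 240.0, 194.6]   # synthetic "measured" velocities [cm/h]

x = [math.log(m) for m in mu]
y = [math.log(u) for u in ucrit]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # = -c
lnk = (sy - slope * sx) / n                        # = ln(k)
print(round(-slope, 2), round(math.exp(lnk), 1))   # exponent c and prefactor k
```

a multi-variable correlation in bed height, diameter, viscosity and agarose content would be fitted the same way, with one log-space regressor per variable.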
in this study, we have established relationships linking the critical velocity of operation to bed height (l), column diameter (d c ), feed viscosity (µ) and also to the matrix rigidity through the level of agarose cross-linking (a%). the correlation is straightforward to use and involves very few system-specific constants, thus significantly reducing the need for any preceding laboratory-scale experimentation. this paper describes the series of experiments that were performed to establish the correlation, using a range of cross-linked agarose matrices (2-10%) at various aspect ratios, fluid flow rates and viscosities (0.9-1.85 mpa s). a mathematical model was developed in which multi-variable parameter estimation was achieved by least-squares optimisation. the model can be used to predict the extent of compression in industrial chromatography applications and will be useful in the development of chromatographic operations and for column sizing. institute of process engineering, swiss federal institute of technology, zurich. e-mail: makart@ipe.mavt.ethz.ch (s. makart) simulated moving bed (smb) technology receives increasing attention in biotechnology and in the biopharmaceutical industry as it enables an increase in productivity per unit mass of stationary phase, reduced solvent consumption, and fast and reliable scale-up. combining a continuous chromatographic separation unit with a reactor should enable the production of biopharmaceuticals and fine chemicals with high purity and yield at the same time. due to the increasing demand for enantiopure intermediates in the pharmaceutical industry, biocatalytic processes are gaining importance because of their excellent enantioselectivity. yet the application of biocatalytic carbon-carbon bond formation on process scale is often hampered by an unfavourable equilibrium position and difficult downstream processing due to substrate/product mixtures.
coupling a continuous separation unit to such a process would improve its feasibility by driving the reaction to completion and thus increasing the overall yield. we will discuss the design of such an integrated biocatalytic/smb process, taking the formation of l-allo-threonine from glycine and acetaldehyde, catalysed by the glycine-dependent aldolase glya from e. coli, as a model reaction. the enzyme exhibits absolute stereoselectivity at the c-alpha atom, whereas selectivity is less strict at c-beta. in situ product removal, by the integration of an smb unit, would help maintain a high diastereomeric excess as it shortens the residence time of the products in the reactor, in addition to shifting the reaction to the product side. the in-line coupling of the chromatographic unit to the enzyme reactor requires the use of the same solvents for reaction and separation, so the choice is limited to aqueous solutions close to physiological ph, limiting in turn the possible stationary phase materials. in a screening of different cation exchangers, amberlite cg-120 ii gave promising results: threonine is more retained than glycine, acetaldehyde is poorly retained and the cofactor plp is not retained. adsorption isotherms were determined by the retention time method and an smb under process conditions was simulated. by improving the packing of the column, i.e. achieving a more even particle size distribution, we tried to further increase the efficiency of the separation step. the application of enzymes for the synthesis of optically active substances is nowadays of growing importance in the pharmaceutical industry. this requires proper cultivation of the microorganism as well as a subsequent isolation process yielding constant catalyst quality at high purity.
the goal of this project is the development of an integrated process for the production and isolation of a lipase from trichosporon beigelii and its subsequent application for the enantioselective synthesis of pharmaceutical products. the cultivation of the microorganism is optimised in laboratory- and pilot-scale fermenters in fed-batch mode. parameters such as media composition, temperature, ph and aeration rate are established. taking advantage of the localisation of the enzyme (covalently attached to the cell membrane), the first step of product isolation consists of continuous cell disruption. optimal results are achieved with the continuous bead mill disruption process (68% enzyme release with a specific activity of 0.8 u/mg of protein). the non-disrupted cells are recycled as inoculum for a new cultivation, increasing the yield of the overall process (five-fold in the pilot-scale fermenter). in order to isolate the product, two different process sequences are considered. the first consists of an extraction (peg and phosphate buffer) coupled to ion-exchange chromatography (q-sepharose ff). the second applies a precipitation step with ammonium sulphate followed by hydrophobic interaction chromatography (sepharose-hic), providing a lipase yield of 72% (8-fold higher than that provided by the combined extraction-chromatography sequence). an ultrafiltration process is used to concentrate the lipase, and its final properties (molecular weight, isoelectric point, activity, stability and kinetic data) are studied using p-nitrophenylacetate as a model substrate. the relevance of the obtained product for application in the pharmaceutical industry is demonstrated by transforming (r,s)-naproxen-methylester into (s)-naproxen acid with an enantiomeric excess of >99% (after 24 h).
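the enantiomeric excess quoted above follows the standard definition ee = (major − minor)/(major + minor); a minimal sketch with illustrative amounts:

```python
# Enantiomeric excess from the amounts of the two enantiomers:
# ee = (major - minor) / (major + minor). Numbers are illustrative only.

def enantiomeric_excess(major, minor):
    """ee as a fraction (multiply by 100 for percent)."""
    return (major - minor) / (major + minor)

# e.g. 99.6 parts (S)- vs 0.4 parts (R)-naproxen acid gives ee > 99%
print(enantiomeric_excess(99.6, 0.4))
```

the diastereomeric excess discussed for the aldolase/smb process is computed the same way over the two diastereomers.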
biotensides (sugar fatty acid esters, sfaes) nowadays find a wide range of applications in the pharmaceutical, personal care and food industries because of their biocompatibility, biodegradability and special surfactant properties. the goal of this project is the development and optimisation of an integrated process for the enzymatic synthesis of sfaes from renewable sources to be used in cosmetic formulations. the following figure shows the scheme of the overall process. commercial and also newly screened lipases are applied in the reaction between sugar and fatty acid. the degree of mixing of the initial reaction system is increased by ultrasound, taking into account the influence on the catalyst characteristics and also the necessity of an organic solvent as adjuvant. the reaction takes place in an enzymatic membrane reactor (emr) equipped with an ultrafiltration membrane which retains the catalyst. the separation of the by-product (water) from the rest of the components can be achieved by means of a pervaporation unit which, coupled to the emr, allows semi-batch operation. in order to separate the esters from the fatty acid, a stepwise elution chromatography method is developed using silica as adsorbent and ethyl acetate and methanol as eluents. with this system 91% of the dimer is isolated with a purity (hplc) of 93%. the application of a dialysis membrane technique allows the separation of 80% of the fatty acid by forming ester micelles upon changing the polarity of the organic solvent used as eluent. solubility and crystallisation properties of recombinant bacillus halmapalus α-amylase cornelius faber centre for microbial biotechnology, biocentrum dtu, building 223, 2800 lyngby, denmark a comprehensive knowledge of solubility properties is a prerequisite for the efficient design and operation of bulk enzyme recovery processes; however, complete phase diagrams are only available for very few proteins, in particular lysozyme of high purity.
here, we present the results of detailed solubility studies in aqueous solutions of an industrially relevant α-amylase of technical grade. experiments were conducted in small-scale batch mode (working volume of 1 ml). the influence of selected cations and anions from the hofmeister series on the stability of the α-amylase was examined. the hofmeister series for anions was followed in the correct order at all salt concentrations studied, i.e. from 0 to 1 m, whereas the series was reversed for monovalent cations at concentrations up to 0.5 m, with the exception of lithium. to further investigate why the position of lithium differed from the hofmeister series established for lysozyme, the zeta potential of protein solutions at low concentrations of selected salts was determined. the results of these measurements indicate a pronounced effect of lithium on the zeta potential compared to other salts. in particular, the ph of zero zeta potential (i.e. the pi) was shifted approximately 0.5 ph units towards alkaline conditions in the presence of lithium, whereas the pi stayed almost constant for sodium and potassium. since the solubility exhibits a minimum at ph values at or near the protein's pi, shifts in ph caused by salt addition are important to identify and quantify in order to avoid uncontrolled phase separation. the measurement of the zeta potential of proteins in solution holds significant promise as a tool for understanding and controlling processes that are operated close to the solubility limit and which are often plagued by uncontrolled precipitation or crystallisation and thus rely on carefully chosen operating conditions.
polyphenolic interactions with potato proteins during industrial expanded bed adsorption processing sissel løkra 1 , knut olav straetkvern 1 , bjørg egelandsdal 2 , gerd vegarud 2 : 1 department of natural science & technology, hedmark university college, n-2317 hamar, norway; 2 norwegian university of life sciences, 1432 as, norway. e-mail: sissel.lokra@lnb.hihm.no (s. løkra) in plant extracts it has been shown that polyphenols have a tendency to react with proteins, either by covalent or non-covalent interactions. these reactions can induce changes in the surface properties of the proteins and, for example, cause proteins to become insoluble and precipitate at ph values below their isoelectric point. potato proteins have a high nutritional quality and show interesting functional properties in food systems. moreover, chlorogenic acid (ca) and caffeic acid constitute about 90% of the total polyphenol content of the potato tuber. we have found expanded bed adsorption (eba) chromatography to be a method well suited for recovering industrial proteins from potato starch effluent. the process separates proteins from polyphenolic pigments, fiber and minerals. during the adsorption step, patatin, the major potato tuber protein, shows complex binding kinetics demonstrated by breakthrough curves. in addition to diffusion limitations in the eba resin, changes in protein structure and surface properties are likely to affect this adsorption behavior. reactions between ca and patatin might result in a range of interactions for different species of the same protein. this project therefore aims to assess the interactions between ca, patatin and other major tuber protein fractions and how these interactions affect protein capture in eba. changes in size and charge are screened by 2-d electrophoresis and analyzed further. samples of different protein fractions are taken from breakthrough curves and dynamic binding capacity experiments in model systems with real feedstock.
sandwich hybridisation assay for analysis of brewery contaminants s. huhtamella 1 , m. leinonen 1 , t. nieminen 1 , a. breitenstein 2 , p. neubauer 1 : 1 bioprocess engineering laboratory, university of oulu, finland; 2 scanbec gmbh, halle, germany. email: peter.neubauer@oulu.fi (p. neubauer) here we describe the development of a sensitive, cultivation-independent analytical method for the analysis of brewery contaminants which can be performed within three hours in crude sample extracts. the method is based on 16s rrna detection by a paramagnetic bead-based sandwich hybridization assay (sha) with two oligonucleotide probes designed to detect either the species or a group of contaminants. the signals were read out either by a fluorimeter (rautio et al., 2003; leskelä et al., 2005) or potentiometrically with an electric biochip instrument (ebiochip systems). this assay is advantageous over rt-pcr because it only detects viable cells and the method can be directly applied to crude cell extracts without prior purification. we describe the principle of designing and evaluating a series of group-specific lactobacillus probes and the optimisation towards effective cell lysis and high assay sensitivity. the applicability of the sha was evaluated with real brewery samples and the results were compared to routine tests. in all steps of the evaluation, the reliability and usability of the method were prioritised. the optimised method combined with a 24 h pre-enrichment period gave reliable results, had a detection limit of about 10 4 -10 5 cells per assay and was easily applicable in a brewery environment. biodesulfurization is one of the possibilities studied by researchers to attain the maximum sulfur levels imposed for the near future by governments (european directive, 2003).
rhodococcus erythropolis igts8 is a natural and strictly aerobic microorganism able to remove the sulfur atom from dibenzothiophene (dbt) in a selective way (the 4s route (oldfield et al., 1997)), yielding 2-hydroxybiphenyl (hbp) and sulfate. growth is carried out using the experimental procedure established in previous works dealing with inoculum build-up, media composition and operational conditions (del olmo et al., 2005a, 2005b). this work is focused on determining the oxygen uptake rate (our) during the production of the biocatalyst. four experiments were carried out in a biostat b fermentor (braun biotech) with the constant stirrer speed as the only variable: 150, 250, 400 and 550 rpm. the oxygen uptake rate has been determined by means of two methods: the dynamic technique, applied at different times during growth for a few seconds to avoid influences, and from the oxygen profile when the oxygen transfer rate term is known (predicted by the model proposed in a previous work (garcía-ochoa and gómez, 2004)). the our values obtained from the two techniques show the same tendency in all the runs carried out: the our value from the dynamic technique is always lower than that obtained from the oxygen profile. these values are modeled and the difference observed is explained by the cellular economy principle: during the seconds employed in the dynamic technique determinations the microorganism does not produce 4s route enzymes. different methods to recover a p. salmonis antigenic protein from recombinant e. coli cells were studied. this protein has been shown to be a highly effective in vivo vaccine. it has the ability to stimulate the salmon immune system, protecting the fish against the aggressive disease salmonid rickettsial syndrome and thereby resolving a great problem of salmonid aquaculture. biomass obtained from an iptg-induced e. coli bl21 (de3) codon plus culture was used for soluble and insoluble antigenic protein recovery.
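the dynamic technique for oxygen uptake rate used in the biodesulfurization study above amounts to reading the slope of the dissolved-oxygen trace while gas transfer is briefly interrupted; a minimal sketch with illustrative data:

```python
# Dynamic-method estimate of oxygen uptake rate (OUR): with aeration
# briefly stopped, dC/dt = -OUR, so OUR is the negative slope of the
# dissolved-oxygen trace. Data points below are illustrative only.

def our_dynamic(times_s, do_mg_per_l):
    """OUR [mg O2 / (L s)] from the endpoints of a linear DO window."""
    dt = times_s[-1] - times_s[0]
    dc = do_mg_per_l[-1] - do_mg_per_l[0]
    return -dc / dt

print(our_dynamic([0, 5, 10, 15], [6.0, 5.6, 5.2, 4.8]))
```

in the profile-based method, by contrast, our is obtained from the oxygen balance dC/dt = otr − our, with the transfer term otr supplied by a model.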
recovery by glass bead mill, freezing and thawing, osmotic shock and lysozyme/edta treatments, applied singly or in combination, was evaluated. biomass was measured by dry weight of cells, soluble protein concentration was quantified according to the bradford method, and the antigenic protein was identified by sds-page and western blot analysis. for cells treated with lysozyme/osmotic shock followed by glass bead milling, the soluble protein was 62.9% of the dry cell mass, whereas lysozyme/edta treatment and glass bead milling applied as single treatments yielded only 24.4 and 22.6%, respectively. the freezing and thawing disruption treatment released less than 5% of the soluble protein, as did the osmotic shock procedure. the sds-page and western blot analysis revealed that the antigenic protein must be purified from the insoluble cell fraction when physical or mechanical disruption methods are employed and from the soluble cell fraction when chemical or enzymatic treatments are used. we propose to investigate inclusion body formation in further studies in order to design an efficient purification procedure for the target protein. the iso-peroxidase pox2 from garlic (allium sativum) bulb, which represented the major peroxidase activity, was purified to homogeneity. the enzyme is monomeric, has a molecular mass of 36 kda and a pi around 9. the optimum temperature ranged between 25 and 40 °c, while the optimum ph was around 5. pox2 appeared remarkably thermostable since it retained 50% of its activity at 50 °c for at least 6 h. in addition, the enzyme was stable over a ph range from 3.5 to 11. kinetic constants were calculated; the apparent k m values were 500 and 150 µm for guaiacol and h 2 o 2 , respectively. the high thermostability of pox2 may represent a clear advantage in a number of processes, including immobilizing the peroxidase and using it as a biosensor to detect oxidant components such as h 2 o 2 and other peroxides.
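the apparent k m values above enter the michaelis-menten rate law v = v max ·s/(k m + s); a minimal sketch, with v max as a hypothetical placeholder:

```python
# Michaelis-Menten rate v = Vmax * S / (Km + S). Km values follow the
# abstract (500 and 150 uM for guaiacol and H2O2); Vmax is a
# hypothetical placeholder, so rates are in arbitrary units.

def mm_rate(s_uM, km_uM, vmax=1.0):
    """Reaction rate at substrate concentration s_uM [uM]."""
    return vmax * s_uM / (km_uM + s_uM)

# at S = Km the rate is exactly half of Vmax
print(mm_rate(500.0, 500.0))
```

the lower k m for h 2 o 2 (150 µm) means half-maximal velocity is reached at a lower peroxide concentration than for guaiacol.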
immobilization of pox2 was achieved by covalently binding the enzyme to a sepharose matrix (bead and membrane va epoxy). the immobilized peroxidase showed greater heat and storage stability than the soluble enzyme. the native enzyme retained 55% of its activity at 60 °c for 10 min while the immobilized pox2 retained full activity for 35 min at the same temperature. on the other hand, the free enzyme retained full activity for at least one and a half months during storage at 25 °c, and lost 50% of its activity after 2 months. the immobilized form of pox2 retained complete activity for 2 months at the same temperature. the immobilized enzyme was used to detect h 2 o 2 in some food components such as milk and fruit juices. in a second study, the same experiments were performed in order to detect the smallest quantity of h 2 o 2 added to farm milk. purpose: a new research field has been created to begin to address protein function at the level of regulation of enzyme activity. this new area has been given the name chemical proteomics, or activity-based proteomics (abps), and makes use of small molecules that can covalently attach to catalytic residues in an enzyme active site. the selectivity of the chemically reactive group allows specific proteins or protein subsets to be tagged, purified and analyzed. methods: an abps probe has three parts: a tag, a linker and a warhead. the warhead is a reactive group that attaches to catalytic residues in the active site. the linker makes a simple connection between the warhead and the tag. the tag is a fluorescent or radioactive moiety that facilitates detection. findings: several diseases such as cancer, rheumatoid arthritis and osteoporosis are associated with elevated levels of protease activity. serine hydrolase abps have been used to profile enzyme activity in a diverse range of cancer cell lines.
in studies comparing metastatic and non-metastatic human breast cancer models, it was shown that the former exhibited a higher activity of a γ-glutathione-s-transferase, an enzyme that had not previously been associated with breast cancer. discussion: additionally, abps can be used to develop robust screens for small-molecule inhibitors of a specific enzyme target within a large family of related enzymes. this method of inhibitor screening allows compounds to be assayed for both potency and selectivity against a set of related enzymes in complex biological samples. this technique is able to identify novel enzymatic proteins and drugs and has the potential to accelerate the discovery of new drug targets. a cyclodextrin glycosyltransferase (cgtase) from a newly isolated bacillus clausii strain, e16, was purified through q-sepharose, gel filtration chromatography and deae-sephadex a-50. the mw of the pure enzyme was 75 kda by sds-page. the enzyme displayed its optimum ph value and ph stability at ph 6.0 and in the range 6.0-11.0, respectively. the optimum temperature and thermostability were 55 °c and up to 60 °c for 1 h, respectively. the k m and v max were 2.85 mg/ml and 80.0 µmol/min mg, and 0.83 mg/ml and 13.45 µmol/min mg, using maltodextrin and soluble starch, respectively. the isoelectric point was 4.8 and the n-terminal region of the pure enzyme was sequenced by maldi-tof-ms. the ratio of α-, β- and γ-cd was 0.29:1.00:0.79 and 0:1:0 with maltodextrin and soluble starch at 2.5%. application of magnetic separation technology for recovery of immobilised lipases nadja schultz 1 , anke neumann 1 , george metreveli 3 , matthias franzreb 2 , christoph syldatk 1 1 chair of technical biology, university of karlsruhe, engler bunte ring 1, d-76131 karlsruhe; 2 forschungszentrum karlsruhe, institute of technical chemistry, water- and geotechnology; 3 inst. für wasserchemie, engler bunte ring 1, 76131 ka. e-mail: nadja.schultz@ciw.uni-karlsruhe.de (n. schultz). url: www.fzk.de/itc-wgt (m.
franzreb) first results are presented on the development of magnetic separation technology, known as high gradient magnetic separation (hgms), for the recovery of immobilised lipase from a 2-phase system, which should be suitable for large-scale use in the future. the application of immobilised lipases makes the reuse of the enzyme in a process possible and is therefore interesting for industrial applications. in this study immobilised lipase is used in a 2-phase system. here the new approach of recycling and reusing the lipase, immobilised on magnetic particles, from a 2-phase system with the help of the new high gradient magnetic separator (hgms) is examined at 30 ml scale. as the model enzyme for immobilisation on magnetic microparticles (polyvinyl alcohol (pva), 1-2 µm), the commercially available (novo nordisk) lipase a (cala) from candida antarctica was used. necessary for the screening of immobilisation methods and the characterisation of the immobilised lipase (candida antarctica) was the development of a robust, simple and rapid chromophoric activity assay. therefore the pnpp lipase assay was optimised for direct application to immobilised lipases (schultz et al., in preparation). furthermore, a ph-stat assay for measuring the activity of free and immobilised cala in a 2-phase system of tributyrate and buffer was optimized for this system. another important basis for the realisation of the recovery of immobilised lipases was optimisation of the immobilization technique for the lipase. furthermore, approaches were made towards the rationalisation of the generally empirically based immobilization techniques on insoluble supports. hereby we successfully applied zeta potential measurements to the immobilization behaviour of the lipase cala. to determine the operating temperature for the biomagnetic separation procedure, we studied the stability of free and immobilised lipase cala at different temperatures (37, 25, 4 and −20 °c) and at different ph values (ph 6, 7 and 8).
the optimal temperature and ph value for the free and immobilised lipase were determined. building on the methods and techniques developed so far, we are presently working intensively on demonstrating the feasibility of the recovery of immobilized lipase from a 2-phase system. challenging approaches and first results on the recovery of immobilized lipase from a 2-phase system at 30 ml scale have already been shown. nowadays glycomics raises new challenges for affinity chromatography, related to the abundance of glycoconjugates in living organisms and to the scaling-up of preparative processes. economics, efficiency and practicality dictate the search for novel chromatographic biospecific adsorbents that could contribute to enhancing the productivity of the affinity separation process. the purpose of this work was to prepare cellulose- and silica-based biospecific adsorbents with immobilized lectins and to evaluate them for the affinity chromatography of glycoproteins. the cellulose-based matrix granocel and silica coated with hydrophilic polymers were used as supports. the effects of support characteristics, such as pore size and the chemistry and density of active groups on the support surface, on lectin immobilization and on the efficiency of the adsorbents obtained were evaluated. three different methods were used for the activation of the support: oxidation with sodium periodate, modification with pentaethylenhexamine (a spacer arm of 18 atoms) and carbonyldiimidazole activation. cona and wga, two lectins of different molecular weight and shape, were selected to reveal differences resulting from size and diffusion behaviour. the chromatographic performances of the adsorbents were studied by applying two different glycoproteins (god and fetuin) carrying specific terminal glycomoieties of mannose (god) and n-acetylglucosamine (fetuin) for specific interaction with cona and wga, respectively.
the adsorbents demonstrated high affinity to glycoproteins, with a sorption capacity in the column of up to 7.4 mg per ml support and a high recovery (up to 93%). it was shown that the spacer arm affected the ligand coupling kinetics as well as the chromatographic behavior of the adsorbents obtained. the adsorption isotherms of god onto the cona adsorbents reveal an adsorption behavior with high- and low-affinity binding sites; the dissociation constants kd of the ligand-sorbate complexes are approximately 1 × 10−6 and 0.4 × 10−5 m, respectively. it is supposed that the second step is related to the sorption of solvated god onto already adsorbed god, forming sorbate dimers. quantitative methods in high throughput screening of aqueous two phase systems matthias bensch, björn selbach, jürgen hubbuch institute of biotechnology 2, forschungszentrum jülich, germany purification of biopharmaceuticals is one of the most expensive and at the same time least understood steps in bioprocesses. short time to market and the demand for cheap processes dominate today's process development for protein production. one way of reducing process costs is to implement integrative processes. aqueous two phase systems (atps) combine the advantages of removing cell debris and simultaneously purifying and concentrating the target protein, however at the cost of highly complex systems which are difficult to predict and optimize. using high throughput screening techniques in the development of atps processes thus seems to be an ideal candidate for achieving both a reduced development time and an economical process without the need for preliminarily well characterized systems. in this study, we use the robotic system tecan freedom evo™ as an automation platform for the evaluation of aqueous two phase systems. central to this workstation is the integrated hardware: liquid handler, gripper, reader and centrifuge.
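the two-site isotherm reported above for god on the cona adsorbents can be sketched numerically. the kd values below are the ones quoted in the abstract; the split of the total reported column capacity (7.4 mg per ml support) between the two site classes is a purely illustrative assumption, not a fitted value:

```python
def two_site_langmuir(c, qmax1=5.0, kd1=1e-6, qmax2=2.4, kd2=0.4e-5):
    """adsorbed god (mg per ml support) at free concentration c (mol/l).

    kd1 and kd2 follow the abstract; qmax1 + qmax2 = 7.4 mg/ml matches the
    reported total capacity, but the individual values are hypothetical."""
    return qmax1 * c / (kd1 + c) + qmax2 * c / (kd2 + c)

# at concentrations far above both kd values, both site classes saturate
# and the total adsorbed amount approaches 7.4 mg/ml
q_sat = two_site_langmuir(1e-3)
```

at concentrations between the two kd values, the high-affinity sites are nearly full while the low-affinity sites are still filling, which is what produces the characteristic two-step shape of such isotherms.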
we have created high throughput methods for rapid parameter estimation. as a first step, pipetting and mixing had to be calibrated for the use of the highly viscous polymer solutions which are common in atps. the focus of the current work lies on the integration of the automatic preparation and analysis of two phase systems at microtiter plate scale. the robotic platform can now automatically create aqueous two phase systems and measure characteristic values such as binodal curves, protein concentrations and protein distributions between the two phases. the major bottleneck of hts processes, namely the rapid analysis of impure systems, is tackled by using automated elisa tools. depending on the intended use, the high number of measured partition coefficients and yields can be used for modelling or rapid process design. today, most optimisations of chromatography separations are based on experimental work and rules of thumb. the pat initiative has opened the way for a model-based approach to downstream processing of pharmaceutical substances. this work uses a nonlinear chromatography model to optimize an ion exchange separation step. the general rate model with langmuir mpm kinetics described the behaviour of the components in the column. the optimal operating points using both productivity and yield as objective functions were found. the optimization was run with igg and bsa as target protein, respectively, to compare their optimal operating points. the requirement on the optimal operating point was a purity of at least 99%; this requirement was added to the optimization problem as a nonlinear inequality constraint. flow rate, loading volume, start salt concentration in elution, elution gradient and cut points were used as decision variables in the optimization. the more retained component, bsa, was much easier to separate from igg with a gradient elution than igg from bsa, while still retaining a high productivity and yield.
the higher load volume at the optimal operating point with bsa as target protein causes a displacement of igg and thereby improves the separation. a high productivity at the yield optimum was still possible with bsa as target protein; both a lower productivity and a lower yield were obtained with igg as target protein. optimisation and robustness analysis of a hydrophobic interaction chromatography step niklas jakobsson, marcus degerman, bernt nilsson department of chemical engineering, lund university, p.o. box 124, se-221 00 lund, sweden process development, optimisation and robustness analysis for chromatography separations are often entirely based on experimental work and generic knowledge. the present study proposes a method of gaining process knowledge and assisting in the robustness analysis and optimisation of a hydrophobic interaction chromatography step using a model-based approach. factorial experimental design is common practice in industry today for robustness analysis. the method presented in this study can be used to find the critical parameter variations and serve as a basis for reducing the experimental work. in addition, the calibrated model obtained with this approach is used to find the optimal operating conditions for the chromatography column. the methodology consists of three consecutive steps. firstly, screening experiments are performed using a factorial design. secondly, a kinetic-dispersive model is calibrated using gradient elution and column load experiments. finally, the model is used to find optimal operating conditions and a robustness analysis is conducted at the optimal point. the process studied in this work is the separation of polyclonal igg from bsa using hydrophobic interaction chromatography.
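both model-based studies above cast operating-point selection as an optimisation of productivity (or yield) subject to a purity requirement expressed as a nonlinear inequality constraint. the structure of that formulation can be made concrete with a toy sketch; the surrogate productivity and purity functions below are entirely hypothetical stand-ins for the calibrated general rate / kinetic-dispersive column models, and a brute-force grid replaces the gradient-based solver actually used:

```python
# hypothetical surrogate models -- stand-ins for the calibrated column models
def productivity(x):
    flow, load = x          # two of the decision variables, as a toy example
    return flow * load      # toy: more flow and more load -> more product per time

def purity(x):
    flow, load = x
    return 1.0 - 0.004 * flow - 0.002 * load   # toy: purity falls with both

def optimal_operating_point(purity_min=0.99, n=200):
    """maximise productivity subject to purity >= purity_min.

    a brute-force grid over bounded decision variables stands in for the
    constrained optimiser used in the actual studies (sketch only)."""
    best_p, best_x = 0.0, None
    for i in range(n + 1):
        for j in range(n + 1):
            x = (0.1 + 1.9 * i / n, 0.1 + 4.9 * j / n)
            if purity(x) >= purity_min and productivity(x) > best_p:
                best_p, best_x = productivity(x), x
    return best_p, best_x

best_p, best_x = optimal_operating_point()
```

the optimum lies on the purity boundary: productivity is increased until the 99% purity constraint becomes active, which mirrors why the constraint is formulated as an inequality rather than built into the objective.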
department of biochemistry and microbiology, ict prague, technicka 3, prague cz-166 28, czech republic the display of novel metal binding sites on the surface of a biosorbent represents a potent tool to increase its binding capacity and improve its selectivity. in this study, the 15.8 kda transcriptional regulator merr of the mercury-inducible mer operon of tn21, which exhibits high affinity and selectivity towards hg2+, was displayed on the surface of s. cerevisiae. to achieve this, merr was genetically fused with the gene encoding the c-terminal domain of α-agglutinin, which resulted in covalent attachment of the fusion protein to the cell wall glucan via a glycosylphosphatidylinositol anchor. to evaluate the performance of such a modified whole-cell biosorbent with specific regard to hg2+, we constructed a new biosensor e. coli strain, which utilizes the kanamycin resistance gene as a reporter under the control of the mer promoter. it allowed determination of hg2+ in a range of 5-100 nM by simply monitoring growth in hg2+/kanamycin-containing media. the effect of the genetic engineering of the s. cerevisiae surface with merr became significantly pronounced in biosorption experiments with solutions containing 10 µM hg2+, when the modified cells accumulated 2.5-fold more hg2+ than the control strain expressing the anchoring domain alone. sensitivity analysis of amino acids in simulated moving bed chromatography ju weon lee, chong ho lee, yoon mo koo center for advanced bioseparation technology, inha university, inchon 402-751, korea. e-mail: ymkoo@inha.ac.kr (y.m. koo) the difficulty of simulated moving bed (smb) design is that the optimization of the operating conditions relies on the determination of accurate adsorption isotherms. most smb chromatography is carried out under nonlinear conditions, and the nonlinear behavior should be considered properly in the equilibrium isotherms.
the other difficulty is that smb operation is a continuous process: all flow rates and the valve switching time must be maintained during the operation of the smb. if disturbances of the operating conditions or isotherm parameters occur, they affect the zone flow rates and the migration velocities of the solutes, and these effects change the internal profiles of the solutes, thereby decreasing the purity and the yield of the products. the objective of this work is to study the sensitivity of the smb chromatography process to the isotherm parameters and operating parameters. the separation of two amino acids, phenylalanine and tryptophan, by the smb process was selected as the model system. application of ph and po2 probes during bacillus caldolyticus fermentation: an additional approach in improving a feeding strategy johannes bader 1, boris neumann 2, karima schwab 1, milan popovic 1, rakesh bajpai 3: 1 studiengang biotechnologie, fachbereich v, tfh-berlin, seestr., 13347 berlin, germany 2 proteome factory ag, dorotheenstr. 94, 10117 berlin, germany; 3 department of chemical engineering, university of missouri-columbia, w2061 ebe, columbia, mo, usa. e-mail: popovic@tfh-berlin.de (m. popovic) bacillus caldolyticus, a thermophilic microorganism, is a good producer of thermostable liquefying α-amylase. during optimisation of the initial and feeding media for fed-batch fermentation, a two-component feed containing starch and casitone was found advantageous. to approach the optimal feeding rate, the method published by akesson et al. (2001) was extended to two-component feeding. the key idea, discussed in this presentation, was to use the po2 and ph probe signals to determine whether the feeding of one or the other component should be increased or decreased. each of the probes offers information on a different aspect of the feeding conditions. to prevent excessive feeding of starch, the ph probe is preferable.
in case of excessive casitone feeding, the po2 probe responds in a very characteristic way, enabling, together with the ph signal, a reliable and reproducible evaluation of the feeding strategy. a congruent response of the po2 and ph probes, however, indicates that the optimum feeding rate for both components is being approached. akesson, m., hagander, p., axelsson, j.p., 2001. probing control of fed-batch cultivations: analysis and tuning. contr. eng. pract. 9, 709-723. antibody immobilization by using plasma polymerized acrylic acid r. jafari, m. tatoulian, f. arefi-khonsari laboratoire de génie des procédés plasmas et traitement de surface, enscp, upmc, 11 rue pierre et marie curie, 75005 paris, france the objective of this work is to produce a surface containing a high density of cooh functions on polymer (ps) beads for the covalent immobilization of antibodies. we have investigated the plasma polymerization of acrylic acid on polystyrene (ps) beads in a fluidized bed reactor. for such an application, there is a strong need to obtain a stable plasma-polymerized acrylic acid (ppaa) coating, resistant to washing with water. different physico-chemical analyses (water contact angle measurements (wca), xps and sem analysis) have been used to characterize the ppaa coating deposited on the ps beads under different experimental conditions. the xps results showed that the pretreatment of the surface of the beads before deposition of acrylic acid plays an important role in the stability of the ppaa layer. the instability of the coating is partly due to the fact that under certain conditions the coatings are soluble in water, and partly due to the poor adhesion of the growing ppaa coating to the hydrophobic polymer beads. xps as well as tof-sims gives evidence of the immobilization of the antibody: both techniques detect nitrogen on the surface of the treated beads, which proves the presence of the immobilized antibodies.
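the two-probe feeding decision described above can be sketched as a single controller step. the probing idea follows akesson et al. (2001), but the boolean interface, the direction of the adjustments and the 10% step size are illustrative assumptions, not the authors' implementation:

```python
def adjust_feeds(ph_responds, po2_responds, f_starch, f_casitone, step=0.1):
    """one decision step of a two-component probing feed controller (sketch).

    a probe pulse that still produces a clear ph (resp. po2) response is
    taken to mean starch (resp. casitone) is not yet in excess, so that feed
    may be increased; a vanished response means the component is
    accumulating and its feed rate is reduced. the 10% step is an
    illustrative assumption."""
    f_starch *= (1 + step) if ph_responds else (1 - step)
    f_casitone *= (1 + step) if po2_responds else (1 - step)
    return f_starch, f_casitone
```

when both probes respond congruently, both feed rates are adjusted in the same direction, mirroring the abstract's criterion that a congruent po2/ph response signals the approach to the common optimum feeding rate.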
under optimum conditions the ppaa coatings show a nitrogen uptake which varies between 6.5 and 9% of the apparent stoichiometry of the surface. holst department of medical physiology, the panum institute, university of copenhagen, dk-2200 copenhagen, denmark glp-1 (glucagon-like peptide-1), a peptide of 30 amino acids secreted by endocrine cells in the gut in response to meal ingestion, was discovered during a systematic search for gut factors capable of enhancing insulin secretion. it turned out to be the most efficacious insulin releaser known and, unlike other factors, was shown to retain its insulinotropic activity in patients with type 2 diabetes. subsequent research has documented that the peptide not only releases insulin from the beta cells, but also enhances all steps of insulin biosynthesis, up-regulates beta cell gene transcription, and has trophic effects on the beta cells. the latter include proliferation of existing cells, neogenesis from ductal precursor cells, and inhibition of apoptosis. the peptide also inhibits glucagon secretion, reduces gastric emptying and reduces appetite and food intake. because of these actions, glp-1 administered to patients with type 2 diabetes dramatically lowers blood glucose as well as glycated hemoglobin levels, and reduces body weight. however, natural glp-1 is extremely rapidly metabolized in the body, and the problem has been how to convert the unstable peptide into a clinically useful agent. the two main problems are its susceptibility to enzymatic degradation by the ubiquitous dipeptidyl peptidase iv (dpp-iv) and its rapid renal elimination. a related peptide isolated from the saliva of a lizard, exendin-4, was found to be a full agonist of the glp-1 receptor, to be resistant to dpp-iv and to be cleared more slowly by the kidneys. this peptide was highly effective in clinical studies and has now (30/4) been approved for diabetes treatment by the fda.
other approaches include acylation of glp-1, whereby it attaches to albumin in the body and acquires resistance to dpp-iv as well as a slow renal elimination. this analogue (liraglutide) also has favourable clinical effects. fusion proteins of glp-1 and larger, slowly eliminated proteins in the body are currently being evaluated. small-molecule, orally available inhibitors of dpp-iv have been demonstrated to protect endogenous glp-1 from degradation and to be efficacious in both experimental and clinical diabetes, and numerous inhibitors are currently in clinical development. the incretin hormones are released from gut endocrine cells upon meal ingestion. they enhance glucose-induced insulin secretion and may be responsible for up to 70% of postprandial insulin secretion. the incretin hormones are glucagon-like peptide-1 (glp-1) and glucose-dependent insulinotropic polypeptide (gip). in patients with type 2 diabetes (2dm) the incretin effect is severely reduced or absent. in 2dm patients the secretion of gip is normal, but its effect on insulin secretion is almost completely lost. glp-1 secretion, on the other hand, may be impaired, but its insulinotropic actions are preserved and it may restore insulin secretion to near normal levels. substitution therapy with glp-1 might therefore be possible. glp-1 is a product of the glucagon gene and its actions include: (1) potentiation of glucose-induced insulin secretion; (2) stimulation of the expression of β-cell genes essential for insulin secretion, including the insulin gene; (3) stimulation of β-cell proliferation and neogenesis (by enhancing endocrine differentiation of duct cells) and inhibition of β-cell apoptosis; (4) inhibition of glucagon secretion; (5) inhibition of gastrointestinal secretion and motility, notably gastric emptying; and (6) inhibition of appetite and food intake. these actions make glp-1 particularly attractive as a therapeutic agent for 2dm.
thus, continuous subcutaneous administration of glp-1 for 6 weeks resulted in a 5 mmol/l reduction in mean plasma glucose and a reduction in hba1c of 1.3%; a weight loss of 2 kg; improved insulin sensitivity; improved β-cell function; and the treatment was associated with no significant side effects. unfortunately, glp-1 is rapidly destroyed in the body by the ubiquitous enzyme dipeptidyl peptidase iv (dpp-iv). clinical strategies therefore include: (1) the development of metabolically stable analogues of glp-1, i.e. activators of the glp-1 receptor; and (2) inhibition of dpp-iv. orally active dpp-iv inhibitors have proven successful in experimental diabetes and several companies are now trying to develop clinically suitable inhibitors. so far the clinical experience is limited, but recent clinical studies have provided proof of concept. metabolically stable analogues/activators include the structurally related lizard peptide exendin-4, or analogues thereof, as well as glp-1-derived molecules that bind to albumin and thereby assume the pharmacokinetics of albumin. these molecules are effective in animal experimental models of type 2 diabetes, and have been employed in clinical studies of up to 52 weeks' duration. on the basis of these studies it can be concluded that a therapy of type 2 diabetes mellitus based on stimulation of glp-1 receptors is likely to be effective and to become a clinical reality within the not too distant future (1-4). recombinant activated coagulation factor vii (rfviia) was developed to treat bleeding in hemophilia patients who have developed inhibitors against fviii or fix, and has been demonstrated to have an efficacy rate of 80-90% in major surgery as well as in serious bleeding episodes in such patients. using rfviia as a hemostatic agent in severe hemophilia is a new concept of treatment: it is not a substitution therapy, but uses a pharmacological dose of exogenous rfviia to compensate for the lack of fviii or fix.
the administration of extra rfviia has been found to result in binding not only to tissue factor (tf), but also to the negatively charged phospholipid surface of thrombin-activated platelets. hemostasis occurs on surfaces: it is initiated on tf-expressing cells as a result of the exposure of tf, which is not normally exposed to the circulating blood, following an injury to the vessel wall. tf is a true receptor protein with an intramembranous part and an intracellular tail; its ligand is fvii/fviia. as soon as tf is exposed to the blood, it forms complexes with the fviia already present in the circulation. these complexes activate fx and provide the initial limited amount of thrombin molecules, activating the co-factors fviii and fv, as well as fxi and platelets. following the thrombin activation of platelets, negatively charged phospholipids are exposed on the outer surface of the platelets. on this surface most coagulation proteins bind tightly, facilitating the conversion of fx into fxa and the full thrombin burst necessary for the formation of a tight fibrin hemostatic plug resistant against premature lysis. in hemophilia patients the initiation of hemostasis is essentially normal, but, since they lack fviii or fix, they do not form the fviiia-fixa complex necessary for full thrombin generation on the activated platelet surface. as a consequence, the fibrin plug formed in hemophilia is loose, fragile and easily dissolved, resulting in continuous bleeding. pharmacological doses of rfviia have been demonstrated to mediate direct binding of rfviia to the negatively charged thrombin-activated platelet surface, thereby generating thrombin in the absence of fviii/fix. through this mechanism, hemostasis is generated in hemophilia patients independently of fviii/fix. furthermore, by generating more thrombin at an increased rate, the formation of stable, tight fibrin hemostatic plugs is facilitated.
such fibrin plugs are more resistant against premature lysis and help not only to initiate but also to maintain hemostasis. based on its capacity to enhance thrombin generation locally on the activated platelet surface, rfviia has been used to ensure hemostasis in situations other than hemophilia, such as platelet defects including thrombocytopenia. recently, rfviia was shown to be hemostatically effective in patients with profuse bleeding as a result of extensive trauma and tissue damage. in these patients, with a complex hemostasis pattern including a host of changes leading to impaired hemostatic function, extra rfviia seems to help generate a burst of thrombin resulting in the formation of a stable hemostatic plug more resistant against the ongoing lysis. in patients with intracerebral haemorrhage, a single dose of rfviia was recently found to limit the expansion of the haemorrhage, thereby leading to improved functional outcome. institute for medical microbiology and immunology, panum 18.3.12, blegdamsvej 3, dk-2200 copenhagen n, denmark. email: s.buus@immi.ku.dk complete genomes from several species, including many pathogenic microorganisms, are rapidly becoming available along with the corresponding "proteomes". even at the peptide level, the diversity of the proteome is enormous and easily represents a unique imprint of the originating organism. it is perhaps not surprising that the immune system considers peptides as key targets. recent immunological advances have shown that mhc molecules act as peptide selectors for immune recognition. we have proposed to generate accurate predictions of peptide binding to mhc and to use these to identify immunogenic epitopes directly from genomic data. we have developed an iterative, data-driven immunobioinformatics approach in which data are used to generate predictors, and predictors are used to select new and complementary data for the next iteration.
we have demonstrated the superior performance of this approach compared to a random data selection approach. we have also developed an efficient approach to select the most informative mhc molecules to investigate. the resulting immunobioinformatics resource represents an immediate and powerful application and interpretation of genomic data, and will enable a rational approach to immunotherapy in the future. allergen-specific immunotherapy is a causal treatment for ige-mediated allergic diseases such as hay fever, and it has traditionally relied on preparations derived from aqueous extracts of various natural allergenic source materials. the cloning and production of an increasing number of allergens through the use of dna technology has not only facilitated the characterisation and analysis of the allergenic proteins, but also provided the opportunity to use these recombinant proteins instead of natural allergen extracts for the diagnosis and therapy of allergic disease. detailed physicochemical, biochemical and immunological characterisation is essential for the comparison of natural and recombinant proteins, and also provides a basis for developing derivatives. chemically modified allergens with attenuated ige reactivity are currently used for immunotherapy in order to enable high doses to be achieved with a minimized risk of inducing allergic side reactions. dna technology provides the opportunity to develop and produce hypoallergenic allergen variants using strategies including gene mutation. the design of such variants must ensure that t cell reactivity and immunogenic activity are retained in order to preserve therapeutic potential. the recombinant allergens and their derivatives have several advantages over natural allergen extracts.
they are relatively easy to produce in consistent pharmaceutical quality; the problems of natural extract standardisation can be avoided completely; the relative concentrations of the individual allergens can be controlled to obtain optimal dosages; non-allergenic proteins are excluded; and the possible risks of contamination are avoided. the first clinical trials with grass pollen allergens and birch pollen hypoallergenic variants have yielded very encouraging results. the use of recombinant polyclonal antibodies (pabs) may improve the treatment of disease caused by complex targets, such as infectious agents, compared to monoclonal antibody therapy. symphogen has developed a method for the reproducible production of target-specific, fully human pab compositions, so-called symphobodies. the antibody genes are first isolated from donors with an immune response against the target, and the antibodies are screened for specificity. subsequently, the pabs are expressed in mammalian cells using the sympress technology, which is based on site-specific integration. this procedure ensures that each of the expression constructs encoding the antibody genes is stably integrated at the same site in each of the host cells, thereby eliminating genomic position effects and differential growth and production rates. further, the sympress technology comprises the generation of a polyclonal working cell bank (pwcb) which is used as inoculation material for manufacturing. these cells display sufficient genetic stability to enable controlled gmp production of recombinant polyclonal antibodies. symphogen's first product, sym001, is a recombinant human polyclonal rhesus d-specific symphobody preparation consisting of 25 different anti-rhesus d antibodies. this product is intended for the treatment of idiopathic thrombocytopenic purpura and the prevention of hemolytic disease in newborns.
recombinant anti-rhesus d symphobodies were produced and shown to be biologically active against rhesus d. the expression technology provided a compositional reproducibility between batches which is sufficient for the manufacturing of such a polyclonal product for clinical use. scaled-up production for clinical trials is currently ongoing. stem cells play an important role in renewing tissues such as skin and cornea; they are responsible for the continuous generation of the differentiated epithelium. we have characterized stem cells of the skin and cornea in situ, and their fate in vitro in human skin reconstructed by tissue engineering, using keratin (k)19. in the outer root sheath of the hair follicle, stem cells (label-retaining cells) present in the basal layer of the bulge area express k19 and present a loosely arranged keratin filament network and low levels of k14 protein in their cytoplasm. in addition, another stem cell population (also label-retaining) is present in the first suprabasal layers; these cells exhibit a very dense keratin network and express k17. three-dimensional tissue constructs (dermis and epidermis) obtained by the self-assembly approach of tissue engineering allow the preservation of k19-positive stem cells in the basal layer of the epithelium. in the eye, the stem cells are located in the limbus but not in the central cornea, and they express k19. the epithelium of the reconstructed cornea is thinner and more transparent than that of reconstructed skin. the characterization of stem cells in reconstructed tissue is essential to evaluate the long-term survival of these tissues in vitro, but also after grafting. these human reconstructed tissues are developed for fundamental (physiological, toxicological) studies and clinical applications such as transplantation for the permanent replacement of damaged organs. lg is holder of the canadian research chair (cihr) on stem cells and tissue engineering.
alessandra gliozzi physical department, university of genoa, 16146 genoa, italy hollow nanometer-sized capsules can be prepared by means of different techniques. the first "nanocapsules" were liposomes; however, these are too unstable for many medical or pharmaceutical applications. in contrast, the recently developed polyelectrolyte capsules prepared by means of the layer-by-layer technique are much more stable and seem to be a very promising way of coating living cells or tissues in order to prevent or reduce their immune rejection after implantation. several observations on single living cells encapsulated by the alternating adsorption of oppositely charged polyelectrolytes will be presented. the most relevant result is that the cells preserve their metabolic activity and are still capable of dividing and performing specific functions. moreover, a technique to immobilize coated cells expressing green fluorescent protein in defined arrays by microcontact printing of polyelectrolytes will be presented. finally, tests performed to study the induction of fibrosis and vascularization by nanocapsules implanted in rat kidney and liver will be presented. over the past decade we have developed methods to generate spontaneously and synchronously beating tissue equivalents from neonatal rat heart cells in the culture dish. these tissue equivalents display the key morphological and functional features of intact myocardium and have been termed engineered heart tissue (eht). to generate ehts, heart cells are mixed with freshly neutralized liquid collagen i, matrigel and growth supplements and grown in a circular casting mold around a central cylinder, which subjects the cells to a continuous mechanical load. this process is reinforced by cyclic mechanical stretch. we use eht mainly for two purposes: as a test bed for the effects of pharmacological or genetic manipulations, and for cardiac repair.
as a cell culture model, ehts compare favourably with standard 2d monolayer cultures of neonatal rat cardiac myocytes and freshly isolated adult cardiac myocytes. advantages of ehts are their functional similarity to intact heart muscle, the ability to easily measure force of contraction under mechanical load, the possibility to transfect cardiac myocytes inside ehts with adenovirus at high efficiency, and the reproducibility in large series. a disadvantage is that contractile function as measured at the end of the culture period also integrates influences on tissue development, cell-cell connections, extracellular matrix production and non-myocytes. at present we are working on downscaling the eht method to a 96-well format for screening purposes. to use ehts for cardiac repair, we created multi-looped ehts from five circular ehts, large enough to cover the infarct scar 14 days after coronary artery ligation in rats. the ehts survived and formed a layer of muscle tissue on top of the infarct scar. they restored undelayed anterograde impulse propagation over the scar, prevented further ventricular dilatation, normalized end-diastolic pressure and relaxation, and partly restored contraction of the scar. thus, the study provides evidence that implanting ehts onto infarcted hearts can improve cardiac contractile function after myocardial infarction. the goal of tissue engineering is the development of skin, bone and even organs to restore, maintain and improve tissue function within the body. the current paper focuses on the investigation of the in vitro growth of osteoblast cells in different types of scaffolds. three of the scaffolds were made of pcl (polycaprolactone) with 10% glycerol and 10% hca (hydroxyapatite). two of these scaffolds were made by compression molding, and one was made by fused deposition modeling using the stratasys system. the fourth, cerabio, was a commercially available, totally ceramic product.
the pores in the compression-molded scaffolds were obtained by incorporating 75% by volume of sugar particles of either 150 or 595 µm, which were later removed by leaching. the fused deposition scaffold was made by placing the filaments in a predetermined arrangement; the stratasys system was a computer-designed model. the scaffolds were seeded with the hfob 1.19 human fetal osteoblast cell line with vigorous shaking overnight and incubated at 37 °c and 5% co2. observation of the seeded scaffolds was made after 3 days and 9 days of incubation, and the seeded cells were stained with bcip/nbt at 37 °c overnight. the cell proliferation in the 595 and 150 µm scaffolds appeared approximately the same, with a possible advantage for 595 µm. the cerabio sample demonstrated the greatest proliferation among the four scaffolds studied, and the stratasys sample exhibited a different type of cell adhesion, with the cells clustered in the interstices of the structure. denise freimark, ruth freitag, valérie jérôme chair of process biotechnology, university of bayreuth, d-95448 bayreuth, germany. e-mail: denise.freimark@uni-bayreuth.de (d. freimark) tissue engineering is emerging as an alternative to bone grafts for the regeneration of defects that do not heal spontaneously. ultimately, the development of an optimum carrier and the identification of ideal inductive factors and cells may enable tissue engineering to provide an improvement over bone grafts in the future. bone formation and repair require a complex cascade involving growth factors, cytokines and angiogenesis. at present, the molecular mechanisms that control gene expression in bone-forming cells, in the embryo as well as in the adult, are not fully understood. several factors such as bone morphogenetic proteins (bmps), transforming growth factor beta (tgf-β), vascular endothelial growth factor (vegf) and insulin-like growth factor (igf) have been identified, and their ability to stimulate bone formation in vitro and in vivo has been investigated.
while much is known about these factors per se, less is known about the genetic regulation of artificially stimulated osteogenesis. interestingly, some in vitro investigations showed that only optimal growth factor concentrations lead to effective bone formation, whereas higher concentrations had deleterious effects, which suggests some variation in the activated regulation pathways. therefore, one of our goals is to analyze kinetics, dose-dependence and synergistic effects of growth factors and cytokines on regulation pathways of bone formation. moreover, the optimal vascularization of the scaffold is a major hurdle in the development of engineered bone. it is well known that (i) vascular invasion precedes bone growth and (ii) osteogenesis takes place in the vicinity of newly formed vessels. thus, inadequate bone vascularization is associated with decreased bone formation. further analysis of the intercommunication between endothelial cells and osteoprogenitors in co-culture systems could provide key information that could thereafter be used to solve this problem. we propose to add some new knowledge to this complicated puzzle. a first step in our investigation is the production of some of the growth factors mentioned above in recombinant form. these factors are expressed in a novel vector (ptriex™; novagen) which allows recombinant protein production in prokaryotic or eukaryotic systems with a single plasmid. afterwards, we analyze potential synergistic effects of these factors as well as kinetic and dose-dependent parameters on the genetic regulation of downstream pathways in osteoblast cultures. in parallel, we are developing an in vitro system allowing us to investigate the intercommunication between endothelial cells and osteoprogenitors. there has been significant interest in the therapeutic and scientific potential of human embryonic stem (es) cells since they were first isolated in 1998.
if human es cells could be differentiated into suitable cell types, stem cells might be used in cell replacement therapies for degenerative diseases such as type i diabetes and parkinson's disease, or to repopulate the heart following myocardial damage. however, there is a significant shortage of high-quality human es cell lines and few research groups have experience in the propagation and manipulation of these cells. we are addressing this important issue using the combined expertise of the stem cell biology laboratory and the assisted conception unit at king's college, london. with local ethical approval and under licence from the uk human fertilisation and embryology authority, we have been establishing high-quality human es cell lines from a novel source of human embryos. to date, we have derived three human es cell lines and are now focused on the generation of therapeutically important cell populations, including cells that may have clinical application in degenerative and traumatic injury to the brain and spinal cord, heart, retina and other target organs. wallenberg neuroscience center, department of physiological sciences, lund university, bmc a11, s-221 84 lund, sweden. cell replacement therapy for parkinson's disease is based on the idea that implanted dopamine neurons may be able to substitute for the lost nigrostriatal neurons. in rodent and primate models of parkinson's disease it has been shown that transplanted dopamine neuroblasts can re-establish a functional innervation and restore dopaminergic neurotransmission in the area of the striatum reached by the outgrowing axons; that the grafted neurons are spontaneously active and release dopamine in an impulse-dependent manner, at both synaptic and non-synaptic sites; and that they can reverse or ameliorate some of the parkinson-like motor impairments induced by damage to the nigrostriatal system.
clinical trials in patients with advanced parkinson's disease have shown that dopamine neuroblasts obtained from fetal human mesencephalic tissue can also survive and function in the brains of pd patients, restore striatal dopamine release, and ameliorate impairments in motor behavior. the principal limitations of this approach are the problems associated with the use of tissue derived from aborted human fetuses, and the large numbers of donors needed to obtain good therapeutic effects. until now, transplantation of dopamine neurons has focused primarily on differentiated neuroblasts and young postmitotic neurons, at the stage of neuronal development that is optimal for survival and growth of the grafted cells. however, progenitors taken at earlier stages of development might prove more effective. efforts are now being made to expand multipotent neural stem or progenitor cells in vitro, and to control their phenotypic differentiation into a dopaminergic neuronal fate. initial results suggest that in vitro expanded cells can survive and function after transplantation to the striatum in the rat pd model, but the overall yield of surviving dopamine neurons has been very low. with further development, expanded progenitors or dopamine neuron precursors, possibly in combination with cell engineering techniques, may offer new sources of cells for replacement therapy in pd. stem cell therapy has been very much in vogue for several years now. like gene therapy before it, it has raised unrealistic hopes of cures being available imminently. unlike gene therapy, it can cite proof of principle in the well-established practice of bone marrow transplantation, which is actually a good example of stem cell therapy. however, most of the uses that are now touted as targets for stem cell therapy envisage the conversion of the stem cells into lineage-restricted progenitor cells or, more commonly, the final differentiated cell type. such conversions are extremely difficult to initiate and control.
the procedures involve the manipulation, differentiation, and expansion of cell cultures in the laboratory, with unknown long-term effects on the genetics and physiology of the cells. these issues are compounded when one considers, as source material, human embryonic stem cells, where not only does the final cell product require significant scrutiny, but there are also safety issues surrounding the persistence of undifferentiated cells. on top of all these challenges are commercial (for stem cell companies), clinical, and regulatory pressures which will impact heavily on the pace of progress. nonetheless, despite all these hurdles, various academic groups and companies are making significant progress, and examples of such developments in diabetes and cardiovascular repair will be given. stem cells have the unique ability to perpetuate themselves while continually replenishing tissues throughout the life of an organism. the era of cellular and tissue regeneration for the treatment of disease and the effects of aging has indeed begun. it has been known that mechanical factors play an important role in the regulation of cell physiology. it is therefore reasonable to believe that mechanical factors also play a significant role in the metabolic activity and differentiation of mesenchymal stem cells (mscs). in this study, we investigated the viscoelasticity of individual bone marrow-derived adult human mesenchymal stem cells (hmscs), and the role of a specific cytoskeletal component, f-actin microfilaments, in the mechanical properties of individual hmscs. the mechanical properties of hmscs were determined using the micropipette aspiration technique coupled with a viscoelastic solid model of the cell. for the hmscs under control conditions the instantaneous young's modulus e0 was found to be 886 ± 289 pa, the equilibrium young's modulus e∞ 372 ± 125 pa, and the apparent viscosity 2714 ± 1626 pa·s.
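the three viscoelastic parameters quoted above can be combined into a single relaxation modulus. the sketch below assumes the common standard-linear-solid (kelvin-body) parameterization e(t) = e∞ + (e0 − e∞)·exp(−t/τ) with τ = μ/(e0 − e∞); the exact constitutive relation fitted in the study may differ in detail:

```python
import math

def relaxation_modulus(t, e0=886.0, e_inf=372.0, mu=2714.0):
    """effective young's modulus (pa) at time t (s) for a standard linear
    solid; e0, e_inf and mu are the mean hmsc values quoted in the abstract."""
    tau = mu / (e0 - e_inf)          # relaxation time constant, ~5.3 s here
    return e_inf + (e0 - e_inf) * math.exp(-t / tau)

# the modulus decays from the instantaneous value e0 toward equilibrium e_inf
print(relaxation_modulus(0.0))    # instantaneous response: 886.0 pa
print(relaxation_modulus(60.0))   # essentially the equilibrium modulus
```

with these mean values the cell relaxes with a time constant of roughly five seconds, which is consistent with the creep timescales typically probed in micropipette aspiration.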
after exposure to 2 μm of the f-actin-disrupting chemical agent cytochalasin d, the young's moduli of hmscs decreased by up to 72% and the apparent viscosity increased by 167%. these findings suggest that microfilaments are crucial in providing the viscoelastic properties of the hmscs, and that changes in their structure and properties may significantly influence the mechanical properties of hmscs. pharmacologic transcription control of desired transgenes is essential for gene-function analysis, drug discovery, biopharmaceutical manufacturing, design of complex artificial regulatory networks, precise and timely reprogramming of key cell characteristics for gene therapy, and engineering of preferred cell phenotypes for tissue engineering. capitalizing on our recent advances in the design of small molecule-responsive transcription control modalities, we have used conditional molecular interventions for (i) improvement of specific productivity in biopharmaceutical manufacturing, (ii) transdifferentiation of therapeutically relevant cell phenotypes and (iii) design of artificial microtissues. we will also report on a completely new dimension of transgene control as well as engineering of hysteretic and epigenetic gene networks in mammalian cells. chronic diseases are a growing burden for the individual and society alike. one key factor driving this increase is an ageing population. currently, there are 600 million persons aged 60 years or over and this number is predicted to triple by the middle of the 21st century. effective prevention of irreversible damage to major organ systems requires early diagnosis and treatment, yielding significant quality of life for the individual and sparing valuable health care resources. even when safe and effective medicines are available, a remaining obstacle to successful therapy is patient compliance.
historically, vaccines have been one of the major advances towards the longevity we enjoy today, with compliance rates close to 100%. hence, vaccines for early treatment of chronic diseases are ideally positioned for long-term therapy, and will take away the burden of self-medication associated with orally active drugs. here we will discuss a new generation of therapeutic vaccines based on virus-like particles (vlps). by displaying target molecules in a highly repetitive manner on vlps, it is possible to break b cell unresponsiveness in experimental animals as well as in humans. using such vaccines in animals, chronic diseases such as hypertension, alzheimer's disease, obesity and rheumatoid arthritis could be treated. furthermore, vaccination against nicotine resulted in high nicotine-specific antibody titers in humans, greatly facilitating smoking cessation in immunized individuals. interdependence of the impact of methanol and oxygen supply on protein production with recombinant pichia pastoris. n.k. khatri, f. hoffmann, martin-luther-university halle-wittenberg, institute for biotechnology, halle d-06120, germany. e-mail: f.hoffmann@biochemtech.uni-halle.de (f. hoffmann). the methylotrophic yeast pichia pastoris is a potent expression system for secretion of recombinant proteins. methanol, the inducer of foreign gene expression, is also a substrate with high oxygen demand, which can lead to sudden oxygen depletion upon induction. thus, the supply rates of methanol and oxygen are major process parameters during protein production with recombinant pichia pastoris. limiting dosage of methanol allowed maintenance of oxygen-sufficient conditions during production of a single-chain antibody fragment, but the product was degraded from the c-terminal end. in contrast, full-length product accumulated with controlled methanol concentrations despite oxygen limitation. the volumetric methanol uptake rate is limited by the oxygen transfer capacity of the reactor.
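the coupling of methanol uptake to the reactor's oxygen transfer capacity can be illustrated with a simple stoichiometric bound. the sketch below is a back-of-the-envelope estimate, not the authors' model: it assumes complete oxidation of methanol (ch3oh + 1.5 o2 → co2 + 2 h2o), so the 1.5 mol-o2-per-mol-methanol factor is the maximum oxygen demand (carbon routed into biomass requires less oxygen), and the example otr value is assumed, not taken from the study:

```python
MEOH_MOLAR_MASS = 32.04  # g/mol, methanol

def max_methanol_uptake(otr_mmol_per_l_h, o2_per_meoh=1.5):
    """largest volumetric methanol uptake rate (mmol/l/h and g/l/h) that a
    given oxygen transfer rate (otr) can sustain, assuming full oxidation."""
    mmol = otr_mmol_per_l_h / o2_per_meoh
    grams = mmol * MEOH_MOLAR_MASS / 1000.0
    return mmol, grams

# e.g. an assumed stirred-tank otr of 150 mmol o2 per litre and hour
mmol, grams = max_methanol_uptake(150.0)
print(mmol, grams)  # 100.0 mmol/l/h, ~3.2 g/l/h
```

feeding methanol faster than this bound, as the abstract describes, forces the culture into oxygen limitation regardless of the feed strategy chosen.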
higher methanol concentrations decreased the biomass yield and thereby increased the specific methanol uptake rate. this enabled prolonged production and yielded fivefold higher product concentrations. at the same time, the accumulation of small-molecular-weight contaminants was reduced. dosage of pure oxygen accelerated methanol uptake, growth and production. switching to do-stat mode upon oxygen depletion led to an arrest of product accumulation, in contrast to persistent methanol feeding. the productivity was tenfold higher than without oxygen. combined with high methanol concentrations, however, fast methanol uptake led to intoxication of the cells and an early stop of production. flow cytometry revealed that the perturbation of oxygen metabolism was followed by partial lysis of the culture. recombinant production of therapeutic proteins poses severe challenges due to their complexity (cystines, subunits, size). formation of the correct disulphide bridges is a prerequisite for activity but difficult to achieve in a prokaryotic host, where proteins mainly fold post-translationally, as opposed to eukaryotic co-translational folding. thus the recombinant products are often not soluble and/or active. that is why different production parameters (e.g. host, induction conditions, temperature, compartment, proteinaceous fusion partners, co-expression of chaperones, foldases) are applied in order to obtain functional recombinant proteins. the impact of all these strategies cannot be predicted, and every problem of the production (expression, solubility, activity) might need to be solved for every target protein separately. nevertheless, much effort has been put into this for almost three decades, because these proteins are important targets for the pharmaceutical industry. human growth factors influencing cellular proliferation and/or differentiation are one example.
this case study gives an overview of strategies tested within the development of two processes leading to an optimised yield of active protein. murine wnt-1 (wnt family) possesses 23 conserved cysteines (most likely all involved in disulphide bridge formation and one in post-translational modification) and, despite various attempts over many years, could only recently be successfully produced in e. coli (fahnert, 2004). the other target protein is human collagen prolyl-4-hydroxylase, a heterotetramer consisting of two α-subunits and two β-subunits. the α-subunit strongly aggregates if produced separately, whereas the β-subunit is pdi and is suggested to have a chaperone function. therefore a sequential induction strategy was proposed for this protein. the moss physcomitrella patens has recently been recognized as an ideal producer of recombinant proteins with respect to glycosylation. due to the elaborate post-translational capabilities of moss cells, the glycosylation patterns have been manipulated to obtain proteins similar to those found in animal cells. protein expression using moss in suspension offers important advantages in comparison to other systems, e.g. cho cells. the recombinant proteins can be targeted into the mineral medium, simplifying the downstream processing. moreover, there are neither known moss viruses nor plant viruses that are pathogenic for humans. the moss is cultivated axenically in a filamentous stage, the so-called protonema. a pilot 30 l tubular photoreactor is used to characterize the response of p. patens to variations in the culture conditions. the phototrophic culture in bioreactors is systematically investigated, where light quantity and quality, stress, concentration of phytohormones, and moss morphology influence the differentiation, growth, and protein expression.
a tight control of the moss morphology in suspension, quantified by image analysis, has been shown to be advantageous in order to delay cell differentiation and maintain the carbon dioxide uptake in long bioreactor runs. the introduced perfused culture system, by means of cross-flow filtration, allowed for continuous product separation and concentration, and feedback of the productive cells. the characterization of this highly controlled culture system is presented and the potential of p. patens as an alternative tool for molecular farming is discussed. rna interference (rnai) is an evolutionarily conserved, endogenous mechanism for sequence-specific gene silencing that uses small double-stranded rnas (called short interfering rnas or sirnas) to direct cleavage or prevent translation of homologous mrnas. harnessing rnai for therapy presents an opportunity for potentially treating a wide variety of diseases. the main obstacle is delivering sirnas into the cytosol of target cells in vivo. although we were able to protect mice from autoimmune hepatitis by hydrodynamic tail vein injection of sirnas targeting fas, this delivery method is unlikely to be adaptable for human use. alternate strategies to deliver sirnas in vivo as small-molecule drugs, using currently available, clinically acceptable injection methods that have shown promise in mouse models, will be discussed. these include local delivery to mucosal surfaces and delivery into specific cells via cell surface receptors using an antibody fragment fused to protamine. these sirna complexes silence gene expression in vivo only in cells bearing the targeted receptor. experiments showing efficient, effective and cell-specific delivery in a mouse tumor model will be discussed. in addition, encouraging preliminary data using rnai for a microbicide to prevent sexually transmitted infection will be presented.
rna interference (rnai) holds significant promise as a therapeutic approach to silence disease-causing genes, particularly those that encode "non-druggable" targets. the key hurdle for rnai therapeutics is in vivo delivery. a critical requirement for achieving systemic rnai in vivo is the introduction of "drug-like" properties, such as stability, cellular delivery and tissue biodistribution, into synthetic sirnas. our progress in achieving in vivo silencing of endogenous genes with chemically modified sirnas will be discussed. hiv-1 replication in human t cells can be inhibited by stable expression of a short hairpin rna targeting the viral nef gene (shrna-nef). however, hiv-1 escape variants emerge after prolonged culturing, and all but one escape mutant acquired a mutation in the shrna-nef target sequence. we observed single and multiple nucleotide substitutions, but also partial or complete deletion of the target sequence. these results demonstrate the sequence-specificity of this antiviral approach. we observed an inverse correlation between the level of resistance and the stability of the shrna/target-rna duplex for most of the escape mutants. however, two escape variants did not follow this pattern, including an escape mutant with a single point mutation at position −7 upstream of the target sequence. these mutants provide a much higher level of resistance than expected based on duplex stability, which is obviously not affected in the −7 mutant. we demonstrate that these mutants adopt an alternative rna secondary structure that occludes the target sequence. this results in reduced shrna-nef binding and provides a novel mechanism for rnai resistance. to avoid viral escape, one should ideally target hiv-1 with multiple effective shrnas against conserved genome sequences. we performed a large-scale screening to identify such targets, and we have identified at least nine genome segments that can be targeted effectively with shrnas.
these potent antivirals are currently being assembled in a lentiviral vector for gene therapy applications in hiv-infected individuals. furthermore, we will describe approaches to forecast viral escape routes and to effectively block such evolutionary paths with additional rnai measures. in this study we analyzed the effect of antibodies against electronegative ldl on the development of atherosclerotic lesions in low-density lipoprotein receptor-deficient (ldlr−/−) mice. two groups of ldlr−/− mice (eight females) fed 0.5% cholesterol-enriched chow were used. the first group received a monoclonal antibody against electronegative ldl (100 μg) and the second one received pbs (controls). additionally, two other groups (eight males) of ldlr−/− mice were treated with a polyclonal antibody against electronegative ldl (100 μg) or pbs (controls). antibodies were administered by the intravenous route one week before starting the hypercholesterolemic diet and then every week over an experimental period of 21 days. afterwards, quantification of the atherosclerotic plaque area of the heart and aortic arch was done by analysis of the slices stained with oil red o/hematoxylin/light green using the image-pro plus software. passive immunization with either monoclonal or polyclonal antibodies against electronegative ldl significantly reduced the atherosclerotic plaque areas in atherosclerosis-prone ldlr−/− mice. in conclusion, antibodies against electronegative ldl administered by the intravenous route may play a protective role in atherosclerosis. supported by fundação de amparo à pesquisa do estado de são paulo (fapesp, scholarships to d.m.g., l.s. and a.b. and grants to m.h.k. and d.s.p.a.). the enzyme asparaginase from the prokaryote escherichia coli or erwinia chrysanthemi is used for the treatment of lymphoblastic leukaemia. the drug causes immunological reactions despite its therapeutic efficiency.
asparaginase may also be obtained from saccharomyces cerevisiae, and this enzyme could provide an alternative to its bacterial counterparts. in this study, the periplasmic, nitrogen-regulated asparaginase ii from s. cerevisiae, which is encoded by the asp3 gene, was cloned and expressed in the methylotrophic yeast pichia pastoris under the control of the aox1 gene promoter. the recombinant p. pastoris strain was cultured in shake flasks and in a 2 l instrumented bioreactor. in both cases, specific enzyme yields seven-fold higher than those of a nitrogen-derepressed strain of s. cerevisiae were observed, reaching 800 u/g dry cell mass. high cell density cultures carried out in the 2 l bioreactor, in which 107 g dry cell mass/l was attained, resulted in a dramatic improvement in asparaginase fermentation: enzyme yields of 85,600 u/l and productivities of 1083 u/l·h were measured. department of biochemistry and food chemistry, biotechnology, university of turku, tykistokatu 6, biocity 6th floor, 20540 turku, finland. e-mails: lorenzo.galluzzi@utu.fi; deadoc@libero.it; deadoc@aliceposta.it (l. galluzzi). the bacterial luciferase operon from the bacterium photorhabdus luminescens has been used, since its first description, for exceptionally diverse applications. these have ranged from environmental monitoring to cell tagging, from the analysis of cellular metabolism to the high-throughput screening of novel compounds. the wild-type luxcdabe operon has been engineered in countless ways (for instance by changing the order of the constituent genes, by optimizing the codon usage and by coupling it to many promoters) and has been expressed in prokaryotic and eukaryotic organisms in order to meet precise research and commercial needs. upon expression of the operon, light is emitted as the side product of a chemical reaction catalyzed by the luciferase enzyme, an αβ heterodimer encoded by the luxa and luxb genes.
the reaction involves the oxidation of a long-chain aliphatic aldehyde and reduced flavin mononucleotide (fmnh2), with the liberation of excess free energy in the form of a blue-green light at 490 nm. the luxcde genes code for the polypeptides (transferase, synthetase, and reductase) forming the fatty acid reductase complex that catalyzes the conversion of fatty acids into the long-chain aldehyde required for the luminescent reaction. noteworthy is that both atp and nadph are required for the production of the substrates for the luciferase, while neither is involved in the actual light-emitting reaction (wilson and hastings, 1998). recently, the coupling of the luciferase operon to regulated promoters led to the construction of genetically modified bacteria able to sense the presence of chemicals and to respond, in a dose-specific manner, with light emission. this approach has been applied to the detection of antibiotics in samples from the food industry as well as to the detection of heavy metal ions in environmental samples (kurittu et al., 2000; bechor et al., 2002). in addition, it opened the possibility of screening wide libraries of new compounds, looking for molecules with pre-determined features, able to induce the bioluminescent response by de-repressing the lux operon transcription when incubated with the appropriate bacterial sensor. the high throughput and low costs are the main advantages of this system, which shows as well a certain degree of specificity (galluzzi and karp, 2003; galluzzi et al., 2004). nevertheless, since the in vivo bioluminescence relies upon a complex network of biochemical reactions, under certain circumstances the light emission is not a direct consequence of the lux operon transcriptional induction but more likely originates at a post-translational stage.
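the luciferase reaction described above can be summarized as follows (rcho denoting the long-chain aliphatic aldehyde and rcooh the corresponding fatty acid):

```latex
\mathrm{FMNH_2} + \mathrm{RCHO} + \mathrm{O_2}
  \xrightarrow{\text{luciferase (LuxAB)}}
\mathrm{FMN} + \mathrm{RCOOH} + \mathrm{H_2O} + h\nu \;(\lambda \approx 490\,\mathrm{nm})
```

the luxcde-encoded fatty acid reductase complex then regenerates rcho from rcooh at the expense of atp and nadph, which is why these metabolites influence light output without taking part in the light-emitting step itself.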
as a matter of fact, one can suppose that a change in light emission will be observed whenever a change in the concentration of one or more substrates of the luxαβ reaction occurs. consequently, all the molecules sharing the ability to impair the delicate chemical equilibrium among the compounds involved in bioluminescence will be sensed by the bacteria as inducing compounds. this, in turn, will result in a loss of specificity of the assay. we investigated this aspect of whole-cell assays based upon the bacterial luciferase operon for drug discovery by means of a reporter plasmid in which the luxabcde genes, rearranged and optimized for expression in gram-positive organisms, are under the control of the qacr regulatory region from staphylococcus aureus. [fig. 1 caption: after 60 min (black downward arrow) of incubation at 37 °c under vigorous shaking, the following concentrations of trimethoprim were added to the growing cells: 25 μg/ml (triangles), 50 μg/ml (circles) and 250 μg/ml (rhombs). water was administered to control cultures (squares). light emission was quantified every 30 min by means of the wallac victor 2 multilabel counter (perkin-elmer, turku, finland) and normalized to the absorbance, measured at 600 nm with the same device. multi-96 white-walled, transparent-bottomed plates were used for the assays (nalge nunc, usa). between the measurements the plates were kept at 37 °c under vigorous shaking. filled symbols refer to the absorbance values (left-side y-axis); open symbols to the normalized counts per second (right-side y-axis).] non-pathogenic s. aureus rn4220 cells bearing the pqaclux plasmid (cultivated in lb broth supplemented with 0.5% d-glucose and 10 μg/ml chloramphenicol) emit light upon specific transcriptional induction with quaternary ammonium compounds, widely used as surface disinfectants and in many over-the-counter drugs.
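the normalization used in the reported measurements (luminescence counts scaled by culture absorbance, so that light output can be compared across cultures of different biomass) can be sketched as follows; the numeric values in the example are purely illustrative, not data from the study:

```python
def normalized_cps(cps, od600):
    """luminescence counts per second divided by culture density (od600)."""
    if od600 <= 0:
        raise ValueError("od600 must be positive")
    return cps / od600

def induction_ratio(treated_cps, treated_od, control_cps, control_od):
    """fold change of density-normalized light emission, treated vs. control."""
    return (normalized_cps(treated_cps, treated_od)
            / normalized_cps(control_cps, control_od))

# illustrative, assumed readings only:
print(normalized_cps(50_000, 0.5))                  # 100000.0 cps per od unit
print(induction_ratio(90_000, 0.45, 50_000, 0.5))   # ≈ 2.0-fold increase
```

without this per-density normalization, a compound that merely slows growth (as trimethoprim does) would be confounded with one that genuinely changes per-cell light output.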
however, incubation of the same cells with an inhibitor of the dihydrofolate reductase enzyme, trimethoprim (sigma-aldrich chemie, steinheim, germany), enhanced in a dose-dependent manner the light emission observed upon induction with the optimal concentration (1 μg/ml) of benzalkonium chloride (bc). the extent of this increase in luminescence varied from 5-10% to more than 200%, according to the trimethoprim concentration and to the measurement time. interestingly, when the same plasmid is carried by escherichia coli xl1 cells, the lux operon is expressed constitutively (since the qacr regulatory region is not functional in xl1 cells) and at much higher levels than in induced rn4220 cells. in this experimental system as well, incubation with trimethoprim results in a dramatic increase of the luminescent signal from the cultures. the explanation for these observations can be found in the molecular mode of action of trimethoprim. the inhibition of dihydrofolate reductase, indeed, directly leads to the accumulation of its substrates, among which is nadph, deeply entangled in the biochemical network of reactions centred on light emission from the lux operon. nadph provides the reducing power to restore the reduced flavin mononucleotide pool and is also involved in the production of the long-chain aldehyde, both substrates of the luxαβ heterodimer (wilson and hastings, 1998). [fig. 2 caption: xl1/pqaclux growing cells were incubated with the following concentrations of trimethoprim: 25 μg/ml (triangles), 50 μg/ml (circles) and 250 μg/ml (rhombs). water was administered to control cultures (squares). light emission and absorbance measurements were performed as previously described for rn4220 cells. filled symbols refer to the absorbance values (left-side y-axis); open symbols to the normalized counts per second (right-side y-axis).] the diverse levels of growth observed for rn4220 and xl1 cells can be accounted for by the different antimicrobial activity exerted by trimethoprim towards gram-negative and gram-positive cells, and by the lower activity of the promoter which, in the pqaclux plasmid, drives the transcription of the selection marker (chloramphenicol acetyl transferase). in conclusion, here we demonstrate that the use of light emission from the bacterial luciferase as a transcriptional reporter has to be very carefully controlled, since some molecules (here trimethoprim) could mimic to some extent a specific promoter activation by impairing the delicate intracellular biochemical equilibrium. [fig. 3 caption: simplified scheme depicting the bacterial folate metabolic pathway. only part of the reactions and compounds are reported. trimethoprim inhibits the nadph-dependent reduction of dihydrofolic acid into tetrahydrofolic acid catalyzed by dihydrofolate reductase. this results in the accumulation of both substrates, which become available for other reactions, and in the depletion of tetrahydrofolate, the major c1 carrier in the synthesis of purines, thymidine, glycine, methionine, and pantothenate in bacteria. for antimicrobial purposes trimethoprim is often associated with sulfonamides, with which it displays a synergistic activity (since they act on the same pathway but at an earlier reaction) (scholar and pratt, 2000).] on the reverse side of the coin, however, it has to be noted that the lux reporter system could be used in many different in vivo experimental setups, where changes in the intracellular concentration of nadph, atp or other metabolites would be sensitively detected.
we have constructed five pichia pastoris strains harboring in their respective genomes three different growth hormone cdnas (22 and 20 kda human growth hormones and bovine growth hormone), a shrimp (litopenaeus vannamei) trypsinogen cdna and a bacterial (bacillus subtilis) phytase gene. in all cases, the same kind of expression vector and the same transformation technique were used. each dna was fused in frame to the saccharomyces cerevisiae alpha-factor secretion signal to direct the secretion of the foreign protein into the culture medium. the induction of each recombinant strain, all with his+ and mut+ phenotypes, was carried out in shake flasks using buffered minimal methanol medium (bmm), and growth rates on methanol were determined. the production level of secreted proteins was evaluated by protein analysis by sds-page. furthermore, the expression with three of the constructed strains was performed in a 5-l bioreactor and the level of secreted protein determined in each case. both human growth hormones (hghs) were secreted into the culture medium with a high degree of purity, hghs representing up to 64% of total proteins in the crude fermentation medium, at 3-18 mg/l. neither bovine growth hormone, shrimp trypsinogen nor bacterial phytase was detected in methanol-induced cultures of the respective strains, even though the same culture conditions were used for all p. pastoris strains. the growth rates on methanol differed between some strains. in fermentor cultures the hgh production was increased, and shrimp trypsinogen was produced and secreted into the culture medium after improving the culture conditions. bovine growth hormone was detected only when the culture conditions were modified. the protein production level for each strain was affected by the properties of each recombinant protein produced.
Competitive advantages of a diagnostic method for invasive amoebiasis using preserved antigenic extracts without enzymatic inhibitors. M.S. Flores, E. Tamez. We have patented a method to diagnose invasive amoebiasis using a novel assay that preserves the antigenicity of extracts with high protease content without using enzymatic inhibitors (IC:MC). The available tests for serologic diagnosis of invasive amoebiasis, such as ELISA and indirect haemagglutination (IHA), do not give consistent results in endemic zones. Here we show the advantages of the assay and the validation of this diagnostic test for invasive amoebiasis. We demonstrated the reduction of proteolytic activity of IC:MC compared with that of the crude extract and of the crude extract with enzymatic inhibitors. We present the IC:MC SDS-PAGE pattern and the Western blot (WB) pattern useful for diagnosis. To assess the clinical utility of this test, we examined the WB obtained with sera from patients with different liver diseases; 90 patients had invasive amoebiasis and 45 patients had other liver diseases. The results were compared with those of the IHA test. We also tested the accuracy of the WB using sera from people with multiple intestinal parasites, such as Giardia lamblia, Hymenolepis nana, Blastocystis hominis, Entamoeba coli, etc. The sensitivity of the WB using the preserved amoebic antigens was 99%, specificity was 100%, positive predictive value was 100%, negative predictive value was 98% and accuracy was 99%. The WB did not exhibit cross-reactions with sera from persons with intestinal parasites. Our test performed better than the IHA test commonly used in endemic zones. These results show the improvement obtained by using the preserved amoebic antigens in diagnostic tests, and they demonstrate the diagnostic accuracy of our new WB test. Antibacterial and antifungal activity of Heracleum sphondylium subsp.
artvinense. Yasemin Kaçar 1, Sema Tan 2, Aysun Ergene 2, Perihan Güler 2, Semra Mirici 3, Ergin Hamzaoglu 2, Ahmet Duran 4, Sinem Yildirim 2: 1 Mersin University, Faculty of Science and Literature, Department of Biology, 33800 Mersin, Turkey; 2 Kırıkkale University, Faculty of Science and Literature, Department of Biology, 71450 Yahsihan-Kırıkkale, Turkey; 3 Akdeniz University, Faculty of Education, Department of Biology Education, 07400 Antalya, Turkey; 4 Selcuk University, Faculty of Education, Department of Biology Education, 42300 Konya, Turkey. Turkey harbours a very large number of plant species: about 9,222 species are concentrated in the region between Asia and Europe, and many have been used in folk medicine to treat various ailments. The genus Heracleum L. (Apiaceae) includes more than 70 species worldwide and is represented by 17 species in Turkey, seven of which are endemic; Heracleum sphondylium subsp. artvinense is endemic to Turkey. Ethanolic and aqueous extracts of Heracleum sphondylium subsp. artvinense were investigated for their antimicrobial activities against six bacterial species (E. faecalis, E. coli, S. aureus, P. aeruginosa, L. monocytogenes, Shigella) and two yeasts (C. albicans, C. krusei). Both the ethanolic and the aqueous extract showed antimicrobial activity against the Gram-negative bacterium Shigella and gave the best activity against C. albicans. The development of artificial vectors that allow nucleic acids to be transfected into mammalian cells is crucial for extending gene therapy. Synthetic vectors based on lipid molecules are particularly attractive because of their potential safety; however, the low encapsulation efficiency of nucleic acids is one of the problems to be solved. Recent advances in DNA-lipid complexes have reduced this drawback, and some lipid molecules are now used as transfection agents.
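Stepping back to the invasive-amoebiasis WB abstract above: its reported performance figures (sensitivity 99%, specificity 100%, PPV 100%, NPV 98%, accuracy 99%) follow from a standard 2×2 confusion table. A minimal Python sketch; the individual counts (89 true positives, 1 false negative, 45 true negatives, 0 false positives) are inferred from the reported group sizes of 90 and 45 sera and are an assumption, not data stated in the abstract:

```python
# Diagnostic-test metrics from a 2x2 confusion table.
# Counts below are INFERRED from the abstract's group sizes
# (90 amoebiasis sera, 45 other liver diseases), not stated in it.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return standard diagnostic-test metrics as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv": 100 * tp / (tp + fp),            # positive predictive value
        "npv": 100 * tn / (tn + fn),            # negative predictive value
        "accuracy": 100 * (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 89/90 amoebiasis sera WB-positive, all 45 controls negative.
m = diagnostic_metrics(tp=89, fp=0, fn=1, tn=45)
# Rounded, these reproduce the reported 99 / 100 / 100 / 98 / 99%.
```

With these counts the NPV is 45/46 ≈ 97.8%, which rounds to the reported 98%, so the inferred table is internally consistent with the abstract.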
In particular, cationic lipids interact with negatively charged cell surfaces and nucleic acids. The former interaction results in delivery of the nucleic acids directly across the cell membrane, while the latter improves the efficiency of nucleic acid entrapment. In this study, we developed a preparation method for "nanovesicles" containing nucleic acids based on reverse micellar solubilization. Since DNA interacts spontaneously with cationic amphiphiles, complete extraction of the DNA molecules into an organic phase using dimethyl distearyl ammonium bromide (2C18AB), an oil-soluble cationic surfactant, is achieved. Re-encapsulation of the reverse micellar droplets solubilizing DNA by water-soluble surfactants facilitates the formation of nanovesicles, and a high salt concentration at the re-encapsulation step promotes the production of DNA-containing nanovesicles. Eventually, more than 80% of the DNA was encapsulated in these nanovesicles under the condition of 6 M NaCl and 5% (w/v) cetyltrimethylammonium bromide (CTAB). In addition, the nanovesicles prepared at 45 °C were smaller than those prepared at room temperature; the resultant 2C18AB/CTAB nanovesicles were ca. 16 nm. The asymmetric and chemically modifiable vesicle surface is attractive for the design of gene delivery systems. Moreover, such a small DNA (or RNA) carrier has potential for novel gene therapy applications via the skin. The key players in clinically important inflammatory diseases, endothelial cells and leukocytes, communicate through membrane-bound cell adhesion molecules (CAMs). Obvious strategies for therapeutic intervention include specific means to affect the expression of CAMs on the cells involved. RNA interference (RNAi) is a well-known means of achieving specific gene inhibition. For this study, two CAMs were chosen, namely vascular cell adhesion molecule-1 (VCAM-1) as a target for inhibition and intercellular adhesion molecule-1 (ICAM-1) as a non-target reference.
We designed short interfering RNA (siRNA) oligos against VCAM-1 in order to selectively inhibit the expression of this CAM in cultured human umbilical vein endothelial cells (HUVEC). Real-time RT-PCR showed an 80% down-regulation of VCAM-1 mRNA, while the expression of ICAM-1 remained unaffected; neither CAM was affected by non-specific siRNA. In order to further substantiate the potential use of siRNA for therapeutic purposes, we set out to investigate two things: (1) does VCAM-1-specific siRNA affect the amount of VCAM-1 on the surface of HUVEC? (2) Is adhesion between leukocytes and endothelial cells affected by VCAM-1-specific siRNA? The manufacturing of plasmid DNA (pDNA) is crucial to obtain a consistent product for gene therapy applications. Although flowsheets for pDNA production are established on the basis of experience, simulation tools provide valuable help in evaluating alternatives. A process designed to produce 23 kg pDNA/year was analysed with the SuperPro Designer tool. The target pDNA is amplified in Escherichia coli. After harvest, alkaline lysis is used to disrupt cells and release pDNA and impurities. Precipitations with isopropanol and ammonium sulphate are performed to concentrate/pre-purify pDNA prior to hydrophobic interaction chromatography. Experimental data were used as input for simulation. Inventory analysis identified water, yeast extract, tryptone, isopropanol and ammonium sulphate as the major raw materials; the major share of raw material costs (50%) is related to fermentation components. Economic analysis indicates a unit production cost of $375/g pDNA. For a selling price of $1667/g, the payback time was 1.2 years and the ROI was 88.6%.
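The cost figures in the pDNA process analysis above fit together arithmetically. A minimal sketch of the underlying relations; the capital investment value is a hypothetical placeholder (the abstract does not report it), and the reported 1.2-year payback and 88.6% ROI additionally reflect SuperPro Designer's tax and depreciation conventions, which are omitted here:

```python
# Back-of-the-envelope check of the reported pDNA process economics.
# Figures from the abstract: 23 kg pDNA/year, $375/g production cost,
# $1667/g selling price. `investment` is a HYPOTHETICAL value chosen
# only to illustrate how payback time and ROI relate to annual profit.

annual_output_g = 23_000          # 23 kg pDNA per year
unit_cost = 375                   # production cost, $/g
price = 1667                      # selling price, $/g

margin_per_g = price - unit_cost                    # gross margin, $/g
annual_gross_profit = annual_output_g * margin_per_g

investment = 35_000_000           # hypothetical capital investment, $
payback_years = investment / annual_gross_profit    # payback = capital / annual profit
roi_percent = 100 * annual_gross_profit / investment  # ROI = annual profit / capital
```

With this assumed investment the payback comes out near the reported 1.2 years; the exact reported ROI of 88.6% depends on the simulator's net-profit definition rather than the simple gross-margin ratio used here.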
An environmental analysis highlighted the replacement of the isopropanol precipitation by a microfiltration step as a benefit, which would: (i) reduce the cost of raw materials (13.8%), (ii) reduce the environmental impact associated with isopropanol (70%) and (iii) reduce the costs associated with the treatment/disposal of liquid waste (32.3%). Preparation of chitosan microspheres for controlled release of somatotropin. S. Simsek, J. Introduction: Proteins and peptides have received extensive interest for their therapeutic use in clinical applications. To achieve high administration efficacy of proteins, polymeric particulate carriers have been developed as an effective way to control the drug release profile and to protect the protein molecules from degradation. Somatotropin, also known as growth hormone, is a protein hormone of about 190 amino acids and is of considerable interest as a drug used in both humans and animals. Chitosan, a natural linear polyaminosaccharide, is obtained by alkaline deacetylation of chitin; properties such as biodegradability, low toxicity and good biocompatibility make it suitable for use in biomedical and pharmaceutical formulations. The aim of this study was to prepare chitosan microspheres containing somatotropin and to investigate the in vitro release properties of these microsphere formulations. Methods: Somatotropin-chitosan microspheres were prepared as follows: chitosan was dissolved in acidic solution (2%) containing polysorbate 80. Sodium sulphate solution (20% w/v) containing somatotropin was added to the chitosan solution and mixed at 500 rpm for an hour. The resulting suspension was centrifuged at 15,000 rpm for 15 min at 4 °C. The formed microspheres were freeze-dried and sieved. In vitro release studies were performed in pH 7.4 phosphate buffer; samples were removed at timed intervals and analysed by the Bradford protein assay method. Results: Microspheres were obtained using chitosan.
The protein encapsulation efficiency was between 95 and 99%, and the average particle size of the microspheres was between 48 and 57 μm. During the in vitro release studies, a burst effect was observed with the chitosan microspheres; to decrease the burst effect, glutaraldehyde, beta-cyclodextrin and poly(ethylene oxide) were added to the formulations. Conclusion: According to our results, modified chitosan microspheres are promising vehicles for controlled-release somatotropin delivery. Cancer immunotherapy using hyperthermia with magnetic nanoparticles and dendritic cells. K. Tanaka, A. Ito, T. Kobayashi, T. Kawamura, S. Shimada, K. Matsumoto, T. Saida, H. Honda. Department of Biotechnology, School of Engineering, Nagoya University, Nagoya, Aichi 464-8603, Japan. E-mail: h041306d@mbox.nagoyau.ac.jp (K. Tanaka). Our hyperthermia system utilizes magnetic nanoparticles covered with a lipid layer including cationic lipid (magnetite cationic liposomes, MCLs) as a heating mediator, and necrotic cell death is induced by locally generated heat. In this process, heat shock proteins (HSPs) are strongly induced and released; the released HSPs act as a danger signal, carrying antigens to antigen-presenting cells (APCs) and promoting the maturation of dendritic cells (DCs). DCs are potent APCs that play a pivotal role in regulating immune responses in cancer. In the present study, we investigated the therapeutic effects of DC therapy combined with MCL-induced hyperthermia on B16 melanoma. In an in vitro study, when immature DCs were pulsed with B16 cells heated at 43 °C for 30 min, MHC class I/II, the costimulatory molecules CD80/CD86 and the chemokine receptor CCR7 were up-regulated in the DCs, resulting in DC maturation. C57BL/6 mice bearing a B16 melanoma nodule were subjected to combination therapy using hyperthermia and DC immunotherapy. Mice were divided into four groups: group I (control), group II (hyperthermia), group III (DC therapy) and group IV (hyperthermia + DC therapy).
Complete regression of tumors was observed in 60% of mice in group IV, while no tumor regression was seen among mice in the other groups. Increased CTL and NK cell activity was observed in an in vitro cytotoxicity assay using splenocytes from the cured mice treated with the combination therapy, and the cured mice rejected a second challenge of B16 melanoma cells. This study has important implications for the application of MCL-induced hyperthermia plus DC therapy in patients with advanced malignancies as a novel cancer therapy. Fermentation of a marine bacterium for the production of cytotoxic compounds. Vicky Webb 1, Els Maas 1, Eiichi Akaho 2, Hiroto Kambara 2, Debbie Hulston 1, Anna Kilimnik 1: 1 Marine Biotechnology, National Institute for Water and Atmospheric Research Ltd, Kilbirnie, Wellington, New Zealand; 2 Faculty of Pharmaceutical Sciences, Kobe Gakuin University, Kobe 651-2180, Japan. Marine bacteria are a potential source of novel compounds for the pharmaceutical industry. New Zealand marine bacteria were isolated from a variety of sources, and bacterial supernatants were initially screened using a cell-based MTT cytotoxicity assay. Two bacteria were chosen for further fermentations in different media to optimise cytotoxic compound production. The media, selected for their diverse ingredients, were either carbon-rich, nitrogen-rich, starch-rich or a basic seawater medium. The fermentations were extracted using ethyl acetate and methanol prior to assaying. The results showed that cytotoxic compound production was enhanced tenfold in the starch-rich medium compared to the other media. The level of cytotoxicity appeared to depend on the cell line used, with the epithelial lung carcinoma cell line A549 being more sensitive to the cytotoxic compounds than the human fibroblast cell line MRC-5.
Isolation and identification of marine bacteria from deep-sea sediments. Els Maas 1, Cara Brosnahan 1, Vicky Webb 1, Helen Neil 2, Phil Sutton 2: 1 Marine Biotechnology, National Institute for Water and Atmospheric Research Ltd., Kilbirnie, Wellington, New Zealand; 2 Oceanography, National Institute for Water and Atmospheric Research Ltd., Kilbirnie, Wellington, New Zealand. Polystyrene microplates were activated with crotonic acid by utilizing a common Co-60 gamma source. The well surfaces have been studied in terms of optical quality, protein (h-IgG) binding capacity and stability. A significantly enhanced total capacity and strength of protein binding to the grafted surfaces was demonstrated, as compared with passive adsorption of the proteins to untreated surfaces. The majority of routine laboratory tests for the measurement of rheumatoid factor (RF) are semi-quantitative; in order to achieve an accurate, sensitive and specific RF assay, we developed an enzyme-linked immunosorbent assay (ELISA) kit for human IgM-RF. Stable and reproducible binding of the antigen (human IgG) to the wells of the microtiter plate is a prerequisite for an RF-ELISA kit. The principle of the assay is that antibodies in the serum of patients with rheumatoid arthritis (RA), commonly rheumatoid factor (RF), are directed against the Fc part of human IgG; in most cases RF belongs to the IgM class. The wells of the microtiter plate are coated with the antigen (h-IgG), and antibody (IgM-RF) binding to the immobilized antigen is detected by adding enzyme conjugate (anti-human-IgM-HRP) to the wells; a substrate is used for the colour reaction. The performance characteristics of the assay were: the coefficient of variation (CV) of the intra- and inter-assay was 4.4% and 2.2%, respectively; linearity ranged from 86 to 112%; and recovery for three different sera ranged from 89 to 106%.
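The intra- and inter-assay figures quoted above are coefficients of variation, CV = (standard deviation / mean) × 100, computed over replicate measurements. A minimal sketch; the replicate OD values are made-up illustrative numbers, not data from the abstract:

```python
# Coefficient of variation (CV) for assay repeatability, as quoted
# for the IgM-RF ELISA. The replicate optical-density values below
# are HYPOTHETICAL illustrative numbers, not data from the abstract.
from statistics import mean, stdev

def cv_percent(replicates):
    """CV of replicate measurements, in percent (sample std dev / mean * 100)."""
    return 100 * stdev(replicates) / mean(replicates)

intra_assay_ods = [0.95, 1.00, 1.02, 0.98, 1.05]  # hypothetical replicate ODs
cv = cv_percent(intra_assay_ods)
```

Intra-assay CV is computed over replicates within one run, inter-assay CV over the same sample measured across independent runs; both use the same formula.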
The data presented in this paper indicate that the activation of polystyrene microtiter plates by gamma rays can be used for the preparation of an IgM-RF kit, and possibly for other immunoassay techniques. Overcoming the nuclease barrier to gene expression during trafficking of plasmid DNA vectors. A.R. Azzoni 1, A. Tavares 2, G.A. Monteiro 1, D.M.F. Prazeres 1: 1 Centro de Engenharia Biológica e Química (CEBQ), Instituto Superior Técnico, 1049-001 Lisboa, Portugal; 2 Instituto Gulbenkian de Ciência, Rua da Quinta Grande 6, 2780-156 Oeiras, Portugal. E-mail: azzoni@ist.utl.pt (A.R. Azzoni). Inefficient nuclear delivery of plasmid DNA (pDNA) vectors is thought to be a bottleneck to gene transfer in gene therapy and DNA vaccination utilizing non-viral delivery systems. One of the main barriers encountered by pDNA vectors during trafficking to the nucleus is degradation by a population of endo-/exo-nucleases. This barrier may be partially circumvented by shielding the pDNA from the nuclease-rich cell environment with adjuvants or by using nuclease inhibitors. A different approach that has been studied at the CEBQ is the generation of pDNA vectors that are a priori more resistant to nuclease action. In this work, the nuclease barriers to gene expression are being studied with the aim of generating pDNA with improved resistance to nucleases and thus higher transfection efficiency. By engineering the labile plasmid sequences, new plasmid vectors with improved resistance to physico-chemical and biological degradation are being generated, as indicated by an extended half-life of the supercoiled isoforms during storage and when the plasmids were exposed to the nucleases present in eukaryotic cell lysates and mouse plasma. The intracellular trafficking of the new plasmid vectors through the cytosol of mammalian cells was then assessed by fluorescence in situ hybridisation (FISH), and the expression of the reporter protein (EGFP) was detected by fluorescence microscopy.
Identification and evaluation of antibacterial phytochemicals of fishbone fern (Nephrolepis cordifolia). Rikhia Chakraborty 1,2, Promod Kumar Verma 2: 1 Department of Cancer Biology, Lerner Research Institute, 9500 Euclid Avenue, Cleveland, OH 44195, USA; 2 Department of Biotechnology, Guru Nanak Dev University, Amritsar, Punjab 143005, India. E-mail: riar5400@rediffmail.com (R. Chakraborty). In this study, different aqueous and organic extracts from the fern Nephrolepis cordifolia were screened for antibacterial effects. Protein and lipid extracts were tested for antibacterial activity first. Subsequently, crude extracts of leaves, roots and stems were prepared in methanol, ethanol, chloroform, hexane, petroleum ether, diethyl ether and water using optimized standard protocols. Each fraction was tested for antimicrobial effect by the agar well-diffusion assay and the paper-disc method, and the antibacterial spectrum against which the fern is active was thus determined. The dosage for optimum activity was also determined for each of the extracts. Bacillus and Staphylococcus were used as the indicator test organisms. The results were encouraging: the extracts remained effective even at day 54, showing that the antibacterial properties were not due to changes in external factors or to physiological effects. Ethanol, methanol and chloroform extracts from the subaerial portions had strong antimicrobial properties. The agar well-diffusion and paper-disc diffusion assays performed on the different solvent fractions gave comparable results. A dose of 2.5 mg was adequate for maximum effect against B. circulans, S. aureus, S. epidermidis and Streptococcus sp.; 7.5 mg was an effective dosage for Klebsiella pneumoniae; 10 mg was required for Mycobacterium bovis and E. coli. Pseudomonas showed no susceptibility.
Given the broad spectrum of activity, especially towards Gram-negative bacteria, Nephrolepis cordifolia is a promising plant of pharmacological importance. For a full interpretation of the present results, further investigations are necessary to elucidate the physical and chemical parameters of the active principles and to determine their mechanism of action. The present work highlights N. cordifolia as a plant with a broad spectrum of antimicrobial activity, a phenomenon rarely observed in the plant kingdom. Bacteria (Escherichia, Salmonella, Proteus, staphylococci and Bacillus) were isolated from hospital soil using selective enrichment and growth on selective and differential media, viz. MacConkey's agar, CLED medium and Baird-Parker medium. Confirmation and species identification were carried out by biochemical and serological tests. From these isolates, three different pathogens were used to study multiple antibiotic resistance. E. coli BJ 83 showed resistance to ampicillin, streptomycin and cefuroxime. S. typhi and S. aureus showed resistance to ampicillin and cefuroxime. Assays using Octadisc with E. coli and S. typhi showed a broader resistance pattern to antibiotics including amoxicillin, clavulanic acid, cephalexin, chloramphenicol, ciprofloxacin and cotrimoxazole. The R-plasmid profile was studied to understand the mechanism of drug resistance. To combat the problem of drug resistance, combined antibiotic responses of the cultures were studied; such synergistic combinations could possibly overcome multiple antibiotic resistance. For example, kanamycin-resistant strains were inhibited in the presence of kanamycin plus cefotaxime. Further, the bactericidal activities of the antimicrobials in honey and garlic were also tested. The MIC of honey was observed to be 4%, while that of garlic was between 0.1 and 1%.
Honey and garlic were also found to inhibit the growth of the organisms in the presence of antibiotics; kanamycin-resistant E. coli was unable to grow in the presence of kanamycin and 0.08% garlic. These traditional agents, long used in the Ayurvedic system of medicine in India, could be further explored for potent antimicrobial properties. Hepatitis B virus (HBV) infection is a global health problem. Assays for HBV antigens and antibodies are widely available and standardized, and extremely sensitive qualitative PCR kits are also available for the detection of HBV in serum. An HBV PCR kit may be useful in the assessment of occult hepatitis B in subjects positive for HBcAb alone and in carriers: a positive PCR result shows the presence of virus particles in serum irrespective of the serologic results. However, there are differences in the efficiency of HBV DNA amplification kits. In this study we compared two commercially available HBV PCR kits for the evaluation of viremia in HBsAg-positive carriers and subjects positive for HBcAb alone. Material and methods: Of 368 randomly selected subjects serologically examined for HBV, 49 and 43 were positive for HBsAg and for HBcAb alone, respectively. Both groups were tested for HBV DNA with two commercial kits, the HBV PCR detection kit (CinnaGen, Iran; kit 1) and the HBV PCR test (Pazhohesh Azma, Iran; kit 2). The DNA extraction kits recommended by each manufacturer were used for HBV DNA extraction. The amplicons of the two kits were highly overlapping fragments in the 5' conserved sequence of the viral genome. Results: Of the 49 HBsAg-positive carriers, HBV DNA was detected in 37.4% and 65.3% using kit 1 and kit 2, respectively; only 28.6% were positive with both kits. Among the HBcAb-positive subjects (n = 43), 23.3% were positive by kit 2, whereas all samples were negative when tested by kit 1. There was a significant difference between the two kits. The sensitivities of kit 1 and kit 2 were 48.6% and 91.4%, respectively. Overall, kit 2 increased the detection rate of HBV DNA by 88.2%.
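Most of the percentages in the results above are mutually consistent with a simple set of underlying counts. A sketch reconstructing them; the counts (17, 32 and 14 positives for kit 1, kit 2 and both, with the 35-sample union taken as the reference standard) are inferred from the reported sensitivities and are an assumption, not stated in the abstract. They would imply a kit 1 detection rate of 17/49 ≈ 34.7%, so the 37.4% in the text may be a digit transposition:

```python
# Reconstruction of the HBV PCR kit comparison among the 49
# HBsAg-positive carriers. The counts are INFERRED from the reported
# percentages, not stated in the abstract; the reference standard is
# assumed to be the union of samples positive by either kit.

n_carriers = 49
kit1_pos, kit2_pos, both_pos = 17, 32, 14
union_pos = kit1_pos + kit2_pos - both_pos          # inclusion-exclusion: 35

sens_kit1 = 100 * kit1_pos / union_pos              # reported: 48.6%
sens_kit2 = 100 * kit2_pos / union_pos              # reported: 91.4%
rate_increase = 100 * (kit2_pos - kit1_pos) / kit1_pos  # reported: 88.2%
```

All three derived values round to the figures reported in the abstract, which supports the inferred counts.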
Discussion: Our study shows that there is significant variation between these two commercial kits, especially in HBcAb-positive subjects, in whom the viral copy number is very low. From these results, it can be concluded that unstandardized kits do not give compatible results, and PCR test interpretation should be done with great care. The antibacterial activity of the extracts was evaluated on the basis of the inhibition zone, using the plate diffusion method. Most of the extracts were active against both Gram-positive and Gram-negative bacteria, but Pseudomonas aeruginosa, Bacillus subtilis and Serratia marcescens were more susceptible to almost all the extracts. Finally, further research will be done to elucidate the nature of the active compound and to investigate peptides using a protein gel immobilization bioassay. Short interfering RNA delivery and gene silencing using polymeric nanocarrier systems. K.A. Howard 1, X. Liu 1, D. Oupicky 2, F. Besenbacher 1, J. Kjems 1: 1 iNANO, University of Aarhus, Denmark; 2 Department of Pharmaceutical Sciences, Wayne State University, Detroit, USA. The effectiveness of a drug is determined by its ability to migrate through the body and reach target sites at therapeutically relevant levels. Nanocarriers for the delivery of bioactive agents are being developed at iNANO to maximise drug payload at target sites, and the inclusion of "biological triggers" in the nanocarrier design is used to modulate cellular nucleic acid trafficking and increase target interaction. Chitosan and peptide-based polymers were used to formulate nanocarriers in the size range 30-250 nm containing small interfering RNAs (siRNAs) for gene silencing applications. PAGE analysis showed that the structural integrity of the siRNA was maintained during particle formation. In systems composed of bioresponsive polymers, nanocarrier disassembly and siRNA release under cellular conditions were demonstrated using atomic force microscopy.
The time course of siRNA uptake into NIH cells was visualised using confocal microscopy; in addition, siRNA localisation within cells could be modulated by the composition of the polymer used. The ability of the nanocarrier system to mediate gene silencing was investigated in a cell line stably expressing enhanced green fluorescent protein (EGFP). Furthermore, the various delivery systems were tested by both nasal and intravenous delivery routes in a mouse model stably expressing the EGFP protein. The use of animals as a source of organs and tissues for xenotransplantation could overcome the growing shortage of human organ donors. However, the presence in humans of xenoreactive antibodies directed against the swine Gal antigen present on the surface of xenograft donor cells leads to complement activation and immediate xenograft rejection as a consequence of the hyperacute immunological reaction. A graft of a genetically modified organ from a swine depleted of the enzyme α1,3-galactosyltransferase, which is responsible for the origin of the Gal antigen, would be tolerated with simultaneous administration of medicines decreasing other, less severe immunological reactions. To prevent hyperacute rejection it is also possible to modify the swine genome with human genes controlling the enzymatic cascade of complement, or with genes modifying the set of donor cell-surface proteins. For this purpose, genetic constructs containing an inactivated α1,3-galactosyltransferase gene, the human CD59, CD55 and CD46 genes controlling complement activation, and the human genes encoding the α1,2-fucosyltransferase and α-galactosidase enzymes that modify cell-surface proteins were prepared. These genetic constructs were transfected into pig foetal fibroblasts using the lipofection method. After selection, molecular and cytogenetic characterization of cells with the transgene integrated into the host genome was performed. Introduction: Protease 2A (2A-pro) of coxsackievirus B3 (CVB3) plays a major role in viral replication.
In the case of infection, viral proteins are synthesized from the viral mRNA using the host biosynthesis machinery. After being synthesized, the viral 2A-pro exhibits two critical functions: cleavage of viral proteins, and cleavage of eIF4G (eukaryotic initiation factor 4G, formerly called p220), which leads to shut-off of the host cell translational system. The enzyme thus plays an essential role in viral replication and cellular damage. To understand the pathogenicity of infection, and to develop potent and selective inhibitors against picornavirus infection, it is necessary to prepare pure 2A-pro enzyme. In this study an expression system with efficient, high yields was obtained. Methods: cDNA of 2A-pro was synthesized by reverse transcription after in vitro infection of a permissive host and was cloned in pET22b(+). Since 2A-pro is a toxic product, its leaky expression before induction would act on the host and damage the cells; therefore different hosts were tested, and finally BLR(DE3) pLysS, which carries an extra plasmid for lysozyme expression that minimizes unwanted target protein production (leakage), was selected. For the biological activity assay, polyclonal antibodies against antigenic sites of p220 were prepared by synthesizing small peptides corresponding to the antigenic sites of p220, coupling them to KLH, and injecting them subcutaneously into rabbits. The enzyme and its substrate (HeLa cell lysate, which contains p220) were then incubated together for different time intervals. Results: The recombinant product was confirmed by SDS polyacrylamide gel electrophoresis and immunoblot analysis, and p220 cleavage by 2A-pro was assessed by SDS-PAGE and Western blot analysis. Cleavage of p220 by r2A-pro was prominent after 24 h; thus recombinant 2A-pro with good activity was prepared. Application of a Bombyx mori nuclear polyhedrosis virus (BmNPV) bacmid system to the production of glycoprotein in larvae of the silkworm. Ayano Kageshima, Tatsuya Kato, Misun Kwon, Enoch Y.
Park. Department of Applied Biological Chemistry, Shizuoka University, Ohya 836, Suruga-ku, Shizuoka 422-8529, Japan. A Bombyx mori nuclear polyhedrosis virus (BmNPV) bacmid system was applied to the production of glycoprotein in larvae of the silkworm. The bacmid system of Autographa californica nuclear polyhedrosis virus (AcNPV) has already been established and is widely used, but since AcNPV cannot infect the silkworm, we developed the first practical BmNPV bacmid system directly applicable to protein expression in the silkworm. Using this system, green fluorescent protein and the glycoprotein human β1,3-N-acetylglucosaminyltransferase 2 were successfully expressed in silkworm larvae, not only by infection with the recombinant virus but also by direct injection of the bacmid DNA. Three different signal sequences were tested for secretion of the glycoprotein into the hemolymph of the silkworm. The signal peptides of prophenoloxidase-activating enzyme and of bombyxin, an insulin-like brain secretory peptide, showed the highest secretion ratio: 99% of the total expressed β1,3-N-acetylglucosaminyltransferase 2 was secreted into the hemolymph. Using the bacmid system, 50 mU/ml of β1,3-N-acetylglucosaminyltransferase 2 was expressed in the hemolymph of the silkworm, two to three times higher than in insect cells. The silkworm is one of the most attractive hosts for large-scale production of eukaryotic proteins, as well as of recombinant baculoviruses for gene transfer to mammalian cells. This method provides rapid protein production in the silkworm and is free from biohazard, and thus will be a powerful tool for the future production of recombinant eukaryotic proteins and baculoviruses. We have established an efficient system for transient foreign gene expression in lily (Lilium longiflorum) pollen. Pollen was transformed using Agrobacterium via vacuum filtration for 20 min.
Pollen germinated for 24 h was analyzed by molecular methods to confirm foreign gene expression. Mice fed the transgenic pollen culture for 8 weeks showed an immune reaction specific for the pollen-derived recombinant protein, and the IgG level was highly elevated by a single booster injection afterwards. The lily pollen system may thus be suggested as a novel type of biofactory for the rapid production of edible vaccines. Anti-apoptosis engineering with the 30Kc6 gene obtained from the silkworm. Shin Sik Choi, Won Jong Rhee, Tai Hyun Park. School of Chemical and Biological Engineering, Seoul National University, Seoul 151-744, Korea. A Chinese hamster ovary (CHO) cell line producing recombinant human erythropoietin (EPO) was engineered to express the 30Kc6 gene, originally obtained from the silkworm. Expression of 30Kc6 inhibited serum deprivation-induced apoptosis and increased the cell density and the EPO expression level per cell by five- and two-fold, respectively; together, these two factors resulted in a 10-fold increase in the volumetric productivity of EPO. Compared with the controls, the oligosaccharide structures of the EPO synthesized by the cells expressing 30Kc6 showed greater homogeneity, and terminal sialylation of the EPO glycans was promoted by the expression of 30Kc6. The positive effects of 30Kc6 expression on cell viability and productivity were attributable to the stable maintenance of mitochondrial activity. These results demonstrate that the CHO cell line genetically engineered with the 30Kc6 gene has great potential for use in the production of therapeutic proteins. Evaluation of fucoidan-chitosan hydrogels on superficial dermal burn healing in the rabbit: an in vivo study. A.D. Sezer 1, F. Hatipoglu 2, Z. Ogurtan 3, A.L. Baş 4, J. Introduction: Healing of dermal wounds with macromolecular agents such as natural polymers is one of the research areas of pharmaceutical biotechnology.
fucoidan is a sulphated polysaccharide commonly obtained from seaweeds. although a great number of studies on the various pharmacological properties of fucoidan exist, there is very limited information on fucoidan-based systems used in dermal burns. the aim of this study was to prepare a chitosan hydrogel containing fucoidan and to investigate this hydrogel formulation for the treatment of dermal burns in rabbits. methods: fucoidan-chitosan hydrogels were prepared as follows: the polymers were dissolved in acidic solution and sonicated to remove air bubbles, and the gels were then stored at +4 °c for the in vivo studies. seven adult male new zealand white rabbits (mean weight, 3.8 ± 0.6 kg) were used for the evaluation of the gels on superficial dermal burns. the rabbits were sedated and their backs depilated. the wounds were made with circular aluminium stamp caps (3.8 cm²) at 80 °c. four wounds were formed on each rabbit: (a) was treated with fucoidan-chitosan gel, (b) with fucoidan solution, (c) with chitosan gel (without fucoidan), and (d) served as the negative control. biopsy samples were taken on days 7, 14 and 21, and each wound site was observed macroscopically and evaluated histopathologically. results: after 3 days of treatment, oedema was no longer observed in any group except the controls. after 7 days of treatment, fibroplasia and scar formation were observed on wounds treated with fucoidan-chitosan gel and with fucoidan solution. the best regeneration of dermal papillary formation and the fastest wound closure were observed in group a after 14 days of treatment. the wound epithelial elongation and thickness values at the end of the study were: a (5566 and 179 μm), b (3586 and 146 μm), c (3666 and 162 μm), d (3533 and 134 μm). conclusion: re-epithelization and contraction of the wound area treated with fucoidan-chitosan hydrogel were faster than in the other groups.
the fucoidan-chitosan hydrogel formulations can be suitable for the treatment of dermal burns. aim: to analyze the neutralization-related activity of antibodies against the e1 region of hcv, a specific polyclonal antibody was raised by immunizing rabbits with a synthetic peptide derived from the e1 region of hcv, with the amino acid sequence ghrmawdmm. materials and methods: hyper-immune hcv e1 antibodies were incubated overnight at 4 °c with serum samples from patients positive for hcv rna, with viral loads ranging from 7 to 11 million copies/ml, and then incubated for 90 min with hepg2 cells. rt-pcr and flow cytometry were used to study the inhibitory effect of the e1 antibody on virus binding and entry. direct immunostaining with e1 antibody conjugated to fitc and flow cytometry analysis showed a reduced mean fluorescence intensity in the samples pre-incubated with e1 antibody compared with samples without e1. of 18 positive serum samples, 13 (72%) showed complete inhibition of infectivity as detected by rt-pcr. conclusion: the in-house-produced e1 antibody blocks binding and entry of hcv virions into target cells, suggesting the involvement of this epitope in virus binding and entry. isolation of such antibodies that block virus binding and entry will be useful in providing potential therapeutic reagents and for vaccine development. tumor necrosis factor beta (tnfβ) is a pleiotropic cytokine mediating its activity through the same receptors as the structurally related tnfα. shortening of the n-terminal part has been reported to enhance its cytotoxic activity, the explanation being that removal of the flexible n-terminus reduces steric interference in the receptor binding process. to study the relationship between the n-terminal protein structure and physicochemical properties, three different forms, mettnfβ, his7tnfβ and n19tnfβ, were expressed in e. coli.
high solubility of n19tnfβ was expected; however, both analogs, his7tnfβ and n19tnfβ, were obtained predominantly in the form of inclusion bodies (ibs). on the other hand, mettnfβ was equally distributed between the soluble and insoluble fractions. most probably, the composition of the n-terminal part, such as the exposure of hydrophobic residues in the case of n19tnfβ, or the mildly hydrophobic stretch of seven histidines appended to the naturally hydrophobic n-terminus in the case of his7tnfβ, leads to incorrect folding and forces aggregate formation. solubilization of the ibs was performed under native and denaturing conditions using various concentrations of nls, urea and gdnhcl. mettnfβ ibs were easily dissolved with 0.2% nls, while his7tnfβ and n19tnfβ required denaturing conditions. structural differences in the n-terminal part of the various tnfβ proteins were also reflected in the protein refolding characteristics and chromatographic behavior. dilution, ultrafiltration and dialysis were used for refolding, and imac was chosen as the main chromatographic step. our results suggest that not only the length of the n-terminal part of tnfβ but also the composition and exposure of certain amino acid residues affect its physicochemical properties. enzymatic modification of sphingomyelin long zhang, lars hellgren, xuebing xu biocentrum-dtu. e-mail: lz@biocentrum.dtu.dk (l. zhang) owing to its major role in maintaining the water-retaining properties of the epidermis, ceramide has great commercial potential in cosmetics and pharmaceuticals such as hair and skin care products. currently, chemical synthesis of ceramide is a costly process, and the development of alternative cost-efficient, high-yield production methods is of great interest. in the present study, the potential of producing ceramide through enzymatic hydrolysis of sphingomyelin (sm) was studied. sm is a ubiquitous membrane lipid and is abundant in dairy products and by-products.
in the present study, we optimized the production of ceramide from sm using phospholipase c from clostridium perfringens. the amounts of water and enzyme had the largest influence on sm hydrolysis in this system. botulinum neurotoxins constitute a family of bacterial toxins responsible for botulism syndrome in humans. the toxins bind with high affinity to nerve cells, where they completely inhibit the release of neurotransmitters and thereby produce flaccid paralysis. in this work we report the isolation of the binding domain of the type e neurotoxin by pcr and its expression in a suitable expression vector. the output of this investigation can be used as a tool to study the binding mechanism of the holotoxins and may also be useful for studying antibody production against botulism syndrome. synaptobrevin (vamp2) is a protein that plays a key role in vesicle fusion and exocytosis at mammalian nerve terminals and is a substrate for different serotypes of botulinum neurotoxins; the light chains of clostridium botulinum types b, d, f and g cleave this protein. owing to the interaction of clostridium botulinum neurotoxin with vamp2, this protein can be used as a tool for the detection of poisonings caused by this bacterium in the clinical laboratory. using an enzyme with proofreading activity, the gene was amplified by pcr; for production of the recombinant protein, a prokaryotic expression vector (pet system) was used. the recombinant protein was resolved on a polyacrylamide gel and analyzed by western blotting and elisa. most children with down syndrome (ds) are born to younger mothers (<35 years). recent reports linking down syndrome (ds) to maternal polymorphisms at the methylenetetrahydrofolate reductase (mthfr) gene locus have generated great interest among investigators in the field. in this study, forty mothers with their affected children and 100 control mothers were included.
all mothers were subjected to a complete medical and nutritional history with special emphasis on folate intake from food or oral supplementation. blood homocysteine levels were estimated. we also examined two polymorphisms in the gene encoding the folate-metabolizing enzyme methylenetetrahydrofolate reductase (mthfr), namely 677c > t and 1298a > c. folic acid intake from food and from vitamin supplements was significantly lower (below the recommended daily allowance) in the group of mothers with ds children than in control mothers (p < 0.01, student's t-test). blood homocysteine was normal in both control and ds mothers. the prevalence of the two polymorphisms, 677c > t and 1298a > c, in mothers of ds children (case mothers, n = 40) was compared with controls (n = 100). frequencies of the mthfr genotypes (cc, ct, and tt) at position 677 demonstrated no difference between the case and control groups. genotype frequencies of mthfr at position 1298 (aa, ac, and cc) differed between the case and control mothers. we report here the first study on a possible relation between ds and mthfr 1298a > c genotypes in egypt. our results showed that the mthfr 1298a > c polymorphism is a remarkable genetic entity among egyptian mothers of ds children. sufficient folate intake and supplementation is an important preventive strategy in overcoming the risk of nondisjunction. homologous recombination (hr) is the mechanism that permits the creation of genetically engineered strains through gene targeting. in order to further develop gene targeting techniques, notably for higher eukaryotes such as filamentous fungi, it is of crucial importance to fully understand the molecular mechanisms behind mitotic hr. in saccharomyces cerevisiae, a dna double-strand break (dsb) is an essential intermediate in meiotic hr. however, as hr occurs at a low rate in mitotic cells, it has been difficult to determine the nature of the event(s) that triggers it.
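the case-control genotype comparison described above can be sketched in python; the counts below are hypothetical placeholders (only the group sizes, 40 case mothers and 100 controls, come from the abstract), and a chi-square test of independence stands in for whatever exact statistic the authors used:

```python
# Hedged sketch of the MTHFR 1298 genotype-frequency comparison.
# The counts are HYPOTHETICAL illustrations, not the study's data.
from scipy.stats import chi2_contingency

# Rows: case mothers (n = 40), control mothers (n = 100);
# columns: AA, AC, CC genotypes at MTHFR position 1298.
observed = [
    [12, 20, 8],   # hypothetical counts for the 40 case mothers
    [55, 38, 7],   # hypothetical counts for the 100 control mothers
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

with a 2 x 3 table the test has (2-1) x (3-1) = 2 degrees of freedom; a small p-value would indicate that the genotype distributions differ between the two groups.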
rad52 is a key protein involved in hr and is evolutionarily conserved from yeast to human. to shed light on the molecular events in hr, we have generated a large collection of defined rad52 mutant strains in the yeast s. cerevisiae. a screen of these mutants led to the identification of strains that fail to repair dna dsbs yet are proficient for homologous recombination. this result strongly suggests that dsbs may not be the major cause of spontaneous mitotic hr and gives new perspectives with respect to novel potential gene targeting substrates. we have analyzed these separation-of-function mutants in a variety of new assays to obtain a more detailed understanding of their unusual phenotype. our latest results will be presented. the outcome of interferon plus ribavirin treatment of hepatitis c virus (hcv) genotype 4 is unfortunately poor. development of an alternative therapy for this genotype is of paramount importance. inhibition of hcv gene expression in vitro by antisense phosphorothioate oligodeoxynucleotides (s-odns) directed against internal ribosomal entry site (ires) elements has been associated with favorable results. to assess s-odn activity, previous studies utilized viral subgenomic or full cdna fragments linked to reporter genes and transfected into adherent cells or a cell-free system. in the present study we utilized hepg2 cells infected with native hcv rna of genotype 4. the culture system presented here was shown to support hcv replication on the following bases: (1) consistent detection of both plus and minus rna strands for 4 weeks in cells and in fresh culture supernatant; (2) the ability of the supernatant to infect naive hepg2 cells; (3) consistent expression of core and e1 envelope proteins in infected cells throughout the 4-week culture. s-odns against the aug translation start site of the viral polyprotein precursor (s-odn-1, nt 326-348) and the stem-loop iiid within the ires region (s-odn-2, nt 264-282) were added to infected cells.
intracellular viral replication was monitored by nested rt-pcr of the plus and minus strands. the results of these experiments demonstrated that intracellular replication of hcv genotype 4 was completely arrested after 48 h in culture using either s-odn molecule (with s-odn-1 more effective than s-odn-2) at concentrations as low as 1 μm. the inhibitory effect of the s-odns appeared to be specific to hcv replication, since equal levels of human glyceraldehyde 3-phosphate dehydrogenase (gapdh) gene expression were noted before and after supplementation with s-odns. in conclusion, the present study provides evidence that antisense phosphorothioate oligonucleotides have a potent inhibitory effect on genomic replication of hcv genotype 4, the most common type in egypt. a sensitive and specific pcr-elisa was developed to detect shigella dysenteriae in food. the assay was based on the incorporation of digoxigenin-labeled dutp and a biotin-labeled primer specific for shiga toxin genes during pcr amplification. the labeled pcr products were bound to the streptavidin-coated wells of a microtiter plate and detected by elisa. the elisa detection system increased the sensitivity of the pcr assay by up to 100-fold compared with conventional gel electrophoresis. the detection limit of the pcr-elisa was 0.1-10 cfu, depending upon the shigella dysenteriae serotype and the shiga toxin genotype. the entire procedure took about 4 h. isolation, cloning, expression and purification of snap-25 as a substrate of botulinum neurotoxin types a and e m.l. mossavi 1, f. ebrahimi 1, j. amani 1*, h. basiry 1, z. ahmadi 2: 1 department of biology, faculty of science, imam hussein university, tehran, iran; 2 department of biology, faculty of medicine, bageatallah university, tehran, iran. e-mail: kpjamani@ihu.ac.ir (j.
amani) clostridial neurotoxins inhibit neurotransmitter release by selective and specific intracellular proteolysis of the synaptosomal-associated protein of 25 kda (snap-25), synaptobrevin/vamp-2 and syntaxin. snap-25 is one of the components that form the docking complex at synaptic ends. this protein is a substrate for botulinum neurotoxin types a and e. each of these toxin serotypes specifically cleaves snap-25 at a particular position and thereby blocks docking and synaptic vesicle membrane fusion, finally preventing neurotransmitter exocytosis and the transmission of neural signals. in order to use the protein as a substrate in an in vitro test for the detection of different types of clostridial neurotoxin, the protein was produced by recombinant techniques. the dna encoding snap-25 was isolated from rat brain by pcr using two primers, and the amplified fragment was cloned into the expression vector pet32a. the expressed protein was purified by affinity chromatography and confirmed by several methods; the his-tag fusion protein was digested with enterokinase. the present work deals with the preparation of pure alpha-1-antitrypsin (aat) protein from healthy subjects, which can be used to prepare its corresponding monospecific antibody in albino rabbits. this antibody was found to be very useful in the immuno-diagnosis of pulmonary emphysema. this study was also concerned with the biochemical changes associated with aat deficiency in pulmonary diseases. to this end, a group of healthy blood donors was selected for the separation of pure aat antigen from blood. the pure aat was used for the preparation of anti-aat, and the purity and potency of the antibody were checked by titration methods. the biochemical changes were studied in thirty subjects clinically divided into three groups: controls, heavy cigarette smokers with pulmonary emphysema, and non-smoking subjects with pulmonary emphysema.
elastase activity and the hydroxyproline (hp) level, as markers of elastin and collagen breakdown, were assayed in bronchoalveolar lavage (bal) fluid. the activity of aat and its capacity to inhibit proteases, represented by the tryptic inhibitory capacity (tic), were evaluated. serum ceruloplasmin, transferrin and iga levels as well as thiobarbituric acid reactive substances (tbars) were estimated in all groups. our data revealed that aat and its tic showed highly significantly decreased levels in all patients with emphysema compared to controls, while elastase activity and the hp level in bal fluid were significantly increased in these patients. serum tbars was significantly increased in such patients, associated with increased levels of both ceruloplasmin and transferrin; serum iga was also significantly increased. furthermore, the biochemical changes were more marked in smokers with emphysema than in non-smoking subjects. in conclusion, preparation of anti-aat at the local level is very important, being less expensive and less time-consuming, and can be useful in the immuno-diagnosis and prognosis of pulmonary diseases. a new method combining boosting and projective adaptive resonance theory for the analysis of gene expression data from cancer patients hiro takahashi, yasuyuki murase, hiroyuki honda department of biotechnology, school of engineering, nagoya university, nagoya 464-8603, japan. e-mail: h041305d@mbox.nagoya-u.ac.jp (h. takahashi) an optimal and individualized treatment protocol based on accurate diagnosis is urgently required for the adequate treatment of patients. for this purpose, it is important to develop a sophisticated algorithm that can manage large amounts of data, such as gene expression data from dna microarrays, for optimal and individualized diagnosis. in our previous study, we developed projective adaptive resonance theory (part) as a gene filtering method and the boosted fuzzy classifier with sweep operator (bfcs) as a modeling method.
in the present study, we applied the combination of part and bfcs (the part-bfcs method) to microarray data of brain tumors (central nervous system tumors) obtained from the website. the method enabled the selection of 14 important genes related to the prognosis of the tumor, i.e., sensitivity to combined therapy with surgery, radiotherapy, and chemotherapy mainly with vincristine, cisplatin, and cytoxan. the constructed model showed about 20% higher accuracy than the conventional method. genomic signal analysis of hiv variability based on the rt gene p.d. cristea, rodica tuduce, d. otelea biomedical engineering center, university "politehnica" of bucharest, romania. e-mail: pcristea@dsp.pub.ro (p.d. cristea) dna sequences genotyped from 60 clade f hiv-1 isolates from romanian patients in the laboratory of the national institute of infectious diseases "matei bals", bucharest, romania, have been studied. the symbolic sequences were converted into digital genomic signals using a complex quadrantal representation of the nucleotides described earlier. the cumulated phase and unwrapped phase of a complex genomic signal reflect the statistical distribution of bases and base pairs, respectively. independent component analysis of the genomic signals was used to characterize the variability of the f-subtype hiv strains isolated in romania. the sequenced segment is about 1302 base pairs long, approximately aligning with the standard sequence of hiv-1 (accession nc001802 in genbank) over the interval 1799-2430 bp. this segment, which is currently used for the standard identification and assessment of hiv-1 strains, comprises the protease (pr) gene and two thirds of the reverse transcriptase (rt) gene. only results referring to the analysis of the rt gene region are presented here, used for extracting features of the virion isolates and for establishing phylogenetic trees of the studied strains.
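the two-step workflow described for the part-bfcs method above, gene filtering followed by a boosted classifier, can be illustrated with a generic stand-in. this is not the authors' method: part clustering is replaced here by a simple univariate filter and the boosted fuzzy classifier by scikit-learn's adaboost, and the data are synthetic, purely to show the shape of the pipeline:

```python
# Generic sketch of a filter-then-boost microarray pipeline (NOT PART-BFCS).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))       # 60 patients x 500 genes (synthetic)
y = rng.integers(0, 2, size=60)      # binary prognosis labels (synthetic)
X[y == 1, :14] += 1.0                # plant signal in 14 "important" genes

# Step 1: gene filtering (stand-in for PART) - keep the 14 top-ranked genes.
X_filtered = SelectKBest(f_classif, k=14).fit_transform(X, y)

# Step 2: boosted classifier (stand-in for BFCS), cross-validated accuracy.
acc = cross_val_score(AdaBoostClassifier(n_estimators=50),
                      X_filtered, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

the filtering step matters because, with far more genes than patients, an unfiltered classifier tends to overfit noise genes; the abstract's reported accuracy gain over a conventional method is consistent with this general pattern.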
the analyzed rt-encoding segment is 1005 bp long and is located in the second interval (298-1302 bp) of the analyzed dna segment, corresponding to the 2096-3100 bp region of nc001802. taking into account the mutations identified in these sequences, the samples were classified into three groups according to their resistance to current antiretroviral compounds: sensitive, resistant and multiresistant. the paper presents results for the isolates in which mutations leading to multiple drug resistance have been identified. overexpression of the secb protein in e. coli enhances the periplasmic expression of human growth hormone m. ghafari 1,2, a. zomorodipour 2: 1 islamic azad university of jahrom, tehran, iran; 2 national institute for genetic engineering and biotechnology, tehran, iran. e-mail: maryamghafari2001@yahoo.com (m. ghafari) among the several proteins involved in the protein secretion pathway of e. coli, secb plays a key role in the solubilization of preproteins before processing. in order to improve the processing of a human growth hormone precursor (pelb::hgh), which appeared to have poor processing efficiency, regulated co-expression of a secb gene was considered as a possible solution. in this regard, we designed an arabinose-regulated secb-expressing plasmid compatible with an iptg/lactose-regulated pelb::hgh-expressing plasmid. for the construction of the secb-expressing plasmid, the origin of replication and the antibiotic resistance gene (amp) of a pbad vector were replaced by a p15a ori and a kanamycin resistance gene, respectively. the expression and processing of the pelb::hgh preprotein in the two-plasmid-containing bacteria under secb overexpression were compared to pelb::hgh expression in normal bacteria.
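the cumulated-phase computation described for the genomic signals above can be sketched as follows. the abstract does not give the exact nucleotide-to-quadrant mapping, so the mapping below is an assumed convention (one complex value per quadrant, with complementary bases in opposite quadrants); the cumulated phase is then the running sum of the per-nucleotide phases:

```python
# Sketch of the "complex quadrantal representation" and cumulated phase.
# The nucleotide-to-quadrant mapping is an ASSUMPTION, not the authors'
# published definition.
import numpy as np

QUADRANT = {"a": 1 + 1j, "g": -1 + 1j, "c": -1 - 1j, "t": 1 - 1j}

def cumulated_phase(seq: str) -> np.ndarray:
    """Running sum of the phases of the per-nucleotide complex values."""
    z = np.array([QUADRANT[b] for b in seq.lower()])
    return np.cumsum(np.angle(z))

phi = cumulated_phase("acgtacgt")
print(phi)  # the slope of this curve reflects the base composition
```

with this mapping, a and c (or g and t) contribute opposite phases, so the long-run slope of the cumulated phase tracks the imbalance between base classes, which is the "statistical distribution of bases" the abstract refers to.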
although a decline in the total expression level of hgh during secb overexpression was observed, probably due to the presence of two different expression plasmids, both the processing efficiency of pelb::hgh and the transport of the mature protein into the periplasmic space were enhanced during prolonged arabinose induction. current diagnostics and therapeutics for cat allergy are based on cat epithelial extracts. natural allergen extracts contain a mixture of allergenic and non-allergenic components that are difficult to standardize. recombinant allergens can improve diagnosis and de-sensitization against single components. the major cat allergen is a heterodimer composed of disulfide-linked 7.8 and 10.1 kda polypeptide chains. both chains of the fel d i protein were obtained in an e. coli system. purification of the recombinant proteins was performed under denaturing conditions using immobilized metal affinity chromatography specific for proteins with a histidine tag. from a 1000 ml culture, approximately 13 mg of protein for fel d i chain 1 and 43 mg of fel d i chain 2 were obtained. after purification, the histidine tag was removed by cleavage with thrombin. the immunological activity of fel d i against sera of patients allergic to cats was narrowed to the subgroup of patients allergic to the fel d i protein by surface plasmon resonance. the immunological activity of each chain and of the renatured heterodimer was also tested by immunoprecipitation against sera from a population group. subdoligranulum variabile - a novel member of the human gut microflora with a high prevalence kim holmstrøm, trine møller bioneer a/s, hørsholm, dk-2970, denmark in 2003 we isolated and cultured for the first time a bacterium from a human fecal sample representing a hitherto unknown member of the clostridium leptum rrna supra-generic cluster. the c.
leptum rrna supra-generic cluster represents one of the 3 major phylogenetic lineages within the human gut microbiota and is characterized by having only a small proportion of its members identified by cultivation, compared with the numbers of bacteria estimated to belong to this group by culture-independent gut flora analyses. s. variabile is an obligately anaerobic gram-negative bacterium with a characteristic pleomorphic, coccoid, droplet-like cellular morphology. its closest previously cultivated relative, based on a 16s rdna phylogenetic analysis, is faecalibacterium prausnitzii, a rod-shaped and therefore easily distinguishable gram-negative bacterium present in high numbers in the human fecal microflora. based on the 16s rdna sequence, we designed an s. variabile-specific oligonucleotide probe for use in fish analysis to estimate the prevalence of this "new" bacterium in fecal samples collected from healthy human beings. interestingly, we observed a high proportion of s. variabile in all tested samples, and in some instances we observed a higher prevalence than for the better-known group of bifidobacteria, estimated in parallel by fish analysis. documentation of these results will be presented. several species of sea cucumbers, long used in traditional medicines, were selected as the source of animal-based antibiotic compounds. swabs of the inner surface and coelomic fluid (inner fluid) samples from the sea cucumber (holothuria atra jaeger) were taken. thirty strains of bacteria were isolated. these strains were grown in different antibiotic production media. nine human pathogenic bacterial species were used as test agents: k. pneumoniae, mrsa, m. luteus, s. typhimurium, s. epidermidis, s. saprophyticus, b. subtilis, p. aeruginosa and s. pyogenes; only four bacterial strains showed mild antibiotic activity, against s. pyogenes and s. typhimurium. similar testing on two other species, h. scabra and s. variegatus, will be carried out.
different media, especially antibiotic production enrichment media, will also be used. characterization will be performed once an antibiotic compound is obtained that shows moderate to high activity against at least one of the nine human pathogens used. for plant-based medicines, three rhizomes, "cekor," "jerangau" and "bonglai," were analyzed. solvent extraction using ethanol, methanol and acetone was carried out at a concentration of 20-50 mg per ml of solvent. the filtrates were used for the antibiotic testing stage. all three plant species showed moderate antibiotic activity against m. luteus, s. epidermidis, s. pyogenes and s. saprophyticus. interestingly, the antibiotic activity increased when combinations of the herbal extracts were used. cell-based therapy continues to be a promising avenue for the treatment of duchenne muscular dystrophy, an x-linked skeletal muscle-wasting disease. recently, we demonstrated that freshly isolated myogenic progenitors contained within the adult skeletal muscle side population (sp) can engraft into dystrophic fibers of non-irradiated mdx5cv mice after intravenous transplantation. engraftment rates, however, have not been therapeutically significant, achieving at most 1% of skeletal muscle myofibers expressing protein from donor-derived nuclei. to improve the engraftment of transplanted myogenic progenitors, an intra-arterial delivery method was adapted from a previous procedure. cultured, lentivirus-transduced skeletal muscle sp cells were transplanted into the femoral artery of non-injured mdx5cv mice. based on the expression of the microdystrophin and gfp transgenes in host muscle, sections of the recipient muscles exhibited 5% to 8% of skeletal muscle fibers expressing donor-derived transgenes.
further, donor muscle sp cells, which did not express any myogenic markers prior to transplant, expressed the satellite cell transcription factor pax7 and the muscle-specific intermediate filament desmin after extravasation into host muscle. the expression of these muscle-specific markers indicates that progenitors within the side population can differentiate along a myogenic lineage after intra-arterial transplantation and extravasation into host muscle. given that femoral artery catheterization is a common, safe clinical procedure and that the transplantation of cultured adult muscle progenitor cells has proven to be safe in mice, our data may represent a step towards the improvement of cell-based therapies for dmd and other myogenic disorders. metabolic engineering design of an extracellular hgh synthesis system birgül şentürk 1, pınar çalık 2, güzide çalık 1, tunçer h. özdamar 1: 1 bre lab, department of chemical engineering, ankara university, 06100 ankara, turkey; 2 iblab, department of chemical engineering, metu, 06531 ankara, turkey. e-mail: ozdamar@eng.ankara.edu.tr (t.h. özdamar) the metabolic engineering design of an extracellular human growth hormone (hgh) synthesis system is based on cloning the dna encoding the human therapeutic protein, together with the signal dna sequence of an extracellular enzyme gene, into a host-vector system. in this context, the extracellular protease (subc) signal dna sequence, i.e. the pre-signal dna sequence, was fused in frame in front of the hgh mature dna sequence, using four primers designed with the pcr-based gene splicing by overlap extension method. b. licheniformis chromosomal dna and a plasmid carrying the hgh cdna were used as templates in the pcrs for the amplification of the subc signal dna sequence and the hgh mature peptide sequence, respectively. for the fusion of the two target sequences, the hgh mature peptide sequence and the signal dna sequence were amplified separately by pcr.
the primers used at the ends to be joined were designed to be complementary to one another by including nucleotides at their 5′ ends that are complementary to the 3′ portion of the other primer. these products were mixed in the next pcr reaction, where one strand from each fragment contains the overlap sequence at its 3′ end. extension of this overlap by dna polymerase yielded the recombinant hybrid gene, which served as the template for the subsequent reactions to increase its concentration in the microreactors. the hybrid gene fragment was first cloned into puc19 and then subcloned into the pmk4 e. coli-bacillus shuttle plasmid. thus, a new expression vector with high stability and a high copy number was obtained and transferred into the host b. subtilis 168 (spo−). the metabolic flux distributions were calculated with a mass-balance-based stoichiometric model of the proposed metabolic reaction network for r-b. subtilis, using the time profiles of the substrate, dry cell, hgh, amino acid and organic acid concentrations as constraints. on the basis of the intracellular bioreaction rates and their interactions with the bioreactor operation parameters, an in-depth insight will be provided for further metabolic engineering design for extracellular hgh production in r-b. subtilis. medicines based on polypeptides consisting of 30 or more amino acid residues are currently widespread in the pharmaceutical market. practically all known polypeptide medicines are prepared by chemical synthesis, which makes their production costly; a biotechnological route to polypeptide medicines using recombinant gene expression in bacteria therefore seems promising. in this work we designed a hybrid construct that solves the problem of specific cleavage of the target polypeptide from the hybrid protein, using the protein splicing system from new england biolabs.
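the overlap-extension primer design described above can be sketched with toy sequences: each inner primer carries a 5′ tail complementary to the other fragment, so the first-round products share a junction overlap and can prime each other in the second pcr. the sequences and overlap length below are illustrative placeholders, not the actual subc or hgh sequences:

```python
# Hedged sketch of gene splicing by overlap extension (SOE) primer design.
# Sequences are TOY examples, not the subC signal or hGH mature sequences.

COMP = str.maketrans("acgt", "tgca")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (lowercase a/c/g/t)."""
    return seq.translate(COMP)[::-1]

signal = "atgaaacagcag"   # toy stand-in for the signal sequence fragment
mature = "ttcccaactatt"   # toy stand-in for the mature sequence fragment
tail = 6                  # overlap length spanning the junction

# Inner reverse primer for fragment 1: reverse complement of the junction
# (end of `signal` plus start of `mature`).
rev1 = revcomp(signal[-tail:] + mature[:tail])
# Inner forward primer for fragment 2: the same junction on the sense strand.
fwd2 = signal[-tail:] + mature[:tail]

# The two first-round products therefore share this overlap; in the second
# PCR their 3' ends anneal and extension yields the fused hybrid gene:
hybrid = signal + mature
assert revcomp(rev1) == fwd2   # the two inner primers are complementary
print(hybrid)
```

the key design constraint is exactly the one in the abstract: the 5′ tail of each inner primer must be the complement of the 3′ portion of the other primer, which is what the `revcomp(rev1) == fwd2` check expresses.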
for thymosin α1 production, the impact system (intein-mediated purification with an affinity chitin-binding tag) was used, in which the modified sce vma intein from s. cerevisiae was applied. in contrast to other systems, impact allows the preparation of the target protein without the use of a serine protease or other factors that may cleave the hybrid protein. in the presence of thiol reagents, such as dithiothreitol, mercaptoethanol or cysteine, the hybrid protein can be site-specifically cleaved to give the intein, the target protein and a small n-extein fragment. this system does not allow a target protein with ser, cys or thr at its n-terminus to be obtained, because in those cases a ligation product of the target protein and the n-extein is formed. ser is the n-terminal amino acid of thymosin α1. it was recently found that some metal ions essentially affect the splicing. we tried zncl2 and found that, in the case of intein-thymosin α1, the maximal yield of the target polypeptide and the minimal yield of splicing products are observed in the absence of dithiothreitol and in the presence of 0.5-1 mm zinc chloride in the buffers at all stages of thymosin α1 isolation. the structure of the recombinant human thymosin α1 was confirmed by determination of the n- and c-terminal amino acid sequences and by maldi-tof mass spectrometry. mature embryos of five t. aestivum and five t. durum cultivars formed embryogenic callus on two different media. embryos were removed from surface-sterilised seeds and placed with the scutellum upwards on a solid agar medium containing the inorganic components of murashige and skoog and 2 mg/l 2,4-dichlorophenoxyacetic acid (2,4-d) or 1 mg/l naphthaleneacetic acid (naa). the developed calli and regenerated plants were maintained on 2,4-d- or naa-free ms medium. wheat plants can be regenerated via two different systems.
there were significant differences in the percentage of callus induction and in regeneration capacity on the different initiation media. among the t. aestivum cultivars, yakar had the highest regeneration capacity on both induction media. among the t. durum cultivars, kiziltan gave the highest regeneration capacity on ms + 2,4-d medium and yilmaz the highest on ms + naa medium. a strong genotypic effect on the culture responses was found for both induction media. tumor necrosis factor beta (tnfβ) is a pleiotropic cytokine mediating its activity through the same receptors as the structurally related tnfα. shortening of the n-terminal part has been reported to enhance its cytotoxic activity, the explanation being that removal of the flexible n-terminus reduces steric interference in the receptor-binding process. to study the relationship between the n-terminal protein structure and physicochemical properties, three different forms, mettnfβ, his7tnfβ and n19tnfβ, were expressed in e. coli. high solubility of n19tnfβ was expected; however, both analogs his7tnfβ and n19tnfβ were obtained predominantly in the form of inclusion bodies (ibs). on the other hand, mettnfβ was equally distributed between the soluble and insoluble fractions. most probably, the composition of the n-terminal part, such as the exposure of hydrophobic residues in the case of n19tnfβ, or the mildly hydrophobic stretch of seven histidines appended to the natural hydrophobic n-terminus in the case of his7tnfβ, leads to incorrect folding and forces aggregate formation. solubilization of the ibs was performed under native and denaturing conditions using various concentrations of nls, urea and gdnhcl. mettnfβ ibs were easily dissolved with 0.2% nls, while his7tnfβ and n19tnfβ demanded denaturing conditions. structural differences in the n-terminal part of the various tnfβ proteins were also reflected in protein refolding characteristics and chromatographic behavior.
dilution, ultrafiltration and dialysis were used for refolding, and imac was chosen as the main chromatographic step. our results suggest that not only the length of the n-terminal part of tnfβ but also the composition and exposure of certain amino acid residues affect its physicochemical properties. high-level expression of recombinant growth hormones in e. coli faces common problems such as protein aggregation and inclusion-body formation. it is debated whether it is more beneficial to obtain soluble protein at the cost of lower expression rates. here we describe an experiment based on the hypothesis that slower expression should result in at least partially soluble recombinant protein. experiments were performed on bovine, chicken and mink growth hormones. the expression rate was controlled externally by adjusting cultivation temperature, media, inducer amount, and both induction and cultivation times. another approach to the problem was taken through genetic manipulation: we changed the strong t7 promoter to the e. coli promoter consensus sequence, thus reducing the expression rate. recombinant growth hormone was still found to form aggregates, even when expressed at extremely low levels of several (2-8) percent of total intracellular protein. we developed an optimization scheme for insoluble protein production and showed that minimizing the expression rate does not influence recombinant growth hormone solubility in vivo, suggesting sequence-specific aggregation. to optimize recombinant protein production in small-scale shake-flask systems and high-cell-density fermentation, new tools have been developed in our laboratory that help to assess the physiological state of the cell culture.
these tools include (i) a quantitative monitoring system for cellular mrnas based on a sandwich hybridization technique, and (ii) a wireless online monitoring tool (senbit) applicable to standard sensors such as ph, po2 and temperature for continuous data collection from shake flasks. the senbit system is a new tool supplying valuable information for the optimisation of recombinant gene expression in shake flasks and allowing conclusions about the reproducibility of shake-flask cultures. the presentation will focus on the use of the sensitive sandwich hybridization technology for the quantitative analysis of process-relevant marker genes in different kinds of microbial cell cultures, with a focus on the production of recombinant proteins. samples from shake-flask cultures and high-cell-density fed-batch fermentations of the yeast pichia pastoris have been analysed. additionally, the mrna analysis was combined with the application of the senbit wireless system to study the production of a recombinant protein in shake-flask cultures of p. pastoris. aside from p. pastoris, mrna sandwich hybridization was also used to monitor product expression in fed-batch fermentation of e. coli for the production of a two-subunit protein by sequential induction. quantitative rna analysis as a tool for optimization of tetrameric collagen prolyl 4-hydroxylase production in e. coli. a. neubauer 1, m. bollok 2, j. myllyharju 1, p. collagen prolyl 4-hydroxylase (p4h), involved in the biosynthesis of collagens, is an α2β2 tetramer. recombinant expression of p4h in e. coli was described recently. the construct for cytoplasmic expression contains the genes of both subunits in one plasmid under the control of different promoters. the α subunit forms inactive aggregates when expressed separately. in mammalian cells the β subunit is available in large excess and keeps the α subunit in a soluble, active form.
to mimic this in the bacterial system, we induced both genes sequentially. after induction of the β subunit with iptg, expression of the α subunit was initiated with anhydrotetracycline. here we use analysis of the product mrnas with a bead-based sandwich hybridisation assay (sha) (rautio et al., 2003) to optimize the fermentation procedure. a high p4h activity was obtained when a high mrna level of the α subunit could be maintained over a longer time. the results obtained illustrate the importance of the second induction for high-level expression of the p4h tetramer. the cells need to be in a "healthy state" with a low metabolic load to react efficiently to the second induction. the data illustrate the optimization of a fermentation process by monitoring mrna levels, which is of general interest for the optimization of products that are difficult to detect. cell-based therapy continues to be a promising avenue for the treatment of duchenne muscular dystrophy, an x-linked skeletal muscle-wasting disease. recently, we demonstrated that freshly isolated myogenic progenitors contained within the adult skeletal muscle side population (sp) can engraft into dystrophic fibers of non-irradiated mdx5cv mice after intravenous transplantation. engraftment rates, however, have not been therapeutically significant, achieving at most 1% of skeletal muscle myofibers expressing protein from donor-derived nuclei. to improve the engraftment of transplanted myogenic progenitors, an intra-arterial delivery method was adapted from a previous procedure. cultured, lentivirus-transduced skeletal muscle sp cells were transplanted into the femoral artery of non-injured mdx5cv mice. based on the expression of microdystrophin and gfp transgenes in host muscle, sections of the recipient muscles exhibited 5-8% of skeletal muscle fibers expressing donor-derived transgenes.
further, donor muscle sp cells, which did not express any myogenic markers prior to transplant, expressed the satellite-cell transcription factor pax7 and the muscle-specific intermediate filament desmin after extravasation into host muscle. the expression of these muscle-specific markers indicates that progenitors within the side population can differentiate along a myogenic lineage after intra-arterial transplantation and extravasation into host muscle. given that femoral artery catheterization is a common, safe clinical procedure and that the transplantation of cultured adult muscle progenitor cells has proven safe in mice, our data may represent a step towards the improvement of cell-based therapies for dmd and other myogenic disorders. collagen and its derived product gelatin are attractive mammalian proteins to be used as models for the production of complex heterologous proteins in plants. the availability of a recombinant product will provide a safer, more homogeneous product than the current animal-derived material. the aim of the project is to investigate the feasibility of a production system for the accumulation of recombinant collagen for conversion to gelatin using barley. the 5′ end of the cocksfoot mottle virus (cfmv; genus sobemovirus) genomic rna sequence, called the cfmv element, has been shown to enhance recombinant protein synthesis in barley (wo 01/55298; mäkinen et al., 1995). the element will be used to study whether accumulation levels of complex mammalian proteins can be further increased, using collagen as a model that will serve as a basis for exploring the expression of other complex proteins. this system can be used to study the production of barley-derived recombinant collagen for conversion to gelatin. classical swine fever virus (csfv) is an animal pestivirus which can be used as a surrogate model to elucidate the role of the envelope glycoproteins of the closely related human hepatitis c virus (hcv).
the necessity of using surrogate models for hcv is due to the fact that this virus cannot be grown in in vitro cultures. the csfv genome codes for three major antigenic glycoproteins which are located in the same gene cluster; they are designated e2, e0 (erns) and e1. the glycoproteins form heterodimeric and homodimeric complexes on the external part of viral particles. it is generally accepted that envelope glycoproteins play a major role in the initial stages of viral infection for both csfv and hcv. formation of complexes is needed to infect host cells effectively. we have investigated the formation of glycoprotein dimers by immunoperoxidase monolayer assay and by immunoblotting (western blotting). immunoblotting is a very useful technique in these studies because the complexes are formed via cysteine-cysteine disulphide bonds and are retained during sds-page under non-reducing conditions. by modifying the glycoprotein genes and by arresting n-glycosylation of e2 and e0 we have investigated which factors influence the formation of complexes. it has been found that some glycosylation inhibitors which act at the early stages of glycan-chain processing influence not only glycosylation but also the stability of the e2 protein, effectively inhibiting the formation of glycoprotein complexes and the yield of the virus. these inhibitors are potential agents for arresting the multiplication and spread of csfv and its relative, human hcv. recombinant proteins have been produced in a variety of heterologous protein expression systems. eukaryotic unicellular algae have distinct advantages; for example, they can synthesize complex proteins that require post-translational modification. furthermore, microalgae can be grown in a confined environment, which prevents leakage of genes to the environment. our group has developed a platform technology for the production of recombinant proteins in the red microalga porphyridium sp.
we have constructed algal transformation plasmid vectors containing a camv 35s promoter and a polya signal site. a streptoalloteichus hindustanus bleomycin-resistance gene was used as the selective marker. we have expressed ovalbumin and hepatitis b surface antigen (hbsag) as model proteins. transformation was carried out by agitating algal cells and vector dna with glass beads. transgenic lines were selected by growing algal cells on agar plates containing 6 μg/ml zeocin. positive transgenic lines were selected by screening the colonies by pcr and confirmed by dna sequencing. expression of the ovalbumin and hbsag proteins was examined by western blot analysis. ovalbumin was found to be expressed inside the algal cells, while small hbsag was secreted into the medium owing to the presence of a signal peptide. these findings indicate that red microalgae are capable of producing heterologous proteins. life-material exhibition as ethical interpretation. chang shih-lung, biotechnology industry study center, taiwan institute of economic research, taipei 106, taiwan. email: schang@tier.org.tw. in this thesis we aim to explore the medical-related life-material exhibitions within the ntu hospital humanity building and the taipei mackay hospital historical showroom as initial clues to understanding a burgeoning phenomenon: medical museums in taiwan. following these clues, we treat the whole context of medical museums, which transform values through situational construction, as the background for interpreting the ethical implications of the group values that are transformed in the medical profession. in this thesis, we see medical museums, as a social prescription transforming the medical profession, forming the ritual context of situational ethics through the cultural construction of life-material. multi-disciplinary interactions and the visiting itinerary can be transferred to the exploring horizon of the research approach through description and interpretation.
among them, we observe that humanistic elements have become essential equipment (or matériel) of the medical profession. though humanistic equipment (or matériel) has its bottlenecks in the museum situation, it can also open new possibilities for museum exhibitions, ethical practice and life ethics. as part of a wide research program aimed at developing new antitumoral agents, we present herein a series of stereoisomeric derivatives of fused tetrahydrofurans (fthf) substituted with diverse protecting groups at either the primary or the secondary hydroxyl groups. unprotected derivatives were also synthesised to investigate the influence of substituents on the in vitro activity of the fthf. data on the synthesis of, chemosensitivity to and apoptosis induction by this series of fthf will be correlated with substitution pattern, stereochemistry and protecting groups, as an aid to the rational design of novel antitumour drugs. establishment of in vitro test systems for pulmonary edema resorption by peptide drug candidates. dominik geiger, aswin mangerich, rudolf lucas, klaus p. schäfer, inge mühldorfer; 1 department of biotechnology, altana pharma ag, konstanz, germany; 2 imc university of applied sciences, krems, austria. tnf-α was found to up-regulate the rate of lung liquid clearance (llc) in several animal models through the activity of its lectin-like tip domain. this activity can be mimicked by a circular peptide designated tip. tip was shown to induce llc, and consequently pulmonary edema resorption, in different animal models. in order to study the mechanism of action of tip, we established two in vitro test systems for edema resorption with the human lung epithelial cell line calu-3: (1) the ability of calu-3 cells to form spontaneous domes within confluent monolayers was utilized for quantitative examination of tip's activity on active transepithelial fluid transport ("dome assay").
(2) the effects of tip on the bioelectrical properties of polarized cell monolayers were studied using the transepithelial electrical resistance (teer) technology ("teer assay"). dome assay experiments confirmed that dome formation is a sodium-dependent process and that tip is able to enhance it. teer assay experiments proved that tip acts in a polarized and dose-dependent manner. in conclusion, there is strong evidence that the dome and teer assays are suitable systems for in vitro activity testing of tip and other anti-edema peptide drug candidates and are useful for studies of their mechanism of action. increasing safety concerns in gene therapy result in more stringent regulatory requirements. these cover the complete process chain of cell banking, fermentation and purification. the wide range of applications for pdna requires gram to kilogram amounts for clinical trials and market supply. economic, productive and robust processes are a prerequisite for low cost of goods (cogs). manufacturers of biopharmaceuticals therefore need to address these considerations by developing new production processes that meet the new standards. boehringer ingelheim austria has developed a novel pdna production process suitable for large-scale cgmp production. the process is based on e. coli fermentation and contains no components of animal origin. the optimized fermentation process yields up to 1 g pdna/l fermentation volume. for isolation of pdna from e. coli, alkaline lysis in glass bottles or stirred tanks is commonly used; the cell-wall structure is destroyed by a combination of alkaline ph and detergents. in our process, alkaline lysis is operated in a closed, continuous system directly connected to clarification, without the use of enzymes. we developed a scalable process for pdna purification based on three different chromatographic principles.
the capture step is carried out by hydrophobic interaction chromatography, followed by anion exchange chromatography as the intermediate step. final polishing is carried out by conventional size exclusion chromatography in a group separation mode, which also provides buffer exchange and desalting for the final formulation. the process results in a pdna drug substance of the highest quality, containing a low level of impurities (genomic dna, rna, proteins, endotoxins) and suitable for therapeutic applications. depending on the fermentation conditions and the host used, homogeneities of greater than 95% or even 98% are possible, while a high overall yield can be achieved (∼50%). during development, monolithic chromatography supports (cim®) were compared with conventional resins and evaluated as potential alternatives. the complete process is monitored by a set of analyses covering cell banking to final purification. new sensitive methods based on hpce, hpiex and fluorometric measurement were developed. whereas many natural amino acids are currently produced by very cost-efficient biological processes, methionine is still manufactured by traditional chemical synthesis. this was mainly due to the poor performance of the available producing strains, which inhibited the development and commercialization of a biological process for l-methionine. metabolic explorer has recently reinvestigated the problem and developed an efficient biological process that employs an engineered microorganism, utilizes a renewable starting material (corn sugar) as its feedstock and converts glucose into l-methionine. we will describe (a) the general scheme for the engineering of the host organisms and (b) the general approach to maximize carbon and reducing-equivalent throughput to l-methionine.
to highlight this effort, we will present the global approach developed to improve the process, combining metabolic flux analysis, traditional protein biochemistry, molecular biology and fermentation optimisation. saccharomyces cerevisiae is an established 'work horse' of the fermentation industry, and modern biotechnology has led to a spectacular expansion of the range of products that can be produced by this yeast. however, for the large-scale sustainable production of chemicals, it is equally important that the range of carbohydrate feedstocks be expanded. especially relevant in this respect is the ability to consume the pentose sugars d-xylose and l-arabinose, which make up a substantial part of plant carbohydrates. wild-type s. cerevisiae strains cannot metabolise d-xylose but are capable of slowly metabolising its keto-isomer, d-xylulose. therefore, efficient conversion of d-xylose into d-xylulose has long been a key issue in yeast metabolic engineering. non-saccharomyces yeasts that are capable of growing on d-xylose use two enzymes, xylose reductase and xylitol dehydrogenase, for this purpose. while both enzymes have been successfully expressed in s. cerevisiae, this is not always compatible with efficient product formation. for example, in the case of ethanol production, the different cofactor specificities of these two oxidoreductases cause massive by-product formation. theoretically, introduction of a xylose isomerase, which catalyses the interconversion of xylose and xylulose, might circumvent these problems. however, it is notoriously difficult to express bacterial and archaeal xylose isomerases in s. cerevisiae and, until recently, the activities of heterologous xylose isomerases expressed in s. cerevisiae were vanishingly low, at least under physiological conditions. a breakthrough was reached when, in 2003, a xylose isomerase gene from the anaerobic fungus piromyces was expressed in s. cerevisiae.
while this led to high activities of xylose isomerase, these were not sufficient to enable fast growth or product (ethanol) formation. in this presentation, we will discuss how a combination of metabolic and evolutionary engineering led to fast and efficient xylose utilization by engineered saccharomyces cerevisiae strains under aerobic as well as anaerobic conditions. furthermore, we will illustrate how evolutionary approaches can be applied to facilitate the utilization of mixed substrates. hyaluronic acid (ha) is a natural, linear polymer composed of β-1,3-n-acetylglucosamine and β-1,4-glucuronic acid repeating disaccharide units, with a molecular weight (mw) of up to 6 mda. it is a major constituent of the extracellular matrices and of synovial fluid. in recent decades, various fields of application, including cosmetics, ophthalmology, rheumatology, tissue engineering and drug delivery, have been explored, owing to the many important biological functions of ha and its unique physico-chemical properties. however, for some specific applications the relatively high mw of ha is a limiting factor, and the availability of low-mw fractions would be highly desirable. for food applications, low-mw ha has been shown to penetrate the gastrointestinal barrier, thereby increasing ha bioavailability. moreover, low-mw ha fractions are able to re-establish the ha content of the skin and can thus be used in cosmetics as anti-aging and anti-wrinkle agents. finally, low-mw ha has been shown to prevent oxygen free-radical damage in granulation tissue during wound healing. a range of methods has been described for the depolymerization of ha to low-mw fractions. these techniques involve heat treatment, ultrasonication, uv/gamma irradiation, and chemical and enzymatic degradation. we present results on a process for degrading ha originating from bacillus subtilis fermentation into well-defined low-mw fractions.
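the repeat-unit structure and the molecular weights quoted above allow a quick back-of-the-envelope estimate of chain length. the ~400 da mass assumed per glcnac-glca disaccharide repeat is a commonly quoted approximation, not a figure taken from the abstract.

```python
# rough estimate of the number of disaccharide repeats in a hyaluronic acid
# chain of a given molecular weight; the ~400 Da per repeat is an assumed,
# commonly quoted approximation.

DISACCHARIDE_MW = 400.0  # Da per GlcNAc-GlcA repeat unit (approximation)

def ha_repeat_count(mw_da: float) -> int:
    """Approximate number of disaccharide repeats for a given chain mass."""
    return round(mw_da / DISACCHARIDE_MW)

print(ha_repeat_count(6e6))  # high-MW HA (~6 MDa) -> ~15000 repeats
print(ha_repeat_count(5e4))  # a hypothetical 50 kDa "low-MW" fraction -> ~125 repeats
```

the contrast between the two numbers illustrates why a controlled degradation process with narrow polydispersity is needed to reach defined low-mw fractions.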
the process, developed at lab scale, is safe and well controlled and produces low-mw ha fractions with narrow polydispersity. moreover, the process is readily up-scalable. the low-mw ha fractions are being evaluated in various cosmetic applications. metabolic pathway manipulation for improving the properties and productivity of microorganisms is becoming an established concept. metabolic engineering can be defined as the directed improvement of product formation or cellular properties through the modification of specific biochemical reactions, or the introduction of new ones, with the use of recombinant dna technology. a detailed analysis of the physiology of the different pathways is needed to be able to introduce modifications aimed at the production of important metabolites, and also to understand the fundamentals of cell biology. when aimed at producing single compounds, metabolic engineering necessarily includes the modification of the cellular pathway(s) as well as the redirection of energy toward the production itself. the existing metabolic engineering applications are the culmination of more than two decades of global experience in developing processes for the production of fine chemicals, vitamins, nutraceuticals and animal nutritional aids such as amino acids. owing to their relatively low complexity, the first biotechnological applications were developed in microorganisms. our laboratory has been engaged in this field for several years. yeasts like saccharomyces cerevisiae, kluyveromyces lactis and zygosaccharomyces bailii have been developed for the production of fine chemicals such as lactic and ascorbic acids from d-glucose. in this contribution, we will present the latest data obtained. for 40 years, amino acid production has been a focus of industrial microbiology. l-glutamate and l-lysine are produced with corynebacterium glutamicum, while escherichia coli is used for l-threonine production.
the worldwide market for threonine is increasing drastically: the amount of threonine produced worldwide was 15,000 t in 1996 and rose to 30,000 t in 2000 and 45,000 t in 2004. according to expert predictions, the demand for threonine will rise at a double-digit rate within the next few years, while prices have meanwhile declined. these conditions call for very efficient production processes. besides strain development and an optimised downstream procedure, the fermentation process is an important target for productivity improvement. strain development depends on detailed knowledge of the production strains. with innovative methods we are able to get a close look inside the cells under different culture conditions; these methods have been called 'omics' in recent literature. knowledge about the genome, transcriptome, proteome, phosphoproteome, metabolome and fluxome leads to new ideas for strain improvement. data generated by these methods must be based on clearly defined culture conditions. therefore, highly parallel fermentations have to be performed to generate biological replicate samples. cutting-edge technical equipment is the basic requirement for experiments like this. other requirements are optimised sampling for the different analyses, technical replicates of the analytical steps and a detailed statistical analysis of the data. these procedures guarantee the distinction between real data and noise. the integration of all these data into a holistic model of the cell is the challenge for the future. by combining a new strain with process and downstream improvements, plant productivity was increased drastically. we ended up with an optimized, fast, high-yield process to meet the challenges the worldwide market presents. rieping, m., hermann, t. (2002); fermentation process for the preparation of l-threonine; wo/0218543.
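the production figures quoted above (15,000 t in 1996, 30,000 t in 2000, 45,000 t in 2004) can be checked against the claimed double-digit growth rate with a simple compound-annual-growth-rate calculation:

```python
# quick check of the threonine market figures quoted above: the tripling
# from 15,000 t (1996) to 45,000 t (2004) corresponds to a double-digit
# compound annual growth rate, consistent with the text.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction."""
    return (end / start) ** (1 / years) - 1

print(f"1996-2000: {cagr(15_000, 30_000, 4):.1%}")  # ~18.9% per year
print(f"2000-2004: {cagr(30_000, 45_000, 4):.1%}")  # ~10.7% per year
print(f"1996-2004: {cagr(15_000, 45_000, 8):.1%}")  # ~14.7% per year
```

so the historical growth over 1996-2004 averaged roughly 15% per year, squarely in the double-digit range the abstract cites.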
) are potentially quite useful biocatalysts, as they allow the regioselective and stereoselective hydroxylation of activated as well as non-activated carbon atoms. in addition, the large number of members of the p450 superfamily exhibits a wide diversity of specificities from which a useful biocatalyst may be selected. from a technical point of view, however, they have significant drawbacks: they usually cannot be produced in large quantities, nor recovered or stored without a severe loss of activity; their catalytic activity is mostly quite low; and their operational stability leaves much to be desired. most p450 enzymes require a complex protein/phospholipid machinery for activity, and the final electron donor in the reaction cascade, usually nadph, requires extensive recycling to arrive at a commercially satisfactory process. recently, the use of bacterial cytochromes as hydroxylation biocatalysts has received considerable attention. some of them are natural fusion proteins, which contain the heme and reductase domains on a single polypeptide chain. they are catalytically much more active than, e.g., the cytochromes occurring in human tissue. we have thus set out to further improve the technical applicability of these enzymes and have centered our activities on several bacterial cytochromes. it proved very useful to apply rational mutagenesis and directed evolution to these enzymes, leading to a surprising compatibility of the mutant enzymes with a wide variety of substrates. the mechanisms behind the limited stability of the enzymes were explored, leading to hybrid enzymes with enhanced stability, and the cofactor problem was alleviated using auxiliary enzymes or mediator-based technologies. as a result, a bioreactor based on microbial cytochromes was built and operated for several days. baeyer-villiger monooxygenases represent useful biocatalytic tools, as they can catalyze reactions which are difficult to achieve by chemical means.
however, so far only a limited number of these monooxygenases have been available in recombinant form (kamerbeek et al., 2003). using a recently described protein sequence motif (fraaije et al., 2002) and the available genome sequence information, we were able to identify and overexpress a number of novel bacterial bvmos. one of the overexpressed bvmos was found to be relatively stable, as it originates from thermobifida fusca, which grows at ∼60 °c. the enzyme was shown to be active on a broad range of substrates, preferring aromatic ketones (fraaije et al., 2005). the best substrate discovered so far is phenylacetone, hence its name: phenylacetone monooxygenase. we have solved the crystal structure of phenylacetone monooxygenase, which represents the first structure of a bvmo (malito et al., 2004). the crystal structure provides insight into the complex mechanism of catalysis mediated by fad-containing bvmos. by site-directed mutagenesis we have probed the role of several active-site residues; a crucial role is played by an arginine residue. as phenylacetone monooxygenase shares significant sequence identity (>40%) with all known nadph-dependent bvmos, many of the observed structural features appear to be conserved within this class of atypical monooxygenases. by homology modeling using the phenylacetone monooxygenase structure, the catalytic properties of other baeyer-villiger monooxygenases can be explained or predicted. screening for fungal baeyer-villiger monooxygenases. l. butinar 1, j. friedrich 1, v. alphand 2; 1 laboratory of biotechnology, national institute of chemistry, ljubljana si-1001, slovenia; 2 groupe biocatalyse et chimie fine cnrs fre2712, université de la méditerranée, marseille, france. the asymmetric form of the baeyer-villiger (bv) oxidation (transformation of ketones into lactones) is an important challenge for organic chemistry, since the lactones obtained are valuable building blocks for the synthesis of countless biologically active products.
to date, enzymatic and microbial bv oxidations appear more successful than their chemical counterparts (ten brink et al.). whereas most active bv monooxygenases are produced by bacteria (among which the well-studied enzyme of acinetobacter calcoaceticus), only a few fungal strains expressing bvmos have been described (alphand et al.; carnell and willetts). in order to increase the number of available biocatalysts that perform such asymmetric biotransformations, a screening of fungi belonging to the major groups of zygo-, asco- and basidiomycetes was conducted using bicycloheptenone as the test substrate. surprisingly, a large number of the tested fungi were able to transform the substrate into one to four different lactone isomers. the yields and the enantio- and regio-selectivity of the reaction depended on the fungal strain. alphand, v., furstoss, r., 2000. j. mol. catal. b 9, 209-17. carnell, a., willetts, a., 1992. biotechnol. lett. 14, 17-21. ten brink, g.j., et al., 2004. pyranose oxidase (p2ox) is a periplasmic enzyme that occurs widely in basidiomycetes. it catalyses the c-2 oxidation of several aldopyranoses to the respective 2-keto derivatives, transferring electrons to molecular oxygen to yield h2o2. p2ox is of interest for carbohydrate conversions, as its reaction products (2-keto sugars) can be attractive intermediates in the production of food ingredients. we cloned the gene encoding p2ox and subsequently amplified a cdna clone by rt-pcr. the cdna was inserted into a bacterial expression vector and successfully expressed in e. coli. the properties of the heterologous protein were compared to those of the native enzyme, showing that they are essentially identical. both the native and the recombinant enzyme were used in biotransformations of sugars. recently, the 3d structure of this tetrameric enzyme was elucidated. based on the structural information, several enzyme variants containing point mutations were constructed and further characterised.
two of these variants (e542k and e542r) displayed improvements in stability and certain kinetic properties, thus making them attractive for biocatalytic applications.

lactones are important compounds for the fragrance and flavour industry. at present, the production of lactones depends on the import of crude materials from tropical countries. in this project, we want to tackle the manufacture of lactones via a biocatalytic route using p450 monooxygenases. cytochrome p450 monooxygenases catalyse the oxyfunctionalization of non-activated c-atoms. cyp102a1 from bacillus megaterium and cyp102a2 and cyp102a3 from bacillus subtilis are soluble fusion proteins comprising p450 monooxygenase and fad/fmn reductase domains in one polypeptide chain. all three enzymes are highly homologous fatty acid hydroxylases. in particular, cyp102a1, also known as p450 bm-3, is well characterized and shows high activity compared to other p450 monooxygenases. the aim of this work is to change selectivity while conserving the high activity that is typical for these enzymes. using methods of structure modelling, rational protein design and directed evolution, new mutants of these enzymes with changed regioselectivity are obtained. products of conversion with these monooxygenases are intermediates in the production of lactones.

the interface of biology and materials science has led to new materials with unique structural and functional properties, and to new process technologies with the ability to produce, from the "bottom up", a wide range of biomimetic structures. these materials and their designs have broad application as catalysts, sensors, and devices for use in synthesis, cell and tissue engineering, bioanalysis and screening, and nanoelectronics. we have focused on the generation of sugar-based nanostructures, complete with tailored selectivities and biocatalytic activities at the molecular and nanoscales.
these include biocatalytically generated carbohydrate derivatives that self-assemble with high precision to give novel architectures with functional and responsive properties.

izumoring: a strategy for total production of rare sugars
ken izumori, rare sugar research center, kagawa university, kagawa 761-0795, japan. e-mail: izumori@ag.kagawa-u.ac.jp

we found a new enzyme, d-tagatose 3-epimerase (dte), that epimerizes all ketohexoses at the c-3 position. this epimerase catalyzes the interconversion not only of d-tagatose and d-sorbose, but also of d-fructose and d-psicose, l-sorbose and l-tagatose, and l-psicose and l-fructose. this new enzyme offered us a key tool to connect all ketohexoses, using hexitols as intermediates. the figure shows that all eight ketohexoses can be connected with dte and polyol dehydrogenases (pdh) in a ring. using this ring, we can easily find the pathway to transform d-fructose into d-tagatose via d-psicose using dte and pdh. various aldose isomerases transform ketohexoses into the corresponding aldohexoses, so we can connect all 16 aldohexoses with the 8 ketohexoses using these enzymes. finally, all hexoses (8 ketohexoses, 10 hexitols and 16 aldohexoses) are connected by enzyme reactions in a ring structure (not shown). this kind of strategy is also effective for the transformation of tetroses and pentoses. we can now produce all monosaccharides (tetroses, pentoses and hexoses) by enzyme reactions. the bioproduction strategy for all rare sugars (monosaccharides that are rare in nature) is illustrated using ring-form structures named izumoring. we have already succeeded in producing d-psicose on a large scale and are now in the process of mass production of various rare sugars from natural and cheap sugars using izumoring.

bioprocess development for chiral intermediates
christian wandrey, institute of biotechnology, forschungszentrum jülich gmbh, jülich d-52425, germany

chiral alcohols, diols, amino alcohols and chiral acids (e.g.
hydroxy acids and amino acids) play an important role in pharma and agro synthesis. in the past, such chiral intermediates were obtained by racemic resolution or via chiral reductions using prochiral precursors. here the problem of cofactor regeneration arises. this problem can be solved by enzyme-coupled or substrate-coupled cofactor regeneration using formate or isopropanol as reducing agent. alternatively, whole-cell bioreductions were developed in which glucose is used as the reducing agent. in recent years, "designer microorganisms" were developed in which oxidoreductases (e.g. alcohol dehydrogenases) were overexpressed in escherichia coli. in such cases, cofactor regeneration was achieved intracellularly with isopropanol as the reducing agent or by coexpression of a formate dehydrogenase, so that once again formate could be used for reduction. another route to chiral intermediates is a fermentative approach using classical pathways (like the aromatic amino acid pathway). here, the pathway is interrupted after the intracellular production of chorismate. new chiral intermediates can be obtained by overexpression of additional genes, which catalyze the production of chorismate derivatives leading to cyclohexadiene-trans-diols and the corresponding amino cyclitols. the last example can be regarded as an example of industrial biotechnology in which glucose is used as starting material (white biotechnology). here, bioprocess development is carried out in an integrated approach, in which molecular biochemical engineering provides the optimal intracellular metabolic network and classical biochemical engineering provides the optimal environment for the cell in a fermenter. examples will be given which reach (in cooperation with industrial partners) up to kilogram scale.

biotransformations are usually involved in just one or very few separate reactions in organic syntheses.
the development of a cell-free "system of biotransformations" (sbt), in which a set of enzymes acts in a coordinated fashion in a one-pot synthesis, leads to increased catalytic complexity, selectivity and yield, as well as facilitated operation at reduced costs. the example chosen to prove the usefulness of the sbt approach is the production of dihydroxyacetone phosphate (dhap). dhap, a c3 compound from glycolysis, is an important precursor for asymmetric c-c bond formation. so far, the production of dhap is difficult and expensive. for the construction of the dhap-producing sbt, e. coli's glycolysis is isolated from the rest of the metabolism to as large an extent as possible by the construction of a multi-knockout mutant. a culture is grown in an appropriate medium, homogenized in the production buffer, and used as the catalytic system. high production yields can be achieved since the production pathway is almost completely isolated from the metabolic network. the dhap-producing sbt provides not only a path from glucose to the product, but also an integrated atp-regeneration and nad-recycling system. in first experiments with a tpi-knockout mutant, a dhap production yield on glucose of 32% was achieved without optimizing the system. the system remained active for more than 24 h. up to now, atp cannot be applied in catalytic concentrations, but has to be present in amounts equimolar to glucose. the production yield could be increased by 10% through the addition of phosphate ions as substrate to the reaction, enabling the system to utilize atp more efficiently. these experiments indicate that the sbt approach is viable and that a large potential remains to improve the dhap-producing sbt.

for some years, novozymes has manufactured a pectate lyase for the scouring of textiles as an ecological alternative to the traditional harsh chemical treatment, and recently we at novozymes discovered additional applications for pectate lyases.
however, to be commercially attractive, more robust pectate lyases had to be developed. in this paper, we will demonstrate how we have significantly improved the stability of two different pectate lyases. as the two enzymes are quite similar in sequence and structure, it was a new discovery for us that different concepts of protein engineering had to be used in our attempts to stabilize each individual pectate lyase. the stability of one enzyme was improved by substitutions in the interior of the structure, whereas the stability of the other pectate lyase was primarily improved by changing surface residues. starting with knowledge from structural analysis, we applied rational protein engineering, resulting in a few selected variants. random protein engineering combined with screening of hundreds of thousands of variants was also used. in conclusion, the project team showed that by synergistic use of the two approaches we were able to move faster towards a solution, and eventually we succeeded in finding new stabilized pectate lyase variants applicable to new business areas.

the importance of energy independence as a national goal equals or exceeds that of the moon landing in 1969. the development of a new industry to produce fuel ethanol from woody biomass would increase national security, improve employment and the environment, and provide substantial relief from the debt of imported petroleum. costs associated with the rapid development of this new industry (~$1.4 billion per year) could be paid by re-assigning 1 cent per gallon from existing federal gasoline taxes, a small price to pay for future energy independence. the corn-to-ethanol industry continues to make a remarkable contribution to our liquid fuel needs through expansion. today, one of every six rows of corn is converted into fuel ethanol in the u.s. however, this expansion will be limited to 3-4% of total automotive fuel by the economics of corn costs and production.
corn can do more! corn stover is the single most abundant agricultural residue in the us and can be used as a feedstock to produce 60-80 gallons of ethanol per dry ton. further expansion with other biomass feedstocks such as agricultural and municipal residues (lignocellulose, woody biomass) could produce over 100 billion gallons of fuel ethanol annually, according to a recent joint report by the usda and doe (april, 2005). current technology has been demonstrated at pilot scale for the production of fermentable sugars from hemicellulose by dilute acid hydrolysis and for the hydrolysis of cellulose using fungal cellulase enzymes. biocatalysts such as recombinant escherichia coli have been developed and demonstrated for the efficient conversion of all sugar constituents of biomass to ethanol. a national goal of full-scale deployment of current technology to produce biomass-based fuel ethanol would allow the us to reduce imported petroleum by 50%. together with the increased efficiencies of hybrid vehicles, energy independence could be achieved within 10-20 years. similar gains could be realized by many nations around the world, providing new manufacturing and employment, redistributing wealth and ensuring a cleaner, healthier environment.

bioethanol production using thermophilic bacteria
marie just mikkelsen, birgitte k. ahring, emab, biocentrum-dtu, 2800 lyngby, denmark

the industry of bioethanol production is facing the challenge of redirecting the process from fermentation of relatively easily convertible but expensive starchy materials to complex but inexpensive lignocellulosic biomass. on lignocellulosic hydrolysates, gram-positive thermophilic bacteria have unique advantages over the conventional ethanol production strains. the primary advantages are their naturally broad substrate specificities and, in some strains, a high tolerance to lignocellulosic hydrolysates.
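the corn stover figures quoted above (60-80 gallons of ethanol per dry ton, against a 100-billion-gallon annual target) imply a feedstock requirement that is easy to check. a back-of-envelope sketch in python, using only the numbers from the abstract:

```python
# feedstock requirement implied by the abstract's own figures:
# 60-80 gallons of ethanol per dry ton of biomass, and a national
# target of 100 billion gallons of fuel ethanol per year.
TARGET_GAL = 100e9                    # gallons of ethanol per year
YIELD_LOW, YIELD_HIGH = 60.0, 80.0    # gallons per dry ton

def dry_tons_needed(target_gal, gal_per_ton):
    """dry tons of biomass per year needed to hit a production target."""
    return target_gal / gal_per_ton

tons_high_yield = dry_tons_needed(TARGET_GAL, YIELD_HIGH)  # best case
tons_low_yield = dry_tons_needed(TARGET_GAL, YIELD_LOW)    # worst case
```

this gives roughly 1.25 to 1.7 billion dry tons per year, i.e. on the order of the biomass supply discussed in the april 2005 usda/doe report cited above.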
moreover, ethanol fermentation at high temperatures has the additional advantages of high productivities and substrate conversions, low risk of contamination and facilitated product recovery. some thermophilic bacteria naturally produce primarily ethanol from most sugar monomers present in lignocellulosics, but modifications are still necessary to increase ethanol yields.

the release of usable sugars from lignocellulosic biomass for industrial fuel-ethanol fermentation is often facilitated by a weak acid hydrolysis step. as a consequence, inhibitors such as furfural and 5-hydroxymethylfurfural (hmf) are formed as degradation products of xylose and glucose, respectively. moreover, the fermentative end product, ethanol, is itself inhibitory. these and other inhibitors present an environment which elicits the expression of stress-related genes in saccharomyces cerevisiae. recently, 65 s. cerevisiae genes have been identified as important in furfural stress tolerance. when furfural is present, yeast with these genes disrupted grow poorly compared to wild-type yeast. a sub-class of these genes suggests that yeast grown under furfural-induced stress may rely upon pathways similar to those of cells grown under various other stresses, including oxidative, heat, and sorbate stress. to investigate this link further, we analyzed stress-induced phenotypes such as ros activity, dna damage, and membrane damage in wild-type and mutant yeast exposed to furfural or hmf stress. moreover, we investigated whether overexpression of this sub-class of genes would provide protection from furfural-induced stress and oxidative damage.

micro-organisms to be used in the fermentation of lignocellulose hydrolyzates should preferably have three characteristics: (a) high ethanol tolerance, (b) resistance to inhibitors found in the hydrolyzate, and (c) a broad substrate utilization range, since the hydrolyzate contains several sugars.
in addition to the possibility of controlling the level of potential inhibitors, fed-batch fermentations also permit the parallel uptake of several different monomeric sugars. two strains of saccharomyces cerevisiae, cbs 8066 (a commonly used laboratory strain) and tmb 3000 (a strain isolated from a spent sulfite liquor fermentation plant), were characterized in batch and fed-batch fermentation of a dilute-acid hydrolyzate from spruce. the strains had different abilities to ferment spruce hydrolyzate. the study suggests that the furan reduction capacity of a yeast strain is a key factor for its performance in fermentation of lignocellulosic hydrolyzate.

polyketides constitute a structurally highly diverse group of natural products that possess broad ranges of pharmacological properties and represent a major source of novel cancer therapeutics. however, these compounds may be sub-optimal with regard to activity, selectivity, availability and unwanted side effects. in addition, the sustainable production of these valuable metabolites can be a challenge. studying the molecular basis of the biosynthetic pathways may set the basis for improving production and for rationally engineering derivatives with altered bioactivity profiles, e.g. through targeted knockouts, mutasynthesis (ziehl et al., 2005), and swapping of pathway genes. our results in elucidating and manipulating the biosynthesis of selected antitumoral polyketide metabolites from bacteria (aureothin, chartreusin) and fungi (cytochalasins, rhizoxin) are presented. analyses at the genetic and biochemical levels provided new insights into several unusual biosynthetic features, e.g. non-linear polyketide assembly for the nitroaryl-substituted polyketide aureothin (he and hertweck, 2003, 2005), an oxidative rearrangement cascade in the chartreusin pathway (xu et al., 2005), and a fungal iterative pks-nrps hybrid synthase involved in cytochalasin biosynthesis (schuemann and hertweck, 2005).
the most surprising result was obtained from elucidating the biogenesis of the antimitotic agent rhizoxin from rhizopus sp., which allowed for a significant improvement in large-scale production (partida-martinez and hertweck, 2005). he, j., hertweck, c., chem. biol. 2003, 10, 1225-1232; chembiochem 2005, 6.

glycopeptides such as vancomycin and teicoplanin are the drugs of last resort for the treatment of severe infections caused by antibiotic-resistant gram-positive bacteria. glycopeptides inhibit peptidoglycan biosynthesis by binding as dimers to the d-ala-d-ala termini of the cell wall precursors. amycolatopsis balhimycina synthesizes the vancomycin-type glycopeptide balhimycin, whose structure and biological properties greatly resemble vancomycin and which differs only in its glycosylation pattern. using a "reverse genetics" approach, we have identified the 66-kb gene cluster encoding the biosynthesis of balhimycin. by a combination of genetics, biochemistry and analytical organic chemistry, we were able to elucidate the biosynthetic pathway and to assign functions to almost all genes of the cluster. the biosynthesis starts with the pathway-specific provision of the non-proteinogenic amino acids β-hydroxytyrosine (β-ht), hydroxyphenylglycine (hpg) and dihydroxyphenylglycine (dpg), which form, together with (n-methyl)-leucine and asparagine, the heptapeptide backbone of balhimycin. dpg is synthesized via a polyketide synthase mechanism (pksiii) similar to that known from plant chalcone/stilbene synthases (pfeifer et al., 2001). for β-ht synthesis, three genes that form an operon are essential (puk et al., 2004): bpsd, an nrps, binds a tyrosine molecule, which is then hydroxylated by the p450 monooxygenase oxyd. the perhydrolase bhp is required for the release of β-ht. subsequently bhaa, an nadh/fad-dependent halogenase, catalyzes the chlorination of β-ht to form chloro-β-hydroxytyrosine (puk et al., 2002), which is needed to stabilize the dimerization.
the amino acids are linked by non-ribosomal peptide synthetases (recktenwald et al., 2002), and the aromatic side chains are interconnected by p450 monooxygenases, a series of reactions which leads to the first antibiotically active intermediate. inactivation of the oxygenase genes revealed the order of the cyclization steps (bischoff et al., 2001): the oxygenases act in a stepwise fashion in the sequence oxyb, oxya and oxyc. the resulting cross-linked heptapeptide is then finally modified by methylation and glycosylation. the biosynthesis is regulated by the strr-type regulator bbr, which was shown to bind in front of different operons of the balhimycin gene cluster. this ensures the coordinated expression of the biosynthetic genes. the non-producing mutants, defective in the supply of the non-proteinogenic amino acids, were used as recipients in cloning experiments as well as in approaches of precursor-directed biosynthesis by feeding chemically synthesized alternative precursors. thus, novel balhimycin derivatives were generated (weist et al., 2002, 2004). bischoff et al., 2001. angew. chem. int. ed. 40, 4688-4691. pfeifer et al., 2001. j. biol. chem. 276, 38370-38377. puk et al., 2004. j. bacteriol. 186, 6093-6100. puk et al., 2002. chem. biol. 9, 225-235. recktenwald et al., 2002. microbiology 148, 1105-1118. weist et al., 2002. angew. chem. int. ed. 41, 3383-3385. weist et al., 2004.

spectroscopy-guided discovery of novel bioactive microbial natural products
thomas ostenfeld larsen, michael edberg hansen, cmb biocentrum-dtu, technical university of denmark, 2800 lyngby, denmark

the task of finding novel bioactive natural products is usually bioassay driven. often a certain type of compound (e.g. polyketide, alkaloid) turns out to be active in an assay.
having generated a promising hit in a bioassay, the normal procedure in the drug discovery process is to produce a large number of structurally analogous compounds, either by traditional chemical synthesis or by combinatorial chemistry, in order to study structure-activity relationships and to find even more active lead compounds. as an alternative to chemical synthesis of analogues, nature can be explored for structurally similar compounds by uv-spectroscopy-guided screening. this work will present a new method for the systematic and automated computer-assisted search of full uv spectra in a large number of data files, for both dereplication of known and discovery of new natural products, based on the use of the new mathematical algorithm x-hitting.

exploring the substrate spectrum of the antibiotic-producing bacterium saccharopolyspora erythraea
p. krabben, p. oliveira, f. baganz, j. ward, department of biochemical engineering, university college london, london wc1e 7je, uk

knowledge of substrate utilisation capabilities plays an important role in the development of genome-scale metabolic models (borodina et al., 2005) and the refinement of first-generation annotations. furthermore, knowledge of product formation during catabolism of different substrates provides valuable information about the distribution of metabolic fluxes and thereby forms a basis for rational strain improvement. we present here data on the substrate utilisation capabilities and the corresponding product formation of s. erythraea. this analysis will help in improving the production of erythromycin and provide clues to the activation of the cryptic secondary metabolic pathways present in the s. erythraea genome. reference: borodina, i., et al., 2005. genome-scale analysis of streptomyces coelicolor a3(2).

modeling of growth and product formation on complex media containing multiple substitutable substrates is a challenge.
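the internals of the x-hitting algorithm mentioned above are not described in the abstract. as an illustration of the general idea of matching full uv spectra against a library for dereplication, a minimal cosine-similarity search (with invented toy spectra, not a reimplementation of x-hitting) might look like:

```python
from math import sqrt

def cosine(u, v):
    """cosine similarity of two uv spectra sampled on the same wavelength grid."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_hits(query, library, threshold=0.95):
    """return library entries whose spectra resemble the query, best first."""
    scores = [(name, cosine(query, spec)) for name, spec in library.items()]
    hits = [(name, s) for name, s in scores if s >= threshold]
    return sorted(hits, key=lambda ns: ns[1], reverse=True)

# toy spectra: absorbance at a handful of wavelengths (invented values)
library = {
    "known_polyketide": [0.10, 0.80, 1.00, 0.40, 0.10],
    "known_alkaloid":   [1.00, 0.20, 0.10, 0.60, 0.90],
}
query = [0.12, 0.79, 0.98, 0.41, 0.09]   # close to the polyketide spectrum
hits = rank_hits(query, library)
```

a match above the threshold flags a probable known compound (dereplication); a query with no hits is a candidate novel metabolite.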
complex media offer the organism multiple choices of carbon and nitrogen substrates, including free amino acids, peptides, and soluble and insoluble proteins, in addition to defined sources such as glucose and ammonium sulfate. we present a structured model that accounts for the growth and product formation kinetics of rifamycin b fermentation in a multi-substrate complex medium. the model considers the organism to be an optimal strategist with a mechanism to regulate the uptake of substrate combinations. further, we assume that the uptake of a substrate depends on the level of a key enzyme, which may be inducible. the model also treats control parameters as fractions of flux through a given metabolic branch. the control parameters are obtained using a simple multi-variable constrained optimization. the model parameters were rigorously estimated via a specifically designed experimental plan. the model correctly predicts the experimentally observed growth and product formation kinetics and the regulated simultaneous uptake of the substitutable substrates under different fermentation conditions. the model and the model parameters provide useful insights into the growth and product formation strategy of this industrially important process. this presentation will describe the experimental results, the model development and the relevant model parameters for a. mediterranei s699.

the recent surge in oil prices and the increasing concern for our environment have generated much interest in the production of chemicals from renewable resources. succinic acid, also called amber acid, is a four-carbon dicarboxylic acid which can be used as a precursor of numerous products, including biodegradable polymers, green solvents, pharmaceuticals, and bulk and fine chemicals. a new capnophilic bacterium named mannheimia succiniciproducens mbel55e was isolated from the rumen of a korean cow.
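returning to the rifamycin b model above: a multi-substrate growth model of the general kind described can be sketched with monod kinetics and explicit flux fractions per substrate. this is an illustrative skeleton only; all parameter values and substrate names are invented for the demo, not the fitted a. mediterranei s699 parameters:

```python
# invented demo parameters: two substitutable substrates taken up in parallel
MU_MAX = {"glucose": 0.25, "amino_acids": 0.10}   # max specific growth rate, 1/h
KS     = {"glucose": 0.5,  "amino_acids": 0.3}    # half-saturation constant, g/l
YIELD  = {"glucose": 0.5,  "amino_acids": 0.4}    # g biomass per g substrate

def monod(mu_max, s, ks):
    return mu_max * s / (ks + s)

def simulate(x0, s0, dt=0.1, hours=48):
    """forward-euler integration of biomass x and substrate pools s."""
    x, s = x0, dict(s0)
    for _ in range(int(hours / dt)):
        rates = {k: monod(MU_MAX[k], s[k], KS[k]) for k in s}
        mu = sum(rates.values())          # additive uptake of both substrates
        if mu == 0.0:
            break                         # both substrates exhausted
        dx = mu * x * dt
        for k in s:
            frac = rates[k] / mu          # fraction of new biomass from substrate k
            s[k] = max(0.0, s[k] - frac * dx / YIELD[k])
        x += dx
    return x, s

x_end, s_end = simulate(x0=0.05, s0={"glucose": 10.0, "amino_acids": 2.0})
```

the abstract's model additionally makes the flux fractions decision variables of a constrained optimization and couples uptake to inducible key-enzyme levels; here the fractions simply follow the instantaneous monod rates.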
this bacterium can produce large amounts of succinic acid along with some other metabolites such as lactic, formic and acetic acids. we have completely sequenced the genome of m. succiniciproducens and characterized its genome content in the context of metabolic pathways. we then constructed a genome-scale in silico metabolic network for metabolic flux analyses, and carried out metabolic flux analysis under varying environmental conditions. based on the in silico analysis results, we selected several target genes to be manipulated for enhanced succinic acid production. detailed results of metabolic engineering based on genome-scale information will be reported.

we have been developing tools for inverse metabolic engineering in order to identify gene targets that improve the phenotype of industrial strains and of cells for medical applications. to this end, we create genomic fragment libraries from a source organism and use them to transform the host organism. cells are selected in environments that favor the phenotype of interest, and genes enriched in these cells are sequenced and used in follow-up transformations of cells with specific genetic backgrounds. this overall strategy is complemented with additional tools for modulating gene overexpression, gene deletion, and high-throughput clone isolation. we will demonstrate applications of this strategy to the identification of gene targets for improved xylose assimilation in recombinant saccharomyces cerevisiae and improved lycopene production in escherichia coli.

based on assumed reaction network structures, nadph availability has been proposed to be a key constraint in β-lactam production by penicillium chrysogenum. in this study, nadph metabolism was investigated in glucose-limited chemostat cultures of an industrial p. chrysogenum strain. enzyme assays confirmed the nadp-specificity of the dehydrogenases of the pentose-phosphate pathway and the presence of nadp-dependent isocitrate dehydrogenase.
pyruvate decarboxylase/nadp-linked acetaldehyde dehydrogenase and nadp-linked glyceraldehyde-3-phosphate dehydrogenase were not detected. although the nadph requirement of penicillin-g-producing chemostat cultures was calculated to be 1.5-1.7-fold higher than that of non-producing cultures, the activities of the major nadph-providing enzymes were the same. isolated mitochondria showed high rates of antimycin a-sensitive respiration of nadph, indicating the presence of a mitochondrial nadph dehydrogenase that oxidizes cytosolic nadph. the presence of this enzyme in p. chrysogenum has important implications for stoichiometric modelling of central carbon metabolism and β-lactam production and may provide an interesting target for metabolic engineering.

bacteria and mammalian cells have traditionally been used as hosts for commercial recombinant protein production. however, in recent years the insect cell-baculovirus system has emerged as a potentially attractive recombinant protein expression vehicle. although flow cytometry has been used widely for the analysis of mammalian and microbial cells, there is very little information on applications of this powerful technique in insect cell culture. here we compared ratiometric cell counts and viability (propidium iodide and calcein am) of sf-21 cell cultures determined by flow cytometry to those determined by more traditional methods using a haemocytometer and the trypan blue exclusion dye. flow cytometry was also used to monitor various parameters during cultures of sf-21 cells infected with the recombinant autographa californica nuclear polyhedrosis virus (acnpv) containing the inserted nucleic acid sequence amfp486, coding for the amcyan coral protein, which emits natural green fluorescence.

complete elucidation of the genetic control of a metabolic flux requires the availability of fine-grained expression levels of the gene(s) of interest.
we developed a collection of promoters of varying strength for tuning gene expression in the yeast s. cerevisiae. engineered promoters were obtained through random mutagenesis of the constitutive tef1 promoter. eleven mutated promoters, selected by fluorescence-activated cell sorting (facs), span gradually increasing activities between about 8 and 120% of the native tef1 promoter. the data were also confirmed at the mrna level via rt-pcr. by introducing selectable markers in front of the different tef1 promoter mutants, we provide plasmid collections which can be used directly to amplify promoter replacement cassettes for genomic integration of the fine-grained promoter collection in front of any yeast gene.

l-arabinose, widely distributed in the plant kingdom, is a component of the plant cell wall. l-arabinose does not exist abundantly in free form in plants, but usually occurs in corn hull, sugar beet pulp, gum arabic and mesquite gum as part of polysaccharides such as arabinoxylan and arabinogalactan. to produce arabinose from agricultural wastes, we screened an arabinogalactan-degrading strain from compost. thereafter, a putative arabinase gene from this strain was cloned (b1029 ts2-8). in a spectrometric assay using p-nitrophenyl α-l-arabinofuranoside, the recombinant showed 3-4-fold higher activity than the wild-type e. coli strain. after enzymatic reaction with corn fiber, b1029 ts2-8 produced 2.15 g/l of l-arabinose, as detected by hplc. however, the enzyme activity was still very low, so we are transferring the gene into an expression vector system. further characterization and enzyme engineering to enhance the activity toward corn fiber will be presented in the poster.

there are only a limited number of hypersaline areas in the world, among them several locations in turkey such as van lake in the eastern region of turkey.
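returning to the tef1 promoter collection above: the practical value of a graded library is being able to pick the member whose activity is closest to a desired expression level. a minimal sketch, with placeholder names and activity values rather than the published set of eleven promoters:

```python
# placeholder library: name -> activity relative to native tef1 (%)
# (invented values spanning roughly the 8-120% range the abstract reports)
PROMOTERS = {
    "tef1_mut1": 8,  "tef1_mut2": 16, "tef1_mut3": 30, "tef1_mut4": 45,
    "tef1_mut5": 60, "tef1_mut6": 75, "tef1_mut7": 90, "tef1_mut8": 100,
    "tef1_mut9": 110, "tef1_mut10": 120,
}

def pick_promoter(target_pct):
    """choose the library promoter whose activity is closest to the target."""
    return min(PROMOTERS, key=lambda p: abs(PROMOTERS[p] - target_pct))

choice = pick_promoter(50)   # tune a gene to roughly half the native tef1 level
```

the chosen promoter's replacement cassette would then be amplified and integrated in front of the gene of interest, as the abstract describes.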
the isolation and identification of halophilic and hyperhalophilic microorganisms from such locations is essential for the determination of biodiversity in turkey. high-level production of extremozymes from these microorganisms also has many economic advantages due to their stability under extreme reaction conditions. proteolytic enzymes are the most important group of enzymes produced commercially. of these, proteases produced by alkalophilic microorganisms are not only investigated in scientific areas such as protein chemistry and protein engineering but also find wide application in the food, pharmaceutical, leather and detergent industries. in this study, 24 microorganisms isolated from van lake were screened for the presence of extracellular alkaline protease activity. the optimum screening temperature and ph were determined as 37 °c and ph 9.5, respectively. one of the isolates, which could grow at 0-20% salinity, reached the highest levels of extracellular alkaline protease activity. this best producer, identified as the halotolerant bacillus pumilus, was found to produce alkaline protease both in the presence and absence of nacl. to improve enzyme production yields, culture conditions such as medium composition, growth ph and temperature were optimized. the effects of different carbon sources and of organic and inorganic nitrogen sources on the production of alkaline protease were studied. whereas a mixture of inorganic and organic nitrogen sources induced high protease production, use of only an organic nitrogen source supported poor enzyme production. halotolerant bacillus pumilus produced maximum alkaline protease activity when maltose, yeast extract and sodium nitrate were used as carbon source, organic nitrogen source and inorganic nitrogen source, respectively. this project was supported by tubitak through project tbag 2321-103t069.
in the market of biochemical products, a very important role is played by heterologous protein production, and despite recent advances in the exploitation of mammalian cells, yeasts can still present advantages as host systems. among them, the spoilage yeasts belonging to the genus zygosaccharomyces have become significantly attractive due to some peculiar properties. in particular, z. bailii is characterized by acid resistance and osmotolerance to high sugar and ethanol concentrations, combined with high biomass yield. although little is still known about its genetics and cell biology, our group is working on its development and exploitation for recombinant production with an integrated approach coupling physiological studies with the creation of molecular tools for heterologous protein production. we previously described, and filed a patent application regarding, the first techniques necessary to transform this yeast and to express and secrete different proteins derived from different sources. here we present and discuss the latest advances in the optimization of heterologous protein expression: on the one hand, we present a reproducible strategy for target gene deletion, leading to the first z. bailii auxotrophic mutant; on the other, we show the improvement of gene dosage and plasmid stability by building a set of multicopy expression vectors based on the sequences of the z. bailii 2µm-like endogenous plasmid psb2, as well as an integrative plasmid.

all the known γ-butyrolactone autoregulator receptors are highly conserved in the dna-binding motif present in their n-terminal portions and have been proposed to play roles as transcriptional regulators in antibiotic production and/or morphological differentiation. previously, kim et al. reported that the cloned scar in streptomyces clavuligerus has several characteristics of the autoregulator receptors of the genus streptomyces.
in this study, to clarify the in vivo function of scar, a scar-disrupted strain was constructed by means of homologous recombination after introducing a scar-disruption construct via transconjugation from e. coli. no difference in morphology was found between the wild-type strain and the scar disruptant. however, the scar disruptant showed a 1.8-fold higher production of clavulanic acid than the wild-type strain. the phenotype was restored to the original wild-type phenotype by complementation with intact scar. therefore, the autoregulator receptor scar acts as a negative regulator of clavulanic acid biosynthesis but plays no role in the cytodifferentiation of s. clavuligerus. lactate dehydrogenase catalyses the production of lactate from pyruvate. it is the first target of many studies of lactic acid-producing microorganisms such as rhizopus oryzae. in the present study, based on the known sequences of the r. oryzae ldha and ldhb genes (skory, 2000), the genes were cloned and expressed in the citric acid-producing fungus aspergillus niger. the aspergillus niger strains expressing the ldha or ldhb gene showed increased production of lactate. among 50 transformants tested, 4 ldha- and 5 ldhb-expressing strains were found to have higher lactate dehydrogenase activity compared with the wild type under the conditions tested. the highest specific activity obtained with the ldha transformants was only 2.5 times that of the wild type, while this was 10 times for one of the transformants expressing ldhb. in addition to increased lactate production, citric acid production was also increased. however, gluconic acid production ceased in the ldha- or ldhb-expressing a. niger strains. the production of lactic acid in the a. niger transformants and the lactate dehydrogenase a and b enzymes are being investigated in the chosen strains. selection of an n source suitable for production of rhodococcus sp. 
biomass for the purposes of microbial transformation of 5αh-epoxypregnanolone (5αh) and δ5-epoxy-pregnenolone (δ5) into their 9α-hydroxy derivatives was carried out. three dehydrated and three non-dehydrated n sources were tested. the transformation reaction was carried out in phosphate-buffered medium containing 1 g l−1 of the steroid substrate and 0.1 g l−1 cells. the steroids were determined by hplc. the transformation resulted in the formation of three derivatives appearing in the reaction medium in the sequence: δ4-3-keto-, 9α-hydroxy- and 9α,20β-hydroxy-epoxy-pregnenolone. a strong influence of the n source on the hydroxylating activity of the biomass was observed. tryptose (difco) gave a cell depot actively hydroxylating 5αh without any significant accumulation of the δ4-3-keto derivative. the most effective accumulation of hydroxylated derivatives of δ5 was observed with biomass grown on freshly prepared meat extract, while the commercial products tryptose (difco), meat extract (difco) and lactalbumin (fluka) gave valuable information about the dynamics of the transformation process. biobleaching of kraft cellulose pulp by poliporus versicolor aysun ergene 1 , nazif kolankaya 2 : 1 kırıkkale university, faculty of science and literature, department of biology, yahsihan, kırıkkale, turkey; 2 hacettepe university, faculty of science, department of biology, beytepe, ankara, turkey the suitability of culture supernatant from poliporus versicolor for use in the biobleaching of kraft cellulose pulp was investigated. p. versicolor was found to grow on mycological broth (1% soytone, 4% d-glucose and 0.5% cellulose pulp). maximal extracellular ligninase production was detected after 7 days (7 nkat). the optimum biobleaching conditions were 30 °c and ph 4.8 over 10 days. under these conditions p. versicolor decreased the kappa number from 38.55 to 19.42 and increased brightness from 28 to 32.7 in a 10-day treatment. 
xylanase production, purification and characterization from a soil isolate, bacillus m-13 ayşegül ersayin 1 , aytaç kocabaş 1 , b. zümrüt ogel 2 , ufuk bakir 3 : 1 biotechnology department, middle east technical university, ankara, turkey; 2 food engineering department, middle east technical university, ankara, turkey; 3 chemical engineering department, middle east technical university, ankara, turkey, 06531. e-mail: ubakir@metu.edu.tr (u. bakir) xylan is a major component of the plant cell wall, representing up to 35% of the dry weight. the xylan molecule is a complex polymer consisting of a β-d-1,4-linked xylopyranoside backbone substituted with acetyl, arabinosyl and glucuronosyl side chains. hydrolysis of the xylan backbone is mainly catalysed by endo-β-1,4-xylanases (ec 3.2.1.8). many bacterial and fungal species are able to utilize xylan as a carbon source. interest in the enzymology of xylan hydrolysis has increased because of the use of xylanases in the bioconversion of agricultural wastes to valuable products such as single-cell protein, xylo-oligosaccharides and fuel, as well as in bio-bleaching processes and the food and animal feed industries. in this study, the xylanolytic nature of a soil isolate, bacillus m-13, has been demonstrated. bacillus m-13 produced multiple xylanases when grown on a liquid medium containing agricultural wastes as the sole carbon source. various agricultural wastes, including corn cobs and cotton waste, with and without pretreatments, were used to maximize enzyme production. the major xylanase, with a molecular weight of 20 kda on sds-page and a pi of 9.1 on ief, was partially purified 150-fold with 40% recovery by liquid chromatographic techniques, including gel filtration, ion exchange and hydrophobic interaction chromatography. 
enzymes are important constituents of laundry detergents due to their contribution to shorter washing times, reduced energy and water consumption through lower washing temperatures, environmentally friendlier wash-water effluents and fabric care. however, they can lose a significant part of their activity in the chemically hostile detergent matrix over a period of several weeks. therefore, improving the storage stability of enzyme granulates is the main challenge in the development of a new product. the complexity of the detergent matrix implies a complicated mechanism of enzyme inactivation. a combination of factors, such as oxidation by h2o2 released by the bleaching agents, humidity, high temperature, autolysis of enzymes, high local ph in a granule, oxygen, and other detergent components, plays a role in the activity loss. an experimental investigation of the inactivation of the solid-state enzyme during storage has been initiated. the release rate of h2o2 from the bleaching agent, sodium percarbonate, was determined using a simple and accurate method for measuring the gas-phase h2o2 concentration. the deactivation kinetics of the pure enzyme was determined as a function of gas-phase h2o2 concentration and humidity. the preliminary results indicate that humidity plays a significant role in the inactivation mechanism of the detergent enzyme, possibly by increasing the mobility of the enzyme molecule and the surface area exposed to destructive agents. the effect of the main granulate ingredients on the stability of the enzyme was investigated and the extent of protection by each component was estimated. the study is important for revealing the phenomena occurring in the detergent matrix during storage. understanding the inactivation mechanism provides a valuable tool for the development of more effective protective coatings and stabilizers. 
the use of biosurfactants in the cosmetic industry has attracted great attention from biotechnological researchers because they are made from two inexpensive, renewable and easily accessible agricultural starting materials: sugar and oil/fat. carbohydrate-based products are non-toxic, biocompatible and biodegradable. in addition, enzymatic processes present many advantages in comparison with chemical methods, which employ high temperatures in the presence of alkaline catalysts, high energy consumption and low product selectivity. sugar esters have vast application, for example in antibiotics, biomaterials, surfactants and cosmetics. we investigated the synthesis of sugar vinyl esters by a protease-catalysed transesterification method using protease from bacillus subtilis. sucrose (0.125 m) and vinyl ester (0.5 m) were mixed in dimethylformamide at 160 rpm agitation. we first studied the effect of the bacillus subtilis protease concentration (5, 10, 20 and 40 mg/ml) as catalyst, then the influence of temperature (30 and 50 °c), and then the influence of the molar ratio (1:1, 1:2 and 1:4 m/m) between vinyl laurate (ch3(ch2)10cooch=ch2) and sucrose. subsequently, we investigated the effect of the water content, using 0, 5, 10 and 20% water in dmf. the conversion of sucrose to sucrose esters was determined from the decrease in sucrose measured by hplc. the results showed that the best conditions for high enzymatic activity were 40 mg/ml of bacillus subtilis protease at 30 °c, a molar ratio of 1:4 (vinyl laurate:sucrose) and the addition of 10% water in dmf. finally, we succeeded in characterizing the vinyl sugar ester produced after 25 hours of reaction by 13c nmr. the results confirmed the c1-substituted sugar mono-ester (1-o-vinyl lauroyl sucrose). effects of oxygen transfer on α-amylase production by b. 
amyloliquefaciens nurhan güngör, güzide çalık bre lab, department of chemical engineering, ankara university, 06100 ankara, turkey α-amylase (e.c. 3.2.1.1), a commercially important enzyme used in the food, textile, detergent, brewing, paper, and animal feed industries, hydrolyses α-1,4-glucosidic bonds in amylose, amylopectin, and related polysaccharides. the optimum medium composition and the influence of bioreactor operation parameters on α-amylase production with high yield and selectivity were determined, together with the metabolic flux distributions, using b. amyloliquefaciens (nrrl b-14396), which was found to be a good producer of the enzyme. a systematic investigation of oxygen transfer in relation to the metabolic fluxes for α-amylase is not available in the literature. shake-flask experiments were conducted in 0.5 dm3 air-filtered bioreactors in orbital shakers with agitation and heating controls (b. braun, certomat-bs1). laboratory-scale bioreactors comprised agitation-, heating-, foam-, dissolved-oxygen- and ph-controlled 1.0 and 3.5 dm3 systems (b. braun, biostat q; and chemap). after separation of the cells with a sorvall rc28s ultracentrifuge, α-amylase activity was measured by the dns method (bernfeld, 1955). amino acid concentrations were determined with an hplc (waters); protein and organic acid concentrations were measured with an hpce (waters, quanta 4000e) (çalık et al., 1998). oxygen transfer characteristics in the bioreactor systems were calculated using the dynamic method (rainer, 1990). in the mass flux balance-based analyses, a pseudo-steady-state approximation for the intracellular metabolites and the accumulation rates of the extracellular metabolites measured throughout the fermentations, in consideration of the biochemical features of the system, were used to acquire the flux distributions. at laboratory scale, the effects of different c sources, i.e. glucose, fructose, maltose, lactose and soluble starch; n sources, i.e. 
(nh4)2hpo4, (nh4)2so4 and nh4cl, and/or their concentrations; and the operation parameters ph and temperature on cell growth, substrate utilization, α-amylase and by-product concentrations, and α-amylase activity were investigated. at pilot scale, the fermentation and oxygen transfer characteristics of the bioreaction system, together with the metabolic fluxes, were determined. bernfeld, p., 1955. methods enzymol. 1, 149-159. çalık, p., çalık, g., özdamar, t.h., 1998. enzyme microb. technol. 23, 451-461. rainer, b.w., 1990. geldanamycin is a benzoquinone ansamycin produced as a secondary metabolite by the actinomycete streptomyces hygroscopicus var. geldanus in submerged culture. it is a broad-spectrum antibiotic and exhibits anti-tumour activity through its interaction with the heat shock protein 90 family of chaperone proteins. the optimal recovery of geldanamycin from fermentation broths is the focus of the presented work. the application of adsorbent resins was assessed and the viability of developing a solid-phase extraction process for geldanamycin was determined. it was found that recovery of geldanamycin from fermentation broth was possible using adsorbent resins, and the use of resins facilitated the recovery of a product stream of high purity. the composition of the fermentation broth had an impact on the performance of the resins, and it was found that assessing performance on the basis of experimentally derived data was more apt than studying the kinetics of adsorption alone. adsorptive processes are, by their nature, difficult to optimise, and this was found to be the case when optimising the recovery of geldanamycin from partially clarified fermentation broth. 
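the dynamic method cited in the α-amylase abstract for characterizing oxygen transfer (rainer, 1990) is commonly implemented as a gassing-out regression: after aeration is resumed, dissolved oxygen obeys dc/dt = kLa(c* − c) − our, so kLa falls out of a linear fit of the transfer rate against the driving force. a minimal sketch on synthetic data; all numerical values here are illustrative, not taken from the abstract:

```python
import math

def fit_kla(times, c, c_star, our):
    """dynamic gassing-out estimate of the volumetric transfer coefficient kLa.

    after aeration is resumed, dc/dt = kLa*(c* - c) - our, so regressing
    (dc/dt + our) on the driving force (c* - c) through the origin yields
    kLa as the slope.
    """
    # central finite differences for dc/dt at interior time points
    dcdt = [(c[i + 1] - c[i - 1]) / (times[i + 1] - times[i - 1])
            for i in range(1, len(c) - 1)]
    x = [c_star - c[i] for i in range(1, len(c) - 1)]  # driving force
    y = [d + our for d in dcdt]                        # transfer rate
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# synthetic re-aeration curve with known kLa = 0.02 s^-1, our = 0.05 mg/(l*s)
kla_true, our, c_star, c0 = 0.02, 0.05, 8.0, 1.0
c_eq = c_star - our / kla_true  # steady-state dissolved-oxygen level
times = [5.0 * i for i in range(40)]
c = [c_eq + (c0 - c_eq) * math.exp(-kla_true * t) for t in times]

kla_est = fit_kla(times, c, c_star, our)
print(round(kla_est, 3))  # recovers ~0.02
```

in practice dc/dt would come from smoothed probe readings, and our from the slope of the preceding de-aeration phase with the air switched off.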
considerable effort was required to optimise geldanamycin adsorption, by examining the effect of environmental conditions and process system configuration, and geldanamycin desorption, by examining the effect of environmental conditions and investigating selective elution patterns. it is well known that halophilic eubacteria synthesize compatible solutes in order to cope with the high-ionic-strength environments in which they proliferate. these biomolecules are gaining more and more importance as biotechnological tools in a wide array of applications, and recently developed novel bioprocesses have enabled large-scale production of these compounds and therefore commercial distribution. however, there is still interest in optimizing the production process for hydroxyectoine alone, which has been demonstrated to have a superior stabilization capacity. in this research project, we optimized the growth conditions of marinococcus m52 to obtain a high yield of hydroxyectoine. production proved faster in the batch experiments at a higher oxygen supply, even though the stationary phase was comparable in all cases. the mf experiments yielded a final biomass 5-fold that obtained in the corresponding batch process. in addition, monitoring of compatible solute production showed that in the last 24 h of the experiment hydroxyectoine accounted for 80-90% of the total content, accumulating up to 13-15% of the cell dry weight. studies on improving the downstream process for ectoine and hydroxyectoine recovery showed that short permeabilization cycles in water are effective in a temperature range between 45 °c and 55 °c using a 1:4 biomass:water ratio. moreover, we evaluated the ability of ectoine to stabilize lactic acid bacteria during freeze-drying and to protect human cells from heat stress. in particular, the compatible solutes were added to the medium of confluent keratinocytes before subjecting the cells to heat stress or lps insult. 
rt-pcr and western blot analysis demonstrated over-expression of the hsp70b' gene in heat-stressed human keratinocytes treated with ectoine. finally, we demonstrated that even at low concentration (50 mm) these compatible solutes are able to diminish cell death in lactic acid bacteria caused by the lyophilization procedure. among all existing alternative energy sources, biomass-derived bioethanol is especially advantageous since it is clean, sustainable and potentially inexpensive. the production of bioethanol is divided into a pre-treatment step, an enzymatic hydrolysis step and a fermentation step. while fermentation has been practiced by humans for centuries, our knowledge of enzymatic hydrolysis is still limited. nevertheless, it is well accepted that hydrolysis is a synergism among three classes of enzymes: β-glucosidase, endoglucanase and cellobiohydrolase. furthermore, complete and efficient hydrolysis is only achieved when the enzymes are in the correct proportion. the common enzyme proportions have so far been based on the natural enzyme abundance as produced by the microorganism, or have mainly been determined by a trial-and-error approach. in this study, however, we used metabolic control analysis (mca) as a modelling tool to gain fundamental knowledge about enzymatic hydrolysis and to design an optimal enzyme mixture. using gepasi, a free software package, the degree of control of each reaction step, or each enzyme, over the overall hydrolysis can be calculated. our hypothesis is that the amount of each enzyme used for hydrolysis should be proportional to its degree of control. with mca, a significant amount of time, labour and reagents can be saved in developing a hydrolysis enzyme mixture. furthermore, this study should demonstrate the usefulness of mca for understanding enzyme-catalyzed reactions outside the cell. process optimization for fed-batch fermentation of bacillus thuringiensis subsp. 
israelensis arindam chaudhury, gopinathan c department of biotechnology, university of calicut, calicut, kerala 673635 india. e-mail: g achaudhury@umassd.edu (a. chaudhury) bacillus thuringiensis (bt) is a desirable biopesticide because of its low cost and lack of toxicity. the use of bt in developing countries is limited by process complications and the economic non-feasibility of the fermentation process. in the present study, we have shown how regional production, using inexpensive alternatives for carbon and protein sources, can effectively reduce the cost of mass production of bt. when alternative media supplements were used, neither the biomass production nor the larvicidal activity was hampered. in addition, the positive effects of sparged aeration and the indispensable role of yeast extract were also demonstrated. this work provides the first experimental delineation of the sporulation process and delta-endotoxin production. the roles of various buffering agents and additives in increasing biomass production and early sporulation were also investigated. for the production of coenzyme q10 (coq10), an electron carrier in the respiratory chain with antioxidant activity, the effects of oxygen supply and respiratory inhibitors were examined. with a decrease of the dissolved oxygen level from 20 to 5%, the intracellular coq10 content increased about 4-fold, yielding 2 mg per g dry cell weight at a 5% dissolved oxygen level. azide significantly increased the intracellular coq10 content, with the highest value of 5.3 mg per g dry cell weight in the presence of 0.45 mm sodium azide. however, dnp (up to 200 µm) and h2o2 (up to 10 µm) did not affect the intracellular coq10 content, indicating that proton gradient release and oxidative stress do not affect the synthesis of coq10. these results show that restricted electron flux due to limited oxygen supply and the addition of azide increase the intracellular coq10 content. 
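the mca-based mixture-design hypothesis in the cellulose-hydrolysis abstract above — dose each enzyme in proportion to its flux control coefficient — can be sketched numerically. the toy model below is not the gepasi cellulase system; it uses an invented linear reversible mass-action chain, chosen because its steady-state flux has a closed form:

```python
def pathway_flux(e, k, s, p):
    """steady-state flux of a linear reversible mass-action chain.

    for v_i = e_i * k_i * (x_{i-1} - x_i) the enzymes act like resistors
    in series: j = (s - p) / sum(1 / (e_i * k_i)).
    """
    return (s - p) / sum(1.0 / (ei * ki) for ei, ki in zip(e, k))

def control_coefficients(e, k, s, p, rel=1e-6):
    """flux control coefficients c_i = (e_i / j) * dj/de_i by finite differences."""
    j = pathway_flux(e, k, s, p)
    coeffs = []
    for i in range(len(e)):
        bumped = list(e)
        bumped[i] *= 1.0 + rel          # perturb one enzyme amount
        dj = pathway_flux(bumped, k, s, p) - j
        coeffs.append((e[i] / j) * dj / (e[i] * rel))
    return coeffs

# three-enzyme toy pathway; the slow middle step carries most of the control
e, k = [1.0, 1.0, 1.0], [1.0, 0.2, 2.0]
c = control_coefficients(e, k, s=10.0, p=0.0)
print([round(ci, 3) for ci in c])   # -> [0.154, 0.769, 0.077]
mixture = [ci / sum(c) for ci in c]  # proportional enzyme allocation
```

the summation theorem (the coefficients add to 1) is a quick sanity check on the finite-difference estimates; for the real cellulase system the flux would come from a kinetic model solved in gepasi or a comparable tool.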
fourier transform infrared spectroscopy (ft-ir), combined with in situ heat-sterilizable attenuated total reflection (atr) probes, constitutes a promising and versatile technique for on-line monitoring of bioprocesses. ft-ir enables rapid determination of the medium composition without the requirement for sample withdrawal and preparation. in this work the concentration levels of the substrates glycerol and methanol were monitored on-line in a pichia pastoris cultivation. partial least squares (pls) models were used for obtaining the concentration readings. the glycerol concentration measurement proved to be very reliable and reproducible in the glycerol batch phase, although the on-line information on the glycerol concentration was not used for any process control purposes. on the other hand, the availability of on-line information about the methanol concentration proved to be crucial for the successful implementation of the cultivations. the temperature strategy in the methanol fed-batch phase used temperatures as low as 10 °c. in order to keep the metabolic activity at a reasonable level, the culture was therefore pushed towards the maximal substrate consumption rate, rather than being a conventional substrate-limited fed-batch. as a consequence, methanol accumulation occurred on occasion. without on-line information about the concentration, this accumulation, if sustained, would have resulted in poisoning of the culture, either by methanol itself or, perhaps more importantly, by formaldehyde. therefore, it can be concluded that the ft-ir/atr instrument was very useful in this application. jørgensen department of chemical engineering, technical university of denmark, building 229, dk-2800 lyngby, denmark. e-mail: fpd@kt.dtu.dk (f.p. 
davidescu) modeling biochemical reaction networks in microorganisms still represents a challenge owing to the very large number of enzyme-catalyzed biochemical reactions, the complexity of the system and the many feed-forward and feedback regulation mechanisms. the presented approach to modeling such a system is based on the stochastic grey-box modeling framework proposed by kristensen et al. (2003). this methodology consists of parameter estimation based on a prediction error method, followed by different statistical tests for parameter significance and for model (in-)validity. the methodology furthermore allows estimation of unknown functional relations, e.g. kinetic rates. a set of experimental data (zangirolami, 1998) was obtained during continuous cultures of a high enzyme-producing aspergillus oryzae strain. the oxygen concentration was decreased stepwise and the substrate concentration was varied from one experiment to another. a model proposed by agger et al. (1998) is investigated on these data. the primary interest is to develop a physiologically feasible model, also at the low oxygen concentrations often found in industrial practice. microbially produced secondary metabolites such as antibiotics have tremendous economic importance. streptomyces spp. have long been identified as sources of antibiotics and chemotherapeutic compounds, synthesising over 4000 bioactive compounds. geldanamycin is a novel chemotherapeutic agent produced by streptomyces hygroscopicus var. geldanus in submerged fermentation. initial studies have focused on the optimisation of media design through understanding and controlling the metabolic routes of biosynthesis within the cell. geldanamycin is a by-product of the shikimate, or aromatic amino acid, biosynthesis pathway. stimulation of this pathway and concomitant production of geldanamycin is achievable through amino acid control. 
increasing concentrations of the primary carbon source greatly influence biomass generation and product formation, as does the inclusion of cations such as magnesium and calcium in the fermentation media. optimisation of the production media through balancing mineral, nitrogen and carbon sources has significantly improved antibiotic yields in shake-flask cultures, and the development process will be extended to pilot scale through the use of bioreactors. microbiology and biotechnology research group, school of life sciences, napier university, edinburgh, eh10 5dt, scotland. email: m.el-mansi@napier.ac.uk (m. el-mansi) synopsis: during growth of corynebacterium glutamicum on glucose or other glycolytic intermediates, pep carboxylase fulfils an anaplerotic function, as it replenishes intermediary metabolism with biosynthetic precursors that are essential for growth and glutamate production. under these conditions, pep carboxylase plays a central role, which in turn is characterised by a high flux control coefficient, thus rendering this enzyme an ideal target for metabolic interventions. further analysis in silico revealed that any increase in the concentration of the enzyme was accompanied by increases in the flux through the enzyme itself as well as in glutamate formation, presumably as a consequence of sustaining a high intracellular level of α-ketoglutarate, the immediate precursor for glutamate biosynthesis. a combined approach to enhance periplasmic expression of human growth hormone in escherichia coli, using a modified signal peptide from the alpha-amylase gene of bacillus licheniformis s.k. falsafi 1,2 , a. zomorodipour 2 : 1 islamic azad university of jahrom, iran; 2 department of mol genet. national inst for genet eng & biotechnol., tehran, iran. e-mail: soheil falsafi@yahoo.com (s.k. falsafi) the alpha-amylase gene signal peptide, originating from a strain of bacillus licheniformis, was shown to be able to transport its native protein when expressed in e. 
coli. the competence of the fusion protein to be processed and translocated through the inner membrane is highly dependent on the amino acid sequence of the signal peptide. therefore, in order to increase the expression efficiency of the bla signal peptide, we reconstructed the bla signal peptide coding fragment with the following modifications. two rare codons, arg 6 (cgg) and arg 10 (cga), and the codons for leu 15 (tta) and pro 23 (cct) in the signal peptide were substituted with their corresponding e. coli major codons. two other changes, phe 20 (ttc) → leu 20 (ctg) and ala 28 (gcg) → met 28 (atg), were also introduced to increase the processing efficiency. the hgh-expressing plasmid equipped with the modified bla (blaf2) was subjected to further expression analysis in a t7-based expression system. the protein patterns of the induced bacteria indicate a high expression level of the hgh preprotein (hgh::blaf2) followed by efficient transfer of the mature hgh to the e. coli periplasm. phytic acid (ip6) has been demonstrated to have a wide range of health benefits, such as prevention and therapy of various cancers, amelioration of heart disease, and prevention of renal stone formation as well as of complications from diabetes. on the other hand, lower phosphorylated forms of inositol, especially inositol trisphosphate (ip3) and inositol tetrakisphosphate (ip4), are important signal transduction molecules within cells in both the plant and animal kingdoms. it has been hypothesized that at least the anticancer function of ip6 is mediated via these lower inositol phosphates. the diversity and practical unavailability of the individual myo-inositol phosphates preclude their investigation. phytases, which catalyze the sequential hydrolysis of phytate, enable production of defined myo-inositol phosphates in pure form and in sufficient quantities. 
different phytases may produce different positional isomers of myo-inositol phosphates and therefore different biochemical properties. phytases differing in ph optima, substrate specificity, and specificity of hydrolysis have been identified in plants and microorganisms. in this paper the dephosphorylation pathway of the novel phyfauia1 was compared with those of other bacterial phytate-degrading enzymes. preliminary results have shown that phyfauia1 converted ip6 into ip5 (myo-inositol 1,2,3,5,6-pentakisphosphate) and another isomer, which is yet to be elucidated. characterization of the novel β-peptidyl aminopeptidase (bapa) from sphingomonas sp. 3-2w4 that cleaves synthetic β-peptides birgit geueke, hans-peter e. kohler environmental microbiology, eawag, 8600 duebendorf, switzerland. e-mail: birgit.geueke@eawag.ch (b. geueke) non-natural peptides, which are capable of evoking a specific biological response, are currently receiving much attention. oligomers of β-amino acids (β-peptides) represent a group of pharmaceutically interesting peptides because of their very high stability towards enzymatic degradation and their ability to mimic the structure of naturally occurring biologically active peptides. the pharmaceutical potential on the one hand and the high stability on the other aroused interest in studies on the environmental fate and degradation behaviour of this class of compounds. a novel bacterial strain (sphingomonas sp. 3-2w4) capable of degrading short β-peptides was isolated from an enrichment culture. the β-peptide-degrading enzyme was purified and its gene sequence was determined (bapa). the gene encodes a β-peptidyl aminopeptidase (bapa) of 402 amino acids that is synthesized as a preprotein with a signal sequence of 29 amino acids. it belongs to the n-terminal nucleophile (ntn) hydrolase superfamily and is the first peptidase capable of cleaving amide bonds in β-peptides composed of synthetic β-amino acids. 
the biochemical properties of recombinant bapa were investigated with regard to its substrate specificity and possible application in the synthesis of β-peptides. to produce efficient strains of agaricus bitorquis (quel.) saccardo that are resistant to high temperatures p. guler, a. ergene, s. tan kirikkale university, faculty of science and literature, department of biology, yahsihan-kirikkale in this study, the growth of the mycelium and the fructifications of the culture mushroom agaricus bitorquis (quel.) sacc. under high temperature was examined. the spores taken from mushrooms collected from nature were grouped as a, b, c, d and e. the spores were inoculated onto malt extract agar and incubated at 30 °c, and primary mycelium was produced. mycelium discs of 8 mm diameter taken from the primary mycelium were inoculated at the center of malt extract agar plates and incubated at 30, 32, 34, 36 and 38 °c separately. during the incubation period the growth of the mycelium was measured, with the radial growth speed of the mycelium taken as the criterion. the best mycelium growth for all groups was seen at 30 °c. at 36 °c the e-group mycelium, and at 38 °c the other groups' mycelium, did not grow; these temperatures were determined as the thermal lethal points for the groups. spawn was prepared from the mycelium produced at all temperatures, and from these results a spawn calendar was prepared. in this research, the spawn was inoculated into compost with a mixing system and placed in separate culture rooms at temperatures of 30 and 32 °c. at this stage the standard culture mushroom production techniques were used. the harvested mushrooms were inspected morphologically, with cap width, cap tissue thickness, stalk thickness and stalk length taken as criteria. in the study the best growth was seen in the d-group mushrooms, and the tyrosinase activities of this group's mushrooms were measured and plotted. 
introduction: viral contamination of biological products causes many problems in viral diagnostic laboratories, blood transfusion organizations, and for producers of biologics. bovine viral diarrhea virus (bvdv), of the pestivirus genus, is the most common viral contaminant of (fetal) bovine serum (fbs). bvdv is also used as a model for studying hepatitis c virus inactivation, owing to the similarity of their structure and genomes. pulsed uv lights (puvls) have the potential to inactivate known, unknown or re-emerging viruses as well as prions. two puvls, with wavelengths of 355 and 266 nm, were produced by a q-switched nd3+:yag laser in its third and fourth harmonics, respectively. the energy of each pulse was 12.7 mj/cm2 for 355 nm and 35.2 mj/cm2 for 266 nm. bvdv was produced and titrated in the mdbk cell line. mdbk and fbs had already been checked for non-cytopathic or cytopathic pestiviruses using a related ag-elisa kit. bvdv was suspended in solution at a dilution of 1:2 before exposure. a quartz tube, with minimum uv absorption compared with air, was used as the container for the exposed solutions. calculation of the virus titer, 10^4.3 tcid50/ml, was done based on the reed and muench method. bvdv suspended in pbs was exposed to 3.52-352 j/cm2 of puvls at a wavelength of 355 nm and to 1.27-92.25 j/cm2 of puvls at a wavelength of 266 nm. furthermore, bvdv suspended in fbs was exposed to 88, 176, 352 and 704 j/cm2 of puvls at 355 nm and to 6.35, 25.4, 63.5, 127 and 190.5 j/cm2 of puvls at 266 nm. results: the minimum doses for inactivation of bvdv suspended in pbs with the 355 and 266 nm puvls were 352 and 92.25 j/cm2, respectively. likewise, the minimum doses for inactivation of bvdv suspended in fbs with the 355 and 266 nm puvls were 704 and 127 j/cm2, respectively. 
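the reed and muench endpoint method used above for the tcid50 titer is a short interpolation on cumulative infection counts. a minimal sketch; the worked example below uses classic textbook-style counts, not data from this study:

```python
def reed_muench(log10_dilutions, infected, total):
    """estimate log10 tcid50 by the reed-muench endpoint method.

    log10_dilutions: e.g. [-1, -2, ...], ordered least to most dilute
    infected/total: wells (or animals) infected / inoculated per dilution
    """
    n = len(log10_dilutions)
    uninfected = [t - i for i, t in zip(infected, total)]
    # cumulative infected: summed from the most dilute level upward
    cum_inf = [sum(infected[k:]) for k in range(n)]
    # cumulative uninfected: summed from the least dilute level downward
    cum_uninf = [sum(uninfected[:k + 1]) for k in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    # locate the pair of dilutions bracketing 50% infection and interpolate
    for k in range(n - 1):
        if pct[k] >= 50.0 > pct[k + 1]:
            prop_dist = (pct[k] - 50.0) / (pct[k] - pct[k + 1])
            step = log10_dilutions[k + 1] - log10_dilutions[k]
            return log10_dilutions[k] + prop_dist * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# textbook-style example: the endpoint falls between 10^-3 and 10^-4
log_tcid50 = reed_muench([-1, -2, -3, -4, -5, -6],
                         [10, 10, 8, 4, 1, 0],
                         [10, 10, 10, 10, 10, 10])
print(round(-log_tcid50, 2))  # 3.76, i.e. 10^3.76 tcid50 per inoculum volume
```

a per-ml titer, such as the 10^4.3 tcid50/ml reported above, additionally folds in the inoculum volume per well.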
To evaluate the quality of the treated FBS for supporting cell culture, FBS treated with a dose of 190.5 J/cm2 of 266 nm PUVL was used to grow the Vero cell line over 12 successive passages. The viability of cells in the two study groups was identical, and statistical evaluation of the two groups showed no significant difference over the 12 passages. Conclusion: Because inexpensive equipment can be used to produce PUVL capable of handling different volumes of biologics with operational ease, this viral inactivation technique is cost-effective for the relevant industries. The procedure also has the potential to be combined synergistically with other inactivation methods. PUVL offers a new, non-additive and chemically safe alternative for the treatment of FBS to inactivate adventitious viruses while preserving the biological activity necessary for the propagation of cell cultures.

Characterization and gene cloning of γ-resorcylic acid decarboxylase for application to the selective production of γ-resorcylic acid. Y. Iwasaki 1, Y. Ishii 2, K. Kino 1, K. Kirimura 1: 1 Dept. Appl. Chem., Sch. Sci. Eng., Waseda Univ., Tokyo, Japan; 2 BME, ASMeW, Waseda Univ., Tokyo, Japan. E-mail: iwasaki@moegi.waseda.jp (Y. Iwasaki). For the selective production of γ-resorcylic acid (γ-RA, 2,6-dihydroxybenzoic acid) from resorcinol (RE, 1,3-dihydroxybenzene) under mild conditions, we screened various microorganisms and found a reversible γ-RA decarboxylase (RDC), a novel enzyme applicable to the carboxylation of RE to form γ-RA, in the bacterial strain Rhizobium radiobacter WU-0108 (1). RDC catalyzed the decarboxylation of γ-RA and the regioselective carboxylation of RE to form γ-RA, without formation of α-RA or β-RA. The molecular weight of RDC was estimated to be 130 kDa by gel filtration, and that of the subunit was determined to be 34 kDa by SDS-PAGE, suggesting that RDC has a homotetrameric structure.
The gene encoding RDC was sequenced, and a site-directed mutagenesis study revealed that the two histidine residues at positions 164 and 218 of RDC are essential for its catalytic activity. In reactions using E. coli cells highly expressing RDC, 6.7 mM γ-RA was produced from 15 mM RE at 30 °C in 16 h, a yield of 45%. (1) Ishii, Y. et al., 2004. Biochem. Biophys. Res. Commun., 324, 611–620.

Laccase biosynthesis in stirred fermenters. Teresa Jamroz, Stanislaw Ledakowicz, Barbara Sencio, Department of Bioprocess Engineering, Technical University, Lodz PL 90-924, Poland. The industrial applicability of enzymes is closely related to the development of efficient methods for their production. Currently there is significant interest in ligninolytic enzymes, including laccase, an enzyme applied in various industrial branches and environmental processes. Its broad applicability has prompted researchers to develop efficient methods for its commercial production. Laccase (EC 1.10.3.2, p-diphenol oxidase) is produced by cap mushrooms of the class Basidiomycetes, the so-called white-rot fungi, which in natural conditions appear on both living and dead wood. As biotechnological practice has shown, highly efficient strains have low resistance to the destructive factors present in bioreactors. Hence, to preserve the proper morphology and physiological state of the organism, strictly defined culture conditions must be observed. This is especially important for Basidiomycetes, for which submerged culture in a liquid phase is not the natural habitat. Results of studies on laccase production by Cerrena unicolor are discussed. Cultivation of active biomass was carried out in stirred-tank and rotating-disc bioreactors of different volumes (B. Braun, working volume 12 dm3; FAS-01, working volume 3.5 dm3). Experiments in both fermenters were performed at impeller speeds of 200 and 300 min−1 on a modified Lindberg substrate.
Significant differences in the rate and yield of laccase production were observed: almost three times higher laccase activities were obtained in the B. Braun fermenter at a rotational speed of 300 min−1. To retain suspended cells in a bioreactor, a filtration process can be used: the biomass is concentrated by withdrawing cell-free culture broth. If the desired product is dissolved in the broth (extracellular production), the procedure enables continuous harvest in the cell-free permeate. An application test of a filtration system for suspended biomass of Aspergillus niger in submerged single-stage continuous culture is presented in this report. The system is easy to construct and can be exchanged sterilely during cultivation. The culture medium contained (g/dm3): white sugar, 150.0; NH4NO3, 1.5; MgSO4·7H2O, 0.2; KH2PO4, 0.2; FeSO4·7H2O, 0.05. Fermentations were carried out in the lab bioreactor Biomer 10, a standard CSTR (continuous stirred-tank reactor) with a working volume of 5 dm3. A high citric acid concentration in the culture medium (P = 108.9 g/dm3), a high yield of citric acid (YP/S = 72.6%) and a high efficiency coefficient (kef = 71.1) were observed in single-stage continuous culture with biomass retention.

Oxygen transfer regulates benzaldehyde lyase production in E. coli. Pınar Çalık, IBLAB, Department of Chemical Engineering, METU, 06531 Ankara, Turkey. E-mail: pcalik@metu.edu.tr. The effects of the oxygen transfer rate on benzaldehyde lyase (BAL) production by pUC18::bal-carrying recombinant Escherichia coli on a defined medium with 8.0 kg/m3 glucose were investigated in order to fine-tune bioreactor performance, in V = 3 dm3 batch bioreactors at five different conditions, i.e. QO/VR = 0.5 vvm with N = 250, 375, 500 and 750 min−1, and QO/VR = 0.7 vvm with N = 750 min−1.
The concentrations of the product and of the by-product amino acids and organic acids were determined in addition to BAL activities. Medium oxygen transfer rate conditions and uncontrolled pH operation at pH0 = 7.25 were optimum for maximum BAL activity, i.e. 860 U/cm3 at 12 h, and for productivity and selectivity. On the basis of the data, the response of the intracellular bioreaction network of r-E. coli to oxygen transfer conditions was analysed using a mass-flux-balance-based stoichiometric model containing 102 metabolites and 133 reaction fluxes. The results reveal that metabolic reactions are intimately coupled with the oxygen transfer conditions. The oxygen transfer rate showed diverse effects on product formation by influencing metabolic pathways and changing metabolic fluxes. Metabolic flux analysis was helpful in describing the interactions between the cell and the bioreactor by predicting the changes in the fluxes and the rate-controlling step(s) in the metabolic pathways. Knowing the distribution of metabolic fluxes during growth and during BAL and by-product formation therefore provides new information for understanding the physiological characteristics of r-E. coli, reveals important features of the regulation of the bioprocess, and opens new avenues for the successful application of metabolic engineering.

Saprophytic Mycobacterium strains are among the best-known microorganisms applied in the pharmaceutical industry for the production of steroid drugs. The mycobacterial cell wall is a permeation barrier to chemical compounds, including lipophiles. Using isoniazid (INH), an inhibitor of mycolic acid biosynthesis, we were able to demonstrate increased AD production and increased susceptibility to antimicrobial agents. The process of sterol transformation and product accumulation was monitored by gas chromatography.
Isoniazid was shown to intensify β-sitosterol side-chain degradation by Mycobacterium sp. and the accumulation of 4-androstene-3,17-dione (AD) and 1,4-androstadiene-3,17-dione (ADD), which are the starting materials in the biotechnology of medically important steroids. To confirm these results, the sensitivity of the bacteria to antimycobacterial drugs was determined. The minimum inhibitory concentration (MIC50) of rifampicin and erythromycin decreased markedly in the presence of INH. This work was supported by grant no. 3P04C 06923 of the Committee for Scientific Research.

For the purposes of high-throughput screening, and to reduce animal experiments in pharmaceutical biotechnology, biosensor systems are gaining importance. The principle of a biosensor is the combination of cultured cells with a sensor-chip device that allows the monitoring of cellular activity. In contrast to traditional analytics, with a biosensor one can measure on-line the change in cellular activity caused by an effector, as well as the restored activity after withdrawal of the effector (re-native activity). CMOS technology can be used to realise various biological sensor chips, such as adhesion, metabolic and electrophysiological sensor chips. Standard CMOS technology allows high reproducibility of the chips, the integration of electronic components on the chip (which reduces the number of external devices), and the combination of different sensors on one chip. In cooperation with the semiconductor company Micronas and the biotech company Bionas, we have realised different types of CMOS sensor chips to measure the adhesion of a cellular monolayer with interdigitated electrodes (IDEs), metabolic activity via acidification with ion-sensitive field-effect transistors (ISFETs), and spontaneous neuronal network activity with passive palladium electrodes.

Microbial agents have been applied to the different stages of pulp and paper processing.
The work presented describes a study on the effect of applying ligninolytic enzymes, such as a laccase-plus-mediator system, to a variety of different types of pine and eucalyptus pulps, which were subsequently subjected to different ageing processes. Industrial pulps were obtained from different Portuguese pulp and paper companies. The pulps used were (1) unbleached pine pulp from Portucel Tejo; (2) unbleached eucalyptus pulp from Portucel Setúbal; (3) bleached eucalyptus pulp from Portucel Setúbal; and (4) pulp made from recycled paper from Renova S.A. Several types of handsheets were produced at two different grammages, namely 60 and 180 g/m2. The prepared handsheets were subjected to an ageing sequence in three different chambers: ultraviolet radiation (wavelength 280 nm); temperature (19 °C) and moisture (70%); and thick saline fog at a concentration of 1% and a temperature of 35 °C. In order to evaluate the effect of moisture and temperature cycles, two ageing sequences were used for each type of handsheet: in the first, the moisture was varied (60, 80 and 100%) while the temperature was held constant (25 °C); in the second, the temperature was varied (60, 70 and 80 °C) and the moisture was held constant (50%). Following the ageing phase, the handsheets were subjected to several chemical (viscosity and kappa index) and physico-mechanical (colour, tensile breaking strength, stretch and bursting strength) tests in order to characterize the effect of the ageing conditions. Results will be presented describing the effect of the application of the laccase–mediator system on the optical and mechanical properties of the prepared handsheets. Fundação para a Ciência e a Tecnologia, project POCTI/AGR/47309/02.

Aspergillus niger is a filamentous fungus widely used in industry. Its growth as freely dispersed hyphae leads to an increase in medium viscosity and to problems of mass transfer, especially oxygen transfer.
Oxygen acts both as the final electron acceptor in the mitochondrial chain and as a nutrient for the biosynthesis of unsaturated fatty acids and sterols. A lack of oxygen therefore affects the NADH/NAD ratio, ATP production and growth, and has a strong influence on the physiology of the microorganism. In the present study, the metabolic changes of A. niger in response to a lack of oxygen were investigated using oxygen-limited chemostats combined with nitrogen pulses. Under these conditions, the main consequence of a sudden decrease in oxygen availability is an increase in mannitol production. This work showed that mannitol biosynthesis, involving the enzyme mannitol-1-P dehydrogenase, helps reoxidize NADH when the final electron acceptor, oxygen, is limiting.

Investigation of the lipase activity of bacteria isolated from olive mill wastewater. Sevgi Ertugrul 1, Nur Koçberber 1, Gönül Dönmez 1, Serpil Takaç 2: 1 Department of Biology, Faculty of Science, Ankara University, 06100 Beşevler, Ankara, Turkey; 2 Department of Chemical Engineering, Faculty of Engineering, Ankara University, 06100 Tandoğan, Ankara, Turkey. Bacteria that could grow on media containing olive mill wastewater (OMW) were isolated and their lipase production capacities were investigated. The strain possessing the highest lipase activity among 17 strains grown on tributyrin agar medium was identified as Bacillus sp. The effect of pH on the lipase activity of the strain was investigated in tributyrin medium, and pH 6 was found to be optimal. The liquid medium composition was improved by adding different carbon sources and fatty acids to the tributyrin medium (with tributyrin omitted) to increase the enzyme activity. The cultivations were performed at 30 °C and pH 6. The lipase activity of the Bacillus sp. was measured spectrophotometrically through the hydrolysis of p-nitrophenyl palmitate.
Among media containing different compositions of tricapryn, trimyristin, tributyrin, triacetin, Tween 80, glycerol trioleate, glycerol trioctanoate, glycerol tridodecanoate, OMW, glucose and whey, the medium consisting of 20% whey + 1% glycerol trioleate gave the highest lipase activity. Cultivation of the Bacillus sp. in the optimum medium at pH 6 and 30 °C for 64 h resulted in extracellular and intracellular lipase activities of 15 and 168 U/ml, respectively. This study was supported by the Ankara University Biotechnology Institute (project no. 2004-151 and 2005-164).

Under different abiotic stresses, cell growth and metabolic activity are strongly influenced in all types of living organisms. Medium osmolality is one of the factors affecting different biological systems in different ways; even in the same organ of higher eukaryotes, the degree of osmoregulation is highly variable among different cell types. Studying the effect of osmotic stress on mammalian cells is therefore an important subject for each particular cell line. The effect of hyperosmotic pressure on the kinetics of cell growth and metabolic activity of mesenchymal stem cells (MSCs) and of two industrially important cell lines, hybridoma cells and human embryonic kidney (HEK) cells, was investigated in batch cultures at different osmotic pressures in the range from 325 to 500 mOsm kg−1. In the case of MSCs, the maximal specific growth rate (μ) of 0.029 h−1, associated with the highest specific glucose consumption rate (−qgluc) of 0.1129 × 10−6 mol cell−1 h−1, was obtained in medium of 375 mOsm kg−1. In the case of the hybridoma cells, osmotic pressure influenced not only the kinetics of cell growth and metabolism but also monoclonal antibody production; maximal mAb production was obtained for cells cultivated at an osmotic pressure of 375 mOsm kg−1.
Further increases in osmotic pressure resulted in a significant reduction in growth rate as well as in mAb production. HEK cells, on the other hand, were more sensitive to osmotic pressure in an industrially used serum-free medium, and the addition of serum decreased the inhibitory effect of high osmotic pressure on the cells.

Gustavo G. Fonseca 1,2, Andreas K. Gombert 2, Elmar Heinzle 1, Christoph Wittmann 1: 1 Biochemical Engineering, Saarland University, Saarbrücken, Germany; 2 Chemical Engineering, São Paulo University, Brazil. Kluyveromyces marxianus CBS 6556 is a potentially interesting yeast strain characterized by a high capacity for conversion of substrate into biomass. However, this yeast has been only marginally studied so far. We therefore performed a metabolic characterization in batch and chemostat cultures at dilution rates of 0.10, 0.25 and 0.5 h−1. The specific rate of O2 consumption (qO2) increased with dilution rate from 2.87 to 11.09 mmol (g DW)−1 h−1. The respiratory coefficient remained almost stable at around 1.0 for all metabolic states investigated. Even at the dilution rate of 0.5 h−1, which is close to the strain's maximum growth rate of 0.56 h−1, no significant overflow metabolism was observed. The concentration of extracellular metabolites increased with the dilution rate but remained below 6% of the carbon consumed as glucose. All carbon balances closed near 100%, underlining the consistency of the data. In contrast to S. cerevisiae, the respiratory capacity of K. marxianus CBS 6556 is not strongly influenced by the dilution rate in aerobic chemostat or batch cultures, indicating its high potential for biomass-directed applications.

A thermostable L-arabinose isomerase for enzymatic production of D-tagatose. O. Hansen, F. Jørgensen, P. Stougaard, Department of Enzyme Technology, Bioneer A/S, Kogle Allé 2, DK-2970 Hørsholm, Denmark. E-mail: och@bioneer.dk (O.
Hansen). D-Tagatose, an isomer of D-fructose, is a low-calorie bulk sweetener with a sweetness equivalent to sucrose. D-Tagatose has obtained GRAS approval for use as a food ingredient and is currently produced by chemical isomerization of D-galactose, which may readily be obtained by hydrolysis of lactose. Structurally, D-galactose is closely related to L-arabinose, and it has previously been shown that some variants of L-arabinose isomerase (AraA) may catalyze the conversion of D-galactose to D-tagatose in addition to the metabolic conversion of L-arabinose to L-ribulose. We screened a number of bacterial AraA enzymes for their ability to catalyze the isomerization of D-galactose to D-tagatose. The best enzyme was found in the thermophilic bacterium Thermoanaerobacter mathranii (DSM 11426). The araA gene of T. mathranii was cloned, sequenced and expressed in E. coli. Amino acid sequence comparisons of the T. mathranii sequence with other known AraA sequences showed a relatively low sequence identity of about 30%, indicating a distant phylogenetic relationship to the other members of the L-arabinose isomerase group. The T. mathranii enzyme was thermostable, with optimal activity at 65 °C, and it required manganese ions. Unlike other AraA variants, the T. mathranii enzyme showed Km values of the same order of magnitude for L-arabinose and D-galactose, suggesting that it is a versatile isomerase capable of isomerizing structurally related aldoses. The enzyme was immobilized by chemical cross-linking of a crude E. coli cell homogenate, and the immobilized enzyme efficiently converted D-galactose into D-tagatose. We are currently developing an enzymatic method for the industrial production of D-tagatose using the immobilized enzyme.

Agricultural production can be negatively affected by different pest insects (PI), and the use of chemical insecticides (CHI) has been the traditional method for controlling PI for decades.
Nevertheless, the extensive application of CHI has various ecological implications. A viable alternative to CHI in certain agro-systems is the use of entomopathogenic nematodes (EPN) of the genera Steinernema and Heterorhabditis, which are natural pathogens of different PI; the presence of a symbiotic bacterium is necessary for effective entomopathogenic activity to take place. The nematode/bacterium complex does not represent a risk to the environment. Different authors propose that the best alternative for the massive production of EPN is submerged culture in bioreactors; nevertheless, more research is required to obtain truly robust processes. In particular, information regarding the actual hydrodynamics during EPN production and its relation to EPN productivity is scarce, among other aspects. The present study deals with the hydrodynamic characterisation of the production of the EPN Steinernema carpocapsae and its symbiotic bacterium Xenorhabdus nematophilus in submerged monoxenic culture in an internal-loop airlift bioreactor (VL = 4.5 L) using two culture media, one containing whey and the other agave juice, aguamiel (Agave spp.). The process viscosity of the culture broth was determined over time, exhibiting a maximum value of 20 mPa s. Moreover, it was determined that the hydrodynamic conditions were always within the laminar region (Re < 500). Under the experimental conditions tested, it can be inferred that EPN productivity is more sensitive to changes in the culture medium composition than to the prevailing hydrodynamic conditions during the fermentations.

Bacillus thuringiensis is a Gram-positive bacterium used as a biological pest control agent. Moreover, it is able to produce several biologically active molecules, such as bacteriocins and hydrolytic enzymes, among them chitinases, which play a double role: acting as fungicides and improving the insecticidal effect of B.
thuringiensis δ-endotoxins. A newly isolated B. thuringiensis subsp. kurstaki strain, BUPM4, was shown to produce a novel bacteriocin named bacthuricin F4. The highest bacteriocin activity was found in the growth medium and was detected in the late exponential growth phase. Upon purification of bacthuricin F4, the specific activity was increased 100-fold. The bacteriocin was heat-stable up to 70 °C and resistant down to pH 3.0. Its molecular mass, determined by mass spectrometry, was 3160.05 Da. Direct N-terminal sequencing of bacthuricin F4 revealed the sequence DWTXWSXL, which was unique in the databases. Bacthuricin F4 was active against Bacillus species, while it had little or no effect on Gram-negative bacteria. The bacteriocin produced by the B. thuringiensis strain BUPM4 thus meets both criteria of thermostability and stability at low pH, and could be used as a source of bacteriocin active against related Bacillus species harmful to agricultural products and as a food preservative. The other example of an antimicrobial compound produced by B. thuringiensis is chitinase. We describe the selection of the high-chitinase-producing B. thuringiensis strain BUPM255, and the characterization and heterologous expression of a novel chitinase-encoding gene. Cloning and sequencing of the corresponding gene, named chi255, revealed an open reading frame of 2031 bp encoding a protein of 676 amino acid residues. Similarity analyses of both the nucleotide and the amino acid sequences revealed that chi255 is a new chitinase gene, presenting several differences from the published chi genes of B. thuringiensis. Identification of the chitin hydrolysis products resulting from the activity exhibited by Chi255 through heterologous expression in E. coli revealed that this enzyme is a chitobiosidase. The addition of the chi255 sequence to the few sequenced B.
thuringiensis chi genes might contribute to a better investigation of the chitinase structure–function relationship.

Cloning and characterization of S-adenosyl-L-methionine synthetase from Pichia ciferrii DSCC 7-25. Kwon-Hye Ko 1, Gee-Sun Yoon 1, Gi-Sub Choi 1, Joo-Won Suh 2, Yeon-Woo Ryu. S-Adenosyl-L-methionine (SAM) has an important role in DNA methylation and cell signaling. SAM is synthesized from methionine and ATP by SAM synthetase and plays a pivotal role in the primary and secondary metabolism of cells. Recent studies have revealed an effect of SAM on morphological differentiation in both eukaryotes and prokaryotes. P. ciferrii produces large quantities of sphingoid bases. Tetraacetylphytosphingosine (TAPS), a precursor of sphingolipids, could be used for the production of pharmaceuticals and cosmetics. We isolated the SAM synthetase gene from P. ciferrii and cloned it into expression vectors for E. coli and P. pastoris, respectively. A 1.2 kb SAM-S gene fragment was isolated by low-stringency PCR using degenerate primers. The primary sequence deduced from the DNA sequence showed that this gene includes conserved domains similar to those of other well-known SAM synthetases. The SAM synthetase gene was first cloned into the pGEM-T vector and then subcloned into a histidine-tagging system so that the expressed protein could be purified using a metal-chelating resin. Characterization of this enzyme is under way.

Metabolic networks offer a large variety of synthesis pathways starting from cheap substrates and leading to interesting high-value compounds, i.e. metabolites. If an interesting pathway can be disconnected from the remaining metabolic network, the perforated cell or the crude extract could be used for a one-pot multi-step synthesis of the desired compound.
Pathway isolation, achieved by deletion of genes whose products enable side reactions, interferes with the viability of the organism, which is a prerequisite for producing the system of biotransformation (SBT). In this work, a rational, systems-biology-derived approach to the design of an SBT is presented, illustrated for an SBT that allows the production of dihydroxyacetone phosphate. The design procedure comprises three steps. (i) A production pathway is identified in the metabolic network of E. coli; this pathway is complemented by two additional enzymes in order to obtain a fully energy- and redox-balanced production pathway. (ii) An optimal combination of gene knockouts is designed and a suitable growth medium composition is identified, both by a model-based approach: flux balance analysis of a genome-scale metabolic network is used to predict enzyme expression in the wild-type organism on different media, while a mixed-integer optimisation is employed to identify viable mutants. As this approach strongly depends on the quality of the FBA prediction, available regulatory information on the usage of metabolic pathways and thermodynamic constraints were taken into account. (iii) Thermodynamic analysis of the obtained, partially branched SBT reaction cascade revealed the extent of the loss in yield caused by the remaining side reactions. In summary, this systems-biology-driven approach potentially enables the substitution of an elaborate multi-step synthesis process by a one-pot enzyme reaction cascade.

Institute of Industrial Biotechnology, Inha University, Incheon 402-751, Korea. E-mail: leecg@inha.ac.kr (C.-G. Lee). Algal biotechnology is drawing increasing interest due to its potential as a source of valuable pharmaceuticals, pigments, carbohydrates and other fine chemicals. Currently, its application is being extended to the areas of wastewater treatment and agriculture.
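The flux balance analysis used in step (ii) of the SBT design above is, at its core, a linear program: maximize an objective flux subject to steady-state mass balances S·v = 0 and flux bounds. A minimal toy sketch of that idea — the three-reaction network, bounds and objective below are invented for illustration, not part of the E. coli model described:

```python
# Toy flux balance analysis: maximize a "biomass" flux subject to
# steady-state balances S @ v = 0 and capacity bounds on each flux.
import numpy as np
from scipy.optimize import linprog

# One internal metabolite A; reactions: uptake (-> A),
# biomass drain (A ->), byproduct secretion (A ->).
S = np.array([[1.0, -1.0, -1.0]])  # rows = metabolites, cols = fluxes

bounds = [(0, 10),    # uptake capped at 10 (e.g. mmol gDW-1 h-1)
          (0, None),  # biomass flux, unbounded above
          (0, 2)]     # byproduct secretion capped

# linprog minimizes, so negate the biomass flux to maximize it.
c = np.array([0.0, -1.0, 0.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)

v_uptake, v_biomass, v_byproduct = res.x
# All uptake can be routed to biomass here, so v_biomass = 10.
```

Genome-scale analyses follow the same structure with thousands of reactions; the knockout and medium design of step (ii) then become integer decisions layered on top of this LP.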
However, the lack of suitable photobioreactors (PBRs) makes the cost of algally derived compounds higher than that of compounds derived by chemical synthesis, and this has prevented the widespread use of algal cultures. The culture of algae before the late 1940s was apparently restricted to laboratory-scale operations. Experiments on outdoor algal mass production began in the late 1940s, with nearly concurrent development of experimental culture facilities in Germany and the United States. For the next two decades, outdoor mass culture of algae was undoubtedly the hottest topic in algal biotechnology. Recent developments in high-density PBRs enable the production of valuable biologically active compounds by algal mass cultures. However, light is almost always the limiting factor in high-density photobioreactors. Key factors for successful photobioreactors will be discussed, and various photobioreactors will be analyzed and compared in terms of their advantages and disadvantages. New techniques, such as pigment reduction and the application of flashing light and lumostatic operation, will be discussed as possible solutions to overcome light limitation in high-density microalgal cultures.

A quick and reliable method for screening fungal transformants for specific genetic modifications is essential for many molecular applications, for example when one tries to develop a transformation system for a new fungal host. Southern analysis is laborious and time-consuming. Several colony hybridisation methods have been developed for the analysis of large numbers of transformants. Unfortunately, these methods suffer from various disadvantages, such as non-specific binding, limited usability for screening for specific integration, and the fact that the procedures always take a few days (van Zeijl et al., 1998). Recently, methods for PCR-based analysis of fungal transformants have been described. Most of these methods require high-quality DNA.
Many methods for DNA extraction from fungi have been described in the past few years. These methods are often tedious, time-consuming, costly or limited to a small number of samples at a time. Most of the available protocols include the growth of mycelium in liquid culture, followed by freeze-drying or maceration in liquid nitrogen and grinding of the frozen material to break the cell walls (Cassago et al., 2002). Lately, a few methods have been described for isolating DNA from fungi that is suitable exclusively for PCR and appropriate for the simultaneous treatment of a large number of samples. Some methods also describe the direct use of mycelium in the PCR reaction mixture (Cooke et al., 1997). We compared the applicability of a few rapid DNA extraction methods for Myrothecium gramineum and tested the resulting DNA samples for their suitability for PCR applications. Myrothecium gramineum is a filamentous ascomycete used in ongoing research as a new cloning and expression host. Five methods were tested. In four of these, DNA was extracted from mycelium (Goodwin et al., 1993; Aljanabi et al., 1997) or spores (Ferreira et al., 1997; Xu et al., 1995) prior to PCR. A fifth assay used mycelium directly in the PCR reaction mixture. Only this last method proved useful for isolating DNA from Myrothecium suitable for PCR; fragments of up to 2000 bp were amplified.

Cheese whey is a liquid effluent of cheese-making processes. There is increased interest in the economic utilization of the whey produced by the dairy industries, because whey is a pollutant, mainly owing to its lactose content. The goal of this work was to find the most suitable values of several fermentation parameters for lactic acid production from whey by a lactic acid bacterium, Lactobacillus helveticus (ATCC 15009). The effects of lactose content, temperature, pH and supplementation with yeast extract were investigated using response surface methodology.
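Response surface studies of this kind are commonly built on a central composite design: factorial corner points, axial (star) points and replicated center points in coded units. A sketch of generating such a design (the axial distance alpha is an assumed value; for four factors with three center replicates this yields 16 + 8 + 3 = 27 runs):

```python
# Generate a central composite design in coded units (-1/+1 factorial
# levels, +/-alpha axial levels, 0 center).
from itertools import product

def central_composite(k, alpha=2.0, center_reps=3):
    """Coded central composite design for k factors."""
    # Full 2^k factorial corners.
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    # Two axial points per factor, all other factors at the center.
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    # Replicated center points for pure-error estimation.
    center = [[0.0] * k for _ in range(center_reps)]
    return factorial + axial + center

design = central_composite(4)
# len(design) = 16 + 8 + 3 = 27 runs for 4 factors
```

Each coded row is then mapped onto the real factor ranges (here: lactose content, temperature, pH, yeast extract) before running the fermentations.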
A central composite design was used with three center points, giving a total of 27 operational conditions. The region of maximum production is outlined by the following intervals: temperature around 40 °C; lactose concentration between 70 and 85 g/L; yeast extract concentration between 20 and 25 g/L; pH between 7 and 7.5.

Fermentation studies on a continuous fermentative process coupled to a vacuum flash evaporator were carried out in lab-scale equipment. The phases of this work consisted of the assembly and instrumentation of the prototype and the elaboration of a supervisory system coded in LabVIEW 6.1, which allows data acquisition and control through personal computers. The continuous fermentation experiments used Saccharomyces cerevisiae and sugar cane molasses as substrate. Analytical follow-up was done through analysis of total reducing sugars, ethanol, glycerol, dry mass and viable cells. The system worked for months uninterruptedly, producing an alcoholic solution at the condenser of 50 °GL. The fermentation operated with ethanol concentrations of 5 °GL, which is only weakly inhibitory for the process yeast, even when fed with concentrated cane molasses containing up to 330 g/L of sugar. This result meets the initial goal, which was to operate the system at a low ethanol level and to guarantee high productivity even at high sugar concentrations in the feed. The results showed that the productivity of the system was superior to that of the conventional continuous process.

Lactic acid (LA) is a versatile chemical, used as an acidulant, flavoring and preservative in the food, pharmaceutical, leather and textile industries, and for the production of biodegradable poly(lactic acid) (PLA). L(+)-Lactic acid is the only optical isomer suitable for use in the pharmaceutical and food industries, because the human body is adapted to assimilate only this form. In this research, lactic acid production was improved in a 20 L fermentor.
in our experience, among six strains of lactobacillus examined for the production of l(+) lactic acid, lactobacillus casei ssp. casei atcc 39392 was selected as the highest l(+) lactic acid producer. the optimized medium used for lactic acid production contained (per l) 80 g glucose, 50 g whey powder and 20 g corn steep powder. for a homofermentative process, ph 6.0 was found to be optimal. in order to avoid product inhibition, the produced lactic acid was neutralized using calcium hydroxide. the maximum production and productivity of lactic acid in the batch system were 81 g and 1.35 g/lh, but in the fed batch system, after 3 feeds of glucose, production and productivity increased up to 360 g and 4 g/lh. saleh a. mohamed, molecular biology dept., national research centre, cairo, egypt. an extracellular polygalacturonase (pgii) from trichoderma harzianum was purified to homogeneity by two chromatography steps using deae-sepharose and sephacryl s-200. the molecular weight of t. harzianum pgii was 31,000 da by gel filtration and sds-page. pgii had an isoelectric point of 4.5 and an optimum ph of 5.0. pgii was very stable at ph 5.0. the extent of hydrolysis of different pectins by the enzyme decreased with increasing degree of esterification (de). pgii had very low activity toward nonpectic polysaccharides. the apparent km value and kcat value for hydrolyzing polygalacturonic acid (pga) were 3.4 mg/ml and 592 s −1 , respectively. pgii was found to have a temperature optimum at 40 °c and was approximately stable up to 30 °c for 60 min of incubation. all the examined metal cations showed inhibitory effects on the enzyme activity. 1,10-phenanthroline, tween 20, tween 80, triton x-100 and sds had no effect on the enzyme activity. the rate of enzyme-catalyzed reduction of viscosity of solutions of pga or pectin was three times higher than the rate of release of reducing sugars, indicating that the enzyme had an endo-action.
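the reported km and kcat for pga hydrolysis define a michaelis-menten rate law. a small sketch follows; the enzyme concentration e0 is a hypothetical value, not from the abstract:

```python
# michaelis-menten rate for t. harzianum pgii acting on polygalacturonic acid,
# using the reported constants (km = 3.4 mg/ml, kcat = 592 s^-1).
km = 3.4        # mg/ml (reported)
kcat = 592.0    # s^-1 (reported)
e0 = 1e-9       # mol/ml enzyme, assumed for the example

def rate(s):
    """v = kcat * e0 * s / (km + s); substrate s in mg/ml."""
    return kcat * e0 * s / (km + s)

vmax = kcat * e0
# at s = km the rate is half of vmax, by definition of km
print(rate(km) / vmax)
```

this is just the standard rate law the apparent constants imply, not a re-analysis of the authors' data.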
the storage stability of the enzyme in liquid and powder forms was studied; the activity of the powder form was stable for up to one year. these properties make t. harzianum pgii, with its appreciable activity, a potentially novel source of enzyme for food processing. tarek m. mohamed, biochemistry division, chemistry department, tanta university, tanta, egypt. preparation of peroxidase from horseradish, which could be used for commercial applications such as diagnostic kits, was achieved through a simple reproducible method consisting of extraction, ammonium sulphate precipitation, filtration through a non-binding protein filter and lyophilization. the purification method was developed to allow the preparation of 33 mg of enzyme from 1 kg of horseradish roots. one mg of enzyme contains 900 units of peroxidase. this value is similar to that produced by sigma (50-1000 unit mg −1 powder). the final preparation is a salt-free reddish brown powder with free ammonia content less than 0.01 g −1 units. the rz value (a400/a280) of the enzyme, which is a good criterion of purity and heme content, was 2.6. the lyophilized enzyme was stable at −20 °c for at least one year. the liquid form of the enzyme in the presence of 0.1% sodium azide was stable up to 25 days at 4 °c, while it lost most of its activity at room temperature in the same period. the properties of horseradish peroxidase including km, optimal ph and temperature, activation energy, thermal stability and the effect of different compounds were studied. the applicability of this enzyme to the determination of serum glucose was examined. the analysis of glucose in human sera using a kit containing the prepared peroxidase gave results similar to those obtained with a commercial glucose kit.
lactobionic acid production using lactose oxidase: from laboratory to 600 l scale mikkel nordkvist 1 , per munk nielsen 2 , peter budtz 3 , john villadsen 1 : 1 center for microbial biotechnology, technical university of denmark, dk-2800 lyngby, denmark; 2 novozymes a/s, dk-2880 bagsvaerd, denmark; 3 chr. hansen a/s, dk-2970 hørsholm, denmark. e-mail: mnq@biocentrum.dtu.dk (m. nordkvist) currently, lactobionic acid is mainly a high-price specialty product used, e.g. in solutions for organ stabilization. however, lactobionic acid can also be used as a biodegradable cobuilder in detergents, and it has several applications in food technology. with lower production costs it has the potential to become a bulk chemical. the kinetics for the oxidation of lactose to lactobionic acid by a new carbohydrate oxidase was studied in a 1 l bio-reactor with control of ph, temperature, and dissolved oxygen. the byproduct hydrogen peroxide has a negative influence on the lactose oxidase enzyme, and hence additional experiments were made with addition of catalase to remove hydrogen peroxide, thereby also providing extra oxygen. on the basis of the experiments in 1 l scale, experiments were performed in a 600 l reactor equipped with a new system for dispersion of air to supply the necessary oxygen for the oxidation. the aeration system in the large scale reactor was able to supply oxygen sufficiently fast to give the same production rate, at low values of the air flow rate and the energy input, as was obtained in the high-performance laboratory reactor. the non-characterized gene previously proposed as d-tagatose 3-epimerase from agrobacterium tumefaciens was cloned and expressed in escherichia coli. the expressed enzyme was purified by affinity chromatography on histrap hp, desalting chromatography on hiprep 16/60, and gel filtration chromatography on sephacryl s-300 hr with a final specific activity of 8.89 u/mg. 
using maldi-tof-ms, the native protein was estimated to have a molecular mass of 32,600 da and a monomeric structure. the purified enzyme exhibited maximal activity at 50 °c and ph 7.5 without the addition of metal ions and at 60 °c and ph 7.0 with 1.0 mm mn 2+ . among various metal ions, mn 2+ was the most effective divalent cation for d-fructose epimerization activity. the addition of mn 2+ significantly increased the thermal stability and the epimerization activity with other ketoses such as d-psicose, d-tagatose, d-ribulose, d-sorbose, and d-xylulose. the activity, substrate affinity, maximum velocity, and catalytic efficiency (k cat /k m ) of the enzyme for d-psicose were higher than those for d-tagatose, which suggests that the enzyme is not d-tagatose 3-epimerase but d-psicose 3-epimerase. the equilibrium ratio between d-psicose and d-fructose was 37:63 at 60 °c with 1.0 mm mn 2+ . when the enzyme was used at 14 u/ml, d-psicose was produced at 211 g/l from 700 g/l d-fructose containing 1 mm mn 2+ after 120 min, corresponding to a conversion yield of 30.2%. the role of ammonium ions in glucosamine formation during the citric acid fermentation process by aspergillus niger m. papagianni 1 , f. wayman 2 , m. mattey 2 : 1 department of hygiene and technology of food of animal origin, school of veterinary medicine, university of thessaloniki, thessaloniki 54006, greece; 2 department of bioscience, university of strathclyde, glasgow, g1 1xw, uk. e-mail: mp2000@vet.auth.gr (m. papagianni) stoichiometric modeling of the citric acid fermentation process by aspergillus niger, in a 12-l stirred tank reactor, indicates that nh 4 + ions combine with a c-containing metabolite inside the cell to form a nitrogen compound which is then excreted by the mycelium.
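the d-psicose conversion yield quoted above follows directly from the reported concentrations, and the reported equilibrium ratio sets the theoretical ceiling. a quick arithmetic check (the 30.1% computed from the rounded g/l values agrees with the reported 30.2% within rounding):

```python
# conversion yield for the d-psicose 3-epimerase reaction, from the reported
# concentrations: 211 g/l d-psicose produced from 700 g/l d-fructose.
product = 211.0    # g/l (reported)
substrate = 700.0  # g/l (reported)
yield_pct = 100.0 * product / substrate
print(f"conversion yield: {yield_pct:.1f}%")

# the reported equilibrium ratio d-psicose:d-fructose of 37:63 implies a
# maximum attainable yield of 37% at this temperature
equilibrium_yield = 100.0 * 37 / (37 + 63)
print(f"equilibrium limit: {equilibrium_yield:.0f}%")
```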
the close correlation between calculated and experimental profiles indicates that this metabolic process is rapid and takes place before the c-structure of the glucose has been greatly altered by glycolysis or the pentose phosphate pathway. hplc analysis identified glucosamine as the product of this relationship. a clear effect of the medium concentration of nh 4 + on glucosamine formation was observed when fermentations were carried out with optimal and sub-optimal ammonium concentrations. addition of (nh 4 ) 2 so 4 when medium nitrogen was depleted enhanced the formation of new cells from the tips of fragmented hyphae and led to glucosamine accumulation in amounts depending on the pulse concentration. the fungus reacts to excess ammonium by converting it to glucosamine, to be utilized later when a regeneration process takes place with fragmentation of vacuolated hyphae and subsequent regrowth, depending always on the culturing conditions. however, depending on the carbon and ammonium concentration in the medium, glucosamine can be secreted in concentrations as high as 50 g/l. about 1000 microorganisms originating from traditional korean foods were screened for efficient palatinose production. an isolate designated fmb1 was exceptionally efficient in sucrose-palatinose conversion activity. conversion of sucrose into palatinose by fmb1 was much faster than by a reference strain of erwinia rhapontici. fmb1 is a gram negative, facultatively anaerobic, motile, noncapsulate, and straight rod-shaped bacterium producing acid from glucose. based on api and 16s rdna analyses, fmb1 was determined to be enterobacter sp. the maximum conversion of 10% sucrose to palatinose and trehalulose by enterobacter sp. fmb1 was achieved within 6 h. the preliminary dna sequencing result of the gene corresponding to sucrose isomerase of enterobacter sp. fmb1 revealed that it showed 87% similarity to that of klebsiella sp. (??).
within the scope of an r&d project developed in collaboration with portuguese leather tanning industrial partners, a screening of new proteases to be used in the industrial process was performed. a bacillus subtilis strain isolated from alkaline spent purge liquor was shown to be a promising protease producer. microorganism growth was studied for optimisation of temperature, agitation, ph and medium composition for either biomass or protease production. the optimal growth temperature is different for maximum biomass growth (40 °c) and optimal proteolytic activity (43 °c), yielding biomass specific growth rates of 1.6 and 1.4 h −1 , respectively. the achieved proteolytic activities were 5.7 and 7.2 u/ml of protease, respectively. the optimised medium composition (7 g/l beef extract, 4 g/l yeast extract, 5 g/l peptone and 0.4 g/l cacl 2 ) yielded a specific growth rate of 1.5 h −1 and 13.9 ku/l of protease, in shake flask experiments. bioreactor experiments (from 1 to 16 l) with the selected medium were performed at 43 °c in order to test aeration rate (1 and 2 vvm), stirring (300-700 rpm) and ph (uncontrolled, controlled at 7 and 8). the best protease activity was 64 u/ml in a 2 l bioreactor without ph control at 500 rpm and 2 vvm. the proteolytic extract was characterized and compared to commercial bates. results indicate that these proteases can be employed in the purge phase of the industrial leather tanning process. the gram positive bacterium bacillus megaterium is known for its capacity to produce exoenzymes, including amylases, proteinases, and penicillin amidase, at industrial scale. here, we describe the development of various vectors for the production and export of recombinant heterologous proteins employing b. megaterium signal peptides. the target gene can be cloned directly adjacent to the signal peptide coding sequence (bart et al., 2005). this arrangement allows for a correct n-terminal sequence of the mature protein after processing by the signal peptidase sipm.
using this newly developed protein production and export system, lactobacillus reuteri 121 levansucrase (van hijum et al., 2001) was secreted in significant amounts (∼4 mg/l) into the growth medium. fusion of the recombinant levansucrase to affinity tags allowed one-step purification of the recombinant protein from the growth medium. however, fused peptide tags resulted in a decreased secretion of the fusion protein. 1.4 mg of his 6 -tagged levansucrase were purified per litre of culture. the system was further enhanced via coexpression of a gene for the signal peptidase sipm (malten et al., 2005a) and deletion of the gene for the extracellular protease nprm. the newly developed tools allow for various strategies of integrated high level production, export and purification of heterologous proteins in b. megaterium. methods for high-throughput screening of secreted enzymes are under development. the determined sequence of the b. megaterium genome, studies using high-cell density cultivations (malten et al., 2005b) and proteome data from batch fermentations implicate new targets for directed genetic optimization of b. megaterium production and secretion strains. novel strong and inducible promoters are currently under investigation. toru matsui 1 , naoya shinzato 1 , hisashi saeki 2 , hitoshi matsuda 2 : 1 center of molecular biosciences, university of the ryukyus, okinawa 903-0213, japan; 2 japan energy co., japan. e-mail: tmatsui@comb.u-ryukyu.ac.jp (t. matsui) optically active epoxides are considered potential intermediates for chiral drug synthesis. although s-styrene oxide (so) has been extensively investigated using styrene monooxygenase from pseudomonas sp., microbial production of r-so with high enantiomeric excess has hardly been examined. in this study, r-so producing bacteria were screened from styrene using various alkene-assimilating bacteria. r-so with the highest ee (ca.
100% ee) was obtained using ethene-utilizing bacteria, identified as mycobacterium sp., while relatively lower ee, at around 70%, was obtained when using propene-utilizing bacteria. the alkene monooxygenase gene homologue sequence amplified from the genomic dna revealed a significant similarity to that of etnabc. these bacteria also showed stereoselective degradation of racemic so, suggesting that the produced so might be further stereoselectively degraded to increase the ee. the ethene-utilizing bacteria produced not only r-so but also s-epichlorohydrin at high ee when using allyl chloride as the substrate. this research was supported by nagase science and technology foundation. the secretion efficiency of the escherichia coli sec pathway is dependent on the growth phase but not on protein size f.j.m. mergulhão, g.a. monteiro centro de engenharia biológica e química, instituto superior técnico, av. rovisco pais, 1049-001 lisbon, portugal. e-mail: filipem@alfa.ist.utl.pt (f. mergulhão) the secretion efficiency of the escherichia coli sec pathway was evaluated through the expression of green fluorescent protein and human proinsulin fusion proteins. translocation to the periplasm is dependent on the growth phase of the bacterial culture and the highest secretion efficiency is attained in mid-exponential phase. secretion performance is independent of protein size (17-42 kda) and even when the amino acid composition of the secreted proteins is very similar, the amino acid distribution within the protein can affect translocation. in silico prediction analysis suggests that proteins that are prone to form α-helix structures are more efficiently translocated. culture medium composition plays an important role in secretion performance, with the highest secretion results being obtained in minimal medium. streptokinase is a common fibrinolytic drug that has been used in thrombolytic therapy for a long time. compared with other thrombolytic drugs like tpa, it has a lot of advantages.
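the enantiomeric excess (ee) values quoted for the styrene oxide products are computed from the relative amounts of the two enantiomers. a minimal sketch with hypothetical enantiomer amounts:

```python
# enantiomeric excess as used for the styrene oxide products:
# ee = |[r] - [s]| / ([r] + [s]) * 100
def ee(major, minor):
    """percent enantiomeric excess from the two enantiomer amounts."""
    return 100.0 * abs(major - minor) / (major + minor)

# hypothetical numbers: a 70% ee product corresponds to an 85:15 mixture
print(ee(85.0, 15.0))   # 70.0
# a nearly pure product, such as the ca. 100% ee r-so from the ethene utilizer
print(ee(99.9, 0.1))
```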
in the present research, dna was extracted from s. equisimilis h46a for the first time in iran. the streptokinase gene was amplified by using two forward primers and one reverse primer. a common restriction enzyme, bamhi, was used for cloning. both ends of the pcr products (full length: 1323 bp and mature section: 1245 bp) and the restriction site on the mcs of the pqe-30 vector were digested. in this study, the pqe-30 expression vector, with high level expression ability, was used for production of recombinant fusion streptokinase, simplifying the purification by employing the affinity-metal chromatography method. in addition, the cloning results were controlled by double digestion and sequencing. takesono oxidation of short-chain iso-alkanes was studied with propane-grown resting mycelia of scedosporium sp. a-4. isobutane was oxidized to tert-butanol, but not to isobutanol. isobutanol was used for growth, but both isobutene and tert-butanol were not used for growth. isopentane was oxidized to 3-methyl-1-butanol, 2-methyl-2-butanol, and 3-methyl-2-butanol but not to 2-methyl-1-butanol. 2-methylpentane was oxidized to 4-methyl-1-pentanol, 2-methyl-2-pentanol, and 4-methyl-2-pentanol but not to 2-methyl-1-pentanol or 2-methyl-3-pentanol. 3-methylpentane was not oxidized. oxidation of branched alcohols was also studied. application of nadph-dependent 2,5-diketo-gluconic acid reductase for production of l-ascorbic acid claudia pacher 1,2 : 1 division of food biotechnology, department of food sciences and technology, boku, university of natural resources and applied life sciences, muthgasse 18, a-1190 vienna, austria; 2 research centre applied biocatalysis, petersgasse 14, a-8010 graz, austria. e-mail: claudia.pacher@boku.ac.at ascorbic acid is an organic acid with various applications in the food and pharmaceutical industries.
at present, the majority of commercially manufactured vitamin c is synthesized via the reichstein process, which is highly energy-consuming, involves considerable quantities of organic solvents and gives an overall yield of about 50%. therefore, during the past decades much research was done to develop biotechnological alternatives for the synthesis of reichstein intermediates by enzymatic or fermentative means, which show some advantages regarding costs and environmental friendliness. one of the fermentation routes runs via 2,5-diketo-d-gluconic acid (2,5-kdg), produced by pectobacterium (erwinia) cypripedii. this compound is reduced by a nadph-dependent 2,5-diketogluconic acid reductase (dkr) from corynebacterium glutamicum to the key intermediate 2-keto-l-gulonic acid (2-klg) before chemical rearrangement leads to the final product. for economical reasons we wanted to express dkr heterologously. based on our long term experience with coenzyme regeneration, we also wanted to perform the reaction in homogeneous solution. the spent coenzyme of nadph-dependent dkr was regenerated by a second isolated nadp-dependent enzyme such as glucose dehydrogenase. we describe here the recombinant production, purification and characterization of dkr and the results of enzymatic 2-klg formation using the recombinant enzyme in a homogeneous system with conjugated coenzyme regeneration. grp78, residing in the endoplasmic reticulum (er), functions as a molecular chaperone by associating transiently with incipient proteins as they traverse the er and aiding in their folding and transport. furthermore, the protein can also be induced under various stress conditions such as glucose starvation, inhibition of protein glycosylation by tunicamycin, blockage of vesicular trafficking by brefeldin a and er-calcium-atpase pump inhibition by thapsigargin.
thus, substances that directly down- and up-regulate grp78 transcription are expected to be useful for the treatment of cancer and alzheimer's disease, respectively. in the course of our screening program to obtain substances which regulate grp78 expression, we first constructed an assay system monitored by the expression of a reporter gene. hela cells transformed with the luciferase gene under the control of the grp78 promoter, designated hela 78c6 cells, respond sensitively with luciferase activity to grp78 induction by er stress such as treatment with tunicamycin. by using this screening system, we isolated pyrisulfoxin as an up-regulator of grp78, and valinomycin, citreoviridin and alternariol as down-regulators. detailed studies on other biological activities are now under way. pdh is an enzyme that was described only several years ago in a number of ecologically related litter-decomposing fungi (agaricales, gasteromycetales). it catalyzes the c-3 and/or c-2 oxidation of several aldopyranoses to the respective keto sugar derivatives. pdh shows a very broad substrate range, oxidizing almost all major sugar components of wood polysaccharides, and is implicated to play a role in lignocellulose degradation. agaricus bisporus, the white button mushroom, is an economically significant agricultural crop. the cultivation, which is done by solid-substrate fermentation on straw- and hay-based composted substrate, is sometimes seen as one of only a few economically feasible methods for bioconversion of lignocellulosic agricultural waste material. deeper insight into the physiological role of pdh may provide help for mushroom growers to increase yield, improve quality or make new sources of raw materials utilizable. we amplified a fragment of the pdh gene with degenerate primers derived from internal peptide sequences. the screening of a genomic library led to the isolation of the pdh gene.
subsequently we amplified a cdna clone by rt-pcr and investigated the transcriptional regulation by different carbon sources on a defined minimal medium. optimization of monoclonal antibody production processes with simulation and scheduling tools demetri petrides, charles siletti intelligen inc., scotch plains, nj 07076, usa. e-mail: dpetrides@intelligen.com (d. petrides) this presentation will review the state of the art in batch process simulation and scheduling tools and their applications in the design and debottlenecking of integrated biopharmaceutical processes. a systematic methodology will be presented for identifying and eliminating size, time, and throughput bottlenecks that limit production in single and multi-product facilities. the methodology will be illustrated with an industrial case study dealing with the optimization of a multi-product facility that produces therapeutic monoclonal antibodies (mabs). mab processes are characterized by a long bio-reaction time (e.g., 1.5-2 weeks for fed-batch operation and 1-2 months for perfusion operation). the cycle time for processing a lot in the recovery and purification train typically takes 3-4 days. consequently, one way of increasing throughput is by installing extra bioreactors that operate in staggered mode and utilize the same recovery train. the result is that multiple batches may be at different stages of completion at any given time. since cleaning equipment (e.g., cip skids) and buffer preparation and holding tanks are shared by multiple steps across many batches, this type of operation leads to time/scheduling bottlenecks that constrain the cycle time and the throughput of a process. the problem becomes more challenging in the context of multi-product facilities and when constraints imposed by the limited availability of resources are considered. our methodology and its computer implementation will illustrate how to systematically identify and eliminate such bottlenecks. 
the industrial case study will provide a real world example of the methodology. application of two stage continuous cultures of aspergillus niger for citric acid biosynthesis jerzy j. pietkiewicz, malgorzata janczar, wladyslaw lesniak food biotechnology department, university of economics, wroclaw 53-345, poland. e-mail: jerzy.pietkiewicz@ae.wroc.pl (j.j. pietkiewicz) the aim of the work was an application test of submerged two stage single stream continuous cultures of aspergillus niger for citric acid production from sucrose. studies were carried out in lab fermenters with a working volume of 5 dm 3 . the bioreactors were standard cstrs. in two stage continuous cultures (tscc) high mycelium growth and high citric acid production were observed in the first bioreactor. the biomass growth rate was almost four times lower and the citric acid production rate about three times lower in the second bioreactor. studies on the influence of dilution rate in the range from 0.0098 to 0.0230 dm 3 /(dm 3 h) on the course and efficiency of tscc showed that the highest citric acid yield (y p/s = 86.3%), a high volumetric rate of its production (r pc = 0.958 g/(dm 3 h)) and the highest biosynthesis efficiency coefficient (k ef = 82.7) were obtained with dilution rate d = 0.0148 dm 3 /(dm 3 h). there was also a high citric acid concentration (p = 129.5 g/dm 3 ) and a low residual sugar concentration (s k = 6.9 g/dm 3 ) in the medium flowing out of the second bioreactor in those cultures. the beta-galactosidases in commercial use are of different origins, and yeast and fungal lactases present the greatest interest. the yeast lactases present neutral ph optima and are suitable for the hydrolysis of lactose in milk. in this work, the aim was to study the influence of aeration on the production of beta-galactosidase in batch fermentations with kluyveromyces marxianus atcc 46537. the medium composition for culture was as follows (in g/l): lactose pa 50, yeast extract 5, (nh 4 ) 2 so 4 4 and kh 2 po 4 2.
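the volumetric productivity reported for the two-stage citric acid culture is consistent with the standard chemostat relation r = d·p once the two-stage working volume is accounted for. the sketch below assumes d was defined per stage and the productivity per total volume of both stages; this is an interpretation, not stated in the abstract:

```python
# chemostat relations applied to the reported two-stage continuous culture.
# reported values: d = 0.0148 h^-1, outlet citric acid p = 129.5 g/dm3.
d = 0.0148     # h^-1, dilution rate per stage (reported)
p = 129.5      # g/dm3 product in the outflow (reported)
n_stages = 2   # two equal-volume bioreactors in series

# volumetric productivity referred to the total working volume of both
# stages: r = d * p / n_stages (assumed definition of d, see lead-in)
r_pc = d * p / n_stages
print(f"r_pc = {r_pc:.3f} g/(dm3 h)")
```

under this assumption the computed value reproduces the reported r_pc = 0.958 g/(dm 3 h).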
the fermentations were carried out at 30 °c, ph 5.5, at 200 rpm, starting with an initial cellular concentration of 1 × 10 7 cells/ml, with different aeration rates. the cells were disrupted with chloroform 2% (v/v) as solvent. the enzymatic activity was determined as the initial rate of lactose hydrolysis under defined conditions. the studies have revealed the importance of aeration for the growth and beta-galactosidase synthesis of kluyveromyces marxianus. the enzymatic activity of the medium fermented with 0.5 vvm was 50% higher than that without aeration. furthermore, the cellular growth was faster in the aerobic fermentation than in the anaerobic one. aeration thus plays an important role in enzyme synthesis and cellular growth; however, the results showed that increasing the aeration rate from 0.5 to 1.5 vvm implied an increase neither in cellular growth nor in the enzymatic activity reached. the lactose present in milk has a solubility of only 20% at 30 °c, and a high percentage of the world population presents intolerance to this sugar, due to low or absent activity of the lactase enzyme in the organism. to minimize such problems, the most viable alternative for nourishing dairy products is the enzymatic hydrolysis of milk, although it is an expensive process due to the high cost of the beta-galactosidase enzyme. an alternative that has been greatly studied is the immobilization of this enzyme, originating from many different sources. there are several immobilization procedures for this enzyme; however, a procedure considered ideal has not yet been obtained. the objective of this work was to study the immobilization process of beta-galactosidase from aspergillus oryzae in sodium alginate with commercial gelatin. in the immobilization process, the influence of glutaraldehyde at 1, 3 and 5% was studied, in the presence of commercial gelatin at a concentration of 2% in the immobilization medium.
the activities of the immobilized enzymes were obtained in a stirred micro-reactor, at a temperature of 30 °c, ph 4.5, with a 50 g l −1 lactose solution in acetate buffer. the experimental results showed that the immobilized biocatalyst presenting the largest initial activity was the one obtained in the immobilization medium containing 5% glutaraldehyde. after 20 daily determinations of enzymatic activities, a fall of 30, 40 and 24% was verified in the enzymatic activities for the immobilized biocatalysts using glutaraldehyde at 1, 3 and 5%, respectively; however, in all cases, the enzymatic activity reached half of its initial value after 10 determinations. hydrolysis of sucrose by immobilized beta-fructofuranosidase in silica eloízio júlio ribeiro, ubirajara coutinho filho faculdade de engenharia química, universidade federal de uberlândia, uberlândia 38400-902, brazil. e-mail: ejribeiro@ufu.br (e.j. ribeiro) invertase, known as beta-fructofuranosidase (ec 3.2.1.26), plays a catalytic role in the conversion of sucrose into glucose and fructose. it is largely used in the food industry to prevent the crystallization of sucrose in sugar mixtures and can be used in enzyme reactors for hydrolysis of sucrose. the objective of this work was to study the kinetics of sucrose hydrolysis by immobilized beta-fructofuranosidase in a continuous recirculation reactor, evaluate the enzyme stability and determine the effective half-life of the immobilized enzyme. invertase was covalently immobilized on silanized controlled pore silica. nonlinear fitting was used to determine the kinetic parameters for the substrate and product inhibition observed in the enzymatic hydrolysis of sucrose. the kinetic studies of immobilized invertase were carried out in a continuous recirculating reactor. the half-time of enzyme inactivation (t 1/2 ) was calculated from the initial rates of the remaining enzyme activity.
the model of inhibition by substrate and product adequately represented the enzymatic hydrolysis. the fructose effect was competitive inhibition (k f = 3.1022 × 10 −4 mol/ml) and the glucose effect was noncompetitive inhibition (k g = 2.2521 × 10 −4 mol/ml). the effective diffusivity of sucrose into the support was shown to be the same as for sucrose in dilute solution (0.75 × 10 −5 cm 2 /s at 40 °c). the half-time of enzyme inactivation (t 1/2 ) was 1656 h. controlled pore silica proved to be an excellent immobilizing support. the immobilized invertase was very stable at temperatures lower than 50 °c. the intrinsic parameters (k i , k f , k g and v m ) were shown to be similar to the apparent values. the low permeability of mycobacterial cell wall envelopes is a result of the unique composition and organization of the cell wall lipids. the permeability of the mycobacterial cell wall can be changed by means of partial disintegration of its components. the aim of the present work was to characterize the changes in the cell wall skeleton (cws) and non-covalently bound free lipids under the influence of isoniazid, the inhibitor of mycolic acid biosynthesis. fatty acid methyl esters (fames) and mycolic acid methyl esters (mames) obtained from all tested preparations were analyzed by gc/ms analysis. the analysis of free lipids and cws revealed distinct changes in the composition of the fractions obtained from the cells exposed to the action of isoniazid. the changes in the quantity of fatty acids in the inh-treated cells indicate that inh also interferes with the synthesis of lipidic compounds of the mycobacterial cell wall. the decreased amount of covalently bound mycolic acids in the cws is responsible for the enhanced penetration of hydrophobic compounds through the cell wall. this work was supported by grant nr 3p04c 06923 of the committee for scientific research.
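the invertase inhibition model and half-life quoted above can be written out explicitly. in the sketch below only kf, kg and t 1/2 come from the abstract; km, vm and the concentrations are hypothetical illustration values:

```python
import math

# rate law for immobilized invertase with competitive inhibition by fructose
# and noncompetitive inhibition by glucose, using the reported constants.
kf = 3.1022e-4   # mol/ml, competitive constant for fructose (reported)
kg = 2.2521e-4   # mol/ml, noncompetitive constant for glucose (reported)
km = 5.0e-5      # mol/ml, assumed for the example
vm = 1.0         # mol/(ml h), assumed for the example

def rate(s, f, g):
    """competitive term scales km; noncompetitive term scales the whole rate."""
    return vm * s / ((km * (1.0 + f / kf) + s) * (1.0 + g / kg))

# with no inhibitors present the expression reduces to michaelis-menten
v0 = rate(1e-4, 0.0, 0.0)

# first-order enzyme decay: the reported half-life of 1656 h corresponds to
# an inactivation constant kd = ln(2) / t_half
kd = math.log(2) / 1656.0
print(v0, kd)
```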
barbara sencio, teresa jamroz, stanislaw ledakowicz, department of bioprocess engineering, technical university, lodz, poland. the enzyme laccase (ec 1.10.3.2, p-diphenol oxidase) is a subject of research in many centres dealing with improvement of bioprocesses with the use of different white rot fungi species. most strains that produce this enzyme in vitro require inducers initiating its biosynthesis. when cerrena unicolor was applied, it was found that the strain produced laccase very efficiently without additional toxic compounds. to specify optimum conditions for laccase production in a submerged culture, research was undertaken to obtain the most efficient c. unicolor inoculum. the goal of this research was to determine the effect of the form and incubation time of the inoculum on the enzymatic activity of the laccase producing strain. the experimental inoculum was the mycelium prepared on a solid and a liquid substrate. based on the results obtained, it was found that the laccase yield was the highest in the cultures where the mycelium was grown on a solid substrate. maximum activity of the c. unicolor strain was achieved on the 16th day of culture, and the amount of laccase produced was higher by ca. 30% as compared to the mycelium obtained from the liquid substrate. results of these experiments were used to continue studies on the impact of inoculum age. experiments were carried out using an inoculum incubated for 1-3 weeks at a temperature of 30 °c in a certomat bs1 shaker at 110 rpm. the best results in the c. unicolor strain culture were achieved using a 7-day-old inoculum. effect of alcohol treatment on hydrolytic activity of candida rugosa lipase serpil takaç, a.
Ezgi Ünlü, Department of Chemical Engineering, Institute of Biotechnology, Ankara University, 06100 Tandogan, Ankara, Turkey. Candida rugosa lipase (CRL) was treated with 20, 40, and 60% concentrations of methanol (M), ethanol (E), 2-propanol (2P) and 1-butanol (1B) to investigate the changes in its hydrolytic activity toward p-nitrophenylacetate. The treatment included the following steps at +4 °C: (i) stirring CRL with phosphate buffer for 24 h; (ii) treating the solutions with alcohols; (iii) stirring the treated CRL for 24 h; (iv) centrifugation at 10,000 rpm for 30 min; (v) dialysis of the supernatant against bidistilled water for 39 h. The activity of the CRLs was followed for 120 h at 37 °C in the presence and absence of isooctane. The enzyme activity was measured spectrophotometrically and the protein concentration was measured by Lowry's method. It was found that the recovered protein did not change considerably with the type of alcohol, but decreased with alcohol concentration. In the presence of isooctane, specific activities of the untreated and treated CRLs increased compared with those obtained in the absence of isooctane. 1B-CRLs and E-CRLs showed higher activities than M-CRLs and 2P-CRLs, whereas untreated CRL exhibited higher activity than M-CRLs, E-CRLs and 2P-CRLs. The highest and lowest activities were obtained with 20% 1B-CRL and 60% 2P-CRL, respectively. The changes occurring in the structure of CRL after treatment were investigated by electrophoretic analysis. This study was supported by Ankara University Biotechnology Institute (project no: 89). Different genera, species and strains of microorganisms were found to possess different cryoresistance. Optimal ways have been developed for the cryopreservation of antibiotic-producing microorganisms and of microorganisms used in the food industry, agriculture and veterinary medicine.
It was demonstrated that non-lethal damage can occur in cryopreserved microorganisms after their return to physiological culture conditions, manifested in fragmentation of Streptomyces hyphae and of cyanobacterial and streptococcal chains (with a resulting increase in the number of colony-forming units), a reversible inhibition of proliferative activity in Bacillus thuringiensis and lactic streptococci, and stimulation of enzyme processes and antibiotic production. Non-lethal damage is repaired during culturing of the microorganism in the first passage. The cause of non-lethal damage is a reversible inhibition of the biosynthesis of proteins and nucleic acids and of respiratory activity. The repair of non-lethal damage is accompanied by the production of stress proteins different from heat shock proteins. Effect of pH in the 2-propanol treatment of Candida rugosa lipase on its enantioselectivity in the hydrolysis of racemic naproxen methyl ester. Serpil Takaç, A. Ezgi Ünlü, Department of Chemical Engineering, Ankara University, 06100 Tandogan, Ankara, Turkey. Candida rugosa lipase (CRL) was treated with 2-propanol (2P) at pH values of 1.5, 4, 6, 7.5, 9 and 12 to investigate the changes in its enantioselectivity in the hydrolysis of racemic naproxen methyl ester. The treatment included the following steps at +4 °C: (i) stirring CRL with different buffer solutions to maintain the desired pH values for 24 h; (ii) treating the solutions with 40% 2P; (iii) stirring the treated CRLs for 24 h; (iv) centrifugation at 10,000 rpm for 30 min; (v) dialysis of the supernatant against bidistilled water for 39 h. Hydrolyses of racemic naproxen methyl ester to form S-naproxen were performed in shaking flasks at 200 rpm and 37 °C for 192 h in an isooctane-phosphate buffer biphasic system using treated CRLs with an activity of 7.75 U. The concentrations of the enantiomers of naproxen methyl ester and naproxen were determined by HPLC.
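In kinetic resolutions of this kind, conversion and the enantiomeric ratio E are commonly derived from the measured enantiomeric excesses of substrate (ee_s) and product (ee_p). A minimal sketch of the standard relations (the Chen et al. formalism; function names are illustrative, not from the abstract):

```python
import math

def conversion(ee_s, ee_p):
    """Conversion c estimated from substrate and product
    enantiomeric excesses (both as fractions, 0-1)."""
    return ee_s / (ee_s + ee_p)

def enantiomeric_ratio(ee_s, ee_p):
    """Enantiomeric ratio E from ee_s and ee_p (Chen et al.)."""
    c = conversion(ee_s, ee_p)
    return (math.log((1 - c) * (1 - ee_s)) /
            math.log((1 - c) * (1 + ee_s)))
```

For example, ee_s = 0.39 and ee_p = 0.98 give c ≈ 0.28; note that E values computed this way depend on which measured pair is used and may differ somewhat from tabulated figures.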
It was found that the treatment pH plays an important role in the enantioselectivity and conversion. The highest enantiomeric excess for the substrate and for the product, enantiomeric ratio, and conversion were obtained with CRL treated at pH 1.5, as 39, 98, 181 and 29%, respectively. These values were followed by those of 2P-treated CRL at pH 12: 35, 98, 121 and 27%. However, lower enantiomeric excesses, conversions and enantiomeric ratios were obtained at treatment pH values between 1.5 and 12. The effects of fatty acids, nitrogen (as NH4NO3), phosphorus (as KH2PO4), pH value, manganese (Mn²⁺), iron (Fe²⁺) and methanol concentration on growth and production of oxalic acid from post-refining fatty acids by a mutant of Aspergillus niger XP in submerged fermentation experiments were studied. Of the A. niger strains screened, A. niger XP was identified as the best oxalate producer on lipids. The influence of pH on oxalic acid formation shows that the maximum production rate and higher product concentrations are observed at pH ranging from 4 to 5. With a medium containing 50 g fatty acids/L, production reached a maximum of 68 g oxalic acid/L after 7 days. The addition of 1.5% (w/v) methanol to the seed culture increased the product yield and concentration of oxalic acid but decreased the amount of an undesired by-product (citric acid). Under this condition, the maximum oxalate productivity (14-18 g/(L day)) was maintained for 2-4 days of fermentation. Other experimental results show that supplementation of the production medium with manganese and iron enhances oxalate production. Fatty acids proved to be a very good substrate for oxalic acid production by A. niger XP, giving excellent yields and productivity at low pH. The results are very promising as they may lead to cheap alternative processes for oxalic acid production from renewable lipid resources. Department of Food Engineering, Middle East Technical University, Ankara 06531, Turkey.
E-mail: banuy@metu.edu.tr (B. Yalcindag). Laccase (EC 1.10.3.2, p-benzenediol:oxygen oxidoreductase), an enzyme belonging to the multi-copper oxidase family, catalyzes the oxidation of a broad variety of polyphenols with a preference for p-isomers, which are converted to p-quinones. Fungi generally contain several laccases, which have been found to be involved in delignification, melanin synthesis and pathogenesis. Laccase also has important potential applications, especially in the food and chemical industries. After the Aspergillus fumigatus genome data were released, research on the functional analysis of laccases was initiated in our laboratory. The laccase genes of Aspergillus nidulans, yA and tilA, and the laccase and multi-copper oxidase genes of Aspergillus fumigatus, abr2 and abr1, were used to analyze the A. fumigatus genome for laccases. This sequence analysis yielded 4 probable laccase genes, one of which was the previously cloned abr2 gene. In this study, one of these genes (aflac1) was further characterized. After sequence alignment and characterization studies, aflac1 was predicted to be 2128 bp long with six introns, encoding a 606-amino-acid protein; the predicted protein sequence showed 63% homology with the dihydrogeodin oxidase of Aspergillus terreus and 38% homology to laccase 2 of Botryotinia fuckeliana. The aflac1 gene is found within an uncharacterized gene cluster containing genes with homology to glutathione-S-transferase, polyketide synthase, O-methyl transferase, and others. The information obtained from sequence analysis was used to design PCR primers to amplify the aflac1 gene, followed by cloning into the pAN52-1 and pAN52-4 vectors for heterologous expression in Aspergillus sojae. In addition, the aflac1 cDNA will be cloned by RT-PCR and expressed in Escherichia coli. Furthermore, gene silencing studies will be performed to elucidate the function of aflac1 and the associated gene cluster.
Stability of the growth rate of photosynthetic cells is an important factor in the design of effective photobioreactors, especially in long-term operations. In our experiments, in order to keep operational parameters almost constant, a semi-continuous culture method was developed. In this method, part of the culture broth containing grown cells was repeatedly replaced by fresh medium at a predetermined time interval. The replacement of broth with fresh medium could keep the cell concentration, broth volume and light intensity distribution constant at their initial values throughout the cultivation. It was shown that under one-side illumination with a halogen lamp, if the ratio of the light intensity at the front side of a flat-plate photobioreactor to that at the rear side was kept lower than 4, the growth rate was sustained at a constant level. However, at higher ratios growth was followed by a rapid decrease after 5-6 h. Supplemental illumination with a fluorescent lamp from the rear side of the flat-plate photobioreactor could sustain an almost stable growth rate. Besides the illumination conditions, increased ferrous ion concentrations in the medium could maintain the stability of the growth rate even under unstable illumination conditions, although the amount of ferrous ion consumed was slight. Glutathione (GSH) plays a pivotal role in protecting cells from by-products generated by oxidative metabolism. These characteristics make this active tripeptide an important drug for the treatment of liver diseases, and it is of interest in the food additive industry, therapeutics and sport nutrition. In the first part of the research, a screening was carried out among 48 yeast strains to find those able to accumulate higher intracellular GSH levels. Two Saccharomyces cerevisiae strains proved to be the best GSH producers (1.3% DW); in every sample the presence of traces of S-adenosyl-methionine (0.2% DW) was also evidenced.
S-adenosyl-methionine (SAM) plays a role in the immune system, maintains cell membranes, and participates in detoxification reactions and in the manufacture of brain chemicals and cartilage. The second part of the research was aimed at increasing, in a post-fermentative procedure, the GSH levels present inside the cells at the end of the growth phase. Moreover, the time course of intracellular SAM levels, to be related to accumulated GSH, was also monitored. Cells were then suspended in an appropriate solution containing mineral salts, glucose and the amino acid precursors of the two studied molecules. With this procedure, intracellular GSH levels reached 4.6% DW after 48 h of incubation. Moreover, GSH levels could be related to SAM production (up to 1.3% DW). The presence, in several samples, of intermediate metabolites such as cystathionine and homocysteine proved the establishment of an intracellular equilibrium between GSH and SAM; this behaviour represents a promising starting point for the set-up of a microbial process for the simultaneous production of the two molecules. Three acetate mutants of Y. lipolytica yeasts, which varied in colony morphology (rough and smooth), were employed for continuous citric acid production from glucose and fructose syrup in a membrane reactor with cell recycle. The strains were compared for their product yields, specific acid production rates and ratios of citric acid to isocitric acid. Experiments showed that glucose syrup was the better substrate for citric acid production by Y. lipolytica. The citric acid concentration in the effluent ranged from 80 to 120 g/L, depending on the yeast strain used. All Y. lipolytica strains produced very low amounts of isocitric acid; its concentration did not exceed 3 g/L. Based on the results of these experiments, the smooth strain AWG-7 was found to be the most suitable for citrate production from both glucose and fructose syrup during long-term continuous processes (500 h).
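For a continuous process like this one, the steady-state volumetric productivity follows directly from the dilution rate and the effluent product concentration. A minimal sketch; the 100 g/L figure in the usage comment is just an illustrative point within the reported 80-120 g/L effluent range, not a measured value:

```python
def volumetric_productivity(dilution_rate, product_conc):
    """Steady-state volumetric productivity of a continuous
    reactor, P = D * C.

    dilution_rate: D in 1/h; product_conc: C in g/L.
    Returns P in g/(L h)."""
    return dilution_rate * product_conc

# e.g. D = 0.013 1/h with C = 100 g/L gives P = 1.3 g/(L h)
```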
In the steady state, the highest citrate productivity (1.3 g/(L h)) was obtained with this strain when the feed medium contained 200 g/L of glucose and the dilution rate (D) was 0.013 1/h. Supplementation of the feed medium with Bacto-peptone improved the productivity, citric acid yield and stability of the continuous process in the cell-recycle fermentation system. For textile dyeing with natural dyes, indigo has an almost unique position as the foremost blue natural dye. Due to the importance of indigo, considerable research has been conducted to replace the chemical synthesis of the dye by biotechnological methods. Therefore, we investigated several characteristics of natural indigo derived from Polygonum tinctorium and its dyeing properties on silk fabrics, such as washing, perspiration, and light fastness. This work was financially supported by the Program for Cultivating Graduate Students in Regional Strategic Industry from the Korea Industrial Technology Foundation. Glycine oxidase is the product of the yjbR gene of Bacillus subtilis, predicted by sequence homology to be a flavoprotein similar to sarcosine oxidase. Glycine oxidase catalyzes the oxidative deamination of various primary and secondary amino acids (e.g. sarcosine, N-ethylglycine, and glycine) and D-amino acids to form the corresponding α-keto acids and hydrogen peroxide. Previous investigations reported cloning and production of the glycine oxidase gene in Escherichia coli at up to 1 U/g cells. The present work has improved the expression of the recombinant His-tagged glycine oxidase 15-fold by using pET28a and Rosetta cells under optimal IPTG, temperature and time of induction. The protein obtained represented 30% of total soluble protein in the crude extract. The enzyme was purified to near homogeneity using IMAC with 95% recovery and a specific activity of 1.21 U/mg protein.
The enzyme was active toward glycine, sarcosine and different D-amino acids, having in general a basic pH optimum. The kinetic parameters were also studied, showing Km values ranging from 0.3 to 300 mM. The enzyme was immobilized and used to obtain pyruvic acid (an α-keto acid) from D-alanine in good yield. The enzymatic synthesis of lipophilic derivatives of various natural antioxidants, including flavonoid glycosides as well as derivatives of cinnamic acid, was performed using various immobilized lipases in ionic liquids such as 1-butyl-3-methylimidazolium tetrafluoroborate (BMIM-BF4) and 1-butyl-3-methylimidazolium hexafluorophosphate (BMIM-PF6). The influence of various reaction parameters on the catalytic behavior and selectivity of the lipases was determined. A response surface methodology was applied for the optimization of the yield and productivity of the biocatalytic process. The antioxidant activity of the biocatalytically prepared lipophilic derivatives of natural antioxidants, as expressed on Cu²⁺-induced oxidation of low-density lipoprotein (LDL) and total serum, was investigated. Process strategy for reduction of proteolysis in Pichia pastoris fermentations. Jan Weegar 1, John Dahlbacka 1, Noora Sirén 2, Niklas von Weymarn 2, Kaj Fagervik 1: 1 Faculty of Chemical Engineering, Åbo Akademi University, Finland; 2 Laboratory of Bioprocess Engineering, Helsinki University of Technology, Finland. E-mail: jan.weegar@abo.fi (J. Weegar). The yeast Pichia pastoris is a popular host organism for the production of recombinant proteins. It is, however, common for the products to be degraded by proteases towards the end of the fermentation, resulting in decreased productivity and purity. Proteolysis of recombinant proteins in P. pastoris fermentations is affected by the temperature and pH of the growth medium. In this study, it was shown that decreasing the temperature from 30 to 10 °C during the induction phase effectively prevented proteolysis.
On the other hand, the temperature decrease resulted in a reduced maximal methanol consumption rate, which in turn made the culture highly sensitive to residual methanol. The temperature was slowly decreased according to a predetermined trajectory. As the temperature reached values below 12 °C, the methanol concentration had to be closely monitored and the substrate feed rate adjusted in order to prevent methanol poisoning as well as to maintain the culture as a substrate-limited fed-batch. Measurement of protease concentrations revealed that proteases were present at 10 °C, but at this temperature the proteolysis rate was evidently effectively reduced. With this process strategy, the recombinant protein produced could be recovered almost completely (i.e. at high purity), compared to a constant high-temperature culture (30 °C) in which severe breakdown of the product was observed. With the growth of an ecological conscience in public opinion, more and more industrial processes are being analyzed for possible ecologically beneficial alternatives. For the production of ascorbic acid, which is nowadays carried out by the Reichstein process, much research has been done to develop biotechnological alternatives for the synthesis of Reichstein intermediates by enzymatic means, which show some advantages regarding cost and environmental friendliness. Besides two-stage fermentation, our approach is to design a tailor-made organism that produces 2-keto-L-gulonic acid, the direct precursor of ascorbic acid, from glucose or gluconic acid, and which can easily be converted to the final product by conventional methods. Erwinia (Pectobacter) cypripedii, a natural producer of 2,5-diketo-D-gluconic acid, was selected as a suitable host for a 2,5-diketo-D-gluconate reductase from Corynebacterium glutamicum.
To increase the yield of the desired compound, we are investigating two 2-keto-reductases in the host organism that diminish the yield of 2-keto-L-gulonate by reducing the compound to L-idonic acid or by metabolization of intermediates. These two enzymes were investigated with various biochemical and molecular biological methods, which will be presented. Overproduction of bioinsecticides by heat and salt stress and control of dissolved oxygen in cheap media of Bacillus thuringiensis. Nabil Zouari, Dhouha Ghribi, Samir Jaoua, Laboratoire des Biopesticides, Centre of Biotechnology of Sfax, Tunisia, BP:K, 3038 Sfax, Tunisia. Bioinsecticides based on preparations of spores and insecticidal crystal proteins (ICPs) produced by the bacterium Bacillus thuringiensis (Bt) have proved to be a powerful tool for fighting some agricultural pests and vectors of disease. However, the use of Bt preparations as commercial insecticides would be prohibitively expensive because it is not easy to achieve cheap overproduction of ICPs during large-scale fermentation. Here, we report possibilities to improve delta-endotoxin production as a consequence of the responses of Bt strains to low levels of heat and salt stress. Each stressor improves delta-endotoxin production differently, but both were shown to be most efficient at the beginning or the mid-exponential phase of the cultures, which become resistant at the stationary or sporulation stages. Heat stress caused an 84% increase in the synthesis yields of the sporulating cells; in contrast, salt caused a 25% increase in spore counts, corresponding to a 28% improvement in toxin production. The combined effect of both stressors led to a 66% improvement in toxin production and a 40% improvement in yield. We focused on overcoming carbon catabolite repression, closely related to oxidative metabolism, by adequate control of dissolved oxygen in the cheap media we formulated for Bt insecticide production.
We showed that it was necessary to take into account an equilibrium between the high density of vegetative cells and their ability to synthesize toxins during sporulation. A 40% increase in ICP production was reached in a 3-L fermenter; the combination of mutagenesis, heat and salt stress and oxidative metabolism control allowed more than 100% improvement of delta-endotoxin production. These results are of great practical importance, since high bioinsecticide concentrations could be produced without a decrease in production yields. Mechanism and function of the intramembrane-cleaving protease rhomboid. Marius Lemberg 1, Javier Menendez 2, Christopher Koth 2, Matthew Freeman 1: 1 MRC Laboratory of Molecular Biology, Cambridge, UK; 2 Ontario Center for Structural Proteomics, Toronto, Canada. Rhomboids are a family of intramembrane serine proteases that are widely conserved throughout evolution. Among the diverse functions discovered so far, rhomboids participate in intercellular signalling, parasite invasion, membrane dynamics and bacterial quorum sensing, making them potentially valuable therapeutic targets. The identification of physiological substrates and of selective inhibitors will be key to their evaluation as drug targets. We have developed an in vitro cleavage assay to monitor rhomboid activity in the detergent-solubilised state, enabling the first isolation of a highly pure rhomboid with catalytic activity. Analysis of purified mutant proteins suggests that rhomboids use a serine protease catalytic dyad instead of the previously proposed triad, and gives insights into subsidiary functions such as ligand binding and water supply. Oligosaccharides and 3-keto-glycosides. Availability of the enzyme is, however, hampered by the very slow growth and low production rates of the fungus.
Cloning of the encoding gene and production of the protein by heterologous expression are therefore a prerequisite not only for any biotechnological application, but for further scientific investigation as well. On the basis of peptide sequences, degenerate primers were designed, and the resulting PCR fragment was used as a probe to isolate the gene from a genomic library. Two very similar genes encoding previously uncharacterized proteins were also found, and flanking regions were amplified using RACE-PCR. Furthermore, cDNA clones of all genes were isolated by RT-PCR. The behavior of phytate-degrading enzymes isolated from Malaysian Zea mays root in rice bran media. Anis Shobirin Meor Hussin 1, Abd-Elaziem Farouk 1, Hamzah Mohd Salleh 1, Ralf Greiner 2: 1 Biomolecular Engineering Research Group, Department of Biotechnology Engineering, Kulliyyah of Engineering, International Islamic University Malaysia, Jalan Gombak, 53100 Kuala Lumpur, Malaysia; 2 Centre for Molecular Biology, Federal Research Centre for Nutrition and Foods, Haid-und-Neu-Straße 9, D-76131 Karlsruhe, Germany. Phytate-degrading enzymes catalyze the step-wise release of phosphate from phytate, the principal storage form of phosphorus in plant seeds and pollen. They are widespread in nature, occurring in plants and microorganisms, as well as in some animal tissues.
Phytate-degrading enzymes have been studied intensively in recent years because of the great interest in phytate degradation and its application to animal feed and human health. Of over 140 screened isolates, three isolates from the roots of a Malaysian maize plantation showed phytate-degrading activity. The production of the phytate-degrading enzyme was studied using different concentrations (%, w/v) of rice bran media at different stages of cultivation. The dephosphorylation of phosphate from rice bran phytate showed a regulatory effect on the secretion of bacterial phytases. In this conference, we will present data on the characterization of the enzymes. In this paper we present the properties of a phytase purified from a bacterium isolated from Malaysian wastewater, which might find application as an animal feed supplement. The phytase described herein is a periplasmic enzyme. The phytase was purified about 180-fold to apparent homogeneity using ion-exchange chromatography and gel filtration, with a recovery of 10% relative to the phytate-degrading activity in the crude extract. The enzyme exhibits an activity of about 1106 U mg⁻¹. Gel filtration of the native enzyme on a calibrated Sephacryl S-200 column gave a molecular mass for the phytase of 42,000 ± 1500 Da, with the elution position being determined by measurement of enzyme activity. Lower molecular mass species or higher molecular mass aggregates were not observed. The phytase appeared homogeneous by polyacrylamide gel electrophoresis under non-denaturing conditions at pH 8.3 and 4.8, and gave a single protein band upon SDS gel electrophoresis after Coomassie staining of the gels. These results indicate that the phytase can be regarded as homogeneous. The molecular mass estimated by SDS-PAGE was 41,500 ± 2500 Da. Consequently, this enzyme is a monomeric protein.
The purified enzyme had a single pH optimum at pH 4.5 and was virtually inactive above pH 7.0. At pH 3.0, 40%, and at pH 2.5, 20% of the activity at the optimal pH was observed. The effect of pH on enzyme stability was studied in the range 1.0-9.0 at 4 °C. Within 14 days the phytase did not lose any activity in the pH range from 3.0 to 8.0, but at pH values below 2.0 a rapid decline in activity was observed. At pH 1.5, 72%, and at pH 9.0, 65% of the initial activity was lost within 24 h. Over the range of temperatures studied, 10-80 °C, the optimum temperature for the enzyme was found to be 65 °C. The apparent activation energy was estimated at pH 4.5 from the slope of log Vmax versus 1/T; the data showed excellent linearity from 15 to 65 °C, and the Arrhenius activation energy for the hydrolysis of phytate was calculated to be 37.5 kJ/mol. To check thermal stability, the purified enzyme was incubated at different temperatures, cooled to 4 °C and assayed using the standard phytase assay. The enzyme lost no activity in 10 min at temperatures up to 65 °C; when exposed for 10 min at 70 °C it retained 50%, and at 80 °C 12%, of the initial activity. To determine the substrate selectivity of the purified phytase, several phosphorylated compounds in addition to phytate were used for Km and Vmax estimation, detecting the phosphate released during hydrolysis through formation of a soluble phosphomolybdate complex in an acidic water-acetone mixture. Only phytate was identified as a substrate. The kinetic parameters for the hydrolysis of phytate were determined to be Km = 0.15 mmol L⁻¹ and kcat = 1164 s⁻¹ at pH 4.5 and 37 °C. Like other bacterial phytate-degrading enzymes, the purified enzyme showed substrate inhibition; the activity of the purified enzyme was inhibited at substrate concentrations >7 mM.
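The substrate inhibition reported here can be illustrated with the classical substrate-inhibition rate law, v = Vmax·S/(Km + S + S²/Ki). In the sketch below, Km = 0.15 mM is taken from the abstract, while Vmax and Ki are hypothetical placeholders chosen only to show the qualitative shape (the abstract reports no Ki):

```python
def phytase_rate(s_mM, vmax=1.0, km=0.15, ki=50.0):
    """Substrate-inhibition kinetics: v = Vmax*S / (Km + S + S^2/Ki).

    km = 0.15 mM is from the abstract; vmax and ki = 50 mM are
    assumed placeholders, not measured values."""
    return vmax * s_mM / (km + s_mM + s_mM**2 / ki)
```

With this form the rate peaks near S = sqrt(Km·Ki) and then declines at high substrate, mirroring the inhibition observed above about 7 mM phytate.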
The study of the effect of metal ions on enzyme activity showed that none of them had an activating effect when used at concentrations between 10⁻⁴ and 10⁻³ M. Mg²⁺, Ca²⁺, Mn²⁺, Co²⁺, Ag⁺, Hg²⁺, and Cu²⁺ had little or no effect on enzyme activity, while Zn²⁺, Fe²⁺, and Fe³⁺ showed strong inhibitory effects. The reduced phytate-degrading activity in the presence of Fe²⁺ and Fe³⁺ is attributed to a lower phytate concentration in the enzyme assay because of the appearance of an Fe-phytate precipitate. When compounds that tend to chelate metal ions, such as o-phenanthroline, EDTA, oxalate, citrate or tartrate, were tested for their effect on enzyme activity, none of them was inhibitory at concentrations from 10⁻⁴ to 10⁻³ M. Fluoride, a known inhibitor of different bacterial phytate-degrading enzymes, and the hydrolysis product phosphate, as well as its structural analogs molybdate, wolframate and vanadate, were found to be strong inhibitors of the purified enzyme. Fluoride inhibited the hydrolysis of phytate with a Ki value of 112 mol L⁻¹. Several fusion strategies have been developed for the expression and purification of small antimicrobial peptides (AMPs) in recombinant bacterial expression systems. In the present work, we investigated the use of the baculoviral polyhedrin (polh) protein as a novel fusion partner for production of a model AMP (halocidin 18 subunit; hal18) in Escherichia coli. The recombinant hal18 AMP could be cleaved from the fusion protein with hydroxylamine and easily recovered by simple dialysis and centrifugation. This was facilitated by the fact that polh was soluble in the alkaline cleavage reaction but became insoluble during dialysis at neutral pH. Importantly, the recombinant and synthetic hal18 peptides showed nearly identical antimicrobial activities against E. coli and Staphylococcus aureus, used as representative Gram-negative and Gram-positive bacteria, respectively.
These results demonstrated that baculoviral polh can provide an efficient and facile platform for the production and functional study of target AMPs. Extensive industrial and food-additive applications of succinate have attracted much effort towards finding an environmentally friendly alternative to petrochemical production processes. It is very attractive to engineer S. cerevisiae for succinate production because of its generally-regarded-as-safe (GRAS) status and its ease of genetic manipulation and fermentation. We approached this metabolic engineering problem with a two-step methodology combining modern computational and molecular biology tools. In the first step, we identified potential metabolic engineering targets leading to high succinate yield and productivity with the aid of a genome-scale metabolic model and a bi-level optimization framework using flux balance analysis and quadratic programming. In the next step, various deletion mutants are being constructed and characterized for physiology and succinate production. Results from these experiments will then be used to improve the predictions of the computational models. So far, we have constructed a Saccharomyces cerevisiae mutant deleted in SDH3, which encodes a major subunit of the SDH complex converting succinate to fumarate in mitochondria. The physiology of the sdh3 mutant has been characterized in aerobic and anaerobic batch cultivations and in a glucose-limited chemostat at a dilution rate as low as 0.027 h⁻¹. In aerobic batch fermentations, the mutant showed a reduced maximum specific growth rate compared to the wild type, and it was incapable of growing on ethanol as the sole carbon source, as predicted by the model. Interestingly, the mutant showed a much higher specific growth rate under anaerobic conditions, close to that of the wild-type strain. Moreover, the chemostat cultivations indicate that the critical dilution rate of the mutant is below 0.027 h⁻¹.
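Flux balance analysis of the kind used in this methodology maximizes a target flux subject to steady-state mass balances, S·v = 0, and flux bounds. A minimal sketch on a toy three-reaction network (not the genome-scale yeast model; all stoichiometry and bounds are illustrative assumptions):

```python
from scipy.optimize import linprog

# Toy network: v_up (uptake -> G), v_succ (G -> succinate out),
# v_bio (G -> biomass out). Steady state on the single internal
# metabolite G requires v_up - v_succ - v_bio = 0.
S = [[1.0, -1.0, -1.0]]
c = [0.0, -1.0, 0.0]        # linprog minimizes, so maximize v_succ
bounds = [(0, 10),          # uptake capped at 10 (arbitrary units)
          (0, None),        # succinate export unbounded
          (1, None)]        # minimum biomass/maintenance flux of 1
res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x)                # optimal flux distribution
```

With these bounds, all substrate not committed to the biomass minimum is routed to succinate, the same logic that genome-scale FBA applies over thousands of reactions.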
This opens further opportunities to investigate the interesting behavior of the mutant and the underlying regulatory processes, improving our understanding of yeast mitochondrial metabolism. Department of Chemical Engineering, Yıldız Technical University, Davutpaşa Campus, 34210 Esenler/Istanbul, Turkey. E-mail: dkilic@yildiz.edu.tr (D.K. Apar). Lactose is the dominant carbohydrate in milk, which is in turn the only significant natural source of lactose. A large number of people do not digest lactose properly due to a lack or inactivity of intestinal beta-galactosidase, and they suffer from intestinal dysfunctions (gas, abdominal pain and diarrhea) if their diet contains lactose. Moreover, lactose is a sugar with a high BOD, low sweetness and low solubility compared to the products of its hydrolysis (glucose and galactose), and, being a hygroscopic sugar, it has a strong tendency to adsorb flavours and odours. The hydrolysis of this sugar is very attractive for the improvement of processes for the production of ice cream and other refrigerated dairy products, and it could be very interesting for the development of additives for animal and human alimentation. The enzymatic hydrolysis of lactose is carried out by beta-galactosidases, enzymes that are widely distributed in nature, appearing in microorganisms, plants and animal tissues. The present investigation describes the effects of sonication process parameters on the enzymatic hydrolysis of milk lactose and on enzyme stability. A Bandelin Sonopuls sonicator was used for the lactose hydrolysis experiments. The β-galactosidase enzyme used is produced from Kluyveromyces marxianus. The reactions were carried out in 250 mL of milk. The process variables for the sonicator are duty cycle, acoustic power and enzyme concentration. The residual lactose concentration (g/L) and residual enzyme activity (%) over time were investigated as functions of the process variables.
besides this, mathematical models depending on the operating conditions were also derived using the experimental data for lactose concentration and enzyme activity. kinözbek department of chemical engineering, yıldız technical university, davutpaşa campus, 34210 esenler/istanbul, turkey. e-mail: dkilic@yildiz.edu.tr (d.k. apar) over the last decade, the use of plant protein hydrolysates as an alternative to animal protein hydrolysates in human nutrition has broadly expanded. protein hydrolysates are often used in different nutritional formulations, such as the supplementation of drinks to enhance their nutritional and functional properties, or special medical diets. there are many processes which employ protein hydrolysis and hydrolytic products. among these processes, the use of enzymes allows selective hydrolysis of protein and produces a potentially safer and more defined material. in the present study, the effects of temperature, ph and viscosity on the hydrolysis of corn gluten were investigated using a stirred batch reactor system. the corn gluten was hydrolysed using neutrase, a bacterial protease produced by a selected strain of bacillus amyloliquefaciens. the reactions were carried out in 0.2 l of aqueous solutions containing 1% (w/v) corn gluten and 0.4% (v/v) enzyme. the degree of hydrolysis (%) and the soluble protein concentration were investigated over time at temperatures of 40, 45, 50, 55 and 60 °C and at ph values of 6.5, 7, 7.5 and 8. to investigate the effect of viscosity, various amounts of glycerol were added to the reaction solutions to produce viscosities in the range of 1.415-13.43 cp. the degree of hydrolysis (dh) was computed using the ph-stat method. for soluble protein determination in the hydrolysate samples, the folin-lowry method (1951) was used.
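the ph-stat method mentioned above computes the degree of hydrolysis from the volume of base added to hold the ph constant. a hedged sketch of the standard adler-nissen relation, with all numerical values (base volume, dissociation factor alpha, h_tot) invented for illustration rather than taken from the abstract:

```python
# ph-stat degree-of-hydrolysis (DH) calculation after adler-nissen;
# the parameter values below are illustrative assumptions, not data
# from the corn gluten study.
def degree_of_hydrolysis(base_ml, normality, alpha, mass_protein_g, h_tot):
    """DH (%) = B(ml) * Nb(mol/l) / (alpha * MP(g) * h_tot(mmol/g)) * 100."""
    return base_ml * normality / (alpha * mass_protein_g * h_tot) * 100.0

# example: 2.0 ml of 0.5 N NaOH consumed, alpha ≈ 0.44 (ph 7, ~50 °C),
# 2 g corn gluten (1% w/v in 0.2 l), h_tot ≈ 9.2 mmol/g (assumed value)
dh = degree_of_hydrolysis(2.0, 0.5, 0.44, 2.0, 9.2)
print(round(dh, 2))   # -> 12.35 (%)
```

alpha is the average degree of dissociation of the alpha-amino groups at the working ph and temperature, and h_tot is the total number of peptide bonds per gram of protein; both must be looked up or determined for the actual substrate.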
polyhydroxyalkanoates (pha), one of the most promising bioplastics for the partial replacement of synthetic polymers like polypropylene, are polyesters produced by bacteria as intracellular storage reserves of carbon and energy. the industrial production of pha is achieved with pure cultures, either in their natural state or genetically engineered. the main obstacle to the replacement of synthetic plastics by biopolymers is their great cost difference. research on biopolymer synthesis using mixed cultures and waste organic carbon sources as substrates has proved to decrease the production costs of pha substantially. the optimization of pha production under aerobic dynamic feeding (adf) conditions was achieved recently in our group, obtaining the highest value of pha content stored by mixed cultures (79.2% of cell dry weight). in this work only a homopolymer of polyhydroxybutyrate (phb) was obtained, and since it is a highly crystalline and brittle material, its field of application is limited. the mechanical and thermal properties of pha can be varied to a great extent by adjusting the monomer composition. the incorporation of monomeric units other than hb in the polymer chain originates copolymers with improved mechanical properties. the optimization of pha production from propionate by a mixed culture was studied by varying the carbon and ammonia concentrations. propionate alone, acetate alone or a mixture of acetate and propionate were tested. copolymers of hydroxybutyrate and hydroxyvalerate, p(hb/hv), with different compositions were obtained. consequently, polymer properties could be manipulated by feeding the selected volatile fatty acid composition. the pharmaceutically important plant genus glycyrrhiza (licorice) is an important commercial product used as a natural sweetener and as an anti-inflammatory, anti-cancer and anti-diabetic agent. an agrobacterium rhizogenes transformation system was used for hairy root cultures of licorice g.
uralensis. after inoculation of aseptic stem segments, the ability of hairy root formation was scored over a period of 6 weeks. the mean transformation frequency ranged from 27% (for strain 8196) up to 31% (for strain 15834). some transformed genotypes showed significant differences in root weight and in flavonoid and glycyrrhizin (gl) production. the cotransformation rate of licorice intact explants with lba 9402 tl-dna and the 35s gus gene averaged more than 35%. the obtained root cultures were additionally elicited with extracts of the biotic elicitor acremonium sp. (an endomycorrhizal fungus) and were used as an in vitro system for metabolite production. the transformed and elicited hairy roots of g. uralensis obtained by infection with a. rhizogenes 8196 produced gl at a yield of 4.5% of dry weight over a culture period of 30 days. according to tentative analyses, the hairy root cultures of glycyrrhiza species produced flavonoids (liquiritigenin and liquiritin). higher levels (3.42 g/l) of total flavonoid production were identified in the strains transformed by lba 9402. this study examined differences among elicitor treatments and incubation periods for optimal metabolite production. clearly, the selection of an effective agrobacterium strain for the production of transformed root cultures is highly dependent on the plant species and must be determined empirically. production, purification and characterization of scytalidium thermophilum phenol oxidases didem sutay 1 , ufuk bakir 1 , zumrut b. ogel 2 : 1 chemical engineering department, middle east technical university, inonu bulvari, 06531 ankara, turkey; 2 food engineering department, middle east technical university, inonu bulvari, 06531 ankara, turkey. e-mail: ubakir@metu.edu.tr (u. bakir) phenol oxidases (pos) are a group of enzymes responsible for the oxidation of various phenolic compounds in the presence of molecular oxygen.
there are different types of pos present in nature; the three major groups of these enzymes are laccases (e.c. 1.10.3.2, p-benzenediol:oxygen oxidoreductase), catechol oxidases (e.c. 1.10.3.1, o-diphenol oxidoreductase) and tyrosinases (e.c. 1.14.18.1, monophenol monooxygenase). another group of enzymes, the peroxidases (e.c. 1.11.1.7), can also be considered members of the po family. pos have a very wide substrate range, and the final oxidation products of these substrates are quinones, highly reactive molecules that polymerize into brown, red or black water-insoluble compounds. pos are very common in nature and can be found in almost all plants, animals and microorganisms. pigmentation and protection from the environment are the main functions of these enzymes. pos have different applications in the food, pharmaceutical and textile industries and in waste-water treatment systems. the objective of this study was po production by the thermophilic fungus scytalidium thermophilum, followed by purification and characterization of the enzyme. for this purpose, enzyme production was performed either in a shaker-incubator or in a temperature-, ph- and dissolved oxygen-controlled 2 l bioreactor (probiotem) to optimize the enzyme production medium composition and the bioreactor parameters. as carbon and nitrogen sources, 4% glucose and 0.4% yeast extract were determined to be the optimal concentrations, respectively. copper, gallic acid and tannic acid were found to increase enzyme production. purification was performed using membrane and chromatographic techniques. hydrophobic, ion exchange and gel filtration columns were used with an fplc system (äkta prime, amersham biosciences). the phenyl sepharose tm high performance column in particular appeared to be very efficient for po purification from scytalidium thermophilum. the purified po has been characterized by electrophoretic techniques and kinetic studies. isolation of lipolytic microorganisms from subtropical soils.
cloning, purification and characterization of a novel esterase from strain pseudomonas sp. cr-611 núria prim, cristian ruiz, cristina bofill, f.i. javier pastor, pilar diaz, department of microbiology, university of barcelona, av. diagonal 645. microorganisms or their enzymes are used in a wide range of biotechnological activities, such as polymer hydrolysis, synthesis of added-value compounds, and sample decontamination. thus, there is an increasing interest in isolating new enzymes and new enzyme-producing organisms for use in industrial conversions (cherry and fidantsef, 2003). among these enzymes, lipases, esterases, cellulases, xylanases and pectinases play an important role in many biotechnological processes, like those related to pulp and paper processing. three samples of subtropical forest soil from puerto iguazú (argentina) were used for the isolation of autochthonous microorganisms growing in an organic matter-rich environment. a total of 724 pure cultures of bacteria and fungi were obtained, and their hydrolytic activities on polysaccharide and lipidic substrates were assayed using olive oil, tributyrin, cholesterol esters, xylan, cellulose and pectin as substrates. among the isolates analysed, 449 were active on one or more of the substrates evaluated, and 43 of them degraded all substrates. nearly half of the strains displayed lipolytic activity, whereas the number of strains active on xylan, cellulose, pectin and cholesterol esters was much lower. the 76 strains bearing the highest hydrolytic activities were selected and stored for further characterization (ruiz et al., 2005). among them, strain cr-611, one of the most active isolates on tributyrin, was selected for identification and characterization of its lipolytic system. lipolytic strain cr-611 was identified by morphological, physiological and phylogenetic tests as a pseudomonas sp. closely related to p. fluorescens.
sds-page and zymogram analysis (diaz et al., 1999) of cell extracts and supernatants from the strain revealed a complex lipolytic system consisting of at least two lipolytic enzymes. sequence alignment and clustering of previously described pseudomonas lipases and esterases allowed the design of different sets of primers for the isolation of the lipase/esterase-coding genes. a gene coding for an esterase with homology to family vi bacterial lipases (arpigny and jaeger, 1999) was isolated and cloned in escherichia coli. the cloned enzyme was further purified and characterized, showing a preference for short fatty acid esters and displaying typical michaelis-menten kinetics with no interfacial activation. the substrate profile, together with the kinetic behaviour and the sequence similarity of the cloned enzyme to family vi bacterial esterases, allowed this enzyme to be identified as an esterase, and it was named esta6. maximum activity was achieved on muf-butyrate at 55 °C and ph 8.5, suggesting that it could be of interest for biotechnological purposes. microbial xylitol production from agricultural wastes has recently attracted much attention from industry because it has the potential to realize cheaper production of xylitol with low environmental impact (tada et al., 2004). in order to realize effective xylitol production by a xylose-utilizing yeast, the oxygen supply is key to maximizing the xylitol yield over consumed xylose (y xl ), because the intracellular xylitol metabolism is strongly influenced by the amount of available oxygen. in the present work, we applied a metabolic reaction model to determine the optimal oxygen transfer rate (otr) in a fermentor for maximizing the xylitol yield. corn cob hydrolysates containing 25 g-xylose/l were employed as medium for xylitol production in computer-controlled batch cultures using candida magnoliae (ferm p-16522, aist).
a metabolic reaction model considering the main xylitol metabolism, including glycolysis, the pentose-phosphate pathway, the tca cycle and cell synthesis, was developed. the model allows the estimation of various intracellular metabolic flux distributions, including the xylitol production rate. the oxygen uptake rate that maximizes the ratio of the xylitol production rate to the xylose consumption rate corresponds to the otr condition that maximizes the xylitol yield on consumed xylose. based on the metabolic reaction model, the otr was optimized by linear programming; the optimal otr and the maximum xylitol yield were estimated as 0.5 mmol o 2 /l h and 0.81 g-xylitol/g-xylose, respectively. experimental verification using the optimal otr demonstrated that the xylitol yield was greatly improved to 0.75 g-xylitol/g-xylose, from 0.6 g-xylitol/g-xylose in our previous study. expression of a bacterial sugar phosphate transporter in s. cerevisiae to release l-glycerol 3-phosphate accumulated by metabolic engineering almut popp, huyen thi thanh nguyen, ulf stahl, elke nevoigt, department of microbiology and genetics, university of technology, 13353 berlin, germany. e-mail: a.popp@lb.tu-berlin.de (a. popp) in contrast to glycerol, its phosphorylated precursor l-glycerol-3-phosphate (l-g3p) is retained by the plasma membrane. therefore, engineered yeast strains accumulate l-g3p in the cytosol, resulting in a low overall yield of the desired product and laborious downstream processing. a suitable sugar phosphate transporter in the yeast plasma membrane would overcome these limitations. the glycerol-3-phosphate transporter (glpt) of e. coli is an antiporter and naturally mediates the uptake of l-g3p in exchange for inorganic phosphate. we assume that this transporter would also mediate the excretion of accumulated l-g3p into a phosphate-rich medium if it were present in the plasma membrane of engineered yeast.
despite many inconsistencies in codon usage, we were able to express the bacterial glpt gene in yeast. expression was monitored via a c-myc tag added to the c- or n-terminal hydrophilic tail, respectively. the quantity of the construct with the n-terminal tag clearly exceeded that of the construct with the c-terminal tag. both gene products are located in the endoplasmic reticulum, as shown by immunofluorescence microscopy. evidently, yeast's transmembrane protein sorting machinery does not recognise the transporter as a substrate for the secretory pathway. metabolic flux analysis of c- and p-limited shikimic acid producing e. coli gaspard lequeux 1 , louise johansson 2 , jo maertens 1 , peter vanrolleghem 1 , gunnar lidén 2 : 1 biomath, ghent university, coupure links 653, 9000 gent, belgium; 2 department of chemical engineering, lund university, p.o. box 124, 22100 lund, sweden. e-mail: gaspard.lequeux@biomath.ugent.be (g. lequeux) metabolic flux analysis (mfa) was applied to decipher why p-limited e. coli fermentations are more favourable for shikimic acid production than glucose-limited fermentations. mfa allows insight into the intracellular flux distribution over different pathways by measuring only the net production and consumption rates of metabolites, under the condition that parallel pathways are removed. to this end, a detailed metabolic network model was created and checked for consistency, dead ends and parallel pathways. several fermentations were performed at different dilution rates (ranging from 0.05 to 0.3 h −1 ) and under different limitations (phosphate and glucose). the e. coli strain used was w3110 with genetic modifications in the aromatic amino acid pathway to enhance shikimic acid production. for each dilution rate, a metabolic model was solved. this way, the evolution of each flux can be followed with respect to the dilution rate.
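once parallel pathways are removed, the mfa calculation outlined above reduces to solving the steady-state balance s·v = 0 for the unmeasured intracellular fluxes, given the measured net exchange rates. a toy two-metabolite sketch (not the detailed e. coli network of the abstract):

```python
# classical metabolic flux analysis (MFA) sketch: unmeasured fluxes follow
# from measured exchange rates via S @ v = 0. toy 2-metabolite network,
# NOT the detailed shikimic acid model of the abstract.
import numpy as np

# columns: v1 uptake, v2 M1->M2, v3 M2->product, v4 M2->byproduct
S = np.array([[1.0, -1.0,  0.0,  0.0],    # metabolite M1 balance
              [0.0,  1.0, -1.0, -1.0]])   # metabolite M2 balance
measured = {0: 10.0, 2: 4.0}              # v1 and v3 from exchange measurements

meas_idx = sorted(measured)
unk_idx = [j for j in range(S.shape[1]) if j not in measured]
rhs = -S[:, meas_idx] @ np.array([measured[j] for j in meas_idx])
v_unknown, *_ = np.linalg.lstsq(S[:, unk_idx], rhs, rcond=None)
print(dict(zip(unk_idx, v_unknown)))      # v2 = 10, v4 = 6
```

with more measurements than unknowns the same least-squares solve gives a redundancy-weighted estimate, which is how consistency checks on the measured rates are usually done.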
the p-limited cultures showed a better yield, which can be explained by the diminished excretion of dehydroshikimic acid (as is known from the literature) and a reduction in the hydrolysis of atp. takasumi hattori, kuniki kino, kohtaro kirimura, department of applied chemistry, school of science and engineering, waseda university, tokyo, japan. e-mail: takasumi@suou.waseda.jp (t. hattori) alternative oxidase is a terminal oxidase in the respiratory chain, forming a branch of the cytochrome pathway, and is inhibited by salicylhydroxamic acid (sham) but not by cyanide. the citric acid-producing fungus aspergillus niger wu-2223l has a cyanide-insensitive and sham-sensitive respiration catalyzed by the alternative oxidase (kirimura et al., 1999) and did not produce citric acid when cultivated with sham (kirimura et al., 2000). in this study, the transcript levels of the alternative oxidase gene (aox1) (kirimura et al., 1999) and the activities of alternative oxidase under citric acid production conditions were examined during a 9-day cultivation. the amount of aox1 mrna was determined by northern blot analysis, and the specific activity of alternative oxidase as that of duroquinol oxidase. the transcript level and the activity were highest at 2 days, at log phase, decreased between days 2 and 4, and thereafter were maintained at low levels. however, the transcript and the alternative oxidase activity were constitutively detected during the whole cultivation period under citric acid production conditions. sequence analysis of the aox1 chromosomal dna revealed the presence of potential binding sites for the cyclic amp responsive element (cre), the stress responsive element (stre) and the heat shock factor (hsf) in its upstream region. these results indicated that the expression of alternative oxidase is regulated at the transcriptional level and that alternative oxidase contributes as the main respiratory chain during citric acid production. kirimura, k., et al., 1999. curr. genet. 34, 472-477.
kirimura, k., et al., 2000. biosci. biotechnol. biochem. 64, 2034-2039. trichoderma strains are considered to be among the most useful fungi in industrial enzyme production, agriculture and bioremediation. the metabolic versatility displayed by these fungi makes them a very amenable source of new gene products. functional analysis of candidate genes proceeds by two complementary routes: gene overexpression and the generation of loss-of-function mutants. a few expression systems are available, mainly to direct constitutive gene expression under catabolite repression conditions. following a genomic approach, we have recently cloned some gene promoters with high expression on glucose in trichoderma harzianum cect2413. a main goal in gene functional analysis is the generation of knock-out mutants. to date, there is no report of successful gene disruption in t. harzianum, mainly due to a very low homologous recombination frequency. rna-mediated gene silencing has been shown to be an efficient tool to diminish or totally abolish specific gene expression. in particular, strategies based on the use of hairpin constructs allow the rapid and easy generation of strains with reduced levels of mrna from genes of interest. we have used the t. harzianum cect2413 tss1 promoter to direct the expression of a hairpin dna construct that induced the appearance of small interfering rnas (sirnas) and the silencing of the uida reporter gene in a previously uida-overexpressing strain. reduced levels of mrna and gus activity correlated with the presence of sirnas. this is the first report of rna-mediated gene silencing in trichoderma and constitutes a useful and promising tool for functional genomic studies in fungal systems. analysis of the metabolic response of escherichia coli to quantitative modulations of the glucose-6-phosphate dehydrogenase based on 13 c-labelling experiments cécile nicolas, fabien létisse, stéphane massou, philippe soucaille, jean-charles portais.
e-mail: fabien.letisse@insa-toulouse.fr (f. létisse) microorganisms have an efficient capacity for adapting their metabolism in response to genetic or environmental changes, and understanding metabolic robustness has become an emerging issue. part of this robustness originates from the network organization of metabolic systems, where the interplay between all available biochemical reactions provides alternative mechanisms for compensating perturbations. recently, 13 c-metabolic flux analysis ( 13 c-mfa) has been applied to escherichia coli knock-out mutants lacking key enzymes to determine the phenotypic effects of structural changes in the metabolic network, providing further evidence for compensatory phenomena. the aim of our ongoing work is to understand how the central metabolism of e. coli responds to quantitative alterations at a specific key point of the metabolic network. glucose-6-phosphate dehydrogenase (g6pdh), a key enzyme of central metabolism for which the effects of deleting the gene (zwf) have already been described (zhao et al., 2004), was chosen as the target. to this end we have generated a set of expression mutants, i.e. mutants each having a fixed level of expression of the zwf gene. four different levels of expression, leading respectively to g6pdh activities 2, 2.9, 5.7 and 14 times higher than in the wt strain, have been obtained. for each mutant, transcriptomic analysis will be carried out and compared to both the zwf deletion and wt strains to detect changes in the network structure, and the distribution of fluxes will be measured using 13 c-mfa. the flux maps obtained for the various strains will be compared to evaluate the quantitative response of the central metabolic network to the imposed increases in g6pdh activity. metabolic control analysis will be applied to provide insight into the sensitivity of the measurable metabolic fluxes to g6pdh activity.
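the metabolic control analysis mentioned above quantifies how strongly a single enzyme controls a flux through the flux control coefficient c = d ln j / d ln e. a minimal sketch with invented activity/flux numbers (the real coefficients would come from the 2- to 14-fold g6pdh expression series of the abstract):

```python
# metabolic control analysis sketch: flux control coefficient estimated
# from finite changes in enzyme activity E and pathway flux J,
# C = d ln J / d ln E. numbers below are invented for illustration.
import math

def flux_control_coefficient(e1, j1, e2, j2):
    """log-log finite-difference estimate of C_E^J between two states."""
    return (math.log(j2) - math.log(j1)) / (math.log(e2) - math.log(e1))

# e.g. doubling enzyme activity raises the flux by only 10% -> low control
c = flux_control_coefficient(1.0, 1.0, 2.0, 1.1)
print(round(c, 3))   # -> 0.138
```

a coefficient near 1 would mean the enzyme is rate-controlling for that flux; values near 0, as in this invented example, mean control is distributed elsewhere in the network.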
combination of the transcriptomic and fluxomic approaches will provide information on the nature and extent of the compensatory mechanisms. because the activity of a single enzyme is tuned to different levels in the knock-out and expression mutants, this investigation provides a situation that mimics gene-level regulation of metabolism. zhao, j., et al., 2004. metab. eng. 6, 164. with the depletion of the world's petroleum supply, there has been increasing worldwide interest in ethanol as an alternative, non-petroleum source of energy. this has spurred research into new ethanol fermentation technologies. as reported before, the bacterium zymomonas mobilis possesses several advantages over saccharomyces cerevisiae, the microorganism used for ethanol production on an industrial scale. for that reason, we have focused on fermentation studies of this facultative bacterium in free and immobilized form. immobilization of the cells into polyvinyl alcohol (pva) hydrogel lens-shaped capsules (lentikats) improved the batch fermentation process efficiency nine-fold. starch, considered one of the best renewable energy sources, is regarded as a fuel ethanol feedstock. because z. mobilis is unable to utilize maltose, maltotriose and dextrins, the starch has to be converted into glucose monomers. this pre-fermentation step can overburden the whole ethanol production process. this inefficient part of the process was resolved by immobilization of glucoamylase into lentikats. the system with immobilized enzyme and cells was stable in continuous mode for 60 days without any significant change in efficiency. the combination of cell and enzyme immobilization can significantly improve the efficiency and cost of ethanol production on an industrial scale.
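batch ethanol fermentations like those above are commonly described by monod growth kinetics with growth-coupled product formation. a purely illustrative sketch (parameters assumed, not fitted to the z. mobilis data of the abstract):

```python
# illustrative batch fermentation model: Monod growth with product
# formation tied to substrate consumption via a constant yield.
# all parameter values are assumptions, not data from the abstract.
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX, KS, YXS, YPS = 0.4, 0.5, 0.05, 0.48   # 1/h, g/l, g/g, g/g (assumed)

def batch(t, y):
    x, s, p = y                                # biomass, glucose, ethanol (g/l)
    mu = MU_MAX * s / (KS + s)                 # Monod specific growth rate
    dx = mu * x
    ds = -dx / YXS                             # substrate drawn for growth
    dp = -ds * YPS                             # ethanol tied to substrate use
    return [dx, ds, dp]

sol = solve_ivp(batch, (0, 24), [0.1, 100.0, 0.0], max_step=0.1)
print(round(sol.y[2, -1], 1))   # final ethanol titre (g/l)
```

with these assumed yields the run consumes essentially all 100 g/l of glucose within 24 h, so the final titre approaches 0.48 × 100 ≈ 48 g/l; real models for z. mobilis would add maintenance terms and ethanol inhibition.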
fermentation of an inhibitory dilute-acid hydrolysate from spruce using a fed-batch procedure combined with cell reuse andreas rudolf, gunnar lidén, department of chemical engineering, box 124, lund university, se-221 00 lund, sweden. e-mail: andreas.rudolf@chemeng.lth.se (a. rudolf) a well-controlled addition of hydrolysate to the fermentation has proved very efficient in reducing yeast inhibition due to compounds formed during lignocellulose hydrolysis. furthermore, using high cell mass concentrations has been another way of avoiding the negative impact of the inhibitory compounds. if possible, the yeast should therefore be reused in the process. in the present work, a dilute-acid hydrolysate from spruce was fermented using a fed-batch procedure with reutilization of the yeast. the fermentation procedure worked satisfactorily, with more than 98% of the fermentable sugars consumed in each of the four consecutive fed-batch fermentations performed. the ethanol yield on fermentable sugars reached 0.45 g/g. there was continued cell growth in the repeated fed-batch experiments, with an average cell yield on fermentable sugars of 0.06 g/g. in contrast, only about 20% of the fermentable sugars were consumed within 24 h when the fermentation of the hydrolysate was run as a batch process. the work shows the potential of reusing the yeast in a suitably designed process. metabolic engineering was defined by bailey in his seminal 1991 paper as "the improvement of cellular activities by manipulation of enzymatic, transport, and regulatory functions of the cell with the use of recombinant dna technology". the manipulation of these functions ultimately results in the manipulation of metabolism, which is the purpose of many biotechnological processes. metabolic engineering has sought its methods and tools in mathematics and the physical sciences, and later in information technology (it), leading to the proliferation of bioinformatics.
in this work we propose a novel approach to metabolic engineering that regards it as a business process re-engineering (bpr) endeavour. hammer and champy, in their celebrated 1993 book, define bpr as "the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service and speed". our thesis is that metabolic engineering, with its goal of re-engineering the metabolism of the microorganism, is equivalent to business process re-engineering in business and management. indeed, this is essentially what metabolic engineering does to the cell through the use of recombinant dna technology, which can be viewed as a radical redesign of the metabolic process. after all, it causes changes that cannot be attained otherwise, and whose purpose is to achieve dramatic improvements in cellular activities, such as increases in the production of some metabolites by orders of magnitude. the cost incurred by the process, which is metabolism in this case, can be, for example, energy requirements or a change to a cheaper substrate. in this study we elucidate this parallelism between the two, with emphasis on the modelling of metabolism and how the concepts of business process modelling can be applied to it. the purpose is not to produce a model, but rather to introduce the modelling methodology, demonstrate its utilisation and benefits, and outline its limitations and challenges. we believe that the novelty of our work lies in applying a new paradigm to metabolic engineering that has not been considered previously. thermophilic ethanol production from wheat straw hydrolysate in continuous culture tania i. georgieva, birgitte k. ahring, biocentrum-dtu, technical university of denmark, building 227, dk-2800 lyngby, denmark. e-mail: tig@biocentrum.dtu.dk (t.
georgieva) ethanol production from lignocellulosic biomass has attracted widespread attention as an unlimited, low-cost renewable source of transportation fuel, in view of increasing petroleum use and greenhouse gas emissions. wheat straw, available as an agricultural residue, has been considered a potential lignocellulosic substrate for industrial bioethanol production. a major technical obstacle to commercializing bioethanol production from lignocellulose (such as wheat straw) is the lack of a microorganism able to ferment both hexose and pentose sugars rapidly and efficiently into ethanol and to tolerate the inhibitors present in undetoxified hydrolysates. the currently used industrial mesophilic microorganisms (saccharomyces cerevisiae and zymomonas mobilis) are excellent ethanol producers from glucose; however, they are not able to ferment other sugars such as xylose, the second most abundant sugar in lignocellulose. thermophilic anaerobic bacteria have been considered for ethanol production from lignocellulosic biomass as an alternative to mesophilic ethanol-producing strains, predominantly because of their natural ability to ferment the whole diversity of sugars found in lignocellulosic biomass. increased attention to thermophiles for ethanol production has also arisen from a broad spectrum of advantages for industrial-scale ethanol fermentation, such as high growth and metabolic rates, low oxygen solubility, reduced risk of reactor contamination, and cost savings in mixing, cooling and product recovery. in addition, simultaneous co-fermentation of glucose and xylose in a single operation unit could substantially reduce the ethanol production cost.
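the yield ceiling implicit in such comparisons is the stoichiometric maximum of the ethanol fermentation (glucose → 2 ethanol + 2 co2, and analogously 3 xylose → 5 ethanol + 5 co2), a back-of-envelope check:

```python
# theoretical ethanol yields on sugar, the stoichiometric ceiling against
# which experimental fermentation yields (e.g. 0.45 g/g) are judged.
MW_GLC, MW_XYL, MW_ETOH = 180.16, 150.13, 46.07   # g/mol

y_glucose = 2 * MW_ETOH / MW_GLC          # glucose -> 2 ethanol + 2 co2
y_xylose = 5 * MW_ETOH / (3 * MW_XYL)     # 3 xylose -> 5 ethanol + 5 co2
print(round(y_glucose, 3), round(y_xylose, 3))   # ~0.511 g/g for both
```

the two ceilings coincide at about 0.511 g ethanol per g sugar, which is why mixed glucose/xylose yields are usually reported against a single theoretical maximum.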
this research studied the potential of using a thermophilic anaerobic bacterium for continuous ethanol fermentation of lignocellulosic biomass, with particular emphasis on the effectiveness of our strain in fermenting undetoxified wet-oxidized wheat straw hydrolysate with respect to sugar (glucose and xylose) conversion and ethanol yield. the experiment was carried out in a lab-scale reactor operated at 70 °C with wheat straw hydrolysate as substrate in concentrations from 20 to 80 wt.%, equivalent to a total sugar mixture of 11-38 g/l. the wheat straw hydrolysate (woh) [200 g/l wheat straw, 92.4% dry matter (dm)] was prepared using a wet oxidation pretreatment process followed by enzymatic saccharification with a commercial enzyme mixture of celluclast 1.5l and novozym 188 (novozymes, denmark). both xylose and glucose were simultaneously converted to ethanol. the sugar utilization was higher than 90%, and high ethanol yields were achieved. the reactor showed good long-term performance (124 days) in terms of operational stability and freedom from contamination. maltotriose is the second most abundant fermentable sugar in wort, and due to incomplete fermentation, residual maltotriose in beer may cause problems in the brewing industry. to study genes that might improve the utilization of maltotriose, we used a library with dna from brewer's strains and a laboratory strain and identified a new transporter encoded by mtt1. mtt1 gave lager strain a15 the ability to grow on yp/2% maltotriose in the presence of 3 mg/l of the respiratory inhibitor antimycin a. this transporter gene shares 74% similarity with mph2 and mph3, 62% similarity with agt1 and 91% similarity with mal61 and mal31. moreover, mtt1 shares even higher similarity (98%) with the s. pastorianus mty1 gene (m. salema-oom, unpublished, ncbi accession number aj491328).
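the percent similarities quoted above come from proper pairwise sequence alignment; as a rough, purely illustrative stand-in, a quick similarity ratio on short invented dna strings (real gene comparisons would use blast or a needleman-wunsch alignment):

```python
# rough sketch of pairwise sequence similarity; the sequences below are
# invented toy strings, NOT fragments of mtt1 or its homologues, and
# difflib's ratio only approximates a true alignment-based identity.
from difflib import SequenceMatcher

def percent_similarity(a, b):
    """2 * matched characters / total length, as a percentage."""
    return 100.0 * SequenceMatcher(None, a, b).ratio()

s1 = "ATGGCTACCGATTTGGCA"
s2 = "ATGGCAACCGAGTTGGCA"   # two substitutions relative to s1
print(round(percent_similarity(s1, s2), 1))
```

for transporter genes several kilobases long, gap handling matters, so published percentages like the 74-98% figures above should always be reproduced with a real alignment tool rather than a string ratio.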
purified radiolabeled maltotriose and radiolabeled maltose were used to study sugar uptake by lager strains a15 and ws34/70, and by a15 containing mtt1 or mtt1alt, a more efficient, altered version of this gene lacking 66 base pairs from the 3' end and containing 57 base pairs of vector sequence. these transport studies show that mtt1 and, especially, mtt1alt encode maltose transporters with relatively high activity towards maltotriose compared to maltose. this study is part of a multi-disciplinary project, funded by the european union (contract no. qlk1-ct-2001-01066), focusing on the development of high-gravity-resistant brewer's yeast strains. metabolic engineering of the l-phenylalanine pathway in bacillus subtilis yasemin demirci 1 , pınar çalık 2 , güzide çalık 1 , tunçer h. özdamar 1 : 1 bre laboratory, department of chemical engineering, ankara university, 06100 ankara, turkey; 2 iblab, department of chemical engineering, metu, 06531 ankara, turkey. e-mail: ozdamar@eng.ankara.edu.tr (t.h. özdamar) the metabolic engineering design of a recombinant l-phenylalanine (phe) production system is based on coordinated overexpression of the flux-controlling genes in the aromatic amino acid pathway. based on the insights gained from work carried out in our laboratories (özçelik et al., 2004), aroh, for reaction r96 at the chorismate branch point that connects the preceding reactions of the aromatic amino acid pathway to the reactions leading towards phe, was predicted as the first metabolic engineering site, and aroa, for dahp synthase (r89), as the second. the aroa gene was cloned next to aroh using four primers designed for pcr-based gene splicing by the overlap extension method. the genes were amplified separately by pcr. the primers used at the ends to be joined were designed to be complementary to one another by including nucleotides at their 5' ends that are complementary to the 3' portion of the other primer.
these products were mixed in the next pcr reaction, where one strand from each fragment contains the overlap sequence at its 3′ end. extension of this overlap by dna polymerase yielded the recombinant two-gene product, which then served as template in subsequent reaction cycles to increase its concentration in the micro-reactors. the two-gene fragment was first cloned into puc19, and then sub-cloned into the pmk4 e. coli-bacillus shuttle plasmid. the new expression vector, with high stability and high copy number, was obtained and transferred into the host b. subtilis. the metabolic flux distributions were calculated with a mass-balance-based stoichiometric model of the metabolic reaction network for r-b. subtilis, using the time profiles of substrate, dry-cell, phe, other amino acid, and organic acid concentrations as constraints. on the basis of the calculated intracellular fluxes of recombinant b. subtilis carrying pmk4::aroa::aroh, an in-depth analysis of the metabolic engineering design will be presented. özçelik, i̇., çalık, p., çalık, g., özdamar, t.h., 2004. metabolic engineering of aromatic group amino acid pathway in bacillus subtilis for l-phenylalanine production. chem. eng. sci. 59 (22-23), 5019-5026. a 100% respiratory-deficient nuclear petite amylolytic saccharomyces cerevisiae npb-g strain capable of excreting a hybrid protein possessing both α-amylase and glucoamylase activities was generated, and its use for direct fermentation of starch into ethanol was investigated under both shake-flask and controlled bioreactor cultivation conditions. when compared with a standard host strain, higher ethanol concentrations and yields were achieved with the nuclear petite strain under both cultivation conditions. further improvement in ethanol production was achieved by the use of an initial glucose supplement. 
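the mass-balance-based stoichiometric flux calculation described above rests on solving linear pseudo-steady-state balances, with measured exchange rates as constraints. a toy branch-point sketch under assumed rates (this is not the actual b. subtilis network; reaction names and numbers are invented for illustration):

```python
# toy network (hypothetical, not the B. subtilis model):
#   v1: -> A,   v2: A -> B,   v3: A -> C
# pseudo-steady-state balances:
#   A:  v1 - v2 - v3 = 0
#   B:  v2 - r_B = 0       (r_B = measured excretion rate of B)

def solve_branch_fluxes(v_uptake, r_b_excretion):
    """propagate measured exchange rates through the linear balances to
    obtain the two unknown intracellular branch fluxes."""
    v2 = r_b_excretion        # balance on B
    v3 = v_uptake - v2        # balance on A
    return v2, v3

# hypothetical rates in mmol/gDW/h:
v2, v3 = solve_branch_fluxes(2.0, 0.5)
```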
comparison of the ethanol fermentation performances of the respiratory-deficient npb-g and the parental respiratory-sufficient wtpb-g strain showed an increase of ca. 48% in both ethanol production yields and ethanol productivities with the respiratory-deficient strain. response surface methodology (rsm) was used as a statistical tool to optimize the initial yeast extract and starch contents of the medium, which resulted in a substantial increase in the stability of the expression plasmid in both strains with a concomitant improvement in their amylolytic potential. the high ethanol yields on substrate of the bioreactor cultures, which were very close to the theoretical yield, indicated that the amylolytic respiratory-deficient strain developed in this study was very effective in the direct fermentation of starch into ethanol. establishing a biotechnology educational framework to support a knowledge-based economy in puerto rico rosa buxeda, lorenzo saliceti-piazza industrial biotechnology program, university of puerto rico, mayagüez campus, mayagüez 00680, puerto rico. e-mail: rbuxeda@uprm.edu (r. buxeda) industrial biotechnology has been identified as a major thrust area of economic development within the past five years for the island of puerto rico. the portfolio of biotechnology manufacturing investments in the island has passed the two-billion-dollar mark. world-renowned companies like amgen, abbott and eli lilly lead these investments, which have catalyzed a strong technology transfer to the island. a strong collaboration between industry and academia was needed to provide a well-trained workforce for these company startups. as a result, the university of puerto rico, mayagüez campus (upr-m) developed four initiatives which are part of the educational pipeline in biotechnology. 
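the response-surface optimisation mentioned above works by fitting a low-order polynomial to response measurements and taking the stationary point as the predicted optimum. a one-factor sketch with made-up data (the study itself optimised two factors, yeast extract and starch, which would use a full second-order model):

```python
# sketch of the idea behind response-surface optimisation: fit a quadratic
# through response measurements and locate its stationary point.
# one factor only, with invented data, for illustration.

def quad_through(p0, p1, p2):
    """exact quadratic y = a*x^2 + b*x + c through three (x, y) points,
    built from Newton divided differences."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

def optimum(a, b):
    """stationary point of the fitted quadratic (a maximum when a < 0)."""
    return -b / (2 * a)

# hypothetical plasmid-stability response (%) vs. yeast-extract level (g/L):
a, b, c = quad_through((2, 70.0), (6, 90.0), (10, 74.0))
x_opt = optimum(a, b)
```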
these are: (i) an industrial biotechnology program, a 5-year bs degree which contains a novel curriculum with courses from science and engineering, with undergraduate research and industrial internships as part of the degree requirements; (ii) an industrial biotechnology learning center, which provides customized biotechnology and bioprocessing training modules, including lectures and hands-on experiences, to train and develop the workforce needed in the biotechnology manufacturing plants; (iii) a biotechnology summer camp, which addresses the high school student population and whose main purpose is to educate and advise on the different career paths that can be followed in the field of biotechnology; and (iv) a biotechnology center for research and training in bioprocessing, which will address the development of corporate-sponsored research projects to strengthen links between industry and academia in order to build up a knowledge-based economy. our paper will describe each initiative in detail as well as its outcomes and impact on puerto rico's knowledge-based economy goals. isolation of acid phosphatase from sweet potato and immobilization using different adsorbents d. omay, y. güvenilir, n. deveci istanbul technical university, department of chemical engineering, maslak 34769, istanbul, turkey. phosphatase enzymes occur in a wide range of plant and animal tissues. they catalyze the hydrolysis of phosphate bonds in organic phosphates, between the phosphate group and the rest of the molecule. immobilization of enzymes and biological compounds is currently gaining importance due to its wide variety of applications in the food and pharmaceutical industries and also its biomedical applications. it has been reported that enzymes can be activated by complexation with polysaccharides such as chitin or chitosan. the aim of this experimental study was the partial purification and isolation of acid phosphatase and its immobilization. 
the purification was realized by applying centrifugation, ammonium sulfate precipitation and dialysis, respectively. the specific activity of the supernatant was 0.1 u/mg, and after 80% saturation this value increased to 0.64 u/mg. furthermore, immobilization of acid phosphatase was investigated using different adsorbents (chitin, chitosan, synthetic zeolite and raw zeolite), and the storage stability and re-usability of the immobilized acid phosphatase were evaluated. acid phosphatase activity was retained at 94, 96, 99, and 92% using raw zeolite, synthetic zeolite, chitin and chitosan, respectively, under 12 h operation conditions. øyvind m. jakobsen 1,2 , michael c. flickinger 3 , svein valla 1 , trond e. ellingsen 2 , trygve brautaset 1 : 1 department of biotechnology, norwegian university of science and technology, norway; 2 sintef applied chemistry, sintef, norway; 3 biotechnology institute, department of biochemistry, molecular biology and biophysics, university of minnesota, usa. aerobic methylotrophs can utilize one-carbon (c1) compounds such as methane and methanol as a sole c-source for growth and energy. the majority of research on these organisms has focused on their biochemical novelty and commercial viability. for the industrial production of bulk products such as the amino acids lysine and glutamate, raw material costs and abundance are important, and c1 sources are thus attractive compared to sugars. bacillus methanolicus, which is thermotolerant and methylotrophic, can secrete up to 55 g/l of glutamate upon growth on methanol at 50 °c, and mutants producing 35-40 g/l of l-lysine have been selected. we study the genetics of the conversion of methanol into biosynthesis of glutamate and lysine, and in the present report we unravel the regulation of genes important for the consumption of, and tolerance level for, c1 compounds by b. methanolicus. b. 
methanolicus has a methanol dehydrogenase gene (mdh) for oxidation of methanol into formaldehyde and a ribulose monophosphate (rump) pathway for assimilation of formaldehyde. we recently discovered that mdh and five rump genes are carried by the natural plasmid pbm19 in this bacterium, representing the first documentation of plasmid-dependent methylotrophy in any microorganism. here we use real-time pcr to analyse the regulation of plasmid- and chromosomally located rump genes in cells under methylotrophic and non-methylotrophic growth. high methanol concentrations in the growth medium are toxic to the cells, and the mechanisms underlying this sensitivity of b. methanolicus are poorly understood. our results indicate that plasmid pbm19 plays a fundamental role in this trait as well, and the impact of our results on the biotechnological applications of this bacterium is discussed. department, national research centre, dokki, cairo, egypt. addition of a proteolytic enzyme extract from jack fruit (artocarpus integrifolis), in combination with the fermentation process, was tried in low-fat yogurt manufacture to improve yogurt flavour and rheological properties. experimental yogurt milks contained no added proteinase (control), or 3.9 (t1), 7.8 (t2) and 11.7 (t3) units/ml milk of crude plant proteinase extract. the ph of the product treated with crude proteinase was lower than that of the control. however, the rate of acidity development during storage slightly increased with increasing levels of crude proteinase and progress of the storage period of the yogurt. the proteolytic activity of all yogurts gradually increased until the end of the storage period (15 days). yogurts made from milk treated with crude proteinase preparations were less firm than the control at all storage periods, with t3 the least firm after 15 days of storage at 20.17 g/100 g. generally, increasing the units of plant proteinase preparations decreased the firmness. 
on the other hand, yogurt made from milk pretreated with plant proteinase had higher syneresis and apparent viscosity than the untreated product. the greatest viscosities were found in t2 and t3, at 433 and 479 mpa s respectively, compared with 299 mpa s for the control at 15 days of storage. the results indicated an inverse relationship between the amount of crude proteinase preparation and the susceptibility of yogurt to syneresis. t2 gained the highest scores (85 points), followed by the control (81.5 points), after 15 days of storage, while the yogurt of t3 showed a lower score of 75. from the foregoing results, it is recommended to use jack fruit (artocarpus integrifolis) as a source of plant proteinases and to utilize it to develop a high-quality yogurt at a level of 7.8 units of plant proteinases/ml milk. the penicillium chrysogenum oat1 gene, encoding a class iii omega-aminotransferase, has been cloned and characterized. this enzyme, which converts lysine into 2-aminoadipic semialdehyde, is important in providing 2-aminoadipic acid, a precursor of penicillin and other β-lactam antibiotics. the enzyme is related to ornithine-5-aminotransferases and to the lysine-6-aminotransferases encoded by the lat gene located in the bacterial cephamycin gene clusters. expression of oat1 is induced by lysine, ornithine and arginine and repressed by ammonium ions. area-binding consensus sequences and an 8-bp direct repeat associated with arginine induction in emericella (aspergillus) nidulans have been found in the oat1 promoter region. deletion of the oat1 gene resulted in the loss of omega-aminotransferase activity. the deletion mutants were unable to grow on ornithine or arginine as sole nitrogen sources and showed reduced growth on lysine. complementation of the deleted mutant with the oat1 gene restored growth on ornithine, arginine and lysine, and omega-aminotransferase activity, to the levels of the parental strain. 
the strong expression of the oat1 gene after induction by the basic amino acids may provide additional 2-aminoadipic acid for the formation of the 2-aminoadipyl-cysteinyl-valine tripeptide for β-lactam biosynthesis. morphological characterisation of two high-producing strains of penicillium chrysogenum carrying a disruption in the nadph-dependent glutamate dehydrogenase k. rueksomtawin, j. thykaer, h. noorman, j. nielsen center for microbial biotechnology, biocentrum-dtu, technical university of denmark, dk-2800 lyngby, denmark. e-mail: kr@biocentrum.dtu.dk (k. rueksomtawin) metabolic engineering has proven to be useful in the optimisation of β-lactam production, e.g. in constructing superior strains with multiple copies of the β-lactam gene cluster. it is, however, of equal importance to gain insight into other aspects of the metabolism to establish a general overview in order to apply metabolic engineering for further improvement of the production strains. in that context, the redox metabolism is essential, as it functions as a tightly controlled connection between the different parts of the metabolism. in order to investigate this role of the redox metabolism in more detail, the gdha gene, encoding the nadph-dependent glutamate dehydrogenase, was disrupted in two industrial strains of penicillium chrysogenum. during physiological characterisation of the two strains it became apparent that considerable changes in the morphology had occurred due to the genetic alteration. since the morphology is an important parameter in process optimisation, an examination of the morphology of the two strains was undertaken. in this work, the morphological differences between the gdha-disrupted strains and the reference strains were comprehensively investigated both during growth on solid media and during submerged growth in a flow-through growth chamber. 
with the advance of computerized image analysis techniques, the key morphological properties of the individual hyphal elements were quantified. in comparison to the reference strains, the disruption of the gdha gene resulted in a morphological change from short, hyperbranched hyphal elements to long, elongated hyphal elements with fewer branches. in parallel studies with aspergillus nidulans and its corresponding gdha-deleted strain, no difference in morphology was observed. polyketides (pk) represent one of the largest groups of natural products and are found in fungi, bacteria and plants. since many useful polyketides either originate from sources that are difficult or even impossible to cultivate or are produced in inadequate amounts, we are interested in expressing polyketide synthases (pkss) in heterologous hosts. saccharomyces cerevisiae, aspergillus niger and aspergillus nidulans were chosen as initial hosts, because the techniques necessary to cultivate and genetically manipulate these strains are well established. 6-methylsalicylic acid synthase (6-msas) was chosen as a model pks. replicative plasmids carrying the genes encoding 6-msas from penicillium patulum and phosphopantetheinyl transferase (pptase), respectively, were transformed into s. cerevisiae. in addition, an integrative vector was designed and the gene encoding 6-msas was integrated into the yeast chromosome. batch cultivations on galactose minimal media were performed. the results are presented, and in particular the effect of expression mode and type of pptase (bacterial versus fungal) is discussed. furthermore, the progress of the work on expressing 6-msas in a. niger and a. nidulans using an integrative vector system is presented and discussed. 
the valorisation of functionalized chemicals from biomass resources compared to the conventional fossil production route ben brehmer, wageningen ur agrotechnology & food sciences, workgroup: valorisation of plant production chains, p.o. box 17, 6700 aa wageningen, the netherlands. e-mail: ben.brehmer@wur.nl at some undisclosed point in the foreseeable future, cheap fossil fuel resources will become depleted and industry will be forced to pursue more difficult reserves with increasingly high extraction costs. most alternatives available and under research do not consider price as the main motivation for replacement, but focus solely on sustainability and environmental benefits. sustainability is an interesting word, as there are enough fossil resources scattered around the world to be sustainable in quantity but not sustainable in price. seeing that fossil fuels are derived from prehistoric biomass, it is not at all presumptuous to assume that every application and product can be replaced by the biomass of today. in fact, many highly specialised pharmaceutical chemicals already have a biomass origin. yet not all uses of fossil fuels need to be replaced by a comparable carbon-based source such as biomass. energy and transportation in particular do not necessarily need to rely on carbon cleavage, whereas practically all petrochemicals contain a carbon backbone. the main stipulation in substituting fossil-based chemicals with bio-based chemicals is availability and cost. it is proposed that already today, by utilising existing, recently developed and developing technology, it is economically advantageous to derive many chemicals from biomass, in particular the functionalized chemicals. the only way to validate this conjecture is to develop a complete comparative life cycle analysis. as opposed to a traditional lca, the "multi-criterion" analysis developed here will revolve around energy flows and process efficiency in terms of exergy. 
the aim is to assess the optimum route with the best production options along the whole production chain while determining any possible limiting factors. using this tool, a systematic production matrix relating several logical source crops and a few key chemicals of varying derivative levels can be created and compared to the conventional fossil routes. combined with economic considerations and some unambiguous environmental factors, the investigation will provide all the information relevant to the industry. the goal is to create an objective and reliable simulation system validating the economic and environmental feasibility of exploiting bio-based chemicals today and indicating the steps necessary for further improvement. biosynthesis of a multi-enzymatic preparation from aspergillus niger ibt-90 useful in textile fabric treatment rita pyc 1 , jadwiga sojka-ledakowicz 2 , tadeusz antczak 1 , joanna lichawska 2 : 1 institute of technical biochemistry, the technical university of lodz, lodz 90-924, poland; 2 textile research institute, lodz 92-103, poland among the many methods of producing enzymatic preparations, i.e. liquid surface fermentation (lsf), submerged fermentation (smf) or solid state fermentation (ssf), the last is most advantageous. cultivation in solid state means fermentation on a matrix formed by industrial and agricultural wastes. most often filamentous fungi, owing to their tolerance of the low available water in the substrate, are the efficient, competitive microorganisms applied in solid-state bioconversion. the aim of the research carried out at the institute of technical biochemistry of the technical university of lodz and at the textile research institute, lodz, was to define optimal conditions for biosynthesis and to test the possibility of applying the multi-enzymatic preparation from aspergillus niger ibt-90 in the treatment of woven fabrics made of natural cellulose fibres. 
as the result of the biosynthesis optimization process, malt sprouts, wheat bran and beet pulp were selected as the best media to obtain enzymes of high pectinolytic activity while maintaining high activity of cellulolytic enzymes and xylanase. the highest obtained activities reached 2000 0 pm for total pectinolytic activity, 7 j/ml for endoglucanase and 527 j/ml for endoxylanase, and they were 2.4-3.0 times higher than those achieved before the optimization process. the research performed demonstrated that the optimum activity of the applied enzymatic system is obtained in the range of ph 4.6-4.8. woven fabrics made of flax and cotton fibre blends, subjected to bio-pretreatment, were evaluated with reference to their sorption properties. comparative evaluation of liquid sorption by the woven fabrics revealed an efficient enhancement of the fibres' sorption capabilities after pretreatment using the enzyme system from aspergillus niger ibt-90. this offers the possibility of substituting bio-treatment for the alkali scouring of linen-cotton woven fabrics before bleaching. the phylum actinobacter includes many bacteria of industrial importance for the accumulation of both primary and secondary metabolites. both primary and secondary metabolites are dependent on precursors and cofactors that are provided by the central carbon metabolism of microorganisms. there are alternative pathways for catabolism of carbon, either via the embden-meyerhof-parnas (emp) and pentose phosphate (pp) pathways or through the entner-doudoroff (ed) pathway. the emp pathway is energetically more favorable and has therefore been presumed to be the dominating route for carbon metabolism in bacteria producing secondary metabolites. however, primary metabolism is poorly studied for most actinobacter species, as the focus has traditionally been on secondary metabolism. 
with the aim to gain more knowledge about the diversity of central carbon metabolism within the phylum actinobacter, 17 different strains were collected from various sources and strain collections. the strains were grown in minimal medium supplemented with a standard vitamin solution and [1-13c]glucose as carbon source. the 13c-labeling patterns of proteinogenic amino acids were determined by gc-ms analysis. through this method, the fluxes in the central carbon metabolism during balanced growth were estimated and the pathways for carbon metabolism were determined. in particular, the labeling patterns of alanine and valine were of interest, as they are derived from pyruvate and can therefore be used to distinguish whether glucose is metabolized through the ed pathway or the emp pathway. a member of the phylum actinobacter, amycolatopsis balhimycina produces the glycopeptide balhimycin. the balhimycin aglycone is identical to that of vancomycin, which is a commercial glycopeptide. as a. balhimycina appears to be accessible to genetic modification and the biosynthetic cluster responsible for balhimycin production is published, this genetically well-characterised academic strain can serve as a platform strain for the production of vancomycin analogues derived through combinatorial biosynthesis. an understanding of the physiology of this microorganism is essential for the efficient accumulation of potentially commercial secondary metabolites. the bacterium is capable of growth in a fully defined minimal medium, with the production of balhimycin. the strain was grown under either nitrogen or phosphate limitation, and balhimycin accumulation was followed under these conditions. flux analysis of balhimycin production by amycolatopsis balhimycina based on 13c labelling experiments was performed. 
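the [1-13c]glucose diagnostic described above works because the emp and ed pathways route glucose c1 to different carbons of pyruvate (and hence of alanine, which inherits pyruvate's skeleton). a sketch of the textbook carbon mappings:

```python
# why [1-13C]glucose distinguishes EMP from ED: trace glucose C1 through
# each pathway's carbon mapping into pyruvate. mappings are the textbook
# ones; in both pathways only half of the pyruvate pool carries the label,
# because the other triose (glucose C4-C6) is unlabeled.

def pyruvate_positions_labeled(pathway):
    """position(s) of pyruvate carrying the glucose C1 label
    (1 = carboxyl carbon, 3 = methyl carbon)."""
    if pathway == "EMP":
        # glucose C1-C2-C3 -> DHAP -> GAP inverts the triose,
        # so glucose C1 ends up as pyruvate C3 (methyl)
        return {3}
    if pathway == "ED":
        # KDPG aldolase releases glucose C1-C2-C3 directly as
        # pyruvate C1-C2-C3, so glucose C1 stays at pyruvate C1 (carboxyl)
        return {1}
    raise ValueError(pathway)
```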
the zygomycetes blakeslea trispora and phycomyces blakesleeanus accumulate beta-carotene, ubiquinone (coenzyme q), various different sterols, and other terpenoids, all of them produced via the mevalonate pathway. these fungi are used or could be used for the industrial production of these terpenoids, edible oil, chitosan, and various organic acids. by measuring the specific radioactivity of terpenoids made from radioactive mevalonate, leucine or acetate in the presence of excess glucose in wild types and mutant strains, we have concluded that these fungi have separate subcellular compartments for the production of carotene, sterols and triacylglycerols. the terpenoid moiety of ubiquinone is synthesized in the same compartment as ergosterol. these compartments contain separate pools of all their common metabolites, beginning from acetyl-coa. mevalonate carbon atoms do not find their way back to general metabolism, i.e., these fungi lack the "shunt" pathway. the compartments are regulated independently. the very large variations in carotene content caused by many environmental and genetic changes are not accompanied by variations in the ubiquinone content. the ubiquinone content increases when the cultures grow on leucine or acetate as carbon sources and is not affected by illumination. phycomyces, but not blakeslea, increases the production of ubiquinone in the presence of oligomycin. lincomycin, produced by streptomyces lincolnensis, is an important, clinically used antibiotic. its gene cluster consists of 27 putative open reading frames with biosynthetic or regulatory functions (lmb genes) and three resistance genes (lmra, lmrb, lmrc). the organization of the transcription units was determined. the analysis of the lincomycin biosynthetic gene transcripts at various cultivation stages revealed genes with putative regulatory functions which are transcribed in early stages of cultivation. 
previous analysis of the biosynthetic pathways of lincomycin and the functionally different anthramycin-type antibiotics (anthramycin, sibiromycin, tomaymycin, mazetharmycin and porothramycin) indicates that the genetic information for lincomycin and anthramycin biosynthesis should share common elements (genes), both biosynthetic and regulatory. hybridization experiments demonstrated the presence of several analogues of lmb genes involved in the biosynthesis of anthramycin, produced by streptomyces refuineus, and porothramycin, produced by streptomyces albus. effect of various calcium salts on erythromycin production by saccharopolyspora erythraea m. rostamza 1 , a. noohi 1 , j. hamedi 2* : 1 department of biology, faculty of science, science and research branch, islamic azad university, tehran, iran; 2 microbial biotechnology lab., department of biology, faculty of science, university of tehran, tehran, iran. e-mail: jhamedi@khayam.ut.ac.ir (j. hamedi) calcium carbonate has a positive effect on erythromycin production; however, because of its low water solubility, it clogs the spargers of fermenters and fouls the microfilters. in this research, various soluble calcium salts were added to the fermentation medium and their effects on the growth of saccharopolyspora erythraea and on erythromycin production were studied in a complex medium containing soybean meal, dextrin and starch as major ingredients. the fermentation conditions were 220 rpm for 8 days at 30 °c. the results obtained showed that there is no significant difference between erythromycin concentrations in the medium containing calcium lactate and that containing calcium carbonate. however, erythromycin concentrations in the media containing the other calcium salts were lower than in the calcium carbonate-containing medium. the optimum concentration of calcium lactate for erythromycin production was 10 g/l. lincosamides and their derivatives are clinically important antibiotics. 
comparison of the gene cluster coding for lincomycin biosynthesis and the newly identified cluster of genes for celesticetin biosynthesis revealed new information on the functions of several genes common to both biosynthetic pathways. the celesticetin gene cluster was identified by screening a cosmid library of streptomyces caelestis with heterologous probes based on lincomycin biosynthetic genes involved in the part of the biosynthesis shared by both antibiotics. sequence analysis of the cluster revealed 24 putative orfs, out of which 18 are analogues of lincomycin biosynthetic genes, four are specific for celesticetin biosynthesis and one codes for resistance. the gene cluster is flanked by transposase genes on both sides. in order to clarify the function of three putative regulatory genes of lincomycin biosynthesis, insertional inactivation with the pcr targeting system was performed in streptomyces lincolnensis and resulted in differently reduced production of lincomycin. larsen cmb biocentrum-dtu, technical university of denmark, 2800 kgs. lyngby, denmark we present a new algorithm called "x-hitting" for automatic identification of novel bioactive compounds based on the full spectroscopic characters of highly complex mixtures of natural products. one of the most dramatic advances in recent drug discovery has been the increase in screening throughput and data handling. as a consequence, analysis has become the bottleneck in the drug discovery process. the algorithm presented here is investigated and demonstrated on identifying potentially new bioactive compounds. in addition, the method is shown to have a high performance for automatic identification of known structures. these tasks are referred to as new-hitting and cross-hitting, respectively. finally, the receiver operating characteristic (roc) is introduced to the research field as an important tool for evaluating the performance of the "compound predictor". 
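the roc evaluation mentioned above reduces to ranking compounds by predictor score and measuring how often true hits outrank non-hits; the area under the roc curve equals that probability. a minimal sketch with made-up scores and labels (not data from the study):

```python
# minimal ROC-AUC via the rank-sum (Mann-Whitney) formulation.
# scores and labels are invented for illustration; assumes distinct scores
# (tie handling via average ranks is omitted for brevity).

def roc_auc(scores, labels):
    """area under the ROC curve for 0/1 labels: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    pairs = sorted(zip(scores, labels))  # ascending by score
    rank_sum = sum(rank for rank, (_, lab) in enumerate(pairs, start=1) if lab)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
```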
through examples it is shown that known cross-hits are identified with high proficiency, and that new-hitting works for finding new targets represented by analogues and structurally new compounds. a gene encoding a γ-butyrolactone autoregulator receptor, a family of proteins that act as dna-binding transcriptional repressors controlling secondary metabolism and/or morphological differentiation in streptomyces, was cloned from a natamycin producer, streptomyces natalensis, and its function was evaluated in vivo. pcr using primers designed from two highly conserved regions of streptomyces autoregulator receptors gave a 102-bp band, the sequence of which revealed high similarity to the expected region of a receptor gene. by genomic southern hybridization with the 102-bp insert as a probe, a 687-bp intact receptor gene (sngr) was obtained from s. natalensis. to clarify the function of sngr in vivo, a sngr-disrupted strain was constructed, and its phenotype was compared with that of the wild-type strain. the sngr disruptant started natamycin production 6 h earlier and showed 4.6-fold-higher production of natamycin than the wild-type strain. in addition, sporulation was earlier and 10-fold more abundant. the phenotype indicates that the autoregulator receptor protein of s. natalensis acts as a primary negative regulator of the biosynthesis of natamycin and is also involved in the regulation of sporulation. exploring the biocatalytic potential of the novel thermostable baeyer-villiger monooxygenase: phenylacetone monooxygenase daniel e. torres pazmiño 1 , gonzalo de gonzalo 2 , gianluca ottolina 2 , giacomo carrea 2 , dick b. janssen 2 , marco w. fraaije 1 , 1 biochemical laboratory, groningen biomolecular sciences and biotechnology institute, university of groningen. baeyer-villiger monooxygenases (bvmos) represent useful biocatalytic tools as they can catalyze reactions which are difficult to achieve using chemical means. 
however, only a limited number of these atypical monooxygenases are available in recombinant form. using a recently described protein sequence motif, a putative bvmo was identified in the genome of the thermophilic actinomycete thermobifida fusca. the nadph-dependent and fad-containing monooxygenase is active with a wide range of aromatic and aliphatic ketones and sulfides. genetic and kinetic data suggest that phenylacetone is the physiological substrate of the enzyme. previously, it was reported that this bvmo exhibits only a moderate enantioselectivity with (r,s)-α-methylphenylacetone. in this poster we will show an overview of the biocatalytic potential of the enzyme as explored so far. interestingly, the enzyme has been found to perform highly enantioselective oxidations with a range of ketones and sulfides. this again indicates that this novel thermostable oxidative biocatalyst can be a useful tool for the synthesis of chiral building blocks. the enzyme s-adenosylhomocysteine hydrolase (adohcyase) catalyzes the hydrolysis of s-adenosylhomocysteine (adohcy), an inhibitor of transmethylation reactions, into adenosine (ado) and homocysteine (hcy). the catalysed reaction is reversible, and the equilibrium is strongly displaced in the direction of the synthesis of adohcy when the reaction occurs in vitro. its biotechnological application therefore lies in the synthesis of antivirals. among different microorganisms producing the enzyme, we selected the gram-positive bacterium corynebacterium glutamicum. after designing specific oligonucleotides, the gene was expressed in escherichia coli using the pet28a+ expression system with iptg induction. electrophoretic analysis under denaturing conditions shows a clear induction and over-expression of a protein with a mw of 49 kda. on the other hand, immobilization of this recombinant enzyme on a solid support allows it to be used as a catalyst for the synthesis of adohcy. 
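the enantioselectivity statements above are commonly quantified as the enantiomeric ratio e. for an irreversible kinetic resolution of a racemate such as (r,s)-α-methylphenylacetone, e can be estimated from the conversion and the enantiomeric excess of the remaining substrate via the chen et al. relation; the numbers below are illustrative, not the enzyme's measured values:

```python
import math

# estimate the enantiomeric ratio E of an irreversible kinetic resolution
# from conversion c and substrate ee (Chen et al. relation).
# inputs here are made-up illustrative values.

def e_value(conversion, ee_substrate):
    """E = ln[(1-c)(1-ee_s)] / ln[(1-c)(1+ee_s)]."""
    c, ee = conversion, ee_substrate
    return math.log((1 - c) * (1 - ee)) / math.log((1 - c) * (1 + ee))

# e.g. 50% conversion with 90% ee of the remaining substrate:
E = e_value(0.50, 0.90)
```

a non-selective enzyme gives e = 1 (e.g. `e_value(0.5, 0.0)`), while highly enantioselective biocatalysts reach e values in the tens to hundreds.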
the enzyme was purified by imac thanks to the presence of an n-terminal 6×his tag, and immobilized on eupergit c for the optimization of the production of adohcy, a product of high value. sequencing, cloning, expression, purification and characterization of a novel cytochrome p450 monooxygenase from rhodococcus ruber luo liu, rolf d. schmid, vlada b. urlacher institute of technical biochemistry, stuttgart university, allmandring 31, 70569 stuttgart, germany. e-mail: itbvur@itb.unistuttgart.de (l. liu) the cytochrome p450 monooxygenases are heme-containing proteins, which catalyze a wide range of oxidative reactions (werck-reichhart and feyereisen, 2000). a monooxygenation activity was observed for the strain rhodococcus ruber dsm 44319. a p450-like gene fragment was amplified by pcr using degenerate primers. for identification of the regions that flank this p450-like dna fragment, the method "directional genome walking using pcr" was applied (mishra et al., 2002). the full-size gene encoding a cytochrome p450 enzyme was amplified by pcr from genomic dna and cloned into the vector pet28a(+) for heterologous expression in escherichia coli bl21(de3) cells. the enzyme was purified using metal affinity chromatography. the primary protein structure suggests that this enzyme is a natural self-sufficient fusion protein consisting of a ferredoxin, a reductase and a p450 monooxygenase. the reductase activity was determined using an exogenous electron acceptor, cytochrome c. the reductase domain of this p450 monooxygenase showed a strong preference for nadph over nadh. the substrate spectrum was investigated. in the presence of nadph the p450 enzyme shows hydroxylation activity towards 7-ethoxycoumarin, naphthalene, indene, acenaphthene, toluene and fluorene. mishra, r.n., singla-pareek, s.l., nair, s., sopory, s.k., reddy, m.k., 2002. directional genome walking using pcr. biotechniques 33, 830-834. werck-reichhart, d., feyereisen, r., 2000.
cytochromes p450: a success story. genome biol. 1 (6), 3003.1-3003.9. selective production of monoglyceride consisting of conjugated linoleic acid by penicillium lipase yomi watanabe 1,2 , yoshie yamauchi-sato 3 , toshihiro nagao 1 , satoshi negishi 3 , tadamasa terai 4 , takashi kobayashi 1 , rolf d. schmid 2 , yuji shimada 1 : 1 osaka municipal technical research institute, osaka, japan; 2 stuttgart university, stuttgart, germany; 3 the nisshin oillio group, ltd, yokosuka, japan; 4 osaka institute of technology, osaka, japan conjugated linoleic acid (cla) is a group of c18 fatty acids (fa) containing two conjugated double bonds. it is expected to prevent cancer, obesity, atherosclerosis, etc., and is commercially available primarily in the form of free fa, containing almost equal amounts of 9cis,11trans- and 10trans,12cis-cla. it is therefore desirable to convert it to a more palatable form. for this purpose, we have previously proposed two enzymatic routes to convert cla to monoglyceride, an emulsifier, with penicillium camembertii lipase: (1) sequential esterification-glycerolysis, and (2) esterification at low temperature. these methods, however, are time- and energy-consuming. esterification of cla with glycerol under ambient pressure by the lipase produces equal amounts of mono- and diglycerides. in contrast, it was newly found that the reaction under reduced pressure suppressed the formation of diglycerides and achieved 90% monoglyceride at 95% esterification. improving the thermal stability of cellobiohydrolase i (cel7a) from t.
reesei by site-directed evolution frits goedegebuur 1 , lydia dankmeyer 1 , peter gualfetti 2 , brad kelemen 2 , edmundo larenas 2 , paulien neefe 1 , pauline teunissen 1 , colin mitchinson 2 : 1 genencor international bv, archimedesweg 30, 2333cn leiden, the netherlands; 2 genencor international inc., 925 page mill road, palo alto, ca 94304, usa genencor international has been working to produce improved enzyme products for economic conversion of ligno-cellulosic biomass to fermentable sugars. most of this work was performed under a subcontract with the u.s. department of energy for cellulase cost reduction for biomass conversion. cellulolytic biomass conversion is performed in nature by a complex mixture of enzymes. cellobiohydrolases play a key role, and all effective cellulase mixtures contain a large excess of cellobiohydrolases over endoglucanases, suggesting that it is the exoglucanase activity that is limiting. the fundamental dependence of reaction rate on temperature predicts that large increases in performance, and decreased enzyme cost, would be achieved if the enzymatic conversion could be operated at elevated temperatures. industrial strains of trichoderma reesei produce cellulases at very high levels and low cost. however, t. reesei cbh1 (hypocrea jecorina cel7a) does not have sufficient stability to survive and perform at high temperatures. this poster shows the thermal stability improvement of t. reesei cbh1 by site-directed evolution. sites with increased thermal stability were combined and evolved into high-temperature-stable cbh1 variants. the evaluation of lipases as biocatalysts for organic chemistry can be carried out, at laboratory scale, by using soluble enzymes for biotransformations in aqueous media. however, the industrial exploitation of such an enormous potential requires a suitable protocol for immobilization of lipases.
the binding of lipases to suitable pre-existing supports should greatly improve the performance of industrial reactors, allowing continuous use or re-use of such interesting biocatalysts. in addition, lipases, like most enzymes, are not perfect chemical catalysts. lipases may be unstable, and they may not have the optimal activity nor the optimal enantio- or regioselectivity. in this way, immobilization of lipases, together with its relevance for the performance of each different industrial reactor, could also be used as a tool to improve and optimize some of these parameters. that is, immobilization of lipases, far from being an already solved problem, constitutes an exciting field of research in the promising area of industrial bio-organic chemistry. in this work we would like to present useful immobilization methods for several lipases. the immobilised lipases were used for enzymatic hydrolysis of peptidomimetics of structure a. the type of immobilization used could change the enantioselectivity of the prepared biocatalyst. the absolute configuration of products b and c obtained in the enzymatic reactions was assigned by chemical correlation. two-component flavin-dependent monooxygenases form an interesting class of flavoenzymes. they consist of two separate proteins: a monooxygenase component, which catalyses an oxygenation reaction in the presence of reduced flavin, and a flavin-reducing component, which reduces flavin (fad or fmn) using nad(p)h as an electron donor. a well-known example of this class of monooxygenases is styrene monooxygenase (otto et al., 2004). due to their ability to form enantiopure epoxides, which are relevant building blocks for the pharmaceutical industry, styrene monooxygenases form a valuable class of enzymes for biocatalysis. while screening a metagenomic library for oxidative enzymes, an indigo-producing clone was found.
sequencing the particular clone revealed an inserted fragment of environmental dna encoding a two-component monooxygenase. many investigations over recent years have been directed to the production of natural aroma compounds. through biotransformation and bioconversion, aroma compounds considered "natural" can be produced starting from monoterpenes, generating high-value products such as rose oxide. rose oxide is found in small amounts in some essential oils such as bulgarian rose oil and geranium oil. (−)-rose oxide is an impact flavor compound with a very low odour threshold of 0.5 ppb. the application of agro-industrial residues in bioprocesses on the one hand provides alternative substrates, and on the other hand helps solve the pollution problems that might be caused by the disposal of these residues in nature. liquid cassava waste originates from the pressing of cassava roots. it is considered a harmful pollutant due to its high organic load and the presence of cyanide; on the other hand, it is rich in nutrients that can be used in other applications. it was found that sporulated surface cultures of penicillium sp. were able to convert citronellol into cis- and trans-rose oxides. other bioproducts were 3,7-dimethyl-5-octen-1,7-diol and 3,7-dimethyl-6,7-epoxy-1-octanol. no chemical oxidation or auto-oxidation products were detected in liquid control broths. the experiments were conducted at 30 °c and 160 rpm. when the medium was cassava waste, the production of rose oxide, 3,7-dimethyl-5-octen-1,7-diol and 3,7-dimethyl-6,7-epoxy-1-octanol was insignificant, reaching only trace amounts. but when the mycelium was developed in cassava medium and then transferred to mineral medium (citronellol as c-source), the concentrations of rose oxide increased dramatically, reaching 70 mg/l for the cis-isomer and 30 mg/l for the trans-isomer. a mechanistic mathematical model of the enzymatic degradation of avicel and phosphoric acid swollen cellulose (pasc) has been proposed.
the model is based on the degree of polymerization (dp) of the starting substrate and follows its decline with time to the final end product, glucose. three enzyme classes, namely endoglucanase (eg), cellobiohydrolase (cbh) and β-glucosidase (bg), are all individually incorporated in the model. the model additionally takes into account the cooperative action of the involved enzymes, as well as the effects of enzyme inhibition by the end products cellobiose and glucose. to be able to describe the complex process of enzymatic hydrolysis with a set of differential equations, certain assumptions needed to be introduced. those assumptions represent a simplification of up-to-date knowledge of both substrate and enzyme structure, as well as enzyme mode of action. for example, one of the often-asked questions is: "what is happening with shorter cellooligosaccharides (dp 7 and up) lying on the surface of the cellulose chain? are they adsorbed to the core cellulose chain or partly solubilized into the hydrolysis broth?" to answer these questions and confirm the mathematical modeling, real enzymatic hydrolysis data are needed. in this work, four well-characterized, highly purified mono-component enzymes from humicola insolens (two eg and two cbh) and one bg from aspergillus niger were used to hydrolyze avicel and pasc. by careful choice of catalyst, some enzyme-specific characteristics, like the presence or absence of a cellulose-binding domain, will also be incorporated into the model. hydrolysis experiments were initially performed with distinct mono-component enzymes, to confirm the basic characteristics of each of the enzyme classes. soluble hydrolysis products (dp 1-6) were analyzed by hplc, and detection of non-soluble, higher-dp polysaccharides was performed by polysaccharide analysis using carbohydrate gel electrophoresis. optimisation of halogenase enzyme activity k. muffler 1 , m. retzlaff 2 , k.-h. van pée 3 , r.
ulber 1 : 1 technische universität kaiserslautern, germany; 2 technische universität münchen, germany; 3 technische universität dresden, germany. e-mail: muffler@rhrk.uni-kl.de (k. muffler) halogenases provide the opportunity for regioselective and stereospecific halogenation of organic substrates, in contrast to the class of haloperoxidases. these enzymes allow a gentle synthesis of halogenated organic molecules and are capable of halogenating specific positions (hammer et al., 1997; keller et al., 2000), whereas traditional organic synthesis often fails or mainly leads to byproducts (hasegawa et al., 1999), e.g. halogenation of tryptophan in positions other than position 3. our current research is focussed on tryptophan-5-halogenases, because 5-br/cl-tryptophan could be applied as a pharmacologically attractive precursor of serotonin. in our work we describe the optimization of an enzyme assay and, thereby, of the enzyme activity. for this purpose we use a genetic algorithm. the gene for the fadh2-dependent enzyme was cloned from a streptomyces sp. into a pseudomonas fluorescens strain. the optimization procedure, however, was performed with the purified his-tagged tryptophan-5-halogenase, which was easily obtained from the crude extract of the lysed cells by immobilized metal affinity chromatography. the application of genetic algorithms allows optimization of the multidimensional search problem, leading to a global optimum in the search space, in contrast to the traditionally used one-factor-at-a-time method. the latter often fails because it does not reflect the possible influences the parameters can have on each other.
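the genetic-algorithm approach described above can be sketched as follows. the assay parameters (temperature, ph, salt) and the synthetic activity surface with an interaction term are hypothetical illustrations, not the authors' halogenase assay, but they show why a population-based search can reach a global optimum where one-factor-at-a-time optimization fails on coupled parameters:

```python
import random

# hypothetical assay-activity surface (made-up, not the authors' data) with
# an interaction term between temperature and ph, the kind of coupling that
# defeats one-factor-at-a-time optimization.
def activity(temp, ph, salt):
    return (-(temp - 30.0) ** 2 / 50.0
            - (ph - 7.2) ** 2
            - (salt - 50.0) ** 2 / 1000.0
            - 0.05 * (temp - 30.0) * (ph - 7.2))

BOUNDS = [(20.0, 45.0), (5.0, 9.0), (0.0, 200.0)]  # temp (°c), ph, salt (mm)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.3):
    # gaussian mutation, clamped to the allowed parameter bounds
    out = []
    for x, (lo, hi) in zip(ind, BOUNDS):
        if random.random() < rate:
            x = min(hi, max(lo, x + random.gauss(0.0, 0.1 * (hi - lo))))
        out.append(x)
    return out

def crossover(a, b):
    # uniform crossover: each parameter taken from either parent
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def optimize(generations=60, pop_size=30, elite=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: activity(*ind), reverse=True)
        parents = pop[:elite]  # elitism: keep the best individuals
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - elite)]
    return max(pop, key=lambda ind: activity(*ind))
```

in a real assay, `activity` would be replaced by the measured halogenase activity at the candidate conditions; the population then converges on the interacting optimum that sequential single-factor scans miss.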
effect of organic solvent type on the enantioselectivity of candida rugosa lipase in the hydrolysis of racemic naproxen methyl ester in a biphasic reaction system serpil takaç, deniz mutlu department of chemical engineering, institute of biotechnology, ankara university, 06100 tandogan, ankara, turkey hydrolysis of racemic naproxen methyl ester to produce s-naproxen was carried out in biphasic systems using isooctane, cyclohexane, hexane and toluene with candida rugosa lipase, after stirring in phosphate buffer (ph 7.5, 0.02 m) for 2 h at +4 °c and centrifuging. the hydrolyses were carried out in shaking flasks for 120 h at 200 rpm and 37 °c with an initial substrate concentration of 0.034 m. the concentrations of the enantiomers of racemic naproxen methyl ester in the organic solvents and those of naproxen in the buffer solution were determined by hplc. it was found that the enantiomeric excess for the substrate (ee_s), the enantiomeric ratio (e) and the conversion (x) decreased in the following order: isooctane > cyclohexane > hexane > toluene. the enantiomeric excess for the product (ee_p) was found to be the same for isooctane, cyclohexane and hexane, whereas the lowest ee_p was obtained in toluene. the highest ee_s, ee_p, e and x values, achieved in isooctane at a reaction time of 120 h, were 91%, 95%, 130, and 49%, respectively. this study was supported by ankara university biotechnology institute (project no: 89). fusarium fujikuroi (gibberella fujikuroi mating group c) produces multiple secondary metabolites such as gibberellins and bikaverin. gibberellins are terpenoid hormones that induce growth and regulate various stages of development in plants. they have numerous applications in the agriculture industry. bikaverins are polyketides that are toxic to different organisms because they inhibit respiration.
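the enantioselectivity figures reported above for the naproxen ester hydrolysis in isooctane (ee_s = 91%, ee_p = 95%, e = 130, x = 49%) can be cross-checked with the standard relations of chen et al. for an irreversible kinetic resolution; a minimal python sketch, assuming those standard formulas apply to this system:

```python
import math

# standard relations for an irreversible kinetic resolution
# (chen et al., j. am. chem. soc. 104 (1982) 7294):
#   conversion  c = ee_s / (ee_s + ee_p)
#   selectivity e = ln[1 - c(1 + ee_p)] / ln[1 - c(1 - ee_p)]

def conversion(ee_s, ee_p):
    """conversion calculated from substrate and product enantiomeric excesses."""
    return ee_s / (ee_s + ee_p)

def enantiomeric_ratio(c, ee_p):
    """enantiomeric ratio e from conversion and product enantiomeric excess."""
    return math.log(1.0 - c * (1.0 + ee_p)) / math.log(1.0 - c * (1.0 - ee_p))

# values reported above for isooctane: ee_s = 91%, ee_p = 95%
c = conversion(0.91, 0.95)
e = enantiomeric_ratio(c, 0.95)
```

with the reported ee values this gives c ≈ 0.49, matching the reported conversion, and e ≈ 124, the same order as the reported value of 130; small differences in the measured ee values propagate strongly into e at high selectivity.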
regulation of polyketide and gibberellin synthesis by nitrogen has been intensively studied in fusarium, but little is known about their regulation by carbon source. our main interest is to understand the regulation of biosynthesis of these compounds in f. fujikuroi imi 58289. to investigate this regulation, the organism was grown in high-nitrogen medium under submerged conditions and then transferred to nitrogen-free media containing various concentrations of different carbon sources. gibberellin production was not affected significantly. on the other hand, bikaverin synthesis was greatly enhanced when sucrose was used as the only carbon source. high production on sucrose required a minimal amount of the sugar, but did not change appreciably above this threshold over a wide range of concentrations. bikaverin synthesis was repressed when glucose coexisted with sucrose in the medium. the effect of the carbon source on the expression of key genes, cps/ks (copalyl diphosphate synthase/kaurene synthase) for gibberellin and pks4 (polyketide synthase) for bikaverin biosynthesis, is currently under investigation. the development of dormancy and loss of viability in seeds with the passage of time means that n. alba lacks any systematic propagation from seeds and is typically propagated through rhizomes. this restricts large-scale cultivation of this plant. in vitro propagation of plants is an effective means for rapid multiplication of species for which conventional methods have limitations. in the present study we have analysed the role of various growth promoters and the effects of dark and light incubation on the germination of n. alba seeds. the results indicate that in vitro propagation of n. alba from seeds can be applied as an efficient method of multiplication. the study was funded by the state planning commission (dpt), turkey, and the university of ankara under project no. 120640.
this research was performed to develop a biological treatment process for odor gases such as mek, h2s and toluene, which are generated during food waste recycling. to establish the operational conditions for odor gas removal by small-scale biofiltration equipment, the system was operated continuously using toluene as the model odor compound. when the odor-treating microorganisms adhered to the fibril-form biofilter, a high removal efficiency of over 93% was obtained through biofilm formation. at an inlet odor gas concentration of 400 ppm and a retention time of 10 s, the removal efficiency was 76 and 93% in the first-stage and second-stage reactors, respectively. however, the removal efficiency remained over 97% at retention times above 15 s. post-soviet countries are going through a transition stage and are extremely sensitive to new technology, economic or social changes, and globalization processes. that is why decision-making and the system of regulation of gmo use are sensitive to a number of factors. three levels of factors most influential on decision-making are identified in this research: global, regional and local. the global level depends on the external policies of the leaders in gmo regulation. the usa and eu have the biggest influence on transition countries, though their positions differ completely. as the usa was the leader in inventing gmos, it lobbies for its newly created biotechnology industry. in the eu and other european countries, lobbying by the biotechnology industry was not as strong as in the usa; thus, their national laws are stricter. the world trade organization and international agreements are also part of the global level. at the regional level, the geographical position of a country is also very important, because every regulation system depends greatly on the regulations implemented in neighboring countries. countries do not exist in a vacuum; they are linked territorially, politically, economically and socially with neighboring states.
the regulation systems of the transition countries in eastern europe can be divided into three types: those that have no system of gmo regulation (belarus, romania, hungary and ukraine); those that have approved some varieties treated as safe for the market (poland, moldavia and georgia); and those that have approved all gmos (bulgaria, croatia and russia [before 2004]). but even if a country declares that it does not use gmos, it is rather difficult to control the import of such products because of the lack of testing laboratories, corruption among state employees, agreements on intellectual property, and institutional problems of the country. in spite of the global and regional tendencies in gmo-related regulation, the most important part is the local level, namely the national regulation system; priorities at the higher levels are chosen depending on the national level. ukraine is taken as an example of a post-soviet country, as it is one of the largest countries and one of the biggest exporters of agricultural products in europe. the research includes an analysis of attitudes to gm products and their potential risks and benefits among the three categories that influence decision-making the most: farmers, gm experts and non-governmental organizations. recombinant microorganism development for extracellular benzaldehyde lyase production hande kaya 1 , pınar çalık 1 , tunçer h. ozdamar 2 : 1 iblab, department of chemical engineering, metu, 06531 ankara, turkey; 2 bre laboratory, department of chemical engineering, ankara university, 06100 ankara, turkey. e-mail: e119497@metu.edu.tr (h. kaya) the aim of this study was the extracellular production, by bacillus sp., of benzaldehyde lyase (bal, ec 4.1.2.38), which catalyses the synthesis of enantiopure 2-hydroxy ketones for drug syntheses.
for this purpose, the signal dna sequence of an extracellular bacillus enzyme, i.e., serine alkaline protease (sap), was fused in front of the bal gene (accession number ax349268) from pseudomonas fluorescens biovar i, using pcr-based gene splicing by overlap extension (soe). b. licheniformis (dsm 1969) chromosomal dna was used as the sap gene (accession number x03341) template for the synthesis of the sap signal sequence. the bal gene was amplified using the plasmid carrying the bal gene, puc18::bal. thereafter, the signal peptide of sap with its own promoter was fused in front of the bal gene by the soe method. the hybrid gene was first cloned into the puc19 plasmid and thereafter sub-cloned into the pbr373, pmk4 and phv1431 shuttle vectors. the escherichia coli-bacillus plasmids carrying the hybrid gene pre(subc)-bal were transferred into bacillus subtilis npr− apr− and bacillus licheniformis. the influence of the host bacillus species on bal production in a defined medium with glucose was investigated in bioreactor systems. for each of the recombinant (r-) bacillus species, the effects of initial glucose concentration on cell growth and bal production were investigated, and the physiological differences and similarities between the wild-type and r-bacillus species are discussed. thereafter, the benzaldehyde lyase production capacities of recombinant e. coli and b. subtilis are compared in terms of cell concentration and bal volumetric and specific activities. for this comparison, the bal gene was cloned into the prseta vector, under the control of the strong t7 promoter, and expressed in the e. coli bl21 (de3) plyss strain. the variations in by-product distributions and yields with each recombinant organism are also discussed. phosphoketolases (ec 4.1.2.9) are thiamine diphosphate (thdp)-dependent enzymes that play a crucial role in the pentose phosphate pathway (ppp) of heterofermentative and facultative homofermentative lactic acid bacteria, and in the d-fructose 6-phosphate shunt of bifidobacteria.
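the soe fusion used above to place the sap signal sequence in front of the bal gene can be illustrated with a toy sketch: two pcr fragments that share a complementary terminal overlap are joined because, after denaturation, the overlap primes extension across the junction. the sequences below are made-up placeholders, not the real sap or bal sequences:

```python
# toy illustration of gene splicing by overlap extension (soe):
# two fragments sharing a terminal overlap are fused at that overlap.
def soe_fuse(fragment_a, fragment_b, min_overlap=10):
    """fuse two sequences at their longest shared terminal overlap."""
    for n in range(min(len(fragment_a), len(fragment_b)), min_overlap - 1, -1):
        if fragment_a[-n:] == fragment_b[:n]:
            return fragment_a + fragment_b[n:]
    raise ValueError("no sufficient overlap between fragments")

# placeholder sequences (hypothetical, for illustration only):
signal = "ATGAAACGTTTAGGCAACCTG" + "GCTGGTACCGGA"  # "signal" + overlap
gene = "GCTGGTACCGGA" + "ATGCCGATTCTGAAAGAA"       # overlap + "bal" region
hybrid = soe_fuse(signal, gene, min_overlap=12)
```

in the actual protocol, the overlap is introduced by the chimeric primers of the two first-round pcrs; the second-round pcr with the outer primers then amplifies the fused product.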
reports affirm that cellulomonas flavigena can use the ppp when cultured under anaerobic conditions. a genomic library of c. flavigena constructed in the λ-zap express vector was screened. four positive clones were isolated and in vivo excised, and the resulting pbk-cmv phagemids, each containing a 4.0-kb insert, were characterized by restriction analysis and dna sequencing. the open reading frame (orf) of the d-xylulose 5-phosphate/d-fructose 6-phosphate phosphoketolase gene, xpkl, was located from nucleotide 54 to 2466. the xpkl orf encoded an 804-amino-acid polypeptide (xpkl) with a calculated molecular mass of 89,000 da, a value coincident with that estimated by comparative sds-page (about 90,000 da). a putative ribosome binding site (gggagc) is present 11-5 nucleotides upstream of the translational start of the xpkl polypeptide. the c. flavigena xpkl polypeptide sequence was 66% identical to the d-xylulose 5-phosphate/d-fructose 6-phosphate phosphoketolases from bifidobacterium sp., bifidobacterium gallinarum, gloeobacter violaceus and bifidobacterium adolescentis. this analysis also revealed highly conserved regions. lactococcus lactis is a main diacetyl-producing bacterium, via citrate metabolism, in dairy products. the transport of citrate in these bacteria depends on citrate permease, which is encoded by the citp gene. previous studies of the citqrp operon in escherichia coli mutants showed that the citp message is considerably stabilized in an rnase iii mutant. thus, in the context of citrate metabolism research, the characterization of the lactococcal rnase iii enzyme is very important for the dairy industry. rnase iii is an endoribonuclease that has an important role in rrna processing and the control of gene expression. with the aim of studying lactococcal rnase iii, we have cloned the rnc gene from l. lactis ssp. lactis il1403 into the broad-host-range pls1rgfp vector.
this plasmid includes the gfp gene, encoding the green fluorescent protein (gfp), cloned under the control of the pm promoter, which is inducible by maltose. maltose induction of lactococcal rnc expression showed a 5-fold increase of rnc transcription from this plasmid. activity assays for lactococcal rnase iii were standardized using crude extracts and a substrate specific for b. subtilis rnase iii. the results showed that this substrate was specifically cleaved by lactococcal rnase iii and that its activity was induced by maltose. lac-rnc was cloned into the pet15 vector, and the corresponding six-histidine-tagged rnase iii protein was overproduced in the e. coli bl21 (de3) strain by iptg induction. the protein was purified by affinity chromatography using an hplc system and was shown to be active by in vitro activity assays using the lac-rnase iii-specific substrate mentioned above. we have also cloned the lactococcal rnc gene and studied its expression in an e. coli rnc deletion mutant (Δrnc). complementation assays performed in e. coli demonstrate that the lactococcal rnase iii (lac-rnase iii) is able to process rrnas and to regulate the levels of polynucleotide phosphorylase (pnpase). these results demonstrate that the lactococcal enzyme is able to substitute for the ec-rnase iii not only in rrna processing, but also in the processing of mrnas. the amount of lactococcal rnc transcript in an e. coli Δrnc strain was 3.3-fold higher than in the wild-type strain, suggesting that the e. coli rnase iii triggers the degradation of the heterologous rnc mrnas. the results obtained have shown that lac-rnase iii is an interesting enzyme for biotechnological purposes. objectives: the pharmaceutical and food industry has an increasing demand for selectively glycosylated active agents. in our application, isomaltose can be synthesized by immobilized dextransucrase, which transfers a glycosyl residue from sucrose (substrate) to glucose (acceptor).
as the reaction proceeds, isomaltose can itself act as an acceptor and is converted into undesired follow-up products called isomalto-oligosaccharides (imos). we investigate two approaches to avoid imo formation: selective adsorption of isomaltose (ergezinger et al., 2005) and the co-entrapment of a dextranase adsorbate, which breaks imos down to isomaltose. results and conclusions: the first part of our research concerns the adsorption of dextranase on bentonite, which complies with the langmuir model. at complete saturation (0.8 g g−1), our immobilisate exhibits an activity of 16,000 u g−1. a kinetic analysis does not reveal significant differences between the adsorbed and free forms of the enzyme (k_m,bentonite: 14.1 ± 0.7 m versus k_m,free: 13.0 ± 0.7 m). thus, bentonite displays a high binding capacity paired with favorable kinetic properties. beyond that, we investigated the activity of dextransucrase in co-immobilisates, which is reduced during co-immobilization due to interactions with the adsorbate. among the various co-immobilisates, the one containing dextranase bound to preblotted bentonite imparts the highest activity (40% as compared to the control: immobilized dextransucrase). the molar yield coefficient y_isomaltose/sucrose of the co-immobilisates surpasses that of the control by 13%. further on, we will characterize the mass transfer of the dextranase substrate into the alginate matrix as well as bentonite-dextransucrase interactions. and kefir. a second approach is to use yeast as a production organism to produce natural folates for fortification. here we investigate and discuss the folate content in skq2n, a diploid strain of saccharomyces cerevisiae, when cultured in different media and at different stages of growth. the aim is to gain basal knowledge of the folate production profile, the forms of folate produced and the degree of leakage to the surrounding medium, in relation to the culturing medium and physiological state of the cells.
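the langmuir adsorption of dextranase on bentonite described above can be written as a minimal sketch, using the reported saturation capacity of 0.8 g dextranase per g bentonite; the affinity constant k below is a purely illustrative placeholder, not a measured value:

```python
def langmuir(c, q_max, k):
    """langmuir isotherm: enzyme loading (g per g support) at free enzyme
    concentration c, with saturation capacity q_max and affinity constant k."""
    return q_max * k * c / (1.0 + k * c)

# illustrative curve with q_max = 0.8 g/g (reported) and a placeholder k = 1.0:
# loading rises from half-saturation at c = 1/k toward the 0.8 g/g plateau.
for c in (0.1, 0.5, 1.0, 5.0, 50.0):
    q = langmuir(c, 0.8, 1.0)
```

fitting q_max and k to measured loading data is what establishes that the adsorption "complies with the langmuir model"; the near-identical k_m values for the adsorbed and free enzyme then show that binding does not distort the kinetics.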
danisco innovation, danisco a/s, langebrogade 1, po box 17, dk 1001 copenhagen k, denmark we at danisco a/s (copenhagen, denmark) have revealed a new starch-degrading pathway by discovering several new enzymes and metabolites in fungi and algae. these new enzymes include glucan lyases, dehydratases and tautomerases, which have proved to be useful in biocatalysis. the new metabolites have proved to be useful as both antioxidants and antimicrobials for food and non-food applications. this pathway is named the anhydrofructose pathway of starch and glycogen degradation, and the technology is referred to as the anhydrofructose technology. diet is evolving from nourishing populations by providing essential nutrients to improving the health of individuals through nutrition. modern nutritional research focuses on health promotion and disease prevention, on protection against toxicity and stress, and on performance improvement. as a consequence of these ambitious objectives, the disciplines "nutrigenetics" and "nutrigenomics" have evolved. nutrigenetics asks how individual genetic disposition, manifesting as single-nucleotide polymorphisms (snps), copy-number polymorphisms (cnps) and epigenetic phenomena, affects susceptibility to diet. nutrigenomics addresses the inverse relationship, i.e. how diet influences gene transcription, protein expression and metabolism. the mid-term objective of nutrigenomics is to integrate genomics (gene analysis), transcriptomics (gene expression analysis), proteomics (global protein analysis) and metabolomics (metabolite profiling) to define a "healthy" phenotype. the long-term deliverable of nutrigenomics is personalised nutrition for the maintenance of individual health and the prevention of disease.
the major challenges for -omics in nutrition and health still lie ahead of us; some apply to -omic disciplines in general, while others are specific to -omic discovery in the food context: (i) the integration of gene- and protein-expression profiles with metabolic fingerprints is still in its infancy, as we need to understand how to (a) select relevant sub-sets of information to be merged and (b) resolve the issue of the different time-scales at which transcripts, proteins and metabolites appear and act; (ii) the definition of health and comfort is less of a clear-cut case than that of disease; (iii) -omics in nutrition must be particularly sensitive: it has to reveal many subtle signals rather than a few abundant ones in order to detect early deviations from normality; (iv) in the food context, health cannot be uncoupled from pleasure, that is, food preference and nutritional status are interconnected. transcriptomics serves to put proteomic and metabolomic markers into a larger biological perspective and is suitable for a first "round of discovery" in regulatory networks. metabolomics, the comprehensive analysis of metabolites, is an excellent diagnostic tool for consumer classification. the great asset of this platform is the quantitative, non-invasive analysis of easily accessible human body fluids like urine, blood and saliva. this feature also holds true, to some extent, for proteomics, with the constraint that proteomics is more complex in terms of the absolute number, chemical properties and dynamic range of the compounds present. proteomics in the context of nutrition and health has the potential to (a) deliver biomarkers for health and comfort, (b) reveal early indicators for disease disposition, (c) assist in differentiating dietary responders from non-responders and, last but not least, (d) discover bioactive, beneficial food components.
independent of the context of application, proteomics represents the only platform that delivers not only markers for disposition or condition but also targets of intervention: the only way to intervene in a biological condition and modulate its outcome is to interfere with the proteins involved. it is evident that not only are comprehensive analyses with one discovery platform (lateral integration of information) required, but vertical integration between different -omic levels is also indispensable for a deeper understanding of disposition, health, environment and diet (desiere, 2004). a major "vertical integration issue", to date unresolved, is posed by the different timescales of transcript production, protein expression and metabolite generation (nicholson et al., 2004). the transcript machinery usually responds fast to an external stimulus (seconds to minutes), proteins may be expressed within minutes to hours (and have half-lives from minutes to even months), and metabolites vary significantly during the day and depend on the latest dietary input. this means that data which seem to correlate qualitatively (e.g. reflecting the same pathway) may not necessarily be related time-wise. rather, they may represent different responses at different time points and, possibly, to different stimuli. comprehensive -omic analysis is an essential building block of "systems biology", which can be defined as follows (clish et al., 2004): systems biology is the comprehensive analysis of the dynamic functioning of a biological system (cell, organ, organism or even ecosystem) at the gene, protein and metabolite (or higher organizational) level, achieved by comparison of two defined biological states of this system, typically before and after perturbation.
while a comprehensive list of components (genes, proteins, metabolites) of a given biological system is a pre-requisite for this kind of research, the main reasoning behind the "system view" is that only information on the interactions between the components gives clues to the function of the entire network. a systems biology approach has recently demonstrated the power of proteomics to dissect immunity and inflammation. toll-like receptor recognition and signalling were elucidated, showing how bacterial "barcodes" are read and interpreted in order to trigger an adapted immune response (aderem and smith, 2004). in order to address some of the challenging objectives of -omics-driven nutritional research, we have investigated (a) the effect of early antibiotic administration on the maturation of intestinal tissues, (b) protein discovery in human milk, (c) the effects of polyunsaturated fatty acids on gene expression and lipid profile in the liver, and (d) biomarkers for intestinal stress. (a) antibiotics and gut maturation: the effects of early administration of antibiotics on intestinal maturation were assessed at the gene expression level in a rat model. (b) human milk: rapid enrichment and iterative, consolidated identification of immunologically relevant milk proteins was achieved through the employment of restricted-access media and a tailored proteomic strategy (labéta et al., 2000; lebouder et al., 2003; panchaud et al., 2005). (c) fatty acids and liver transcriptome/lipidome: epidemiological studies have correlated higher intakes of poly-unsaturated fatty acids (pufas) with a lower incidence of chronic metabolic disease. the molecular mechanisms regulated by pufa consumption were examined by assaying the liver transcriptome and lipid metabolome of mice fed a control and a pufa-enriched diet (mutch et al., 2005). 
(d) gut stress markers: as a first step, we catalogued protein expression along the jejunum, ileum and colon of the rat intestine and found gut segment-specific proteins (marvin-guy et al., 2005). the innovative combination of a neonatal separation model with proteomic analysis allowed us to study whether early-life psychological stress may impact adult gut neuromuscular protein expression, and the approach revealed specific protein biomarkers. omics for engineering lactic acid bacteria willem m. de vos wageningen center for food sciences, wageningen university, the netherlands. e-mail: willem.devos@wur.nl. url: http://www.wcfs.nl/ lactic acid bacteria (lab) are at-rich gram-positive bacteria that have a well-established record in industrial food fermentations, where they contribute to conservation, flavour and texture. in addition, several lab are used as food-grade hosts for the production of enzymes, peptides or metabolites. finally, lab are exploited in functional foods that contribute to the health and well-being of the consumer. a variety of metabolic engineering approaches have allowed for the improvement of many attributes of lab. these approaches have been facilitated by the possibility of uncoupling growth and metabolite production in lab, the wealth of genetic tools that allow modulation of gene expression over a dynamic range, and the determination of several complete lab genomes (de vos et al., 2005). we have developed lactobacillus plantarum as a paradigm for lab engineering by experimental and modelling approaches, the application of functional and comparative genomics, and the implementation of other post-genomics avenues (kleerebezem et al., 2003; smid et al., 2005; de vos et al., 2004). examples of optimizing the production of vitamins and other cofactors, the impact of these engineering approaches on the global transcription and metabolite profiles, and the determination of l. plantarum activity in the human host will be discussed. 
solanum tuberosum (potato) is the fourth major crop worldwide and is used for food, feed and biotechnological applications. to fully realize the biosynthetic potential for production of starch, protein and metabolites, we conducted an extensive quantitative profiling of the expressed genes of mature potato tuber. a total of 58,322 sage (serial analysis of gene expression) tags of 19 nt, representing 22,235 different tags, were analyzed. the 695 tags seen 10 or more times were assigned a tentative function by comparison to homologous genes. contrary to the transcript profile of rice seedlings (gibbings et al., 2003), the storage organ of potato is not dominated by transcripts encoding storage proteins. transcripts for four types of protease inhibitors, a metallothionein and a lipoxygenase were more prominent than patatin isoforms. the lactic acid bacterium lactococcus lactis is used extensively in the production of fermented milk products. during cheese production the bacterium experiences many changes in its immediate environment as a result of its own reactions. the most severe change is the accumulation of lactic acid, which lowers the ph of the medium until growth is totally inhibited. we have focused upon a survey of these dairy-related stress responses as a means of constructing more robust strains. when l. lactis starter cultures are produced in rich media, they will experience an initial period of purine limitation after being added to the milk substrate, a stress condition that in several studies has been found to induce cross-resistance towards a number of other stresses. we have analyzed both general purine nucleotide (atp and gtp) and specific gtp limitation in chemically defined medium, using both proteomics and transcriptomics. the differential expression analyses were performed with a custom-designed dna microarray of pcr-amplified probes. 
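the sage tally reported for the tuber library (58,322 tags, 22,235 distinct, 695 seen 10 or more times) is, computationally, a frequency count with an abundance threshold. a minimal sketch of that tally, on toy data rather than real 19-nt tags:

```python
from collections import Counter

def summarize_sage_tags(tags, min_count=10):
    """count sage tag occurrences and return (number of distinct
    tags, the abundant tags seen >= min_count times), mirroring the
    kind of summary reported for the potato tuber library."""
    counts = Counter(tags)
    abundant = {tag: n for tag, n in counts.items() if n >= min_count}
    return len(counts), abundant

# toy tag list (real sage tags are 19-nt sequences)
tags = ["AAA"] * 12 + ["CCC"] * 3 + ["GGG"]
distinct, abundant = summarize_sage_tags(tags)
```

only the abundant fraction is carried forward to functional annotation, just as only the 695 high-count tags were assigned tentative functions in the abstract.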
the two stress conditions resulted in very different stress responses at both the transcriptomic and proteomic levels. from a new study on the temporal expression pattern of l. lactis during growth in milk, we present preliminary data showing differential expression of genes and proteins of the purine stress stimulon as well as other stress stimulons. cell physiology of the yeast saccharomyces cerevisiae glucose repression mutants ∆snf1, ∆snf4 and ∆snf1∆snf4 was studied in batch and glucose-limited chemostat cultivations. detailed physiological studies were performed on cells grown in batch using glucose, galactose, or a glucose-galactose mixture as a carbon source. during growth on glucose-galactose mixtures it was shown that after glucose was consumed, galactose consumption remained repressed for about 15 h in the ∆snf1 or ∆snf4 mutants, and for more than 40 h in the ∆snf1∆snf4 mutant, whereas it only lasted 6 h in wild-type cells. the global transcriptional response in the glucose repression mutants was studied using chemostat cultures. s. cerevisiae wild type and the mutants were grown in glucose-limited aerobic chemostats at a dilution rate of 0.1 h⁻¹. biological triplicates were performed for each strain. to identify transcriptional responses of the glucose repression mutants, statistical tests, a clustering method and a model-driven analysis method were used. the global transcription data analysis showed that genes involved in hexose transport, carbohydrate metabolism, respiration, and signal transduction were differentially expressed in the ∆snf1 and ∆snf4 mutants compared to wild-type cells. combination of gene expression data and a genome-scale metabolic model indicated changes in the metabolic sub-networks among the studied glucose repression mutants. genomics technologies have recently been introduced into food and nutrition science for identifying targets of the molecular actions of nutrients as well as non-nutrient components of foods. 
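the per-gene statistical tests applied to the biological triplicates above are typically two-sample tests on small replicate groups. a minimal stand-in is welch's t statistic; the expression values below are hypothetical, and a real analysis would add p-values and multiple-testing correction across all genes.

```python
import math

def welch_t(a, b):
    """welch's t statistic for two small replicate groups -- a
    minimal sketch of the kind of per-gene test applied to
    triplicate chemostat transcriptomes."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# hypothetical expression values for one gene: ∆snf1 vs wild type,
# three biological replicates each
snf1 = [14.9, 15.2, 15.0]
wt = [10.1, 9.8, 10.3]
t_stat = welch_t(snf1, wt)
```

a large |t| for a gene flags it as differentially expressed between mutant and wild type; ranking genes by this statistic is one common entry point for the clustering and model-driven steps mentioned in the abstract.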
changes in the transcriptome, proteome and metabolome have been determined for assessing the molecular actions of zinc as an essential micronutrient and of flavonoids in processes such as colon carcinogenesis and atherosclerosis. zinc is essential for the structural and functional integrity of cells and plays a pivotal role in the control of gene expression. zinc deficiency effects in human cells and an animal model (rats) were analyzed by microarrays and showed that a low intracellular zinc concentration caused major alterations in the steady-state mrna levels of several hundred target genes, depending on the tissue studied, including liver, brain, muscle, intestine and kidney. proteome analysis of the same samples by 2d-page followed by peptide mass fingerprinting via maldi-tof-ms similarly identified a large set of proteins with altered expression levels but allowed a common theme of action to be identified. although pleiotropic at first view, the obtained pattern of zinc-affected genes/proteins may represent a reference for defining the zinc regulon in mammalian cells. flavonoids, occurring in large number in plant species, are considered protective agents in a variety of processes including inflammation and cancer development. we have studied the effects of around 80 selected flavonoids in a screening program and identified, by genomics technologies, the putative modes of action of compounds such as flavone, genistein or quercetin in colon cancer models and endothelial cells. as part of a collaborative effort employing human endothelial cells and blood mononuclear cells from a human intervention trial with soy isoflavones (genistein/daidzein), the effects of the flavonoids on the stress response to oxidized ldl and homocysteine were analyzed. 
a set of markers of anti-inflammatory and anti-apoptotic activity was identified for genistein and daidzein, and cell biological studies confirmed that both compounds prevented programmed cell death in stressed endothelial cells. in the food processing industry, the unwanted presence of extremely heat-resistant bacterial endospores creates major problems due to their capability to survive classical thermal treatments and their ability to subsequently germinate and form actively growing vegetative cells. screening of spoilage isolates using genomic typing techniques to visualise putative genome-based biomarkers allowed us to classify strains according to the degree of thermal resistance of their spores. in addition, we showed that sporulation in the presence of ingredients rich in calcium ions promotes thermal resistance of developing spores and correlates with the expression of specific (marker) genes (see oomes and brul, 2004). finally, the molecular program that forms the basis of spore germination has been assessed using genome-wide expression analysis. notably, genes involved in dna repair were transiently expressed in germinating wild-type spores. also, surprisingly, it was found that spores contain significant levels of ribosomal and messenger rnas. degradation of these rna molecules upon spore thermal injury was found to be characteristic of their thermal resistance and predictive of their subsequent outgrowth behaviour. this finding is currently being patented. the information on spore presence, together with predictions of spore thermal resistance and process survival chances, is now being integrated. 
this is used to formulate the conditions for a process management system with state-of-the-art food production quality assurance, which allows for real-time analysis in case of the need for quality control. the interplay of the pectinase spectrum of aspergillus niger as revealed by dna microarray studies elena martens, jac benen, johan van den berg, peter schaap fungal genomics group, laboratory of microbiology, wageningen university, dreijenlaan 2, 6703 ha wageningen, the netherlands. e-mail: elena.martens@wur.nl (e. martens) the saprobic fungus aspergillus niger is an efficient producer of extracellular enzymes, many of which show carbohydrate-modifying activities. these enzymes have gras status and are therefore widely used in the food and feed industry. after determination of the genomic sequence of a. niger by dsm, it was estimated that only a fraction of the potential of secreted enzymes is currently characterised. database mining using the proprietary genome sequence has resulted in the identification of more than 80 genes encoding enzymes involved in the depolymerisation of the backbone and the side chains of the complex polysaccharide pectin. additional enzymatic activities required to remove the methyl and acetyl esters present in pectin were also observed. by using dna microarrays we have sought to gain insight into the complex regulation of all the genes involved in pectin degradation. a. niger was cultivated on sugar beet pectin and on the monomeric constituents of pectin, viz. galacturonic acid, rhamnose and xylose. subsequently the corresponding transcriptomes were analysed. we will report on our findings concerning the regulation of the expression of the genes involved in the degradation of pectin and its main constituent, galacturonic acid, and the consequences for the interplay of the encoded (novel) enzymes. 
since reactive oxygen species (ros) are formed in all living organisms, a wide range of antioxidative enzyme systems is present to keep the system in balance. when an animal is slaughtered, the cellular anti-oxidative capability is reduced, resulting in an accumulation of ros followed by increased oxidation of dna, lipids and proteins. lipid oxidation is generally a well-known problem, causing increased rancidity during prolonged storage, especially of fatty fish. the implications of protein oxidation are, however, less clear, also with respect to the quality decay of fish. protein oxidation causes a wide variety of amino acid modifications, of which the most studied is carbonylation of proline, arginine, lysine or threonine. these carbonyl groups can be labelled with 2,4-dinitrophenylhydrazine. combining two-dimensional gel electrophoresis with immunoblotting enables the detection of carbonyl groups for each single protein. the results presented here reveal that protein oxidation/carbonylation increases both during frozen storage and tainting of rainbow trout; furthermore, there is an increase in oxidation/carbonylation for distinct proteins. anne-marie neeteson european forum of farm animal breeders, benedendorpsweg 98, 6862 wl oosterbeek, the netherlands society is concerned about food, animal welfare, food safety, new technologies, scientists and industry. these elements are all present in genomics for farm animals. therefore, it is important to build awareness among scientists and industry, start a dialogue with stakeholders and society, and be transparent in a pro-active way. this paper will address the issues at stake for scientists and industry when it comes to genomics and animal health. it will combine the results of empirical, ethical and sociological efforts in three eu-funded projects. 
(c) the proper use of genomics in relation to infectious diseases in production animals, and the role of the scientist in the development of new technologies in this field, are being addressed in the european animal disease genomics network of excellence (ead-gene, http://www.eadgene.org/). some observations are that: (1) genomics does not concern changing the gene. however, the acceptability of any discovery dealing with living beings and edible products is not self-evident: animals carry a symbolic and emotional load. (2) genes are still related to eugenics in the minds of people. genes as such cause reluctance, but it is seen as positive if the use of medication can be reduced and if animals will be more resistant to disease. (3) consumers are in favour of consumer education, compulsory labelling and the imposition of minimum standards. the inclination to pay more for foods produced according to desired standards relates closely to income level. (4) animal welfare is the major issue citizens mention as a concern. the focus of breeding organisations on productivity should be counterbalanced by serious attention to the animal's needs in order to avoid unnecessary negative impact on the welfare of the animals. (5) when technical specialists and lay people communicate, they tend to use different languages: they use the same words, but with rather different interpretations. so transparency of breeding practices and clear definitions of terminology will be essential for effective communication among all stakeholders. (6) food safety and human health are the major concern for most people when it comes to making a choice. during the latest decades, research within the field of animal genomics has in general followed the same strategies as those used within the field of human genomics, although with much fewer resources. the porcine genome has been characterized intensively through the development of linkage maps, comparative maps and physical maps. 
until a few years ago it had not been anticipated that it would be possible to embark on whole-genome sequencing of animal genomes. however, because of technological developments and much lower costs for sequencing, several animal genomes have now been assembled or are on the way to being assembled. the initial step towards sequencing the porcine genome was taken by the sino-danish pig genome project. the efforts within this project have now generated approximately 3.84 million genomic shotgun sequences and 700,000 expressed sequence tags (ests). the shotgun sequences have been included in a three-species alignment to make an initial evolutionary analysis. the results show that pig is much closer to human than mouse is. the ests represent 5′-end sequences from a total of 98 non-normalized cdna libraries. based on assembly and annotation of the ests, the structure of the porcine transcriptome has been analysed. the relevance of assembling the porcine genomic sequence is justified both from the perspective of sustainable animal breeding and from the fact that the porcine model is an important research platform because of its anatomical, physiological, biochemical and metabolic similarities with man. examples of functional genomic studies aimed both at sustainable animal breeding and at exploiting the pig as a model for medical studies will be discussed. genomics refers to global, systematic and high-throughput approaches that allow the collection of large amounts of data and thus offer new possibilities for analysing and understanding biological processes. we will present some new knowledge related to reproduction in farm animals resulting from three different strategies. 
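the claim that pig is closer to human than mouse rests on comparing aligned sequences across the three species. a toy proxy for that comparison is per-column percent identity over an alignment; the fragments below are invented and gap handling is simplistic, but the direction of the comparison is the point.

```python
def percent_identity(a, b):
    """percent identity between two aligned sequences of equal
    length, ignoring columns where either sequence has a gap -- a
    toy proxy for the three-species alignment comparison."""
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    if not pairs:
        return 0.0
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

# hypothetical aligned fragments (not real genomic sequence)
human = "ATGGCCATTGTA"
pig   = "ATGGCCATCGTA"
mouse = "ATGACCTTTGTA"

pig_vs_human = percent_identity(human, pig)
mouse_vs_human = percent_identity(human, mouse)
```

in a genome-scale analysis this per-fragment identity would be aggregated over millions of shotgun reads and fed into a proper evolutionary model rather than compared directly.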
(1) functional analysis of gene and protein expression: transcriptome and proteome analysis allowed the identification of new genes and proteins whose expression is associated with the processes of ovarian follicular growth and atresia, as well as oocyte maturation, in bovine and porcine species, and with the maturation of spermatozoa in the different compartments of the epididymis. farm animals produce food as cost-effectively as possible; however, this may have negative side effects for their health and welfare. trade-off processes between production on the one hand and reproduction and health on the other play a crucial role. the principle of selective breeding for the best of naturally occurring variation has proven able to balance an increased level of production with quality of life for the animal. every year, the economic value of the genetic gain achieved by the breeders and carried over to the producers is 1.5% of the economic value of eu farm animal production. consequently, a conservative estimate of the gain from animal breeding is €1.2 billion per year in europe. recent developments, such as the sequencing of the genomes of the human, chicken and cow, together with high-throughput laboratory techniques, mean that there are new opportunities to enhance quality of life. the goal of this paper is to give an overview of the options offered by genomics for enhanced quality of life, with a focus on identifying relevant gene variants and technologies for large-scale tracking and tracing. selective breeding for the best of naturally occurring variation remains the same as in traditional systems, but by pinpointing the relevant gene variants through genomics it is possible to directly identify the animals best selected for high production without compromising health and welfare. 
the combination of full genome sequences, software tools, the study of functional physiological processes, cost-effective high-throughput snp genotyping and comparative mapping has the (proven) potential to identify relevant gene variants, e.g. for pork color, boar taint and general disease resistance. functional mutations have the direct option of application in breeding programs. unfortunately this is not the case for genetic markers, due to the cost of genotyping and inconsistent phenotypic effects. new technologies for snp genotyping are cost-effective and enable large-scale genotyping (1,000s of animals/day). a selection of the best technology and strategic use of these opportunities enable tracking and tracing. the application of this technology offers new opportunities for quality of life, both for animals and humans. background: studies have shown that prebiotic and probiotic consumption alters the gastrointestinal flora, modulates the immune system, inhibits genotoxicity and has a protective effect on colon carcinogenesis. however, the effect of synbiotic consumption on these parameters in subjects at risk of colon cancer has not until now been investigated. aim: to determine if a synbiotic (prebiotic and probiotics together) modulates cancer risk biomarkers in human subjects at risk of colon cancer. methods: a 12-week randomised, double-blind, placebo-controlled, ethically approved trial of a food supplement containing lactobacillus gg, bifidobacterium bb-12 and raftilose synergy 1 (prebiotic) was performed in 37 colon cancer subjects who had undergone 'curative resection'. faecal and blood samples were obtained before (t1, week 0), midway through (t2, 6 weeks) and following intervention (t3, 12 weeks). rectal biopsies were obtained at t1 and t3. the effect of synbiotic consumption on the faecal flora was assessed using standard plate count techniques. genotoxic damage was measured in single cells derived from biopsies using the comet assay. 
faecal water (fw) was prepared by diluting faeces 1:1 in dmem, followed by ultracentrifugation and sterile filtration. the genotoxic (comet assay) and cytotoxic potential (alamar blue assay) of fw were determined. peripheral blood mononuclear cells were isolated from blood and cytokine production in vitro was assayed by elisa. natural killer cell cytotoxic activity and the phagocytic and respiratory burst activity of monocytes and granulocytes in whole blood were determined by flow cytometry. results: in the synbiotic group faecal numbers of bifidobacteria significantly increased (p < 0.001) and lactobacilli increased, although not significantly (p = 0.0674), while coliforms decreased (p < 0.05). enterococci, clostridium perfringens and bacteroides were unaffected. in the placebo group bifidobacteria decreased (p < 0.001); the other bacterial groups were unaffected. in biopsies, genotoxic damage was increased in the placebo group at t3 versus t1 (p = 0.0301) but was unchanged in the synbiotic group. the genotoxic and cytotoxic potential of fw was unaltered. synbiotic consumption significantly increased (p < 0.05) ifn-γ production by pbmcs, but il-2, il-10, il-12 and tnf-α production was unaffected. natural killer cell, phagocytic and respiratory burst activities were unaltered. conclusion: synbiotic consumption did not have a strong immunomodulatory effect on the systemic immune system in this study, nor did it influence the genotoxic and cytotoxic potential of fw. however, synbiotic consumption altered the composition of the gut flora to a more beneficial composition as well as protecting against genotoxic damage in vivo, suggesting a protective effect of synbiotics against colon carcinogenesis. emma årsköld, malin svensson, halfdan grage, peter rådström, ed w.j. van niel applied microbiology, lund institute of technology, lund university, p.o. box 124, sweden lactobacillus reuteri is used today in a variety of dairy products as a probiotic bacterium. 
several lactic acid bacteria have the ability to produce different kinds of exopolysaccharides (eps), which have the potential to be used as an alternative biothickener. however, the yield of eps is too low to be profitable for the food industry. to optimise the environmental conditions for eps formation, a l. reuteri strain was chosen for a factorial design study. the factors used in this experiment were temperature (30-43 °c), ph (4.5-6.5) and sucrose concentration (50-150 g/l); for each factor three different values were chosen. the strain was grown in batch mode using a semi-defined medium at constant ph with sucrose as the carbon source. the results obtained with 27 fermentations revealed that the highest eps formation was found at 37 °c, ph 4.5 and 100 g/l of sucrose. sucrose did not further affect eps formation above a concentration of 100 g/l. temperature and ph were significant for eps formation, but only temperature was significant for growth. a central composite design study was chosen for further optimization of the ph and sucrose concentration for maximum eps formation. the gene expression of the sucrase enzyme responsible for eps formation was also investigated using qrt-pcr. the data were used to develop a model for growth and eps formation. dendritic cells (dc) play a pivotal immune-regulatory role in the th1, th2 and treg cell balance. dc are present in the gut mucosa and may thus be targets for modulation by gut microbes. here, we screened a large panel of human gut-derived lactobacillus and bifidobacterium spp. for dc-polarizing capacity: bone marrow-derived murine dc were exposed to lethally irradiated bacteria, and cytokine production and dc surface markers were analyzed. substantial differences were found among strains in their capacity to induce proinflammatory cytokines, while the differences for anti-inflammatory cytokines were less pronounced. 
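the 27 fermentations described in the l. reuteri abstract correspond to a 3×3×3 full factorial design. a minimal sketch of generating that run list: the boundary levels and the winning combination are from the abstract, but the middle ph level (5.5) and the yield function are illustrative assumptions.

```python
from itertools import product

# three levels per factor; 30/43 °c, ph 4.5/6.5 and 50/150 g/l are
# the ranges given in the abstract, the middle levels are assumed
temperatures = [30, 37, 43]   # °c
ph_values = [4.5, 5.5, 6.5]
sucrose = [50, 100, 150]      # g/l

# the full factorial design: every combination of factor levels
design = list(product(temperatures, ph_values, sucrose))

def best_run(design, eps_yield):
    """return the factor combination with the highest eps yield,
    where eps_yield maps a (temp, ph, sucrose) run to mg/l."""
    return max(design, key=eps_yield)

# hypothetical response surface peaking at 37 °c, ph 4.5, 100 g/l
def toy_yield(run):
    t, ph, s = run
    return -abs(t - 37) - 10 * abs(ph - 4.5) - 0.01 * abs(s - 100)

optimum = best_run(design, toy_yield)
```

in the actual study each of the 27 runs is a fermentation, and a central composite design is then centred near the best run to refine ph and sucrose, as the abstract describes.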
bifidobacteria were weak il-12, il-23 and tnf-α inducers, while both strong and weak cytokine-inducers were found among the strains of lactobacillus. remarkably, strains weak in il-12 induction inhibited the il-12, il-23 and tnf-α production induced by otherwise strong cytokine-inducing strains, while il-10 production remained unaffected. those lactobacilli with the greatest capacity to induce il-12 were also most effective in up-regulating surface markers. surface marker up-regulation was, however, reduced in the presence of weak il-12-inducing strains. in conclusion, human lactobacillus and bifidobacterium spp. differentially polarize dc maturation. thus, the potential exists for the th1/th2/treg-driving capacities of the gut dc to be modulated according to the composition of the gut flora, including ingested probiotics. cell surface-associated glycolytic enzymes from lactobacillus plantarum 299v mediate adhesion to human epithelial cells and extracellular matrix proteins s.m. madsen, j. glenting, a. vrang, p. ravn, h.k. riemann, h. israelsen, m.r. nørrelykke, a.m. hansen, m. antonsson, s. ahrné, h.c. beck bioneer a/s, hørsholm dk-2970, denmark among the main selection criteria of lactic acid bacteria for probiotic use, the ability to adhere to intestinal epithelial cells, mucus, or extracellular matrix proteins is considered important. using a proteome-based approach we identified a group of novel surface proteins that are non-covalently bound to the cell wall of the probiotic bacterium lactobacillus plantarum 299v. surface proteins were extracted and analysed by gel electrophoresis followed by mass spectrometry analysis. the surface proteins included glycolytic enzymes such as glyceraldehyde 3-phosphate dehydrogenase, which is usually a typical intracellular enzyme. a collection of lactobacillus species was screened, and the phenomenon of surface-associated glycolytic enzymes was found in many of the analyzed species. 
this is, to our knowledge, the first example of surface-associated glycolytic enzymes in probiotic bacteria. in pathogenic bacteria, however, these enzymes are well known, and their surface localization is involved in adhesion to human epithelial cells and invasion. we suspect that the presence of these enzymes on the surface of probiotic bacteria could prevent adhesion of pathogenic bacteria and possibly also be involved in other probiotic activities such as immune modulation. binding studies showed that the surface-associated glycolytic enzymes of lb. plantarum 299v were able to bind to caco-2 intestinal cells and extracellular matrix proteins such as fibronectin. scientific and industrial interest in exopolysaccharides (eps) synthesized by microorganisms and in their chemico-physical properties has been growing quickly in recent years. many strains of lactobacilli produce eps, allowing them to adhere to human mucosae and therefore to have probiotic effects such as the stimulation of the immune response; even antitumoral activity of these molecules has been claimed. furthermore, prebiotic actions of eps beneficially affect the health of the human host by improving the properties of the indigenous microflora. in this research we have studied two novel, interesting lactobacilli strains: lactobacillus plantarum dsmz 12028 and a particular human-isolated strain of lactobacillus crispatus, which have probiotic potential and are good eps and l(+)-lactic acid producers. the aim of this work has been to characterize these strains and their metabolites (eps, organic acids and bacteriocins), to establish them as probiotics and to study their ability to adhere to human cells. we have studied the physiology of l. plantarum and l. crispatus in shaking flasks as well as their optimal fermentation conditions to obtain high cell density cultures suitable for use as starters in the food industry and eventually for probiotic preparations. 
fermentation experiments have been performed in a bioreactor equipped with microfiltration (mf) modules using a semi-defined medium and various carbohydrates under different culture conditions (aeration, temperature). the kinetics of eps production have been followed and, according to the results, both strains have shown growth-related production ranging from 200 to 400 mg/l. in vitro studies concerning the ability of these strains to adhere to human mucosae are in progress, as well as the structural characterization of the exopolysaccharides. previous studies have shown that compression coating improves the storage stability of freeze-dried lactobacillus acidophilus, although this stability is related to the degree of cell injury, which in turn is related to the compression pressure used. compression coating has also been found to improve the survival of freeze-dried l. acidophilus during exposure to simulated gastric fluid (sgf). the aim of the present work is to create a compression-coated l. acidophilus formulation with targeted release at the terminal ileum and beginning of the colon in the human gastrointestinal tract. dissolution studies were performed using a phosphate buffer with a ph of 2 and 6.8, to simulate gastric fluid and intestinal fluid (sif), respectively. cell viability was monitored using multi-parameter flow cytometry (mpfc), together with traditional cfu/ml counts. mpfc was used to identify live, dead and stressed cell populations, using the fluorescent stains propidium iodide (pi), 3,3′-dihexyloxacarbocyanine iodide (dioc6(3)) and to-pro-3. results show that an enteric coating material, eudragit l100-55, is suitable both for compression coating and for enhancing the survival of cells when exposed to sif. pectin usp 100 has also been shown to promote targeted release of the cells. the opium poppy papaver somniferum contains more than 80 tetrahydrobenzylisoquinoline-derived alkaloids. 
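growth-related product formation of the kind reported above (200-400 mg/l eps) is commonly described with the luedeking-piret model; the abstract does not name a model, so the sketch below is an assumption: the non-growth-associated term is set to zero (dp/dt = α·dx/dt) and coupled to logistic biomass growth, with all parameter values hypothetical.

```python
def simulate(alpha=0.2, mu=0.5, x0=0.1, xmax=2.0, dt=0.1, t_end=24.0):
    """euler integration of logistic biomass growth coupled to
    purely growth-associated product formation (luedeking-piret
    with the maintenance term dropped); returns final biomass (g/l)
    and product (mg/l). all parameters are hypothetical."""
    x, p, t = x0, 0.0, 0.0
    while t < t_end:
        dx = mu * x * (1 - x / xmax) * dt   # logistic growth step
        x += dx
        p += alpha * dx * 1000              # alpha g product per g biomass, in mg
        t += dt
    return x, p

biomass, eps = simulate()
```

with these toy parameters the simulated eps titre ends up in the few-hundred-mg/l range, and production stops when growth stops, which is the qualitative signature of growth-associated formation.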
it is the source of the narcotic analgesics codeine and morphine, which accumulate in specialized internal secretory cells called laticifers. in the aerial parts of the plant, the laticifer cells are anastomosed, forming an articulated network. laticifers are found associated with the vascular bundle in all plant parts. the morphinan alkaloids morphine, codeine and thebaine are found both in roots and in aerial plant parts and specifically accumulate in vesicles within laticifers. the benzo[c]phenanthridine alkaloids sanguinarine and 10-hydroxysanguinarine are found in root tissue. the syntheses of sanguinarine and of the tetrahydrobenzylisoquinoline latex alkaloid laudanine are completely understood at the enzyme level. nearly all enzymes of morphine biosynthesis have also been described. in more recent years, cdnas encoding enzymes of alkaloid biosynthesis in p. somniferum have been isolated and characterized. the cell-specific localization of several of the enzymes of morphine, sanguinarine and laudanine biosynthesis has also been described. with knowledge of many of the genes of alkaloid formation and their sites of expression, the metabolic engineering of p. somniferum for tailored alkaloid profiles is now being undertaken. an agrobacterium tumefaciens-based transformation and regeneration protocol has recently been developed specifically for narcotic tasmanian cultivars. the various cdnas encoding genes of alkaloid biosynthesis in p. somniferum are being systematically reintroduced into the plant to achieve engineered plants with altered alkaloid profiles. the first results have now been obtained with sense and antisense genes stably expressed in a regenerated tasmanian cultivar. 
the ultimate goal of exploiting the genes of alkaloid biosynthesis is to produce transgenic medicinal plants of specific alkaloid content that would facilitate commercial production and improve our understanding of the factors that regulate biosynthesis, as well as provide experimental systems with which to investigate the ecological role of alkaloids in planta.

integrated transcript and metabolite profiling for gene discovery in plant natural product pathways. richard a. dixon, lahoucine achnine, bettina deavours, mohammed farag, marina naoumkina, lloyd w. sumner, plant biology division, samuel roberts noble foundation, ardmore, ok 73401, usa. the rich diversity of chemical structures found in the plant kingdom arises in large part from a limited number of basic chemical scaffolds (e.g. terpene, polyketide) that are modified by a limited number of chemical substitution types (hydroxylation, glycosylation, acylation, prenylation, o-methylation, etc.). much of the diversity is brought about by the substrate- and/or regio-specificities of the substitution enzymes. in contrast to the large collections of gene sequence and transcript level data available on-line, little detailed information exists on the plant (secondary) metabolome. promiscuity of substrate specificity in vitro may complicate attempts to assign functions to genes of secondary metabolism accessible to researchers through various cdna library collections. using the isoflavonoid and triterpene pathways in medicago species as examples, we describe how integrated metabolite and transcript profiling approaches can aid functional genomics, help explain metabolic regulation, and provide tools for assessing the impacts of genetic modifications in plant secondary metabolism.

focused and non-targeted approaches were used to assess the impact associated with the introduction of new high-flux pathways in arabidopsis thaliana by genetic engineering. transgenic a. 
thaliana plants expressing the entire biosynthetic pathway for the tyrosine-derived cyanogenic glucoside dhurrin, as accomplished by insertion of cyp79a1, cyp71e1, and ugt85b1 from sorghum bicolor, accumulated 4% dry weight dhurrin with marginal inadvertent effects on plant morphology, free amino acid pools, transcriptome and metabolome. in a similar manner, plants expressing only cyp79a1 accumulated 3% dry weight of the novel tyrosine-derived glucosinolate, p-hydroxybenzylglucosinolate, with no morphological pleiotropic effects. in contrast, insertion of cyp79a1 plus cyp71e1 resulted in stunted plants, transcriptome alterations, accumulation of numerous new glucosides derived from detoxification of intermediates in the dhurrin pathway, and loss of the brassicaceae-specific uv protectants sinapoyl glucose and sinapoyl malate as well as kaempferol glucosides. the accumulation of new glucosides in the plants expressing cyp79a1 and cyp71e1 was not accompanied by induction of glycosyltransferases, demonstrating that plants are constantly prepared to detoxify novel xenobiotics. the pleiotropic effects observed in plants expressing sorghum cyp79a1 and cyp71e1 were complemented by retransformation with s. bicolor ugt85b1. accordingly, insertion of high-flux pathways directing synthesis and intracellular storage of high amounts of natural products is achievable in transgenic plants with marginal inadvertent effects.

5′ utr introns in arabidopsis thaliana - distinct function in gene transcription dynamics? jeppe madura larsen, brian stougaard vad, søren mølgaard, kell andersen, mads n. davidsen, klaus d. grasser, department of life sciences, aalborg university, 9000 aalborg, denmark. in recent years it has been shown that introns can to some extent regulate expression in the eukaryotic cell. insertion of introns in the 5′ utr of arabidopsis genes has been shown to increase gene expression at both the rna and protein level. 
in order to investigate whether 5′ utr introns have distinct characteristics, we analysed them in the well-annotated arabidopsis thaliana genome published by the arabidopsis genome initiative. 12,898 loci annotated with full-length cdnas were analysed and 1989 loci (15.4%) containing 5′ utr introns were isolated. we studied whether the genes containing these introns showed different patterns in alternative splicing and gene function (gene ontology classification) compared to the remaining genes not containing this intron type. 1802 of the isolated loci (90.6%) contained only one 5′ utr intron and these were used for further analysis, in which it was investigated whether the 5′ utr introns had a characteristic size distribution. genes containing transcripts with 5′ utr introns were more subject to alternative splicing (9.63% versus 2.39%) and tended to be more involved in cell regulatory functions compared to genes without this intron type. it was also found that 5′ utr introns were characteristically size-distributed. we identified three predominant sizes of approximately 110, 270 and 390 bp, compared to only one for orf introns. this suggests widespread multiple splicing events in 5′ utr introns. the results presented here suggest that 5′ utr introns have distinct characteristics and function in gene transcription dynamics.

cytochrome p450 monooxygenases appear to be involved in the biosynthetic pathways of a large variety of primary and secondary metabolites in microbial, animal and plant cells. in particular, cytochrome p450scc catalyzes the conversion of cholesterol into pregnenolone - the precursor of all steroid hormones in mammalian steroidogenic tissues. cytochromes p450 are also involved in the biosynthesis of different plant steroid derivatives that play an important role in the regulation of plant growth and development. therefore, investigation of the possible influence of cytochrome p450scc expression on the plant regulatory system is of great interest. 
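the genome-wide counts reported in the 5′ utr intron abstract above (15.4% of loci with 5′ utr introns, higher alternative-splicing rates, predominant sizes near 110, 270 and 390 bp) amount to simple tallies over an annotation table. a minimal sketch of that kind of tally, assuming a toy annotation; the locus ids, flags and intron lengths below are illustrative, not data from the study:

```python
from collections import Counter

# hypothetical annotation records:
# (locus id, has 5' utr intron, alternatively spliced, 5' utr intron length in bp)
loci = [
    ("At1g01010", True,  True,  112),
    ("At1g01020", True,  False, 268),
    ("At1g01030", False, False, 0),
    ("At1g01040", True,  False, 391),
    ("At1g01050", False, True,  0),
    ("At1g01060", False, False, 0),
]

with_5utr = [rec for rec in loci if rec[1]]
frac_5utr = len(with_5utr) / len(loci)                 # cf. 15.4% in the study

# alternative-splicing rate in the two groups (cf. 9.63% vs 2.39%)
alt_with = sum(rec[2] for rec in with_5utr) / len(with_5utr)
alt_without = sum(rec[2] for rec in loci if not rec[1]) / (len(loci) - len(with_5utr))

# bin intron sizes into 100-bp classes to look for predominant sizes
size_bins = Counter((rec[3] // 100) * 100 for rec in with_5utr)

print(frac_5utr, round(alt_with, 3), round(alt_without, 3), size_bins.most_common())
```

the real analysis would of course run over the full arabidopsis genome initiative annotation rather than six hand-written records.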
this report is devoted to the investigation of transgenic tobacco plants generated by transformation with the recombinant plasmid pgbp450f constitutively expressing the cyp11a1 cdna of bovine cytochrome p450scc. the transgenic state of the plants was confirmed by southern blot analysis. the transgenic plants are phenotypically different from the control ones. in particular, they exhibit a substantially higher growth rate and are larger than wild-type plants. we have demonstrated that incubation of fragments of the transgenic plants' leaves in medium containing [14c]-labeled cholesterol results in formation of a radioactively labeled product with chromatographic mobility corresponding to pregnenolone. the presence of this metabolite in the steroid fraction of lipid extracts obtained from the transgenic plants' leaves was confirmed by gas chromatography-mass spectrometry (gc-ms). the data obtained indicate that cytochrome p450scc synthesized in transgenic plants displays its specific catalytic activity.

biotechnological production of glucose isomerase enzyme with streptomyces olivochromogenes for production of fructose syrup from hydrol. m. hashemiravan, a. sadat barikani, department of food science and technology, azad university (pishva, varamin unit), institute of food science and agriculture, tehran, iran. the use of glucose isomerase for isomerization and production of fructose syrup was investigated with the selected industrial strain streptomyces olivochromogenes ptcc 1457. growth of the microorganism and production of the enzyme in different culture media were studied, and the effects of different parameters such as phosphate and aeration were evaluated. growth of the microorganism at 28 °c resulted in production of a high amount of enzyme. enzyme production was compared in two culture media (a and b); medium (a) was selected for its higher enzyme production. 
the highest amount of enzyme production was seen in medium a, 36.4 giu/ml after 80 h. the use of baffles in culture flasks increased enzyme production about fourfold. enzyme production was increased about 1.25-fold in phosphate-deficient medium. the cells containing the enzyme (intracellular glucose isomerase) were separated by centrifugation, and extraction and release of the enzyme were performed by ultrasonication, a physical-mechanical method. this method released about 89.9% of the total intracellular enzyme. the best sonication time was found to be 4 min. experiments showed that the optimum ph and temperature of the enzyme were 7.5-8 and 80 °c, respectively. the highest activity of the enzyme was observed at ph 5.8 for up to 120 min. magnesium ions proved necessary for the isomerization reaction, and omission of this ion caused a decrease of enzyme activity in the isomerization process, although the ion was not essential for enzyme activity as such. results showed that treatment of glucose syrup with the enzyme at temperatures of 40, 60 and 80 °c caused 48%, 50% and 52% isomerization, respectively.

efficiency increase of high acetic acid production with the use of mutation of iranian native acetobacteraceae strains. m. hashemiravan 1, a. alirezasadat barikani 2: 1 department of food science and technology, azad university (pishva, varamin unit), institute of food science and agriculture, iran; 2 young iranian researcher's club, tehran, iran. the first step in the present research was the isolation of native acetobacteraceae strains from fruits (such as grape or apple) and fresh vinegar. this separation was done with the use of effective isolation methods and optimized media. ten strains were isolated effectively; biochemical tests were performed, and each strain was identified, classified and assigned a special code. 
two methods have been used for mutation: (a) mutation with ultraviolet radiation; (b) mutation with nitrous acid. in the first method the microbial cells were treated with uv radiation for periods of 15, 30, 45 and 60 s, and the effect of the uv mutation was assessed. as a result, the period of 45 s was determined to be the optimum mutation period. in the second method, the microbial cells were first washed with 0.2 m acetate buffer at ph 4.5, then 10 ml of 0.07 m nitrous acid was added and mixed for 2-10 min. finally, sampling was done at 2, 4, 6, 8 and 10 min and the samples were transferred to plates containing ethanol-phenol red-agar medium. the two methods have been compared with each other, and each has its own advantages and disadvantages. the mutation pathway in method (b) is more stable and directed, whereas mutation with the uv radiation method changes the position of thymine-cytosine bases completely at random. finally, the best mutant was inoculated into a medium containing 16% alcohol, acetozyme gz 0.6 g/l, acetozyme d 1 g/l, acetic acid 1.5% and 1 l double-distilled water. acetic acid was produced at a rate of 16 g/100 ml. the mutant strains were also examined with a scanning electron microscope (sem) and interesting photographs were taken of the mutant cells. the experiments were planned using the taguchi statistical method (qualitek 4). this research was performed at irost as a ph.d. thesis in food biotechnology.

isotachophoresis has almost exclusively been applied for concentrating and stacking sample ions before zone electrophoretic separation of proteins. this study attempts to apply microfluidic isotachophoresis (itp) as a high-resolution analytical method for proteins. 
beta-lactoglobulin and other milk proteins with slightly different pi were labelled with fluorescent red 646 and analysed on the micralyne tk system using microfluidic glass chips, either with simple cross (sc) or double cross (tt) injection, or designed 2d-itp-cze chips with double tt injection and an sc for transfer to the second-dimension cze channel, efficiently non-covalently coated with 0.6% (w/v) epoxy-polydimethylacrylamide to lower electroendoosmosis. capillary zone electrophoresis (cze) in borate or phosphate buffer was reproducibly performed for more than 75 consecutive runs using upchurch tm reservoirs glued to the wells to enable larger buffer volumes and greater run-to-run stability. finally, isotachophoretic anionic separations of the proteins were done using phosphate (ph 8.1) or chloride (ph 8.5) as leading ion and ε-aminocaproic acid (ph 8.9) as terminating ion. the effect of narrow-cut ampholytes as spacers needs further investigation. the prospective aim is to combine the migrating itp-separated zones with second-dimension capillary zone electrophoresis as a new microfluidic proteomic 2d-analysis.

genotypical differences affecting the response of pisum sativum to differing boron/iron applications. e.e. hakki, u. zeynep, m. hamurcu, a. tamkoc, m.b. babaoglu, s. gezgin, department of field crops, faculty of agriculture, selcuk university, kampus, konya 42079, turkey. e-mail: eehakki@selcuk.edu.tr (e.e. hakki). boron and iron are among the microelements required for the proper development of the vegetative and generative tissues of plants. though iron is present in high amounts in almost all soil types, its bioavailability to crops is extremely low; hence most plants face an iron deficiency problem, which on the one hand reduces crop production and on the other passes nutritional problems to humans through the food chain. boron is also among the most problematic micronutrients of the major crop plantation areas of turkey. 
both deficiency and toxicity problems exist in a total of about 50% of the central anatolian soils, where pea is among the legumes cultivated. the aims of our studies are the application of varying levels of boron and iron combinations in the greenhouse, the analysis of plant acquisition via icp-aes, and the determination of the effects of the element combinations at both the morphological and molecular levels. the genetic bases of the response differences of plant genotypes to b and/or fe were investigated through the application of molecular marker techniques. considerable growth rate and stem size differences were detected among the parents (wild type versus cultivar) and the f9 plants. the presence of genotypes efficient at high micronutrient levels is expected to help us increase the cultivation of the crop in problematic areas as well as to explore the molecular bases of the microelement uptake mechanisms.

increase in sulfite production by accelerating sulfate uptake in brewing yeast. t. fujimura, y. kodama, y. nakao, n. nakamura, w. miki, institute for advanced technology, suntory ltd., mishima-gun, osaka 618-8503, japan. sulfite plays a role as an antioxidant, which stabilizes beer flavor. therefore, it is important to control the sulfite concentration during fermentation. sulfite is produced as an intermediate in the sulfate assimilation pathway in yeast. we have already reported that over-expression of a lager yeast-specific ssu1 gene, encoding a sulfite efflux pump, leads to increased sulfite production (fujimura et al., 2003). in the present work, we have clarified that there are two types of sul2 genes (scsul2 and non-scsul2), each encoding a high-affinity sulfate permease, in lager brewing yeast. identities of 80% and 86% are found by comparing the dna sequences and the deduced amino acid sequences, respectively. 
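the 80% dna and 86% protein identity figures quoted above are the usual pairwise percent-identity measure over an alignment: matching columns divided by aligned (non-gap) columns. a minimal sketch of that calculation; the short aligned fragments below are made up for illustration and are not the real sul2 sequences:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent of alignment columns with identical residues,
    ignoring columns where either sequence has a gap ('-')."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# hypothetical aligned dna fragments (illustrative only)
seq1 = "ATGGCTTCA-GGTTTA"
seq2 = "ATGGCATCATGGTTTG"
print(round(percent_identity(seq1, seq2), 1))  # 86.7 for these toy fragments
```

in practice the alignment itself would come from a pairwise alignment tool; only the identity arithmetic is shown here.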
a comparative functional analysis of the two genes has been performed, aimed at achieving further increases in sulfite production by accelerating sulfate uptake. over-expression of scsul2 and non-scsul2 has been achieved by transformation of the lager brewing yeast saccharomyces pastorianus. experiments have been done with and without expression of non-scssu1. the resultant transformants have been evaluated by fermentation tests in wort. over-expression of either scsul2 or non-scsul2 alone failed to show a significant effect on sulfite formation. a combination of over-expression of non-scsul2 and non-scssu1 resulted in two-fold higher sulfite production compared with over-expression of only non-scssu1 and four-fold higher compared with the parental strain. these results suggest that the non-sc gene types significantly contribute to sulfite production in lager brewing yeast. fujimura, t., et al., 2003.

phytases (myo-inositol hexakisphosphate phosphohydrolases) catalyse the release of phosphate from phytate (myo-inositol hexakisphosphate), the predominant form of phosphorus in cereal grains, oilseeds and legumes. possible applications of phytases have been suggested in animal nutrition, to increase mineral bioavailability and to decrease phosphate pollution in areas of intensive livestock management, and in human health. zea mays is one of the cereals that contain a high amount of phytate as the major phosphate storage compound. over 200 bacteria were isolated and screened for phytases from the halosphere, rhizosphere and endophytes of malaysian maize plantations. the phytase activity of the isolates was screened by a modification of the ammonium molybdate method. the highest extracellular phytase activity was detected from bacteria isolated from the endophytes of the maize root. in this paper, results for 24 isolates chosen for media, temperature and ph optimization will be presented. 
production of plant proteinase from jack fruit seeds (artocarpus integrifolis) and its influence on rheological and sensory characteristics of low-fat yogurt. el-sayed el-tanboly, dairy sciences department, national research centre, dokki, cairo, egypt. the addition of a proteolytic enzyme extracted from jack fruit (artocarpus integrifolis), in combination with the fermentation process, was tried in the manufacture of low-fat yogurts to improve yogurt flavour and rheological properties. experimental yogurt milks contained crude plant proteinase extract at 0 (control), 3.9 (t1), 7.8 (t2) and 11.7 (t3) units/ml milk. the ph of the product treated with crude proteinase was lower than that of the control. the rate of acidity development during storage slightly increased with increasing level of crude proteinase addition and with progress of the yogurt storage period. the proteolytic activity of all yogurts gradually increased until the end of the storage period (15 days). yogurts made from milk treated with crude proteinase preparations were less firm than the control at all storage periods, with t3 the least firm after 15 days of storage at 20.17 g/100 g. in general, increasing the units of plant proteinase preparation decreased the firmness. on the other hand, yogurt made from milk pretreated with plant proteinase had higher syneresis and apparent viscosity than the untreated product. the greatest viscosities were found in t2 and t3, at 433 and 479 mpa s, respectively, compared with 299 mpa s for the control at 15 days of storage. the results indicated an inverse relationship between the number of units of crude proteinase preparation and the susceptibility of yogurt to syneresis. t2 gained the highest sensory scores (85 points), followed by the control (81.5 points), after 15 days of storage, while the t3 yogurt scored lowest at 75 points. 
from the foregoing results, it is recommended to use jack fruit (artocarpus integrifolis) as a source of plant proteinases and to utilize it to develop a high-quality yogurt at a level of 7.8 units of plant proteinases/ml milk.

an effective process for the chemical-biotechnological utilization of distilled white lees was studied. a first treatment with hydrochloric acid allowed the solubilisation of tartaric acid. the influence of temperature, amount of hcl and reaction time was considered through an experimental design. under the optimal conditions, 77 g/l of tartaric acid was recovered from white distilled lees and 45.6 g/l from red distilled lees. the tartaric acid was precipitated as calcium tartrate so that it could be isolated from the rest of the raw material compounds. the solid residue was used as an economic nutrient for lactic acid production by lactobacillus pentosus using trimming wastes as substrate. the lactic acid concentrations and volumetric productivities achieved were similar to those obtained using distilled lees without tartaric acid recovery as nutrient.

toasted wine was traditionally produced in galicia, in the northwest of spain, and nowadays this technique is being recovered. after harvesting, grapes are air-dried in order to concentrate sugars, acids and flavor compounds. the raisins are pressed to obtain a must with high sugar concentrations. two different grape musts were prepared, concentrating the sugars up to 37 and 57 °brix, respectively. in order to get a better knowledge of the problems involved, synthetic media simulating the grape musts were prepared. these musts were used to optimize the initial sugar concentration, the amount of nutrients required, the optimum temperature at which to carry out the fermentation and the influence of the type and amount of yeast. under the best conditions some fermentations with grape must were carried out to produce wines with intense aroma and flavor notes and high residual sugar concentrations.

in this study, a bacillus sp. 
e1 strain was isolated from korean-style fermented soybean paste and found to produce a biological response modifier (brm). the brm activated b cells selectively. the strain was identified as bacillus licheniformis e1. the brm was purified by ion-exchange chromatography and gel filtration. chemical properties of the brm: its molecular weight was estimated to be about 1,594,000 da. the sugar content of the brm was 33.0% (w/w), with glucosamine (35.1 mol%) at the highest level. the protein content of the brm was 4.3% (w/w), with serine (17.2 mol%) at the highest level. the infrared absorption spectrum showed the characteristics of a glycoprotein. biological properties of the brm: by immunofluorescence assay, the brm isolated from fermented soybean paste was similar to that of bacillus licheniformis e1. we confirmed that the brm was a capsular substance of b. licheniformis e1.

potato nitrogen concentrate (pnc) is a highly viscous liquid with a high complex nitrogen content produced from the protein fraction in potato starch extraction. the concentrated extract is rich in minerals and α-amino nitrogen. although α-amylase nowadays is mainly produced using bacillus production systems, there is still considerable demand for fungal α-amylase of aspergillus oryzae origin. the aim of the experiments reported here was to investigate whether pnc can replace commonly used complex nitrogen sources in the production of fungal α-amylase. the following data were measured in pnc pretreated by diluting to 1/2 and clarifying by centrifugation. total-n: 8.4 g n/l; α-amino-n: 2.8% (w/v) (as glycine); soluble protein (bradford): 51.2 mg/l (as bsa); total carbohydrates: 110.0 g/l; reducing sugars: 5.6 g/l; dry weight: 57.3% (w/w). in the following experiments nitrogen sources were replaced on the basis of their α-amino nitrogen content. the carbon source for all experiments was maize starch. the formation of α-amylase by a. 
oryzae atcc 1011 in shake flasks - using pnc (centrifuged or not), yeast extract, malt extract, casein hydrolysate or meat extract - was compared to "standard" cultivation with corn steep liquor. the experiments showed only small differences in α-amylase titers among the complex nitrogen sources, and no remarkable differences were observed in the resulting biomass. in general no differences in enzyme productivity and biomass formation could be seen after 50 h of incubation. the bench-top bioreactor experiments in particular indicated an optimal fermentation time of about 100 h. cultivations of a. oryzae atcc 1011 were carried out in bench-top bioreactors. comparing cultivations in a medium with pnc as the sole complex nitrogen source to one containing csl as such, no significant differences in either the formation and amount of α-amylase or the fungal growth were observed. thus pnc might be able to replace complex nitrogen sources such as csl, or even the more expensive yeast extract and casein hydrolysate, in fungal amylase production systems.

glucuronate reductase (ec 1.1.1.19) is involved in the metabolism of inositol and catalyzes the conversion of d-glucuronic acid to l-gulonic acid with nadph as a cosubstrate, subsequent to the oxidation of inositol to glucuronic acid by the enzyme inositol oxygenase. although the yeast sporobolomyces oryzicola (nakase and suzuki, 1986) is not able to grow on inositol as the sole carbon source, intracellular glucuronate reductase can be found in cells grown in a medium containing d-glucuronic acid. the enzyme could be a useful tool in the design of a specific quantitative assay for glucuronic acid, e.g. in so-called energy drinks. the organism was grown in media containing either glucose and glucuronic acid or only glucuronic acid, with difco yeast nitrogen base. whereas growth on both media was similar in shake flask culture, hardly any growth in either medium was observed in bench-top bioreactors. 
the influence of dissolved oxygen tension was investigated and the relevant data will be shown. the formation of intracellular glucuronate reductase activity by sp. oryzicola is inducible by media containing glucuronic acid; no activity is found in cells grown in a medium containing only glucose as the carbon source. besides the activity against d-glucuronic acid, activities against 5-ketogluconate and - at very low levels - against galacturonic acid and the lactone of glucuronic acid were detected. the enzyme activity is stable up to 35 °c. the ph has relatively little influence on the activity against glucuronate, whereas the reduction rate of 5-ketogluconic acid is optimal at ph 7.0-7.5, with significantly lower values at ph 6.0 and 8.0, respectively. data on the kinetics of the conversion of both glucuronate and 5-ketogluconate will be shown. nakase, t., suzuki, m., 1986. j. gen. appl. microbiol. 32, 149-155.

the multiple nutritional and functional impacts of food fermentation on human health have been widely accepted (reddy and pierson, 1994; hugenholtz et al., 2002). however, the role of the microorganisms involved in the nutritional effect of the fermented food is still not well defined and the mechanisms involved are still largely unknown. the present study investigated iron bioavailability in carrot juice fermented by two selected lab strains, l. pentosus fsc1 and ln. mesenteroides fsc2. after digestion by gi enzymes, the juice was supplied to fully differentiated caco-2 cells to study iron uptake and transepithelial transport by caco-2 cells from the digested juice. our data revealed strain-specific changes in iron bioavailability in carrot juice fermented by these two strains. after in vitro digestion with pepsin and pancreatic-bile enzymes, the best yield of soluble iron was from the ln. mesenteroides fsc2 fermented juice. surprisingly, the l. 
pentosus fsc1 fermented juice yielded about five times higher iron uptake compared to fresh juice, while the ln. mesenteroides fsc2 fermented juice was not significantly different from the fresh juice. interestingly, the transepithelial transfer of iron across the cell line was, however, better from ln. mesenteroides fsc2 fermented juice than from l. pentosus fsc1 fermented juice. to summarise, our study showed that the level of soluble iron after in vitro digestion does not necessarily indicate iron absorption, especially in the case of lab-fermented food. the data on improved iron uptake from l. pentosus fsc1 fermented juice indicate the existence of promoter(s) of iron absorption in such juice that are not related to the production of organic acids or the ph-lowering effect.

peng zhang, herve vanderschuren, martin stupak, wilhelm gruissem, institute of plant sciences, 8092 zurich. the tropical root crop cassava (manihot esculenta crantz) is a major source of food for approximately 1 billion people worldwide. in sub-saharan africa, more than 200 million people rely on cassava as their major source of dietary energy. in many parts of africa and latin america, cassava leaves are a vegetable source for daily intake. cassava is grown mostly by poor farmers under marginal environmental conditions and in areas where few other crops can sustain competitive yields. the crop is therefore fundamental for subsistence farming and food security, but it is also very susceptible to stresses common in the areas and conditions where it grows. in many parts of africa, reliable cassava production is strongly impacted by infections with the african cassava mosaic geminiviruses (cmgs), a rapidly spreading disease that causes large yield losses. in the coastal areas of east africa, cassava production is now threatened by another devastating disease, cassava brown streak disease (cbsd). cassava plants are also frequently attacked by many pests, such as the cassava hornworm and stemborers. 
several reports also indicate that greater leaf longevity, especially under drought conditions, could be important for increasing yields and/or the stability of production in cassava, as well as improving access to an important nutrient source. conventional breeding efforts have attempted to address the constraints to cassava production, but with limited success. the new tools of biotechnology can change this situation by offering new approaches to the challenges of cassava. these new technologies have the potential to make cassava much more productive, a better source of nutrients, and profitable to grow, hence greatly contributing to the sustainable development of tropical agriculture. recently we have developed biotech cassava with value-added traits, including resistance to cassava mosaic virus, prolonged leaf life and insect resistance. new strategies are also being explored to increase the protein content of cassava storage roots. we are currently undertaking pilot studies with two teams of leading scientists and experts for projects to test acmv-resistant transgenic cassava lines in africa and lines with extended leaf retention at ciat, colombia, under field conditions. this development of substantially equivalent improved transgenic cassava lines is part of a larger study to analyze the need, effectiveness and biosafety of biotech cassava for agricultural production. the goal of the pilot studies will be the development and coordination of a broader project that produces important and novel scientific results, valuable information on the need and impact of biotechnology at the subsistence farming level, and a sound scientific basis for the development of guidelines for biosafety assessments and the release of transgenic organisms into the environment and agricultural production in africa and latin american countries.

this study was conducted to reveal the effects of different pretreatments on obtaining haploid plants by using anther culture in the pepper capsicum annuum l. 
cultivars demre sivrisi and sirena. buds were collected at the uninucleate microspore stage. anthers collected from buds were cultured on ms medium containing different hormones at different concentrations. the results revealed that pre-treating sirena anthers with cold at +4 °c for 24 h and then keeping them in darkness at 25 °c for 1 week gave good results. in the case of demre sivrisi, a cold pretreatment at +4 °c for 48 h followed by darkness at 25 °c for 1 week gave good results. on the other hand, omitting the cold pretreatment resulted in low embryo formation. similar results were also observed for anthers kept at 35 °c for 1 week: callus was produced in some petri dishes but no regeneration was observed. in conclusion, since omitting the cold pretreatment resulted in low embryo formation, cold pretreatment should be applied to anthers in pepper anther culture studies.

the yeast d. hansenii ufv-170 was tested in this work in batch experiments in synthetic media at a constant initial substrate concentration (100 g l⁻¹) under variable oxygenation conditions. to get additional information on its fermentative metabolism, a stoichiometric network was proposed on the basis of the general knowledge available in the literature on xylose metabolism in pentose-fermenting yeasts and the specificities of xr and xdh activities in d. hansenii, and checked through a bioenergetic study performed using the experimental data of product and substrate concentrations. it can be stressed that under strongly oxygen-limited conditions xylitol production was negligible, whereas under semi-aerobic conditions maximum xylitol production (p_max = 76.6 g l⁻¹) and yield (y_p/s = 0.73 g g⁻¹) were obtained. a progressive decrease in these parameters was observed under fully aerobic conditions, suggesting that xylitol-producing yeasts require limited oxygen conditions, the optimum being species-dependent.
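the reported xylitol yield can be related to the concentrations as follows; a minimal sketch, assuming near-complete consumption of the 100 g l⁻¹ initial xylose (the reported y_p/s of 0.73 g g⁻¹ was presumably computed from the measured consumed substrate, which is not given here):

```python
def product_yield(p_final, s_consumed):
    """Product yield Y_p/s (g product formed per g substrate consumed)."""
    return p_final / s_consumed

# Values reported for D. hansenii UFV-170 under semi-aerobic conditions:
p_max = 76.6   # g/L xylitol produced
s0 = 100.0     # g/L initial xylose (assumed almost fully consumed)
print(round(product_yield(p_max, s0), 2))  # ≈ 0.77, close to the reported 0.73 g/g
```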
the proposed model, which utilizes the experimental specific rates of substrate consumption and product formation, allows the main bioenergetic parameters to be estimated. besides, it proved to be an effective tool to investigate different metabolic situations and showed how they can influence the flux distribution of the carbon source and the bioenergetics of this biosystem.

the effect of disinfectants on fungi. anne svendsen, pernille skouboe, bioneer a/s, hørsholm dk-2970, denmark. prevention of mould spoilage of foods can only be carried out successfully if the species actually spoiling the food product are known. a very limited number of fungal species has been associated with the spoilage of each food category. proper disinfection of production facilities is very important to avoid mould spoilage. resistance of moulds to disinfectant treatments is known, and different species have shown different responses to the same disinfectant. to obtain proper disinfection it is important to know the resistance of the spoilage fungi against different disinfectants. in this study the effect of disinfectants on the spoilage fungi of cheese, rye bread, liver paté and fruit juice was investigated. in collaboration with five food companies, the dominant species responsible for spoilage of each food product were isolated and used for testing. commercial disinfectants and disinfectants under development were tested. tests were performed in suspension and on surfaces; the methods used were modified after en 1650 and en 13697. considerable variability in fungicidal effect among the species was observed. some disinfectants were ineffective at low temperature. some disinfectants showed different effects in suspension and on surfaces, resulting in an effective kill in suspension and almost no effect on the surface.
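suspension tests such as those modified after en 1650 are typically evaluated as a decimal log reduction of viable counts; a minimal sketch with hypothetical counts (the standard's pass criterion is not reproduced here):

```python
import math

def log_reduction(n0, n_survivors):
    """Decimal log reduction of viable counts (cfu/ml) after a
    disinfectant contact time, as used in suspension/surface tests."""
    return math.log10(n0 / n_survivors)

# Hypothetical counts: effective kill in suspension vs. almost none on a surface
print(round(log_reduction(1e7, 1e2), 1))  # 5.0 -> strong fungicidal effect
print(round(log_reduction(1e7, 4e6), 1))  # 0.4 -> almost no effect
```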
the identification of effective disinfectants in the food industry includes: (1) testing against the specific spoilage species of the food product, (2) testing on surfaces, not only in suspension, (3) test parameters adapted to the food manufacturing plant.

intensified research efforts in recent years confirm the major importance of the microbial flora in the gastro-intestinal tract for human health. ingestion of prebiotic oligosaccharides increases the number of desirable bacteria such as bifidobacteria and lactobacilli in the colon. we are looking at beta-galactosidases from lactobacillus spp. for the production of galacto-oligosaccharides (gos) because we speculate that the enzymes of probiotics will form gos with high prebiotic potential. in the present study, purified beta-galactosidases of selected lactobacillus strains were used for the production of gos from lactose. different enzyme reactor set-ups, both discontinuous and continuous, were tested and compared. temperatures up to 37 °c and ph values between 6 and 6.5 were required for satisfactory enzyme stability during the process. enzyme source, substrate concentration and the level of substrate conversion were found to be critical process parameters for gos yields and composition. yields of up to 40% (w/w) of total sugars were achieved when the initial lactose concentration was 200 g/l. capillary electrophoresis (ce) and hplc with pulsed amperometric detection were the analytical tools for investigating the influence of reactor type, enzyme source and conversion level on gos composition. the prebiotics market is increasing rapidly and is expected to more than double by 2010, to about 180 million euro world-wide. therefore, the development of enzymatic processes on an industrial scale is a high-priority goal of our research.

starter addition does not always succeed in improving the standardisation and quality of the complex sensory properties of traditional fermented foods.
in many cases the added strains do not grow as well as the environmental strains present in the production plant. here, a method of geometric simplification (by dichotomy) of a complex ecosystem found on a raw-milk livarot (82 strains) was tested on cheese curd. in a limited number of successive cultures, 40 out of 82, then 20 out of 40 and finally 10 out of 20 strains were selected on the basis of two criteria: (i) respect of the taxonomic proportions, and (ii) generation by the daughter ecosystems of an odour close to that of the mother ecosystem. finally, a sub-ecosystem of 10 strains gave an odour similar to that of the more complex mixture. the use of molecular methods (pcr-sscp) made it possible to follow the main species growing. mother and daughter ecosystems were characterized by sensory analysis and gc-ms. probably because of an important redundancy of the strain functions, the method was very efficient. this method may considerably improve the design of mixtures of strains and species used in the fermented food industry.

effect of the dilution rate on the exopolysaccharide production by bifidobacterium longum atcc 15707. c. shene, m. rubilar, s. bravo, universidad de la frontera, chemical engineering, av. francisco salazar 01145, casilla 54-d, temuco, chile. exopolysaccharide (eps)-producing lactic acid bacteria are used in the dairy industry (cheese and yogurt) due to the rheological properties that these compounds confer to the products. preliminary results also suggest the use of eps as health-promoting (anti-tumor and immunostimulatory) ingredients. bifidobacteria are gram-positive bacteria, natural inhabitants of the gut of warm-blooded animals and man. a number of investigations have shown that bifidobacteria promote host health mainly by reducing the proliferation of some pathogenic bacteria through acid synthesis. in this work, results obtained in experiments carried out to test the capability of b. longum atcc 15707 to synthesize eps are presented.
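the dichotomy selection described above for the livarot ecosystem (82 → 40 → 20 → 10 strains, preserving taxonomic proportions) can be sketched as repeated proportional halving; the taxon sizes below are hypothetical, and the second criterion (odour similarity, a sensory measurement) is not modelled:

```python
import random

def halve_preserving_proportions(strains_by_taxon, rng=random.Random(0)):
    """One dichotomy step: keep roughly half the strains of each taxon,
    so the taxonomic proportions of the mother ecosystem are respected.
    The odour-similarity check of the daughter ecosystem is experimental
    and is not modelled here."""
    return {taxon: rng.sample(strains, max(1, len(strains) // 2))
            for taxon, strains in strains_by_taxon.items()}

# Hypothetical ecosystem: 82 strains spread over five taxa
eco = {f"taxon{i}": [f"strain{i}_{j}" for j in range(n)]
       for i, n in enumerate([30, 20, 16, 10, 6])}
while sum(len(v) for v in eco.values()) > 10:
    eco = halve_preserving_proportions(eco)
print(sum(len(v) for v in eco.values()))  # 9 strains remain after three halvings
```

floor-halving gives approximately the reported 82 → 40 → 20 → 10 path while every taxon stays represented.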
continuous culture fermentations were carried out at dilution rates between 0.04 and 0.44 h⁻¹. the composition of the culture medium was that of mrs broth. biomass concentration presented higher values (2.9-3.2 g l⁻¹) at dilution rates between 0.1 and 0.2 h⁻¹. biomass growing at these rates was difficult to pellet and adhered to the fermentor walls, a behavior that was not observed at other growth conditions. eps from cultures grown at these rates were prepared and fractionated. the authors wish to thank the chilean conicyt for the financial assistance given through the project fondecyt 1050602.

high pressure-low temperature (hplt) inactivation processes were performed on bacillus subtilis vegetative cells at various conditions. at atmospheric pressure, lowering the temperature to as low as −45 °c was found to have minor anti-microbial effects. upon application of high pressure, various phase transitions occurred in the microbial suspensions under study. after pressure treatment at 150-450 mpa, cells were plated under optimal conditions to assess cell viability. treatments at 250-450 mpa and −25 °c were the most effective in inactivation. in these cases, an ice i-iii solid-solid phase transition was observed. in addition, we hypothesised that intracellular thawing (a solid-liquid phase transition) had already occurred while the extracellular surroundings were undergoing the solid-solid phase transition. this double effect is suggested to be key in mediating the observed large drop in viability. we speculate that more cells survived after treatment at −45 °c compared to the same treatment at −25 °c because both the extra- and intracellular surroundings remained fully frozen. at −45 °c a solid-solid phase transition was observed when the pressure was higher than 350 mpa. a metastable state of ice i was observed at the 250 mpa treatment. results from the current study will be presented (see also shen et al., ifset, in press).
the data call for a mechanistic evaluation of the effects of hplt as an anti-microbial treatment. such data are currently being gathered and will be used in defining optimal hplt process conditions for the food industry.

the influence of saccharomyces cerevisiae, kloeckera apiculata and candida pulcherrima mixed cultures on the formation of selected alcohols during model fermentation. pawel satora, tadeusz tuszynski, department of fermentation technology and technical microbiology, food technology faculty, agricultural university, cracow, poland. e-mail: psatora@ar.krakow.pl (p. satora). for the study, five yeast species were chosen, isolated from successive stages of the spontaneous fermentation of plum fruits: from the beginning (candida pulcherrima, kloeckera apiculata, saccharomyces cerevisiae w4), middle (s. cerevisiae w54) and final fermentation (s. cerevisiae k1). to characterize the potential influence of yeast mixed cultures on the formation of selected alcohols, wickerham synthetic medium (10% glucose) was fermented by mixed cultures of two and three yeast species. after distillation, ethanol, propanol, isobutanol, isoamyl alcohols, hexanol and 2-phenylethanol were determined using gas chromatography. the findings were compared with the results obtained after monoculture fermentations. the use of mixed cultures resulted in an increased glucose utilization rate, increased ethanol and fusel alcohol formation (except propanol) and decreased methanol synthesis. the samples fermented using two yeast species were characterized by about 10% higher amounts of volatile compounds relative to the monocultures, with especially high levels of ethanol (av. 44.3 g/dm³), methanol (16.7 mg/dm³) and isoamyl alcohols (47.6 mg/dm³). a positive feature of using triple cultures was the limitation of methanol and fusel alcohol synthesis, accompanied by relatively high ethyl alcohol production (av. 41.5 g/dm³).
the consumption of sugar syrup is becoming increasingly significant in industrial processes due to its economic advantages and ease of use. the production of sucrose syrup using enzymatic hydrolysis represents the safest alternative, since the reaction does not produce any toxic or undesirable substance. this work consists of the production of sugar syrup by immobilized inulinase from kluyveromyces marxianus, with two alternative processes: syrup enriched with fructooligosaccharides, or syrup containing only glucose and fructose. the process comprises the following stages: production and purification of the enzyme under optimized conditions, immobilization of the enzyme on a solid support, and the conversion of sucrose in a fixed-bed bioreactor with the immobilized enzyme. the final composition of the product can be a mixture of glucose, fructose, sucrose and fructooligosaccharides, or a mixture of fructose and glucose, according to the operational conditions. the bioreactor can be operated continuously for approximately 5 months with the same biocatalyst. the product from this process is ideal for applications in food products such as sweets, candies, chocolates, yogurts, etc. in addition, the prebiotic properties of the fructooligosaccharides, a beneficial stimulant of the intestinal flora, give the product a functional property.

studies on plant-microbial interactions using azotobacter sp. as bio-inoculants towards soil fertility. baljeet singh saharan, faculty of biotechnology, jcdm college of engineering, sirsa 125055, india. e-mail: baljeet.saharan@gmx.de, baljeet br@yahoo.co.uk (b.s. saharan). high nitrogen-fixing, phytohormone-producing isolates of azotobacter, azospirillum, acetobacter and pseudomonas were used as inoculants on wheat and cotton with varying doses of nitrogen under field conditions. bio-inoculants were selected on the basis of yield, dry weight and survival rate of the bacteria under field conditions.
seeds of wheat variety wh 711 were treated with different biofertilizers at nitrogen levels of 90, 120 and 150 kg ha⁻¹ and one level of p, i.e. 60 kg ha⁻¹, in the field along with a control. under field conditions, maximum yield was obtained with azotobacter chroococcum e 12 at 90 kg ha⁻¹ (2506 ± 0.04 kg ha⁻¹) as well as at 120 kg ha⁻¹ (2817 ± 0.07 kg ha⁻¹), followed by a. chroococcum ht 57 (2482 ± 0.16 kg ha⁻¹) and avk 51 (2474 ± 0.37 kg ha⁻¹). with 120 kg ha⁻¹, the highest yield was observed with mac 27 (2833 ± 2.59 kg ha⁻¹) followed by e 12 (2817 ± 0.91 kg ha⁻¹) and avk 51 (2804 ± 0.16 kg ha⁻¹). maximum height at 90 kg ha⁻¹ was observed with mac 27 inoculation (71.1 ± 5.24 cm) followed by avk 51 (70.8 ± 4.70 cm) and ht 57 (70.3 ± 1.76 cm). various chosen strains were tested with desi (hd 123) and american cotton (h 1098) under pot and field conditions similar to those used for wheat in the following season. plant height and yield were determined at the time of harvesting, whereas survival rate was monitored at various intervals of time. the survival rate of inoculated bacteria was determined after 30, 80 and 135 days. the highest survival rate was observed with mac 68 ((3.34 ± 2.56) × 10⁶), which decreased after 80 and 135 days to (3.38 ± 1.48) × 10⁵ and (1.53 ± 0.92) × 10⁵ with mac 68, and (2.97 ± 2.01) × 10⁵ and (1.26 ± 3.01) × 10⁵ with ht 54, respectively. maximum boll weight was obtained with avk 51 (76.2 ± 2.34 g boll wt. plant⁻¹) followed by pseudomonas (71.3 ± 1.77 g), ac 18 (61.5 ± 1.73 g) and ala 27 (61.4 ± 2.79 g). boll number plant⁻¹ was maximum with ala 27 and avk 51 (46 ± 2.59 plant⁻¹) followed by pseudomonas (34 ± 0.07 plant⁻¹). maximum height and dry matter were obtained with pseudomonas (179.7 ± 1.97 cm) and avk521 (146.7 ± 3.49 cm) with variety hd 123 under field conditions. a net saving of 20% nitrogen was observed using a. chroococcum (e 12 and avk 51) bio-inoculants for wheat and cotton, respectively.
to characterize the antioxidative properties of tempeh, a fermented food prepared from vicia faba (l. kontu) with the use of rhizopus oligosporus, the sulfhydryl group content and surface aromatic hydrophobicity of albumins were investigated. the results obtained for tempeh albumins were compared with raw vicia faba and bovine serum albumin (bsa). these results indicate that tempeh fermentation increased the antioxidative activity of albumins. the measurements of antioxidative activity were carried out with the use of 1,1-diphenyl-2-picrylhydrazyl (dpph) and 2,2′-azinobis-(3-ethylbenzothiazoline-6-sulfonic acid) (abts). the albumins of faba bean tempeh possessed much higher activity for scavenging free radicals as measured with dpph and abts (49.4% and 41.1%) than raw seeds (27.9% and 23.2%) and bsa (5.8% and 13.4%), respectively. it was also found that the tempeh fermentation process increased the sulfhydryl group content 2.5 times (43.2 m/mg of albumins) as compared to raw seeds (17.5 m/mg of albumins). the tempeh albumins possessed lower surface aromatic hydrophobicity than raw seeds (352.3 fi and 692.1 fi, respectively).

orange peel characterization and generation of fermentable sugar solutions for the biotechnological production of food additives. b. rivas 1, j.m. domínguez 1, p. torre 2, j.c. parajó 1: 1 department of chemical engineering, vigo university (campus of ourense), polytechnic building, as lagoas, 32004 ourense, spain; 2 department of chemical and process engineering, genoa university, via opera pia 15, 16145 genoa, italy. e-mail: brivas@uvigo.es (b. rivas). the citrus processing industry generates in the mediterranean area around 3 million tonnes of orange peel as a byproduct of the extraction of citrus juices in industrial plants. in order to avoid ecological problems and provide an extra profit, this residue was studied in order to generate a suitable substrate for a fermentation process oriented to the production of food additives.
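the dpph/abts scavenging percentages reported above for the tempeh albumins follow from the usual spectrophotometric formula; a minimal sketch — the absorbance values below are hypothetical, chosen only to reproduce the 49.4% dpph figure:

```python
def scavenging_activity(abs_control, abs_sample):
    """Radical scavenging activity (%) from absorbances, the common
    DPPH/ABTS formula; the abstract does not state which variant was used."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbances: radical control 0.80, tempeh-albumin sample 0.405
print(round(scavenging_activity(0.80, 0.405), 1))  # 49.4
```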
the orange peels were characterized, and the data collected allowed 97% of this waste to be quantified. soluble sugars (21.2%), cellulose (17.0%) and pectin (42.5%) were identified as the most important fractions. this material was submitted to two hydrolysis techniques, prehydrolysis (with diluted sulfuric acid) and autohydrolysis (with water), under different experimental conditions. autohydrolysis was selected as the most appropriate technique for the production of suitable fermentation media. finally, the liquors obtained at 130 °c and a liquid:solid ratio of 8 g/g, containing 38.2 g/l of sugars, without additional nutrients, were employed for citric acid production by aspergillus niger cect 2090 (atcc 9142, nrrl 599). the influence of the addition of calcium carbonate and methanol was studied. under the best conditions an effective conversion of sugars into citric acid was attained, showing the viability of producing fermentable solutions from this industrial waste.

today there is an increasing interest in using high-gravity fermentation in brewing. high-gravity fermentation involves production of beer wort of up to 18 °p or even higher and results in beer with more consistent product quality. the main aim of this study is an increased understanding of how brewer's yeast responds to the various stress factors imposed during high-gravity beer fermentation and the consequences these stress factors have on gene regulation and on the metabolite levels (both intra- and extracellular). higher attenuation of the wort will be achieved by two different techniques: by the addition of highly fermentable adjuncts such as sucrose or glucose syrups, and by mashing with the addition of microbial enzymes such as pullulanases and glucoamylases. in the first part of the study, model fermentation conditions are established where the sugar uptake and product formation can be studied in detail. characterization of the carbohydrate profile is carried out by hplc.
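wort strength in the high-gravity study above is given in degrees plato; a rough rule-of-thumb relation to specific gravity (°p ≈ 259 − 259/sg, an approximation, not the exact asbc polynomial) gives a feel for what an 18 °p wort means:

```python
def plato_to_sg(plato):
    """Rule-of-thumb conversion from degrees Plato (g extract per
    100 g wort) to specific gravity, from the approximation
    Plato ≈ 259 - 259/SG."""
    return 259.0 / (259.0 - plato)

print(round(plato_to_sg(18.0), 3))  # ≈ 1.075 for an 18 °P high-gravity wort
```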
as flavour changes may occur at higher gravities, it is important to study changes in the formation of secondary metabolites, especially esters. transcriptome and metabolome analysis will be used to establish how the stressful conditions prevailing under high-gravity fermentations may influence the secondary metabolism of saccharomyces cerevisiae. furthermore, analytical aroma characterization of the final beer will be performed by spme and gc-ms. detailed analysis of the effect of different stress factors on the cellular response using dna arrays and metabolite profiling will be carried out. dna arrays will be employed to evaluate whether specific metabolic pathways are up-regulated or down-regulated as a consequence of the stress factors.

naringin, a bitter compound that occurs in citrus fruit juices, may be converted to a non-bitter form by enzymatic hydrolysis. the enzymatic complex naringinase was produced in aspergillus niger cect2088 cultures with naringin as inducer (pérez-mateos et al., 2004). crude extracts from a. niger and purified naringinase from penicillium decumbens were immobilized in a polymeric matrix of polyvinyl alcohol (pva) hydrogel cryostructured in liquid nitrogen. the operating stability of the pva-naringinase beads was tested using synthetic citric juice (gray and olson, 1981). the immobilized enzymes reduced the naringin content by 40% at 20 °c and ph 3.2. furthermore, the immobilized preparations from aspergillus and penicillium could be re-used through six cycles (144 h), retaining 70% and 38% catalytic efficiency, respectively. financial support from "ministerio de ciencia y tecnología" and feder (no. agl2003-08006/ali). gray and olson, 1981. j. agric. food chem. 29, 1298-1301. pérez-mateos et al., 2004 (1), 230.

optimizing the fermentation broth for tannase production by a newly isolated strain of paecilomyces variotii. vania battestin, gláucia pastore, gabriela macedo, department of food science, unicamp, p.o.
box 6121, campinas, cep 13083-862, são paulo, brazil. tannase is an inducible enzyme that catalyses the breakdown of ester linkages in hydrolysable tannins, resulting in gallic acid and glucose. the fermentation broth can use by-products such as wheat bran, rice or oats, with added tannic acid. the use of by-products or residues rich in carbon sources for fermentation purposes is an alternative for solving the pollution problems that can be caused by incorrect environmental disposal. in the present study we optimized the production of an extracellular tannase by a newly isolated paecilomyces variotii using response surface methodology. the first step was to identify the variables having a significant effect on enzyme production. the variables evaluated were temperature, residue ratio (coffee husk:wheat bran), concentration of tannic acid, and salt solution, over 3, 5 and 7 days of fermentation. the results showed that temperature, residue ratio (coffee husk:wheat bran) and tannic acid had significant effects on tannase production. commercial wheat bran (cwb) and coffee husk residues (cr) were used as solid substrate. for fermentation, the medium was composed of cwb:cr mixed with distilled water, transferred into 250 ml erlenmeyer flasks and autoclaved at 120 °c for 20 min. the medium was then inoculated with spores (5.0 × 10⁷) and the flasks were incubated at 32 °c. tannase was assayed according to the methodology of mondal et al. (2001). according to the statistical analysis, the optimum conditions to produce tannase were in the ranges of temperature 29-34 °c and tannic acid 8.5-14%, with a residue ratio (coffee husk:wheat bran) of 50:50 and 5 days of fermentation. enzyme production after this optimization was 8.6 times higher than that obtained before.

yeast and lactic acid bacteria are the two major microbial groups of most fermented products.
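the screening step described for the tannase study, in which several variables are varied over fixed levels and fermentation times of 3, 5 and 7 days, can be enumerated as a full-factorial design; the two-level settings below are hypothetical (the abstract does not give the levels actually used):

```python
from itertools import product

# Hypothetical two-level settings for a screening design; the actual
# levels of the response-surface study are not given in the abstract.
factors = {
    "temperature_C":       (25, 40),
    "coffee_bran_ratio":   ("25:75", "75:25"),
    "tannic_acid_pct":     (5, 15),
    "salt_solution_pct":   (50, 100),
}
days = (3, 5, 7)

# One run per combination of factor levels and fermentation time
runs = [dict(zip(factors, levels), fermentation_days=d)
        for levels in product(*factors.values()) for d in days]
print(len(runs))  # 2**4 * 3 = 48 runs
```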
a large variety of fermented foods and beverages are made by the activities of both yeast and lactic acid bacteria, simultaneously or successively. during the spontaneous mixed fermentation of lactic acid bacteria and yeast populations, it is extremely difficult to control the microbial species due to the complexity of the microorganisms involved. therefore, we have compared the antimicrobial activity of chitosan against two lactic strains, lactobacillus plantarum and lb. brevis, and a yeast strain, saccharomyces cerevisiae, to investigate the possible use of the non-toxic biopolymer chitosan for selective control in mixed culture. the lactobacilli were more sensitive to the inhibitory activity of chitosan than s. cerevisiae. the results suggest the possible use of low-molecular-weight chitosan for the control of food fermentations in which both groups of organisms frequently occur together.

the effect of vegetable oils on astaxanthin production of phaffia rhodozyma and xanthophyllomyces dendrorhous. csaba vágvölgyi, gyöngyi lukács, miklós takó, árpád csernetics, tamás papp, department of microbiology, faculty of sciences, university of szeged, p.o. box 533. astaxanthin (3,3′-dihydroxy-β,β-carotene-4,4′-dione) is one of the most important carotenoid products. it is used primarily as a food colorant and animal feed additive. its effective antioxidant properties, linked to a preventive action against various types of cancer and an enhancement of the immune response, could lead to expanded commercial applications. among the natural microbial sources available, the closely related red-pigmented yeasts phaffia rhodozyma and xanthophyllomyces dendrorhous are of great biotechnological interest. these yeasts have desirable properties as biological sources of pigment, including rapid metabolism and high cell densities in the fermentor, but the commercial production of astaxanthin is limited by the relatively low content in wild-type strains.
the purpose of this study was to determine whether different vegetable oils had an effect on carotenoid production in p. rhodozyma. the effects of media supplemented with corn germ oil, wheat germ oil, sesame-seed oil, palm oil, pumpkin-seed oil, coconut grease, olive oil (extra virgin), olive oil (sanza), sunflower-seed oil and cottonseed oil were tested. studies were performed on both a phaffia and a xanthophyllomyces strain. yeast was grown in yeast-peptone-glucose liquid medium complemented with the appropriate vegetable oil at different concentrations (0.5-2%, v/v). after four days the total carotenoid production was determined spectrophotometrically and referred to dry cell mass. palm oil significantly increased the carotenoid production of the phaffia strain, while a similar effect on the xanthophyllomyces strain was observed with coconut grease. the composition of carotenoid compounds in the strains was determined by thin-layer chromatography.

lutein is considered a nutraceutical compound that has attracted increasing interest, since it is one of the two carotenoids located in the macula of the human eye. its consumption is associated with the prevention of age-related macular disease (amd). industrially, lutein may be produced using a saponification step of a mixture of lutein diesters that are previously extracted with hexane from natural sources. our proposal is to improve the process by catalyzing the same reaction using microbial lipases during the extraction step with hexane. additionally, the use of supercritical fluids represents an extension of enzymology in non-conventional media, with process and environmental advantages. this work was developed using extracts from marigold flower (tagetes erecta) in hexane and supercritical carbon dioxide (sc-co2), where the lutein esters were hydrolyzed by two commercial lipases: lipase b from candida antarctica (novozym 435) and lipase from mucor miehei (lipozyme rm 1m).
in particular, we focused our interest on the role of water in the system. interestingly, our results show an inverse dependence of the initial reaction rate on the initial water activity (awi) for both lipases, a phenomenon that seems to be related to the partition of substrates and products between the solid support and the hexane phase as a function of water. when sc-co2 was used as solvent, an increase in the consumption rate of lutein diesters occurred, reaching conversions of 70% in 24 h. in hexane, the same conversion was reached only after 160 h. this result suggests a significant effect of the medium on the reaction, which can be related to shifts in the partition of compounds that bring the substrates into closer contact with the enzyme. this work also demonstrates that lutein hydrolysis seems to be another potential application of commercial immobilized lipases in the food/nutraceutical market.

the enzyme of interest in this work is β-galactosidase from lactobacillus sp. (ec 3.2.1.23). β-galactosidases catalyze the hydrolysis and transgalactosylation of β-d-galactopyranosides (such as lactose). an attractive biocatalytic application is found in the transgalactosylation potential of these enzymes, which is based on the catalytic mechanism of β-galactosidases. the products of transgalactosylation, galacto-oligosaccharides, are non-digestible carbohydrates which meet the criteria of 'prebiotics' and have therefore attracted increasing attention. to produce these 'prebiotic' galacto-oligosaccharides, an inexpensive and efficient process is desired. immobilization of the enzyme β-galactosidase on an insoluble support is an attractive tool to make the process of lactose conversion more economical, because the enzyme can be recovered and reused during continuous operation. in the present study, we aimed at immobilizing β-galactosidase from lactobacillus sp.
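the lutein-ester conversion times reported above (70% in 24 h in sc-co2 versus 160 h in hexane) can be compared through apparent first-order rate constants; treating the hydrolysis as first-order is our simplifying assumption, since the abstract reports only times to a given conversion, not a kinetic model:

```python
import math

def apparent_k(conversion, t_hours):
    """Apparent first-order rate constant k = -ln(1 - X) / t,
    assuming first-order consumption of the lutein diesters."""
    return -math.log(1.0 - conversion) / t_hours

k_sc_co2 = apparent_k(0.70, 24.0)    # supercritical CO2
k_hexane = apparent_k(0.70, 160.0)   # hexane
print(round(k_sc_co2 / k_hexane, 1))  # ≈ 6.7-fold faster in sc-CO2
```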
by covalent linkages on two solid supports which are commonly used for protein immobilization: chitosan and eupergit c. the protein-binding capacity, the immobilization yield, the ph and temperature dependency of activity and stability, and the kinetic parameters of the immobilized enzymes were studied. higher activity retention of the immobilized enzymes over a broader ph range and at higher temperatures, compared to the free enzyme, was observed. the immobilized enzymes were also evaluated in terms of transgalactosylation activity and stability.

to introduce foreign genes into important crop plants such as rice, we need a reproducible, efficient procedure for regeneration of calli through somatic embryogenesis. to this end, we established the best callus induction medium for the tarom mahalli and deilamani cultivars and developed a method by which the regeneration frequency reached 48%. calli were induced from scutellar tissues of mature seeds on ms medium supplemented with three levels of 2,4-d (2, 2.5 and 3 mg l⁻¹) and n6 medium supplemented with five levels of 2,4-d (1.5, 2, 2.5, 3 and 3.5 mg l⁻¹). for the deilamani cultivar the best medium was n6 with 1.5 mg l⁻¹ 2,4-d, and for tarom mahalli the same medium with 2 mg l⁻¹ was the best. in the subculture media, sucrose was used instead of maltose. for regeneration analysis of plantlets, we used a two-factor experiment based on a completely randomized design (crd); one factor was the regeneration medium, with six levels (ms medium supplemented with five kinetin:naa combinations (mg l⁻¹) [(6:2), (4:2), (2:0.5), (3:1), (2:1)], plus 0.2 mg l⁻¹ 2,4-d and 2 mg l⁻¹ bap). the other factor was the dehydration process, with three levels (without dehydration; dehydration on two layers of filter paper for 30 min prior to transfer to the regeneration medium; and dehydration combined with substitution of sucrose by maltose [after 2 weeks; 2 sucrose:1 maltose]).
we conclude that maltose, by changing osmolarity, can elevate the regeneration frequency to 48%. therefore, the type of carbon source is critical in callus induction and regeneration.

márová ivana, hrdličková jana, kubešová jitka, kočí radka, vidláková tereza, faculty of chemistry, brno university of technology, purkyňova 118, 612 00 brno. carotenoids are the most widespread natural pigments, with important biological activities and applications mainly in the food and feed industry. at present, many approaches, including genetic engineering, are being developed to reach higher production of naturally formed carotenoids using microbial producers. in this work, cloning and expression of the crt gene cluster from pectobacterium carotovorum in the recipient bacterial strain e. coli dh5α as well as in the yeast strain s. cerevisiae was tested. the plasmid vector phsg298 with inserted crt genes was used for transformation of chemically competent e. coli dh5α cells, while in s. cerevisiae the shuttle vector paur135 was used. transformants were selected based on resistance to antibiotics, formation of orange-coloured transformant colonies, analysis of recombinant plasmid size, and lc/ms analysis of carotenoids produced by the recombinant cells. the yield of individual carotenoids (lutein, beta-carotene, lycopene) obtained from various bacterial transformants was severalfold higher than in the natural producer (lutein: 0.2-1.2 µg/g of d.w., beta-carotene: 0.1-1.4 mg/g of d.w.). the highest yield obtained in a transformed strain was 63.5 µg/g of lutein and 15.4 µg/g of beta-carotene. the yield of biomass and carotenoids in transgenic s. cerevisiae was comparable to some industrial red yeast strains (4.5 mg of total carotenoids + 32 mg ergosterol/l; 36 g/l of biomass).
thus, transgenic yeasts could be suitable for large-scale production of carotenoids and/or enriched biomass, while transgenic bacterial producers are promising above all for high production of rare carotenoids such as lutein or lycopene using transformation with specific genes of the crt gene cluster. this work was supported by the project msm 0021630501 of the czech ministry of education, youth and sports.

two forms of grape seeds, whole and powdered, were heated at four different temperatures: 50, 100, 150 and 200 °c. after heating, grape seeds were extracted with 70% ethanol (0.1 g grape seed/10 ml of 70% ethanol), and the total phenol contents (tpc), radical scavenging activity (rsa) and reducing power of the extracts were determined. thermal treatment of grape seed increased the antioxidant activity of the extracts. the maximum tpc and rsa of whole grape seed extract (wgse) were achieved when the seeds were heat-treated at 150 °c for 40 min, while those of powdered grape seed extract (pgse) were achieved at 100 °c for 10 min; both were greater than those of the non-treated control. according to gc-ms analysis, several low-molecular-weight phenolic compounds were newly formed in the wgse heated at 150 °c for 40 min. these results indicate that the antioxidant activity of gse was affected by the heating conditions (temperature and time) and the physical condition of the grape seeds at the time of heat treatment.

analysis of the unexpected phenotypic consequences associated with plant transformation. jonathan latham, allison wilson, ricarda steinbrecher, econexus, 6 canon frome court, ledbury hr8 2td, uk. e-mail: jrlatham@gn.apc.org (j. latham). transgenic plants often exhibit unexpected phenotypes. such phenotypes could arise from pleiotropic effects associated with the transgene, or they could arise from other sources. a recent econexus report underlined the potential for the process of plant transformation to result in genetic damage to the transformed plant (genome scrambling - myth or reality?
transformation-induced mutations in transgenic crop plants: http://www.econexus.info/). the report showed that mutations arising at the site of transgene insertion are often substantial, frequently resulting in loss or rearrangement of chromosomal dna and insertion of multiple superfluous dna fragments. unintended mutations were also documented at other locations in the plant genome. such transformation-induced mutations could provide an explanation for unexpected phenotypes in transgenic plants. we decided to survey regulatory documents and the scientific literature for instances of unexpected phenotypic consequences arising in transgenic plants. this poster documents the preliminary results of our survey. it is intended to assess the range and frequency of unexpected consequences and to examine whether sufficient data are available to determine their origin. it is our belief that investigating the origin of these unexpected phenotypes should be a principal aim of biosafety research.

biotechnological production of metabolites such as carotenoids could be of high interest because of their antioxidative and antimutagenic activities in the human body. production of these metabolites by microbial cells depends on cultivation conditions, so the presence of exogenous stress in the cultivation environment could stimulate the biosynthetic pathways of the desired metabolites. two non-conventional yeast strains, rhodotorula glutinis and sporidiobolus salmonicolor, were chosen for the study of carotenoid production useful in the feed industry. hydrogen peroxide, sodium chloride and/or their combinations were used as exogenous stimulators of the carotenoid pathway. the presence of exogenous stress led to substantial overproduction of pigments as well as of the supplementary studied substances (ergosterol, glycerol). higher adaptability of the yeast cells was observed not only in cultivations with a single type of stress.
combination of stress factors in the cultivation media induced a significant increase in pigment formation. moreover, under controlled conditions in a laboratory fermentor, s. salmonicolor produced an approximately eight-fold higher amount of β-carotene (230 µg/g) in medium with 2% nacl and 5 mm h2o2 than the control sample. a similar result was observed in r. glutinis cultivated in the presence of 2% nacl in the inoculation medium only (200 µg/g of β-carotene). the use of stressed biomass of red yeasts in the feed industry could have a positive effect not only in animal and fish feeds because of the high content of physiologically active substances, but it could also influence the nutritional value and organoleptic properties of final products for human nutrition. this work was supported by the project msm 0021630501 of the czech ministry of education.

the culture ph significantly affects mycelial growth and morphology, exopolysaccharide (eps) formation, and the molecular properties of the eps during submerged cultures of the medicinal mushroom ganoderma lucidum. when the culture ph was shifted from 3 to 6, mycelial growth (12.5 g/l) and eps production (4.7 g/l) were favorable compared with other ph-control strategies. the mycelial morphology also varied significantly with culture ph: feather-like pellets were found when the ph was shifted from 6 to 3 at day 4, which was regarded as an undesirable morphological form for eps production. compositional analyses revealed that the ratios and chemical compositions of the eps formed in the bottom or top fractions of ethanolic precipitates differed significantly with culture ph. the molecular characteristics of the eps were further investigated using a size exclusion chromatography/multi-angle laser light scattering (sec/malls) system.

plant β-n-acetyl-hexosaminidase (hex) (ec 3.2.1.52) is reported to have diverse physiological roles such as fruit ripening, degradation of reserve glycoproteins in germinating seed and chitin-elicited lignification.
in this paper we report the purification and characterization of hex from korean ginseng roots. after extraction with citrate-phosphate buffer, hex was purified to homogeneity using ion-exchange chromatography, hydrophobic interaction chromatography and gel filtration. its molecular weight was determined using gel filtration and mass spectrometry. enzymatic parameters were studied with 4-methyl-umbelliferyl-n-acetylglucosaminide as substrate.

the effect of heat stress and weak organic acids on escherichia coli and a comparison of its recovery by the plate count method and flow cytometry. monica s. talsania. due to the importance of microbiology for human health, methods have been developed to enumerate viable bacteria. dilution plating is seen as the 'gold standard' for proof of a cell's viability. however, the success of this method relies on post-sampling growth, which is limited by our ability to grow cells in the laboratory; additionally, stressed or sub-lethally damaged cells remain undetected. single-cell measurements can provide rapid, detailed physiological information and an assessment of population heterogeneity. this work compared the recovery of stressed e. coli as measured by the number of cfu/ml and by multi-parameter flow cytometric analysis. weak organic acids and high temperature-short time (htst) processing, both methods commonly used in food preservation, were used to stress the cells. it was shown here that flow cytometry is a powerful tool for the enumeration and detailed analysis of any non-culturable microbial population, which is important because the cytotoxic compounds and heat stresses used in food preservation often have a growth-inhibiting effect but not necessarily a lethal one.

paul g. kovalenko, molecular biology & genetics nasu, zabolotnogo str. 150, 031473 kyiv, ukraine. the pharmaceutically important plant species of glycyrrhiza sp.
(called licorice) is an important commercial product used as a natural sweetener and as an anti-inflammatory, anti-cancer and anti-diabetic agent. an agrobacterium rhizogenes transformation system was used to establish hairy root cultures of the licorice g. uralensis. after inoculation of aseptic stem segments, the ability to form hairy roots was scored over a period of 6 weeks. the mean transformation frequency ranged from 27% (for strain 8196) up to 31% (for strain 15,834). some transformed genotypes showed significant differences in root weight and in flavonoid and glycyrrhizin (gl) production. the cotransformation rate of intact licorice explants with lba 9402 tl-dna and the 35s gus gene averaged more than 35%. the root cultures obtained were additionally elicited with extracts of the biotic elicitor acremonium sp. (an endomycorrhizal fungus) and were used as an in vitro system for metabolite production. the transformed and elicited hairy roots of g. uralensis obtained by infection with a. rhizogenes 8196 produced gl at a yield of 4.5% of dry weight over a 30-day culture period. according to tentative analyses, the hairy root cultures of glycyrrhiza species produced flavonoids (liquiritigenin and liquiritin). higher levels (3.42 g/l) of total flavonoid production were identified in the strains transformed by lba 9402. this study examined differences among elicitor treatments and incubation periods for optimal metabolite production. clearly, the selection of an effective agrobacterium strain for the production of transformed root cultures is highly dependent on the plant species and must be determined empirically.

ayse gul nasircillar, akdeniz university, biology department, faculty of arts and sciences, 07058 antalya, turkey. mature embryos of five t. aestivum and five t. durum cultivars formed embryogenic callus on two different media.
embryos were removed from surface-sterilised seeds and placed with the scutellum upwards on a solid agar medium containing the inorganic components of murashige and skoog (ms) and 2 mg/l 2,4-dichlorophenoxyacetic acid (2,4-d) or 1 mg/l naphthaleneacetic acid (naa). the developed calli and regenerated plants were maintained on 2,4-d- or naa-free ms medium. wheat plants can thus be regenerated via two different systems. there were significant differences in the percentage of callus induction and in regeneration capacity on the different initiation media. among the t. aestivum cultivars, yakar had the highest regeneration capacity on both induction media. among the t. durum cultivars, kiziltan gave the highest regeneration capacity on ms + 2,4-d medium and yilmaz gave the highest regeneration capacity on ms + naa medium. a strong genotypic effect on the culture responses was found for both induction media.

the glycolytic enzyme triosephosphate isomerase (tpi), which catalyses the interconversion of the triosephosphates dihydroxyacetone phosphate (dhap) and glyceraldehyde-3-phosphate (gap), was studied for its control on glycolysis and mixed acid production in lactococcus lactis il1403. we constructed a number of l. lactis strains in which the tpi activity was modulated from 3% to 217% of the wild-type level. the enzyme was found to be present in high excess, with 3% tpi activity supporting 30% of the wild-type glycolytic flux, and with 24% of the wild-type tpi activity the glycolytic flux was essentially unchanged. concentrations of the upstream metabolites glucose-6-phosphate (g6p), fructose-1,6-bisphosphate (fbp) and dhap were essentially unchanged for tpi activities from 24% to 217%, and only in the strain with 3% tpi activity did we observe a significant increase in the intracellular dhap concentration.
homolactic product formation was preserved throughout the interval of tpi activity studied, though a small increase in acetate and formate production was observed in the strain expressing tpi at the lowest level (3% tpi activity). the finding of an increased mixed acid pattern under intracellular conditions with a high dhap concentration is in contrast to earlier data from the literature, which indicated that the triosephosphates play an important role in the regulation of pyruvate metabolism in l. lactis, with a negative effect on the mixed acid flux.

we have recently shown that alcohols induce the adhesion of l. monocytogenes at low temperatures, presumably accompanied by enhanced exopolysaccharide (eps) production. however, little is known about the mechanisms involved in the formation of biofilm and eps by l. monocytogenes. in the present project, we show that deletion of selected regulatory and up-regulated genes did not abolish attachment, though the degree of alcohol induction was affected in some cases. we are applying bioinformatics to search l. monocytogenes for homologues of known eps genes from various gram-positive bacteria. this has revealed candidate genes involved in the synthesis of eps, such as genes encoding glycosyltransferases. moreover, we are at present performing dna microarray analysis of the egde strain grown at 10 °c in the presence of 2.5% isopropanol. these data, combined with the bioinformatic results, should give us a good indication of the genes involved in alcohol-induced surface attachment.

repetitive-element pcr (rep-pcr) was applied in research on non-starter lactic acid bacteria (nslab) in cheese. we first showed that strains previously differentiated by pulsed-field gel electrophoresis (pfge) could also be differentiated by rep-pcr. this was partially due to slight changes in the pcr conditions that allowed reproducible amplification of 7-9 kb bands. more than 20 bands were obtained for most strains.
a clear differentiation was also obtained between lactobacillus paracasei, lactobacillus plantarum, lactobacillus curvatus and lactobacillus danicus (a new species found in danish and estonian cheeses and traditional starter cultures). we found this technique to be highly reproducible, e.g. identical profiles were obtained in three different pcr machines, with two different dna isolation procedures, and by different trained personnel. we applied the developed rep-pcr technique to confirm that survivors after heat treatment were the actual strains introduced and not the result of post-pasteurization contamination. we also showed that when we added a cocktail of five strains as protective cultures to cheese, two to three members of this cocktail dominated the cheese nslab microflora. in control cheeses without the cocktail, other strains dominated in most cases, but in a few cases we were able to show cross-contamination between cheese vats. these data indicate that rep-pcr will be useful for following the development of adjunct cultures as well as for providing reproducible subspecies (e.g. strain) differentiation. rep-pcr is a much quicker and less labour-intensive procedure than pfge, and is apparently much more reproducible than rapd has been reported to be.

dynamic modeling of lactococcus lactis metabolism and its dynamic behavior for lactate secretion and regulatory characteristics. jinwon lee, ui sub jung, hye won lee, department of chemical and biomolecular engineering, sogang university, seoul, south korea, 121-741. e-mail: jinwonlee@sogang.ac.kr (j. lee). a dynamic metabolic model for lactococcus lactis has been developed in order to analyze the time-dependent behavior of the lactate secretion mechanism and probe its regulatory roles.
the model was used to compare and analyze lactate metabolism through in silico simulation and in vitro experimental measurements. above all, the pyruvate branch point seems to play a major role in lactate production, and the results of metabolic control coefficient analysis recommend increasing lactate dehydrogenase activity and decreasing nadh oxidase activity. to obtain more realistic data, we added some measured flux data, including some intermediate metabolites. by combining the simulation results and experimental measurements, we could establish a more reliable and robust systematic lactate secretion model. in addition, an efficient parameter estimation method was used to test the exactness of the reported kinetic parameters.

what to choose - the fast or the detailed - strategy to get informative profiles of secondary metabolites produced by fungi in culture. chemo-diversity and lead discovery call for high-throughput techniques, but do we need columns, or will direct infusion esi-ms (dims) do the job? the latter may suffer matrix effects and lacks resolution, resulting in loss of information, while lc-ms analysis takes time and challenges the data processing. results from nano-esi dims and lc-ms analyses of the same extracts of important penicillium species are compared. these results illustrate the advantages and problems of using these techniques for rapid profiling of fungal secondary metabolites, revealing that matrix effects in dims do not seriously hamper detection of important metabolites, while the specificity and certainty, e.g. for de-replication, are much higher in lc-ms.

phenotypic classification of fungi is essential in food biotechnology. ulf thrane, center for microbial biotechnology, biocentrum-dtu, søltofts plads 221, technical university of denmark, dk-2800 kgs. lyngby, denmark. e-mail: ut@biocentrum.dtu.dk. fungi are of great importance in food and food production.
the intended use of fungi as cell factories for the production of food ingredients is an upcoming issue in food biotechnology; however, this raises possible contamination with mycotoxins as a major issue. a reliable identification of the producer strains is crucial, as a correct identification at species level following an updated taxonomy is the key to information on functional characters, e.g. useful metabolites and potential mycotoxins, growth conditions, resistance, etc. unfortunately, many mycological reports do not specify the taxonomy used or do not pay sufficient attention to taxonomical systems based on classification by functional characters; instead they use a nucleotide sequence-based phylogeny, which conveys little, if anything, about the function of the organism. this situation is a major challenge for biotechnologists and mycologists in the years to come and will be highlighted by illustrative examples.

the commercial interest in functional foods containing sufficient amounts of living probiotics is paralleled by increasing scientific attention to their beneficial effect in the digestive tract. a daily intake of viable cells is proposed to ensure a probiotic effect on the consumer's health. one approach that seems feasible for enhancing probiotic viability and stability is to improve the fermentation conditions. during batch fermentation the viability of lactobacillus gasseri 5714 decreases after reaching a maximal value, apparently indicating cell death. in this work, we show that this apparent loss of viability can be avoided during fed-batch fermentation. a three-fold increase in viability was obtained when the nutrient concentration was controlled, compared with the viability reached in batch cultures. as a consequence, higher biomass concentration and lower specific lactic acid production were obtained.
a mathematical model was developed to simulate and describe the effect of nutrient limitation on growth, viability, glucose consumption and lactic acid production.

contribution to the metabolic adaptation to food restriction in rabbits (preliminary results). s. van harten 1, s. borges 2, p. cravo 2, l.a. cardoso 1: 1 instituto de investigação científica tropical, cvz, lisboa, portugal; 2 instituto de higiene e medicina tropical, lisboa, portugal. e-mail: svharten@gmail.com (s. van harten). in order to understand metabolic differences between two breeds of rabbits (halop ab and oryctolagus cuniculus algirus) during food restriction, the activities and expression of key enzymes and hormones of the rabbit were studied. animals from each breed were divided into two groups (ad libitum and restricted). the results revealed a similar difference in glycemic levels between fed and underfed rabbits, with a restriction to 50% of ad libitum feeding in the wild animals (decrease of 23% in live weight) and to 16% of that intake in the halop breed (decrease of 32% in live weight). the activities of glutamine synthetase and glutaminase showed a greater reduction of these enzymes in the wild animals than in the halop breed, thereby compromising ammonium detoxification and the entry of residual carbon groups from protein catabolism into the krebs cycle. in the latter animals, a rapid mobilization capacity for triacylglycerols (tga) appears to exist, with rapid catabolism of fatty acids leading to their oxidation. the wild breed's results reveal a rise in circulating tga, reflecting difficulties in lipolysis and in the mobilization of nefa for oxidation. in these underfed animals, phosphoenolpyruvate and pyruvate increased markedly and oxaloacetate decreased. the halop breed showed results indicating a diminution of glycolysis, with glucose-derived energy substituted by carbon chains from lipolysis and protein catabolism.
hormone results showed a greater decrease in insulin, t3 and igf-1 in the underfed halop animals. in order to confirm the biochemical results, relative quantification of enzyme expression was studied by real-time pcr.

since the introduction of genetically modified (gm) crops in 1996, the area under their cultivation has globally increased from 1.7 million hectares in 1996 to 67.7 million in 2003. the number of countries adopting gm crops also rose from one country, the usa, in 1996 to 20 in 2003. despite numerous successes, public opinion still questions the ecological, moral and ethical considerations and the issues concerning altering the natural state of organisms. in this study, a survey of food shoppers' knowledge, attitudes and perceptions of gm foods was carried out in food outlets in nairobi. the food outlets were determined by simple random sampling. using systematic sampling, shoppers were interviewed at targeted imported food products. focus group discussions were also conducted with farmers at city markets. the survey reflected the views of a systematic sample of 387 shoppers in seven food outlets between november and december of 2003. it revealed a knowledge level of 20%, with positive attitudes and good perceptions towards gm foods (χ2 = 42.873, d.f. = 9, p < 0.001). seventy-nine percent of shoppers were willing to buy and consume gm foods (χ2 = 61.321, d.f. = 2, p < 0.001). cross-tabulation of shoppers' positions on various issues raised in the survey showed a strong correlation between the respondents' respective knowledge, attitudes and perceptions (r = 0.84). nineteen percent of the food sampled tested positive for gm content. poisson statistics were used to calculate the number of sample sequences. the statistical analyses were performed with spss version 11.5. the results of this study will be of great interest in determining the use and adoption of gm crops in kenya. they will also guide the development of national foreign food policy on gm foods.
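the chi-square statistics reported above can be checked against the χ² distribution; a stdlib-only python sketch (numerically integrating the χ² density rather than assuming scipy is available) confirms that both statistics correspond to p < 0.001 at the stated degrees of freedom:

```python
import math

def chi2_sf(x, df, steps=200_000):
    """p-value P(X >= x) for a chi-square statistic with df degrees of
    freedom, via trapezoidal integration of the density over [0, x]."""
    k = df / 2.0
    norm = 1.0 / (2.0 ** k * math.gamma(k))
    pdf = lambda t: norm * t ** (k - 1.0) * math.exp(-t / 2.0) if t > 0 else 0.0
    h = x / steps
    cdf = sum((pdf(i * h) + pdf((i + 1) * h)) * h / 2.0 for i in range(steps))
    return 1.0 - cdf

# statistics as reported in the survey abstract
p_attitudes = chi2_sf(42.873, df=9)   # knowledge/attitudes/perceptions test
p_willing = chi2_sf(61.321, df=2)     # willingness to buy gm foods
assert p_attitudes < 0.001 and p_willing < 0.001
```

with scipy available, `scipy.stats.chi2.sf(42.873, 9)` would give the same p-value directly.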
the technology should be embraced as soon as it is acceptable, to alleviate the drought, famine and hunger estimated to be affecting 3.3 million kenyans today, mostly children.

consumers and gm foods: the case of turkey. özlen özgen 1, mustafa yildiz 2: 1 department of family and consumer sciences, school of home economics, university of ankara, ankara 06130, turkey; 2 department of field crops, faculty of agriculture, university of ankara, 06110 ankara, turkey. the future development of food biotechnology depends on consumer acceptance. scientists are aware that consumer attitudes will have a crucial impact on the progress of food biotechnology, because food is one of the central features of human life. consumers' attitudes and their trust in institutions will determine how gene technology will be used in the food sector in the future. recently, research concerned with consumer aspects of gm foods has accelerated, but in turkey the literature dealing with this subject is very limited and sparse. therefore, this research was carried out on turkish consumers with the purpose of analyzing consumers' awareness of gm foods, their assessments of benefits and risks, the marketplace and labelling, and their trust in institutions. this study was based on interviews with consumers who had recently made purchases at major malls, conducted during shopping hours. at the four major malls, voluntary male and female consumers were included in the research if they had main or secondary responsibility for household shopping. the questionnaire was administered through face-to-face individual interviews. the data were analyzed using statistical methods according to explanatory variables including age, gender and educational level. the findings indicated that consumers' awareness and views about gm foods were connected to selected demographic characteristics. the results of this study can be important for consumer educators, marketing managers and policy makers.
benefit-risk perceptions and moral beliefs of turkish consumers towards transgenic products. özlen özgen 1, haluk emiroglu 2, mustafa yildiz 3, ayşe sezen taş 1: 1 department of family and consumer sciences, school of home economics, university of ankara, 06130 ankara, turkey; 2 faculty of law, university of bilkent, 06590 ankara, turkey; 3 department of field crops, faculty of agriculture, university of ankara, 06110 ankara, turkey. the use of biotechnology in production has generated considerable debate involving the benefits, risks and moral beliefs associated with its use. consumer acceptance of genetically modified products is a critical factor that will affect the future of this technology. this study was planned and conducted to determine the relationships between product/process-related benefit perceptions, product/process-related risk perceptions and moral beliefs of consumers towards transgenic products. a total of 400 university-educated consumers, consisting of 200 males and 200 females, employed at ministries selected by random sampling in ankara, were included in the study. interview techniques were used to gather the research material; the interview instrument was prepared on the basis of previous research and the literature. answers to likert-type items were scored, and varimax analysis was used to assess validity. to test the reliability of the questionnaire, cronbach's alpha was calculated as an internal consistency coefficient. t-tests were performed to determine the differences, by gender and age, in product-related benefit perceptions, process-related benefit perceptions, product-related risk perceptions, process-related risk perceptions and moral beliefs of consumers. the relationships between product/process-related benefit perceptions, product/process-related risk perceptions and moral beliefs of consumers were examined by correlation analysis.
the results of this study are thought to be important both for natural scientists and for social scientists.

the application of recombinant dna technology to food production is generating a great debate in our society, with the participation of scientists trying to explain how these new foods are obtained and what their implications are; environmentalist groups and anti-biotechnology associations that are systematically against the application of this technology; legislative bodies; and the public in general, represented by consumers' organizations expressing their right to be informed. considering that university students will be the future professionals and consumers, and that their opinion on this topic will be decisive in its success or failure, this research aimed to perform a global and comparative study of the perception of agrobiotechnology by students from different areas of knowledge and study. the study was carried out during the academic years 2000-2004, with a total of 1516 valid surveys analyzed. the questionnaire comprised 30 questions relating to: evaluation of the students' own knowledge of and interest in the topic; evaluation of the information sources mainly used by university students to obtain nutritional information; opinions about gm food labelling; risk/benefit perception; purchasing intention; and support for biotechnology. the results showed that 37% of the university students interviewed have a clearly positive perception of biotechnology, mainly students in the health sciences. these students understand the scientific terminology and use the university as their main source of information. they support the development of biotechnology and consider that in the future it will bring them benefits. another group (11%) has a clearly negative perception; they are mainly students of law and art history.
they do not understand the scientific terminology, they consider that biotechnology will pose risks to them, and as a consequence they have no intention of buying these foods.

recombinant dna technology can also be used to introduce into the plant genome a gene encoding a protein of interest for use as an antigen. the application of agrobiotechnology has allowed the development of a new generation of vaccines that try to reduce or eliminate the drawbacks of the classic ones. the design of these new vaccines draws on detailed knowledge of the biology of the pathogen: with this knowledge, the genes implicated in virulence can be selectively inactivated or modified. the term "edible vaccines" is usually applied to the use of edible parts of plants (tubers, fruits, leaves, etc.) genetically modified to produce specific components (antigens) of a pathogen (virus, bacteria, etc.) against which protection of a person or animal is desired. however, the oral route is not the best vaccination route, since the quantity of antigen required for efficient immunization is usually high, and co-administration of an adjuvant that stimulates the immune response is also needed. on the other hand, it is also important to highlight that the levels of antigen accumulation in transgenic plants are usually lower than necessary. another problem is the irregular accumulation of the antigen in different parts of the plant, which makes appropriate control of the dose difficult.

tannin is a polyphenolic component with antioxidant properties that exists in many plants and fruits. in pomegranate juice this component causes turbidity and haze. during fruit juice clarification by the conventional gelatin method, all polyphenolic substances, which are responsible for the antioxidant activity, are removed, and as a result the quality of the product is reduced.
in the present study, tannase (tannin acyl hydrolase; ec 3.1.1.20) was used to decompose tannin into gallic acid and glucose. as a result, the turbidity of the juice is decreased while the antioxidant properties remain essentially unchanged, since the phenolic material is decomposed in place rather than being separated from the juice as occurs in the gelatin method. the amount of gallic acid in pomegranate juice samples before and after addition of tannase was measured by hplc, and the optimum temperature, enzyme-juice contact time, ph and solvent concentration for clarification of pomegranate juice were obtained as 45 °c, 2 h, ph 5.5 and 50 mm citrate buffer, respectively. the potential benefits of enzymatic clarification of pomegranate juice, namely preservation of antioxidant activity and hence increased quality of the fruit product in comparison with the conventional gelatin clarification method, introduce a new technique for turbidity and haze removal in tannin-containing fruit juices.

the objective of this study was to investigate the inhibitory effect of low-molecular-weight chitosan with a view to its use as a biopreservative in foods. the inhibitory activity of chitosan against escherichia coli and staphylococcus aureus was investigated by determining the effect of chitosan treatment on bacterial growth during culture and its effect on the viability of non-growing cells. the activity of chitosan decreased considerably for both strains when sucrose was added to ts broth containing chitosan. the addition of ethanol had little effect on the inhibition of s. aureus by chitosan. the influence of nacl on the activity of chitosan was similar to that of sucrose and ethanol. however, the experiments with non-growing cells showed that ethanol drastically enhanced the antibacterial activity of chitosan against s. aureus.
the results presented in this paper demonstrate that the antibacterial activity of chitosan may be affected considerably by common food additives such as sucrose, ethanol and nacl. in lactococcus lactis the enzymes phosphofructokinase (pfk), pyruvate kinase (pk) and lactate dehydrogenase (ldh) are uniquely encoded in the las operon. we have applied metabolic control analysis to study the role of this organisation. earlier work showed that ldh at the wildtype level has zero control on glycolysis and growth rate but high negative control on formate production (flux control coefficient c_ldh = -1.3). we find that pfk and pk have zero control on glycolysis and growth rate at the wildtype enzyme level, but both enzymes exert strong positive control on the glycolytic flux at reduced activities. pk has high positive control on formate (c_pk = 0.9-1.1) and acetate production (c_pk = 0.8-1.0), whereas pfk has no control on these fluxes. decreased expression of the entire las operon resulted in a strong decrease in growth rate and glycolytic flux; increased las expression resulted in a slight decrease in the glycolytic flux. at the wildtype level the control was close to zero on both glycolysis and the pyruvate branches. the sum of the control coefficients of the three enzymes taken individually was comparable to the control coefficient found for the entire operon at the wildtype level; the strong positive control by pk almost cancels out the negative control by ldh on formate production. the analysis suggests that co-regulation of pfk and pk provides a very efficient way to regulate glycolysis, and that co-regulating pk and ldh allows the cells to maintain homolactic fermentation while regulating glycolysis around the wildtype level. bovine chymosin is used extensively in cheese production because of its specificity and low proteolytic activity. 
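the control-coefficient analysis above rests on the definition c_e^j = d ln j / d ln e. as a minimal numerical sketch (not the measured l. lactis data), the saturating flux function below is a hypothetical illustration of how control falls toward zero once an enzyme is in excess, mirroring the zero wildtype control and strong control at reduced activities reported above:

```python
import numpy as np

# flux control coefficient: c = d ln j / d ln e, estimated by a central
# finite difference around a reference enzyme level. enzyme levels and
# the flux model are hypothetical, chosen only to illustrate the method.

def control_coefficient(flux, e_ref, delta=0.01):
    """estimate c = (d ln j / d ln e) at e_ref for a flux function j(e)."""
    up, down = e_ref * (1 + delta), e_ref * (1 - delta)
    return (np.log(flux(up)) - np.log(flux(down))) / (np.log(up) - np.log(down))

# toy saturating flux j(e) = jmax * e / (k + e): analytically c = k / (k + e)
jmax, k = 10.0, 0.2
flux = lambda e: jmax * e / (k + e)

c_low = control_coefficient(flux, 0.05)  # reduced enzyme: strong control (~0.8)
c_wt = control_coefficient(flux, 5.0)    # wildtype-like excess: control near zero
print(round(c_low, 2), round(c_wt, 3))
```

with these toy parameters the coefficient is about 0.8 at low enzyme level and below 0.05 at a wildtype-like excess.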
we are interested in caprine chymosin as an alternative because in the canary islands cheese has traditionally been made from goats' milk using extract from the abomasum of newborn goats as the coagulating agent. we isolated and characterized the prochymosin cdna from the abomasum of milk-fed kid goats. this cdna predicts a polypeptide of 381 amino acid residues, with a signal peptide and a proenzyme region of 16 and 42 amino acids, respectively. the caprine preprochymosin has 99% and 94% identity with the corresponding lamb and calf sequences. the cdna fragment encoding prochymosin was fused in frame to the killer toxin signal sequence in a constitutive vector, and to the α-factor signal sequence-flag in an inducible expression vector. kluyveromyces lactis pm3-5c, k. lactis sel1, characterized by a "supersecreting" phenotype, and saccharomyces cerevisiae bj3505 were transformed with the recombinant plasmids. activated culture supernatants of yeast transformants showed milk-clotting activity. the flag-prochymosin fusion was purified from bj3505 culture supernatants by affinity chromatography. after activation at acid ph, proteolytic activity assayed toward casein fractions showed that the recombinant caprine chymosin specifically hydrolyzed κ-casein. the recombinant caprine enzyme could be an alternative milk coagulant in cheese making. lipid accumulation in schizochytrium g13/2s was studied under batch and continuous culture. different glucose and glutamate concentrations were supplied in a defined medium. during batch cultivation, lipid accumulation occurred towards the end of the growth phase but ceased when cell proliferation stopped. under continuous culture, as the dilution rate decreased from 0.08 to 0.02 h^-1, both cell dry weight and the total fatty acid content (tfa) of the cells increased. at a constant dilution rate of 0.04 h^-1, nitrogen limitation induced lipid synthesis (28% tfa), as described for other lipid-accumulating organisms. 
however, under carbon-limited conditions some lipid accumulation was still possible, the tfa being 22%. finally, the batch and continuous culture methods are compared for docosahexaenoic acid (22:6, n − 6) production. nitrite-oxidizing bacteria catalyze an essential step of nitrogen elimination in biological wastewater treatment. recently, novel and as yet uncultured nitrite-oxidizing nitrospira-like bacteria were found to be abundant in municipal and industrial wastewater treatment systems, where they outcompete nitrobacter, which has long been considered the organism responsible for nitrite oxidation in bioreactors. despite the importance of nitrospira-like bacteria for wastewater treatment and for nitrogen fluxes in natural ecosystems, little is known about their ecophysiology and interactions with other organisms. 
cultivation-independent molecular techniques were applied to investigate the diversity, distribution, and physiological and genetic features of nitrospira-like bacteria in nitrifying activated sludge and biofilm. a surprisingly high diversity of these organisms was found in these engineered habitats as well as in natural ones. moreover, significant physiological differences could be identified among various phylogenetic sublineages of the genus nitrospira. quantitative co-localization analyses performed with novel image analysis software revealed that these metabolic features are reflected in the spatial organization of nitrifiers living in biofilms and activated sludge flocs. based on an environmental genomics approach, the genome of a nitrospira-like bacterium found in activated sludge is being analyzed. results obtained so far point to unexpected physiological capabilities of this organism and allow us to propose that nitrospira-like bacteria may also play roles in the bioremediation of (per)chlorate and chlorite. the activated sludge process is the most common way to remove organic matter, nitrogen and phosphorus from wastewater by microbiological means. knowledge about the microorganisms involved is fundamental for the optimisation of existing plants and the development of new plants and process designs. many of the bacteria believed to be involved in nitrification, denitrification, biological phosphorus removal, and removal of organic matter in full-scale plants have now been identified by molecular methods. recent developments in experimental approaches have allowed the study of the ecophysiology of these uncultured and potentially important bacteria, thus providing a better understanding of their function in full-scale activated sludge ecosystems. relatively few dominant species in each functional group (e.g. denitrifiers and polyphosphate-accumulating organisms) seem to be present. 
some species appear to be very specialized regarding nutrient requirements, while others are more versatile. a new method for mercury remediation from industrial wastewater, based on the enzymatic reduction of mercury by live mercury-resistant bacteria immobilized on pumice particles, has been developed at gbf, germany, and implemented at industrial scale. the experience gained during operation of this installation led to the idea that the bioremediation process might be integrated in one bioreactor with the sorption of mercury from wastewater, by immobilizing the bacteria directly on activated carbon. for this it was necessary to define several significant parameters of the activated carbon used and of the sorption process itself. the paper presents results of equilibrium and kinetic investigations of mercury sorption from aqueous solutions onto seven different types of activated carbon. the effective diffusion coefficients in the particles were obtained from transient-state experiments, and the sorption isotherms, the saturation capacity of the sorbents and its dependence on temperature and ph were identified. the hydrodynamic and sorption characteristics of the activated carbon bed in a laboratory-scale fixed-bed bioreactor were then investigated under different process conditions (mercury concentration, volumetric flow rate, temperature, ph). the results (effective capacity of the bed, dispersion and diffusion coefficients, mass transfer coefficient) enable implementation of this bioreactor in a modified, integrated process of mercury bioremediation from industrial wastewaters. research supported by grant kbn 4 t09c 013 25. bacterial cr(vi) reductases convert the very mobile and toxic cr(vi) to the less toxic and less mobile cr(iii). the ability to reduce cr(vi) was studied in cell extracts of ochrobactrum tritici strain 5bvl1 and microbacterium sp. strain 3a. 
both microorganisms were isolated from the same sample of chromium-contaminated sludge, taken from a wastewater treatment plant. while in the first case activity was found to be associated with the intracellular soluble extract, in the second case reduction occurred extracellularly. cr(vi) reduction by the intracellular soluble extracts of strain 5bvl1 required the presence of nadh or nadph as electron donor, while the extracellular fraction of strain 3a only used nadph. several studies were made on the intracellular soluble extracts of strain 5bvl1. a k_m of 26.11 μm cr(vi) and a v_max of 5.75 ± 0.13 nmol cr(vi) min^-1 mg^-1 protein were estimated from the lineweaver-burk plot and michaelis-menten non-linear regression. the temperature and ph optima for cr(vi) reduction were 37.5 °c and 5.0, respectively. hyperthermus butylicus is an anaerobic hyperthermophilic crenarchaeon, isolated from the solfataric sea floor off são miguel island, azores (zillig et al., 1990). h. butylicus grows at up to 108 °c (optimally between 95 and 106 °c) at ph 7. it can utilize peptides, polysaccharides and other substrates as carbon sources to produce acetate, butyrate and n-butanol. the capability to produce enzymes (e.g. hydrolases, dna and rna polymerases, etc.) that can tolerate and function at temperatures 20 °c higher than those of most other thermophilic archaea renders h. butylicus of particular interest to the biotechnology industry. the complete genome sequence of h. butylicus was determined; it contains 1,667,186 bp on a single circular chromosome. 1695 protein-encoding genes were identified, with a high proportion of uug and gug start codons. many of these were assigned functions on the basis of sequence comparisons. our analyses revealed some unusual metabolic properties in h. butylicus. several sugar transporters were identified, although the set of genes required for glycolysis is incomplete. 
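the k_m and v_max estimation reported above for the strain 5bvl1 cr(vi) reductase can be sketched via the lineweaver-burk linearization, 1/v = (k_m/v_max)(1/s) + 1/v_max. the substrate points below are synthetic, generated from the reported constants purely to illustrate the fitting step:

```python
import numpy as np

# lineweaver-burk fit: synthetic, noise-free rate data generated from the
# reported constants (k_m = 26.11 uM, v_max = 5.75 nmol min^-1 mg^-1);
# substrate concentrations are invented for illustration.
km_true, vmax_true = 26.11, 5.75
s = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # substrate, uM (hypothetical)
v = vmax_true * s / (km_true + s)             # michaelis-menten initial rates

# linear regression of 1/v against 1/s recovers the constants
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax_fit = 1.0 / intercept
km_fit = slope * vmax_fit
print(round(km_fit, 2), round(vmax_fit, 2))   # → 26.11 5.75
```

with real (noisy) data the abstract's non-linear michaelis-menten regression is generally preferred, since the double-reciprocal transform amplifies error at low substrate concentrations.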
moreover, genes encoding enzymes converting glucose to trioses are absent, and no genes encoding enzymes of the pentose phosphate cycle or the kdpg pathway were detected. the h. butylicus genome encodes many proteases and peptidases, although the lon proteases, encoded in all other archaeal genomes, are absent. although it was reported that h. butylicus does not utilize free amino acids in the medium, genes for amino acid transporters were identified, and several proteins involved in di- or oligo-peptide transport are encoded. genes encoding signal peptidases are absent. we will summarize gene products of special biotechnological interest. reference: zillig et al., 1990. j. bact. 172, 3959-3965. hot genomics: insights into the thermophilic lifestyle of thermus thermophilus from its complete genome. holger brüggemann 1,2, anke henne 1, gerhard gottschalk 1: 1 göttingen genomics laboratory, institute of microbiology and genetics, university of göttingen, germany; 2 institut pasteur, unité de génomique des microorganismes pathogènes, paris, france. thermus thermophilus is an extremely thermophilic, halotolerant bacterium, which was originally isolated from a natural thermal environment. the recently completed genome sequences of two strains, hb27 and hb8, provide a solid foundation for investigating many aspects of the thermophilic lifestyle, ranging from molecular stability determinants to key elements of organismic physiology. in addition, the species has considerable biotechnological potential; many thermostable proteins isolated from members of the genus thermus are indispensable in research and in industrial applications. the closely related genera thermus and deinococcus belong to a distinct branch of bacteria called the deinococcus-thermus group. genome comparison of t. thermophilus and d. 
radiodurans, a mesophilic organism which exhibits high resistance to radiation, oxidative stress and desiccation, is of particular interest for the identification and exploration of thermophilic determinants. a large number of orthologs with a high degree of sequence identity are shared between the two species. this opens the opportunity for comparative studies of the conformational and chemical thermostability of proteins, as well as for the identification of specific traits of each organism, explaining their unique physiological properties and their intriguing differences in stress tolerance. although strains hb27 and hb8 share a highly conserved chromosome, striking differences can be found between their megaplasmids, which encode a huge proportion of genes not found in the genome of d. radiodurans. possible contributions made by the megaplasmids to a thermophilic lifestyle will be discussed. microorganisms that can live at high temperatures, extremes of ph and high salt concentrations are called extremophiles. extremophilic microorganisms have extended our knowledge and understanding of fundamental questions such as the origin of life. the ability to grow under extreme conditions and to produce stable proteins makes extremophiles very attractive for researchers and also for industry. extremozymes from extremophiles have great economic potential in many industrial processes, including agricultural, chemical and pharmaceutical applications. the concurrent development of protein engineering will increase the application of enzymes from extremophiles in industry. turkey has vast and various ecological areas, and so it has a broad microbial diversity. among the extremophiles defined in the scope of this project, halophilic microorganisms producing industrially important proteins were isolated from the çamaltı saltern area in izmir, turkey. 
in this work, the growth of the isolates at different temperatures, salinities and ph values was investigated to determine the effects of various growth conditions. eight isolates grow at ph between 6.50 and 8.50, and two isolates at 6.50-7.50. they grow at temperatures between 37 and 55 °c and salt concentrations between 3% and 25%. phenotypic characterization showed that they are gram (−); oxidase, urease, dnase and nitrate reduction are (−), and catalase is (+). they used d(+) glucose, maltose, lactose, sucrose, l(+) arabinose, d(+) mannose and glycerol, and four of the isolates used d(+) xylose, as carbon sources. the isolates are resistant to erythromycin, ampicillin-sulbactam, cefoxitin, penicillin, bacitracin, novobiocin and amikacin, and sensitive to ceftazidime, ciprofloxacin, amoxicillin/clavulanic acid, imipenem, chloramphenicol, ceftazidime/clavulanic acid, aztreonam, cefepime, cefotaxime, cefoperazone and amoxicillin. this project was supported by tubitak through project tbag 2321-103t069. the technology of producing renewable energy carriers such as ethanol, methane and hydrogen from biomass holds the potential of creating in-house energy resources while lowering the emission of greenhouse gases, as demanded by the kyoto protocol. recently, goals were defined for the european union stipulating that 5.75% of transportation fuel has to come from biofuels by the year 2010. large-scale implementation of biofuels in the transportation sector will demand that lignocellulosic biomass, which is found in surplus throughout the world, is used as the raw material for the production process. the presentation will include a comprehensive description of the special bio-refinery concept developed in denmark for the production of biofuels and other valuable products from straw. 
the concept includes several innovative steps, such as a pre-treatment method using wet oxidation, on-site production of enzymes and a continuous fermentation process using a genetically modified thermophilic bacterium. by co-producing several biofuels in the plant, optimal use of the biomass is assured, and the price of, for instance, bioethanol is getting close to that of conventional oil-based fuels. optimizing each step in the bio-refinery, while keeping the full integration in mind, will be the way to make biofuel production economically viable. in the presentation we will present our road map for achieving this goal in the near future. replacement of gasoline by liquid fuels produced from renewable sources is a high-priority goal in many countries worldwide. one fuel which has been found well suited is ethanol. it may be produced from various lignocellulosic materials, such as forest and agricultural residues, which are fairly inexpensive. to compete with gasoline, the production cost must be substantially lowered. ethanol production from lignocellulose comprises the following main steps: hydrolysis of hemicellulose, hydrolysis of cellulose, fermentation, separation of lignin, recovery and concentration of ethanol, and wastewater handling. the enzymatic hydrolysis and fermentation can either be run separately (shf) or combined into a simultaneous saccharification and fermentation (ssf). the latter has been shown to result in higher ethanol yields than shf. some of the most important factors in reducing the cost are: efficient utilisation of the raw material through high ethanol yields, high productivity, high ethanol concentration in the feed to distillation, and process integration in order to reduce capital cost and energy demand. 
in recent years we have performed several studies on the hydrolysis and fermentation of various forest and agricultural residues in a mini-pilot plant to improve the overall yield of ethanol and to reduce the energy demand and production cost. steam pretreatment, with a small addition of acid catalyst, has resulted in sugar yields close to 90% of the theoretical for various types of raw materials, e.g. spruce, salix and corn stover. the ssf step has been developed and optimized to give a high yield of ethanol. for spruce, an ethanol yield of about 80% of the theoretical, based on the composition of the raw material, has so far been obtained using two-stage steam pretreatment of so2-impregnated raw material followed by ssf. improvements of the ssf step, in the form of high dry matter content, recirculation of process streams and adapted yeast, have resulted in ethanol concentrations around 45 g/l, leading to a substantial reduction in energy demand and production cost. these improvements have been assessed by techno-economic evaluation to determine their effect on the ethanol production cost. the process has been further optimised by process integration to further reduce the energy demand. the ethanol production cost was estimated to be around 0.38-0.46 euro/l ethanol, assuming a yearly capacity of 200 000 tonnes raw material (dry matter). production of bioethanol from spent grain, a by-product of beer production. sho shindo, tadanori tachibana, akita research institute of food and brewing, akita-city, akita 010-1623, japan. e-mail: shindo@arif.pref.akita.jp (s. shindo). breweries generate one million tons of spent grain every year, and about 20% of the spent grain is recycled in japan. it is therefore environmentally and economically significant to consider the production of ethyl alcohol as biomass energy using spent grain from the brewing industry. ethyl alcohol production from spent grain with immobilized yeast cells was investigated. 
spent grains were liquefied by a steam explosion treatment to obtain liquefied sugar. when 1 kg of wet spent grain was treated at 30 kg/cm^2 pressure for 1 min in a 5 l steam explosion reactor, 60 g of total sugar was obtained from the liquefied spent grain. furthermore, 1.3% (w/v) glucose, 0.4% (w/v) xylose and 0.1% (w/v) arabinose were produced when the liquefied spent grain was treated with glucoamylase, cellulase and hemicellulase enzymes. ethyl alcohol production from the liquefied spent grain was carried out simultaneously by immobilized saccharomyces cerevisiae and immobilized yamadazyma stipitis. both yeasts were immobilized on a glass bead carrier. xylose and arabinose were consumed after glucose was consumed completely during ethyl alcohol production. 5.8% (v/v) ethyl alcohol was produced after 2 days from liquefied spent grain adjusted to an initial sugar concentration of 17%. vegetable oils constitute a renewable resource for the production of fuels, becoming a viable alternative to diesel from petroleum. among the vegetable oils, the oil extracted from castor plant seeds is a promising alternative source because it consists mainly of ricinoleic acid (12-hydroxy-9-octadecenoic acid), which represents approximately 90% of the total composition of the oil. the biodiesel obtained from castor oil can be defined chemically as a mixture of methyl or ethyl esters of carboxylic acids, synthesized by the transesterification reaction of the existing triglycerides with a short-chain alcohol using alkaline or enzymatic catalysts. in this work, we describe the results found for castor oil of different degrees of purity. initially, a rheological characterization was made, followed by structural characterization (13c nmr, 1h nmr and infrared) and thermal characterization (dtg, dta and dsc) of the crude and refined castor oil. 
the hydroxyl content, acid value, saponification value and iodine value of the different oils were also measured. these results were then used to evaluate possible differences in the quality of the biodiesel (ethyl esters) produced in the enzymatic alcoholysis of castor oil catalyzed by lipases (novozym 435, lipozyme rm im and lipozyme tl im). the degree of substitution of the castor oil derivative was determined by titration with 0.1 n hcl and confirmed by tlc analysis, and the results showed conversion rates of about 90%. myo-inositol hexakisphosphate (ip6) has been demonstrated to have a wide range of health benefits, such as prevention and therapy of various cancers, amelioration of heart disease, and prevention of renal stone formation as well as of complications from diabetes. on the other hand, lower phosphorylated forms of inositol, especially inositol trisphosphate (ip3) and inositol tetrakisphosphate (ip4), are important signal transduction molecules within cells in both the plant and animal kingdoms. it has been hypothesized that at least the anticancer function of ip6 is mediated via these lower inositol phosphates. the diversity and practical unavailability of the individual myo-inositol phosphates preclude their investigation. phytases, which catalyze the sequential hydrolysis of phytate, enable production of defined myo-inositol phosphates in pure form and in sufficient quantities. different phytases may yield different positional isomers of myo-inositol phosphates and therefore different biochemical properties. phytases differing in ph optima, substrate specificity and specificity of hydrolysis have been identified in plants and microorganisms. in this paper the dephosphorylation pathway of the novel phyfauia1 is compared to those of other bacterial phytate-degrading enzymes. preliminary results have shown that phyfauia1 converted ip6 into ip5 (myo-inositol 1,2,3,5,6-pentakisphosphate) and another isomer, which is yet to be elucidated. 
in a denitrifying pilot-plant reactor, a new obligately anaerobic ammonium oxidation (anammox) process with great potential for nitrogen removal from high-strength wastewater was discovered. after transfer of the complex microbial community to a laboratory sbr system, a highly enriched population, dominated by a single anaerobic chemolithoautotrophic bacterium related to the planctomycetes, was obtained. the bacterium was purified via percoll centrifugation and characterized as 'candidatus brocadia anammoxidans'. a survey of different wastewater treatment plants using anammox-specific 16s rrna gene primers and anammox-specific oligonucleotide probes revealed the presence of at least four other anammox bacteria, tentatively named 'candidatus kuenenia stuttgartiensis', 'candidatus brocadia fulgida', 'candidatus scalindua wagneri' and 'candidatus scalindua brodae'. a close relative of the two scalindua species, 'candidatus scalindua sorokinii', was found to be responsible for about 50% of the nitrogen conversion in the anoxic zone of the black sea and in the benguela upwelling system along the namibian coast, making anammox an important player in the global nitrogen cycle. electron microscopic studies of all five anammox bacteria showed that several prokaryotic membrane-bounded compartments, surrounded by unique ladderane lipids, are present inside the cytoplasm. hydroxylamine oxidoreductase (hao), a key anammox enzyme, was present exclusively inside one of these compartments, named the 'anammoxosome'. unique peptide fragments of the purified hao were used to locate the hao gene in the genome assembly of 'candidatus kuenenia stuttgartiensis'. the implementation of the anammox process in the treatment of wastewater with high ammonium concentrations was started at the treatment plant in rotterdam, the netherlands, where it is combined with the partial nitrification process sharon. 
the estimated price for nitrogen removal with partial nitrification and anammox is about 0.75 euro/kg n. gas lift reactors could sustain the highest anammox capacity, at 8.9 kg n removed/m^3 reactor per day. an alternative configuration of anammox is the oxygen-limited canon process, in which aerobic ammonium-oxidizing bacteria protect the anammox bacteria from oxygen and produce the necessary nitrite. the maximum nitrogen removal with canon in gas lift reactors was 1.5 kg n/m^3 reactor per day. using several different conditions and parameters, the competition and co-existence of aerobic and anaerobic ammonium-oxidizing bacteria were modeled. in addition to ammonia, urea was also converted after a 2-week adaptation in the canon system. recently it was shown that anammox bacteria can use organic acids as an additional energy source. murray moo-young, wa anderson, department of chemical engineering, university of waterloo, waterloo, ont., canada n2l 3g1. bioreactors are central to the bioremediation of contaminated environments, whether water, air or soil. in all three areas of application, bioreactor design is critical to the development of new or improved processes. this overview focuses on the physical limitations of bioreactors imposed by biological requirements. the information is based on our own research findings. the need for more applications-oriented bioremediation research becomes apparent. for techno-economic reasons, the airlift type has often been the bioreactor of choice for most bioremediations. however, the lack of an adequate understanding of the quantitative effects of operating conditions on its performance has been an ongoing concern. these effects have been characterized for engineering implementation. to enhance productivity, innovative pretreatment techniques for the polluted sources have also been developed using photocatalytic and chemical oxidation methods. case studies on petrochemical-contaminated water and soil reveal significant enhancement potential. 
other studies on microbial biofilters for air bioremediation indicate that the active mass of the biological consortia is not sufficiently understood for rational design. analysis and retrofit design of wastewater treatment facilities using process simulation tools. demetri petrides, alexandros koulouris, intelligen, inc., scotch plains, nj 07076, usa. email: dpetrides@intelligen.com (d. petrides). process simulators have been used in the petroleum and chemical industries for over four decades to facilitate the design of new processes and optimize the performance of existing ones. similar benefits can be derived from the use of such tools in the environmental arena, particularly in the field of physical and biological treatment of municipal and industrial wastewater. specifically, process simulators can be used to evaluate and improve options for: (1) more efficient removal of nutrients (e.g., organic nitrogen and phosphorus) that cause eutrophication, (2) estimation and control of volatile organic compound (voc) emissions from open tanks, and (3) more efficient removal and control of hazardous compounds. the potential benefits will be illustrated with case studies involving both municipal and industrial wastewater facilities. the microbial reduction of metals has attracted recent interest, as these transformations can play crucial roles in the cycling of both inorganic and organic species in a range of environments and, if harnessed, may offer the basis for a wide range of innovative biotechnological processes. under certain conditions, however, microbial metal reduction can also mobilise toxic metals, with potentially calamitous effects on human health. some effluents contain heavy metals as soluble compounds; several microorganisms have the capacity to precipitate these metals as insoluble compounds, which allows the collection and separation of the metallic precipitates from the contaminated medium. 
sulfate-reducing bacteria (srb), under anaerobic conditions, oxidize simple organic compounds (such as acetic acid and lactic acid) by utilizing sulfate as an electron acceptor and generate hydrogen sulfide. hydrogen sulfide reacts with heavy metal ions to form insoluble metal sulfides that can be easily separated from solution. the purpose of this work was to study the capacity of desulfovibrio sp. cultures to reduce mixtures of heavy metals in the presence or absence of petroleum. for this, a 2^k (k = 5) experimental design was carried out. the five factors studied were cr, cu, mn, zn and petroleum. batch studies with desulfovibrio sp. were performed in 50 ml sealed bottles with different concentrations of metal sulfate (cr(iii) 10 ppm, cu(ii) 5 ppm, mn(ii) 10 ppm, zn(ii) 15 ppm) and 2 g l^-1 of petroleum. during batch incubation, the dissolved concentration of each metal in the supernatant decreased: to undetectable levels for zn (70-100% removal), and by 40-60% for cu, 40-70% for mn and 50-80% for cr. the development of a continuous process with sulfate-reducing bacteria seems to be a suitable alternative for reducing metals in solution from contaminated media such as industrial or mine effluents. following these preliminary results, experiments in progress are focused on that purpose. reduction of odour emissions from livestock buildings using a bioscrubbing system. morten øgendahl, nawaf abu-khalaf, jens jørgen lønsmann iversen, department of biochemistry and molecular biology, university of southern denmark, dk-5230 odense m, denmark. e-mail: tvede@bmb.sdu.dk (m. øgendahl). a bioscrubbing system for reducing odour emissions from livestock buildings is presented. the bioscrubbing system consists of two separate units: an absorption column and a water purification module. the absorption column is mounted in the ventilation stacks of the livestock buildings, absorbing odorants in the effluent air flow. 
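the 2^k factorial screening used in the desulfovibrio metal-reduction study above enumerates every high/low combination of the five factors. this sketch generates the design matrix; the high levels are the concentrations quoted in the abstract, and the low level of zero for each factor is an assumption made for illustration:

```python
from itertools import product

# two-level full factorial design for k = 5 factors: each run is one
# high/low combination, giving 2^5 = 32 runs. levels other than the
# quoted high concentrations are assumed, not taken from the study.
factors = {
    "cr_ppm": (0, 10),
    "cu_ppm": (0, 5),
    "mn_ppm": (0, 10),
    "zn_ppm": (0, 15),
    "petroleum_g_per_l": (0, 2),
}

# enumerate every combination of factor levels as one run dictionary
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 2**5 = 32 experimental runs
```

a full factorial at this size is still tractable (32 bottles) and lets main effects and interactions between the metals and petroleum be estimated from the same data.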
the odorants are absorbed in a spray of droplets formed by a grid of high-pressure nozzles in the inlet of the absorption column. the spray of droplets is extracted from the air flow and pumped to a centrally located water purification module, an inverse three-phase fluidised bed bioreactor, where the biodegradation of the absorbed odorants occurs. the bioreactor features a split sparging system for maximum mixing and aeration. the cleaned water is recirculated to the absorption column. an electronic tongue will quantify key odorants in the bioreactor. the absorption column is designed to be retrofitted into existing livestock building ventilation systems. the water purification module is constructed in standard-size units, simplifying scaling to match the requirements of individual applications; the total bioreactor volume is increased by increasing the number of standard bioreactors.

this work describes a "light off" toxicity bioassay sensor based on whole-cell genetically modified bioluminescent bacteria. the biosensor was constructed by mating the environmentally isolated phenol-degrading acinetobacter sp. strain df4 with the plasmid putk2, an incp-β plasmid with the bioluminescence genes luxcdabe inserted into a genetic region involved in plasmid replication and transfer. the resulting bioreporter was designated df4/putk2 and used to investigate the toxicity of phenolics. among the phenolics examined, pentachlorophenol, catechol and nitrophenol had the fastest effect on the bioluminescence of bioreporter df4/putk2 over an incubation period of 350 min. the effect of various concentrations of phenol and its derivatives, either individually or in two- and three-component mixtures, on the bioluminescence response of the constructed bioreporter df4/putk2 was also examined. significant reduction of the bioluminescence was observed whenever a mixture contained pentachlorophenol, catechol or nitrophenol.
to develop a system appropriate for commercialization, the constructed bioreporter df4/putk2 was subjected to immobilization in microtiter plates using several entrapment gels. after a selection of materials was tried, lb/agar was chosen as the most suitable candidate material.

characterization of key odour compounds in an air wet scrubber is presented. the key odour compounds represent five chemical groups, i.e. sulphide, alcohol, volatile fatty acids (vfas), phenol and indole. direct aqueous injection (dai) and solid phase extraction (spe) methods were used before injection of the key odorants into the gas chromatography-flame ionisation detection (gc-fid) system. the dai and spe methods were efficient in the identification of odour compounds in the wet scrubber. the spe method had a high recovery and can be more effective in the identification of compounds at low initial concentrations. however, dai showed better linearity and a lower limit of detection (lod) than the spe method. the dai method was the method of choice for characterization, as it is cheaper, easier to handle and highly applicable. at least two odorants, phenol and 1-butanol, were quantified successfully using the dai method; their lod was less than their odour detection limit in the wet scrubber. the dai method can be used as a reference measurement method for any further analytical application, e.g. an electronic tongue.

recent developments in biotechnology have enabled the widespread use of microbial enzymes in the textile, detergent, food and dairy industries and also in various environmental applications. microorganisms which live at extremes of temperature, ph and salinity produce extremozymes that offer many exciting opportunities for their use in clean production. in this study, microorganisms were isolated from the camaltı saltern area in izmir, turkey. the effect of medium salinity on the growth of these microorganisms was determined. seven out of 10 isolates required salt for growth.
the salinity ranges at which growth was detected were: 5-25% for two isolates, 6-25% for one isolate, 7-25% for two isolates and 8-25% for one isolate. the isolates were also screened for their capability of producing industrially important enzymes such as amylase, protease, lipase, xylanase and cellulase, which are widely used not only in the textile, detergent, food and dairy industries but also in various environmental applications. all of the isolates were found to produce both amylase and xylanase at varying salinities within the 5-30% salt concentration range at ph 7.0. extracellular protease activity was detected in the medium of all isolates grown at 5, 10, 15, 20 and 25% salinity at both ph 7.0 (the optimum growth ph) and ph 9.0. out of the 10 isolates, 9, 10 and 9 were found to produce cellulase when the salt concentrations were 5, 10 and 15%, respectively; at 20% salt concentration, only one isolate produced cellulase. none of the isolates produced lipase over the 5-30% salt concentration range. this project was supported by tubitak through project tbag 2321-103t069.

chemical engineering department, middle east technical university, ankara 06531, turkey. e-mail: ubakir@metu.edu.tr (u. bakir). glass and ceramic tiles are very widely used industrial materials. in most cases, periodical cleaning is required to maintain their optical properties such as transparency and visual aspect. because of the ever-growing demand for healthy living, there is keen interest in materials capable of killing harmful microorganisms. the application of such tiles in care facilities to reduce the spread of infections, and in public and residential places to improve hygienic conditions, is of general interest. the aim of this study is to develop methods for applying thin film coatings on glass tiles to make them anti-bacterial by utilizing photocatalysis, and to investigate their anti-bacterial properties.
semiconductors, because of their suitable band gap energies, are attractive for this purpose; their photocatalytic properties are exploited in the process. oxidising radicals are formed on the coated surfaces, and these radicals attack organic pollutants and bacteria on contact with the surface. titanium dioxide (tio2)-coated surfaces are considered to be very effective against organic and inorganic materials, as well as against bacteria. in the experimental procedure, the coating solution is prepared by the sol-gel technique. after pretreatment of the surfaces, the coating solution is applied by dip-coating. after appropriate thermal treatments to achieve thin, dense and strong coatings, the indicator microorganism is applied directly to the coated surfaces and illuminated under a solar simulator light source. finally, the number of surviving microorganisms is determined. in this study, the effects of titanium dioxide (tio2) and tin oxide (sno2) coating solutions, and of metal doping of these solutions, on anti-bacterial function were investigated. as a result, the number of escherichia coli (used as the indicator microorganism) on tio2- and sno2-coated glasses was reduced by 80-85% and 40-45%, respectively, relative to the control glass. doping with metals increased the activity of the coatings, hence the number of surviving microorganisms decreased.

activity of a methanogenic ecosystem during the primary contact with a solid support. s. michaud, n. bernet, p. buffière, j.p. delgenès, inra-lbe, avenue des etangs, f-11100 narbonne, france. in this paper, the biological activity during the first contact between a methanogenic sludge and a solid support was investigated in batch experiments, at different solid concentrations, using two different granular solid materials and with glucose as the main organic substrate.
in all cases, the introduction of a solid material into a methanogenic suspended biomass induced a response of the anaerobic microorganisms after a lag phase during which biological activity was not detected. this lag phase could be the consequence of a physical stress induced by the first contact between microbial cells and the solid surface. the lag phase was not observed when the biomass originated from a biofilm reactor, i.e. a biomass previously exposed to a solid material. a change in the metabolism of organic matter, from catabolism and methane production toward production of other compounds, could be observed, characterised by a sharp decrease of the methane yield in the anaerobic system. analyses of the gas and liquid phases did not show the production of any new gaseous or soluble compound as the biological end product of this activity. this suggests the production of non-soluble compounds by an anabolic pathway, which could indicate the initiation of biofilm formation. this metabolic activity was shown to be directly correlated with the ratio between the solid surface introduced and the microorganism concentration in the anaerobic culture (m² g vs−1). from kinetic observations, acetogenic methanogenesis was seen to recover more rapidly than syntrophic propionate and butyrate degradation.

evaluating microbial diversity of hydrocarbon degrading bacteria. cleantis braithwaite, howard rosser, tawfiq al-ibrahim, hussain al-bandi, research and development center, saudi aramco, dhahran, saudi arabia. the analysis of microbial diversity with molecular methods is central to isolating and identifying new and potentially useful biocatalyst resources for research and industry. the ability to degrade hydrocarbon components of petroleum is widespread among bacteria, and is an effective method for remediation of a variety of ecosystems.
due to the high carbon content of oil and the low levels of other nutrients essential for microbial growth, treatment of oil with phosphorus and nitrogen is generally required to enhance the growth of hydrocarbon-degrading bacteria and to stimulate oil sludge degradation. in this research study, three types of oily sludge, from gas plant, refinery and terminal facilities, were treated with nutrients. to assess the microbial diversity, both the biolog culture method and culture-independent polymerase chain reaction (pcr)-denaturing gradient gel electrophoresis (dgge) were used. nutrient addition significantly improved oil sludge degradation. we identified and characterized several hydrocarbon-degrading bacterial strains that have the ability to convert petroleum. these bacteria included representatives of both gram-positive and gram-negative genera. there were slight differences in the quantity and type of hydrocarbon-degrading bacteria found at the three sites. this is the first molecular analysis of hydrocarbon-degrading microbial populations in saudi arabian operations.

mussel adhesive proteins, including the 20-plus variants of foot protein type 3 (fp-3), have been suggested as potential environmentally friendly adhesives for use in aqueous conditions and in medicine. here we report the novel production of a recombinant mytilus galloprovincialis foot protein type 3 variant a (mgfp-3a) fused with a hexahistidine affinity ligand in escherichia coli, and its ∼99% purification with affinity chromatography. recombinant mgfp-3a showed a superior purification yield and better apparent solubility compared to those of the previously reported recombinant m. galloprovincialis foot protein type 5 (mgfp-5). the adsorption abilities and adhesion forces of purified recombinant mgfp-3a were compared with those of cell-tak (a commercial mussel extract adhesive) and mgfp-5 using qcm analysis and modified afm, respectively.
these assays showed that the adhesive ability of recombinant mgfp-3a was comparable to that of cell-tak but lower than that of recombinant mgfp-5. collectively, these results indicate that recombinant mgfp-3a may be useful as a commercial bioadhesive or an adhesive ingredient in medical or underwater environments.

cresol, a monomethylated phenol, is an aromatic compound. the environmental protection agency (epa) has determined the carcinogenic potential of cresol. various options are being examined for the degradation of cresols because of their unavoidable large-scale production and their toxicity. many aromatic hydrocarbons can be used aerobically as electron donors by species of pseudomonas, leading to ring cleavage of these compounds. in the present study, pseudomonas strains were isolated from activated sludge collected from a sewage treatment plant. the culture was repeatedly transferred onto nutrient agar plates to check its purity. the organism was grown aerobically in an inorganic medium with p-cresol as the sole carbon source. pseudomonas was confirmed by the expression of green pigment, gram staining and biochemical tests including koh, catalase, nitrate reduction and carbohydrate fermentation reactions. the inoculation conditions were used to determine the rate of degradation of p-cresol, and the effect of temperature on p-cresol degradation was studied. moreover, the effects of different concentrations of the aromatic compounds on pseudomonas, as well as substrate variability, were also documented. phenolic intermediates were estimated colorimetrically using 4-aminoantipyrine, the folin-lowry method, uv spectrophotometry and hplc. the results indicated that pseudomonas could degrade up to 300 mg/l of p-cresol within 10 h. the pseudomonas sp. exhibited good metabolic versatility and degraded other aromatic compounds including m-cresol and p-hydroxybenzoic acid.
we conclude that this strain of pseudomonas has excellent potential for bioaugmenting the degradation of p-cresol-containing wastewater treatment units.

a considerable amount of waste cooking oil is produced by the restaurant industry worldwide. this poses a significant environmental and economic problem, since high oil and grease concentrations in the sewage system can lead to pipe occlusion and decreased efficiency in water treatment plants. therefore, sending these wastes to recycling companies or hazardous waste processors is usually required. yarrowia lipolytica, a well-known lipase producer, requires the presence of lipidic compounds (e.g. vegetable oils) to boost enzyme biosynthesis. in this work, the suitability of waste cooking oil as a lipase inducer in submerged cultures of this yeast has been assessed. if successful, this procedure could allow both the degradation of an abundant waste and its valorisation as a raw material for the production of a high added-value product. the microorganism was grown in a liquid medium to which various amounts of waste cooking oil were added. biodegradation degrees of up to 80% (measured as decrease in cod) were obtained after 3 days of treatment. the initial glucose concentration in the basal medium also seemed to influence the efficiency of the process. on the other hand, addition of waste oil led to a significant increase in lipase production (more than two-fold) compared to oil-free cultures. moreover, the chain-length specificity of the produced enzymes was significantly different: high activity towards medium-chain-length esters was found, which hinted at the occurrence of both lipases and esterases.

biodesulfurization: a documental review. j. ferrer, simon bolivar university, environmental engineering lab, caracas, venezuela. a documental review of the aspects of greatest interest in biodesulfurization is presented.
specifically, the review covers the general framework and justification of this technique, the degradative pathways elucidated to date, the microorganisms involved, important elements in the development of bacterial desulfurization, areas of progress, and future trends.

in situ bioremediation of a p-nitrophenol contaminated site and assessment of its community structure. debarati paul, gunjan pandey, sumeet labana, rakesh k. jain, institute of microbial technology, sector 39a, chandigarh 160036, india. e-mail: rkj@imtech.res.in (r.k. jain). biodegradation of p-nitrophenol (pnp), a priority pollutant, was studied as a model system for bioremediation of sites contaminated with nitroaromatic/organic compounds. bioremediation studies were carried out in pnp-spiked soil in small plots under natural field conditions using arthrobacter protophormiae rkj100. the role of the carrier material was examined by immobilizing the bacteria on corncob powder prior to adding them to soil. these studies demonstrated successful removal of pnp by immobilized cells, which depleted pnp completely in 5 days, whereas free cells depleted 75% of the pnp in the same period. monitoring the fate of the released bacteria revealed a fairly stable population of cells throughout the period of study when they were immobilized on corncob powder; in contrast, there was a decrease of 2.7 log units in the colony forming units of free cells by the end of the study (30 days). bacterial community structure and diversity were also studied for the pesticide-contaminated site, wherein the effect of the addition of an exogenous strain on the existing soil community structure and on soil functionality was determined using molecular techniques. restriction fragment length polymorphism (rflp) studies identified 45 different phylotypes on the basis of similar banding patterns.
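the 2.7 log-unit decrease in colony forming units reported above for the free cells corresponds to roughly a 500-fold drop in population. a small sketch of the conversion (the formula is the standard log10 reduction; the numbers are from the abstract):

```python
def surviving_fraction(log_reduction):
    """Fraction of the initial CFU count remaining after a given log10 reduction."""
    return 10 ** (-log_reduction)

frac = surviving_fraction(2.7)   # 2.7 log units, as reported for free cells
print(f"{frac:.2e}")             # ~2.00e-03 of the initial population remains
print(round(1 / frac))           # i.e. roughly a 501-fold decrease
```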
sequencing of representative clones of each phylotype showed that the community structure of the pesticide-contaminated soil consisted mainly of proteobacteria and actinomycetes. terminal restriction fragment length polymorphism (t-rflp) analysis showed only subtle changes in community structure during the process of bioremediation.

bacteriocins encompass an array of structurally different molecules produced by a number of phylogenetically distinct bacterial groups, and trigger the killing of the same or closely related species. a recombinant escherichia coli strain harboring a bacteriocin-coding region of xanthomonas campestris pv. glycines 8ra was disrupted to obtain a cell homogenate. peptidic xanthomonas bacteriocins (pxb) were separated by lowering the ph and adding salt. the resulting pxbs were partially purified using ion exchange and gel filtration. two final active fractions, a and b, were obtained with a yield of 0.005% and 500-1000-fold purification. the activity of pxb was stable at ph values ranging from 7.0 to 10.0.

andreja kresal, vanja kokol, vera golob, textile department, university of maribor, 2000 maribor, slovenia. wastewater from textile dyeing industries is characterized by high chemical and biological oxygen demands (cod and bod) and intense color due to the extensive use of synthetic dyes. as dyes of complex aromatic structure are resistant to removal by the typical microbial population and may be toxic to the microorganisms present in treatment plants, discharge of the wastewater to treatment plants may lead to their failure. besides, direct discharge of these effluents into municipal wastewater plants and/or the environment may cause the formation of toxic, carcinogenic and/or unhealthy breakdown products.
different chemical and physical methods (adsorption, coagulation-flocculation, oxidation, filtration and electrochemical treatments) for color removal have been proposed, but due to capital costs and slow operating speeds, as well as the creation of huge amounts of sludge, there is still a great need to develop an economic and effective method. the use of lignin-degrading white-rot fungi and their enzymes (laccase, lignin peroxidase, manganese peroxidase) has attracted increasing scientific attention due to their ability to oxidatively degrade a wide range of recalcitrant organic compounds. in this contribution, the decolorization efficacy for different commercial textile reactive dyes (anthraquinone, azo, triphenylmethane) will be investigated after treatment with laccase from trametes versicolor. in order to examine the reuse of enzymatically decolorized liquors, the ecological suitability and toxicity of the degradation products after different times of enzyme exposure will be studied. this work was carried out within the scope of research project e! 3100 cawab.

influence of heavy metals on growth and extracellular enzyme production of a trichoderma harzianum strain with biocontrol potential. l. hatvani 1, l. kredics 2, a. szekeres 1, z. antal 2, l. manczinger 1, a. nagy 3, c. vágvölgyi 1: 1 department of microbiology, university of szeged, p.o. box 533, h-6701 szeged, hungary; 2 hungarian academy of sciences, university of szeged, microbiological research group, hungary; 3 pilze-nagy ltd., kecskemét, p.o. box 407, hungary. e-mail: kredics@bio.u-szeged.hu (l. kredics). trichoderma species are common soil-inhabiting asexual filamentous fungi with teleomorphs belonging to the hypocreales order of the ascomycota division.
besides the industrial and clinical importance of the genus, certain strains have been found to cause great losses in mushroom cultivation, while other strains are well known to possess high antagonistic activity against several plant pathogenic fungi and are therefore used as biocontrol agents. important mechanisms of antagonism include competition and mycoparasitism, which, among others, can be related to the fast growth of trichoderma strains and the production of several extracellular enzymes. in this study, the influence of certain soil-occurring heavy metals on mycelial growth and on the secretion of extracellular enzymes involved in competition and mycoparasitism was examined for an effective potential biocontrol isolate of trichoderma harzianum. the metal ions zinc, manganese, copper, iron, lead and mercury were applied at concentrations of 8, 16, 24, 32, 40, 60 and 80 m, and dry mycelial weight as well as the activities of the extracellular β-glycosidase, cellobiohydrolase, trypsin- and chymotrypsin-like protease and n-acetyl-glucosaminidase enzymes were determined. it was found that mercury totally blocked mycelial growth, while the other metal ions exerted a much lower influence on growth. the presence of heavy metals did not have a significant effect on the activity of the examined extracellular enzymes, with the exception of the trypsin-like protease, which showed a four- to six-fold rise in activity in the presence of certain sublethal concentrations of copper. based on these results, our further aim is to develop copper-resistant derivatives by mutagenesis from trichoderma strains with biocontrol potential. since proteases play an important role in mycoparasitism, these strains could be applied within the framework of integrated pest management in combination with copper-containing fungicides, resulting in an enhanced level of crop protection even with reduced amounts of fungicides.
this work was supported by grant f037663 of the hungarian scientific research fund and grant omfb-00219/2002 of the hungarian ministry of education.

the significance of biocontrol agents (bcas) is that some of them possess good antagonistic abilities against plant pathogenic fungi. a significant number of the most prominent fungi for agricultural application belong to the genus trichoderma. in previous studies, in vitro assays on agar plates were reported as the generally used method for the evaluation of antagonistic abilities, as the results of these assays transfer well to practical application. the aim of the present study was to develop an accurate, image analysis-based method for the evaluation of the biocontrol characters of bcas. randomly selected trichoderma isolates were tested against fusarium culmorum. in the method developed here, the areas of the fungal colonies on petri dishes were calculated by measuring the occupied surface of the medium on digital images. the inhibition effect was recorded as the biocontrol index (bci), calculated from the ratio of the area of the trichoderma colony to the total area occupied by the colonies of trichoderma and the plant pathogen. the proposed method was tested for numerous parameters, and the results revealed that the bci is capable of accurately measuring and ranking the biocontrol abilities of fungal isolates. this work was supported by grant f037663 of the hungarian scientific research fund and grant omfb-00219/2002 of the hungarian ministry of education.

the effect of advanced oxidation processes and recirculation on biodegradation of leachates from aerobic landfills. liliana krzystek, anna zieleniewska-jastrzębska, stanisław ledakowicz, department of bioprocess engineering, technical university of lodz, 90-924 lodz, poland. modern landfills are built and operated in a way which allows us to treat them as a special type of bioreactor.
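the biocontrol index described in the image-analysis study above can be sketched as follows; expressing it as a percentage is an assumption on my part, since the abstract defines only the area ratio, and the pixel counts are purely illustrative:

```python
def biocontrol_index(trichoderma_area, pathogen_area):
    """BCI: area of the Trichoderma colony divided by the total area occupied
    by both colonies, expressed here as a percentage (scaling is an assumption).
    Any consistent area unit works, e.g. pixel counts from a digital plate image."""
    total = trichoderma_area + pathogen_area
    if total == 0:
        raise ValueError("no colony area measured")
    return 100.0 * trichoderma_area / total

# hypothetical pixel counts from two confrontation plates (illustrative only):
print(biocontrol_index(5200, 2800))  # 65.0 -> trichoderma dominates the plate
print(biocontrol_index(1500, 6000))  # 20.0 -> weak antagonism
```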
simulation of municipal waste biodegradation in lysimeters provides knowledge on the basic processes that take place in an aerated landfill. the aim of aeration is to stabilise mainly the biodegradable and nitrogen-containing components and to reduce the methanogenic potential. stabilised leachates from old landfills contain large quantities of refractory carbon compounds that cannot be removed by biological methods. in such cases it is most advantageous to apply advanced oxidation processes (aops). the objective of this study is an experimental simulation of aerobic landfill stabilisation and of the impact of aops and leachate recirculation on the reduction of organic load. the performance of the processes was monitored through the reduction in time of basic indices of organic load (bod5, cod, toc, vfa, tkn, n-nh4+) and changes in biogas composition. the simulation of aerobic landfill processes was carried out in lysimeters with a fixed bed of household solid waste stabilised during 8 months under anaerobic conditions. leachates taken from the lysimeters were recirculated and subjected to advanced oxidation processes, i.e. ozonation and uv radiation with the addition of h2o2. experimental studies showed that the aerobic waste stabilisation was a very quick process: within a month the bed was stabilised, reaching a significant reduction of the organic load indices. aeration of the lysimeters caused a quick reduction mainly of the degradable organic substance (in terms of bod5), n-nh4+ and vfa. the reduction of the methanogenic potential of the landfill was even faster; the composition of the gas at the outlet of the lysimeter changed, and after only one day it was similar to that of atmospheric air. more frequent recirculation of leachates greatly enhanced the aerobic biodegradation. it was found that application of advanced oxidation processes (especially ozonation) contributed to a growing reduction of the organic load in the leachates from the aerated lysimeters.
the application of leachate ozonation resulted in a very high degree of reduction of organic compounds (up to 77%).

the objective of this experimental study was to assess the effect of temperature on the extent of aerobic batch biodegradation of potato stillage with a mixed culture of bacteria of the genus bacillus. the experiments were performed at 20, 30, 35, 40, 45, 50, 55, 60, 63 and 65 °c, at ph 7, in a 5 l working-volume stirred tank reactor (str) (biostat® b, b. braun biotech international). the duration of the process was 120 h. the initial cod of the stillage amounted to 51.9 g o2/l, the main carbon sources being reducing substances (18.7 g/l), organic acids (determined as their sum) (12.2 g/l) and glycerol (3 g/l). at 65 °c, no cod reduction or biomass increment was found to occur. at the other investigated temperatures, the reduction in cod measured after suspended solids (ss) separation varied from 77.6% (55 °c) to 89.1% (35 °c); without ss separation, cod reduction ranged between 55.6% (20 °c) and 75.1% (35 °c). this indicates that, in terms of the extent of cod reduction, the optimal process temperature was 35 °c and that there was a local optimum at about 63 °c. depending on the temperature applied, the content of reducing substances decreased by 84.3-96%, that of the organic acids by 91.7-99.6%, and that of glycerol by 91.5-96%. the experiments also produced the following two findings: (1) the rise in temperature brought about a decrease of biomass concentration in the str (measured as ss and bacterial number), and (2) temperature was a factor affecting the demand for ammonia nitrogen (n-nh4), which was highest at 20 and 60 °c. the high n-nh4 demand observed over both the higher and lower ranges of the investigated temperature should be attributed to the release of n-nh4 and to the large amounts of biomass produced, respectively.
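the cod figures reported for the potato stillage above can be cross-checked with the usual percent-reduction arithmetic; a minimal sketch using the reported initial cod of 51.9 g o2/l and the best reduction of 89.1% at 35 °c:

```python
def cod_reduction_percent(cod_in, cod_out):
    """Percent reduction in chemical oxygen demand (COD)."""
    return 100.0 * (cod_in - cod_out) / cod_in

def residual_cod(cod_in, reduction_percent):
    """Residual COD after a given percent reduction."""
    return cod_in * (1.0 - reduction_percent / 100.0)

# initial COD 51.9 g O2/L, 89.1% reduction at 35 deg C (after SS separation)
out = residual_cod(51.9, 89.1)
print(f"{out:.2f} g O2/L")   # about 5.66 g O2/L remaining
```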
the results obtained imply that the extent of potato stillage biodegradation with a mixed bacterial culture was high over a wide range of the investigated temperatures.

polychlorinated compounds such as tetrachloroethylene (pce) have become serious environmental pollutants, and considerable attention has been paid to these organochlorine compounds. this paper describes the molecular analysis of a dechlorinating gene in a halorespiring bacterium and an efficient bioremediation process. an anaerobic bacterium that dechlorinates pce to tce was isolated and identified as a species of the genus desulfitobacterium. a novel pce reductive dehalogenase (prda) gene from desulfitobacterium sp. strain kbc-1 was identified. these prd genes, including that for a membrane anchor protein, were classified as a novel type of pce reductive dehalogenase (approximately 40% homology with the general pce dehalogenase). according to the substrate range of strain kbc-1 and the phylogenetic analysis of prda, this microorganism may be expected to play the role of a primary degrader of pce in the environment. a highly efficient bioremediation process, the so-called restricted aeration system (a microaerobic/aerobic reciprocal bioremediation process), was developed.

strong modifications take place, such as ammonia production with a subsequent rise of the ph value and rapid heat evolution leading to temperatures of up to 70 °c. little is known about the microbial community in the toscano cigar fermentation and its development as fermentation proceeds. the aim of this study is to investigate the microbial community composition, its dynamics and its influence on the toscano cigar production process. our results show that the fermentation can be divided into three different phases: initially, yeasts are the predominant microorganisms while bacterial growth is partially inhibited; the middle phase is characterized by exponential growth of bacteria while the yeasts disappear.
in the final phase, the microbial population is mostly represented by spore-forming microbial species. the occurrence of yeasts in the first phase can be attributed to their ability to grow at low temperature and low ph. the bacterial population flourishes after the yeast cells have reached a stationary phase and probably grows on residual nutrients and autolysing yeast cells. the yeasts and bacteria involved in the fermentation process were isolated and characterized. the microbial community was investigated by a combination of phenotypic and molecular approaches: the phenotypic characterization was based on both colony and cell morphology, and the isolates were then identified by rrna gene sequence analysis. finally, in order to clarify the role of the identified microorganisms in the production process, a preliminary biochemical characterization was carried out.

biosensors have undergone rapid development over the last few years; in particular, in the environmental field, many biosensors using microorganisms and purified enzymes as the biological component have recently been studied. benzene is present everywhere, with high levels in cities and sometimes in petroleum processing plants. it is classified as a class 1 carcinogen able to cause leukaemia. because the evaluation of benzene requires complex instruments and quite long analysis times, simple, fast and highly sensitive alternative systems for benzene detection, such as biosensors, need to be studied. from pseudomonas putida mst, a strain previously isolated in our laboratory and able to degrade benzene, we isolated the genes encoding benzene 1,2-dioxygenase and cis-1,2-dihydrodiolbenzene dehydrogenase for use in the development of two different hydrocarbon biosensors, based on microorganisms and on purified enzymes.
the isolated genes were cloned in pvlt33 and we developed three microbial systems carrying: (1) benzene dioxygenase, (2) dihydrodiol dehydrogenase and (3) benzene dioxygenase-dehydrogenase modified by pcr to obtain enzymes with a histidine tag. the cloning was planned to construct recombinant strains able to overproduce the enzymes; the enzymatic activities will be evaluated using both whole cells and purified enzymes.

study of the operating conditions of a biofilter using a fibril-form matrix for odor gas removal. don-hee park, chonnam national university. this research was performed to develop a biological treatment process for odor gases such as mek, h2s, and toluene, which are generated by the food waste recycling process. to establish the operational conditions for odor gas removal, small-scale biofiltration equipment was operated continuously using toluene as the model odor gas. when the odor-treating microorganisms adhered to the fibril-form biofilter and formed a biofilm, a high removal efficiency of over 93% was obtained. at an inlet odor gas concentration of 400 ppm and a retention time of 10 s, the removal efficiency was 76% and 93% in the first-stage and second-stage reactors, respectively. at retention times above 15 s, however, the removal efficiency remained over 97%.

ozonated water is produced using an ozone generator in a container filled with cold water. it is useful for sanitizing the surfaces of various products for which heat or chemical treatment is inappropriate, such as fresh food products. in this study, we investigated the antimicrobial effects of ozonated water and electrolyzed ozonated water against escherichia coli, s. aureus, bacillus subtilis and the yeast saccharomyces cerevisiae for practical use in sanitizing various products. the results demonstrated that the electrolyzed ozone water was effective for reducing the microbial population at a relatively low concentration of ozone.
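the two per-stage removal efficiencies reported for the biofilter above combine multiplicatively when the stages operate in series. a minimal sketch of that arithmetic, under the assumption (not stated in the abstract) that each reported efficiency applies to its own stage's inlet concentration:

```python
# Combined removal for biofilter stages operating in series.
# Assumption: each stage's efficiency is relative to that stage's own
# inlet, so the fractions of pollutant remaining multiply.

def series_removal(*stage_efficiencies):
    """Overall removal fraction for reactor stages in series."""
    remaining = 1.0
    for eff in stage_efficiencies:
        remaining *= (1.0 - eff)  # fraction passing through this stage
    return 1.0 - remaining

overall = series_removal(0.76, 0.93)
print(f"overall removal: {overall:.1%}")  # 98.3%
```

the result (~98%) is consistent with the >97% removal the abstract reports at longer retention times, but the calculation is illustrative only.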
the electrolyzed and the ozonated water also showed synergistic antimicrobial effects.

many xenobiotics can react with thiol moieties of glutathione (gsh), forming gsh-conjugates, either spontaneously or via glutathione s-transferases (gst). these enzymes participate in the detoxification of potentially harmful compounds of endogenous or xenobiotic origin. using saccharomyces cerevisiae as an experimental model, we observed that cells mutated in the gtt1 or gtt2 genes showed twice as much cadmium absorption as the control strain. we propose that the formation of the cadmium-glutathione complex is dependent on those transferases, since it was previously demonstrated that the cytoplasmic levels of this complex affect cadmium uptake. the addition of glutathione monoethyl ester (gme), a drug that mimics gsh, to gtt1∆ cells restored the levels of metal absorption to those of the control strain. in gtt2∆ cells, however, the addition of gme did not alter the capacity to remove cadmium from the medium. taken together, these results suggest that gtt1p and gtt2p play different roles in the mechanism of cadmium detoxification. by analyzing the toxic effects of this metal, we verified that gtt2∆ and gsh1∆ cells showed, respectively, higher and lower tolerance to cadmium stress than control cells, suggesting that although gsh plays a relevant role in cell protection, formation of the gsh-cd2+ conjugate is deleterious to the mechanism of defense. furthermore, analyzing the harmful effects of another xenobiotic, menadione (2-methyl-1,4-naphthoquinone), we also observed that the gtt1p and gtt2p isoforms play distinct roles in cell protection as well as in drug removal, since both mutant strains showed lethal phenotypes after direct exposure to 20 mm menadione.
however, after adaptive treatments (mild heat or exposure to a lower menadione concentration), cells acquired tolerance to menadione stress, although the gtt2∆ mutant still showed higher sensitivity to the drug. by analyzing the malondialdehyde (mda) produced in response to menadione, we observed that gtt2∆ cells exhibited increased levels of lipid peroxidation, indicating that, during menadione exposure, gsh-conjugates are formed by the same transferase isoform, gtt2p, involved in cadmium stress. financial support: stint (sweden), cnpq and faperj (brazil).

polycyclic aromatic hydrocarbons (pahs) are ubiquitous and persistent throughout the environment. they derive from both natural and industrial sources. many pahs can have a detrimental effect on the flora and fauna of affected habitats through uptake and accumulation in food chains, and in some instances they induce serious health problems and/or genetic defects in humans. many research efforts have been expended to find a suitable method for the remediation of soil and water environments contaminated with pahs. among them, the use of ligninolytic fungi is particularly suitable for the development of such processes, since they produce extracellular lignin-degrading enzymes (mnp, lip, laccase, etc.) which degrade a wide range of organic pollutants. coriolopsis rigida has been reported to produce extracellular laccase as its sole ligninolytic enzyme. this makes the fungus particularly suitable for studying xenobiotic degradation by laccase. the purpose of this research was to obtain high laccase activities from c. rigida in solid-state cultures and to determine their ability to degrade anthracene (a typical pah). both in vivo and in vitro assays were performed.
the former led to 60-80% degradation in 3 days depending on the culture conditions, whereas the latter showed a degradation percentage above 90% in 2 days when a low concentration of the mediator hbt was added to the reaction mixture.

focus will be given to pressure-driven membrane bioreactors, gas-transfer membrane bioreactors and the novel ion exchange membrane bioreactor (iemb). the latter concept was developed, and is currently being studied, by our group. this process, based on the integration of donnan dialysis with bioconversion of one or more target pollutants to harmless products, has been modeled and experimentally verified for the removal of various charged inorganic pollutants such as nitrate, perchlorate and bromate by mixed microbial cultures under anoxic conditions. tests of up to 3 months showed very good operational stability. the essential role of the microbial membrane-attached biofilm, which develops naturally in this type of system, will also be demonstrated and discussed.

poly(lactic acid) (pla), a biodegradable plastic, is depolymerized by hydrolysis and releases soluble monomers or oligomers of lactic acid. many bacteria can use these monomers and oligomers as an energy or carbon source. in this study, we applied pla as an electron donor for the denitrification process of a previously developed bioreactor, which could remove ammonia from wastewater by simultaneously carrying out two biological processes, aerobic nitrification and anaerobic denitrification. a bench-scale bioreactor was constructed with a gel-plate containing pure-cultured cells of nitrosomonas europaea and paracoccus denitrificans and a pla-plate. the pla-plate was prepared by mixing three kinds of pla with different molecular weights and tricalcium phosphate, to keep the release of the electron donor constant over the long term. batch treatment experiments with the bioreactor were repeated for 100 days with an artificial wastewater containing ammonia.
the bioreactor could remove nitrogen from the artificial wastewater at a nitrogen-removal rate of approximately 4 g n/day per square meter of gel-plate surface throughout the experimental period, without an additional electron donor. the performance was equivalent to that obtained with our bioreactor using ethanol as the electron donor for denitrification. the bioreactor using pla does not need an additional pump, or the space for one, to supply an electron donor (e.g., ethanol). the concept of using solid electron donors such as pla would therefore make the bioreactor more compact and simpler, and could make it possible to develop a portable or disposable bioreactor.

leucosporidium antarcticum as a source of enzymes for biotechnology. arkadiusz wojtasik 1,2, marianna turkiewicz 2, jaroslaw dziadek 1, pawel parniewski 1: 1 centre for medical biology pas, 106 lodowa street, 93-232 lodz, poland; 2 faculty of biotechnology and food sciences, technical university of lodz, stefanowskiego 4/10 street, 90-924 lodz, poland. e-mail: awojtasik@cbm.pan.pl (a. wojtasik). leucosporidium antarcticum is a psychrophilic yeast able to grow at low temperatures. these microorganisms live in antarctic marine waters and are endemic to that cold environment. l. antarcticum has also been isolated from the digestive tract of the antarctic krill euphausia superba. enzymes from cold-adapted microorganisms such as l. antarcticum, which have specific activity at low temperatures ranging from 0 to 30 °c, are considered for biotechnological applications such as bioremediation and the production of polyunsaturated fatty acids of dietary significance, and might be a source of industrially useful enzymatic proteins. the main goal of this study was to construct a cdna library of l. antarcticum. a partial cdna library was obtained and some of the clones were analysed.
the sequencing analyses allowed us to find an approximately 450 base pair nucleotide sequence which displayed very high homology to a disulfide bond chaperone of the hsp33 family from psychrobacter sp.; the similarity of that heat shock protein at the amino acid sequence level reached nearly 85%. the main objective of our further research is to clone the hsp33 family protein gene and to express it in a mesophilic host strain. further clones will also be analysed to find other interesting genes encoding psychrophilic proteins. this work was partially funded by the kbn grant i29/205/05.

out of 9000 plant species found in the flora of turkey, about 3000 are endemic. beautiful flowering bulbous plants (geophytes) form an important part of this rich biodiversity. besides their use as ornamental plants, these have great potential in the perfume and pharmaceutical industries. the genera fritillaria, ornithogalum, muscari, bellevalia, tulipa, galanthus, sternbergia, crocus, arum and biarum in this group include important and critically endangered species with high export potential. most of these are endangered, and their collection from the wild and export have been banned to conserve them. large-scale production and conservation of these species could also be achieved by in vitro techniques. therefore, bulb scale and immature embryo explants of sternbergia candida, s. fischeriana, muscari muscarimi, fritillaria imperialis and f. persica were cultured on different nutrient media supplemented with various concentrations of plant growth regulators, using different culture applications. large numbers of bulblets (over 100 bulblets/explant) were produced from single immature embryos in most species tested within 12 months of culture initiation. regenerated bulblets were kept at 5 °c for 5 weeks and then transplanted to soil successfully.
to our knowledge, the present study is the first report of in vitro bulblet production from immature embryos of geophytes. the procedure described here provides a prolific bulblet production system that may form the basis of bioreactor culture and conservation of endemic and endangered geophytes.

the commercial use of organofluorine compounds in industrial, pharmaceutical and pest-control applications has dramatically increased over the past few years, resulting in the introduction of numerous new organic compounds into the environment. organofluorine compounds are chemically very stable and are assumed to be resistant to biological degradation. given the chemical inertness of fluorinated organics, their bioactivity, and their potential for accumulation in the environment, it is important to understand their environmental fate and the mechanisms by which they might be degraded. examples of the biodegradation of fluorinated compounds in the literature are scarce, fluorobenzoic acids being the most commonly reported. information on the cleavage of carbon-fluorine bonds in synthetic compounds is limited to fluoroacetate dehalogenase. in this project we try to obtain more insight into the defluorination mechanisms by investigating the diversity of degradation routes for these compounds in several soil bacteria, making use of modern genetic tools. a gram-positive strain capable of aerobic biodegradation of 4-fluorophenol (4-fp) as the sole source of carbon and energy was isolated by selective enrichment from soil samples collected near an industrial site. batch cultures were set up, and substrate consumption, accumulation of intermediates and product formation were monitored. the consortium was able to use 4-fp up to concentrations of 448.4 mg l−1 and was able to utilize a range of other organic compounds. stoichiometric release of fluoride ions was measured in batch cultures, suggesting that there is no formation of dead-end products during 4-fp metabolism.
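the stoichiometric fluoride release mentioned above can be checked with a simple molar calculation. a sketch, assuming 1 mol of fluoride per mol of 4-fp mineralised (rounded molar masses; the expected concentration is derived here, not quoted from the abstract):

```python
# Expected fluoride release from complete defluorination of
# 4-fluorophenol (4-FP, C6H5FO), one F- per molecule.
MW_4FP = 112.10   # g/mol, C6H5FO (rounded)
MW_F = 19.00      # g/mol, fluorine (rounded)

fp_mg_per_l = 448.4                   # highest 4-FP concentration used
fp_mM = fp_mg_per_l / MW_4FP          # ~4.0 mM 4-FP
fluoride_mg_per_l = fp_mM * MW_F      # expected fluoride if fully released
print(f"{fp_mM:.2f} mM 4-FP -> {fluoride_mg_per_l:.0f} mg/L fluoride")
```

measuring roughly this fluoride concentration is what "stoichiometric release" implies; a lower value would point to dead-end fluorinated products.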
biobleaching of kraft cellulose pulp by poliporus versicolor. aysun ergene 1, nazif kolankaya 2: 1 kırıkkale university, faculty of science and literature, department of biology, yahsihan, kırıkkale, turkey; 2 hacettepe university, faculty of science, department of biology, beytepe, ankara, turkey. the suitability of culture supernatant from poliporus versicolor for use in the biobleaching of kraft cellulose pulp was investigated. p. versicolor was found to grow on mycological broth (1% soytone, 4% d-glucose and 0.5% cellulose pulp). maximal extracellular ligninase production was detected after 7 days (7 nkat). the optimum biobleaching conditions were 30 °c and ph 4.8 for 10 days. under these conditions, p. versicolor decreased the kappa number from 38.55 to 19.42 and increased brightness from 28 to 32.7 over the 10-day treatment.

boron and iron are among the microelements required for the proper development of the vegetative and generative tissues of plants. though iron is present in high amounts in almost all soil types, its bioavailability to crops is extremely reduced; hence most plants face an iron deficiency problem, which affects crop production on one side and, on the other, passes nutritional problems on to humans through the food chain. boron is also among the most problematic micronutrients of the major crop plantation areas of turkey. both deficiency and toxicity problems exist in a total of about 50% of the central anatolian soil, where pea is among the legumes cultivated. the aims of our studies are the application of varying combinations of boron and iron levels in the greenhouse, the analysis of plant acquisition via icp-aes, and the determination of the effects of the element combinations at both the morphological and molecular levels. the genetic bases of the differences in response of plant genotypes to b and/or fe were investigated through the application of molecular marker techniques.
considerable growth rate and stem size differences were detected within the parents (wild type versus cultivar) and the f9 plants. the availability of genotypes efficient at high micronutrient levels is expected to help us increase cultivation of the crop in problematic areas, as well as explore the molecular bases of the microelement uptake mechanisms.

butachlor is a selective systemic herbicide that acts by inhibiting protein synthesis. this toxin is used exclusively in rice, barley, cotton and wheat farmlands. butachlor belongs to the chloroacetanilide herbicide group, which consists of butachlor, alachlor, acetochlor, metolachlor and propachlor. from an environmental point of view, butachlor is degraded in the soil by microbial activity. its stability is about 6-10 weeks. it is converted to water-soluble derivatives in soil or water, with a slow evolution of carbon dioxide. because butachlor is a herbicide toxin, it inhibits the growth of bacteria and other microorganisms. microorganisms can continue their activities only at limited concentrations of butachlor. therefore, biological treatment of industrial wastewater containing concentrated butachlor is impossible, and chemical or physico-chemical treatment is normally required. in this research, biological treatment was used: an activated sludge system with a volume of 6.5 l, treating butachlor at a concentration of 5 mg/l. the removal percentages of butachlor at concentrations of 2.5 and 3 mg/l were calculated as 41.20% and 41.67%, respectively. the removal percentage of cod was calculated as 86%.

olive oil mill wastewater (omw), the effluent of the olive industry, has a high organic load.
conventional biological treatments, despite their simplicity and rather suitable performance, are ineffective for omw treatment, since phenolics possess antimicrobial activity. to treat omw properly, the use of microorganisms able to degrade the phenolics is thus necessary. the ability of phanerochaete chrysosporium immobilized

purification and downstream process of xylitol obtained biotechnologically from hemicellulosic hydrolyzate of corncobs. b. rivas 1, p. torre 2, j.m. domínguez 1, j.c. parajó 1, a. converti 2: 1 department of chemical engineering, vigo university (campus of ourense), polytechnic building, as lagoas, 32004 ourense, spain; 2 department of chemical and process engineering, genoa university, via opera pia 15, 16145 genoa, italy. e-mail: brivas@uvigo.es (b. rivas). biotechnological production of xylitol from lignocellulosic materials has been widely studied in recent years, with promising results that confirm the possible industrial application of this technology. xylitol purification from the fermented broth is the limiting stage of this process. previous works suggest crystallization procedures to recover xylitol from fermented synthetic solutions. the complexity of a fermented hydrolyzate does not allow direct crystallization. in this work, a corncob hydrolyzate obtained by autohydrolysis-posthydrolysis, detoxified with activated charcoal and concentrated, was fermented to xylitol by d. hansenii. the fermented broth composition was 64.9% xylitol (68 g/l), 13.3% other sugars and 21.8% other compounds that interfere with the crystallization process (as dry matter of the liquor). the fermented medium was subjected to an adsorption process with activated charcoal and concentrated to a xylitol concentration of 340 g/l. the liquor was then subjected to a precipitation step with ethanol; the best results in this study were achieved with an ethanol/liquor ratio of 4.
under these conditions, this treatment removed 56.9% of the impurities. the resulting solution, containing 60% ethanol and a xylitol concentration of 443 g/l, was evaporated and crystallized. crystallization was performed at t = 5 °c with slight agitation. after 36 h, xylitol crystals were separated with a recovery yield of 13% and a purity of 90%.

numerous publications have documented that only a minor fraction of the indigenous prokaryotic organisms found in complex environments such as the human intestine, biogas reactors, and soil are known, and probably only a fraction of this diversity can be accessed using traditional culturing techniques. some of the reasons for this are the lack of knowledge of specific growth conditions, specific nutrients, and obligate coculture requirements. growth on a solid surface directly exposed to the atmosphere also puts a very strong selective pressure on single cells expected to develop into visible colonies. therefore, knowledge of these microorganisms is scarce and generally limited to the 16s rrna genes that have been extracted from different environments and cloned for phylogenetic analyses. an obvious approach to circumvent these problems was the development of techniques based upon micromanipulation for the isolation of single cells from complex mixtures. the continuous development of modern microscopes, in combination with the precision of servo-powered micromanipulators and the microinjectors used in ivf techniques, has further aided the manipulation of single cells. this technique, however, does not solve the problem of non-culturable cells, and other approaches are needed to gain more information about these organisms. one approach to the non-culturable cells could be genomic analysis of isolated single cells without preceding cultivation.
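the impurity-removal figure reported for the ethanol precipitation step above implies a dry-matter purity that can be estimated directly. a sketch, under the assumption (not stated in the abstract) that xylitol losses in that step are negligible:

```python
# Dry-matter purity of the liquor before and after ethanol precipitation.
# Assumption: the 56.9% impurity removal does not remove xylitol itself.
xylitol_frac = 0.649        # xylitol fraction of dry matter in the broth
impurity_frac = 1 - 0.649   # other sugars + other compounds (0.351)
impurity_removed = 0.569    # fraction of impurities removed by ethanol

impurities_left = impurity_frac * (1 - impurity_removed)
purity_after = xylitol_frac / (xylitol_frac + impurities_left)
print(f"purity after precipitation: {purity_after:.1%}")  # ~81%
```

the estimated ~81% purity after precipitation is consistent with the final 90% purity reached only after the subsequent crystallization step.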
this pcr-based technique is widely used for genetic analysis of human cells, but due to the small amount of dna present in prokaryotic cells it has so far not been possible to produce identifiable amounts of dna from single-cell amplification using conventional polymerases. a promising alternative for the amplification of small amounts of dna is the phi29 (φ29) dna polymerase, operating under isothermal conditions. applying random hexamer primers, this polymerase carries out a multiple displacement amplification (mda) of a high-molecular-weight dna template. in this study we demonstrate the successful application of mda for the selective amplification of genomic dna from a single prokaryotic cell. the yield was >20 µg of amplified genomic dna, corresponding to about a 5 billion-fold amplification from a single cell. the technique was used to approach a large group of non-thermophilic archaea found in agricultural soil. our results show that combining mda with fluorescence in situ hybridization and cell isolation by capillary micromanipulation enables an unprecedented ability to investigate new species without cultivation. this combination of techniques also opens the way for studies of genetic heterogeneity within populations and of processes such as horizontal gene transfer.

precipitation of zn2+, cu2+ and pb2+ at bench scale using biogenic hydrogen sulphide produced from the utilization of volatile fatty acids by sulphate-reducing bacteria. maria teresa alvarez 1,2, carla crespo 2, bo mattiasson 1: 1 department of biotechnology, center for chemistry and chemical engineering, lund university, p.o. box 124, s-22100 lund, sweden; 2 instituto de investigaciones fármaco bioquímicas, universidad mayor de san andrés, la paz, bolivia. biological production of hydrogen sulphide (h2s) from sulphate using sulphate-reducing bacteria (srb) is popular within environmental biotechnology.
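the amplification factor quoted for mda above can be sanity-checked from the mass of a single prokaryotic genome. a rough sketch; the genome size and the 650 g/mol average mass per base pair are generic assumptions, not values from the abstract:

```python
# Order-of-magnitude check: fold-amplification implied by a 20 ug MDA
# yield from one cell. Assumptions: ~4.6 Mb genome (typical prokaryote)
# and ~650 g/mol per base pair of double-stranded DNA.
AVOGADRO = 6.022e23
bp_mass_g = 650 / AVOGADRO           # ~1.08e-21 g per base pair
genome_bp = 4.6e6                    # assumed genome size, bp
genome_mass_g = genome_bp * bp_mass_g  # ~5 fg per genome copy

yield_g = 20e-6                      # reported MDA yield, 20 ug
fold = yield_g / genome_mass_g       # implied amplification, ~4e9-fold
```

a single genome copy weighs on the order of 5 fg, so a 20 µg yield is indeed a several-billion-fold amplification, matching the figure in the abstract.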
srb require the absence of oxygen, the presence of nutrients required for growth, and oxidizable organic substrates (to supply hydrogen atoms for the reduction of sulphate). many organic wastes have been used as electron donors for the sulphate reducers in the treatment of acid mine drainage (amd), including straw, hay, sawdust, peat, spent mushroom compost and whey; however, other wastes such as municipal organic waste can also be used. the aim of this work was to study the possibility of using srb for the treatment of amd at bench scale. the process involved three stages: production of volatile fatty acids (vfas) by hydrolytic bacteria from the degradation of vegetables and fruits; production of h2s through the utilization of the produced vfas by sulphate-reducing bacteria; and precipitation of metals using the biologically produced h2s. the substrates used for vfa production consisted of tomato, papaya, apple and banana. the h2s produced from the degradation of vfas was utilised for the precipitation of metals from an artificial effluent simulating the heavy metal concentrations of a mine located in the bolivian andean region, containing approximately 9 mg/l zn2+, 8 mg/l cu2+ and 4 mg/l pb2+. the maximum concentration of hydrogen sulphide obtained was approximately 17 mm. removal efficiencies of 97%, 98% and 100% for zinc, copper and lead, respectively, were achieved in the present work.

cultures were incubated at 30 °c for 14 days. the bacterial population was determined by counting in a neubauer chamber with an optical microscope. the sulfate concentration was measured by the turbidity method, and metal concentrations in the filtered supernatant were measured by icp-aes. the first part of the study consisted of determining the maximum concentration of each metal at which d. vulgaris and desulfovibrio sp. grow similarly to the control culture (without metal). both cultures tolerated cr(iii) at 15 ppm, ni(ii) at 8.5 ppm and zn(ii) at 20 ppm.
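the ~17 mm of biogenic sulphide reported above can be compared with the stoichiometric demand of the simulated effluent. a sketch assuming 1:1 me2+:s2− precipitation as zns, cus and pbs (the excess factor is derived here, not quoted from the abstract):

```python
# Sulphide demand for precipitating the simulated AMD effluent as metal
# sulphides (ZnS, CuS, PbS), assuming 1:1 stoichiometry for each metal.
# (metal concentration in mg/L, molar mass in g/mol)
metals = {"Zn": (9.0, 65.38), "Cu": (8.0, 63.55), "Pb": (4.0, 207.2)}

demand_mM = sum(mg / mw for mg, mw in metals.values())  # total S2- needed
excess = 17.0 / demand_mM  # vs. ~17 mM biogenic sulphide available
print(f"sulphide demand ~{demand_mM:.2f} mM; ~{excess:.0f}-fold excess")
```

the demand is well under 1 mm, so the biogenic sulphide is present in a large excess, which is consistent with the near-complete metal removal reported.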
the maximum precipitation percentages were approximately 25% (15 ppm cr(iii)), 96% (8.5 ppm ni(ii)) and 99% (for d. vulgaris at 10 ppm zn(ii) and desulfovibrio sp. at 15 ppm zn(ii)). the time to reach the highest precipitation was shorter for the mixed culture (desulfovibrio sp.) in all cases. the next part focused on the precipitation percentage when metals were present in combination at the same levels (cr(iii)-ni(ii), cr(iii)-zn(ii), ni(ii)-zn(ii) and cr(iii)-ni(ii)-zn(ii)). the combination of metals did not significantly affect bacterial growth or the precipitation percentages. this represents an important advantage, since metals are commonly found together in the environment. future experiments will focus on developing this process in continuous operation mode.

biosolubilisation and depolymerisation of coal have the potential to produce a clean energy source or high-value organic products from low-rank coals such as lignite or sub-bituminous coal. the complex soluble phenolic compounds produced are of value as starting materials for biotransformation to value-added compounds such as antioxidants and flavourants. the bioprocess is carried out at ambient temperature and pressure and is perceived to be environmentally benign. in the evaluation of coal solubilisation, an important quantity for the assessment of process feasibility is the yield, i.e. the mass of product obtained per unit mass of coal solubilised. to date, results for coal biosolubilisation reported in the literature are qualitative or at best semi-quantitative, indicating trends with operating variables. the process kinetics have not been determined rigorously, because measurement of fungal growth during coal solubilisation is hindered by the presence of the solid coal substrate. knowledge of the profile of biomass growth is required for the rigorous determination of the kinetic parameters necessary for process design and optimisation.
in this paper, an indirect method for estimating the growth and metabolism of fungal biomass during coal solubilisation, by measuring co2 evolution and o2 consumption with an off-gas analyser, is reported. biosolubilisation was carried out in a stirred tank slurry bioreactor with a working volume of 1.0 l. complete suspension of the coal particles of 650-800 µm mean diameter was achieved at an agitation rate of 560 rpm. growth yield coefficients based on coal and oxygen, as well as maintenance coefficients, were calculated from growth of the fungus under the same conditions using a non-coal carbon source such as glucose. these data were used to determine the stoichiometric coefficients for biomass growth, enabling the biomass production rate to be quantified in terms of the co2 production rate and o2 consumption rate.

a dna-chip platform for parallel detection of microorganisms related to biofilm in industrial systems and drinking water systems. pernille skouboe, dorte lauritsen, kim holmstrøm, bioneer a/s, kogle allé 2, dk-2970 hørsholm, denmark. e-mail: psk@bioneer.dk (p. skouboe). an oligonucleotide microarray for simultaneous detection and identification of pathogenic bacteria related to technical water systems as well as drinking water has been developed. the approach is based on the use of a tandem hybridization technique with two ribosomal 16s rdna pcr products, 1000 bp and 500 bp long, generated from two consensus pcr reactions using conserved ribosomal primers end-labeled with cy3 and cy5, respectively.
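the idea in the coal biosolubilisation study above, quantifying biomass production from measured co2 and o2 off-gas rates, can be sketched with combined carbon and degree-of-reduction balances. the rates and degrees of reduction below are illustrative assumptions, not the study's fitted values:

```python
# Biomass production rate from off-gas data via two balances
# (all quantities in C-mol or mol per unit time):
#   carbon balance:  r_s = r_x + r_co2
#   redox balance:   gamma_s * r_s = gamma_x * r_x + 4 * r_o2
# Generalised degrees of reduction per C-mol are assumed:
# glucose gamma_s = 4.0, biomass gamma_x ~ 4.2 (typical textbook values).

def biomass_rate(r_co2, r_o2, gamma_s=4.0, gamma_x=4.2):
    """C-mol biomass formed per unit time, solved from the two balances."""
    return (4.0 * r_o2 - gamma_s * r_co2) / (gamma_s - gamma_x)

r_x = biomass_rate(r_co2=10.0, r_o2=9.8)  # illustrative rates, mmol/h
print(f"biomass production rate: {r_x:.2f} C-mmol/h")
```

with these numbers the two measured gas rates pin down both the substrate uptake and the biomass formation rate, which is the substance of the off-gas approach; maintenance terms, included in the study, are omitted here for brevity.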
the tandem hybridization technique provides an internal quality control for discriminating between target and non-target signals. the current prototype of the dna-chip platform includes 20 oligonucleotide probes representing 11 different genera (and subgroups of species), e.g. legionella, mycobacterium, aeromonas, campylobacter, vibrio and enterococcus. the platform has been used for detection and identification of species from pure cultures, and initial experiments with water samples from industrial systems have been performed. the potential as well as the limitations of using a dna-chip based detection format in its present form will be documented. in particular, its potential application as a rapid method for initial screening of environmental or food samples will be addressed. the aim is to reduce and optimize the number of samples required for traditional microbiological identification tests.

in the course of a project for the development of a novel kind of mycotoxin-inactivating feed additive, the aim of this study was to isolate and characterize microorganisms with the specific ability to enzymatically break down and detoxify fumonisins, a group of structurally related fungal toxins, with fumonisin b1 (fb1) being the most abundant and, with respect to toxicology, also the most important representative of this group. these toxins are produced as secondary metabolites by some fusarium species, such as fusarium verticillioides and f. proliferatum, and are naturally occurring contaminants of cereal grains worldwide. they are found especially in maize and maize-based products, and are known to be hazardous to human as well as animal health. a natural feed additive, based on microorganisms and/or enzymes, should ensure the detoxification of fumonisins during feed uptake and digestion via microbial or enzymatic breakdown of these compounds, thereby protecting the animal from the harmful effects of these mycotoxins.
besides an extensive screening of microbial strains derived from strain collections, various natural habitats were investigated for the presence of fb1-degrading microbial activity, such as intestinal contents of pigs, soil samples, and naturally fumonisin-contaminated maize. while the testing of nearly 150 organisms from strain collections did not show positive results, fumonisin-transforming activity could be detected in one soil sample and a number of maize samples. trials to isolate the respective fumonisin-degrading microorganisms yielded a number of strains whose fb1-degrading activity could be proven. the most promising bacterial and yeast strains were further characterized with regard to a general taxonomic description and to different aspects of their toxin degradation behaviour. approaching a more relevant in vivo situation, fb1 degradation trials in food- and feedstuffs were conducted. further, the applicability of the respective organisms as stabilized lyophilisates was investigated.

arsenic is one of the most important global environmental pollutants, and its toxicological effects are related to its chemical form and oxidation state. arsenite [as(iii)] is reported to be on average 100 times more toxic than arsenate [as(v)]. this work shows the ability of one strain of the species ochrobactrum tritici to grow in the presence of several oxyanions, including arsenite, arsenate, selenite, selenate, tellurite and antimonite. its arsenite mic was determined as 50 mm, whereas for arsenate this bacterium could resist concentrations higher than 200 mm. we report the identification of two loci involved in high-level arsenic resistance. sequencing of the first locus identified four complete genes in the following order: arsr, arsd, arsa, arsb. the second locus containing genes for arsenic resistance was also characterized.
each sequence has been compared with nucleotide and protein databanks (blast programs) and significant homology with known orfs coding for arsenic resistance has been found. it is also possible that the phenomenon of high-level arsenic resistance in o. tritici could involve other genes or loci. the ability of the α-proteobacterium o. tritici to tolerate high levels of arsenic in addition to other oxyanions has considerable potential for detoxification and bioremediation of contaminated environments. this work is based on the mathematical modeling of the kinetics of a thermophilic bacterial cultivation system. the cultivation was run in batch and continuous mode on a synthetic medium with lactose as the main carbon source. this medium approximately simulated an industrial waste. a mixed thermophilic aerobic bacterial population, applied in wastewater treatment (sludge v&k bystrice pod hostynem), was used for inoculation. the cultivation system consisted of the laboratory fermentor biostat b (b. braun biotech) with a working volume of 2 l and a control unit connected to a computer, allowing regulation of temperature, ph, aeration, stirring and foaming. physical and chemical cultivation conditions were optimized. the chemical oxygen demand (cod), generally expressing the impurity level, was selected as the main parameter for classifying the cultivation runs. however, the mathematical model also included the kinetics of biomass growth, lactose consumption, production of selected metabolites (acetate, lactate, succinate) and dissolved oxygen concentration. the modeling addressed two clearly distinguishable growth phases of the microorganisms. optimization and identification of the mathematical model parameters were performed in the software language psi/c. the difference between the simulated curves and the experimental data is not statistically significant at the 0.05 significance level (f-test).
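The kinetic model itself is not reproduced in the abstract above; a minimal sketch of the kind of batch model it describes (Monod growth with substrate consumption) is given below. All parameter values are illustrative assumptions, not fitted values from the study.

```python
# Minimal sketch of a Monod batch-growth model of the kind used to describe
# biomass growth and lactose consumption in cultivation kinetics.
# Parameter values are illustrative assumptions only.

def simulate_batch(mu_max=0.5, Ks=0.2, Yxs=0.4, X0=0.1, S0=10.0,
                   dt=0.01, t_end=24.0):
    """Forward-Euler integration of:
        dX/dt =  mu_max * S / (Ks + S) * X    (biomass growth)
        dS/dt = -(1 / Yxs) * mu * X           (substrate consumption)
    Returns final biomass X and residual substrate S (g/l).
    """
    X, S = X0, S0
    t = 0.0
    while t < t_end:
        mu = mu_max * S / (Ks + S)
        dX = mu * X * dt
        dS = -(mu * X / Yxs) * dt
        X += dX
        S = max(S + dS, 0.0)  # substrate cannot go negative
        t += dt
    return X, S

if __name__ == "__main__":
    X_final, S_final = simulate_batch()
    print(f"final biomass: {X_final:.2f} g/l, "
          f"residual substrate: {S_final:.2f} g/l")
```

Fitting such a model to measured COD, biomass and metabolite curves, and testing the residuals with an f-test, is the general workflow the abstract describes; the actual study used a two-phase model implemented in psi/c.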
cod degradation of 74.0% was achieved, with an average yield coefficient y_cod/x of 3.8 g cod/g biomass, in the batch process with air aeration only. cod elimination was better (>90.0%) when the air was enriched with pure oxygen. in experiments with the continuous system, a 69.0% cod decrease was obtained after stabilization of the steady state. this work was supported by project msm 0021630501. fluorescence in situ hybridization (fish) of whole cells using oligonucleotide probes was applied to study the influence of low temperature and temperature reduction on the bacterial community of biofilm reactors for the removal of chlorophenols (cps). two packed bed reactors were set up for degradation of a mixture of 2-cp, 4-cp, 2,4-dicp, and 2,4,6-tricp as sole source of carbon and energy at 14 °c (ra) and 24 °c (rb) and were inoculated with bacterial consortia adapted to these respective initial temperatures. the performance of the reactors was studied under different conditions of pollutant loading, aeration rate, and hydraulic retention time over 7 months. total chlorophenol removal capacities of 1240 and 1420 mg l−1 day−1 were achieved in the bioreactors ra and rb, respectively, under a total pollutant load of 1440 mg l−1 day−1. the β-proteobacteria formed the major bacterial community of the biofilm (35-47%), followed by the γ-proteobacteria (12-6.5%). two bacteria with the ability to mineralize 50 mg chlorophenols l−1 were isolated from the bioreactors and characterized as ralstonia basilensis and alcaligenes sp., both belonging to the β-proteobacteria. decreasing the temperature by 10 °c (in two steps of 5 °c each) resulted in an increase in the population of γ-proteobacteria and a decrease in the population of β-proteobacteria in both reactors. application of genus specific probes showed an increase in the pseudomonas population from 25% of the γ-proteobacteria at 14 °c to 59% at 4 °c.
the pollutant removal capacity decreased to 548 and 833 mg l−1 day−1 in ra (4 °c) and rb (14 °c), respectively. the α- and δ-proteobacteria, cytophaga-flavobacteria and actinobacteria… the survey was carried out in urban and rural areas of two cities (ankara and isparta). this paper analyses only urban people, excluding villagers. the urban sample consisted of 400 urban consumers and 200 professionals. the professionals were selected amongst pharmacists, doctors, agricultural engineers and industrialists, who are thought to be influential in the process of developing new technologies and in the development of biotechnology in society. basic data were gathered by a questionnaire including both structured and open-ended questions, as well as in-depth interviews. workforce development for life sciences-the scottish experience carol booth scottish enterprise, uk the presentation will look at the background and definition of workforce development for scottish enterprise, examine information available to support scottish enterprise's economic intervention, and consider where and when to intervene. conclusions emerging from the evidence base will be used to outline scottish enterprise's approach to workforce development and to look at which actions might be required to address identified issues. it will then move on to reasons for integrating workforce development into business development, and how the life sciences cluster team at scottish enterprise, stakeholders and partners in scotland have approached their current and future contributions to workforce development for life sciences using a variety of projects. the national institute for bioprocessing research and training (nibrt) in ireland is a proposed state-of-the-art training, research and pilot plant service facility that brings together institutions with complementary expertise and state-of-the-art research technology, and industry partners.
these include university college dublin, trinity college dublin, institute of technology sligo and dublin city university. nibrt is an innovative collaboration between academic institutions at the forefront of biotechnology, cell biology, engineering and pharmacy, and industry. for training, two separate training labs, for upstream and downstream training, in addition to 5 research labs are planned. it will also include a state-of-the-art pilot plant fermentation facility for fermentation optimisation, fermentation scale-up, product separation and purification, regulatory aspects and automation. by aligning with industrial demands, the new institute will tailor its training programmes while remaining on the cutting edge of biotechnology research and technologies. the fermentation facility will offer hands-on training workshops and educational modules for outside researchers and companies. these workshops cover the fundamentals of small-scale fermentation, scale-up considerations, and fermentor design and set-up. the training and educational philosophy underpinning the nibrt will focus on the needs of industry, with an emphasis on providing training for accreditation of existing industry staff and on preparing technicians and graduates for the technical, business, regulatory and professional aspects of the industry. the strategy is to provide specialised modules in nibrt in support of courses established in the higher education institutions, which will provide the certificates, diplomas and degrees. modules will be offered for all categories of students and will carry credits recognised by other third level institutions in ireland. the role of professional graduate degrees in meeting current and future biotechnology industry workforce needs a. stephen dahms san diego state university, usa the presentation will review the status of new graduate training models designed to meet the unique needs of the biotechnology industry as it transitions to commercialization.
emphasis will be on professional master's degree programs in biotechnology and their various versions, with a focus on operational and funding strategies and industry acceptance. discussion will also centre upon the creation and operation of industry-validated, specialized and highly targeted professional master's degrees in various refined aspects of the drug development process, including regulatory affairs, biomedical quality systems, clinical affairs, management of drug development, management of reimbursement affairs, bioinformatics, etc. the eurodoctorate in biotechnology, new combined mba/phd degrees in the molecular life sciences, the u.s. professional doctorate in chemistry and the proposed u.s. professional doctorate in biotechnology will also be discussed. data will also be presented on the current workforce and the industry's projected needs. genetic studies show that mankind is a rapidly expanding population of closely related individuals with very similar disease sensitivity. bad nutrition and infections dominate among the main health problems in the world. apart from malnutrition, the overeating habits of the developed world are now creating problems in the developing world as well. infectious diseases are also a global problem, since new contagious agents like hiv, sars and avian flu do not recognize borders. thus, the global responsibilities of modern health care are obvious. many research scientists from and in developing countries find it nearly impossible to use their talents for the benefit of their own countries. some struggle to develop research and education programmes with poor facilities, some leave science completely, and others migrate to more developed countries. the talents of such people are either being wasted or lost completely to their home countries just at the time they are most needed to combat the great humanitarian challenges of hunger, illness and lack of knowledge.
europe must strengthen programmes which allow third world scientists to work to their full potential in their home countries or regions. yang beijing genomics institute, chinese academy of sciences, beijing, china europe has all the reasons to be proud of being the cradle of modern science and of its achievements and resources in life sciences. as a model of having solved many of the problems that many other countries are now facing, europe is expected by the whole world to make a further contribution to a better future for mankind and to play a more important role in the international community of life sciences. tropical diseases and public health basilio valladares director of the university institute of tropical diseases and public health, university of la laguna, 38200 la laguna, tenerife, spain. e-mail: bvallada@ull.es the presentation will look at the main research interests of the university institute of tropical diseases. these are the following: 1. immunology and molecular biology of parasites. we express and purify recombinant proteins from leishmania sp., such as l25, hsp70 and hsp83, which have been shown to act as immunomodulators and protect against disease. the study of acanthamoeba pathogenic factors has also resulted in the isolation and silencing of extracellular proteins related to their pathogenicity, which has great potential in the development of novel chemotherapeutics. 2. diagnosis of parasites. the immunological diagnosis of leishmaniasis has been one of the main research interests in our laboratory for several years. as a result, we have identified peptides which could be used to develop kits for the immunological diagnosis of leishmaniasis, such as the hsp70 c-end, l25 n-end, etc. we have also developed some dna based methods for the identification of acanthamoeba species from biological and environmental sources. 3. water quality. biological parameters.
our water research group has the expertise to identify and characterise bacterial, viral and parasitic indicators of faecal contamination in diverse water sources including tap water, rivers, reservoirs, the sea, etc. this research area has been developed in collaboration with the local sewage treatment plant and reservoir managing authorities. currently, we are establishing a joint project with the centers for disease control and prevention (cdc) in atlanta, usa for the identification of water-borne emerging pathogens. 4. development and formulation of chemotherapeutic antiparasitic agents. in this field, we evaluate the leishmanicidal activity, both in vivo and in vitro, of natural and synthetic drugs and synthetic peptides. at a later stage, the drugs which have shown the highest antiparasitic activity are subjected to cytotoxicity assays and their molecular targets dissected. some of the drugs tested in the last few years have been submitted for patenting due to their outstanding activity. finally, in order to allow the commercialization of these drugs, both in vivo and in vitro assays are being carried out to predict their chemical stability and degradation pathways. this will be followed by the use of lyophilization and controlled crystallisation strategies for the development of efficient and safe treatments. 5. human and population genetics. tachykinins and their receptors in different tissues and groups of patients, and their association with molecular polymorphisms, are another of our research interests. knowledge of the ligand and receptor sequences and their similarities will allow the rational development of drugs with specific activity against these receptors. furthermore, we are also interested in the interspecific variation of these markers along the evolutionary scale. 6. nitrate assimilation group. research in our group is focused on understanding nitrate assimilation in the yeast hansenula polymorpha.
several biotechnological companies use this yeast to produce heterologous proteins (hepatitis b vaccine). genetic manipulation techniques for h. polymorpha are available in our laboratory. head of the department of biotechnology, technological institute of canary islands, pozo izquierdo 35119 santa lucía, las palmas, spain single cell analysis by flow cytometry has proved to be a tool for performing simultaneous and rapid measurements related to cell morphology and physiological state. previous studies showed the possibility of quantifying neutral and polar lipids spectrofluorometrically using a lipid-specific fluorescent dye, nile red (nr); however, the existence of inter- and intraspecific variations in the fluorescence response had not been clearly established. in this work, two strains of marine microalgae, crypthecodinium cohnii and tetraselmis suecica, both characterized by high contents of polyunsaturated long-chain fatty acids (dha and epa, respectively), and a hypersaline microalga, dunaliella salina, characterized by high β-carotene production, were grown under different conditions and collected at different growth phases to be used for in vivo lipid quantification with nr by flow cytometry. our results showed a high correlation between the mean fluorescence signal of nr-stained cells and the neutral and polar lipid content measured by gravimetry for each strain. in this respect, these data make feasible the development of a rapid method for lipid quantification in monoalgal cultures. however, differences in dye uptake related to specificity were detected. in this communication we also assess the possibility of using this cytometric technique to select microalgal strains with high lipid and polyunsaturated long-chain fatty acid content from mixed samples.
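The calibration step described above, correlating mean Nile Red fluorescence per cell with gravimetrically measured lipid content, amounts to a least-squares fit. A minimal sketch follows; the data points are invented for illustration and do not come from the study.

```python
# Sketch of fitting mean Nile Red (NR) fluorescence against gravimetric
# lipid content for one strain. Data are illustrative only.
fluorescence = [120.0, 180.0, 260.0, 310.0, 400.0]  # mean NR signal (a.u.)
lipid = [5.1, 7.9, 11.2, 13.0, 17.3]                # lipid, % dry weight

def linfit(x, y):
    """Ordinary least squares: returns slope, intercept and Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

slope, intercept, r = linfit(fluorescence, lipid)
print(f"lipid ≈ {slope:.3f} * NR + {intercept:.2f},  r = {r:.3f}")
```

A high r for each strain is what makes the cytometric signal usable as a rapid proxy for gravimetry, while strain-to-strain differences in dye uptake mean the calibration cannot be transferred between species without checking.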
such a technique would be a good alternative to the time-consuming traditional screening protocols based on gravimetry and gas chromatography and would optimise the search for new commercial strains of microalgae. claverie-martín head of research unit, biomedical research institute, hospital universitario de n.s. de candelaria, santa cruz de tenerife, spain our group is involved in the cloning and production of proteins of interest to the food and pharmaceutical industry. we have recently expressed in yeast the cdna that encodes the precursor of caprine chymosin. chymosin is the enzyme responsible for the coagulation of milk in the abomasum of unweaned calves. this enzyme is secreted by gastric mucosa cells as an inactive precursor, known as prochymosin. in the acidic conditions of the lumen, prochymosin is converted into the active form by autocatalytic cleavage of the n-terminal prosequence. chymosin is used extensively in cheese production because it cleaves κ-casein in a specific manner with low proteolytic activity. several biotechnology companies are producing the bovine recombinant enzyme for commercial use in the process of cheese making. we are interested in caprine chymosin as an alternative because in the canary islands cheese has traditionally been made from goat milk, using extract from the abomasum of newborn goats as the coagulating factor. it is well known that the activity of these types of extracts varies depending on the age of the animal and the type of food ingested. these difficulties could be overcome using a recombinant caprine chymosin. the caprine mrna used for the synthesis of the cdna was obtained from the abomasum of milk-fed kid goats. the cdna fragment encoding the mature portion of caprine prochymosin was fused in frame to a signal sequence in yeast expression vectors. culture supernatants of yeast cells transformed with the recombinant plasmids showed milk-clotting activity after activation at acid ph.
proteolytic activity assayed toward casein fractions indicated that the recombinant caprine chymosin specifically hydrolysed κ-casein (patent 200402025). the recombinant caprine chymosin could be an alternative milk coagulant in cheese making. work is underway to optimise the expression of the new recombinant prochymosin for further purification and characterization. smart molecules for health victor martín head of research, university institute of bio-organics "antonio gonzález", avda. astrofísico francisco sánchez 2, 38206 la laguna, tenerife, spain the instituto universitario de bio-orgánica "antonio gonzález" (iubo) is a multidisciplinary research centre that belongs to the university of la laguna. the iubo is located in the town of la laguna, inscribed on unesco's world heritage list in 1999 and former capital of the canary island of tenerife. its geographical location, in addition to its mild climate, has given the canary islands a variety of ecosystems with unique plants and animals. the institute was started in the 1960s to study the natural products and secondary metabolites produced by those marine and terrestrial organisms, thus providing a new source of bioactive products. the main research lines being developed at iubo are summarised in the following paragraphs: anticancer agents from natural sources: several natural products and their semisynthetic derivatives are produced at the iubo in diverse joint projects for the development of new antitumour drugs with novel mechanisms of action. as examples we can mention natural products from the mevalonate, shikimate or polyketide pathways. some products have recently shown in vitro reversion of resistance in multidrug resistant (mdr) tumour cell lines. genetic engineering: in vitro cultures of the plants atropa baetica, maytenus amazonica and m.
macrocarpa are developed in order to manipulate their biosynthetic pathways and induce large-scale production of secondary metabolites for diverse applications, including arthritis, rheumatism and back pain. marine organisms and toxins: dinoflagellates are marine organisms responsible for red tides and food poisoning episodes. among others, okadaic acid and yessotoxin are the most common toxins present in european shellfish. the isolation of these products is best done from microorganism cultures, since they are present in very low amounts in the natural sources. at the iubo we develop culture systems to provide us with amounts of toxins large enough to perform biological, metabolic and bioactivity studies. insecticides and repellents: natural products are being isolated for use against plagues, especially those affecting agriculture. these projects are run in collaboration with a number of public institutions and agrochemical companies throughout europe and latin america. fine chemicals and pharmachemicals: our institute possesses extensive expertise in the field of organic synthesis devoted to the synthesis of medicinal substances, with special focus on asymmetric processes. of particular interest is the development of new methodologies for the total synthesis of biologically active substances like polyether toxins, (un)natural amino acids, sphingosine analogs, alkaloids, etc. with an annual output of hundreds of new compounds, the fine chemicals and medicinal chemistry branch at iubo has recently started an anticancer screening program in collaboration with the biomedical research unit at the hospital universitario n.s. de la candelaria. the program is committed to the discovery of novel drugs for application in cancer treatment. the outcome of this project in its first year has been outstanding, leading to the finding of several leads that form the basis for current and future projects.
institute of canary islands, biotechnological department, playa de pozo izquierdo s/n, 35119 santa lucía, gran canaria, spain the presentation will look at the main research interests of the biotechnological department of the technological institute of canary islands. these are the following: nutrition and feeding in aquaculture: we conduct studies on digestion, absorption, transport and utilization of the different nutrients, applying different techniques such as histology, enzymology, genetics and immunology, among others. feed is the most important cost in fish farming, higher than the price of fry, personnel costs or energy costs. thus, studies on the improvement of diet formulation and the use of different dietary ingredients are one of our main research lines (such as vegetable oils and meals to be used as alternatives to fish meal and oil, or carotenoid sources to improve fish colour). not only the use of the different ingredients is studied, but also the effect of these ingredients on fish health, flesh quality and health-related aspects of the flesh for human consumption. different formulae are being developed and patents for different diets are being obtained. studies on nutritional requirements are also conducted, allowing us to patent different formulae in larval studies, developing microdiets to substitute the high-cost processes associated with live prey in larval nutrition. development of immunostimulants, anti-stress diets and immune techniques to be applied as bio-indicators of fish health and welfare, as well as the use of dietary ingredients derived from bio-reactors, are other research lines in our group. genetics: genetic techniques applied to aquaculture are an important tool. microsatellites are being used to determine the genealogy of the fish, helping to decrease important problems in aquaculture such as fish deformities.
genetic techniques such as micro-arrays and gene expression analysis are being applied to obtain indicators of stress and health in fish. these technologies also make different selective breeding programs possible, increasing the accuracy of estimating genetic parameters and evaluating brood-stock. furthermore, this technology provides a procedure for increasing the productivity and quality of fish hatcheries. new species for aquaculture. development of new culture techniques: the diversification of cultured species is one of the main objectives of european aquaculture, since nowadays only four marine species are commercially produced: gilthead sea bream, european sea bass, turbot and salmon. we have developed rearing techniques for new species such as red porgy and the canarian abalone, and we are also conducting studies on different new species, such as different sparid species, octopus and yellowtail. new rearing technology: the selection of adequate systems for fish growth for each species and production site, and the localization of the most appropriate sites for farms using gis technology, are also of special importance in our research team. besides, new larval rearing techniques such as semi-extensive hatcheries (mesocosms) were developed and are nowadays being used to increase fry quality (survival, absence of deformities, better growth). this technology is being exported to other countries in order to offer new, easy-to-manage technology that can be implemented in developing countries. alejandro cañeque project officer, canary islands special zone, ministry of finance, c/leon y castillo 431 -4 a planta, edificio urbis, 35007 las palmas, spain. e-mail: acaneque@zec.org the canary islands special zone is the newest tax instrument within the canary islands economic and fiscal regime (ref). it offers a reduced corporate income tax rate of between 1 and 5% for companies setting up a business.
the companies must meet the following minimum requirements: create employment and make a minimum investment. the zec offers other tax advantages, such as exemption from transfer tax and stamp duty and from the canary islands general indirect tax (igic). this tax scheme particularly fosters the biotechnology and pharmaceutical sectors. the authors gratefully acknowledge the financial support given to this work by ankara university, biotechnology institute (project no. 2001k120240-41). the authors gratefully acknowledge the financial support given to this work by ankara university, biotechnology institute (project no. 2001k120240-40) and the microorganisms given to this work by prof. dr. francesco molinari at the university of milano and prof. dr. leyla açikel at gazi university.
mrs. lynnette fernandez, johanna mäkeläinen, m.sc. and olli rämö, m.sc. are gratefully acknowledged for their help. this work was supported by the health science council, the academy of finland and the tekes-neobio program. supported by the mec of spain (ppq2002-04361-c04-02) and the canary islands government. jmp thanks icic for a postdoctoral fellowship. frpc thanks cajacanarias for an fpi fellowship. tm thanks the spanish mcyt-fse for a ramón y cajal contract. this work was supported by the korean systems biology research grant from the ministry of science and technology, the lg chem chair professorship, the ibm sur program and the bk21 program. this study was supported by ankara university biotechnology institute (project no: 89). this work was partially supported by mec and feder (bio2004-00439) and fundación séneca carm (00842/pi/01). this work was supported by grant f037663 of the hungarian scientific research fund and grant omfb-00219/2002 of the hungarian ministry of education. keto sugars have long been implicated as attractive intermediates or substrates for further chemical or enzymatic reactions to generate a number of synthetic sugar derivatives and fine chemicals. the quinone-dependent pyranose dehydrogenase (pdh) purified from the basidiomycete fungus agaricus meleagris catalyzes with high specificity the oxidation of c-3 of glycosidically bound d-glucose, whereas it simultaneously oxidizes c-2 and c-3 of free d-glucose.
considering the broad substrate tolerance, pdh provides a convenient new tool for high-yield production of 3-keto sugars. this work was partially financed by the scientific and technological research council (cicyt, spain), grant ren2001-3224, by the iii pla de recerca de catalunya (generalitat de catalunya), grant 2001sgr-00143, and by the generalitat de catalunya through the "centre de referència en biotecnologia" (cerba). this work was supported by mega a.s. (czech republic) (www.mega.cz) and the following vega grants: 1/2391/05 and 1/2390/05. financed by a marie curie re-integration grant merg-ct-2004-006378 and the consejería de educación y ciencia de la junta de andalucía. this project is part of the collaborative research centre sfb 578 "development of biotechnological processes by integrating genetic and engineering methods", which is supported by the german research foundation dfg. this research was supported in part by grants from the hungarian scientific research fund (otka t37471, f46658 and d48537) and the hungarian-spanish intergovernmental s & t cooperation programme (omfb00103/2005). this research was supported in part by grants from the hungarian scientific research fund (otka t37471, f46658, d48537) and gvop-3.1.1.-2004-05-0471. the authors thank dr. hamid narjiss (director of morocco inra) for the instruction to identify nadorcott mandarin by molecular markers and helpful discussions regarding this paper. this work was partially funded by the program for scientific cooperation cnrst (morocco) and iccti (portugal), and the international foundation for science (ifs) in stockholm (sweden).
this work has been financed by xunta de galicia (pgidt03pxib30103pr). sevgil sadettin, gönül dönmez, department of biology, faculty of science, ankara university, 06100 beşevler, ankara, turkey. this study was supported by ankara university biotechnology institute. demet çetin 1, sedat dönmez 2, gönül dönmez 1: 1 department of biology, faculty of science, ankara university, 06100 beşevler, ankara, turkey; 2 department of food engineering, faculty of engineering, ankara university, 06110 dışkapı, ankara, turkey. sulfate-reducing bacteria (srb) that could grow on modified postgate c medium (pc) containing chromium(vi) were isolated from industrial wastewater, and their chromium(vi) reduction capacities were investigated as a function of changes in the initial ph value and the chromium, sulfate and nacl concentrations and the carbon source. the optimum ph value at a 50 mg l−1 initial chromium(vi) concentration was determined as 8. chromium(vi) reduction by srb was investigated at 22.7-98.4 mg l−1 initial chromium(vi) concentrations. at the end of the experiments, the mixed cultures of srb were found to reduce more than 99% of the initial chromium(vi) levels, which ranged from 22.7 to 74.9 mg l−1, within a 2-6 day period. the effects of initial sulfate concentrations of 0-9.0 g l−1 and nacl concentrations of 0-6% (w/v) on chromium reduction showed that the lowest concentrations of sulfate and nacl were the best for chromium reduction in pc media containing 50 mg l−1 chromium(vi). when 25% whey was used as carbon source in the pc medium, 99.6% of the 65.9 mg l−1 initial chromium(vi) concentration was reduced. this study was supported by ankara university biotechnology institute. this study was carried out as part of the project for analyzing and controlling the mechanism of biodegrading and processing entrusted by the new energy and industrial technology development organization (nedo). this work has been financed by xunta de galicia (pgidt03pxib30103pr). 
lead ions are considered a serious pollutant of different waters. in our previous work, seven potential sorbent strains were selected for sorption of lead ions: rhodotorula mucilaginosa 1776, rh. aurantiaca 1195, rhodotorula sp. 4, williopsis californica 248, candida krusei 61t, cryptococcus sp. wt and saccharomyces cerevisiae 1968. their stability to high concentrations (up to 750 mg/l) of lead ions in the medium was determined. the ph changes and growth physiologies of the yeasts were studied in medium containing these heavy metal ions. the influences of environmental factors such as the ph of the solution, the age of the microbial culture, the biosorbent concentration in suspension, the living or non-living state of the biosorbent, and the time of contact on sorption were investigated. the levels of maximal sorption ability and biomass affinity to heavy metal ions were established from experimentally obtained sorption isotherms with mathematical modeling of the biosorption process, separately for each yeast strain studied. the sorption isotherms obtained in these experiments for non-living yeast biomass showed that the maximal sorption capacity was 225 and 195 × 10−6 mol (g sorbent)−1 for rh. aurantiaca 1195 and s. cerevisiae 1968, respectively. in the case of living biomass, the highest sorption capacities were also determined. this work has been financed by the spanish ministry of science and technology and european feder (project ctm2004-01539). the authors wish to thank dra. m.j. martínez (cib, csic, madrid, spain) for providing coriolopsis rigida. this work was supported principally by embo and the mrc. 
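the abstract above reports maximal sorption capacities obtained from sorption isotherms with mathematical modeling. as a minimal illustrative sketch (not the authors' code), a langmuir isotherm can be fitted via its linearised form ce/q = ce/qmax + 1/(qmax·b); the equilibrium data below are synthetic, generated from an assumed affinity constant b, with only the capacity scale (~2.25 × 10⁻⁴ mol/g) and the 750 mg/l concentration range taken from the text.

```python
def langmuir(ce, qmax, b):
    """Equilibrium uptake q (mol/g) at liquid-phase concentration ce (mg/L)."""
    return qmax * b * ce / (1.0 + b * ce)

def fit_langmuir(ce_data, q_data):
    """Least-squares fit of the linearised Langmuir model.

    Regress y = ce/q against x = ce: slope = 1/qmax, intercept = 1/(qmax*b).
    """
    xs = ce_data
    ys = [c / q for c, q in zip(ce_data, q_data)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    qmax = 1.0 / slope
    b = slope / intercept
    return qmax, b

# synthetic check: generate points from known parameters and recover them
true_qmax, true_b = 225e-6, 0.05          # mol/g and L/mg (assumed units)
ce = [10, 50, 100, 250, 500, 750]         # mg/L, up to the tolerated 750 mg/L
q = [langmuir(c, true_qmax, true_b) for c in ce]
qmax_fit, b_fit = fit_langmuir(ce, q)
```

with noise-free synthetic data the linearised regression recovers the assumed parameters exactly, which makes it a convenient self-check before fitting real isotherm points.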
fed-batch cultivation of haematococcus pluvialis under illumination with leds for production of astaxanthin. abdolmajid lababpour, tomohisa katsuda, shigeo katoh, department of molecular science and material engineering, kobe university, kobe, hyogo 657-8501, japan. the photosynthetic microalga haematococcus pluvialis is one of the most promising microorganisms for production of astaxanthin, which has powerful antioxidant activity and is used both for humans, as a food supplement and in cosmetics, and in animal farming, e.g. for salmon and poultry. nutrient deficiency in batch cultures of h. pluvialis decreases cell growth while acting as an induction factor for astaxanthin accumulation. therefore, it is impossible to reach high cell concentrations in batch cultures. in previous experiments, medium replacement increased the cell concentration, while accumulation of astaxanthin was not induced without other factors. in this work, the effects of fed-batch addition of culture medium on cell growth and astaxanthin production in h. pluvialis cultures were studied. h. pluvialis was cultivated in 50 cm3 of culture medium containing sodium acetate, yeast extract, l-asparagine, mgcl2·6h2o, feso4·7h2o and cacl2·2h2o (ph 6.8). light was supplied by panels of blue or red led lamps. the temperature was kept at 20 °c and the culture was mixed with a magnetic stirrer. in fed-batch cultures of h. pluvialis, the cell concentration and production of astaxanthin increased in comparison with those in batch culture. in addition, operation in fed-batch manner is easier than medium replacement from an industrial viewpoint for production of astaxanthin. simvastatin and similar compounds are widely used as antihypercholesterolemic agents. simvastatin is obtained by c-methylation of the side chain of lovastatin. this process is not perfect and some unreacted lovastatin is present in the reaction mixture. 
simvastatin is separated from lovastatin using the fungus clonostachys compactiuscula. the use of living microbial cells has some disadvantages, such as high cost and difficult product separation from the reaction mixture. we work on overcoming these difficulties by separation and immobilization of the enzyme that hydrolyses lovastatin ammonium salt in the presence of simvastatin ammonium salt. our results on the purification and immobilization of lovastatin hydrolase will be presented. investigation of peptide antibiotics produced by trichoderma strains isolated from winter wheat rhizosphere. a. szekeres 1, l. kredics 2, l. hatvani 1, z. antal 2, l. manczinger 1, a. nagy 3, c. vágvölgyi 1: 1 department of microbiology, university of szeged, p.o. box 533, h-6701 szeged, hungary; 2 hungarian academy of sciences, university of szeged, microbiological research group, hungary; 3 pilze-nagy ltd., kecskemét, p.o. box 407. species of the imperfect filamentous fungal genus trichoderma, with teleomorphs belonging to the hypocreales order of the ascomycota division, are of great economic importance as sources of enzymes and antibiotics, as plant growth promoters, as decomposers of xenobiotics, and as commercial biofungicides. peptaibols and related peptaibiotics (prps) are secondary metabolites constituting a family of fungal peptide antibiotics which has been growing constantly since alamethicin was isolated from cultures of trichoderma viride. these compounds are linear, amphipathic polypeptides composed of 5-20 amino acids, usually containing several non-proteinogenic amino acid residues, which represent characteristic building blocks of the structure. one hundred and twenty trichoderma strains were isolated from roots of winter wheat grown in agricultural fields of southern hungary. the identity of species was examined based on morphological and molecular characters. 
the presence of prp-producing strains among the isolated trichoderma strains was detected by biological tests, and the antibiotics were partially purified using a multistep chromatography procedure involving exclusion chromatography, adsorption chromatography and thin-layer chromatography. about 20% of the isolates proved to be able to produce prps. the antibacterial activity of the compounds was tested against staphylococcus aureus, bacillus subtilis, micrococcus luteus and escherichia coli, while the antifungal effect was recorded against fusarium oxysporum, f. culmorum, rhizoctonia solani and pythium debaryanum. glucansucrases from family 70 of glycoside hydrolases are transglucosidases that produce α-glucans from sucrose, a very cheap substrate, without any use of nucleotide-activated sugars. based on sequence analyses, these enzymes have been classified in two families, family 70 and family 13 of glycoside hydrolases. among the natural diversity existing in family 70, in which the glucansucrases produced by lactic acid bacteria are found, three enzymes have been selected for their distinctive specificities: dextransucrase from l. mesenteroides nrrl b-512f (dsr-s), which catalyses almost exclusively the synthesis of α-1,6-linkages; alternansucrase from l. mesenteroides nrrl b-1355 (asr), which produces alternan polymer formed of α-1,6- and α-1,3-alternated linkages; and finally dextransucrase from l. mesenteroides nrrl b-1299 (dsr-e), which is responsible for the synthesis of a branched dextran composed of about 70% α-1,6-linkages in the main chain and 30% α-1,2-branched linkages. for all these enzymes, the natural polymerase activity can be shifted towards oligosaccharide production or gluco-conjugate syntheses by introducing acceptors in the reaction medium. 
a number of sugar acceptors have been successfully glucosylated with the view of developing new functional food products. the acceptor glucosylation yield as well as the structures of the acceptor reaction products were shown to be highly dependent on the enzyme specificity. consequently, using glucansucrases of distinctive specificities and varying the acceptors gives access to a large variety of applications. amylosucrase, the sole glucansucrase found in family 13 of glycoside hydrolases, is also of great interest for functional food applications. this enzyme is able to synthesize highly resistant amylose from sucrose. again, the reaction conditions can be used to modulate the yield and the size of the amylose. the aim of our work is to further develop the applications of these enzymes via rational and combinatorial engineering. the most recent results obtained in this field will be discussed. novel food structure engineering concepts with enzymes. johanna buchert, vtt biotechnology, espoo, finland. food structure is a very important quality attribute in food choice, since it affects not only the sensory perception of texture, but also the release of flavour. enzymes offer specific means to engineer food structure by creating cross-links in food biopolymers, i.e. in proteins and/or carbohydrates. enzymatic cross-linking of food biopolymers can be exploited to create novel types of food structures without any need for added food ingredients. laccases and peroxidases can be used to crosslink ferulic acid-containing carbohydrates, such as sugar beet pectin or arabinoxylan. proteins can be crosslinked by different oxidative or transferase-type enzymes. transglutaminases can crosslink proteins via formation of isopeptide bonds between glutamine and lysine residues. laccase and peroxidase can oxidize tyrosine residues to the corresponding radicals, which in turn can further react with different groups in proteins. 
tyrosinases, on the other hand, oxidize tyrosine to a quinone, which can further react with aromatic ring, amine and thiol groups present in proteins. the biopolymer networks formed can be further engineered by combining adequate processing with the enzyme treatment. in this work the potential of enzymatic food structure engineering is reviewed. asparaginase-mediated reduction of acrylamide formation in baked, fried, and roasted products. hanne vang hendriksen, beate kornbrust, steffen ernst, mary stringer, hans peter heldt-hansen, peter østergaard, novozymes a/s, dk-2880 bagsvaerd, denmark. e-mail: hvhe@novozymes.com (h.v. hendriksen). in 2002, it was discovered that acrylamide is formed in several potato- and grain-based foods (e.g. chips, french fries, toasted bread, biscuits, cereals) and in coffee, all of which have been prepared at high temperatures. the level of this potential carcinogen in the final food appears to range from 50 to 4000 ppb. later that year, the mechanism of acrylamide formation was unraveled, demonstrating that asparagine and reducing sugars are the precursors for acrylamide. this pointed to several potential enzymatic approaches to remove the root cause of the problem by degrading the precursors in situ. here, we demonstrate that asparaginase treatment leads to a more efficient reduction in acrylamide than alternative enzymatic treatments. asparaginase from aspergillus oryzae is used to reduce acrylamide formation significantly in laboratory models of a range of common food products. examples are french fries, biscuits, crisp bread, and fabricated chips. the sensory qualities appear to be unchanged. the implications for scaling up the processes for industrial food production are discussed. 
effect of cultivation conditions on folate content in yeast: exploring the potential of yeast as a bio-enrichment vehicle for folate in foods. sofia hjortmo 1, johan patring 2, jelena jastrebova 2, thomas andlid 1: 1 department of chemical and biological engineering, chalmers university of technology, po box 5401, 402 29 gothenburg, sweden; 2 department of food science, swedish university of agricultural sciences, po box 7051, 750 07 uppsala, sweden. e-mail: sh@fsc.chalmers.se (s. hjortmo). over the past 10 years, the interest in the health benefits of the b vitamin folate has increased considerably. a good folate status may hinder the progression of several diseases, such as neural tube defects and down's syndrome in the foetus, as well as cancer, dementia, alzheimer's disease and cardiovascular disease in adults. it is, however, not easy to reach the recommended intake, and new strategies have to be developed to improve folate status. in this project we explore the use of folate-producing microorganisms for this purpose. many yeasts have the ability to synthesise folate de novo and can thus serve as a source for humans. folate enrichment in fermented foods could be much improved by using starter cultures better at producing folate than traditional strains. this is, e.g., applicable to bread making. fb32 over-expression of isoprene biosynthetic enzymes in the β-carotene producer zygomycete mucor circinelloides. tamás. mucor circinelloides has been used to study the carotene biosynthesis of fungi. this fungus is more amenable to molecular techniques than the others traditionally used in carotenogenic studies (e.g. blakeslea trispora and phycomyces blakesleeanus). moreover, mucor has a great advantage: it is a dimorphic organism. this type of morphology is preferred by the fermentation industry, because yeast-like growth allows submerged culture, in which usually higher biomass production can be achieved and cells can be more easily separated from the media. 
β-carotene is a terpenoid-type compound, like sterols, quinones or chlorophylls. its production can be increased by improving the non-carotene-specific terpenoid biosynthesis. this can be carried out by overexpression of the genes responsible for the rate-limiting steps of these pathways. in this study, polyethylene glycol-mediated transformations of m. circinelloides protoplasts were performed with autoreplicative expression vectors containing known terpenoid genes of m. circinelloides (e.g. isoa, encoding farnesyl pyrophosphate synthase, and carg, encoding geranylgeranyl pyrophosphate synthase). carotene production of the transformants and the wild-type strains was analysed by high-performance liquid chromatography (hplc). transformants harbouring plasmids with isoa or carg produce about 1.5 times more carotene than the recipient strain, while carotene production increased about two times in the co-transformants containing both types of plasmids. members of the genus rhizopus are important from biotechnological aspects owing to their effective production of extracellular enzymes, alcohols and organic acids. moreover, rhizopus strains are used for fermentation of various foods, because they are capable of transforming soybeans into edible products. the high-affinity iron permease gene (ftr1) contains both highly conserved and variable regions applicable for phylogenetic comparisons. the aim of this study was the comparative analysis of this gene in different rhizopus species in order to elaborate a simple and fast method to identify these fungi at the species and subspecies level. conserved regions of the candida albicans and rhizopus oryzae ftr1 genes (fu et al., 2004) have been analysed to design degenerate primers for polymerase chain reaction. they were used to amplify the homologous regions from different strains of r. oryzae, r. microsporus, r. stolonifer and r. niveus. isolates of the similarly thermophilic rhizomucor miehei and r. 
pusillus, as well as a strain of m. rouxii, were involved in the study as outgroups. deduced protein sequences were aligned and phylogenetic analysis was performed. surprisingly, the r. oryzae isolates formed a completely separate group, at a significant distance from the r. microsporus isolate. r. niveus is currently not distinguished from r. stolonifer var. stolonifer because of morphological considerations. however, the phylogeny of the ftr1 gene sequences, in agreement with earlier results based on rapd data (vágvölgyi et al., 2004), raises the need to handle r. niveus as a separate species. sequences and pcr primers useful for identification of all tested rhizopus strains were elaborated. potential application in a lactose conversion process: the prebiotics market is in high demand; therefore, the development of a process to produce 'prebiotic' galacto-oligosaccharides efficiently and inexpensively is of particular interest to us. citrus, particularly mandarins and clementines, are among the most economically important fruit crops in morocco. besides morphological traits, multiple molecular markers have been used for the characterisation of citrus germplasm. the main aim of this study was to evaluate the moroccan mandarin germplasm and to identify specific polymorphisms among accessions sharing identical names. eighty mandarin and two sweet orange varieties were analyzed by dna markers. issr markers were amplified using 3 anchored primers and analysed by agarose gel electrophoresis. aflp marker analyses were performed using three primer combinations. the dice coefficient was used to estimate genetic similarities and the upgma algorithm was utilised to generate a phenogram depicting the genetic relationships among the accessions. the selection of 9 primers out of the 31 issr primers primarily assayed allowed us to maximize the average number of amplified fragments analyzed per reaction (7.3) and the percentage of informative polymorphisms (49%). 
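the dice coefficient used above for marker-based genetic similarity can be sketched as follows; the band-presence profiles below are hypothetical, for illustration only, and are not data from the study.

```python
def dice(p1, p2):
    """Dice similarity 2a/(2a+b+c) for binary band-presence profiles.

    a = bands present in both accessions,
    b = bands present only in the first, c = only in the second.
    """
    a = sum(1 for x, y in zip(p1, p2) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(p1, p2) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(p1, p2) if x == 0 and y == 1)
    return 2 * a / (2 * a + b + c)

# hypothetical profiles over 10 scored marker bands (1 = band present)
acc1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
acc2 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
s = dice(acc1, acc2)   # shared bands dominate, so s is close to 1
```

a pairwise matrix of such coefficients is what a upgma clustering would then turn into a phenogram.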
the three combinations of ecori/msei primers revealed 73 reliable aflp markers, 35 (47%) of which were polymorphic. the range of fragment sizes varied from 100 to 650 bp. contrasting with the phenotypic diversity for agronomic and fruit quality traits, very low variability at the dna level has been found among mandarins, which always showed a high (s > 0.81) coefficient of genetic similarity. the molecular marker analyses allowed the clarification of ambiguous denominations and the establishment of phenetic relationships. the mandarin cultivars have been clustered into several different subgroups. this study allowed the identification of one issr marker, distinct and specific for the clementine sidi aissa, and some aflp markers specific to maroc late and w. navel. many hybrids used in this study presented a high coefficient of similarity with one of their parents, such as siamelo and king of siam (s = 0.92), fortuna and clementine (s = 0.93), kara and king of siam (s = 0.92). this work was partially funded by the program for scientific cooperation cnrst (morocco) and iccti (portugal), and the international foundation for science (ifs) in stockholm (sweden). the new moroccan mandarin variety nadorcott has become very important in the international market because of its high quality, good size, easy peeling and absence of seeds. also known as afourer or w. murcott, this variety was selected in 1981-1982 at the afourer experimental station, inra, located near beni mellal city, as an original tree among several 18-year-old murcott honey (c. reticulata × c. sinensis) seedling trees. in order to shed additional light on the genetic origin of this variety, we have carried out isozyme and dna fingerprinting analyses. for better interpretation of the nadorcott molecular profiles, other mandarin cultivars, among which murcott honey, were also analyzed by molecular markers. 
three enzymatic systems (idh, pgm and pgi) permitted the discrimination between nadorcott and its female parent murcott honey. the molecular patterns displayed by these cultivars point out the sexual origin of nadorcott and discard the previously assumed hypothesis of its origin as a mutation of a nucellar zygote. the issr and rapd marker analyses allowed the identification of kinnow; du japon, vietnam; and sweet lime as the genetically most closely related mandarins (s ∼ 0.95) to nadorcott. strong genetic similarity was also found with the clementine group, a possible male parent of nadorcott. the analysis by aflp markers confirmed the hybrid origin of nadorcott and the high genetic relatedness (s = 0.93) of this mandarin to its putative female parent murcott honey, and to other cultivars such as kinnow (s = 0.94) and clementine (s = 0.93). the possibility of an accurate molecular identification of nadorcott by specific molecular markers is of paramount importance for the protection and management of this original moroccan citrus variety. an effective process for the chemical-biotechnological utilization of distilled white lees was studied. a first treatment with hydrochloric acid allowed the solubilisation of tartaric acid. the influence of temperature, amount of hcl and reaction time was considered through an experimental design. under the optimal conditions, 77 g/l of tartaric acid from white distilled lees and 45.6 g/l from red distilled lees were recovered. the tartaric acid was precipitated as calcium tartrate so that it could be isolated from the rest of the raw material compounds. the solid residue was used as an economic nutrient for lactic acid production by lactobacillus pentosus using trimming wastes as substrate. the lactic acid concentrations and volumetric productivities achieved were similar to those obtained using distilled lees without tartaric acid recovery as nutrient. 
thermophilic cyanobacterial strains that could grow in bg 11 media were isolated from hot springs, and their reactive dye bioaccumulation was studied under thermophilic conditions in a batch system, in order to determine the optimal conditions required for the highest dye accumulation. in the experiments performed with the newly isolated synechocystis sp. and phormidium sp., the optimum ph value at about 25 mg l−1 initial reactive dye concentration was determined as 8. lipases are extremely versatile enzymes that catalyze both hydrolysis and synthesis reactions. they have a wide range of industrial applications, among which the manufacture of detergents, pharmaceuticals and fine chemicals are outstanding. the fungus rhizopus oryzae has been reported to synthesize a number of commercially interesting enzymes. in this work, its ability to produce extracellular lipases when grown in solid-state culture has been assessed. cultures were carried out in erlenmeyer flasks, using a complex medium and several supports, both synthetic (nylon sponge) and natural (barley bran, ground walnut or peanut). the latter appeared to be more suitable for lipase production. since the best results were initially obtained with lipid-containing supports, barley bran cultures were supplemented with a vegetable oil, in an attempt to optimise lipase production and design an efficient procedure for reusing this agroindustrial waste. surprisingly, this strategy did not improve enzyme synthesis. however, when a surfactant (triton x-100) was added to the basal medium, a dramatic increase in extracellular activity was detected (up to 20-fold). the results agreed with those previously obtained in submerged cultures of r. oryzae, in which addition of olive oil did not increase lipase production, while the presence of triton x-100 had a remarkably beneficial effect. also, enzyme concentration in solid-state cultures was up to two-fold that of the submerged ones. 
the highest maximal sorption capacity was found for the yeasts cryptococcus sp. wt and rh. aurantiaca 1195. these cultures also demonstrated high sorption affinity, which makes them especially efficient biosorbents at low concentrations of lead ions. high efficiency of lead elution was shown with 0.1 n edta. isolation and identification of marine bacteria from deep-sea sediments. els maas 1, cara brosnahan 1, vicky webb 1, helen neil 2, phil sutton 2: 1 marine biotechnology, national institute for water and atmospheric research ltd., kilbirnie, wellington, new zealand; 2 oceanography, national institute for water and atmospheric research ltd., kilbirnie, wellington, new zealand. marine sediments were obtained using a piston corer with an associated trigger core (0.5 m, 0.06 m diameter). cores were collected from depths of 270 to 3911 m, along norfolk ridge and across challenger plateau. sediments ranged from coarse carbonate sands in the north to sandy and silty hemipelagic mud with increasing depth and latitude. all sample sites underlie subtropical surface water masses associated with, and south of, the tasman front. sediment samples were aseptically taken from the trigger cores upon recovery. samples were stored in sterile tubes at 4 °c on board the vessel for 3-17 days. the core samples were plated on several different agar types and incubated aerobically for 4 weeks at 16 °c. individual colonies were sub-cultured and purified using standard microbiological techniques. morphological and molecular taxonomy revealed that the bacteria isolated from the sediments were closely related to novosphingomonas, halomonas, stappia, glaciecola, pseudoalteromonas and leeuwenhoekiella. phylogenetic trees constructed using 16s rrna gene sequence data showed that two other isolates were unrelated to known genera. the bacterial isolates are currently being investigated for their biotechnological potential. 
olive oil mill wastewater (omw), the effluent of the olive industry, has a high organic load. the conventional biological treatments, despite their simplicity and rather suitable performance, are ineffective for omw treatment, since the phenolics possess antimicrobial activity. in order to carry out a proper treatment of omw, the use of microorganisms able to degrade the phenolics is thus necessary. the ability of phanerochaete chrysosporium immobilized on loofa was studied. a basal mineral salt solution along with glucose, ammonium sulfate and yeast extract was used to dilute the omw properly. the fungus did not grow on the concentrated omw; therefore, omw diluted to 20% was used throughout this study. the extents of removal of total phenolics (tp) and cod in this biotreatment were 90 and 50%, respectively, while the color and aromaticity decreased by 60 and 95%, respectively. the kinetic behavior of the loofa-immobilized fungus was found to follow the monod equation. the maximum growth rate was 0.045 h−1, while the monod constants based on the consumed tp and cod (mg/l) were 370 and 6900, respectively. the control of water pollution has become of increasing importance in recent years. the release of dyes into the environment constitutes only a small proportion of water pollution, but dyes are visible in small quantities due to their brilliance. many dyes are difficult to decolourise due to their complex structure and synthetic origin. the adsorption of the reactive dye remazol brilliant blue r (rbbr) on a polyelectrolyte complex (pec) was studied in a batch system. the adsorption parameters determined were the effect of different ph values on the adsorption of the dye by the pec, and the effect of contact time on the amount of rbbr adsorbed (in mg g−1). the data indicate that the adsorption capacity of rbbr by the pec is ph-dependent. the maximum adsorption at 50 ppm was 88.52%, equal to 11.8 mg of dye/g of polymer. 
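the monod kinetics reported above can be written out as a minimal sketch, assuming nothing beyond the reported constants (μmax = 0.045 h−1; monod constants of 370 mg/l for tp and 6900 mg/l for cod); any substrate concentrations plugged in below are illustrative.

```python
def monod(s, mu_max, ks):
    """Specific growth rate (1/h) from the Monod equation mu = mu_max*s/(ks+s)."""
    return mu_max * s / (ks + s)

MU_MAX = 0.045    # 1/h, maximum growth rate of the loofa-immobilised fungus
KS_TP = 370.0     # mg/L, Monod constant based on consumed total phenolics
KS_COD = 6900.0   # mg/L, Monod constant based on consumed COD

# at s = Ks the Monod equation gives exactly half the maximum rate,
# a quick sanity check on the reported constants
mu_at_ks_tp = monod(KS_TP, MU_MAX, KS_TP)
mu_at_ks_cod = monod(KS_COD, MU_MAX, KS_COD)
```

the large cod-based ks relative to the tp-based one reflects that growth saturates only at much higher bulk organic loads than phenolic loads.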
the results show a tendency towards greater adsorption for reactive dyes in the ph range of 8-12. the effect of contact time was studied at an initial dye concentration of 50 ppm; the amount of rbbr adsorbed by the pec increased and reached a constant value with increasing contact time. the increase in the extent of dye removal after 15 min of contact time is small, and hence 15 min is fixed as the optimum contact time. the pec thus show their capacity to remove rbbr from aqueous solutions by adsorption. flocculation of saccharomyces cerevisiae (diastaticus) ifo 1958 was studied. cells of ifo 1958 did not flocculate, even in the stationary phase, without mg2+ ("mg2+-deficient cells"), although they began to flocculate strongly 18 h after inoculation in the presence of mg2+ ("complete cells"). cycloheximide completely inhibited induction of the floc-forming ability of "mg2+-deficient cells". co-flocculation between "complete cells" and "mg2+-deficient cells" was investigated by chemical modification. treatment of "mg2+-deficient cells" with proteolytic enzymes did not affect their co-flocculation with "complete cells". photo-oxidation or mercaptoethanol reduction of "mg2+-deficient cells" also failed to weaken the co-flocculation with "complete cells", while treatment of "mg2+-deficient cells" with periodate brought about a significant loss of co-flocculation. on the contrary, "complete cells" deflocculated by proteolysis or chemical modification of the proteinaceous component failed to co-flocculate with "mg2+-deficient cells". these findings suggest that "mg2+-deficient cells" are non-flocculent because they lack a proteinaceous component essential for flocculation of cells of ifo 1958. the industrial toscano cigar production starts with the fermentation process of dark fire-cured kentucky tobacco. 
during this phase. the present work is a trial to study the portal serum factors which stimulate cell proliferation of the schistosomules, aiming to find ways to block or inhibit their effects. our previous studies showed that portal serum of human and hamster (highly susceptible hosts) and a 1-50 kd fraction separated from human portal sera by ultrafiltration stimulate cell proliferation in immature schistosomules (20 days old) in vitro. for further identification of the portal serum factors in the range of 1-50 kd that stimulate cell proliferation, schistosomules were incubated in vitro in medium containing 10% fetal calf serum, 10% portal human serum or 10% peripheral human serum, or their fractions separated by native electrophoresis followed by electroelution; incubations were performed in the presence of bromodeoxyuridine (brdu) in order to measure differences in cell proliferation. the results showed that human portal sera enhanced cell proliferation of schistosomules compared to the peripheral serum. this stimulatory effect was substantially reproduced by a fraction separated from human portal serum with a molecular weight of 20.8 kd. these results may help in designing a drug or antibody therapy to block the stimulating effect of the portal serum fraction and subsequently disturb the life cycle of the parasite at an early stage of development. center of molecular biosciences, university of the ryukyus, okinawa 903-0213, japan. e-mail: naoya-s@comb.u-ryukyu.ac.jp (n. shinzato). oil storage tank sludge, mainly composed of water and solid hydrocarbons (waxes), needs to be treated when harvesting the stored oil. although sludge treatment by microbial surfactants or microbial cracking is considered a feasible method, microbial degradation of the waxes (i.e. solid n-alkanes) has been reported in only a very limited number of species, such as acinetobacter. 
in addition, long-chain n-alkanes (so-called paraffin waxes) are among the major components of oil, and their resistance to biological attack holds up the recovery of oil-polluted environments. in this report, we have screened n-tetracosane (c24)-degrading bacteria from soils on okinawa island, a unique sub-tropical area of japan, to assess the bacterial diversity and the degradation mechanisms involved. 16s rdna phylogenetic analysis of the isolates (ca. 40 in total) showed that they included not only acinetobacter and pseudomonas, but also other proteobacteria (alcaligenes), actinomycetes (gordonia, nocardia, and leifsonia), bacillus, staphylococcus, and unidentified strains. they also grew not only on solid n-alkanes but also on iso-alkanes and mid-chain n-alkanes as the sole carbon source. results on biosurfactant production will also be shown. the properties of 188 environmental enterococci were studied. the strains were isolated mainly from surface and waste waters, and several strains from sheep manure were also included. species identification was provided by a combination of phenotypic (micronaut system, merlin) and molecular detection methods (automated its-pcr, ddl-pcr). several discrepancies were observed when comparing molecular and biochemical identification. six enterococcal species were identified overall; e. faecium and e. hirae were the most abundant, with almost 80% of isolates belonging to these two species. the distribution of selected genes conferring virulence to enterococci (cyla, gele and esp) was investigated; a positive signal was obtained mainly for e. faecalis strains. the strains were also characterized for the possession of enterocin genes (enta, entb, entp, ent31, entl50ab), and a high frequency of enterocins was observed. biosorption of three different dyes (reactive black 5, cibacron brilliant yellow, cibacron brilliant red) onto the immobilized microalga scenedesmus obliquus was investigated in a batch system.
the immobilized alga exhibited the highest dye uptake capacity at an initial ph value of 2.0 for all dyes. the effect of temperature on equilibrium sorption capacity indicated that the maximum was obtained at 25 °c for rb5, cby and cbr biosorption. the freundlich and langmuir adsorption models were used for the mathematical description of the biosorption equilibrium, and isotherm constants were evaluated. biocontrol properties of microbially-treated sugar beet wastes in presence of rock phosphate. n. vassilev, i. nikolaeva, m. vassileva, department of chemical engineering, faculty of sciences, university of granada, c/fuentenueva s/n, granada-18071, spain. e-mail: nbvass@yahoo.com (n. vassilev). the effect of soil application of sugar beet wastes (sb) treated with aspergillus niger in the presence of rock phosphate (rp) on the control of fusarium wilt of tomato was studied. two treatments and a control were used: inoculation with glomus intraradices (am), further inoculation with a. niger grown on sb + rp medium, and the control (c). application of the am fungus increased plant growth, p and n uptake and reduced disease caused by fusarium oxysporum f. sp. lycopersici (fol) as compared to non-mycorrhizal control plants. soil amendment with sb + rp + a. niger resulted in 347% and 467% (versus c) higher plant shoot biomass in plant-soil experiments contaminated or not with fol, respectively. in this case, disease severity and the number of fol cfu reached the lowest levels, while soil phosphatase and beta-glucosidase activities increased compared to all other treatments. fol negatively affected plant root mycorrhization determined in the am treatment, while the difference between the mycorrhization of plants grown in the presence and absence of f. oxysporum in sb + rp + a. niger-amended soil was insignificant (53% versus 59%, respectively). in conclusion, the fermentation mixture containing mineralized organic matter, partially solubilized rp, and a.
niger biomass could be efficiently used not only in improving plant growth, nutrient uptake and properties of degraded and polluted soils, as previously reported (vassilev and vassileva, 2003), but also in environmentally mild management of fusarium wilt. vassilev, n., vassileva, m., 2003. appl. microbiol. biotechnol. 61, 435-440. biological treatment processes allow for the effective elimination of charged inorganic micropollutants, e.g. a number of oxyanions, heavy metals, etc., from contaminated drinking water supplies. however, dedicated technologies have to be implemented in order to eliminate the target pollutants without changing the quality of treated water, avoiding its secondary pollution by cells, nutrients and metabolic by-products. some innovative technologies, which combine the use of membranes with the bioconversion of charged micropollutants in order to deal with the secondary water contamination problem, will be presented and critically compared. the treatment of olive mill wastewater (omw) by a fungus immobilized on loofa was studied. a basal mineral salt solution along with glucose, ammonium sulfate and yeast extract was used to dilute the omw properly. the fungus did not grow on the concentrated omw; therefore, omw diluted by 20% was used throughout this study. the extents of removal of total phenolics (tp) and cod in this biotreatment were 90 and 50%, respectively, while the color and aromaticity decreased by 60 and 95%, respectively. the kinetic behavior of the loofa-immobilized fungus was found to follow the monod equation. the maximum growth rate was 0.045 h−1, while the monod constants based on consumed tp and cod were 370 and 6900 mg/l, respectively. advanced start-up strategy of an anaerobic three-phase turbulent bed reactor treating winery wastewaters. r. cresson, h. carrère, n. bernet, j.p. delgenès, laboratoire de biotechnologie de l'environnement, institut national de la recherche agronomique (inra), avenue des etangs, 11100 narbonne, france. e-mail: cresson@ensam.inra.fr (r.
cresson) the objective of our study was to compare two start-up strategies for an anaerobic biofilm process, to create an effective biofilm and increase the organic loading rate (olr) as quickly as possible. two methanogenic three-phase biofilm reactors were started using the same operational parameters (solid hold-up ratio, gas velocity of 1 mm s−1) in order to test two different strategies:
• maximal load strategy (reactor a): the olr is increased as long as the global amount of removed cod (biogas production) increases.
• maximal removal strategy (reactor b): the olr is increased stepwise as soon as the cod removal rate reaches 80%.
both reactors were operated for 90 days, up to a volumetric olr of 20 g cod l−1 d−1, with more than 90% carbon removal. the total amount of cod removed and methane produced were higher in reactor b (by 19.6 and 32.2%, respectively). in both reactors, the short hydraulic retention time (hrt) applied throughout the experiment caused a rapid wash-out of planktonic bacteria and an exclusive use of the substrate by the attached micro-organisms, which accelerated biofilm growth. the lag phase was reduced to approximately 7 days. the reactor submitted to repetitive disturbance by the maximal removal strategy appeared to be more robust when confronted with perturbations such as organic overload or nutritional deficiency. the experiments demonstrated the capability and efficiency of the aggressive strategy for controlling anaerobic bioreactor start-up. sustainability is the generally accepted paradigm for future industrial development. the re-integration of waste products into production processes is a major aspect of environmental sustainability. in this study the use of sugar cane molasses is being investigated for the production of bioplastics by mixed microbial cultures, with the added possibility of parallel biohydrogen production.
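the two olr ramp-up rules described for reactors a and b can be sketched as simple decision steps. this is a minimal illustration only: the 80% criterion comes from the abstract, while the step size and all example values are assumptions.

```python
# Sketch of the two start-up strategies for raising the organic loading
# rate (OLR). Units and step sizes are illustrative assumptions.

def maximal_load_step(olr, removed_cod_now, removed_cod_prev, step=1.0):
    """Strategy A: keep raising the OLR as long as the absolute amount
    of COD removed (i.e. biogas production) keeps increasing."""
    return olr + step if removed_cod_now > removed_cod_prev else olr

def maximal_removal_step(olr, removal_fraction, step=1.0, threshold=0.80):
    """Strategy B: raise the OLR stepwise once COD removal reaches 80%."""
    return olr + step if removal_fraction >= threshold else olr

# example: reactor b at 5 g COD/l/d with 85% removal gets a step up
print(maximal_removal_step(5.0, 0.85))
```

the point of the sketch is only that strategy a watches an absolute quantity (removed cod) while strategy b watches a relative one (removal efficiency), which is why b repeatedly perturbs the biofilm.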
polyhydroxyalkanoates (phas) are polyesters synthesized by bacteria and accumulated as granules in the cytoplasm. studies conducted by this group have shown that mixed microbial cultures subjected to dynamic feeding conditions may accumulate phas up to 80% of cell dry weight, a value close to that obtained for pure cultures. volatile fatty acids are good substrates for the production of phas by mixed cultures. on the other hand, sugar molasses, with a very high sugar content (about 50% dry weight), can produce organic acids by fermentation. the two-stage process being implemented in this study includes a molasses fermentation step, in which the high sugar content of the molasses is converted into volatile fatty acids (vfas), and a pha production step, in which the vfas serve as the precursors for the formation of phas under dynamic feeding conditions. moreover, hydrogen can be produced by anaerobic bacteria from carbohydrate-rich substrates, giving organic fermentation end products, h2 and co2. to optimize the production of both high-value products, design of experiments (doe) is being used to elaborate a set of experiments to study the effect of ph, hydraulic retention time and organic loading on both the organic acid distribution (which will serve as precursors for pha production in the second step) and h2 production in the acidogenic fermentation reactor (a 1 l cstr). preliminary results show that the effluent of the acidogenic reactor fed with 10 g/l total sugars and operated at ph 7 and d = 0.1 h−1 (composed mainly of acetate and propionate) can be successfully fed to a polymer-accumulating mixed culture. under these conditions, the h2 production yield has been estimated at 3.9 mol h2/mol sucrose. vanillin is a flavour compound used in the food industry, fragrances and pharmaceutical preparations, which is nowadays mainly produced by chemical synthesis.
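the reported yield of 3.9 mol h2/mol sucrose can be put in context against the commonly cited theoretical maximum of 4 mol h2 per mol hexose via the acetate pathway (hence 8 mol per mol sucrose, which hydrolyses to two hexoses). that reference value is an assumption of this sketch, not a figure from the abstract.

```python
# Hedged arithmetic check on the reported dark-fermentation hydrogen yield.
# Assumes the acetate-pathway ceiling of 4 mol H2 per mol hexose, so
# 8 mol H2 per mol sucrose (sucrose = 2 hexoses).
THEORETICAL_MAX = 2 * 4          # mol H2 per mol sucrose (assumed ceiling)
reported_yield = 3.9             # mol H2 per mol sucrose (from the abstract)

fraction = reported_yield / THEORETICAL_MAX
print(f"{fraction:.4f}")         # fraction of the assumed theoretical maximum
```

under that assumption the reported yield corresponds to roughly half of the acetate-pathway maximum, consistent with an effluent containing propionate as well as acetate.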
the increased demand for natural products in the food industry, as well as the high cost of natural vanillin extracted from vanilla pods, has recently stimulated the search for alternatives to produce this compound in a natural way. the microbial transformation of ferulic acid, a phenolic compound from lignin degradation, is recognized as the most interesting alternative for producing natural vanillin. the combined effects of initial ferulic acid concentration (s0) and biomass concentration (x0) on vanillin production by resting cells of an escherichia coli strain were investigated using response surface methodology. e. coli jm109/pbb1, a recombinant strain producing key enzymes of the ferulate catabolic pathway from p. fluorescens bf13 (feruloyl-coa synthetase and feruloyl-coa hydratase/aldolase), was utilized in this work. a 3² full-factorial design was employed for the experimental design. the results showed a possible inhibition phenomenon at a vanillin concentration of about 0.1 g l−1, leading to the accumulation in the fermentation media of secondary compounds such as vanillic acid and vanillyl alcohol. removal of dissolved nutrients from wastewater using a microalgae biofilter. line christensen, suvina sooknandan, jens jørgen lønsmann iversen, department of biochemistry and molecular biology, university of southern denmark, odense, denmark. a microalgae biofilter can be used for treatment of wastewater from land-based fish farms in order to remove excess amounts of dissolved nutrients such as nitrate, ammonium and phosphate. a bubble column bioreactor has been developed for cultivation and characterization of microalgae. this type of bioreactor is equipped with a control system that enables online determination of the photosynthetic quotient and optimization of light intensity. furthermore, the bioreactor has a dual-sparging system simultaneously allowing adequate mixing and high gas-liquid mass transfer coefficients.
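the 3² full-factorial design used in the vanillin study above enumerates all nine combinations of three levels of the two factors, s0 and x0. a minimal sketch follows; the coded levels are illustrative, as the abstract does not give the actual concentrations.

```python
# Enumerate a 3^2 full-factorial design in two factors (coded levels).
# The -1/0/+1 coding is a conventional assumption, not from the abstract.
from itertools import product

s0_levels = (-1, 0, 1)   # coded low / centre / high initial ferulic acid
x0_levels = (-1, 0, 1)   # coded low / centre / high biomass concentration

design = list(product(s0_levels, x0_levels))
print(len(design))       # number of experimental runs
```

nine runs is what makes a 3² design attractive for response surface work: it supports fitting a full quadratic model in both factors.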
different species of microalgae have been cultivated in batch and fed-batch cultures to characterize growth and the ability to take up the different dissolved nutrients. the specific growth rate and substrate uptake rate have been determined to compare and select the algal species most suited for use in a biofilter. additionally, the composition of lipid, protein and carbohydrates has been measured to determine the nutritional quality of the algae when used as animal feed. at present, biological nitrogen removal is mostly carried out through several complicated steps. to simplify the present systems for nitrogen removal, we have investigated a new nitrogen-removal bioreactor using packed gel envelopes capable of simultaneous nitrification and denitrification. the envelope consists of two plate polymeric gels with a spacer in between. the ammonia oxidizer nitrosomonas europaea and the denitrifier paracoccus denitrificans are co-immobilized in the plate gels. when the envelopes are exposed to wastewater containing ammonia, the immobilized n. europaea oxidizes ammonia to nitrite at the outer aerobic surfaces of the envelopes. at the same time, as an ethanol solution is injected into the internal anaerobic spaces of the envelopes, the immobilized p. denitrificans reduces the nitrite to nitrogen gas using the ethanol solution as an electron donor for denitrification. in this way, the envelopes can remove ammonia from wastewater in a single step. we have already reported the advantages of our bioreactor in laboratory-scale experiments. in this study, we show that our large-scale bioreactor (water volume 1.8 m³) could treat three kinds of wastewater derived from coal power plants. ammonia-containing wastewater that occurred regularly in a coal power plant was continuously treated with the bioreactor using thirty envelopes for over a year. the bioreactor could remove more than 90% of total nitrogen at a hydraulic retention time (hrt) of 24 h.
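the relation used implicitly above between reactor volume, hrt and treatable flow is q = v/hrt. with the 1.8 m³ reactor from the abstract this gives the flows below; the helper itself is a generic sketch, not part of the reported work.

```python
# Treatable volumetric flow for a reactor of given volume at a given HRT.
def flow_from_hrt(volume_m3: float, hrt_h: float) -> float:
    """Return the flow (m3/day) treated at hydraulic retention time hrt_h."""
    return volume_m3 * 24.0 / hrt_h

print(flow_from_hrt(1.8, 24))   # ~1.8 m3/day at HRT = 24 h
print(flow_from_hrt(1.8, 4))    # ~10.8 m3/day at HRT = 4 h
```

the sixfold shorter hrt of 4 h mentioned next in the text therefore corresponds to a sixfold higher throughput for the same reactor volume.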
at an hrt of 4 h, the bioreactor accomplished a maximum rate (transformation of nh4+ to n2) of 6.0 g n/day per m² of envelope surface. the performance was equivalent to that obtained in the laboratory-scale experiments. furthermore, our bioreactor showed similar nitrogen-removal performance when it treated nitrate-containing wastewater occurring regularly and condensed ammonia-containing wastewater occurring at irregular intervals in coal power plants. these results show that our bioreactor can treat various nitrogen-containing wastewaters in coal power plants. thus, our concept is effective for simplifying the large-scale systems in coal power plants and other plants. in order to establish an environmentally friendly process for the treatment of metal-containing waste, a process involving sulphur-oxidizing acidophilic microbes is being considered at a portuguese refinery. bioleaching of metal-containing bottom ash, from fluidised bed incineration of sludge resulting from the refinery water treatment station, was performed using a sulphur-oxidising acidophilic culture isolated from an acid pool resulting from the weathering of sulphur piles from the claus plant. this sample served as inoculum for liquid medium cultures with 1% sterile sulphur flowers as source of energy. application of monod kinetics to the growth of the adapted culture as free cells gave a value of µ = 0.124 day−1. the yield of sulphur conversion to sulfate after 17 days was η = 78%. in the presence of bottom ash from the incineration of refinery sludges, µ = 0.141 day−1 and the yield of sulphur conversion was η = 67.5%. an iron removal of ηfe = 90% was obtained from the treated ash. x-ray fluorescence spectroscopy of the solid residue revealed total removal of metals, namely v, cu, ni and zn, and of most of the fe after 15 days of bioleaching. the presence of heavy metals in the environment is a serious problem. they are commonly present in effluents from mining and industrial activities.
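the monod growth rates reported for the sulphur-oxidising culture above (µ = 0.124 day−1 for free cells, 0.141 day−1 with bottom ash) translate into doubling times via td = ln 2 / µ; a minimal sketch:

```python
# Doubling times implied by the reported Monod maximum growth rates.
import math

def doubling_time_days(mu_per_day: float) -> float:
    """Exponential-growth doubling time (days) from specific growth rate."""
    return math.log(2) / mu_per_day

print(round(doubling_time_days(0.124), 1))  # free cells, ~5.6 days
print(round(doubling_time_days(0.141), 1))  # with bottom ash, ~4.9 days
```

the slightly higher µ in the presence of ash thus shortens the doubling time by roughly 0.7 days, even though the sulphur-conversion yield is lower.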
usually, conventional chemical methods are very expensive and have limitations when heavy metals are present in low concentrations. interest is therefore increasing in processes that involve microorganisms as an alternative method. some effluents contain heavy metal sulphates, which are soluble compounds. sulphate-reducing bacteria (srb), under anaerobic conditions, oxidize simple organic compounds (such as acetic acid and lactic acid) by utilizing sulphate as an electron acceptor and generate hydrogen sulphide. hydrogen sulphide reacts with heavy metal ions to form insoluble metal sulphides that can be easily separated from a solution. the purpose of this work was to evaluate the ability of srb to remove cr(iii), ni(ii) and zn(ii) from artificially contaminated solutions. desulfovibrio vulgaris and desulfovibrio sp. strains have been tested in this study. batch cultures were carried out in 50 ml sealed bottles with different concentrations of the studied metals (1-20 mg/l) and 10% bacterial inoculum adapted to postgate's medium c. a gaseous nitrogen stream was employed to purge oxygen and obtain anaerobic conditions. the assays were incubated statically represented very low portion (less than 4-5%) of the total bacterial community at all temperatures tested. schistosomules of schistosoma mansoni (20 days old) were incubated in rpmi 1640 medium containing 10% fetal calf serum, 10% hamster portal venous or 10% hamster peripheral venous serum (highly susceptible host), or 10% rat portal venous or 10% rat peripheral venous serum (poorly susceptible host), in the presence of bromodeoxyuridine (brdu) in order to measure differences in cell proliferation. the rate of cell proliferation of s. mansoni was also assessed in vivo in hamsters to study cell proliferation in the natural ontogeny of the organism.
the rates of cell proliferation, as expressed by brdu labeling indices (blis), were determined as a function of time of incubation by immunohistochemistry using a monoclonal antibody to brdu. compared to schistosomules cultured in the presence of rpmi plus 10% fetal calf serum, blis were increased by 41% in the presence of hamster portal serum, but not peripheral serum. in the case of the rat, no significant changes were observed in the blis with either portal or peripheral sera. the experiment was repeated using hamster portal and peripheral sera containing different schistosomal igg antibody titres. the results showed decreased values of blis compared to sera which did not contain the schistosomal antibody(ies). the in vivo results revealed that there was no cell proliferation of s. mansoni schistosomules (6 days old) in the lungs. cell proliferation was detected in 17-day-old schistosomules, and the results revealed a significant decrease in the brdu labeling indices (blis) with increasing age of the schistosomules in vivo. the results indicated that hamster portal venous serum (highly susceptible host) could contain stimulating factor(s) for schistosomule cell proliferation which are not found in the rat (poorly susceptible host), and that the presence of antibody(ies) greatly inhibits cell proliferation. this could be due to the blocking, by the antibody(ies), of some portal serum factors which stimulate cell proliferation. the release of dyes into the environment constitutes only a small proportion of water pollution, but dyes are visible even in small quantities due to their brilliance. many dyes are difficult to decolourise due to their complex structure and synthetic origin. the adsorption of the reactive dye remazol brilliant blue r (rbbr) on a polyelectrolyte complex (pec) was studied in a batch system.
the adsorption parameters determined were the effect of different ph values on the adsorption of dye by the pec, and the effect of contact time on the amount of rbbr adsorbed (in mg g−1). the data indicate that the adsorption capacity of rbbr by the pec is dependent on ph; the maximum adsorption at 50 ppm was 88.52%, equal to 11.8 mg dye/g of polymer. the results show a tendency towards greater adsorption for reactive dyes (ph range of 8-12). the effect of contact time was studied at an initial dye concentration of 50 ppm; the amount of rbbr adsorbed by the pec increased and reached a constant value with increasing contact time. the increase in the extent of dye removal after 15 min of contact time is small, and hence this is fixed as the optimum contact time. the pec thus show their capacity to remove rbbr from aqueous solutions by adsorption. compared to other european nations, the austrian population shows a low level of knowledge in the biosciences and a strong denial of gene technology (1). the austrian non-profit organisation dialog<>gentechnik, a scientific society, organizes various activities to raise awareness of the "hot topics" in the life sciences. according to its principle of independence, all activities are funded publicly. projects on behalf of the austrian authorities and international cooperations demonstrate credibility and trust in dialog<>gentechnik (2). a few examples will be presented. dialogue with the public: on the occasion of the first anniversary of the gmo labelling rules becoming effective, the action "gene technology on my plate" is performed austria-wide in shopping centres. here, consumers are informed about health and labelling aspects of gm food. two days of open discussions were organized in the context of the austrian genome research program gen-au (3): topics were "gene diagnosis" (2002) and "genome research - what is in it for me?" (2004).
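the rbbr figures reported earlier (88.52% removal of a 50 ppm solution, corresponding to 11.8 mg dye per g of polymer) are mutually consistent for an adsorbent dose of roughly 3.75 g/l; that dose is an inferred value, not stated in the abstract. a quick check:

```python
# Consistency check on the reported RBBR/PEC adsorption figures.
# q = (C0 * removal) / dose, so the implied dose is (C0 * removal) / q.
c0 = 50.0        # initial dye concentration, mg/l (from the abstract)
removal = 0.8852 # fraction of dye removed (from the abstract)
q = 11.8         # reported capacity, mg dye per g polymer (from the abstract)

dose = c0 * removal / q  # implied adsorbent dose, g/l (inferred, not stated)
print(round(dose, 2))    # ~3.75 g/l
```

the same mass-balance relation q = (c0 − ce)·v/m is the standard way such batch-adsorption capacities are computed.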
an open lab is currently being set up in vienna to offer "hands-on" experience in the life sciences for everybody. motivating students: in the very successful gen-au summer school (3), high school students spend 3-4 weeks in the lab and work with scientists. the best documentations are awarded. in an innovative project, student groups (age 16-18) work on the topics of stem cells and cloning and develop units of an e-learning course which will be accessible to all austrian schools in the near future. engaging stakeholders: dialog<>gentechnik manages interdisciplinary working groups that develop leaflets, brochures and questionnaires on various aspects of gene diagnosis. four products are currently distributed to the public and to health services. (1) european commission, eurobarometer 55.2, december 2001; (2) www.dialog-gentechnik.at; (3) www.genau.at. the aim of this paper is to compare the attitudes of urban consumers and professionals towards new technologies, especially biotechnology. we tried to find out in which areas (medical, agricultural or industrial) people accept biotechnological developments and in which they do not.