Why and where do academics publish?

William H. STARBUCK
University of Oregon & New York University, USA
starbuck@uoregon.edu

M@n@gement, 16(5), 2013, 707-718

INTRODUCTION

Social and behavioral research is a complex activity that takes place in an ambiguous environment. This environment is more ambiguous than most researchers are aware, and it is changing more rapidly than most researchers realize. In fact, focused on their own activities and struggling with the very unclear messages from their environments, most researchers have very limited perspectives on what has been and is happening. This essay describes some current and important issues that confront social and behavioral researchers [1]. Although I expose my personal opinions, I do not aim to convince readers that these opinions are necessarily correct. Rather, I hope to stimulate reflection and discussion.

[1] I thank Bernard Forgues, Allègre Hadida, and Andrea Mina for useful suggestions that improved this essay.

The first section of this essay raises issues related to researchers' motivations. It points out conflicts between doing what is methodologically correct and doing what readers expect. The second section raises issues related to researchers' abilities to evaluate research. It describes some behaviors of journal editors and reviewers that make evaluation unreliable. The third section looks at evolution in channels for academic publication. The final section presents data that suggest academic administrators (deans and department heads) have been increasing the pressures on professors to publish in prestigious journals and to publish papers that attract many citations.

COLLISIONS BETWEEN NORMS

An editor asked me to review a paper that investigated correlates of citations to published articles. The paper analyzed the citations of every article (more than 10,000 of them) that had appeared in the most prominent journals in a specific field over several decades.
My research has convinced me that editorial reviews are unreliable (Starbuck, 2005), and Bedeian (2008) has reported that many authors say editors compelled them to make statements with which they actually disagreed. Since I have no evidence that my own reviewing is more reliable than that of other reviewers, I have adopted a policy of not making definitive recommendations to editors or authors. I tell authors what I find interesting, unclear, or apparently wrong, but I try not to come across as judgmental or to tell authors what they must do. Thus, my brief review of this paper only stated that the authors had examined interesting issues in reasonable ways. However, I also wrote that the authors should not report indicators of statistical significance in some sections of their paper. The concept of statistical significance deals with inferences about population parameters based on data about a random sample from that population. Key sections of this study discussed data about the entire population of articles. For example, if the authors calculated that a correlation was 0.123 across the entire population, this number was the exact value of that correlation for articles in that specific field during that period. There was no possibility whatsoever that the population correlation might equal zero [2].

[2] In the example in the first section, the authors and the editor had run into a situation that very few statistics courses discuss: a large sample from a finite population. Because the total number of articles published in that field was finite, it was possible to obtain a sample that comprised a large fraction of the population. In such a situation, researchers need to correct estimated variances by introducing the "finite population correction factor," which equals (N - n)/N, where N is the population size and n is the sample size (Cochran, 1977). Indeed, in the described example, n equaled N, so the correction factor became zero. As a result, the variance of the sample mean around the population mean was zero. Had the authors computed t values correctly, these values would have equaled infinity. It is a bit sad that this article did not become an opportunity for the journal to teach its readers something about statistical methodology that they very likely do not know. I have to accept some responsibility for that failure because I misjudged the situation. My report to the journal's editor should have explained why data on complete populations are actually better than data from random samples and why statistical significance has no meaning for calculations about complete populations.

The editor disagreed with my advice and told the authors: "While the reviewer is strictly correct that you need not use inferential statistics, please continue to do so, on the basis that this is standard practice in our literature – albeit flawed." I did not think that I was clearly right and the editor was clearly wrong. The editor's statement acknowledged that I had given methodologically correct advice to the authors. Rather, the editor was framing the issue as one of desirable conformity to widespread social norms. The editor was telling the authors to use statistical methods incorrectly because incorrect statistical methods are "standard practice".
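To make the point in note [2] concrete, here is a minimal Python sketch (with invented numbers, not the data from the paper I reviewed) of Cochran's finite population correction applied to the variance of a sample mean. When the "sample" is the entire population, the estimated sampling variance is exactly zero, so significance tests lose their meaning.

```python
import numpy as np

def variance_of_mean_fpc(sample, N):
    """Estimated variance of the sample mean when sampling without replacement
    from a finite population of size N: (s^2 / n) * (N - n) / N (Cochran, 1977)."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    s2 = sample.var(ddof=1)            # sample variance with n - 1 in the denominator
    return (s2 / n) * (N - n) / N      # (N - n) / N is the finite population correction

rng = np.random.default_rng(0)
population = rng.normal(size=10_000)   # stand-in for "every article in the field"

subset = rng.choice(population, size=100, replace=False)
print(variance_of_mean_fpc(subset, N=10_000))      # nearly the familiar s^2 / n
print(variance_of_mean_fpc(population, N=10_000))  # exactly 0.0: no sampling error remains
```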
My advice to the authors had urged them to exhibit correct methods even though their readers very likely expected, and would be more comfortable with, incorrect methods. More generally, the editor and I disagreed about different kinds of social norms. One norm asserts that concepts about methodology prescribe proper modes of behavior; methodology presents rules that enhance learning and promote correct inferences. Another norm says proper research reports must appear authentic to readers; methodology is a formal ritual that matches readers' expectations and persuades them that they should have respect for researchers' work.

Issues of this type are widespread. Social and behavioral scientists often use language and methods that they do not understand. For example, methodologists have been trying to discourage the misuse of "statistical significance" for over 60 years... with little success [3]. Much of the talk about statistical significance in academic journals and seminars is technically incorrect, and the ways most researchers use this concept imply they do not understand its actual meaning. However, misunderstanding and incomprehension are so widespread that praxis dominates correct usage.

[3] There are two reasons I do not detail why methodologists have been trying to discourage use of statistical significance. My coauthors and I have already published explanations (Schwab and Starbuck, 2009; Schwab et al., 2011). Also, I do not want to increase the emphasis on statistical methods in the essay.

Publishing is not only about learning or knowledge. The publication of academic papers has serious implications for authors' personal prestige and continuing employment, the prestige of departments and universities, and the funding of education and research. In the instance above, the editor may have been worried that unconventional statements would make the editor's journal appear strange and lower its social status, which would reflect poorly on the editor or reduce the journal's circulation and revenues. The abstracted world of statistical theory does not attend to these issues, but they have powerful influence in the real world (Kepes and McDaniel, 2013; Mazzola and Deuling, 2013).

At the same time, methodology prescribed as correct can be very difficult, even impossible, to enact. For instance, it is impossible to study enough different cases to justify broad generalizations while also studying every case in sufficient detail to obtain a thorough understanding of that case. Likewise, to assure that a sample is random, researchers need to know diverse properties of the population, which are nearly always unknown and unknowable. Lack of such knowledge may be the reason for undertaking the research. With some data sources, it may be impossible to satisfy the assumptions of the normal procedures for statistical calculations. These procedures assume that data do not include egregiously large errors, but audits of some large data sets have shown that data-entry errors occur as often as 25 to 30% of the time. Only a small fraction of these data-entry errors are large enough to make statistical calculations very incorrect, but it takes only a single data-entry error to invalidate widely used calculations (Rousseeuw and Leroy, 1987).
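To illustrate that fragility, here is a small hypothetical sketch (the numbers are invented, not taken from any audit cited here): a single mis-keyed value can overwhelm an ordinary least-squares fit, which is precisely the weakness that robust methods of the kind Rousseeuw and Leroy describe are designed to resist.

```python
import numpy as np

# Twenty error-free observations that lie exactly on the line y = 2x.
x = np.arange(20, dtype=float)
y = 2.0 * x

# One hypothetical data-entry error: the last value, 38, keyed in as 3800.
y_bad = y.copy()
y_bad[-1] = 3800.0

clean_slope = np.polyfit(x, y, 1)[0]      # 2.0
bad_slope = np.polyfit(x, y_bad, 1)[0]    # roughly 56: one bad cell dominates the fit

print(clean_slope, bad_slope)
print(np.corrcoef(x, y)[0, 1], np.corrcoef(x, y_bad)[0, 1])  # correlation falls from 1.0 to about 0.4
```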
Yet another example is the injunction for qualitative researchers to remain "objective". Although there are ways of conducting interviews that reduce the influence of interviewers' expectations, the very fact that interviews are taking place signals some of these expectations to the interviewees. For example, an interview that discusses an organization's structure, its strategy, and its environment inevitably implies that the interviewer is looking for relations among these topics, so interviewees are likely to describe these subjects in ways that draw logical associations among them.

Correct methodology can also create erroneous inferences. Many prescriptions for statistical methods incorporate assumptions that simplify calculations (e.g., they assume Normal distributions). These prescriptions assert that researchers should not draw inferences directly from their data; researchers should base inferences on hypothetical curves that would occur if the researchers had vast amounts of data and if these data conformed to the patterns predicted by logical extrapolation. However, researchers almost never have as many data as the statistical theorems assume, and real data never conform exactly to the assumptions behind statisticians' extrapolations. Thus, the methodological prescriptions are actually telling researchers to draw inferences that their data cannot support. For the last three decades, statisticians have been using bootstrapping methods that seek greater accuracy by reducing reliance on hypothetical curves, replacing the curves with calculations based on actual data (Diaconis and Efron, 1983). Few statistics courses teach these new methods, and few researchers know about them.
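The core idea is simple enough to show in a few lines. The sketch below is a minimal illustration with invented data of the percentile bootstrap in the spirit of Diaconis and Efron: resample the observed data with replacement and let the spread of the recomputed statistic, rather than a theoretical curve, express the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=60)   # a small, decidedly non-Normal sample

def bootstrap_ci(sample, stat=np.mean, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap interval: resample the observed data with replacement
    and read the interval directly from the distribution of recomputed statistics."""
    n = len(sample)
    stats = np.array([stat(rng.choice(sample, size=n, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

print(bootstrap_ci(data))             # interval for the mean, with no Normality assumption
print(bootstrap_ci(data, np.median))  # the same machinery works for other statistics
```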
BLUNDERING RANDOMLY IN DISAGREEMENT

Academics in the Netherlands have recently been discussing an investigation into a psychologist's cheating (Bhattacharjee, 2013). A formal investigation has concluded that the psychologist published at least 55 papers based on "data" that he simply invented, and that at least 10 doctoral students completed dissertations that relied on fraudulent "data" that the psychologist invented. Of course, it should surprise no one that researchers commit fraud. Researchers are human beings who strive for success, job security, respect, and other goals that may interfere with learning or discovery. What caught my eye about the Netherlands case was something else. A recently hired junior professor attended the psychologist's research meetings and observed, "I don't know that I ever saw that a study failed, which is highly unusual. Even the best people, in my experience, have studies that fail constantly. Usually, half don't work." The junior professor was referring to experiments by psychologists who are experts in their specialized subtopics, who have read all of the relevant research literature, who are highly motivated to produce convincing findings, and who can control almost every aspect of the situations they study, including the behaviors of their subjects. Yet, about half of their experiments fail to confirm their hypotheses. This high failure rate seems to testify that psychologists are not learning very much from their studies. Even the people who have the most complete and intimate knowledge of psychological research findings have only a 50-50 chance of making correct predictions about the outcomes of new experiments that they themselves design. Obviously, those failed experiments rarely appear in print.

One reason social and behavioral scientists have difficulty learning from research is that research reports are very difficult to understand and evaluate. This difficulty has several roots. Firstly, and most fundamentally, social and behavioral scientists disagree with each other about the purposes of research and therefore about the qualities of "good" research. They disagree about the very nature of knowledge. Secondly, researchers very often misunderstand concepts they use to describe their work. For example, as remarked in the previous section, a large portion of researchers do not understand the term "statistical significance" (Schwab et al., 2011), and since many, many studies focus attention on "statistically significant" findings, this means that researchers are unable to distinguish between important and unimportant findings. Thirdly, a large fraction of research studies yield results that later studies cannot replicate. In studies of medical research, Ioannidis (2005) found that 80% of the research findings based on non-random samples were wrong, and that 15 to 25% of those based on random samples were wrong. Peach and Webb (1983) estimated the frequencies of spuriously significant correlations in studies of macroeconomic time series. They created nonsense 'models' by selecting random combinations of one dependent variable and three independent variables. When they analyzed these nonsense 'models', they found that 64 to 71% of the independent variables had 'statistically significant' coefficients. Webster and Starbuck (1988) investigated the frequencies of spurious correlations in cross-sectional data. They compiled about 15,000 correlations published in three prominent management journals. The correlations had very similar distributions in all three journals. Both the mean and the median correlations were close to 0.09, and 69% of the correlations were positive. These weak correlations form a background of meaningless or substantively unimportant correlations that researchers may mistake for significant relationships, especially when they obtain large samples. Fourthly, research reports have many and diverse properties that constitute complex perceptual stimuli, which readers find difficult to interpret. Gottfredson (1978), Gottfredson and Gottfredson (1982), and Wolff (1970) found that reviewers for psychological journals agree rather strongly with each other about the properties that papers ought to exhibit. However, when presented with specific papers to evaluate, reviewers do not agree about the properties of the papers. As a result, journal editors receive unreliable evaluations from reviewers. I have found only 16 instances in which editors had the temerity to study the agreement between reviewers and to publish their findings. The mean correlation between reviewers across these 16 studies is just 0.18. When Gottfredson calculated the correlation between reviewers' evaluations and later citations to published studies, he found it was only 0.14. In fact, even these small correlations overstate the reliability of most evaluations; for the lower-rated 70% of the papers, both of the above-mentioned correlations are approximately zero.

To indicate the practical meaning of these correlations, Figure 1 graphs simulated evaluations that resemble the available data about actual reviewers.
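A simulation of this kind is easy to reconstruct. The minimal Python sketch below (illustrative only, not the code behind the original figure) draws 800 pairs of reviewer scores from a bivariate Normal distribution with correlation 0.18, the mean level of agreement reported above; one reviewer's score then explains only about 3% of the variance in the other's.

```python
import numpy as np

rng = np.random.default_rng(7)
r = 0.18                                  # mean reviewer-reviewer correlation reported above
cov = [[1.0, r], [r, 1.0]]

# 800 simulated manuscripts, each scored by two reviewers whose ratings correlate at r.
scores = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=800)
reviewer_1, reviewer_2 = scores[:, 0], scores[:, 1]

print(np.corrcoef(reviewer_1, reviewer_2)[0, 1])  # close to 0.18
print(r ** 2)                                     # about 0.03: shared variance is tiny
```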
Correlations this low imply that social and behavioral research cannot maintain consistent developmental trends; research just blunders around in different directions as reviewers and editors argue with each other.

Figure 1. 800 simulated evaluations by reviewers who correlate near the mean (0.18). (Scatter plot; horizontal axis: Reviewer 1; vertical axis: Reviewer 2.)

Facing complex stimuli that are hard to appraise and conscious that other reviewers are likely to disagree with them, reviewers render cautious judgments. On average, they reject about 55% of the papers they evaluate, and accept only 11%. Reviewers also display various biases consistent with their looking for external evidence about manuscripts' quality. They give higher ratings to papers in the English language, to papers that incorporate algebraic equations, to papers by authors who work in prestigious universities, and to papers that agree with their own writings (Ellison, 2002; Eriksson, 2012; Mahoney, 1977, 1979; Nylenna, Riis, and Karlsson, 1994; Peters and Ceci, 1982). Mahoney (1977) found that reviewers mask their biases through comments about methodology. That is, reviewers expressed approval of the methodology of papers that agreed with their own writings, and they pointed out defects in the methodology of papers that contradicted their own writings.

Because reviewers disagree with each other, journal editors receive contradictory advice. Because reviewers are much more likely to recommend rejection than acceptance, journals tend to reject excellent papers. One consequence is that journals unintentionally reject approximately three-fourths of the best papers submitted to them (Starbuck, 2005). A second consequence is that papers in the most prestigious journals differ little from those in journals that have less prestige. Although the most prestigious journals have more opportunities to publish outstanding papers, they lack the ability to take full advantage of these opportunities. An implication is that researchers ought to search journals that are not prestigious in order to find the excellent papers that prestigious journals have rejected. However, citations indicate that researchers act as if prestigious journals publish much better papers than they actually do, and as if reviewers make much more reliable evaluations than they do. Table 1 presents estimates of the correlations between reviewers that would be consistent with the stratification of citations in published papers. These estimated correlations fall within the range of correlations reported in specific studies, but they are 50 to 95% higher than the 0.18 average reported correlation.

Table 1. Correlations between reviewers that would be consistent with observed citation patterns
  Sociology     0.30
  Management    0.33
  Economics     0.34
  Psychology    0.35

THE EVOLVING TERRAIN OF ACADEMIC PUBLISHING

The industry that publishes academic books and journals has been quite turbulent over the last three decades. One driving force has been changes in printing technology. In 1980, a publisher had to produce and sell 1,200 copies of a book to avoid losing money. At that time, 900 libraries were likely to purchase almost any new academic book, so the publisher was risking one-fourth of the initial investment. After recovering this initial investment, the publisher could economically produce additional copies in lots of 50.
By 2010, a publisher had to produce and sell only 300 copies to avoid losing money, but the likely sale to libraries had declined to 275 copies. The publisher had to risk only one-twelfth of the initial investment, and after recovering this initial investment, the publisher could economically produce additional copies just one book at a time. However, such small sales volumes yielded very little profit. Fewer and fewer libraries were buying books because they had shifted their budgets from books to journals. During the early 1980s, publishers launched many new journals, and academic readers urged their libraries to subscribe to these journals. Each new journal had few readers and attracted few citations, but they aggregated into a sizable market. Libraries found that journals in electronic form entailed lower maintenance costs than printed books, so they could make larger amounts of text available to readers. In effect, publishers had transferred their production activities from books to journals, and university libraries had transferred their services from providing books to providing journals. The documents cited in academic writings shifted toward journal papers; the Institute for Scientific Information stopped counting citations in and to books; personnel evaluations in universities began placing less importance on books and more on journal papers.

A secondary consequence of these changes was consolidation in the publishing industry. Small publishers of books could not sell enough copies to remain profitable, and larger publishers merged in search of large sales volumes. For a time, the bigger publishers sought to produce textbooks that sold numerous copies rather than many academic books that would each sell very few copies. Later, publishers tried to produce books that would appeal to new universities and new libraries, especially in developing countries. Over the years, many publishers became several publishers, and then just a few publishers.

The Internet and electronic technology are creating another wave of publishing innovation that is splashing across the behavioral and social sciences in the 2010s. New kinds of publishers have entered the market with new kinds of publication channels. I receive email messages several times a week that advertise the availability of new "open access" journals. Some of these advertisements come from traditional publishers of journals; others come from new start-ups. I also receive many email messages that advertise the availability of Internet services that publish, or store and republish, papers for free. Some of these advertisements come from respected universities, others from who-knows-where. M@n@gement was an early pioneer in this free-for-all.

Commercial publishers are interpreting "open access" to mean "please pay us," and they are requesting fees ranging upward of €2,000 per paper. It is unclear what benefits commercial publishers are offering in exchange for these fees. Respected names of journals? Respected names of editors and reviewers? Generous advertising of abstracts? Copyediting? Or merely more lines in padded résumés? [4] Internet services that are widely available and nearly free give commercial publishers little foothold.

[4] Jeffrey Beall of the University of Colorado at Denver maintains a list of predatory journals and publishers.

Commercial publishers are also charging libraries for access to journals and to databases of copyrighted papers.
Publishers predict that these charges will vary with frequency of use, which implies that more visible, more prestigious journals are going to become more valuable. At the same time, many authors are making their papers freely available through databases that charge nothing (e.g., SSRN). In medicine, major US and UK funding sources are attempting to replace traditional journals with electronic open-access publication systems. Well-organized professional associations could compete very effectively in this environment. Many outcomes seem plausible in the longer run. Deans and department heads believe that a journal's reputation is a guarantor of its authenticity. For them, the actual quality of published papers is much less relevant than the halos of high quality. Researchers have long argued that wide dissemination facilitates progress, but progress is an elusive concept where researchers do not agree about the nature of knowledge, the purposes of research, or the properties of specific papers.

THE EVOLVING TERRAIN OF ACADEMIC ACCOMPLISHMENT

Another driving force has been the development of rating systems, especially for business schools, but also for universities more generally. The ratings started in the late 1980s when Business Week published the first list of top business schools. A year later, US News and World Report published a rating of colleges and universities. Today, many periodicals produce ratings, as do some national governments. The ratings have turned vague opinions about differences between schools into powerful forces. Students want to attend highly rated schools. Donors want to give money to highly rated schools. Schools with higher ratings can charge higher prices; they have newer and more elegant facilities; they pay higher wages to professors and compete more effectively for the most desired professors. Business schools, once seen as unintellectual, vulgar appendages, have become major sources of funding, and universities that once sneered at them have learned to cherish them. In many universities, departments of arts and sciences survive largely because of subsidies from business schools.

Publicized ratings and flows of funds have brought new pressures on academic administrators. Many universities want their schools and departments to rank at the very top. Unimpressed by the impossibility of their aspirations, they press deans and department heads to engage in activities that raise ratings. In the mid-1990s, deans and department heads began urging researchers to publish in the most prestigious, most visible journals. One reason was awareness that researchers from highly rated schools dominated publication in these journals; schools with lower ratings sought to imitate schools with higher ratings (Starbuck, 2005). Another reason was that papers in these journals draw more citations, which raise visibility and have value as components of rating systems. Possibly because of these emphases, more researchers are publishing in prestigious journals. Certo, Sirmon, and Brymer (2010) reported that the number of researchers who published in the most prestigious Management journals rose from 600 in 1988 to 1,000 in 2008. The prestigious journals have increased the numbers of papers they publish, but not as rapidly as authors might want.
As more authors have competed for such publications, the average time needed to publish five papers in prestigious Management journals has increased from 5.35 years in 1988 to 9.72 years in 2008.

Another possible consequence of the above-mentioned emphases has been increasing numbers of citations throughout the entire population of academic journals. Authors want more citations. Publishers and journal editors want more citations. Deans want more citations. No one wants fewer citations. Therefore, the citations have been multiplying. Figure 2 shows the impact factors (average citations per paper) of 131 business journals over nine years. Each vertical collection of dots is the distribution of impact factors for one year; the dashed line links the means of these yearly distributions. The impact factors rose an average of 7.75% per year, a rate of increase roughly twice that for non-business journals.

Figure 2. Impact factors (natural log scale) of 131 business journals, 2000-2009. The yearly means rose from 0.75 to 1.47.

Researchers achieved higher impact factors in part by lengthening the lists of references at the ends of their papers. If each paper makes more citations, there are more citations to distribute through the system. However, reference lists grew by an average of only 3.03% per year, much more slowly than the impact factors. Thus, other factors must have made business papers more attractive to cite, including for non-business journals. One of these factors may have been general visibility. Business researchers increased their total output of papers. Figure 3 graphs the total numbers of published papers in business journals, using the 1999 total as a norm. The numbers of papers went up an average of 11.04% per year. The growing numbers of papers gave authors in fields outside of business more studies that might be relevant to their own research. A second factor may have been the increasing legitimacy of business as a topic of study that holds intellectual challenge and has strong effects on humanity.

Figure 3. Relative numbers of articles published in business journals, 1999-2010 (1999 = 100).
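As a quick arithmetic check (assuming the two plotted means in Figure 2, 0.75 and 1.47, belong to the 2000 and 2009 distributions, nine yearly steps apart), compound growth at that pace is

$$\left(\frac{1.47}{0.75}\right)^{1/9} - 1 \approx 1.96^{0.111} - 1 \approx 0.078,$$

or roughly 7.8% per year, consistent with the 7.75% average quoted above.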
CONCLUSION

Summary: Researchers do not agree about the goals of research or the criteria for evaluating research papers. One reason is that research activities and results serve different purposes. Although learning and discovery are among these purposes, other purposes include demonstrating conformity to social norms, assuring the readers of continued employment, and enhancing the prestige of journals, departments and universities. Researchers also disagree about evaluations because research papers constitute complex stimuli that evaluators can interpret in diverse ways. One result is that editorial reviews are usually inconclusive or contradictory. Another result is that reviewers and editors exhibit caution and their reactions to papers incorporate various biases. Still another result is that knowledge evolves at random and without consistent direction.

Academic publishing takes place in a changing and occasionally turbulent environment. Changes in printing technology have altered the profit-making opportunities of publishers, the feasibility of selling books rather than journals, the diversity of journals, and the functions of academic libraries. Now, the transition from paper to electronic formats appears to be opening up a host of new publication possibilities that are evoking a multitude of experiments and market offerings. Publishers and organized academic societies are competing with new start-ups that take advantage of low-cost Internet services and low-cost data-storage technologies.

Furthermore, public ratings of departments, schools, and universities have stimulated competition for students, professors, and funding. This competition has been bringing more emphasis on the prestige of journals and the citations to papers. As more researchers have sought to publish in highly visible media, access to these media has grown more difficult and the entire population of research papers has been setting higher standards for citation performance.

These issues have two implications for individual researchers. Firstly, researchers can afford to take risks in their selections of research topics and methods. The ambiguity surrounding editorial evaluation creates opportunities for researchers to invent and market new products. Researchers can modify the perceptions of their audiences: they can attract potential readers, convince them that the research comes from credible sources, and persuade them that findings or ideas are worthy of belief. Peter and Olson (1983) have offered a useful analysis of research publication as a marketing task.

Secondly, researchers need to develop personal navigation systems. Researchers do not dare to rely on their environments to tell them how they are doing. Is the paper well done? Reviewers are unlikely to provide correct appraisals. When reviewers disagree, as they often do, to whom should one listen? Would changes make the paper more persuasive? Reviewers are unlikely to provide useful suggestions, and their advice may actually be harmful. Researchers are very lucky if they have colleagues who offer honest and realistic advice, for many proffer positive reinforcement based on very superficial readings. However, researchers ought to pay some attention to every reader who makes comments, whether a journal reviewer, a colleague, or an editor. These people provide data about the audiences from which research papers must win acceptance.

William H. Starbuck is courtesy professor-in-residence at the University of Oregon and professor emeritus at New York University. He edited Administrative Science Quarterly, chaired the screening committee for Fulbright awards in business management, and was President of the Academy of Management. He has contributed many articles and books. His current research interests are research methodology, innovation, and societal trends.

REFERENCES

Bedeian, A. G. (2008). Balancing authorial voice and editorial omniscience: The 'It's my paper and I'll say what I want to'/'Ghostwriters in the sky' minuet. In Y. Baruch, A. Konrad, H. Aguinis, & W. H. Starbuck (Eds.), Opening the Black Box of Editorship (pp. 134-142). Basingstoke: Palgrave Macmillan.

Bhattacharjee, Y. (2013, April 26). The mind of a con man. New York Times.
Certo, S. T., Sirmon, D. G., & Brymer, R. (2010). Competition and knowledge creation in management: Investigating changes in scholarship from 1988 to 2007. Academy of Management Learning and Education, 9(4), 591-606.

Cochran, W. G. (1977). Sampling Techniques. New York: Wiley.

Diaconis, P., & Efron, B. (1983). Computer-intensive methods in statistics. Scientific American, 248(5), 116-130.

Ellison, G. (2002). The slowdown of the economics publishing process. Journal of Political Economy, 110(5), 947-993.

Eriksson, K. (2012). The nonsense math effect. Judgment and Decision Making, 7(6), 746-749.

Gottfredson, S. D. (1978). Evaluating psychological research reports: Dimensions, reliability, and correlates of quality judgments. American Psychologist, 33(10), 920-934.

Gottfredson, D. M., & Gottfredson, S. D. (1982). Criminal justice and (reviewer) behavior: How to get papers published. Criminal Justice and Behavior, 9(3), 259-272.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Kepes, S., & McDaniel, M. A. (2013). How trustworthy is the scientific literature in industrial and organizational psychology? Industrial and Organizational Psychology, 6(3), 252-268.

Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161-175.

Mahoney, M. J. (1979). Psychology of the scientist: An evaluative review. Social Studies of Science, 9(3), 349-375.

Mazzola, J. J., & Deuling, J. K. (2013). Forgetting what we learned as graduate students: HARKing and selective outcome reporting in I-O journal articles. Industrial and Organizational Psychology, 6(3), 279-284.

Nylenna, M., Riis, P., & Karlsson, Y. (1994). Multiple blinded reviews of the same two manuscripts: Effects of referee characteristics and publication language. JAMA, 272, 149-151.

Peter, J. P., & Olson, J. C. (1983). Is science marketing? Journal of Marketing, 47(4), 111-125.

Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-255.

Rousseeuw, P. J., & Leroy, A. M. (1987). Robust Regression and Outlier Detection. New York: Wiley.

Schwab, A., & Starbuck, W. H. (2009). Null-hypothesis significance tests in behavioral and management research: We can do better. In D. Bergh & D. Ketchen (Eds.), Research Methodology in Strategy and Management (Vol. 5, pp. 29-54). New York: Elsevier JAI.

Schwab, A., Abrahamson, E., Starbuck, W. H., & Fidler, F. (2011). Researchers should make thoughtful assessments instead of null-hypothesis significance tests. Organization Science, 22(4), 1105-1120.

Starbuck, W. H. (2005). How much better are the most prestigious journals? The statistics of academic publication. Organization Science, 16(2), 180-200.

Webster, E. J., & Starbuck, W. H. (1988). Theory building in industrial and organizational psychology. In C. L. Cooper & I. Robertson (Eds.), International Review of Industrial and Organizational Psychology (pp. 93-138). London: Wiley.

Wolff, W. M. (1970). A study of criteria for journal manuscripts. American Psychologist, 25(7), 636-639.