© Vasco Correia. Informal Logic, Vol. 32, No. 2 (2012), pp. 222-241.

The Ethics of Argumentation

VASCO CORREIA
Universidade Nova de Lisboa
Department of Philosophy
Avenida de Berna, 26-C
1069-061 Lisboa
Portugal
vasco_saragoca@hotmail.com

Abstract: Normative theories of argumentation tend to assume that logical and dialectical rules suffice to ensure the rationality of debates. Yet empirical research on human inference shows that people systematically fall prey to cognitive and motivational biases which give rise to various forms of irrational reasoning. Inasmuch as these biases are typically unconscious, arguers can be unfair and tendentious despite their genuine efforts to follow the rules of argumentation. I argue that arguers remain nevertheless responsible for the rationality of their reasoning, insofar as they can (and arguably ought to) counteract such biases by adopting indirect strategies of argumentative self-control.

Résumé: Normative theories of argumentation tend to presume that the rules of logic and dialectic suffice to ensure the rationality of argumentative discourse. Yet empirical research on human inference shows that we are often affected by cognitive and motivational biases which lead to various forms of irrational reasoning. Given that these biases are unconscious, anyone may prove tendentious despite the effort to respect the rules of argumentation. I maintain that everyone nevertheless remains responsible for the rationality of his or her reasonings, insofar as these biases can be neutralized by means of certain strategies of argumentative self-control.

Keywords: Argumentative self-control, argumentational virtues, biases, critical thinking, emotional attachment, ethics of argumentation, fallacies, motivated reasoning

1. Introduction

The study of argumentation is traditionally divided into two main disciplines: Dialectic, on the one hand, studies the rules of validity of arguments in a dialogue, whether from a formal or from an informal standpoint; Rhetoric, on the other hand, studies the conditions of persuasiveness of arguments.¹ Although these two levels of normativity are generally sufficient to account for the different aspects of argumentative discourse, recent works have hinted at the need to take into account a third dimension of argumentation, at a meta-level of investigation, which is focused neither on the norms of reasoning and discussion, nor on the norms of persuasiveness, but rather on the arguer's behavior relative to those norms. In this sense, it seems appropriate to refer to this field of enquiry as the Ethics of Argumentation, even though some may consider it a subdivision of Dialectic. Van Eemeren and Grootendorst (2004, 187), for example, developed "a code of conduct for reasonable discussions" consisting of "ten commandments" meant to guide the arguer's dialectical behavior in a debate. Other recent works assert even more explicitly the importance of an ethical approach in argumentation theory. Cohen (2009, 49), in particular, points out that "many of the results from Virtue Epistemology can be carried over into the arena of argumentation theory." In fact, the ethical approach is arguably more pertinent in the study of argumentation than in the study of justification, insofar as arguments (unlike beliefs) are typically voluntary processes.
Likewise, Aberdein (2010) explored the notion that there are "argumentational virtues" specific to the context of debate, just as there are epistemic virtues specific to the context of justification, which calls for a reflection upon the arguer's moral obligations. In line with these approaches, this article seeks to show that logical and dialectical rules are insufficient to ensure the rationality of people's reasonings in everyday debate, and that an ethical approach is paramount to elucidate in concrete terms what arguers can do to adjust their argumentative behavior to such rules. In Section 2, I justify this claim from a theoretical point of view, suggesting that arguers may reason correctly from both a logical and a dialectical point of view and nevertheless be biased or "unfair" at different levels (selective choice of premises, biased interpretation of evidence, use of loaded terms, etc.). This seems to happen all the more when the arguer's "emotional attachment" (Johnson & Blair 2006, 191) to the standpoint is particularly strong. In Section 3, I substantiate this analysis from an empirical point of view by reviewing some of the studies carried out by psychologists on human inference in the past decades, which consistently indicate that people tend to fall prey to a host of biases and heuristics that affect the rationality of reasoning in many ways. Given that these cognitive illusions tend to induce fallacies that occur unintentionally, without the arguers' awareness, I suggest that intentional efforts to observe the rules of argumentation may prove insufficient to prevent "honest mistakes." In Section 4, however, I claim that arguers remain nevertheless responsible for their argumentative behavior, at least partly and indirectly, insofar as the rationality of their attitudes may be intentionally reinforced at different levels, namely: through the development of deductive skills, through the acquisition of argumentational virtues, and through the adoption of specific strategies of "argumentative self-control."

¹ I consider Logic here to be a part of Dialectic, inasmuch as the rules of dialectic may include logical validity as one of their normative requirements (see for example van Eemeren & Grootendorst 2004, p. 193), but the two disciplines can of course be fully demarcated.

2. Biased argumentation and emotional attachment

Normative theories of argumentation tend to assume that arguers who follow the rules of correct reasoning and critical discussion are protected against fallacious forms of reasoning. But is this really the case? May an honest arguer rely solely on what she takes to be the correct norms of argumentation (setting aside disagreements regarding the nature of these norms) and expect systematically to reach a balanced or "reasonable" standpoint? This seems very unlikely, as we shall see in the next section, due to a variety of irrational phenomena that affect human inference without people's awareness. In principle, it is conceivable that an arguer reasons in accordance with the rules of logic and dialectic, carefully avoiding each known fallacy, and ends up producing a biased argument nonetheless.
For example, an economist's argument against the International Monetary Fund's intervention in Greece may be one-sided and unbalanced, despite her well-intended efforts to reason correctly, not because she commits this or that fallacy, but simply because she fails to take into account all the available relevant evidence, puts forward a rather selective choice of premises, or focuses exclusively on one aspect of the matter. Although none of this is fallacious, strictly speaking, according to most theories of fallacies, it seems somewhat illegitimate to set out an argument in such a partial and blinkered fashion. As Thagard points out, the reason for this is that many reasoning errors stem from cognitive and motivational biases that tend to occur without people's awareness:

Irrationality involves making erroneous inferences for reasons that go well beyond the employment of fallacious arguments. Rather, inferential mistakes arise from a host of psychological error tendencies (biases). (Thagard 2011, 153)

The point to be made here is that arguments may be correct from a logical and dialectical perspective and nonetheless "unfair" and tendentious. This claim challenges the idea that argumentation rules are in principle sufficient to prevent discussants from arguing in unreasonable terms. If we consider the set of pragma-dialectical rules, for example, we may observe that none of the rules in question is designed to avoid unintentional phenomena of distortion such as selective evidence gathering, selective choice of premises, or the use of loaded terms. Thus, discussants may scrupulously observe the pragma-dialectical code of conduct and nevertheless argue tendentiously. For example, the desire that my political position be correct may lead me (unintentionally) to focus on information that seemingly confirms it and, conversely, to overlook information that seemingly disconfirms it. Furthermore, the effort to observe argumentation rules is presumably intentional, whereas the biases that are liable to affect the way people argue are typically unconscious (Pohl 2004, 2; Mercier & Sperber 2011, 58; Thagard 2011, 157). As Walton (forthcoming) points out, in many cases people reason fallaciously not because they want to manipulate their audience, but because their commitment to the standpoint is such that it affects their reasoning: "Many fallacies are committed because the proponent has such strong interests at stake in putting forward a particular argument, or is so fanatically committed to the position advocated by the argument, that she is blind to weaknesses in it that would be apparent to others not so committed."

More generally, it is clear that people's emotional attachment to given standpoints significantly affects the way they reason and debate. This seems to happen, as Johnson and Blair observe (2006, 191), because "the act of reasoning is rarely carried out in a situation that lacks emotional dimension." The authors highlight in particular biases that seem to derive from what they call our "egocentric commitments," that is, the set of personal interests and involvements that distort the way we treat information and the way we argue:

Such attachments often result in a failure to recognize another point of view, to see the possibility of an objection to one's point of view, or to look at an issue from someone else's point of view.
For example, if your brother is a nurse, he probably belongs to a nursing association that promotes the interests of nurses. He probably tends to hold the viewpoints and perspectives of that association more or less as a matter of course. He is defensive when that viewpoint is challenged. (Johnson & Blair 2006, 191)

But does this mean that arguers bear no responsibility for their natural tendency to be biased? Perelman and Olbrechts-Tyteca (1969, 119) go as far as to suggest that every effort of argumentation is inevitably tendentious: "All argumentation is selective. It chooses the elements and the method of making them present. By doing so it cannot avoid being open to accusations of incompleteness and hence of partiality and tendentiousness." Yet it seems possible to counteract our own propensity to be biased, to a certain extent, by adopting control strategies designed to ensure the rationality of the cognitive processes at work in argument-making.

This is where we enter the sphere of the Ethics of Argumentation, which, unlike logic and dialectic, does not seek to examine the first-order norms of how one should argue, but rather the second-order norms of how one should behave relative to those norms. In A Systematic Theory of Argumentation, van Eemeren and Grootendorst (2004, 188) highlight the importance of the ethical approach by acknowledging that normative models of critical discussion "run the risk of being identified with striving for an unattainable utopia" if arguers do not choose to accept them. Whether or not arguers adhere to a set of dialectical rules, as the authors observe, is a problem that hinges on pragmatic and ethical considerations which go beyond the scope of logic and dialectic understood in a narrow sense (van Eemeren and Grootendorst 2004, 188). That being said, my suggestion is that the mere acceptance of the rules of logic and dialectic does not suffice to ensure the reasonableness of arguments, nor does it suffice to fulfill the arguer's obligations. After all, even the arguer who agrees to play the game by the rules is liable to be affected by emotional biases that she is not aware of. This is why it seems necessary to extend the realm of argumentation ethics to include a reflection upon the methods to promote effectively the rationality of discussants' attitudes.

My claim is that arguers remain partly responsible for the rationality of their reasoning, to the extent that they can adopt control procedures to mitigate the effects of involuntary biases. Although biases often occur without people's awareness, there are indirect ways of counteracting their impact on arguments. I argue that such procedures can be effective so long as they are grounded in an analysis of the very mechanisms underlying the formation of unconscious biases. In other words, my suggestion is that a normative theory of argumentation can only be an efficient tool with practical consequences, rather than a mere ideal of how people ought to argue in the best of worlds, if it takes into account what empirical studies indicate regarding the way people actually tend to reason in everyday debate. In conformity with this methodological assumption, I will review in the next section some of the relevant studies carried out by psychologists.
From a philosophical perspective, the point is to understand not only in which ways biases in argumentation may occur without our awareness, but also to what degree and with which consequences. This analysis will allow us to re-examine the ethical question in more concrete terms in the last section.

3. The problem of biased argumentation

Psychologists distinguish between two kinds of judgmental and inferential illusions: motivational (or "hot") illusions, on the one hand, which stem from the influence of emotions and interests upon cognitive processes, and cognitive (or "cold") illusions, on the other hand, which stem from inferential errors due to cognitive malfunctioning (Kunda 1990, Nisbett & Ross 1980, Gilovich 1991, Tetlock and Levi 1980). Some researchers contend that such illusions are rooted in adaptive mechanisms of reasoning which have evolved to promote the achievement of goals under constraints of time and information (Gigerenzer 2008, McKay & Dennett 2009, Stanovich & West 2008). It has been suggested, for example, that unrealistic optimism and self-serving illusions tend to enhance people's motivation, mood and productivity (Taylor & Brown 1988).

That being said, cognitive illusions may also lead to irrational responses, such as risk mismanagement, wishful thinking, self-deception, prejudice, scapegoating, rationalization, and so forth (Dunning et al. 2004, for a review). In addition, biases also seem to aggravate the phenomenon of "attitude polarization," causing people to interpret information in such distorted ways that their views tend to move even further apart (Lord et al. 1979). It is also worth noting that biases increase people's vulnerability to manipulative strategies of persuasion, insofar as propagandists often exploit people's cognitive weaknesses: for example, a politician who knows that fear generates biases which tend to favor her views on immigration may strategically try to induce that particular emotion in the audience. Furthermore, biases seem to widen the gap between normative models of argumentation and real-life debates, to the extent that they induce unintentional violations of the rules of argumentation. Thus, even assuming that some biases are adaptive from an evolutionary standpoint, it does not follow that they are fair or reasonable from an ethical or a dialectical standpoint.

One of the most pervasive illusions in argumentative contexts is the so-called belief bias, the tendency to evaluate arguments based on the believability of their conclusions rather than on their logical validity (Evans 2004, for a review). Evans and his colleagues (Evans et al. 1983) demonstrated that people's assessment of arguments is biased by whether or not they agree with the conclusions, by asking subjects to evaluate the validity of four different categories of syllogisms: (a) Valid-Believable, (b) Valid-Unbelievable, (c) Invalid-Believable and (d) Invalid-Unbelievable. As expected, the believability of the conclusions had an impact on the way the subjects assessed the syllogisms' validity. Remarkably, the acceptance rate was much higher for syllogisms with believable conclusions than for syllogisms with unbelievable conclusions, even when the syllogisms in question were in fact invalid.
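To make the two critical cells of this design concrete, here is one illustrative syllogism for each; these are schematic examples of my own for exposition, not items drawn from Evans et al.'s actual materials:

% Invalid-Believable: the conclusion is plausible, but the inference
% commits the fallacy of the undistributed middle.
\begin{align*}
&\text{All flowers need water.}\\
&\text{All roses need water.}\\
&\therefore\ \text{All roses are flowers.} \quad \text{(invalid, believable)}\\[1ex]
% Valid-Unbelievable: the conclusion follows necessarily from the
% premises, yet its implausibility invites rejection.
&\text{All mammals can walk.}\\
&\text{All whales are mammals.}\\
&\therefore\ \text{All whales can walk.} \quad \text{(valid, unbelievable)}
\end{align*}

On the belief-bias pattern, respondents tend to accept the first syllogism and reject the second, although validity runs exactly the other way.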
Conversely, participants tended to reject valid arguments with unbelievable conclusions, presumably because the unlikelihood of the conclusion biased their evaluation of the logical strength of the inference. According to Evans et al. (2008, 442), this phenomenon could be due to an adaptive mechanism developed to maintain the stability of people's belief system, which in turn seems required to ensure the ability to act promptly: "When arguments are encountered which support existing beliefs, the evidence suggests that we do not examine them closely." This idea is consistent with the more general hypothesis that our ancestors relied primarily on intuitive forms of reasoning, such as fast and frugal heuristics (Gigerenzer 2008; Mercier & Sperber 2011, for a review).

A very similar effect, the confirmation bias, occurs when information is interpreted in a way that tends to confirm one's own preconceptions (Baron 1988, Kunda 1999, Lord et al. 1979). This well-documented bias generally involves a tendency to focus primarily on evidence that seems to confirm our existing views and, conversely, to overlook disconfirming evidence. In many cases, the confirmation bias seems to be motivated by our emotions and desires. For example, a scientist's emotional commitment to a given hypothesis is liable to affect the way she seeks evidence and tries to persuade potential opponents. She might focus primarily on sources that agree with her views, or dismiss too hastily evidence that conflicts with those views. According to Oswald and Grosjean (2004, 81), "this tendency exists … because the possibility of rejecting the hypothesis is linked to anxiety or other negative emotions." Mercier and Sperber (2011, 57), on the other hand, suggest that we fall prey to the confirmation bias because it helps us "devise and evaluate arguments intended to persuade." According to the authors, the desire to preserve our belief system and to anticipate "proactively" potential counter-arguments leads us to seek arguments that support what we already believe:

When we want to convince an interlocutor with a different viewpoint, we should be looking for arguments in favor of our viewpoint rather than in favor of hers. Therefore, the next prediction is that reasoning used to produce argument should exhibit a strong confirmation bias. (Mercier & Sperber 2011, p. 61)

The confirmation bias seems to play an important role in a number of unintentional fallacies. For example, some people are quick to link Hitler's and Stalin's atheism to the horrors perpetrated under their rule, thereby committing the fallacy of "hasty generalization." As Dawkins (2006, 273) points out, to infer that Hitler and Stalin did their terrible deeds because they were atheists is arguably as absurd as saying: Hitler and Stalin had moustaches; they did terrible things; therefore leaders with moustaches are dangerous. Likewise, the confirmation bias also seems to induce unintentional occurrences of the "straw man" fallacy, which typically involves some sort of misrepresentation of the opponent's standpoint (Talisse and Aikin 2006, Johnson and Blair 1983, Walton 1989a). It seems plausible, for example, that my commitment to a philosophical position may lead me on occasion to misinterpret or even caricature my opponent's account without being aware of it.
In fact, there is evidence that the confirmation bias tends to undermine people's genuine efforts at objectivity and to aggravate the phenomenon of "polarization of opinions" (Lord et al. 1979, Westen et al. 2006).

Another important bias to take into account is the so-called above-average effect (or "illusory superiority"), i.e., the tendency to overestimate one's positive qualities and to underestimate one's negative qualities relative to other people. Psychologists have shown in a large number of experiments that most individuals see themselves as better than the average person with regard to numerous qualities. Gilovich reports a survey that illustrates the scope of this phenomenon:

A survey of one million high-school seniors found that 70% thought they were above average in leadership ability, and only 2% thought they were below average. In terms of ability to get along with others, all students thought they were above average, 60% thought they were in the top 10%, and 25% thought they were in the top 1%! Lest one think that such inflated self-assessments occur only in the minds of callow high-school students, it should be pointed out that a survey of university professors found that 94% thought they were better at their jobs than their average colleague. (Gilovich 1991, 77)

Similarly, other studies revealed that most people consider themselves to be happier (Klar & Giladi 1999), more fair-minded (Messick et al. 1985), more skilful behind the wheel (Svenson 1981) and more likely to live past eighty (Weinstein 1980) than the average person. In fact, as McKay and Dennett (2009, p. 505) point out, "most people view themselves as better than average on almost any dimension that is both subjective and socially desirable." Ironically, people tend to be biased about their very propensity to be biased, given that most people believe that they are less biased than other people (Pronin et al. 2004).

In more general terms, self-serving biases seem to limit people's ability to be fair and objective whenever their personal interests or goals are at stake. In another suggestive study, Ross and Sicoly (1979) interviewed 37 married couples, husband and wife separately, and asked each spouse what percentage of the housework they thought they were responsible for. Not surprisingly, the scores of the two partners added together systematically exceeded 100%, suggesting that each spouse tended to overestimate his or her own contribution to the housework. Although Ross and Sicoly maintain that this egocentric bias is mainly due to the fact that people tend to recall their own efforts more easily, they acknowledge that "motivational factors may also mediate an egocentric bias in availability. One's sense of self-esteem may be enhanced by focusing on, or weighting more heavily, one's own inputs" (Ross & Sicoly 1979, 323). This type of bias seems to play a significant role in many instances of hasty generalization, particularly when the debater's self-interest leads her to neglect the interlocutor's merits or efforts. For example, the person who begins an argument with the claim "It's always me who…" typically neglects to take into account falsifying occurrences, due to an effect of the availability heuristic or of selective memory.
Likewise, egocentric biases often lead us to appeal to the argumentum ad verecundiam (or "argument from authority") without even noticing it, not only because we tend to assume that we know more than our opponents on certain topics, but also because we tend to overestimate the degree of certainty of our beliefs (Fischoff et al. 1977). For the same reason, egocentric biases also favor the appeal to ad hominem arguments as a means to dismiss the opponent's claim, often in the condescending tone that characterizes the arguer who cannot even admit the possibility that she might be wrong.

Egocentric biases provide a remarkable illustration of the threat posed by motivated inferences to the way people reason and debate in everyday life: Insofar as these biases are unconscious, even honest arguers may end up reasoning and arguing in the most unfair fashion. To that extent, the sincerity requirement seems insufficient to ensure the reasonableness of arguments, and so does the intentional effort to avoid logical fallacies, given that biases typically operate at a sub-intentional level. Thagard (2011, 157) stresses this point: "It would be pointless to try to capture [motivated] inferences by obviously fallacious arguments, because people are rarely consciously aware of the biases that result from their motivations." A significant implication of this, as we shall see, is that arguers who are interested in reaching a balanced view must seek indirect ways to counteract their unintentional biases.

4. Argumentative self-control and critical thinking

Although cognitive and motivational biases tend to occur unintentionally, arguers are not condemned to remain the helpless victims of their own tendentiousness. There are many strategies arguers can adopt to counteract the effects of biases on the process of argumentation, and to that extent it seems reasonable to suggest that arguers are somewhat responsible for the rationality of their attitudes. It is worth noting that a similar claim has been made by a number of philosophers with respect to the process of belief formation, which partly explains the recent profusion of studies on the topic of the "Ethics of Belief" (Adler 2002, Audi 2008, Chisholm 1991, Engel 2000, Feldman 2002, Mele 2001). I argue that this type of approach is also pertinent in the field of argumentation theory, not only as a reflection upon the individual's "argumentational virtues" (Aberdein 2010, Cohen 2009), but more generally as a reflection upon the conditions of what one may call "argumentative self-control," by analogy with what some virtue theorists call "epistemic self-control" (Adler 2002, Audi 2008, Mele 2001). Adler (2002, 279), for example, defines the latter as the "ability to resist our own beliefs for the sake of furthering their aim of truth." Similarly, argumentative self-control may be defined as the ability to counteract one's own propensity to be biased for the sake of ensuring the rationality of arguments. From this perspective, the question to be asked is essentially the following: What can we do to promote the rationality of our attitudes in a debate, knowing that our genuine efforts to argue in fair terms are often undermined by unintentional biases? In what follows, I identify a few plausible ways of achieving this, though the list is not intended to be exhaustive.
To begin with, it seems likely that the very awareness of such biases can lead arguers to be more vigilant regarding their own cognitive weaknesses. After all, those who acknowledge their propensity to be biased seem to be in a better position to do something about it. Mele (2001, 99) gives an example of this: "Consider the biasing effect of the vividness of information … People aware of that effect may resolve to be vigilant against it in important matters, and they may occasionally issue relevant, salutary reminders to themselves at critical junctures." The question then, as Tetlock (2005, 189) suggests, is whether we are "open-minded enough to acknowledge the limits of open-mindedness." At any rate, the study of the rules of logic and dialectic may be supplemented by the study of the psychological roots of motivated reasoning. Thagard (2011, 158) elaborates on this notion and goes as far as to suggest that "critical thinking requires a psychological understanding of motivated inference more than a logical understanding of the structure of argument." This is not to say that the study of logic is pointless, as Thagard (2011, 163) explicitly stresses, but simply that logic does not account for many of the inferential errors that people tend to make in real-life situations.

In fact, there is evidence that the study of logic, and more generally some degree of training in abstract thinking, can contribute significantly to counteracting cognitive illusions. Holland et al. (1986, 284) insist on this point: "Training in statistics has a demonstrable effect on the way people reason about a vast range of effects in everyday life. Thus formal training of that particular type does indeed make people smarter in a pragmatic sense." This aspect was confirmed by a recent replication of the well-known "Linda problem" of Tversky and Kahneman (2008, 120). In the initial version of the experiment (Tversky & Kahneman 1983), the researchers presented a group of undergraduates with a personality sketch of Linda, a fictitious individual, constructed to portray Linda as an activist concerned with issues of social discrimination. The respondents were then asked to check which of the following alternatives was more probable: (A) Linda is a bank teller, or (B) Linda is a bank teller and is active in the feminist movement. Surprisingly, 85% of the subjects answered that alternative (B) was the more probable, clearly violating the conjunction rule of probabilities: the conjunction of two events cannot be more probable than either event alone (the rule is stated formally below). Yet a more recent version of the experiment, conducted with graduate students with statistical training, revealed that only 36% committed the fallacy, which seems to indicate that, at least in certain cases, the development of deductive skills can work as a safeguard against systematic errors of intuitive reasoning.

It is clear, however, that deductive skills alone are insufficient to render someone fair and impartial. As Paul (1986, 379) observes, "it is possible to develop extensive skills in argument analysis and construction without ever seriously applying those skills in a self-critical way to one's own deepest beliefs, values, and convictions." Furthermore, we have seen that many common biases do not even involve an actual violation of the rules of logic: It is difficult to see, for example, how the study of logic and probability could suffice to prevent the confirmation bias or the availability heuristic.
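As flagged above, the conjunction rule that the Linda problem turns on can be stated in a single line; this is the standard probability identity, not a formulation peculiar to Tversky and Kahneman's presentation:

% A conjunction can never be more probable than either of its conjuncts,
% because the conditional probability factor lies between 0 and 1.
\[
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\leq\; P(A),
\qquad \text{since } 0 \leq P(B \mid A) \leq 1.
\]
% Applied to the Linda problem: P(bank teller and feminist) <= P(bank teller),
% so alternative (B) can never be more probable than alternative (A).

Any answer that ranks (B) above (A) therefore violates the rule, whatever one's intuitions about Linda.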
Given these limits, the fairness and reasonableness of arguments also seem to depend on what Aberdein (2010, 169) calls the arguer's argumentational virtues, that is, a set of dispositions and character traits that tend to promote good reasoning. Aberdein's approach may be described as an attempt to apply the developments of virtue epistemology to the field of argumentation theory. Drawing on Foot's (1978) and Zagzebski's (1996) distinction between virtues and skills², Aberdein points out that many of the so-called epistemological virtues can profitably be applied to argumentation (Aberdein 2010, 176). Among the intellectual virtues listed by Zagzebski (1996, 114), for example, the following seem as relevant for the purposes of argumentation as for the purposes of knowledge: "the ability to recognize salient facts," "open-mindedness in collecting and appraising evidence," "fairness in evaluating the arguments of others," "intellectual humility," "intellectual perseverance, diligence, care, and thoroughness," "thinking of coherent explanations of the facts," and "being able to recognize reliable authority." This analysis may be described as "neo-Aristotelian," not only because it resembles Aristotle's list of intellectual virtues, but also because it fits well with Aristotle's idea that the character (ethos) of the arguer is a decisive element in the evaluation of arguments. Although Aristotle's focus was the importance of the arguer's credibility for the purpose of persuasion, from a rhetorical point of view, it is possible to extrapolate from that claim by suggesting that the arguer's dispositions are also paramount for the purpose of ensuring the rationality of arguments, from a dialectical point of view. The advantage of developing argumentational virtues, by contrast with the intentional effort to be impartial, is that these virtues tend to become a sort of "second nature" (Montaigne 1580, 407; Ryle 1949, 42) that allows us to reason in fair terms almost effortlessly, without a conscious and persistent effort to remain impartial. Hence the importance of critical thinking for the rationality of argumentation: As a whole, argumentational virtues help people develop what Walton (1989b, 169) calls the arguer's critical detachment, i.e., "the ability to detect biases, and thereby to avoid being too heavily partisan to attain a balanced perspective in argument."

² Foot (1978: 9) suggests that skills are mere capacities that may or may not be exercised, whereas virtues only exist if they are exercised. Zagzebski (1996, 115) adds that "virtues are psychically prior to skills" and that they require a motivational component rather than mere effectiveness.

Furthermore, the arguer who is interested in reaching a fair and balanced standpoint may also make an effort to examine (and respond to) the set of standard objections to the standpoint he or she is advancing. In Manifest Rationality, Johnson (2000: 165) contends that this task is an "obligation" that arguers must fulfill in order to promote the rationality of their arguments. To stress this point, Johnson observes that traditional approaches have focused too much on what he calls the "illative core" of arguments, i.e., the set of premises that arguers advance in support of the conclusion, and not enough on the "dialectical tier," i.e., the set of alternative positions and plausible objections that must be addressed.
In my view, the requirement that arguers construct a dialectical tier seems particularly pertinent with respect to the problem of biased argumentation: By imposing the obligation to contemplate potential objections and alternative views, it helps arguers overcome their tendency to overlook what seemingly contradicts their views, as well as the opposite tendency to focus too much on what seemingly confirms them (confirmation bias). Granted, it may not always be possible, in practice, to deal with all potential objections directed at an argument, as Wenzel (2003, 228) points out, either because of time constraints or "simply because of the sheer quantity of 'dialectical stuff' that would be associated with any significant issue." But the effort to examine systematically potential objections to our own views, even in the absence of an actual opponent (Johnson 2000, 170), can only contribute to promoting the rationality of our arguments. This strategy is consistent with what John Stuart Mill (1859) calls the "duty" of playing the devil's advocate, i.e., the obligation to "throw [ourselves] into the mental position of those who think differently from [us]":

[The truth] is [n]ever really known but to those who have attended equally and impartially to both sides and endeavored to see the reasons of both in the strongest light. So essential is this discipline to a real understanding of moral and human subjects that, if opponents of all-important truths do not exist, it is indispensable to imagine them and supply them with the strongest arguments which the most skilful devil's advocate can conjure up. (Mill 1859, 35-36)

A similar way to promote self-criticism is the analytic reconstruction of one's own arguments (Walton 1989b, 170; van Eemeren and Grootendorst 2004, 95). On the one hand, this task is a good way to test the soundness of the arguments in question; on the other hand, it also helps exteriorize any potential "dark-side commitments" (Walton 1989b, 178), that is, propositions "that are not known as explicit commitments by the arguer himself, or possibly even by the other participant in the argument." It often happens, for example, that people's arguments rely at least partly on assumptions that remain implicit and that do not resist analysis. Some of these assumptions may be insidious sources of biases that elude the arguer's awareness, such as the assumption "I know more than my opponent on this matter" or the assumption "I am less biased than my opponent," both rooted, as we have seen, in the illusory superiority bias. By exteriorizing the argument's components, the arguer has a better chance of detecting such assumptions and avoiding tendentiousness.

In real-life contexts, however, it may be easier to resort to indirect strategies of argumentative self-control. From this perspective, it seems useful to consider some of the sophisticated strategies brought to light by decision theorists. One of these strategies is precommitment (or "self-binding"), which may be described as an attempt to avoid irrational attitudes by imposing constraints on one's own future conduct (Elster 2007; Loewenstein et al. 2003, for a review). In the context of decision making, this may involve the suppression of future options, as when a pathological gambler signs a self-exclusion declaration that irreversibly bans her from casinos.
In the context of argumentation, however, it seems more plausible to envisage self-imposed constraints that concern the conditions under which arguments are set out. For example, a political analyst who is aware that her article about the Israeli-Palestinian conflict is liable to be biased by her cultural attachment to one of the sides may preemptively decide to submit her analysis to a number of control strategies: make sure that each relevant fact is taken into account, ask a colleague to detect unnoticed biases in the text, try to weigh the arguments in favor of the opposite view, and so forth. And if the argumentative context is a debate in which such measures are difficult to apply, the arguer may nonetheless adopt "fast and frugal heuristics" (Gigerenzer et al. 2011), which are generally described as rules of thumb that promote the rationality of people's attitudes under constraints of time and information. She may, for instance, commit herself to the rule "Avoid discussing the Israeli-Palestinian conflict when emotions are running high," or even adopt heuristics that incorporate the deontological requirements mentioned above, such as the rule "Always listen carefully to the opponent's argument before trying to come up with a refutation." But these are mere examples of self-imposed constraints, and individual arguers are free to adopt strategies that suit the type of biases that seem to affect them most.

Finally, it is worth noting that these self-regulating strategies need to be supplemented by interpersonal structures capable of promoting critical thinking in broader contexts. Sociologists and philosophers of science have often emphasized the idea that rationality is not a property of the individual alone, and that, consequently, the social organization of knowledge is key to promoting the truth and rationality of beliefs (Longino 1990, Merton 1976, Mill 1890, Solomon 2001). Surely the same can be said about the process of argument-making, given that debates are generally privileged occasions for exposing biases (Mercier & Sperber 2011, Sunstein 2003). Thus, for example, Campbell et al. (2009, 65) suggest that managers should counteract some of the biases underlying irrational decision-making by introducing further debate: "This safeguard can ensure that biases are confronted explicitly. It works best when the power structure of the group debating the issue is balanced." Whether at an individual or a collective level, however, there may be as many types of strategies of argumentative self-control as there are types of cognitive illusions.

5. Conclusion

The aim of this paper was to show that the rationality of people's argumentative behavior cannot rely solely on logical and dialectical requirements, given that a wide variety of fallacies are committed without the arguer's awareness. Normative theories of argumentation tend to focus on the rules that arguers should ideally observe in order to promote the reasonableness of arguments and resolve differences of opinion. Although this sort of analysis is paramount to setting the principles of rationality that guide the way people reason and debate in everyday life, it needs to be supplemented by a reflection upon the conditions under which such principles may actually be observed. As we have seen, psychologists have consistently shown that most people are prone to a variety of cognitive illusions that distort their argumentative reasoning.
Further, such biases tend to occur unintentionally, which means that deductive skills and well-intended efforts to "play the game by the rules [of critical discussion]," as van Eemeren and Grootendorst put it (2004, 187), may be insufficient to ensure the balance and reasonableness of people's way of debating.

Far from suggesting that argumentative biases are inevitable in a debate and that arguers should be excused for their unintentional fallacies, this article sought to show that arguers are partly and indirectly responsible for the rationality of their arguments, to the extent that they can exert a certain degree of control over the process of argumentation. In particular, it insisted on the notion that arguers who acknowledge their error tendencies may resort to a number of strategies of "argumentative self-control" designed to counteract the effect of biases on their dialectical behavior. A few of those strategies were briefly described here, but if the Ethics of Argumentation is to become a flourishing field of philosophical enquiry, as Virtue Epistemology has, we can expect many more suggestions along these lines.

Acknowledgements: This work was conducted within the project Argumentation, Communication and Context (PTDC/FIL-FIL/110117/2009), funded by the Portuguese Fundação para a Ciência e a Tecnologia (FCT). I would also like to thank the reviewers for their comments and suggestions.

References

Aberdein, A. (2010). Virtue in argument. Argumentation 24 (2): 165-179.
Adler, J. (2002). Belief's Own Ethics. Cambridge, MA: Bradford, MIT.
Audi, R. (2008). The ethics of belief: Doxastic self-control and intellectual virtue. Synthese 161: 403-418.
Barker, S. (2003). The Elements of Logic. New York: McGraw-Hill.
Baron, J. (1988). Thinking and Deciding. Cambridge: Cambridge University Press.
Campbell, A., Whitehead, J. & Finkelstein, S. (2009). Why good leaders make bad decisions. Harvard Business Review 87 (2): 60-66.
Chisholm, R. (1991). Firth and the ethics of belief. Philosophy and Phenomenological Research 91: 11-128.
Cohen, D. (2005). Arguments that backfire. In D. Hitchcock & D. Farr (Eds.), The Uses of Argument, pp. 58-65. Hamilton, ON: OSSA.
Cohen, D. (2009). Keeping an open mind and having a sense of proportion as virtues in argumentation. Cogency 1 (2): 49-64.
Dawkins, R. (2006). The God Delusion. London: Bantam Press.
Dunning, D., Heath, C. & Suls, J.M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest 5: 69-106.
Eemeren, F.H. van & Grootendorst, R. (2004). A Systematic Theory of Argumentation. Cambridge: Cambridge University Press.
Eemeren, F.H. van, Garssen, B. & Meuffels, B. (2009). Fallacies and Judgments of Reasonableness. Dordrecht, Heidelberg, London, New York: Springer.
Elster, J. (2007). Explaining Social Behavior. Cambridge: Cambridge University Press.
Engel, P. (2000). Believing and Accepting. Dordrecht: Kluwer.
Evans, J. (1995). Relevance and reasoning. In S.E. Newstead & J.T. Evans (Eds.), Perspectives on Thinking and Reasoning. Sussex: Lawrence Erlbaum.
Evans, J. (2004). Biases in deductive reasoning. In R.F. Pohl (Ed.), Cognitive Illusions, pp. 127-144. Hove, NY: Psychology Press.
Evans, J., Over, D. & Manktelow, K. (2008). Reasoning, decision making, and rationality. In J. Adler & L. Rips (Eds.), Reasoning: Studies of Human Inference and its Foundations, pp. 437-450. Cambridge: Cambridge University Press.
Feldman, R. (2002). Epistemological duties. In P. Moser (Ed.), Oxford Handbook of Epistemology, pp. 362-384. New York, Oxford: Oxford University Press.
Fischoff, B., Slovic, P. & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology 3 (4): 552-564.
Foot, P. (1978). Virtues and Vices. Oxford: Blackwell.
Gigerenzer, G. (2008). Rationality for Mortals. New York: Oxford University Press.
Gigerenzer, G., Todd, P. and the ABC Research Group (2011). Simple Heuristics That Make Us Smart. Oxford, New York: Oxford University Press.
Gilovich, T. (1991). How We Know What Isn't So. New York: The Free Press.
Hamblin, C. (1970). Fallacies. London: Methuen.
Holland, J., Holyoak, K., Nisbett, R. & Thagard, P. (1986). Induction: Processes of Inference, Learning and Discovery. Cambridge, MA: MIT Press.
Johnson, R. (2000). Manifest Rationality. Mahwah, NJ: Lawrence Erlbaum.
Johnson, R. & Blair, J. (2006). Logical Self-Defense. Toronto: McGraw-Hill Ryerson.
Klar, Y. & Giladi, E.E. (1999). Are most people happier than their peers, or are they just happy? Personality and Social Psychology Bulletin 25: 585-594.
Koehler, D., Brenner, L. & Griffin, D. (2002). The calibration of expert judgment: Heuristics and biases beyond the laboratory. In T. Gilovitch, D. Griffin & D. Kahneman (Eds.), Heuristics and Biases, pp. 686-715. Cambridge: Cambridge University Press.
Kruger, J. & Gilovich, T. (1999). "Naïve cynicism" in everyday theories of responsibility assessment: On biased assumptions of bias. Journal of Personality and Social Psychology 76 (5): 743-753.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin 108 (3): 480-498.
Kunda, Z. (1999). Social Cognition. Cambridge, MA: MIT Press.
Loewenstein, G., Read, D. & Baumeister, R. (2003). Time and Decision. New York: Russell Sage Foundation.
Longino, H. (1990). Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Lord, C., Ross, L. & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37 (11): 2098-2109.
McKay, R.T. & Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences 32: 493-561.
Mele, A. (2001). Autonomous Agents. Oxford, New York: Oxford University Press.
Mercier, H. & Sperber, D. (2009). Intuitive and reflective beliefs. In J. Evans & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond, pp. 149-170. Oxford: Oxford University Press.
Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34: 57-74.
Merton, R. (1976). Sociological Ambivalence. New York: The Free Press.
Messick, D., Bloom, S., Boldizar, J.P. & Samuelson, C.D. (1985). Why are we fairer than others? Journal of Experimental Social Psychology 21: 480-500.
Mill, J.S. (1859). On Liberty. Forgotten Books.
Montaigne, M. (1580). Essais. Paris: Seuil, 1967.
Myers, D.G. (1975). Discussion-induced attitude polarization. Human Relations 28: 699-714.
Nisbett, R. & Ross, L. (1980). Human Inference. Englewood Cliffs, NJ: Prentice-Hall.
Oswald, M. & Grosjean, S. (2004). Confirmation bias. In R.F. Pohl (Ed.), Cognitive Illusions, pp. 79-98. Hove and New York: Psychology Press.
Paul, W.R. (1986). Critical thinking in the strong sense and the role of argumentation in everyday life. In F.H. van Eemeren, R. Grootendorst, J.A. Blair & C.A. Willard (Eds.), Argumentation, pp. 379-388. Dordrecht: Foris Publications.
Perelman, C. & Olbrechts-Tyteca, L. (1969). The New Rhetoric. Notre Dame, London: University of Notre Dame Press.
Pirie, M. (2006). How to Win Every Argument. New York: Continuum International Publishing Group.
Pohl, R. (Ed.) (2004). Cognitive Illusions. Hove, New York: Psychology Press.
Pronin, E., Gilovitch, T. & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review 111: 781-799.
Ross, M. & Sicoly, F. (1979). Egocentric biases in availability and attribution. Journal of Personality and Social Psychology 37: 322-336.
Ryle, G. (1949). The Concept of Mind. London: Penguin Books.
Schkade, D. & Kahneman, D. (1998). Does living in California make people happy? Psychological Science 9 (5): 340-346.
Solomon, M. (2001). Social Empiricism. Cambridge, MA: MIT Press.
Stanovich, K. & West, R. (2002). Individual differences in reasoning: Implications for the rationality debate. In T. Gilovitch, D. Griffin & D. Kahneman (Eds.), Heuristics and Biases, pp. 421-440. Cambridge: Cambridge University Press.
Sunstein, C.R. (2003). Why Societies Need Dissent. Cambridge, MA: Harvard University Press.
Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47: 143-148.
Talisse, R. & Aikin, S. (2006). Two forms of the straw man. Argumentation 20 (3): 345-352.
Taylor, S.E. & Brown, J. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin 103 (2): 193-210.
Tetlock, P. (2005). Political Judgment. Princeton: Princeton University Press.
Tetlock, P. & Levi, A. (1980). Attribution bias: On the inconclusiveness of the cognition-motivation debate. Journal of Experimental Social Psychology 18: 68-88.
Thagard, P. (2011). Critical thinking and informal logic: Neuropsychological perspectives. Informal Logic 31 (3): 152-170.
Tindale, C. (2007). Fallacies and Argument Appraisal. Cambridge: Cambridge University Press.
Tversky, A. & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90 (4): 293-315.
Tversky, A. & Kahneman, D. (2008). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. In J. Adler & L. Rips (Eds.), Reasoning, pp. 114-135. Cambridge: Cambridge University Press.
Twerski, A. (1997). Addictive Thinking. Center City, MN: Hazelden.
Walton, D. (1989a). Informal Logic. Cambridge: Cambridge University Press.
Walton, D. (1989b). Dialogue theory for critical thinking. Argumentation 3: 169-184.
Walton, D. (2003). Ethical Argumentation. Plymouth: Lexington Books.
Walton, D. (2006). Fundamentals of Critical Argumentation. Cambridge: Cambridge University Press.
Walton, D. (forthcoming). Defeasible reasoning and informal fallacies. Synthese.
Weinstein, N. (1982). Unrealistic optimism about susceptibility to health problems. Journal of Behavioral Medicine 5: 441-460.
Wenzel, J. (2003). Arguer's obligations: Another perspective. In F.H. van Eemeren, J.A. Blair, C.A. Willard & A.F. Henkemans (Eds.), Anyone Who Has a View, pp. 227-236. Dordrecht: Kluwer Academic Publishers.
Westen, D., Blagov, P., Harenski, K., Kilts, C. & Hamann, S. (2006). Neural basis of motivated reasoning. Journal of Cognitive Neuroscience 18 (11): 1947-1958.
Zagzebski, L. (1996). Virtues of the Mind. Cambridge: Cambridge University Press.