PDF31.4 04 Woods 318-343 RTG © John Woods. Informal Logic, Vol. 31, No. 4 (2011), pp. 318-343. Whither Consequence? JOHN WOODS Department of Philosophy University of British Columbia Vancouver, BC V6T 1Z1 john.woods@ubc.ca Abstract: There are brief passages in Fallacies which suggest that Hamblin may doubt the existence of the inductive consequence relation. If so, his view would be that when an inductive inference is correct, it is not made so by the fact that its con- clusion is an inductive consequence of its premisses. It would follow accordingly that correct conclusion- drawing needn’t be a matter of cor- rect consequence-drawing. If in turn that were so, perhaps the same could be said for plausibility and defeasi- bility conclusion-drawing. This gen- erates this paper’s central question: Under what conditions does a con- sequence relation exist between the premisses and conclusion of a cor- rect inference? Résumé: Il y a de brefs passages dans Fallacies qui suggèrent que Hamblin doute de l'existence de la relation de conséquence inductive. Si oui, son point de vue serait que si une inférence inductive est correcte, ce n’est pas le fait que sa conclusion est une conséquence inductive de ses prémisses qui rend l’inférence correcte. Il s'ensuivrait que tirer une conclusion correctement n’est pas nécessairement une affaire de tirer une conséquence correctement. Si c'était le cas, peut-être la même chose pourrait se dire de tirer des conclusions plausibles ou annulables. Cela fait soulever une question centrale de cet article: Dans quelles conditions une relation de consé-quence existe entre les prémisses et la conclusion d'une inférence correcte? Keywords: consequence, conditionality, defeasibility, deduction, entailment, induction, probability, plausibility 1. Hamblin’s Question As of this writing in late 2010, Charles Hamblin’s Fallacies is forty years old. There is a large research programme and a hefty body of work that owes much to its influence. Fallacies is not a perfect book. But, these four decades later, it lacks an equal. One of Fallacies’s more interesting assertions is that we in the present-day are in the same situation as our pre-12th century forbears. They had lost the logical theory of the Ancients, and John Woods 319 we have lost fallacy theory.1 Actually, he says, this comparison is not quite right. In the case of the logic of deduction, there was something to be lost. But there has never been a theory of the fallacies. Even those from whom the most important pre-1970 contributions were to come were comparative dabblers.2 Ham- blin is dissatisfied with this state of affairs. It is a scandal that “[w]e have no theory of fallacy at all in the sense in which we have theories of correct reasoning or inference.” (p. 11)3 Ham- blin calls upon logicians to erase this embarrassment. This raises some obvious questions, three of which are es- pecially interesting. One is why, in 1970, the fallacies had yet to attract a full-blown theory. A second is whether, provided it had the will for it, “our” logic—modern logic4—would possess the wherewithal to repair this omission. Assuming a negative an- swer, the third of Hamblin’s questions would be whether a prop- er theory of the fallacies could be had beyond the borders of modern logic, indeed beyond the borders of any logic, “ours”, “theirs” or “yet to come”. For ease of reference, and when the context allows it, I shall speak of these collectively as Hamblin’s Question. 
The very fact that there was a Hamblin’s Question to ask suggests that the fallacies posed for the would-be theorist a spe- cial kind of difficulty. Virtually everything a logician turns his hand to is difficult. This is partly a matter of the conceptual complexity of logic’s target properties and also, even more, a consequence of its strict demands for precision, rigour and sys- tematicity. Hardly anything that excites a human being’s intel- lectual interest makes the “logical cut.” No one seriously sup- poses, except in a lazily metaphorical way, that there is a logic of love, or of beauty or of justice. (There is, for example, noth- ing to be learned about what in actuality is right or wrong from deontic logic. At their best, deontic logics hold a certain techni- cal interest. But on matters of moral substance they are non- starters.5) 1 Hamblin writes: “In some respects, …, we are in the position of the medie- val logicians before the twelfth century: we have lost the doctrine of fallacy, and need to recover it.” (p. 11) 2 Hamblin writes: “Strangely, in a certain sense, there has never been a book on fallacies; never, that is, a book-length study of the subject as a whole, or of incorrect reasoning in its own right rather than as an afterthought or ad- junct to something else.” (p. 10) 3 To catch Hamblin’s meaning here, it is advisible to read “correct” with a contrastive stress. 4 Let modern logic be any of the established systems in the period ensuing from the publication in 1879 of Frege’s Begriffsschrift to the present. 5 The same, it seems to me, is also true of epistemic and doxastic logic. If you have an epistemologist’s curiosity about knowledge and belief, these are the last places to satisfy it. See here my “Making too much of worlds”, in Guido Whither Consequence? 320 Some things are clearly amenable to logical treatment— the consequence relation, for example, or the provability prop- erty—and others, indeed most others, clearly are not. There are no algorithms for this. Logic-worthiness is not a decidable property. There are borderline cases which underdetermine the in-out question. (Think here of the concept of plausibility.)6 There are also cases which we might characterize as “cross- border”. These are concepts which appear to make as much of a claim on disciplines other than logic as on logic itself. (Think here of the concept of inference.)7 We have it, then, that one level of indecisiveness about the logic-worthiness of a concept is not knowing—or having princi- pled reason to say—whether it’s in or out or in-between and, if in-between, whether a borderline or cross-border case. But there is also a higher level of theoretical uncertainty, in which the as- piring theorist simply has no idea of how to proceed, never mind how his progress, if there were any, would best be classified af- terwards. 
What I mean by this is the far from uncommon cir- cumstance in which someone produces a not implausible ac- count of something without having much of a clue, before or after, about how to answer the question: “What does it take to make a good theory of that?” or, more directly, how to respond to the instruction: “State and justify your methodology, please.” It is interesting that much of Fallacies reads as if the logi- cian’s problem with fallacy theory is not indecisiveness at either of these levels, but rather the comparative security of thinking that fallacies are excluded by the fact that “we [logicians have] set ourselves higher standards of theoretical rigour and will not be satisfied for long with a theory less ramified and systematic than we are used to in other departments of Logic …” (p. 12). What this suggests is that the fallacies, like most other things, aren’t logic-worthy. They won’t yield to the strict demands that a logic imposes on its subject matter. Consider now the words that immediately follow: Imaguire and Dale Jacquette, editors, Possible Worlds, pages 171-217, Mu- nich: Philosophia, 2010. 6 Plausibility sometimes is taken as an operator-operator, as in “plausibly follows from”, and sometimes as a sentence-operator, as in “Vulcan was a plausible hypothesis in La Verrier’s day”. Perhaps it is both. Whether it is or not, some would exclude it from logic on the ground that there is no negation function definable for plausibility. For are there not equally plausible but incompatible propositions? I will return to plausibility in section 8 below. 7 If Harry infers ψ from a set of sentences ∑, it is a matter of logic as to whether ψ actually follows from ∑. But might it also be a matter of episte- mology as to whether drawing that consequence is a reasonable thing to do, or a matter of psychology as to whether drawing it lies within Harry’s com- putational powers? For more on the difference between consequence-having and consequence-drawing see section 4. John Woods 321 … one of the things we may find is that the kind of theory we need cannot be constructed in isolation from them [= the de- partments of logic]. (p. 12) This, if true, would be bankrupting news for fallacy theory. For if the concept of fallacy is not a concept for logic in any of its departments and yet “in isolation” from logic a theory of the fal- lacies can’t be produced at all, then it would appear that the fal- lacies don’t admit of theoretical treatment of any kind; that the very idea of a theory of them is dead-on-arrival. By any standard of fair comment, this is pretty sloppy go- ing. Why, if it were his position that a theory of fallacies cannot be got at all, would Hamblin have favoured us with a whole book on them? If Fallacies contained no theory of its subject matter, what would have been the value of it? Virtually every- one agrees that Fallacies is a good book, and that there is much to learn from it. This gives us two possibilities to consider. One is that Fallacies is a theory of the fallacies, indeed a theory that preserves the traditional idea that to commit one is to make a mistake of logic. The other is that Fallacies, though informative, sensitive to distinctions, mindful of historical developments and thoughtfully reasoned, is not a theory. Whichever it might be, each is a discouragement of the very presupposition of Ham- blin’s Question. If the first possibility held true, the mere asking of it would be the fallacy of complex question. 
Yet if the other possibility obtained, Hamblin’s Question would lack a motiva- tion, and we would be left to scramble for a face-saving way out—for example, by interpreting the Question as asking why fallacy theory can’t be formal in the manner of, say, classical first order logic or the ZF theory of sets. This is not very satisfy- ing. Hasn’t it been recognized since the very beginnings of sys- tematic logic that most of the fallacies (and all of the interesting ones) are informal, made so by considerations other than, or over and above, their logical forms—if such there be? There is point in reminding ourselves that Fallacies is in the first instance a complaint by a logician about logic. The complaint is that logic has abandoned the fallacies research pro- gramme. In the second instance, Fallacies is puzzled recognition that logicians never actually got around to it in the first place, not at least in a theoretically robust way.8 There is a good deal 8 Hamblin died in 1985 at a regrettably early age. This denied to the fallacies community the benefit of his commentary on treatments which were inspired in no small way by his demand for renewal but which appeared after his death. See, for example, John Woods and Douglas Walton, Fallacies: Se- lected Papers 1972-1982, 2nd edition, with a Foreword by Dale Jacquette, London: College Publications, 1989/2008, Douglas Walton, A Pragmatic Whither Consequence? 322 of difference between the complaint and the bewilderment. But even so, it does appear that Hamblin’s position is that: Hamblin’s view: If a theory of fallacies is possible at all, it is the job of logicians to produce it. As it happens, it is not in fact unreasonable to suppose that a theory of the falla- cies is indeed possible. So activating the fallacies project is a reasonable demand on logic. It becomes a question of rather central importance whether Fal- lacies itself counts as a response to this imperative. 2. The departments of logic Fallacies appeared in 1970, in the midst of the greatest theoreti- cal proliferation in logic’s long history. Hamblin himself ac- knowledges that logic has “departments”. He would have been fully conversant with developments in intuitionist, many valued, modal, epistemic and temporal logic, and could not have been unaware of the turbulence, caused in large part by his compatri- ots, of relevant and paraconsistent logics. Dialethic logic, an- other antipodean disturbance, didn’t hit the mainstream until 1979, with the appearance in the Journal of Philosophical Logic of Graham Priest’s “The logic of paradox”,9 but there were ear- lier intimations of it abundantly available and widely discussed, not least in places like Australia. By 1970, there weren’t many logicians who hadn’t encountered this explosion in logic, even if not enthusiastic abettors of it.10 Logic was now a store with many departments—a veritable Selfridges or Macy’s—giving the would-be logical theorist the gift of easy credit and non-stop shopping. In the aftermath of Fallacies from 1972 to 1985, Douglas Walton and I published some twenty five papers in which we took Hamblin’s call as permitting, if not strictly demanding, the appropriation of this abundance to the shifting analytical re- Theory of Fallacy. Tuscaloosa: University of Alabama Press, 1995, Frans van Eemeren and Rob Grootendorst, Argumentation, Communication, and Falla- cies: A Pragma-Dialectical Perspective. 
Mahwah, NJ: Erlbaum, 1992, and John Woods, The Death of Argument: Fallacies in Agent-Based Reasoning, Dordrecht and Boston: Kluwer, 2004. 9 Volume 8, pages 219-241. 10 Quine is perhaps the most notable exception. But even he grudgingly al- lows for the possibility that intuitionism is a real logic, and the virtual cer- tainty that quantum logic must now be acquiesced to. See W.V. Quine, Phi- losophy of Logic, 2nd edition, Cambridge, MA: Harvard University Press, 1986; first published in 1970, and Pursuit of Truth, 2nd revised edition, Cam- bridge, MA: Harvard University Press, 1992. First published in 1990. John Woods 323 quirements of fallacious reasoning. In what would come to be known as the Woods-Walton Approach, the main organizing idea was that at least one of the reasons why logic hadn’t be- forehand made satisfactory headway with the fallacies was that it had lacked the well-stocked inventories of concepts and tech- niques which flowed from its new pluralism. One of the systems the WW-Approach made brisk use of was intuitionist logic, with some borrowings from Kripke’s modal semantics. Intuitionist logic had been around—and influential—since the first decade of the century just past, and Kripke semantics had burst onto the scene beginning in 1959.11 So in 1970 these logics were hardly all that new. Also dating from 1959 was the Yale-Pittsburgh-Canberra upsurge in relevant and paraconsistent logic,12 which with only slight exceptions13 made no major inroads to the WW- Approach—an embarrassing omission, given the widespread insistence that a number of the more important fallacies are er- rors of relevance and that premisses need be (or need not be!) both consistent and relevant. Negation-as-failure cropped up in our 1978 paper on the ad ignorantiam, but it was our invention, not a borrowing from autoepistemic logic, which didn’t exist yet.14 Another of WW’s more frequent appropriations, and the 11 Saul Kripke, “A completeness theorem in modal logic”, Journal of Sum- bolic Logic, 24 (1959), 1-15; “Semantical analysis of modal logic, I: Normal propositional calculi”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 8 (1963), 113-116; “Semantical considerations in modern modal logic”, Acta Philosophica Fennica, 16 (1963), 83-94; “Semantical analysis of modal logic, II: Non-normal modal propositional calculi.” In J.W. Addison, L. Henkin and A. Tarski, editors, The Theory of Models, pages 202- 220. Amsterdam: North-Holland, 1965; and “Semantical analysis of intionis- tic logic”. In J. Crossley and M. Dummett, editors, Formal Systems and Re- cursive Functions, pages 92-130. Amsterdam: North-Holland, 1965. 12 A.R. Anderson and Nuel D. Belnap, Jr., “A simple treatment of truth func- tions”, Journal of Symbolic Logic”, 24 (1959), 301-312; A.R. Anderson, “Completeness theorems for the systems E of entailment and EQ of entail- ment with quantification”, Zeitschrift für Mathematische Logik und Grundla- gen der Mathematik, 6 (1959), 201-216; Nuel D. Belnap, Jr., “EQ and the first order functional calculus”, same journal, 6 (1960), 217-218; and “Inten- tional models for first-degree formulas”, Journal of Symbolic Logic, 32 (1967), 1-22. See also Richard Epstein, “Relatedness and implication”, Phi- losophical Studies, 36 (1979) 137-173. 13 “On fallacies”, which appeared in 1972; it is chapter 1 of Fallacies: Se- lected Papers (1989/2007). 
See also “Why is the ad populum a fallacy?” (1980, chapter 16 of Fallacies: Selected Papers); “Post hoc, ergo propter hoc” (1977), chapter 9; and “Question-begging and cumulativeness in dialec- tical games” (1982, chapter 19). 14 “The fallacy of ad ignorantiam” is chapter 11 of Fallacies: Selected Pa- pers. Autoepistemic logic arises from contributions by R.C. Moore in 1984 and 1988—“Possible worlds semantics for autoepistemic logic”, Proceedings of the Non-monotonic Workshop, New Palz: NY, 1984. 344-354, and “Au- toepistemic logic”, in P. Smets, E.H. Mamdani, D. Dubois and H. Prade, edi- Whither Consequence? 324 one most directly inspired by Hamblin himself, was dialogue logic (or in a variation, dialectic).15 Later on I will have some- thing to say about dialogue logic. The point to stress here is that according to the WW-Approach, dialogue logics are but one of the frameworks available for the analysis of concepts of rightful interest to logicians. The WW-idea that there is no single logical system, or type of logic, sufficient to account for all the fallacies, and that only a pluralism in logic will enable the trick to be turned, hasn’t won much support in the years following its flotation. One of the criticisms leveled against it is that it fails to provide a unitary theoretical framework for the fallacies, in effect necessitating the working up of a new logic for each different fallacy.16 A fur- ther development has been Douglas Walton’s slimmed-down relativization of the more sprawling pluralism of the WW- Approach to variations expressible within a generalized dialogi- cal framework.17 Seen the old way, dialogue logic is the right tool—organon—for some of the fallacies. Seen in Walton’s newer way, dialogue logic is the right overall framework for them all. Yet another important factor has been the indecisiveness of informal logicians about the extent to which the analysis of the fallacies is strictly a matter for dialogue logic, and also about where to draw the boundaries between informal logic and other disciplines that might be expected to bear on the fallacious.18 A further complication is the ambivalence shown by the informal logic community about the place of formal methods in the anal- ysis of the fallacies, or about the priorities that respectively at- editors, Non-Standard Logics for Automated Reasoning, London: Academic Press, 1988. 15 Here I follow the contemporary convention by which dialectical exchanges are dialogues centred upon the disposition of disagreement. 16 This is a recurrent criticism advanced by the Pragma-Dialectical School against the WW-analysis. For a typical expression of it, see Frans van Eemeren and Rob Grootendorst’s Argumentation, Communication, and Fallacies, p. 103. This is not the place to answer the objection in any detail. But perhaps it might quickly be observed that the principal appeal of the pluralist thesis is precisely the conviction that there is no unified logic of the fallacies. So the form of this dispute is, shall we say, negatively instructive. WW: “Since there is no unitary treatment for the fallacies, pluralism is the way to go.” VEG: “Since it can’t produce a unified treatment of the fallacies, pluralism isn’t the way to go.” (“VEG” compacts “van Eemeren” and “Groo- tendorst”.) 17 See again his A Pragmatic Theory of Fallacy. 18 A leading example is posed by the question of where informal logic leaves off and pragma-dialectics comes into play. See also Ralph Johnson’s The Rise of Informal Logic, Newport News, VA: Vale Press, 1996. 
John Woods 325 attach to the formal and informal aspects of their analyses.19 All the same, I hardly need say that, like Fallacies itself, some of the very best work on the fallacies comes from this commu- nity.20 3. Induction: A puzzle Part of the modern subject’s proliferation is to be found in a more or less standard family of inductive logics. Here, too, it would only seem natural that logician would look to these pre- cincts for suitable accommodation of the inductive fallacies. However, it surprises me that, on a fair reading, Hamblin’s logi- cal sophistication appears to have deserted him in the matter of induction. He writes: The difficulty that surrounds the definition of ‘inductive falla- cies’ in their [sic] own right is that of distinguishing at all pre- cisely between good inductions and bad. (p. 47) In chapter 7, Hamblin takes up a related point: Is there such a thing as inductive validity, or is it a contradiction in terms? Although we [= Hamblin] accept in principle that some inductive arguments are better than others, what are the canons by which we judge an inductive argument’s absolute, ra- ther than relative, worth? (p. 225) A page later he adds: A prior question … in the case of inductive arguments is: Are they real arguments? (Emphasis in the original) Hamblin identifies his difficulty with Hume’s Problem, and he rebukes the Port Royal logicians for failing to find criteria (or canons) for its mitigation. This is problematic. Hume’s Problem 19 Concerning which see, first, my “What is informal logic?”, published in 1980, reprinted as chapter 17 of Fallacies: Selected Papers; and, second, “The informal core of formal logic”, which is chapter 3 of The Death of Ar- gument: Fallacies in Agent-Based Reasoning. 20 See, for example, Ralph Johnson, Manifest Rationality: A Pragmatic The- ory of Argument, Mahwah, NJ: Lawrence Erlbaum, 2000; James Freeman, Acceptable Premises: An Epistemic Approach to an Informal Logic Problem, Cambridge: Cambridge University Press, 2005 and Maurice Finocchiaro, Arguments About Arguments: Systematic, Critical and Historical Essays in Logical Theory, Cambridge: Cambridge University Press, 2005. Straddling the divide between the formal and informal is Commitment in Dialogue, by Walton and Erik Krabbe, published in 1995 by the SUNY Press. Whither Consequence? 326 is only tangentially about criteria. It is more centrally a problem about justification. Hume is sceptical about the defence of a quite general practice in which criteria for good and bad induc- tions are routinely and confidently applied. Hume’s Problem is not about whether this, that or the other is a criterion of good inductive reasoning. The question is whether the goodness of a criterion can be proved. Hamblin’s brief remark suggests that he himself thinks, approvingly, that Hume’s point is that a criterion can’t be good unless this strict test is passed. This seems to me excessive on both their parts. Nor can it be true that the Port Royal logicians were over-casual about finding criteria, as the sections on probability in La Logique make plain. 21 In 1970, as now, no one would say that inductive logic is as securely tethered and well-advanced as the established fami- lies of deductive logic. But Carnap’s ground-breaking Logical Foundations of Probability appeared in 1950, was followed two years later by A Continuum of Inductive Methods, after some papers of importance in 1945.22 All this was preceded by J.M. 
Keynes’ A Theory of Probability in 1921, not to overlook the masterly A System of Logic23 by Mill in 1843, a work for which Hamblin has scant affection. All these works forward what they take to be binding criteria for inductive reasoning, yet none rep- resents itself as the definitive solution of Hume’s Problem. What, then, are we to make of Hamblin? It is not credible that he would have been unaware of the criteria advanced by the induc- tive logics of his day. I suppose, then, his is the stronger objec- tion that the going inductive criteria are of no avail in the ab- sence of a clinching and non-circular solution of the Problem of Induction. This is unsatisfying. Hamblin appears to be unac- quainted with a companion “problem” for deduction, which de- mands a clinching and non-circular (hence non-deductive) dem- onstration that our going deductive criteria are actually sound. I hope that I have misunderstood Hamblin on this point. Just lines after the quoted remarks the argument shifts. He writes: Until it is clear whether induction is an argument-form in any way comparable with deduction there is nothing to be gained by 21 Almost certainly written by Pascal. 22 The two books are from the University of Chicago Press in 1950 and 1952 respectively. Logical Foundations had a second edition with the same pub- lisher in 1962. Carnap’s earlier papers include “On inductive logic”, Philoso- phy of Science, 12 (1945), 72-97, and “The two concepts of probability”, Phi- losophy and Phenomenological Research, 5 (1945), 513-532. 23 Published respectively by Macmillan in London and Longmans Green, also of London. John Woods 327 treating inductive shortcomings as varieties of fallacy (47). (Emphasis added.) On the face of it, the incomparability worry is baffling. No one doubts that there are respects in which induction fails compari- son with deduction. It fails comparison in the way that apples fail comparison with oranges. Differences there surely are be- tween deductive reasoning and inductive reasoning. But does any of these actually disqualify inductive errors from member- ship in the class of fallacies? Hamblin gives us no help here. Perhaps he thinks the point is clear and persuasive just as it stands. It is neither, made so in part by Hamblin’s failure to spec- ify the point of incomparability that disturbs him so, and the concomitant failure to distinguish between the claim that 1. the trouble that the incomparability poses for the falla- cies is that there aren’t (or probably aren’t) any induc- tive ones and the claim that 2. the trouble caused by the incomparability is not that there aren’t any inductive fallacies but rather that there isn’t (or probably isn’t) a logic of them and the further claim that 3. the trouble caused by the incomparability is that there isn’t (or probably isn’t) any logic of induction. I am inclined to discount (1), that is to say, to discount it on Hamblin’s behalf.24 (2) and (3) are more interesting. What might we make of (2)? If it were true, then the following would also be true: The fallacy-excluding difference: There is at least one fea- ture D that deduction has and induction lacks which pre- cludes there being a logic of the inductive fallacies and does not preclude there being a logic of the deductive fal- lacies. It is the same were (3) to be true: 24 It would be different if Hamblin were the Aristotle of Topics and the So- phistical Refutations. The fallacies there are defined as errors of deduction. 
They are nearly always the mistaking of a non-syllogism for a syllogism, and sometimes are the error of misidentifying a proposition’s contradictory. Whither Consequence? 328 The logic-excluding difference: There is at least one fea- ture D that deduction has and induction lacks which pre- cludes there being a logic of induction and does not pre- clude there being a logic of deduction. Of course, it hinges on D. But it also hinges on a question which precedes Hamblin’s Question. It is, as we might say, The Bigger Question: What does it take to make for a log- ic? This is the question I’ll take up in the section to follow. Track- ing down D is the business of the section after it. 4. Logic’s C-concepts There was a time when the Bigger Question would have seemed a stupid question. Not now. The gluttony of logic’s pluralism in the present day makes this a central issue for theory.25 It is a question on the minds of some informal logicians, but not I think to much avail overall. Not dealing with it at all, it is even less availing for Hamblin. Since its inception, the central focus of logic has been on the relations—one or more—of logical consequence or follow- ing from. In a coinage of Moore, the converse of consequence is entailment.26 Usage varies here. In the deductive realm alone, logicians invoke the names of logical consequence, formal con- sequence, deductive consequence, semantic consequence, strict consequence, relevant consequence, paraconsistent conse- quence, and so on. Beyond deduction, a further miscellany awaits: inductive consequence, probabilistic consequence, ab- ductive consequence27, nonmonotonic consequence, defeasible consequence, plausibilistic consequence, and what have you. 25 For some recent discussions of logical pluralism, see J.C. Beall and Greg Restall, Logical Pluralism, New York: Oxford, 2006, Hartry Field, “Plural- ism in logic”, Review of Symbolic Logic, 2 (2009) 342-359 and my “Mac- Coll’s elusive pluralism”, in Amirouch Moktefi and Stephen Read, editors, Hugh MacColl After One Hundred Years, pages 205-233; a special issue of Philosophia Scientiae, 15 (2011). 26 G.E. Moore, Philosophical Studies, London: Routledge and Kegan Paul, 1922. 27 In this paper I’m going to give abductive consequence a pass. Interested readers could consult my “Recent developments in abductive logic”, Studies in History and Philosophy of Science, available online, January, 2011, and “Cognitive economics and the logic of abduction”, Review of Symbolic Logic, in pess. John Woods 329 Over its long sweep, it would not be too much to say that consequence is the concept that anchors logic. It anchors it in the following way: K-consequence. Let r denote reasoning of kind k— deductive, inductive, plausibilistic, defeasible, and what- ever not. Then if r is logic-worthy, there will exist a rela- tion of k-consequence peculiar to the type of reasoning that r is. Pluralism sanctions a two-faced multiplicity about consequence. It allows for it to vary in general type—between, as we have seen, deductive and inductive consequence, and so on. The other face of pluralism shows itself within these types. They are intra- type variations of those genera. Think here of the different pro- visions made for deductive consequence by classical, modal, many valued, intuitionist, free, relevant, paraconsistent and dialethic approaches, and of the still further variations within each of these—not only different but often enough incompatible. 
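To make the point about incompatible provisions concrete, here is a minimal sketch of my own, not anything in the text: it contrasts classical consequence with the consequence relation of Priest's "logic of paradox" (LP), mentioned in section 2. The helper names (`consequence`, `neg`) and the toy two-atom language are illustrative assumptions; the three-valued semantics is the standard presentation of LP.

```python
from itertools import product

# Truth values: 1 = true, 0 = false, 0.5 = LP's "glut" value (both true and false).
# Classical logic uses {0, 1} and designates {1}; LP uses {0, 0.5, 1} and
# designates {0.5, 1}.

def neg(x):
    return 1 - x

def consequence(premisses, conclusion, atoms, values, designated):
    """The conclusion is a consequence of the premisses iff every valuation
    that designates all the premisses also designates the conclusion."""
    for assignment in product(values, repeat=len(atoms)):
        v = dict(zip(atoms, assignment))
        if all(p(v) in designated for p in premisses) and conclusion(v) not in designated:
            return False
    return True

p     = lambda v: v["p"]          # the atom p
not_p = lambda v: neg(v["p"])     # its negation
q     = lambda v: v["q"]          # an arbitrary further atom

# Ex contradictione quodlibet: does {p, ~p} have q as a consequence?
print(consequence([p, not_p], q, ["p", "q"], [0, 1], {1}))            # True (classical)
print(consequence([p, not_p], q, ["p", "q"], [0, 0.5, 1], {0.5, 1}))  # False (LP)
```

On the classical provision an inconsistent premiss set has every sentence as a consequence; on the LP provision it does not. That is one small illustration of how differently the "same" relation of deductive consequence can be provided for.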
The same tale repeats itself for the other genera of conse- quence—inductive, abductive, defeasible, and the rest. All this makes for a veritable blizzard of purported conse- quence relations. Even so, there is a distinction that runs through these different kinds and variations, comparatively untroubled by the resultant pluralism. This is the distinction between 1. consequence-having and 2. consequence-drawing.28 Concerning (1) it suffices to say that whenever a statement ψ is a consequence of a (set of) statement(s) Σ, it is always true to say that ψ is a consequence that Σ has. Describing the drawing 28 The first logician to draw and attach importance to the distinction between consequence-having and consequence-drawing was Aristotle, though not in these words. In the earliest writings on the syllogism, Aristotle distinguishes between arguments whose premisses necessitate their conclusions (i.e., whose conclusion are consequences of those premisses) and arguments whose premisses not only necessitate their conclusions, but also satisfy fur- ther conditions. One is that none of the premisses be redundant. Another is that no premiss may occur as conclusion. A third forbids multiple conclu- sions. When these (and some other) conditions are met, the argument in ques- tion is a syllogism (Topics, 1, 100a 25-27 and On Sophistical Refutations, 1, 165a 1-3; see also Prior Analytics, A24b 19-22). Syllogisms have a number of interesting properties. One is that while a set of premisses may have many deductive consequences, it can syllogistically imply very few of them at most. Whither Consequence? 330 of consequences takes more care. There are two basic camps about consequence-drawing. According to the All-camp, it is rationally required, or at least permitted, to draw all the conse- quence had by anything ∑ you currently believe or are otherwise committed to.29 According to the Some-camp, it is never ration- ally required, or permitted, to draw all of ∑’s consequences, and yet, depending on the circumstances, it is sometimes rationally required, or permitted, to draw some of them. Counting against the All-camp is the evident impossibility of any human individual actually doing its bidding. Against this, in turn, is the All-camp’s propensity to impose its requirements not on living-and-breathing real-word reasoners, but rather on idealized reasoners, and to chalk up the shortfalls of the actual in relation to the ideal to the rational discredit of those who fall short. This dim view of the actual reasoner flows from the as- sumption, widely held within the All-camp, that the ideal stan- dards sanctioned by the model, though not satisfied by actual reasoners, are normatively binding on them. The trouble with this is the near-wholesale indifference of ideal-modellers to the necessity of showing this assumption to be true. If, for example, a model provides that ideal reasoners close their beliefs under consequence, it cannot imaginably be inferred from this that the actual reasoner’s inability to follow suit makes for a normatively subpar performance, if not out-and-out irrationality. That is, it cannot be inferred in the absence of a demonstration.30 When a logician asserts that reasoning on the ground is defective when it proceeds in the absence of perfect information or it ignores propositions that are in the deductive closure of what he already believes, someone is guilty of defective reasoning. Either the reasoner on the ground is or the logician who makes the accusa- tion is. 
Why should we automatically defer to the latter? Why should that be our default position? Isn’t something more re- quired? Don’t we have need of a demonstration? Consequence-drawing depends on consequence-having. I cannot draw ψ as a consequence of ∑ if ψ isn’t one of the con- sequences ∑ actually has. If the Some-camp is right, the de- pendency doesn’t run in the other direction. ∑ has lots of conse- quences that no one will ever draw, or should or could. Given the k-consequence principle, we can now put it that 29 Subject, of course, to particular structural limitations. Not even the most ideal of ideal reasoners could draw as consequences all the truths of formal first-order arithmetic. 30 For some failed attempts to provide such demonstrations see my paper with Gabbay, “Normative models of rationality: The disutility of some ap- proaches”, Logic Journal of the IGPL, 11 (2003), 597-613. John Woods 331 The consequence specification requirement: The first task of a logic is to specify the consequence relation(s) it seeks to elucidate, and to establish its (or their) characteristic properties. It is well to note that the consequence specification requirement imposes on the theorist no obligation to say anything about ar- guments, at least not in their everyday sense. When the require- ment is met, a useful equivalence will have been revealed. Ψ will be a consequence of ∑ if and only if 〈∑, ψ〉 is a valid se- quence. There is a tradition in logic, beginning with Aristotle, to call structures such as these arguments. But “argument” here is a technical term, whose meaning—if again the Some-camp is right—permits arguments galore which no one ever will, should or could actually make. So we should be careful in our talk of arguments. Consequence-drawing presents the logician with a second major challenge: The consequence-drawing requirement: Logicians should adjudicate the conflict between the All-camp and the Some-camp; and if they find for the Some-camp, they should specify the conditions under which it is correct and permissible to draw a consequence that ∑ has. In many ways this is logic’s toughest assignment. I have little space for it here, beyond brief mention of some of the thornier issues. The question of when to draw a consequence is condi- tioned by a number of factors. Computational capacity affects what a drawer is able to do and that in turn depends on the kind of being he (or it) is, what he is built for and what he is good at. Interest also has a bearing. What consequence it makes sense for a drawer to draw or try to draw are those that (he thinks may) answer to his interests, including what at the moment he wants to know. A third factor concerns premiss-management, with a knock-on effect for consequence-drawing. A case in point is premiss-inconsistency. To what extent, if any, should premiss- inconsistency shape consequence-drawing decisions? We might note that it is not obvious that logic’s traditional tie to argument is any better-nourished by consequence-drawing than consequence-having. Certainly it is true that conclusion- drawing is often implicated in the making of arguments. But it is certainly not true that consequence-drawing requires that an ar- gument be made. Here is why (roughly). Suppose that ψ is a consequence of ∑ and, thinking that the Φi of ∑ are all true, you believe that ψ is true. For this to be so it is not necessary—or Whither Consequence? 332 even all that frequent—that you are making a case for ψ. 
But, in its everyday sense, that's just what argument-making is.31

31 Relatedly, if we accept Robert Pinto's proposal to regard arguments as invitations to accept (or make) inferences, it takes some tugging and pulling to get it to be the case that every time I draw one of ∑'s consequences, I am inviting someone or other to draw the inference ⌐Since ∑, ψ¬. See his Argument, Inference and Dialectic, Dordrecht and Boston: Kluwer, 2001.

Something like the point I'm after here can be found in the familiar distinction between a deduction 〈Φ1, …, Φn, ψ〉 and the derivation of ψ from those same hypotheses {Φ1, …, Φn}. The deduction of ψ from {Φ1, …, Φn} is entirely a matter for the consequence relation. Derivation is different. Deductions are proper parts of derivations. But there is no derivation without justificatory marginalia opposite the lines of the deduction it encompasses. For example, if 〈{Φ, Φ ⊃ ψ}, ψ〉 is a deduction, it is not a derivation unless supplemented by the observation that ψ really does follow from {Φ, Φ ⊃ ψ} by application of the rule modus ponens. What happens in the margins is case-making. Derivations are argumentative. Deductions are argumentatively inert.

Consequence-having and consequence-drawing are two of a class of C-properties of particular importance for logic. So is the property of premiss-consistency. A fourth C-property is conditionality. Here the basic idea is that consequence-having is something that is conditionally expressible; that whenever ψ is a consequence of {Φ1, …, Φn}, then there are conditions under which ⌐If Φ1 ∧ … ∧ Φn, then ψ¬ is true. Accordingly,

The conditionality of consequence thesis: ψ is a consequence of {Φ1, …, Φn} if and only if its corresponding conditional is true.

This, if true, encumbers the would-be logician with additional work.

The conditionality search requirement: For each consequence relation a logic specifies, it must identify the conditions for an "if … then"-sentence that make the conditionality of consequence thesis true of it.

A good deal of ink and high feeling has been spilt over the thesis and the requirement. In the early days, there was an instructive battle between Russell and Hugh MacColl over the horseshoe. Russell thought that there was a relation of material consequence and a sense of "if … then" for the conditional sentences ⌐Φ ⊃ ψ¬ that express it. MacColl thought that there was no sense of "if … then" for which ⌐Φ ⊃ ψ¬ was a conditional sentence. So, if there were a relation of material consequence, the conditionality of consequence thesis wouldn't be true.32 MacColl's objection anticipated a similar one by C.I. Lewis.33 Whether or not there are conditions under which ⌐Φ ⊃ ψ¬ is a conditional, and whether or not there is a relation of material consequence which that conditional expresses, material consequence isn't strict (i.e. honest-to-goodness) consequence, and "⊃" doesn't express it. Lewis went on to propose that this omission could be rectified by introducing a new conditional symbol "⥽". In effect, he thought that "⊃" fails the conditionality of consequence requirement, and with it the conditionality specification requirement, whereas "⥽" satisfies them both. Was Lewis right about "⥽"? Are the truth conditions he assigned to "⥽"-sentences such as to verify an "if … then" sentence? Putting "◊" for the possibility operator, Lewis defined "⥽" as follows:

Φ ⥽ ψ iff ~◊(Φ ∧ ~ψ).
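Since the contrast between the two readings of "if … then" is about to matter, here is a minimal sketch of the modal content of Lewis's definition. It is an illustration of mine, not anything in the text: the toy "worlds", the helper names (`strict`, `material`) and the S5-like treatment of "◊" as truth at some world are all assumptions made for the example.

```python
# A toy possible-worlds model: each world fixes the truth values of Phi and Psi.
# Reading "◊X" as "X holds at some world" (an S5-like simplification),
# Lewis's  Phi ⥽ Psi  =  ~◊(Phi ∧ ~Psi)  says: no world makes Phi true and Psi false.
# The material  Phi ⊃ Psi  =  ~(Phi ∧ ~Psi)  is evaluated at a single world.

worlds = [
    {"phi": True,  "psi": True},
    {"phi": True,  "psi": False},
    {"phi": False, "psi": True},
]

def material(world):
    """Phi ⊃ Psi at one world."""
    return not (world["phi"] and not world["psi"])

def strict(model):
    """Phi ⥽ Psi over the whole model."""
    return not any(w["phi"] and not w["psi"] for w in model)

print([material(w) for w in worlds])  # [True, False, True]: a world-by-world verdict
print(strict(worlds))                 # False: the second world is a counterexample
```

The difference in logical form, one check at a single world versus a single check over all worlds, is the cash value of the comparison drawn next.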
It is interesting to compare this with the classical definition of “⊃”: Φ ⊃ ψ iff ~(Φ ∧ ~ψ). MacColl’s point was that the truth of ⌐~(Φ ∧ ~ψ)¬ was insufficient for the truth of any sentence in the form ⌐If Φ then ψ¬. Anticipating Lewis’ definition, MacColl also thought that there were indeed sentences in the form ⌐If Φ then ψ¬ for whose truth the truth of ⌐~◊(Φ ∧ ~ψ)¬ is sufficient. Lewis championed the idea that consequence should be conditionally expressible. Although he didn’t say so explicitly, it is evidently his view that 32 See Hugh MacColl, “‘If’ and ‘Imply’, Mind, 17 (1908), 151-152, and 453- 455, Russell, “‘If’ and ‘Imply’ a reply to Mr. MacColl,” Mind 17 (1908), and Russell, “Review: Symbolic Logic and its Applications, by Hugh MacColl” Mind, 17 (1908). An excellent survey of MacColl’s contributions to logic is Shahid Rahman’s and Juan Redmond’s Hugh MacColl: An Overview of His Logical Work with Anthology, London: College Publications, 2007. 33 See, for example, his “Implication and the algebra of logic”, Mind, 21 (1912), 522-531. Whither Consequence? 334 Sufficiency: No sentence ⌐If Φ then ψ¬ is true unless there is a sense of sufficiency in which the truth of Φ is sufficient for the truth of ψ34 What might these senses be? The obvious candidates include: Logical sufficiency, mathematical sufficiency, metaphysical sufficiency, causal sufficiency and physical sufficiency. Although they can be set down with a confidence that bespeaks their obviousness, no one should think that the similarities and differences among them have been worked out to everyone’s satisfaction. I invoke them here to assist in making a small but hardly trivial point. It is that when it comes to an antecedent’s sufficiency for its consequent, there is more than one way to skin that cat, never mind that the complete story has yet to be told. If the sufficiency claim is right, the conditionality of consequence thesis falls out rather easily. If there is a sense in which Φ is sufficient for ψ then there is a sense in which ⌐If Φ then ψ¬ is true and a sense in which ψ is a consequence of Φ (the same sense throughout). In footnote 6, I raised the question whether there is a relation of plausible consequence. If there is and the conditionality of consequence thesis is true, the conditionality specification requirement demands that we find conditions under which whenever ψ is a plausibilistic consequence of Φ, there is a true sentence ⌐If Φ then ψ¬ in which the truth of Φ is in some requisitely distinctive sense sufficient for the truth of ψ. I confess that I can find no such conditions and no such sense. Consider a case. Suppose that “There’s been a burglary” is a plausibilistic consequence of “The door’s been left open and the side-window smashed”. Whatever the truth conditions of the plausibilistic-consequence claim, there is no sense of sufficiency for which the truth of “The door’s been left open and the side- window’s been smashed” is sufficient for the truth of “There’s been a robbery”. This leaves us with three possibilities. One is that the conditionality of consequence thesis is true and there is no relation of plausibilistic consequence. Another is that there is a relation of plausibilistic consequence and the conditionality of consequence thesis is false. But if that were so, propositions 34 Relevant logicians dispute the sufficiency of ⌐~◊(Φ ∧ ~ψ)¬ for the entail- ment of ψ by Φ, and presumably also for the truth of ⌐If Φ then ψ¬. 
This is because no relevant logician would accept (except ironically) any sentence in the form ⌐If Φ ∧ ~Φ, then ψ¬ where ψ is arbitrary. It doesn’t matter. What- ever their truth conditions for consequence, their view will also be that for any Φ and ψ that satisfy them, Φ will be sufficient for ψ and ⌐If Φ then ψ¬ will be a true conditional. John Woods 335 could have consequences for whose truth they are insufficient. The same would be true, by the way, for defeasible consequence, nonmonotonic consequence, and, of course, inductive consequence. The third possibility is that there is no relation of plausibilistic consequence independently of whether the conditionality of consequence thesis is true. If this were so, we could say that, whereas the burglary case is an example of conclusion-reaching, this is not something that depends on the presence of a consequence relation obtaining between the thing concluded and the things concluded from.35 The conditionality of consequence thesis receives what perhaps its strongest theoretical support from the large family of deductive logics for which a deduction metatheorem is provable. As applied to first order classical logic, there is a case of it that appears to conform to the conditionality of consequence thesis, virtually word for word. Deduction metatheorem: ψ is a semantic consequence of Φ if and only if ⌐Φ ⊃ ψ¬ is a semantically valid sentence. However, it is known that there is no full deduction metatheorem for certain classes of logics of (what logicians take to be) defeasible consequence.36 For some logicians the lack of a deduction metatheorem is a deal-breaker. Perhaps it is. This is something we will have to make up our minds about. If indeed it is a deal-breaker, then for the class of cases in question there is no relation of defeasible consequence, and, if the consequence search requirement is sound, no logic of defeasible reasoning either. If it is not a deal-breaker, the entrenched idea that what consequences are consequences of are propositions sufficient for their truth will have to be dug up and discarded. 5. Inductive consequence This is the place where I want to get to the bottom of Hamblin’s D—the property that deduction has and induction lacks, in vir- tue of which it may be doubted induction is logic-worthy. It is no slander to say that logicians who honour the con- sequence specification requirement and who cleave to the con- sequence dependency of conclusion-drawing are hard-heads 35 This is the position of my Seductions and Shortcuts: Error in the Cognitive Economy, scheduled to appear in 2012 or early 2013. 36 Charles Morgan, “The nature of nonmonotonic reasoning”, Minds and Ma- chines, 10 (2000) 321-360. Whither Consequence? 336 about what to count as logic. Their heads are even harder if their loyalties also extend to the conditionality of consequence thesis. If the loyalties are justified, a great deal of what passes for logic these days isn’t. A good question is: If these principles are sound, how far does the disestablishment of logic go? Does it, for example, cause trouble for the logic of induction? Does it cause trouble for induction in a way that sheds light on Ham- blin’s dark sayings about it in Fallacies? There is a common and long-held view according to which ψ is an inductive consequence of {Φ1, …, Φn} if (and on some tellings only if37), the conditional probability of ψ on ⌐Φ1 ∧ … ∧ Φn¬ is sufficiently high. Perhaps this is right. For a certain large class of inductive reasonings, I think it is right. 
Think here of statistico-experimental reasoning. If it is, it is so notwithstanding that the truth of the Φi are not sufficient for the truth of ψ, and, correlatively, that the conditional probability at hand is not such as to license ⌐If Φ1 ∧ … ∧ Φn , then ψ¬. But, right or wrong, there is a further question to put. It is the question whether the relations of lending support to or being evidence for can obtain between ⌐Φ1 ∧ … ∧ Φn¬ and ψ without its also being the case that ψ is, in the sense at hand, an inductive consequence of the Φi. Of course, it depends on whether we’re prepared to hold in- ductive consequence to a sufficiency condition of its own: Sufficiency*: ψ is an inductive consequence of {Φ1, …, Φn} only if there is some sense of sufficiency in which the truth of ⌐Φ1 ∧ … ∧ Φn¬ is sufficient for the truth of ψ. Here, too, opinion is divided: One way in which sufficiency* could fail would be where ψ is rightly concluded from evidence that supports it, notwithstanding that ψ is not a consequence of it. I myself am of that view. That is not what matters here. What matters here is whether it is Hamblin’s view, or whether, had he had occasion to reflect upon it, it would have been. Let us sup- pose so. How would this bear upon the question of whether there is a logic for inductive reasoning? It would bear, or not, depending on whether Hamblin would also accept the conse- quence specification requirement: No consequence relation, no logic. Period. 37 I myself am not in the only-if camp. Consider a case. You are tramping in the wilds of Brazil and your companion points out an ocelot. It is your first ocelot. “How interesting”, you exclaim, “I always imagined that ocelots would be two-legged, not four!” I think that this is a competent induction, for a realistically broad notion of induction. But judged by, say, Bayesian stan- dards, it’s a train wreck. John Woods 337 I think it may fairly be surmised that Hamblin’s feature D, in whose absence a logic is not possible and in whose presence the opposite is true is satisfaction of the consequence specifica- tion requirement. Deduction satisfies it. Induction—for the class of cases in view—may not, and Hamblin himself appears to think that it does not. How so? Hamblin wonders whether inductive arguments are really arguments. That bears repeating: He wonders whether inductive arguments are really arguments. It is an astonishing claim, made even more so by his acknowledgement that we sometimes make probabilistic arguments, writing that … no one is going to be much interested in probabilistic argu- ment unless the probability of the premisses very clearly out- weighs the a priori improbability of the conclusion. (240) Even so, although calling inductive arguments ‘arguments’ is to mark a similarity to deductive arguments … it might be as well to reassure ourselves that the similarities are really as great as the differences. (226) Earlier I expressed surprise that Hamblin should be a sceptic about inductive logic in the face of burgeoning work on prob- ability. How, I suggested, could Hamblin not have been aware that probability theory was, if not all of inductive logic, then its theoretical core? How could he not know of the vital alliance between induction and probabilistic reasoning? If we look closely at the passages quoted, we see a readi- ness on Hamblin’s part to distrust this partnership—cutting some slack to probabilistic argument and hardly any to inductive argument. 
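The threshold account just discussed, on which ψ counts as an inductive consequence of the Φi when the conditional probability of ψ on their conjunction is high enough, can be made concrete in a toy setting. The sketch below is mine, under assumptions of my own choosing: a finite probability space, an arbitrarily stipulated threshold, and illustrative helper names (`prob`, `conditional_prob`, `inductive_consequence`).

```python
# A toy finite probability space: each row fixes the truth of the premiss
# conjunction (phi) and of the conclusion (psi), together with its probability.
worlds = [
    {"phi": True,  "psi": True,  "p": 0.45},
    {"phi": True,  "psi": False, "p": 0.05},
    {"phi": False, "psi": True,  "p": 0.20},
    {"phi": False, "psi": False, "p": 0.30},
]

def prob(event):
    return sum(w["p"] for w in worlds if event(w))

def conditional_prob(conclusion, premiss):
    return prob(lambda w: premiss(w) and conclusion(w)) / prob(premiss)

def inductive_consequence(conclusion, premiss, threshold):
    """The threshold reading: psi is an 'inductive consequence' of phi
    just in case p(psi | phi) is at least the stipulated threshold."""
    return conditional_prob(conclusion, premiss) >= threshold

phi = lambda w: w["phi"]
psi = lambda w: w["psi"]

print(conditional_prob(psi, phi))                 # approximately 0.9
print(inductive_consequence(psi, phi, 0.85))      # True at threshold 0.85
print(prob(lambda w: phi(w) and not psi(w)) > 0)  # True: phi is not sufficient for psi
```

The last line is the nub of the present section: the conditional probability can be as high as you please while the premisses remain insufficient for the truth of the conclusion.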
My present conjecture is that Hamblin's hostility to inductive logic stems from its lack of a bona fide consequence relation. I now conjecture that his further view is that reasoning probabilistically isn't consequence-drawing either. This being so, probability cannot overcome the deficiency that makes for the logic-unworthiness of induction. In plain words: No matter the details, we can't get a logic of induction from probability theory. This, if true, is something to pay attention to. So let's look into this a bit further.

6. Saving inductive consequence?

Not surprisingly, inductive logics brim with attempts to hang on to the idea of inductive consequence in a systematic way. Here is Jon Williamson on this point, speaking for, as they are called, logical theories of probability:

Perhaps the most obvious thing to try first is a generalization of entailment ⊧ to partial entailment ⊧x, where a set Θ of sentences partially entails sentence Φ to degree x, Θ ⊧x Φ, if and only if p(Φ | Θ) = x. Under such a view classical entailment is the case where x = 1. If Θ is empty we get a concept of degree of logical truth which corresponds to unconditional probability.38

Of course, partial entailment gives partial consequence. My view is that partial consequence is consequence in name only. Partial consequence fails sufficiency*.

Perhaps there is another way of getting inductive consequence back into gainful employment. Suppose, as before, we grounded inductive consequence in conditional probability, which on the conjecture just above is precisely what Hamblin would deny. Suppose, even so, that whenever p(Φ | Θ) ≥ n, for some suitable value of n, we would have it that

(1) If Θ, then probably (Φ)

and with it that

(1′) Φ is an inductive consequence of Θ.

This provides a key contrast between the partial consequence relation of logical theorists of probability and—as we have it here—inductive consequence. The difference is what happens in the interior of the corresponding conditional sentences. In the case of partial consequence, the conditional is

(2) If Θ, then Φ.

But (2) differs from (1) essentially. The consequent of (1) arises from the consequent of (2) by prefixation of the sentence operator "probably". If we think (2) false and (1) true in virtue of "probably"'s respective absence and presence, there is a way now of producing the conditional corresponding to partial consequence. We simply rewrite (2) as (1).

38 "Probability logic", in Gabbay et al., Handbook of the Logic of Argument and Inference, pages 397-424; p. 404.

The difference with Hamblin is now clearly discernible. He writes,

The logician commonly conceives arguments on the pattern 'P, therefore Q'; but … we do not normally say 'This crow is black; that crow is black; therefore all crows are black'. … Instead, we frame, at most, a modified conclusion in the form, 'Therefore it is a reasonable conclusion that …', or 'So probably …', or 'So presumably.' (p. 226)

There it is in a nutshell. Expressions in the form ⌐Θ, therefore Φ¬ encode genuine arguments and are genuinely logic-worthy. Expressions in the form ⌐Θ, therefore probably Φ¬ encode what are sometimes called arguments and thus give the impression of logic-worthiness. But the impression is wrong. Similarly, the right interpretation of "if … then" in

If Θ, then Φ

gives consequence, whereas in

If Θ, then probably (Φ)

it does not. It is an interesting test, about as hard-headed as they come.
Not only does it put inductive logic out of business; it upends the whole family of defeasibility and default logics. Apart from Hamblin's opinion of it, the present suggestion for generating consequence relations is interesting in its own right and, as I say, interesting enough to stay with a while longer.

7. Plausibilistic consequence?

There is no greater force driving logic's pluralism than the variety and sheer number of its consequence relations, real or imagined. Pace Hamblin, our rescue, just now, of inductive consequence may suggest itself as a model for the other contenders, the very ones we were inclined to give up on only brief sections ago—plausibilistic consequence, defeasiblistic consequence, nonmonotonic consequence, presumptive consequence, the lot. In the space remaining to me, I will consider only the plausibility case. For this to work, we will need sentences in the form

(3) If Θ, then plausibly (Φ)

that meet the requisite grounding condition. And for that to wash, there must already exist, or be concurrently producible, a sufficient theoretical grasp of sentences in the form ⌐plausibly (Φ)¬ as they occur in probabilistic conditionals ⌐If Θ then probably (Φ)¬, to motivate whatever is proposed as their ground. In the case of ⌐If Θ then probably (Φ)¬, we saw a grounding link in the conditional probability of Φ, and we proposed that if the conditional probability of Φ on Θ is high enough, Θ may serve as antecedent in a true sufficiency conditional whose consequent is ⌐probably (Φ)¬. That linkage constitutes the ground of (1), ⌐If Θ then probably (Φ)¬. We can say the same thing more briefly:

Grounding inductive consequence: Pace Hamblin, inductive consequence is grounded in a probability logic.

The question is whether there exists a like grounding for ⌐plausibly (Φ)¬ as it occurs in ⌐If Θ then plausibly (Φ)¬. Do sentences like (3) meet a grounding condition in the manner of (1)? Have we got a plausibility logic? Might there be a relation of conditional plausibility, and might it be analytically exploitable in the manner of conditional probability?

There are some interesting writings on plausibility, of which the best to date by a logician is Nicholas Rescher's Plausible Reasoning, published in Assen by van Gorcum in 1976.39 Plausibility is unruly. It behaves very differently from probability. As mentioned in footnote 6, it has no stable concept of negation. A given body of evidence can make incompatible propositions equally plausible, with obvious implications for closure under conjunction. Certainly there are logicians for whom the following refrain is decisive: No negation, no logic!

39 For some thoughts of my own, see Gabbay's and my The Reach of Abduction: Insight and Trial, Amsterdam: North Holland, 2005, chapter 7. For a different perspective, see Walton's Plausible Argument in Everyday Conversation, Albany, NY: SUNY Press, 1992.

On the other hand, one of the better treatments by computer scientists, Nir Friedman's and Joseph Halpern's theory of plausibility measures, is a low-structure generalization of probability. Plausibility values are partially ordered, and the measure is subject to a distinguishing axiom which says that a set of sentences must be at least as plausible as any of its subsets. Addition of two further axioms gives the so-called KLM properties for default logic. The first of this pair provides that if (i) A, B and C are pairwise disjoint sets, (ii) the plausibility of A ∪ B exceeds that of C, and (iii) the plausibility of A ∪ C exceeds that of B, then the plausibility of A alone exceeds the plausibility of B ∪ C. The other axiom stipulates that if A and B are both utterly implausible, so is A ∪ B. The KLM properties of a putative default conditional → are set by a reflexivity axiom, and the rules of left logical equivalence, right weakening, conjunction, disjunction and cautious monotonicity.40

40 The KLM properties are named after their proposers: S. Kraus, D. Lehmann and M. Magidor, "Nonmonotonic reasoning, preferential models, and cumulative logics", Artificial Intelligence, 44 (1990), 167-207. For plausibility measures, see Nir Friedman and Joseph Y. Halpern, "Plausibility measures: A user's guide", Proceedings of the Eleventh Conference on Uncertainty in AI, 1995, 175-184, and N. Friedman and J. Halpern, "Plausibility measures and default reasoning", Journal of the ACM, 48 (1996), 1297-1304. As for the KLM properties: Reflexivity provides that Φ → Φ. Left logical equivalence is: If ⊢ Φ ⇔ Φ′, then from ⌐Φ → ψ¬ infer ⌐Φ′ → ψ¬. Right weakening is: If ⊢ ψ ⇒ ψ′, then from ⌐Φ → ψ¬ infer ⌐Φ → ψ′¬. Conjunction is: From ⌐Φ → ψ1¬ and ⌐Φ → ψ2¬ infer ⌐Φ → (ψ1 ∧ ψ2)¬. Disjunction is: From ⌐Φ1 → ψ¬ and ⌐Φ2 → ψ¬ infer ⌐(Φ1 ∨ Φ2) → ψ¬. Cautious monotonicity is: From ⌐Φ → ψ1¬ and ⌐Φ → ψ2¬ infer ⌐(Φ ∧ ψ2) → ψ1¬.
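The measure-theoretic idea just described can be put in a few lines. What follows is a toy rendering of mine, not Friedman and Halpern's own formalism: a max-based (possibility-style) measure, which is one very special case of a plausibility measure, together with a check of the distinguishing axiom and one common way, assumed here rather than given in the text, of reading a default "if Θ, then plausibly (Φ)" off such a measure.

```python
from itertools import chain, combinations

# Toy space: three "worlds" with illustrative plausibility scores (0 = utterly
# implausible). The measure of a set of worlds is the maximum score in it, so
# larger sets are never less plausible than their subsets, as the axiom demands.
scores = {"w1": 3, "w2": 2, "w3": 0}

def pl(event):
    """Plausibility of a set of worlds."""
    return max((scores[w] for w in event), default=0)

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# The distinguishing axiom: every set is at least as plausible as each of its subsets.
assert all(pl(set(sub)) <= pl(set(sup))
           for sup in subsets(scores) for sub in subsets(sup))

def default(theta, phi):
    """One common reading of 'if theta, then plausibly phi': the theta-worlds
    where phi holds are strictly more plausible than those where it fails."""
    return pl({w for w in scores if theta(w) and phi(w)}) > \
           pl({w for w in scores if theta(w) and not phi(w)})

# Burglary-style example: theta holds at w1 and w2, phi only at w1.
theta = lambda w: w in {"w1", "w2"}
phi   = lambda w: w == "w1"
print(default(theta, phi))   # True: the most plausible theta-worlds are phi-worlds
```

Even so, nothing in the sketch settles the question being pressed here: whether such a measure grounds a sufficiency conditional of the kind the conditionality of consequence thesis demands.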
On the other hand, one of the better treatments by computer scientists, Nir Friedman’s and Joseph Halpern’s theory of plausibility measures, is a low-structure generalization of probability. Plausibility is a partially ordered relation subject to a distinguishing axiom that says that a set of sentences must be at least as plausible as any of its subsets. Addition of two further axioms gives the so-called KLM properties for default logic. The first of this pair provides that if (i) A, B and C are pairwise disjoint sets, (ii) the plausibility of A ∪ B exceeds that of C, and (iii) the plausibility of A ∪ C exceeds that of B, then the plausibility of A alone exceeds the plausibility of B ∪ C. The other axiom stipulates that if A and B are both utterly implausible, so is A ∪ B. The KLM properties of a putative default conditional → are set by a reflexivity axiom, and the rules of left logical equivalence, right weakening, conjunction, disjunction and cautious monotonicity.40

40 The KLM properties are named after their proposers: S. Kraus, D. Lehmann and M. Magidor, “Nonmonotonic reasoning, preferential models, and cumulative logics”, Artificial Intelligence, 44 (1990), 167-207. For plausibility measures, see Nir Friedman and Joseph Y. Halpern, “Plausibility measures: A user’s guide”, Proceedings of the Eleventh Conference on Uncertainty in AI, 1995, 175-184, and N. Friedman and J. Halpern, “Plausibility measures and default reasoning”, Journal of the ACM 48 (1996), 1297-1304. As for the KLM properties: Reflexivity provides that Φ → Φ. Left logical equivalence is: If ⊢ Φ ⇔ Φ′, then from ⌜Φ → ψ⌝ infer ⌜Φ′ → ψ⌝. Right weakening is: If ⊢ ψ ⇒ ψ′, then from ⌜Φ → ψ⌝ infer ⌜Φ → ψ′⌝. Conjunction is: From ⌜Φ → ψ1⌝ and ⌜Φ → ψ2⌝ infer ⌜Φ → (ψ1 ∧ ψ2)⌝. Disjunction is: From ⌜Φ1 → ψ⌝ and ⌜Φ2 → ψ⌝ infer ⌜(Φ1 ∨ Φ2) → ψ⌝. Cautious monotonicity is: From ⌜Φ → ψ1⌝ and ⌜Φ → ψ2⌝ infer ⌜(Φ ∧ ψ2) → ψ1⌝.

An important feature of plausibility measures is that the spaces they measure are direct generalizations of probability spaces. This gives rise to a notion of conditional plausibility analogous to what is required for Bayesian networks. The question is whether conditional plausibility can do the grounding in the required way. Putting “pl” for “plausibly”, does ⌜Pl(Φ | Θ) = x⌝ ground a sufficiency conditional, ⌜If Θ then pl(Φ)⌝, when x is big enough? If so, wouldn’t plausibilistic consequence be back in business? Notwithstanding the weakness of some of the KLM properties,41 it would seem—against Hamblin—that it might.

41 For example, reflexivity fails intuitively; no proposition is a default consequence of itself. Moreover, if we allow that if ⊧ ψ then Φ ⊧ ψ, then we have the paradox of necessity for →, and likewise for right weakening.

In a good deal of the plausibility literature there is also a tendency to pragmaticize plausibility. This is fine with me. Sometimes “plausible” is a hedge. Sometimes its function is a matter of context, including who’s saying what to whom. Sometimes, in short, “That’s plausible” is a commitment-qualifier. But that is not what we are after at present. We are after a relation that qualifies Φ as a plausible consequence of Θ. Our quest is semantic. We were seeking a relation between Θ and Φ in virtue of which Θ would be sufficient for ⌜plausibly (Φ)⌝. If our search paid off, we could say that Φ is a plausibilistic consequence of Θ. Perhaps it is the same with those other consequence relations that pullulate in the stipulations of an ever-expanding literature. If so, there will be a relation between Θ and Φ in virtue of which if Θ then defeasibly (Φ), and a possibly different relation between Θ and Φ such that if Θ then presumably (Φ), and a relation such that if Θ then autoepistemically (Φ); and so on. This, it seems to me, is one of the most important open questions in modern logical theory.
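Before leaving the formal side of this, it may help to have one concrete instance of the plausibility measures described above to push against. The sketch below is only that: a max-based (“possibility”-style) measure over four invented worlds, with made-up weights and a made-up conditionalization; it is not Friedman and Halpern’s general framework. It does, though, satisfy the subset axiom and the first of the two KLM-yielding axioms, as the brute-force check confirms, and it lets the question about ⌜Pl(Φ | Θ) = x⌝ at least be posed in running code.

```python
from itertools import combinations

# One concrete instance, with invented weights, of the kind of measure described above:
# the plausibility of a set of worlds is the weight of its best member. This is only an
# illustration; Friedman and Halpern's plausibility measures are far more general.
weights = {"w1": 0.9, "w2": 0.6, "w3": 0.4, "w4": 0.1}
space = set(weights)

def pl(event):
    """Max-based plausibility: the weight of the most heavily weighted world in the event."""
    return max((weights[w] for w in event), default=0.0)

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

# Distinguishing axiom: a set is at least as plausible as any of its subsets.
assert all(pl(a) >= pl(b) for a in subsets(space) for b in subsets(a))

# First KLM-yielding axiom: for pairwise disjoint A, B, C, if Pl(A ∪ B) > Pl(C) and
# Pl(A ∪ C) > Pl(B), then Pl(A) > Pl(B ∪ C).
for a in subsets(space):
    for b in subsets(space - a):
        for c in subsets(space - a - b):
            if pl(a | b) > pl(c) and pl(a | c) > pl(b):
                assert pl(a) > pl(b | c)

print("Both axioms hold for this max-based measure.")

# One crude option (of several in the literature) for a conditional plausibility,
# so that "does Pl(Phi | Theta) >= n ground 'If Theta then pl(Phi)'?" can be posed concretely:
def cpl(event, given):
    return pl(event & given) / pl(given) if pl(given) else 0.0

print(cpl({"w1", "w2"}, {"w1", "w3"}))   # 1.0: the best w1-world survives the conditioning
```

Whether a high value of such a conditional plausibility really does ground a sufficiency conditional of form (3) is, of course, exactly the question the text leaves open.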
No one doubts that the logic and computer science literatures abound in consequence-purporting names—“defeasible consequence”, “default consequence”, “autoepistemic consequence”, and whatever else. No one doubts the names. But there is room to wonder about the putative nominata. There is room for doubt whether we have achieved a rescue of plausibilistic consequence. The same doubt—and more—applies to the like rescue of the others.

A last word about plausibility. If it exists, I haven’t much of a general idea of how to parse the relation between Θ and Φ in virtue of which

(3) If Θ, then plausibly (Φ)

is true. But, returning briefly to the burglary example, I have no hesitation in accepting as true

(4) If the door was left open and the side-window was smashed, then plausibly (there’s been a burglary).

That is, I have no hesitation in supposing that (4) meets the sufficiency requirement for conditionality. If this is so, then the queried relation between Θ and Φ exists, and nothing precludes our calling it plausibilistic consequence in this case:

(5) That there’s been a burglary is a plausibilistic consequence of the door’s having been left open and the side-window’s having been smashed.

But here is a point to give us pause. We see in the interplay between (4) and (5) that it is (4) that wears the trousers. All I mean by this is that for a reflectively competent speaker of English it is a more untutoredly accessible matter to determine whether (4) is true and—independently of that—a barely accessible matter to determine whether (5) is true. (Who, without tuition, knows what plausibilistic consequence is supposed to be?)

It is the same way, I should have thought, with presumptive consequence (after all, “presumably” is untutoredly accessible), but perhaps not with, say, nonmonotonic or autoepistemic consequence, whose cognate sentence-adverbs are not accessible without instruction.

Be that as it may, the literature on defeasibility and monotonicity, and their numerous variations and adaptations, is too big for anyone to read in even a generously proportioned lifetime. But surely, it will be said, there are manageable families of such logics, many well-known and some classics, in which the requisite consequence relations have long since been well catered for. How, then, can there be any question as to the existence and bona fides of, say, defeasible consequence and nonmonotonic consequence? I have two things to say about this. One (I repeat myself) is that calling something a defeasible consequence relation doesn’t make it the case that any consequence relation is actually it. The other is that if these purported logics shed light enough on “defeasibly” (etc.) to enable the grounding of sufficiency conditionals of the form ⌜If Θ, then defeasibly (etc.) Φ⌝, then the logics of defeasible (etc.) consequence are fully deserving of the name (each time).42

42 I myself think that the K-consequence principle is false, a minority claim I develop in Seductions and Shortcuts. As is argued there, there are types of disciplined and assessable reasoning for which no distinctive consequence relation is definable. Saying why is a long story, much beyond what we have space for here.
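Here, purely for illustration, is a miniature of the kind of behaviour any “defeasible consequence” would have to exhibit. The rule set, the “most specific antecedent wins” policy, and the movers-scheduled defeater are invented stand-ins, not a reconstruction of any particular default logic. The point is only that the burglary conclusion of (4) is overturned when a further premiss is added, which is exactly the non-monotonic behaviour that separates these conditionals from classical consequence.

```python
# A wholly invented toy: conclusions are drawn from whichever default rule has the
# most specific antecedent satisfied by the premisses. A stand-in, not a
# reconstruction of any particular default or defeasibility logic.
defaults = [
    ({"door_open", "window_smashed"}, "burglary"),
    ({"door_open", "window_smashed", "movers_scheduled"}, "no_burglary"),
]

def defeasibly_follows(premisses):
    """Return the conclusion of the most specific default whose antecedent is met, if any."""
    live = [(ante, concl) for ante, concl in defaults if ante <= premisses]
    return max(live, key=lambda rule: len(rule[0]))[1] if live else None

print(defeasibly_follows({"door_open", "window_smashed"}))
# 'burglary': the conclusion licensed by (4)

print(defeasibly_follows({"door_open", "window_smashed", "movers_scheduled"}))
# 'no_burglary': the same premisses plus one more, and the earlier conclusion is defeated
```

Cautious monotonicity, from footnote 40, is a constraint on how much of this defeating is allowed: premisses that are themselves default consequences of Φ may be added without defeating anything, but arbitrary additions, like the one above, may not be.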
8. Conclusion

Hamblin is a hard-head about induction. He has difficulty in seeing how inductive arguments can actually be arguments. He notices that you can’t get an unqualified sufficiency conditional from the fact that the conditional probability of Φ on Θ is very high. He makes too much of this. He thinks that a logic of induction can’t be got from a logic of probability. He thinks that a logic of induction can’t be got at all. This makes for a sweeping scepticism. Since there is no logic for induction, there is no logic for the inductive fallacies. A fallacy is an inapparently bad argument; but since it is not apparent that there are any inductive arguments, it is not apparent that there are any inductive fallacies. Sweeping as it is, how could this scepticism not also carry in its path the dialogue logics that have proliferated these past four decades? The short answer is: It depends on the nature of the arguments that a logic purports to model. If it models discourse in which inductive reasoning occurs, then it is not a logic. At least, it is not a logic by the largely undeveloped lights of Hamblin’s chapters 1 and 7. It is an odd conclusion. It makes one want to go through chapters 7 and 8 with a fine-tooth comb. It makes one think how long ago was the fateful year 1970.43

43 For helpful comments on earlier drafts I warmly thank Peter Bruza, Maurice Finocchiaro, David Hitchcock, Lorenzo Magnani, Fabio Paglieri, Alirio Rosales and Harvey Siegel. For stimulating discussions about nonmonotonic consequence relations (real or imagined) thanks also to Dov Gabbay, David Makinson and Mark Weinstein.