Dialogue games as dialogue models for interacting with, and via, computers

NICOLAS MAUDET, Université Paul Sabatier, Toulouse
DAVID MOORE, Leeds Metropolitan University

Abstract: The purpose of this paper is to discuss some ways in which dialectical models can be put to computational use. In particular, we consider means of facilitating human-computer debate, means of catering for a wider range of dialogue types than purely debate, and means of providing dialectical support for group dialogues. We also suggest how the computational use of dialectical theories may help to illuminate research issues in the field of dialectic itself.

[...] G2 is to be playing G12, whose rules are comprised of the union of the two sets of rules, with priority to the rules of G2 in case of rule conflict. This is a similar approach to Walton and Krabbe's notion of "embedding dialogues of a certain type as subdialogues into a structure of some other type" (Walton and Krabbe 1995, p. 82, cf. Reed 1998). As in the Walton and Krabbe analysis, our model is in principle capable of allowing for a number of shifts, to form a "cascading effect" (Walton and Krabbe 1995, p. 106).

An important consequence of the model is to impose an appropriate structure upon the "gameboard" representation of the on-going dialogue. Ginzburg (1997), for example, introduces a partially ordered set of "questions under discussion", and Gordon's pleading games require "open", "conceded" or "denied" statements (Gordon 1994). As a generalization of these concepts, we propose the notion of "Games Under Discussion" (GUD), i.e. the games currently open in the dialogue. For the sake of simplicity, we can imagine the GUD as a stack (the current game being the top element). However, richer structures such as trees or partially ordered sets can also be used.

A related issue is to describe how games are established in the course of dialogue, i.e. how they are added to the GUD. This requires a means of allowing the bidding of games and of dealing with such bids. Our starting position is to include explicit dialogue moves to propose/accept the entry into, or the exit from, a game. We thus obtain what might be seen as a "Meta-Dialogue Game", a 4-phase process, as follows:

(i) Entry proposal
(ii) Entry acceptance/refusal
(iii) Exit proposal
(iv) Exit acceptance/refusal

Obviously, requiring explicit moves for each phase would generate an excessive number of dialogue moves. To relax these constraints, we propose to adopt the notion of de facto commitment (see Mackenzie 1979) for the acceptance phase. A player, that is, will accept a game unless he explicitly rejects it. In other words, the proposal of a game includes the game in the GUD.

A particularly important game type is the basic game, so called because information exchange is seen as the ultimate function of dialogue (cf. Levinson 1979) and the basic game is designed to ensure the maximisation of information exchange and game-level co-operation. In the model players are always, therefore, committed to the basic game. Although our model caters for this by defining different levels of commitment (Maudet and Evrard 1998, Maudet 2000), we will consider for the sake of simplicity that once a game is open, both players accept their goal in the game (s-commitment) and attempt to play within the game's rules (r-commitment).
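As a rough illustration of how these pieces fit together, the sketch below realises the GUD as a simple stack with explicit game bids and de facto acceptance of entry. It is a minimal sketch under stated assumptions: the names Game, GUD, bid, refuse and exit_game are illustrative choices, not part of any existing implementation.

```python
# Illustrative sketch (not the authors' implementation): a "Games Under
# Discussion" (GUD) stack with de facto acceptance of game bids.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Game:
    game_type: str   # e.g. "question", "argumentation", "basic"
    topic: str       # the proposition the game is about

@dataclass
class GUD:
    stack: List[Game] = field(default_factory=list)

    def bid(self, game: Game) -> None:
        # De facto commitment: proposing a game places it in the GUD
        # immediately; it stays there unless explicitly refused.
        self.stack.append(game)

    def refuse(self, game: Game) -> None:
        # Explicit refusal removes the bid game again.
        if game in self.stack:
            self.stack.remove(game)

    def exit_game(self, accepted: bool) -> None:
        # An accepted exit proposal pops the current game.
        if accepted and self.stack:
            self.stack.pop()

    def current(self) -> Game:
        # The current game is the top element of the stack.
        return self.stack[-1]

# Example: a question game is opened, an argumentation game is later bid
# inside it (de facto accepted), and is eventually closed again.
gud = GUD()
gud.bid(Game("question", "p"))
gud.bid(Game("argumentation", "s -> p"))
gud.exit_game(accepted=True)
print([g.game_type for g in gud.stack])   # ['question']
```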
We are now in a position to give an illustration of these ideas through our concert example. We use the following abbreviations: p = 'having a chance to get a place', s = 'holding a season ticket', e = 'the concert is an extra concert'. The definitions of moves and their update consequences for the commitment stores (CSs) are in line with our amended DC model. The moves also have consequences for the GUD structure, as set out in the table below (a '?' marks a game that has been bid but not yet explicitly accepted):

Move | Moves (A) | CSa | GUD | CSb | Moves (B)
1 | question(p) | | [Question(p)] | |
2 | | s→p | [Question(p)] | s→p | assert(s→p)
3 | retract(s→p) | | [Question(p)] [Argum(s→p)]? | s→p |
4 | | ¬(s→p) | [Question(p)] [Argum(s→p)]? | s→p | chall(¬(s→p))
5 | assert(e) | ¬(s→p), e, e→¬(s→p) | [Question(p)] [Argum(s→p)] | s→p, e, e→¬(s→p) |
6 | | ¬(s→p), e, e→¬(s→p) | [Question(p)] | ¬(s→p), e, e→¬(s→p) | retract(s→p)

The example illustrates how the argumentation game is embedded within the question game. The important steps are as follows. Move 1 establishes the collaborative question game (note that B could have merely refused the game: "I don't want to answer that question"; in this event, the question game would have been immediately removed from the GUD). During this game, a contradiction appears: move 3 bids an argumentation game, since it is not an expected move in the question game. Move 6 closes the argumentation game since B retracts his original thesis.
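A minimal sketch of the commitment-store bookkeeping behind the table may help to make the update consequences explicit. The rules encoded below are a simplified reading of the amended DC conventions used in the example (statements commit both parties de facto, a retraction removes the speaker's commitment, a challenge commits the challenged party to the challenged proposition, and a defending statement adds a grounding implication); the function names are illustrative, and the sketch deliberately omits some details, such as the appearance of ¬(s→p) in B's store after move 6.

```python
# Illustrative sketch of the commitment-store updates, not the authors'
# implementation; the rules are a simplified reading of the amended DC model.

def assert_move(speaker_cs, hearer_cs, prop, defends=None):
    """Statement: by de facto commitment both stores gain the proposition.
    If the statement defends a previously challenged proposition, both
    stores also gain the grounding implication prop -> defended."""
    for cs in (speaker_cs, hearer_cs):
        cs.add(prop)
        if defends is not None:
            cs.add(f"{prop} -> {defends}")

def retract_move(speaker_cs, prop):
    """Withdrawal: the proposition is removed from the speaker's store."""
    speaker_cs.discard(prop)

def challenge_move(hearer_cs, prop):
    """Challenge 'why prop?': the challenged party becomes committed to prop."""
    hearer_cs.add(prop)

cs_a, cs_b = set(), set()
assert_move(cs_b, cs_a, "s -> p")                     # move 2: B asserts s -> p
retract_move(cs_a, "s -> p")                          # move 3: A retracts it
challenge_move(cs_a, "not (s -> p)")                  # move 4: B challenges
assert_move(cs_a, cs_b, "e", defends="not (s -> p)")  # move 5: A defends with e
retract_move(cs_b, "s -> p")                          # move 6: B retracts his thesis
print(sorted(cs_a), sorted(cs_b))
```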
Let us now turn to the highly problematic issue of enabling a computer to operate as a dialogue participant in line with this model. First, we propose an additional, extra-game, level of strategy in the light of our extended model. The additional level of strategy is concerned with the issue of whether to retain or change the on-going game. Having made a strategic choice at this level, and assuming that any change of game is accepted by the dialogue partner(s), strategies specific to that game will be adopted (e.g. for a debate game the three-level strategy proposed by Moore and Hobbs (1996) will be adopted).

This additional level of strategy enables us to distinguish two kinds of computational behaviour:

• A reactive agent: the computer (C) merely adopts a stance with regard to the incoming game bids. C does not plan for or bid new games. Nevertheless, C recognizes games when they are bid by a partner and makes a strategic decision whether to accept that bid or attempt to continue the current game. The issue of bid recognition is, however, complex. It is tempting to propose that use by P1 at time T1 of a move type not catered for by the game type being played at T1 bids a new game type and forms the first move in a new game of that type. However, the proposal would have the consequence that any move can be made at any point, or at least that it will be impossible to distinguish between an illegal move in the current game and a game bid. Further, a move may appear to be a (legal) move in the current game type but in fact be intended as a new game bid. Given these complexities we currently insist, in our computational model, on explicit game bids. Walton and Krabbe (1995, p. 102) distinguish between "licit" shifts of dialogue, in some of which the different dialogues may be functionally related and hence "embedded", and illicit shifts of dialogue, often associated with fallacies. The formulation of strategic rules enabling C to distinguish between such shifts in real time is complex and forms the subject of current investigation.

• A deliberative agent: C has the ability to plan games. In other words, it attempts to bid and enter into games. A computational agent able to handle these structures may need to process mental attitudes such as intentions or desires, in addition to the "commitments" of our current prototypes. A straightforward option is to adopt the popular Belief-Desire-Intention (BDI) architecture, as also proposed by Burton and Brna (1996). The importance of intention in its relation to linguistic structure has been emphasized in Artificial Intelligence research by the influential work of Grosz (1977). However, we might be prudent not to overstate its importance in the current context. The conclusions of Grosz are highly dependent on the goal-oriented dialogues she considers. It has been widely shown that linguistic structure is not necessarily isomorphic with the intentional structure, at least when the latter is merely understood as the underlying task structure. For instance, in our concert example, the argumentation game was not "planned". Sub-dialogues tend, that is, to emerge during a dialogue rather than being planned for in advance. This, we believe, complicates the task of building a deliberative agent. Indeed, Dahlback (1997) claims that depending on the type of dialogue considered, the "dialogue-task" distance can vary greatly. We argue that in our current context this distance is rather large. Consequently, our current work focuses on the issue of a reactive computational player, leaving the deliberative approach for future work.

This section, then, has considered means of enhancing our debating system by catering for types of dialogue beyond debate. A complementary means of enhancement is to integrate the dialogue game system within a multimedia environment. This possibility will now be discussed.

4. Multimedia enhancements

It is clear that multimedia has much to offer education (e.g. Boyle 1997, Bagui 1998, Stoney and Oliver 1998). Use of multimedia in an educational context, however, is prone to the danger mentioned earlier, that the teaching interaction will become unduly didactic. Laurillard, for example, argues that "too often the multimedia products on offer to education use the narrative mode, or unguided discovery, neither of which supports the learner well, nor exploits the capability of the medium" (Laurillard 1995, cf. Montgomery 1997, Retalis et al. 1996). Consequently, we are examining means of utilising the dialogue game framework discussed in previous sections within such a multimedia context. We suggest that educational multimedia systems and computer-based dialogue games can work to each other's mutual advantage in four ways (cf. Moore 2000).

First, multimedia may be used as an initial stimulus to encourage students to enter the dialogue game in the first place (cf. Moore and Hobbs 1996, p. 159). Footage of philosophers in discussion might engender a philosophical debate, for example. Extracts from a video documentary about abortion might set the scene for, and encourage participation in, a debate about whether and under what circumstances abortion should be allowed. This is an example of what Laurillard (1995) sees as the computer "[supporting] the learner in what is otherwise only possible through real-world experience", and as video offering "at least vicarious experience of the world".
Secondly, during the dialogue game per se, hypermedia principles can be used to enable the student to clarify points and lines of argument he does not understand, and to look up relevant facts about empirical matters. A prima facie weakness of our current DC-based framework is the restricted range of question types it allows. However, arranging for key points of C's dialogue contribution to be represented as hypermedia nodes, so that in effect the debate is suspended whilst points are clarified and empirical matters are pursued, in a manner similar to Walton's "interludes" in negotiation dialogue (Walton 1998), is expected to overcome this problem. The situation is analogous to that in Gordon's "Zeno" system (Gordon 1996), where hypertext links are seen as potentially able to "reduce the 'rigidity' of Zeno's formal logic". Suitable hypermedia links may also enable users to clarify their understanding of concepts used in the debate, such that they enter the "cognitive environment" (Tindale 1992) required for the dialogue game to work.

These two approaches to using multimedia within argumentation systems can be seen as use of the multimedia facility to enhance the service provided by the dialogue facility. Conversely, we can see the next approach as using the dialogue facility provided by dialogue games to enhance a standard multimedia presentation. For the approach here is to seek to cater for reflection by the student during and after presentation of material from a multimedia package. Hartley (1993) refers to the need for "interactive debriefs", as does Laurillard (1995), and the use of the dialogue game framework discussed in sections 2 and 3 above promises on-line provision of at least part of such a reflective debrief process.

The final way in which educational multimedia systems and the computer-based dialogue game framework can work to each other's mutual advantage we refer to as "full integration". Here C will use, where appropriate, a range of media as its contribution to the dialogue, so that C will, on its various turns, use text, audio, graphics, video, or some combination thereof, as its dialogue contribution. An important issue here will be how C will determine the content of this contribution. In a multimedia context this issue resolves, we argue, into two: deciding on the semantic content of the move, and deciding on the media to use to express the move. The former can be decided on the basis of the strategic considerations discussed earlier. Concerning the latter, generally applicable guidelines for choice of media in different circumstances are not yet known (Alty 1993). Indeed, it might well be that experimenting with different policies in the current context could in itself prove illuminating. As an interim measure, however, one might suggest favouring relatively information-rich media (e.g. video) whenever they are available; this would be in line with Maybury's "preference metrics" (Maybury 1993). Such an approach can readily be catered for by the Toulmin-based approach mentioned earlier. For all that is required is that individual nodes be modified to encapsulate, where appropriate, calls to the relevant selection of media output. The nodes would thus be acting not purely as receptacles of text, but also as what is referred to in the knowledge representation literature (e.g. Hopgood 2001) as "demons", i.e. invocations of procedural routines -- in the current context, invocations of routines for playing video clips, for example.
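By way of illustration, and under the assumption of a node structure and media routine of roughly the following shape (the names ArgumentNode, media_demon and play_video are illustrative placeholders, not the prototype's actual structures), a node might encapsulate a call to media output while still announcing a single proposition for commitment purposes:

```python
# Illustrative sketch: an argument node acting as a "demon" that triggers a
# media routine supplied by the domain author, then announces its proposition.
from typing import Callable, Optional

class ArgumentNode:
    def __init__(self, proposition: str,
                 media_demon: Optional[Callable[[], None]] = None):
        self.proposition = proposition   # textual propositional content
        self.media_demon = media_demon   # optional encapsulated media call

    def present(self) -> str:
        # When the node is used as a dialogue contribution, fire the demon
        # (e.g. play a video clip), then return the proposition so that the
        # commitment-store update remains unambiguous.
        if self.media_demon is not None:
            self.media_demon()
        return self.proposition

def play_video(clip: str) -> Callable[[], None]:
    # Stand-in for a call to the environment's video player.
    return lambda: print(f"[playing video clip: {clip}]")

# Hypothetical example content, loosely echoing the concert example.
node = ArgumentNode("Season ticket holders get no priority for extra concerts",
                    media_demon=play_video("extra_concert_interview.mpg"))
print(node.present())
```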
Arranging for appropriate demons to be encapsulated in the nodes would be the responsibility of the domain "author", just as the propositions are with the text-based version. In effect, then, the procedure involves the creation of argumentation-based hyperdocuments, along the lines of Schuler and Smith's (1993) "Author's Argumentation Assistant", and enriching the document with appropriate calls to varied media types.

Current work involves moving towards a full implementation of this proposed integration of our dialogue game framework with multimedia. The aim is to enhance the current text-based prototype such that C announces and then plays the appropriate multimedia material as its dialogue move. There are, however, several issues of theoretical and practical importance to be addressed.

The first concerns student input. It might be felt that the proposed multimedia arrangement would disadvantage S in that C has a range of types of media available at its disposal, whereas S can merely select from a given set of propositions. An interesting approach might be to allow S to select fragments of the computer's stored media content as his move; indeed this approach might substantially increase the flexibility of student input over the current text-based arrangement.

A related concern is that the issue of what propositional content is being put to the user may be complicated by the fact that C is making some of its dialogue contributions via multimedia. In the purely text-based system the position is clear: S becomes committed to whatever proposition is contained in the written message from the computer. This is, of course, a function of the dialogue game's commitment rules; in our current system its effect is displayed to the user via updates to the "commitment windows". In the multimedia environment, however, the richness of the output may make it less obvious what commitment store updates are required. Our current design caters for this by a simple announcement of the proposition the video footage (say) is taken to be putting forward - in effect "summing up" the propositional message of the video.

This leads, though, to a third issue, namely whether the multimedia context will necessitate changes to the dialogue model, for it may be that the strict record of commitments required by a dialogue game model is inappropriate in a multimedia environment. Similarly, the game's restriction to one proposition per turn may rule out some of the longer dialogue turns that might be warranted by the availability of differing media types.

Considerable work remains, then, to bring the proposed integration of our dialogue game framework with multimedia to full fruition. However, we believe that the approach discussed has the potential to engender advantages of both multimedia systems and educational debate, and thus promises a useful enhancement of our current dialectical system and hence major educational benefit. A further enhancement of the dialectical system would be to extend it to situations involving multiple participants. This will be considered next.

5. Computer Supported Collaborative Argumentation (CSCA)

Dialogue games, then, provide, we argue, a powerful means of modelling dialogue and allowing a computational agent to participate in dialogue with a user.
What, though, of dialogue involving groups of participants? This is an important issue in current computer science given the growing interest in computer supported collaborative learning (CSCL) in general (Hoadley 1999, Steeples et al. 1996) and CSCA in particular (Veerman et al. 1999).

Dialogue game systems have generally been studied within the context of two-participant dialogues, and there is some ambivalence in the literature about how the framework might be extended to cater for multiple participants. Walton (1989) suggests that a game will involve two players, and that although there may be games of dialogue with more than two participants, these players can be collected into two groups, one on each side. In effect, then, "one group is the proponent and the other the respondent" (p. 282). Other writers, on the other hand, are prepared to allow for varying numbers of (genuine) participants, e.g. Apostel (1982): "a discussion is the interaction between n participants. Monologues, dialogues, and polylogues may all constitute discussions" (p. 98, cf. Barth and Martens 1982, p. viii). Unfortunately, it is not made clear precisely how these varying numbers of participants can be catered for within dialogue games.

In the remainder of this paper, therefore, we propose an exploratory model extending dialogue games to multiple participants. We will restrict ourselves to Walton's conception of dialogue game playing by multiple participants as "teamwork". Given this, DG(T1, T2, game) is a game between two teams (T1 and T2), and teams are sets of players such that, for all players p, p in Ti and p in Tj implies i = j (a player cannot be in two teams).

The team-based approach, however, raises some important questions, in particular what it means to play in a team and how teams are formed. Wooldridge and Jennings (1999) have identified four stages in what they call the cooperative problem solving process: (i) recognition (identification of the potential for cooperation), (ii) team formation, (iii) plan formation and (iv) execution. Following their theory, the issue becomes how these stages can be put into practice computationally.

(i) Recognition. The recognition of the potential for cooperation by human participants in the dialogue we leave to the participants themselves. How a computational agent might encourage such identification, or identify the potential for itself to cooperate, is an issue for future research.

(ii) Team formation. This may depend on multiple contextual factors: some teams may be defined through social roles before the beginning of the conversation (e.g. teacher/students, policeman/witness), others may emerge from the dialogue. This possibility, however, involves a complex mechanism of team formation that we have not yet investigated. An interesting possibility is that when (at least) four players are in the global game, one might allow for parallel dialogue games (i.e. parallel discussions).

(iii) Plan formation. The issue here concerns how players play together in a team. The teams may hold an intra-team meeting before the dialogue proper, to clarify their views and to enable potential team members to establish whether there is enough "common ground" (cf. Rosenberg and Sillince 1999). A very deliberative team may even start the dialogue with a plan ("B will ask whether p", "C will ...") - a "shared plan" (Grosz and Kraus 1996).
In a CSCA context, however, it is intuitively unlikely that players will build a shared plan to defend the team's point of view. What is needed for CSCA, therefore, is a more reactive process. The difficulty arises, however, that this may give rise to discrepancies within the team. Consider the following example, where A is a teacher asking a question of two students B and C (we assume that every move is multi-addressed):

Step 1 (A): Must X go to jail?
Step 2 (B): X is a thief.
Step 3 (C): And thieves must go to jail.
Step 4 (B): No!

The problem here is delicate: on the one hand, players are autonomous and have their own point of view; on the other hand, they play in a team and must follow a global policy (for instance, there may be some crucial facts which cannot be denied within the team). The growing field of Multi-Agent Systems offers some important results and models of more flexible teamwork (see e.g. Tambe 1997, Jennings 1995). We draw on such work to make two proposals, currently being investigated.

One approach is to consider that step 4 above (B's "No!") opens a new game within the team. This leads to the following analysis, at the game level. A bids for a question dialogue game (step 1). B and C form a team to try to find out the answer, and steps 2 and 3 are understood in the context of a question dialogue game (more precisely an exam-question dialogue game, since the teacher is presumed to know the answer). But at step 3, C claims a fact that B does not believe: they enter into an argumentation game (nested in the question game) where the teams are different (step 4: DG(<B>, <C>, argumentation)). This analysis can be represented diagrammatically as follows:

Step 1: (A) → (B C)
Step 2: (A) ← (B C)
Step 3: (A) ← (B C)
Step 4: (A)   ((B) → (C))

Adopting this approach clearly entails dynamic team formation. This greatly complicates the issue of computational modelling of, and support for, such dialogues, given the difficulties of team formation discussed earlier.

In our second proposed approach we allow discrepancies within a team, and attempt to deal with this via differing levels of commitment. Specifically, we propose that each team gets a "collective" commitment store, a virtual list of commitments arising from the current dialogue game. How, though, should such team commitment stores be updated? The arrangement in the original DC system was "de facto commitment" -- a participant has to explicitly withdraw from his commitment store those statements of his interlocutor to which he is not prepared to commit (Mackenzie 1979). The situation is, we suggest, more complex where teams are involved. For different theses can be expressed within a team, and this raises the issue of how the CSs should be updated. In the current example, for instance, it is counter-intuitive that the teacher be committed to the claim "thieves go to jail", which would be the case were the DC mechanism to be applied. Here, therefore, we propose a procedural amendment, making use of the notion of "minimal consensus". The minimal consensus is the intersection of the CSs of the players of the team. The idea is that the teams (at the end of the turn) will be de facto committed only to this minimal consensus. Applied to our current example, this implies that, at step 4, the teacher (A) is committed to "X is a thief" (this claim is in the minimal consensus) but not to "thieves must go to jail".
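The minimal consensus itself is straightforward to compute: it is simply the intersection of the team members' commitment stores. The short sketch below (with illustrative function and variable names, not drawn from any existing system) shows the calculation for the jail example, where only "X is a thief" survives into the team's de facto commitments:

```python
# Illustrative sketch of team commitment via "minimal consensus": the team's
# de facto commitments at the end of a turn are the intersection of its
# members' individual commitment stores.
from typing import Dict, Set

def minimal_consensus(team_css: Dict[str, Set[str]]) -> Set[str]:
    """Intersection of the commitment stores of the players in a team."""
    stores = list(team_css.values())
    consensus = set(stores[0]) if stores else set()
    for cs in stores[1:]:
        consensus &= cs
    return consensus

# The jail example after step 4: B is committed to "X is a thief" only,
# C to both claims, so only "X is a thief" enters the team's commitments.
team_bc = {
    "B": {"X is a thief"},
    "C": {"X is a thief", "thieves must go to jail"},
}
print(minimal_consensus(team_bc))   # {'X is a thief'}
# The opposing party (the teacher A) is then de facto committed only to this
# minimal consensus, not to the contested claim "thieves must go to jail".
```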
This is a relatively simple way to extend the model, and we do not see it as unduly coercive in that if a team member is not prepared for a particular commitment he can leave the game or swap sides.

(iv) Execution. One would expect, of course, moves to be executed by the team members. A difficulty, however, concerns turn taking. A very attractive aspect of the dialogue game framework, particularly from a computational perspective, is the reduction of the turn-taking problem to the following equation: one move = one turn. In the context of a set of players, this definition may need to be refined. It may not be realistic to require, or even allow, one move for each player of the team. A possible solution could be to introduce an explicit turn-taking move, in line with Bunt's "dialogue control acts" (Bunt 1994, cf. Traum and Hinkelman 1992), so that players can hand over the turn when they want. Alternatively, it may be that a team is required to make one move only before handing over to the other team, and that the process of intra-team agreement can be modelled via recursive dialogue games within the teams. In this case, each team would have a spokesperson putting forward the team's moves.

A number of interesting and important issues remain to be solved, then, concerning computational modelling of group dialogues. We believe that such issues are well worth pursuing, not least because a dialogue game model capable of catering for group dialogues offers, we suggest, two major advantages to CSCA.

One major gain would be that, by providing a computationally tractable model of polylogue, it becomes possible for a computational agent to participate in the polylogue, together with two or more human participants. This is advantageous in a number of ways. CSCL work is often of a discursive nature (Simon 1997) and the ability of the computational agent to play a "devil's advocate" role is potentially of educational value (cf. Retalis et al. 1996), especially, perhaps, in contexts in which the human participants all agree but it is felt educationally advantageous for them to critically explore their shared view. Secondly, we are currently investigating the use of collaborative virtual environments (CVEs) for group discussion. A perception of "presence" is seen as crucial to such environments and we propose the use of computational agents as a means of enabling "presence in absence" (Gerhard and Moore 1999, Fabri et al. 1999) and thus allowing people to benefit from the discussions even when not directly participating themselves; a user's computational agent, that is, may be able to use its dialectical model of the dialogue to contribute to the dialogue on the user's behalf. A further advantage of computational participation is that it affords participants the possibility of their own "private" discussion with the agent. This might be used for rehearsal and practice prior to entering the group discussion (perhaps to resolve any "intra-agent conflict" (Amgoud and Maudet 2000)), and/or for reflection and analysis after a group discussion. The facility may be particularly useful for people reluctant to enter the group discussion or for people with a social disability such as autism which may restrict their participation (cf. Moore et al. 2000).
A final advantage of computational participation is that it would enable a number of computers to hold discussions with each other, and, given recent claims concerning the educational benefits of vicarious learning from the dialogue of others (Stenning et al. 1999), the resulting transcripts might make educationally valuable study material.

A second major benefit of dialogue games to CSCA is their ability to provide a regulatory framework for interactions within the collaborative discussion. Means of suitably controlling the evolving discussion are required (cf. Okamoto and Inaba 1997). And given that, as suggested earlier, dialogue games purport to be models of "fair and reasonable" dialogue, the case for their adoption as the regulatory framework seems clear. Thus Finkelstein and Fuks (1990), for example, use a dialogue games model as the basis for a system for providing automated support for groups collaborating on the development of software specifications (cf. Bouwer 1998, Burton et al. 1997).

In a computational context, the inability of the computer to understand natural language will, on the face of things at least, severely constrain this regulatory role. Our debating prototype, for example, operates on the basis of a predetermined (albeit expandable) set of propositions, and the Hartley and Hintze mediating system (Hartley and Hintze 1990) operates on strings. On the other hand, one can speculate that much of the difficulty in unregulated discussions concerns semantic shifts, and that this would be largely ruled out by the propositional logic of the dialogue games. Further, the models can provide a valuable service at the propositional logic level, for example by keeping track of commitments and pointing up inconsistencies and consequences of extant positions.

A further issue concerns the computer's role in the dialogue if teams are formed. For it might appear that the computer as an agent adds a third party to the debate and hence immediately destroys the two-team arrangement. We envisage, however, that the computer will maintain a watchful eye over the evolving dialogue and, in an educational setting, proffer advice to either team in the light of the reigning dialogue status. Even where teams do not form, a computational agent can play an important role. It may be the case in asynchronous computer conferences that propositions posed by one participant evoke no response (Hewitt and Teplovs 1999) and that discussion is therefore stymied. A computational agent could potentially provoke discussion in such circumstances by acting as devil's advocate or by asking for support for the dialogue contribution.

In this section, then, we have made some proposals for means of utilising dialogue games within group-based CSCA. Whilst the proposals are at this stage inevitably somewhat speculative, we believe there is a strong case for investigating them further. An important aspect of such an investigation concerns strategies for a computational agent in a group context. For example, the computer needs to decide which of the various "live" propositions to take issue with (assuming that disputation is itself a valid strategy at that stage) and when it would be appropriate to do so, i.e. when it is its "turn". Similarly, we have assumed so far that players are always committed to the same game. A suitable game adjustment process, for cases where this assumption does not hold, needs to be investigated.
An interesting avenue of study concerns how knowledge of the conventional rules of a particular game type may help to coordinate the games of the interlocutors. For instance, if A notices that B violates a rule of the game type he (A) thought they were in, a "meta-communication sub-dialogue" may appear, until the adjustment is achieved (i.e. a common game is adopted).

6. Concluding remarks - an interplay between informal logic and computational dialogue systems

We have outlined our work in applying dialectical theories developed within the field of informal logic to dialogue involving people and computers, i.e. "computational dialectics" (Gordon 1996). Many issues remain to be addressed, e.g. the development of suitable computational strategies, empirical investigation of the systems in use, and refinements to the dialogue game models to enable multimedia enhancements and group discussion. Fundamental to these issues, and to the research in general, is the development of suitable dialectical models. Given this, it is of interest to consider how the computational use of dialectical theories may help to illuminate research issues in the field of dialectic itself.

The crucial point, we argue, is that the computer environment can act as a test-bed in which the dialectical theories can be evaluated and refined. Walton (1998, p. 29) argues: "the formal systems of dialogue that have proliferated in recent times appear potentially useful, but they are not sharply enough focussed on the practical contexts of argument use that need to be studied in relation to the fallacies - they are too diffuse, too multiple and too abstract". And a computational test-bed is likely to provide a useful facility for rationalizing the proposed models and making them less abstract. One useful approach might be to allow two computer systems running a proposed model to engage in dialogue with each other, and to study the results. As Amgoud and Maudet (2000) point out: "conversation simulation between computational agents is more and more considered as an important means to get empirical results about dialogue structures and behaviors".

Further, some specific points of investigation can be suggested. One concerns the dialogue rules. For example, the ramifications of Walton's device of a "dark side commitment set" (Walton 1998, 1984, Walton and Krabbe 1995) can be investigated in a computational environment. Some of Walton's systems require that a commitment will move from the dark to the light side if a participant retracts the proposition in question. We have argued, however, that since the dark side is, ex hypothesi, unknown, there appears to be no way of distinguishing such moves from those involving propositions not on the dark side, and thus no way of propositions making their way across (Moore 1993). The practicality of the notion of dark side commitments, for example, could perhaps be definitively ruled upon in a computational context.

Similarly, computational use of Mackenzie's DC system (Mackenzie 1979) has already suggested issues concerning the dialogue rules, for example the issue of potentially being prevented by the rules (RREPSTAT) from answering a question in the desired way. Conversely, the DC model reveals weaknesses in our current computational model. In particular, the input arrangement of selecting from pre-set propositions prevents a challenge of any grounding implication acquired via previous defence moves (Trott 1999).
Whilst this can be catered for computationally, albeit at the cost of extra complications at the interface, it is an interesting example of difficulties imposed by the lack of computational understanding of the propositional semantics. Indeed, it may be that attempting to work within the confines of propositional logic will turn out to be revealing about what Walton (1989) sees as the contested ground between semantics and pragmatics.

As well as dialogue rules, crucial aspects of dialogue strategy may be illuminated by computational use. One interesting possibility is to study the utilisation of Walton's argumentation schemes (Walton 1996) as a component in a computational strategy. Amgoud and Maudet (2000) suggest "meta-preferences", such as "choose the smallest argument, in order to restrict the exposure to defeaters", as a means of driving the choice of arguments in a context of dialogue. Working through such strategies is both vital to computational use of the dialectical theories and facilitated by a computational environment. Further, such strategic considerations are, we suggest, of fundamental importance to the field of informal logic itself, in that for normative dialogue models to be of practical use in generating dialogues, suitable strategies for their use are vital.

Indeed, computational utilisation may be revealing with regard to the very notion of a "normative" dialogue model. Walton (1998, p. 155) suggests "the dialectical sequence of argumentation in a deliberation can be normatively represented as the opposition between ... two sides on how to solve the problem that is the issue of the deliberation". But what does "normatively represented" mean? On the face of it, it may seem to be an oxymoron. If we want to represent something we should not be normative about it, and if we are bringing in normative considerations we may not be representing the actuality. This is the view of "representation" which tends to be prevalent within the AI knowledge representation community (e.g. Bench-Capon 1990). On the other hand, it might be argued that, concerning dialogue at least, nothing can be represented without a normative stance (cf. Van Eemeren et al.'s (1993) notion of "normative pragmatics"). Normative representation involves representing a dialogue as it should be, against some ideal form; cf. Van Eemeren et al. (1993, p. 37): "a central problem for critical analysis is how to represent argumentative discourse in a way that is both relevant to the interests of normative analysis and faithful to the intentions and understanding of the ordinary actors who produce the discourse". Given this, computational realization may help to show the reasonableness and practicality of the representation, and hence the extent to which it can qualify as a "representation".

Similarly, Walton talks of "mixed dialogue", involving an overlap of types of dialogue (Walton 1998, p. 201). Again, these concepts might be illuminated in a computational context. For a computational model would seek to simplify matters by seeing extended dialogues as potentially consisting of a series of dialogue games (as discussed in section 3 above), which can be distinguished from each other by topic (different games of the same type), aim (different game types, same topic) or both (different game types and different topics) (Moore 1993).
Part of the computational challenge is, as we have seen, to derive strategies for bidding games and deciding on incoming bids (cf. also Maudet 2000). The computational analysis, though, would be revealing as to the need for, and the practicality of, mixed dialogue in Walton's sense. Conversely, it may be that the absence of a facility for dialogue overlap represents an impoverishment within the purported computational model.

Here as elsewhere, then, there seems to be scope for an interesting and fruitful interplay between research within informal logic on the dialogue models per se, and research on their computational utilisation. The hope is that this paper will play a part in facilitating such an interplay.

References

Aleven, V. & Ashley, K. D. (1994). An instructional environment for practising argumentation skills. Proceedings of the Twelfth National Conference on Artificial Intelligence, vol. 1, pp. 485-492, 31 July-4 Aug. 1994, Seattle, WA, USA.
Alty, J. L. (1993). Multimedia: We have the technology but do we have a methodology? In H. Maurer (ed.) Proceedings of the Ed-Media World Conference on Educational Multimedia and Hypermedia.
Amgoud, L. & Maudet, N. (2000). Vers un modèle de dialogue basé sur l'argumentation. 12e Congrès Francophone de Reconnaissance des Formes et Intelligence Artificielle (RFIA 2000), Paris, France, 1-3 February 2000.
Apostel, L. (1982). Towards a General Theory of Argumentation. In Barth and Martens (1982).
Bagui, S. (1998). Reasons for Increased Learning in Multimedia. Journal of Educational Multimedia and Hypermedia 7(1), pp. 3-18.
Baker, M. (1994). A Model for Negotiation in Teaching-Learning Dialogues. Journal of Artificial Intelligence in Education 5(2), pp. 199-254.
Barth, E. M. & Martens, J. L. (eds.) (1982). Argumentation: Approaches to Theory Formation. Amsterdam: John Benjamins.
Bench-Capon, T. J. M. (1990). Knowledge Representation: An Approach to Artificial Intelligence. Academic Press.
Bench-Capon, T. J. M. (1998). Specification and Implementation of Toulmin Dialogue Game. Proceedings of JURIX 98.
Bench-Capon, T. J. M., Leng, P. H. & Stanford, G. (1998). A Computer Supported Environment for the Teaching of Legal Argument. Journal of Information, Law and Technology (JILT), 1998(3); http://www.law.warwick.ac.uk/ii1t/98-3/capon.html.
Bench-Capon, T. J. M., Lowes, D. & McEnery, A. M. (1990). Using Toulmin's Argument Formalism to Explain Logic Programs. Proceedings Explanations Workshop V, Manchester.
Bench-Capon, T. J. M., Dunne, P. E. S. & Leng, P. H. (1991). Interacting with Knowledge Based Systems through Dialogue Games. Proceedings of the Eleventh International Conference, Expert Systems and their Applications, vol. I, Avignon, May 1991.
Bouwer, A. (1998). ArgueTrack: the Design of an Argumentative Dialogue Interface. 2nd International Workshop on Human-Computer Conversation, Bellagio, Italy, 13-15 July 1998.
Bouwer, A. (1999). ArgueTrack: Computer Support for Educational Argumentation. Poster presentation at AI-ED '99, the 9th International Conference on Artificial Intelligence in Education, Le Mans, 19-23 July 1999.
Boyle, T. (1997). Design For Multimedia Learning. London: Prentice Hall.
Bunt, H. C. (1994). Context and Dialogue Control. Think Quarterly 3(1), pp. 19-31.
Burton, M. & Brna, P. (1996). Clarissa: an exploration of collaboration through agent-based dialogue games. Proceedings of EuroAIED, Lisbon.
Burton, M., Brna, P. & Treasure-Jones, T. (1997). Splitting the Collaborative Atom: How to Support Learning about Collaboration. In B. du Boulay & R. Mizoguchi (Eds.) Artificial Intelligence in Education: Knowledge and Media in Learning Systems, pp. 135-142. Amsterdam: IOS.
Carbogim, D. V., Robertson, D. & Lee, J. (2000). Argument-based applications to knowledge engineering. The Knowledge Engineering Review 15(2), pp. 119-149.
Dahlback, N. (1997). Towards a dialogue taxonomy. In E. Maier, M. Mast & S. LuperFoy (Eds.) Dialogue Processing in Spoken Language Systems, Springer Verlag Lecture Notes in Artificial Intelligence 1236.
Fabri, M., Moore, D. J. & Hobbs, D. J. (1999). The Emotional Avatar: Nonverbal Communication between Inhabitants of Collaborative Virtual Environments. In Braffort et al. (Eds.) Gesture-Based Communication in Human-Computer Interaction, Springer Lecture Notes in Artificial Intelligence 1739.
Finch, I. (1998). Knowledge-Based Systems, Viewpoints and the World Wide Web. In Web-Based Knowledge Servers, IEE digest 981307, (June) pp. 8/1-8/4.
Finkelstein, A. & Fuks, H. (1990). Conversation Analysis and Specification. In N. Luff (ed.) Computers and Conversation. Academic Press.
Garrison, D. R. (1991). Critical Thinking and Adult Education: A Conceptual Model for Developing Critical Thinking in Adult Learners. International Journal of Lifelong Education 10(4), pp. 287-304.
Gerhard, M. & Moore, D. J. (1999). Agents for Networked Virtual Learning Environments. Proceedings of the 5th International Conference on Networking Entities, NETIES'99 - The Organisational Impact of Telematics, Danube University, Krems, Austria.
Ginzburg, J. (1997). On some semantic consequences of turn taking. Proceedings of the MunDial97 Workshop on Formal Semantics and Pragmatics of Dialogue, University of Munich.
Girle, R. A. (1986). Dialogue and Discourse. In G. Bishop & W. Van Lint (eds.), Proceedings of the Fourth Annual Computer Assisted Learning in Tertiary Education Conference, Adelaide 1986, distributed by Office of Continuing Education, University of Adelaide.
Gordon, T. (1994). The Pleadings Game: An Exercise in Computational Dialectics. Artificial Intelligence and Law 2(4), pp. 239-292.
Gordon, T. (1996). Computational Dialectics. In P. Hoschka (Ed.) Computers as Assistants: A New Generation of Support Systems. New Jersey: Lawrence Erlbaum Associates.
Grasso, F., Cawsey, A. & Jones, R. (2000). Dialectical argumentation to solve conflicts in advice giving: a case study in the promotion of healthy nutrition. International Journal of Human-Computer Studies 53, pp. 1077-1115.
Grosz, B. J. (1977). The representation and uses of focus in dialogue. PhD thesis, University of California, Berkeley.
Grosz, B. J. & Kraus, S. (1996). Collaborative plans for complex group action. Artificial Intelligence 86(2), pp. 269-357.
Hartley, J. R. (1993). Interacting with multimedia. University Computing 15, pp. 129-136.
Hartley, J. R. & Hintze, D. (1990). Dialogue and Learner Modelling. In S. A. Cheri (ed.) Student Model Acquisition in a Natural Laboratory (NATLAB), GEC DELTA Project D-1016 Final Report, Brussels.
Hatcher, D. (1999). Why Formal Logic is Essential for Critical Thinking. Informal Logic 19(1), pp. 77-89.
Hewitt, J. & Teplovs, C. (1999). An Analysis of Growth Patterns in Computer Conferencing Threads. In Hoadley (1999).
Hoadley, C. (Ed.) (1999). Computer Support for Collaborative Learning (CSCL '99), Stanford University, Palo Alto, December 11-15, 1999.
Hopgood, A. (2001). Intelligent Systems for Engineers and Scientists. London: CRC Press.
Jennings, N. R. (1995). Commitments and conventions: The foundation of coordination in Multi-Agent Systems. The Knowledge Engineering Review 8, pp. 223-250.
Jones, A. (1995). Constructivist Theories of Learning and IT. In N. Heap, R. Thomas, G. Einon, R. Mason & H. Mackay (Eds.) Information Technology and Society - A Reader. Sage Publications Ltd.
Lajoie, S. P., Greer, J. E., Munsie, S., Wilkie, T., Guerrera, C. & Aleong, P. (1995). Establishing an argumentation environment to foster scientific reasoning with BioWorld. In D. Jonassen & G. McCalla (eds.) International Conference on Computers in Education 1995, Proceedings of ICCE 95, 5-8 Dec. 1995, Singapore, pp. 89-96.
Laurillard, D. (1995). Multimedia and the changing experience of the learner. British Journal of Educational Technology 26(3), pp. 179-189.
Levinson, S. C. (1979). Activity types and language. Linguistics 17, pp. 365-399.
Mackenzie, J. D. (1979). Question-Begging in Non-Cumulative Systems. Journal of Philosophical Logic, pp. 117-133.
Mann, W. C. (1988). Dialogue Games: conventions of human interaction. Argumentation 2(4), pp. 511-532.
Maudet, N. (2000). Conversational co-operation through dialogue games. Doctoral Colloquium, COOP2000 - Fourth International Conference on the Design of Co-operative Systems, Sophia Antipolis, 23-26 May 2000.
Maudet, N. & Evrard, F. (1998). A generic framework for dialogue game implementation. Second Workshop on Formal Semantics and Pragmatics of Dialogue (TWLT 13), May 13-15, 1998, University of Twente, Enschede, The Netherlands, J. Hulstijn & A. Nijholt (Eds.).
Maybury, M. (1993). Planning Multimedia Explanations using Communicative Acts. In M. Maybury (ed.) Intelligent Multimedia Interfaces. AAAI Press/MIT Press.
Montgomery, M. (1997). Developing a Laurillardian CAL Design Method. In Educational Multimedia/Hypermedia and Telecommunications 1997, Proceedings of Ed-Media/Ed-Telecom 97 - World Conference on Educational Multimedia/Hypermedia, World Conference on Educational Telecommunications, Calgary, vol. I, pp. 1322-1323. AACE.
Moore, D. J. (1993). Dialogue Game Theory for Intelligent Tutoring Systems. Unpublished PhD dissertation, Leeds Metropolitan University, UK.
Moore, D. J. (2000). A Framework for Using Multimedia within Argumentation Systems. Journal of Educational Multimedia and Hypermedia 9(2), pp. 83-98.
Moore, D. J. & Hobbs, D. J. (1996). Computational use of philosophical dialogue theories. Informal Logic 18(2), pp. 131-163.
Moore, D. J., McGrath, P. & Thorpe, J. (2000). Computer aided learning for people with autism - a framework for research and development. In press for Innovations in Education and Training International 37(3).
Moyse, R. & Elsom-Cook, M. T. (Eds.) (1992). Knowledge Negotiation. Academic Press.
National Curriculum Council (1990a). Curriculum Guidance 7 - Environmental Education. NCC, York.
National Curriculum Council (1990b). Curriculum Guidance 8 - Education for Citizenship. NCC, York.
Okamoto, T. & Inaba, A. (1997). The Intelligent Discussion Co-ordinating System for CSCL Environment. In T. Muldner & T. C. Reeves (Eds.) Educational Multimedia/Hypermedia and Telecommunications 1997,
Proceedings of Ed-Media/Ed-Telecom 97 - World Conference on Educational Multimedia/Hypermedia, World Conference on Educational Telecommunications, Calgary, vol. II, pp. 794-799. AACE.
Perelman, C. & Olbrechts-Tyteca, L. (1969). The New Rhetoric: A Treatise on Argumentation. Notre Dame Press.
Pilkington, R. M. (1992). Intelligent Help: Communicating with Knowledge Based Systems. Paul Chapman Publishing Ltd.
Pilkington, R. M. (1998). Dialogue Games in Support of Qualitative Reasoning. Journal of Computer Assisted Learning 14, pp. 308-320.
Pilkington, R. M., Hartley, J. R., Hintze, D. & Moore, D. J. (1992). Learning to Argue and Arguing to Learn: An Interface for Computer-based Dialogue Games. Journal of Artificial Intelligence in Education 3(3), pp. 275-295.
Pilkington, R. M. & Mallen, C. (1996). Dialogue Games to Support Reasoning and Reflection in Diagnostic Tasks. In P. Brna, A. Paiva & J. Self (Eds.), Proceedings of the European Conference on Artificial Intelligence in Education, September 1996. Lisbon, Portugal: Fundação Calouste Gulbenkian.
Pilkington, R. & Parker-Jones, C. (1996). Interacting with Computer-Based Simulation: The Role of Dialogue. Computers & Education 27(1), pp. 1-14.
Prakken, H. (2000). On Dialogue Systems with Speech Acts, Arguments and Counterarguments. In Proceedings of JELIA 2000, The 7th European Workshop on Logic for Artificial Intelligence. Springer Lecture Notes in AI, Springer Verlag, Berlin, 2000.
Quignard, M. & Baker, M. (1997). Modelling Argumentation and Belief Revision in Agent Interactions. In Proceedings of the European Conference on Cognitive Science (ECCS97), Manchester (UK), March 1997.
Quinn, V. (1997). Critical Thinking in Young Minds. David Fulton Publishers.
Ravenscroft, A. (1999). Designing Argumentation for Conceptual Development. Computers and Learning Research Group (CALRG) Technical Report 184, Institute of Educational Technology, The Open University, Milton Keynes, UK MK7 6AA.
Ravenscroft, A. & Hartley, J. R. (1999). Learning as Knowledge Refinement: Designing a Dialectical Pedagogy for Conceptual Change. International Conference on Artificial Intelligence in Education 1999, Le Mans, France, 19-23 July 1999.
Reed, C. (1998). Dialogue Frames in Agent Communication. In Y. Demazeau (Ed.) Proceedings of the Third International Conference on Multi-Agent Systems, IEEE Press, 1998, pp. 246-253.
Retalis, S., Pain, H. & Haggith, M. (1996). Arguing with the Devil: Teaching in Controversial Domains. In C. Frasson, G. Gauthier & A. Lesgold (Eds.) Intelligent Tutoring Systems, Third International Conference, ITS'96, Montreal, Canada, June 12-14, 1996. Springer.
Rosenberg, D. & Sillince, J. A. A. (1999). Common Ground in Computer-supported Collaborative Argumentation. Paper presented at the Workshop on Computer-Supported Collaborative Argumentation for Learning Communities, CSCL '99, Stanford University, Palo Alto, December 11-15, 1999.
Schuler, W. & Smith, J. B. (1993). Author's Argumentation Assistant: A Hypertext-Based Authoring Tool for Argumentative Texts. In A. Rizk, N. Streitz & J. André (Eds.) Hypertext: Concepts, Systems and Applications. Cambridge University Press.
Self, J. (1992). Computational Viewpoints. In R. Moyse & M. T. Elsom-Cook (Eds.) Knowledge Negotiation. Academic Press.
Simon, J. A. (1997). Object Orientation in Discourse Structuring. In T. Muldner & T. C. Reeves (Eds.)
Educational Multimedia/Hypermedia and Telecommunications 1997, Proceedings of Ed-Media/Ed-Telecom 97 - World Conference on Educational Multimedia/Hypermedia, World Conference on Educational Telecommunications, Calgary, vol. II, pp. 984-989. AACE.
Steeples, C., Unsworth, C., Bryson, M., Goodyear, P., Riding, P., Fowell, S., Levy, P. & Duffy, C. (1996). Technological Support for Teaching and Learning: Computer-Mediated Communications in Higher Education. Computers and Education 26(1), pp. 71-80.
Stenning, K., McKendree, J., Lee, J. & Cox, R. (1999). Vicarious Learning From Educational Dialogue. In Hoadley (1999).
Stewart-Zerba, L. & Girle, R. (1993). Rules and Strategies in Dialogue Logic. Proceedings of the Sixth Australian Joint Conference on Artificial Intelligence, Melbourne 1993.
Stoney, S. & Oliver, R. (1998). Interactive Multimedia for Adult Learners: Can Learning be Fun? Journal of Interactive Learning Research 9(1), pp. 55-81.
Suthers, D., Weiner, A., Connelly, J. & Paolucci, M. (1995). Belvedere: Engaging Students in Critical Discussion of Science and Public Policy Issues. Proceedings of the 7th World Conference on Artificial Intelligence in Education (AI-Ed '95), Washington, D.C., 1995.
Tambe, M. (1997). Towards Flexible Teamwork. Journal of Artificial Intelligence Research 7, pp. 83-124.
Tindale, C. W. (1992). Audiences, Relevance and Cognitive Environments. Argumentation 6, pp. 177-188.
Traum, D. R. & Hinkelman, E. (1992). Conversation acts in task-oriented spoken dialogue. Computational Intelligence 8(3), pp. 575-599.
Trott, J. (1999). Developing a computer debating system with multimedia integration. Unpublished MSc dissertation, Leeds Metropolitan University.
Van Eemeren, F. H., Grootendorst, R., Jackson, S. & Jacobs, S. (1993). Reconstructing Argumentative Discourse. The University of Alabama Press.
Veerman, A. L., Andriessen, J. E. B. & Kanselaar, G. (1999). Collaborative Learning Through Computer-Mediated Argumentation. In Hoadley (1999).
Walton, D. N. (1984). Logical Dialogue Games and Fallacies. University Press of America.
Walton, D. N. (1985). New Directions in the Logic of Dialogue. Synthese 63, pp. 259-274.
Walton, D. (1989). Question-Reply Argumentation. Greenwood Press.
Walton, D. (1996). Argumentation Schemes for Presumptive Reasoning. Lawrence Erlbaum.
Walton, D. (1998). The New Dialectic: Conversational Contexts of Argument. University of Toronto Press.
Walton, D. & Krabbe, E. C. W. (1995). Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press.
Wooldridge, M. & Jennings, N. R. (1999). The Cooperative Problem Solving Process. Journal of Logic and Computation 9(4), pp. 563-592.

Nicolas Maudet
Institut de Recherche en Informatique de Toulouse
Université Paul Sabatier, Toulouse
31071 Toulouse Cedex
France
E: nicolas.maudet@enseeiht.fr

David Moore
School of Computing, Leeds Metropolitan University
Leeds LS6 3QS
United Kingdom
E: d.moor@lmu.ac.uk