Australian Journal of Educational Technology

Intelligent agents, Internet information and interface

James Meek
University of Wollongong

A review of intelligent software agents and their relevance to networked information, touching on some of their emerging potential and on interface considerations.

Coming Together?

The Internet: bustling, booming infrastructure. Holder of masses of information which is at one time both easy to access and hard to find. Agents: tireless software helpers with great promise in a variety of fields, including the retrieval and filtering of information for individual needs. Could these two work together to make a significant difference to future patterns of information gathering in research and education?

The Internet is part of the motivation for agents - it's going to be impossible, if it isn't already, for people to deal with the complexity of the online world. I'm convinced that the only solution is to have agents that help us to manage the complexity of information. I don't think designing better interfaces is going to do it. There will be so many different things going on, so much new information and software becoming available, we will need agents that are our alter egos; they will know what we are interested in, and monitor databases and parts of networks. (Pattie Maes of the MIT Media Laboratory, interviewed in Berkun, 1995.)

Software tools known as 'filters' and 'agents' are beginning to manage the informational onslaught for us. As long as we are conscious of their hazards and limitations, they'll serve us well until even more powerful navigating tools do the job. The computers that cause the problem will also solve the problem. (Danny Goodman in Goodman (1995), countering one of a list of 'common myths' about the Information Superhighway: that "I will be crushed under tons of information arriving via the Superhighway".)
Although they are still in their infancy, the promise of intelligent agents is an appealing one.

The intelligent agents of tomorrow will relieve users of the time-consuming and tedious searches through a massive, intricate and globally-dispersed web of electronic information. Agents will find, assemble and analyse information that users need to solve problems, become better informed and make intelligent decisions. (Roesler & Hawkins, 1994, pp. 20-24.)

What are some of the difficulties with network (Internet, Web) information? What are agents, and what are some of the issues they raise? What matters need consideration with regard to agent interfaces?

Should there be one or more agents? Should agents use facial expressions and what other means of personification? What is the best metaphor for interface agents? (Maes, 1994, p. 40.)

Internet information

A resource in flux: great potential

The Web (and the entire electronic network of which it represents just a part) provides a range of widely distributed and potentially valuable sources of information. It is also a volatile and incomplete organism which is fraught with duplication. Access to information can be frustrating when encountering messages about missing URLs and inaccessible or busy sites, or when communicating over a slow local network or a modem. A coherent centre and a complete index may also be missing from the resource, but its potential is clear to many, as the recent dramatic growth in interest in its uses, both recreational and academic, indicates.

Differing interfaces

The Internet is actually an amalgam of a number of different and connecting software technologies, with distinct interfaces to contend with. Many of these are less accessible to the 'non-technical' user than are Web pages. Agents could, and currently do, help with managing this difficulty, at least in so far as the location of resources is concerned.
In discussing the variety of publishing and information dissemination mechanisms accessible via the Internet, December (1994) lists electronic mail, telnet, FTP, Archie, Gopher, Veronica, hyperlinked Web pages, listservers and USENET discussion groups as samples of the possibilities encountered. Apart from the basic hardware and software infrastructure requirements, December concludes, "a primary barrier to this access involves user interface". Consequently, "creating a graphical interface to unify other communication services" with browser interfaces such as Mosaic is seen as a first challenge (December, 1994, p. 35). This suggestion, combined with his next thought that information should be formatted in such a way as to facilitate retrieval and display by a variety of means, suggests an opening-up of information for perusal by remote means. These means could include agent software in place of the individual browsers which are his main interest.

Indexing is needed, if nothing else

Maddux (1994, p. 39) indicates uncertainty that the Internet will be as revolutionary in education as some expect, partly due to lack of machinery in schools, but also because of problems in curriculum support and in locating resources of potential benefit. He points to an "overwhelming sense of information overload", reporting that "one of (his) students recently suggested that a first exposure to the Internet is almost like walking into the world's largest library and finding no catalogue or other inventory, no user instructions of any kind, no titles or author names on books, and no indices or tables of contents on anything". This lack of comprehensive indexing is a substantial problem to the 'directed', as distinct from the 'browsing', user.
Directed users know that there is information relevant to their purposes - hidden among the volumes of the ejournals, databases and papers currently multiplying rapidly 'out there' - they just have difficulties in finding it, or in sifting the valuable from among the multiple possibilities presented. "Networked computers have become the 'fishing poles' in that vast, seemingly unlimited ocean of virtual information sites", says Kawamoto (1994, pp. 44-45), before repeating the point that "there is still no real mega-indexing facility that streamlines the exploration, search and retrieval process". The Internet, it is observed, even "precludes the kind of systematic centralisation that might make navigation less cumbersome".

While the use of retrieval mechanisms like Gopher and WAIS may gradually become known to the novice, it is possible that agents could serve as a much more effective means of making accessible the information resident on both the Web and the entire network of which it is just a part. Indeed, intelligent software agents are already responsible for the creation of a number of indexes which are accessible to cyberspace explorers... though more confusion is bound to occur when the novice discovers there are some dozens of such indexes, the coverage and management of which is uncertain, and the redundancy between which is significant. Still greater problems will become apparent in dealing with the volume of response search engines can return, as is illustrated in Dawe & Baird (1995), where a 'Multi Threaded Query' returned literally hundreds of references. Clearly data of such a volume needs parsing and organising in some way, which is another activity in which agents may have a role.
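The parsing and organising step can be illustrated with a minimal sketch, not drawn from the article itself: an agent receiving hundreds of raw hits might first collapse trivially duplicated URLs and then group what survives by host. All names here (the `organise` function, the sample hits) are illustrative assumptions, not any real agent's API.

```python
from urllib.parse import urlparse
from collections import defaultdict

def normalise(url):
    """Lower-case the host and strip a trailing slash, so trivially
    different URLs for the same resource collapse to one key."""
    parts = urlparse(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def organise(results):
    """Deduplicate a list of (title, url) hits and group them by host."""
    seen = set()
    by_host = defaultdict(list)
    for title, url in results:
        key = normalise(url)
        if key in seen:
            continue  # already have this resource under another spelling
        seen.add(key)
        by_host[urlparse(url).netloc.lower()].append(title)
    return dict(by_host)

# Hypothetical hits as a multi-threaded query might return them.
hits = [
    ("Agents intro", "http://media.mit.edu/agents/"),
    ("Agents intro", "http://MEDIA.MIT.EDU/agents"),  # duplicate in disguise
    ("Web spiders", "http://cs.uh.edu/spiders"),
]
```

Even so simple a pass reduces the reader's burden from "hundreds of references" to a short, structured digest.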
Map-making and information overload

It is true, as Falk (1995) suggests in an interesting view of the Web as a phenomenon creating new communities, that such new communities are in the act of making their own maps to the information which they find relevant to their needs. These act as indexes too, though they suffer similar duplication to less specialised indexes: taking the case of Web pages devoted to intelligent software agents as an example, there are several easily found sites, and these do contain a significant level of overlap and cross-reference to one another.

The entire Web is a construct of hyperlinks. It therefore runs the risk of losing its users in hyperspace when the cognitive load associated with understanding the links made and the places visited gets too high. The kinds of mechanisms which Oren (1990) reiterates in reference to designing less vast hypermedia than the Web, like limiting links and giving clear visual cues about 'position' in the linked materials, are simply impossible to guarantee. Though some browsers provide pull-down lists of pages visited recently and change colours on links exercised in the last n days, these mechanisms are simply inadequate.

Agents to reduce the load?

The chaos and vibrancy may well be part of what attracts the recreational Web user to the environment, but the directed user needs to be protected from navigational complexities and from the potential overload created by both the volume of information possibilities and the duplication among them. Using agents to discover relevant information, to remove duplications and then to make initial assessments about the level of relevance of any resource, be it Web page or WAIS document, may well be the saviour of directed researchers. Agents could also function to insulate the researcher from the technicalities of a particular interface by retrieving required information and presenting it in a familiar form.
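The "initial assessment of relevance" mentioned above need not be sophisticated to be useful. A hedged sketch, assuming nothing beyond simple term matching of the kind contemporary search tools employ: score each retrieved document by the overlap between its words and the user's stated interests, then rank before presenting.

```python
def relevance(text, interests):
    """Fraction of the user's interest terms that appear in the text.
    A crude proxy for relevance, purely for illustration."""
    words = set(text.lower().split())
    wanted = set(w.lower() for w in interests)
    return len(words & wanted) / len(wanted)

# Hypothetical retrieved documents and a user interest profile.
docs = {
    "a": "intelligent software agents filter network information",
    "b": "recipes for pasta and garden weeds",
}
interests = ["agents", "information", "network"]
ranked = sorted(docs, key=lambda d: relevance(docs[d], interests),
                reverse=True)
```

The researcher then sees document "a" first and may never need to open document "b" at all, which is precisely the insulation from overload the text describes.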
They could also generate significant efficiencies, especially given their potential to schedule activities, including follow-up scanning for new and changed information, independently of their user.

Software Agents

A good introduction to 'intelligent' agents, which gives some indication of the breadth of applications to which the idea could be applied, is provided in Roesler & Hawkins (1994, p. 19):

Intelligent agents are autonomous and adaptive computer programs operating within software environments such as operating systems, databases or computer networks. Intelligent agents help their users with routine computer tasks, while still accommodating individual habits. This technology combines artificial intelligence (reasoning, planning, natural language processing, etc.) and system development techniques (object-oriented programming, scripting languages, human-machine interface, distributed processing, etc.) to produce a new generation of software that can, based on user preferences, perform tasks for users.

Additional to such a definition could be two contributions from Harmon (1995, p. 2): the suggestions that the generic agent stands between user and application, but does not necessarily prevent the user from using the application or process directly, and that agents are appropriate for repetitive tasks which are performed differently by different people.

Although agents are currently in their infancy, their advent is seen by many to be a significant step in the evolution of computing. Alan Kay considers that they are part of a 'third revolution' in computing, following upon the moves first to time-sharing computing and then to desktop computing employing the graphical user interface. Kay (1990) suggested that the next major advance in computing will be the widespread adoption of networked or distributed computing, and that this will be driven by agent-based interfaces.
More recent activity surrounding the Internet, and the development effort going into agent-based systems, may well reinforce the veracity of this view. Earlier, Kay (1984) pointed to the origin of agency in computing thus:

The idea of an agent originated with John McCarthy in the mid-1950s, and the term was coined by Oliver G. Selfridge a few years later, when they were both at the Massachusetts Institute of Technology. They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a 'soft robot' living and doing its business within the computer world. (Alan Kay quoted in Laurel, 1990, p. 359.)

This kind of idea was carried forward by Nicholas Negroponte, who is sometimes also credited with its origin, with elements emerging in his publication of The Architecture Machine: Toward a more human environment in 1970. Rather later, when Negroponte contributed his 'Hospital Corners' article to Laurel's (1990) discussions about future interfaces, including software agents and guides, he reworked the agent idea as a collective of software entities providing an alternative to the current direct manipulation model of computer interaction represented by the desktop metaphor:

But wouldn't you really prefer to run your home and office life with a gaggle of well-trained butlers (to answer the telephone), maids (to make the hospital corners), secretaries (to filter the world), accountants or brokers (to manage your money), and on some occasions, cooks, gardeners and chauffeurs when there were too many guests, weeds, or cars on the road? (Negroponte, 1990, p. 352.)
Publication of Laurel's The Art of Human-Computer Interface Design, following such developments as HyperCard and the distribution of Apple's agent vision in the 'Knowledge Navigator' video, also coincided with advances in artificial intelligence techniques and desktop computing power. All of these together added impetus and further inspiration to the development of agents in computing, as, most recently, has the dramatic advance in networking technology and its adoption in the guise of popular access to the World Wide Web.

Today, in Being Digital, an exploration of the realm of the 'Information Superhighway', Negroponte again reworks the agent concept, talking of 'digital butlers', 'personal filters' and even 'digital sisters-in-law' to help in choosing which movie to see. He envisions a range of agents working together to create an 'intelligent interface' with which the user can converse, or which can anticipate the user's needs from knowledge it holds (and builds) about him or her. This is an interface which "will be rooted in delegation, not in the vernacular of direct manipulation" (Negroponte, 1995, p. 101).

General characteristics of 'intelligent' agents

The following provides a useful list, sourced in Roesler & Hawkins (1994, pp. 20-24), of the kinds of characteristics which might be expected of the more capable agents. "Software does not necessarily need to have all these qualities to be classified as an intelligent agent.
On the other hand, it is probably reasonable to say that the intelligence level of agents can be correlated to the degree to which they implement these properties":

• Autonomous agency - the ability to handle user-defined tasks independent of the user and often without the user's guidance or presence
• Adaptive behaviour - the ability to mimic the user's steps when normally performing a task
• Mobility capability - the ability to traverse computer networks, carrying actions for remote execution
• Cooperative behaviour - the ability to engage in complex patterns of two-way communication with users and other agents
• Reasoning capability - the ability to operate in a decision-making capacity in complex, changing conditions
• Anthropomorphic interface - the ability to exhibit human-like traits

Now what could an agent do?

If "a software agent is a computer program that functions as a 'cooperating personal assistant' to the user by performing tasks autonomously or semi-autonomously as delegated by the user" (Harmon, 1995, p. 2), what uses could such devices be put to?

Agents are a concept in software which various sources and orientations might define differently. For purposes of this paper, they are regarded as differing from other related interface mechanisms, like guides and wizards, in several ways. Figure 1 sketches just one set of possibilities, based on a very broad definition which does not necessarily differentiate a separate guide category. 'Agents', in terms of this discussion, tend to be found in the 'information' and 'work' categories.

Functions for agents?
• Information: navigation and browsing; information retrieval; sorting and organising; filtering
• Work: reminding; programming; scheduling; advising
• Learning: coaching; tutoring; providing help
• Entertainment: playing against; playing with; performing

Figure 1: Borrowed from Laurel (1990, p. 360).
Guides assist the user of a particular piece of software in its operation, or in constructing an understanding of its content by presenting differing viewpoints; wizards tend to function as experts in a particular domain, guiding the novice; whereas an agent is set a task or function and then left to perform it alone, sometimes with the agent even deriving its own tasks by observation of the user.

Another list of applications, found in Maes (1994, p. 31) and based on prototypes being developed at MIT, gives similar emphasis to Indermaur's (reported below) on the information gathering and filtering potential in agents. It also adds such items as mail management, meeting scheduling and the selection of books, music and movies to a list of areas where agents can help which is 'virtually limitless'. This is an area of vigorous activity which, like the development of Net resources, is holding the attention of people from a variety of disciplines, all aiming at designing "applications to be better surrogates while requiring less control over the environment in which these applications perform". Thus, what agents could be used for is an idea that varies in the eye of the beholder.

Agent classification schemes

A variety of schemes can be found to describe the kinds of agents which either currently exist or are the focus of development or research interest. Each scheme differs in emphasis according to the orientation of its creator, as would be expected in a field attracting interest from a range of areas including human-computer interface designers, commercial product developers and artificial intelligence specialists. In listing intelligent interfaces, adaptive interfaces, knowbots, knobots, softbots, userbots, taskbots, personal agents and network agents as just a few among the class 'agents', Reicken (1994a) betrays an interest rooted in the study of artificial intelligence and inter-machine communications.
Meanwhile, Laurel's (1990) discussions of agents and guides generally confined the 'guide' to a single application, as a means of communicating differing viewpoints or giving hints, and in so doing showed a human-computer interface orientation. From another point of view, one more closely related to how they do their work than to what they do, Harmon (1995, p. 4) identifies three types of agents:

• End-user programmable (or 'simple') agents;
• Knowledge-based systems (or 'smart') agents; and
• Self-learning (or 'intelligent') agents.

In the context of this review, terms are used so as to allow all software agents to be thought 'intelligent'. The perspective which Harmon provides, however, in examining agents' 'degree of intelligence' and 'mode of deployment' (desktop based, server based or distributed agent) is valuable. His assessment of how a variety of marketed agents sit on these axes is similarly useful for those with a deeper interest.

Indermaur (1995, p. 97), while acknowledging that agents are being developed today in a number of application areas, lists three major types - advisory agents, assistant agents and Internet agents - as well as identifying a subclass of communicating agents. He then goes on to lay stress on that part of the broad range which is "designed to filter and gather information from commercial data services and public domains like the Internet and to automate work flow"... a group of distinct interest to this discussion.

Under this scheme, advisory agents "offer instruction and advice to help you do your work". These 'learn' about you, your expertise and your interests, and adapt accordingly, at best anticipating your goals and presenting suggestions based on past actions. They do this by maintaining two models: one of the user and user behaviour, and another of subject matter or domain details. Assistant agents "can be more ambitious than advisory agents because they often act without direct feedback from users".
Examples of this kind of agent, like smart mailboxes and search engines, raise a number of issues in actually doing work for you. Indermaur reports Pattie Maes' suggestion that the two most important factors in the design of such agents are their competence and the level of trust extended to them. Competence concerns how an agent acquires knowledge and its sensitivity to its user's needs, while trust concerns whether users will feel comfortable in delegating tasks to an agent. These issues are explored in more detail in Maes (1994, pp. 31-32), where they are used to explain a preference for approaches to agent creation employing machine learning over others based on end-user programming and knowledge bases.

Indermaur's third grouping is the Internet agents, most of which are information gatherers, some of which attempt to make sense of the information they find on the Web. Examples of these are WebCrawlers, Spiders and various other software 'robots'.

Issues arising from agents

A number of concerns arise with the employment of agents. Predictably, the details vary with the type of agent being considered, and the following is just a sample to extend the coverage beyond problems already touched upon.

Indermaur's commentary on the 'assistant agent' category points out that it is important to have a mechanism whereby the balance between agent independence and intrusiveness can be manipulated. Another issue of import raised by such agents is that of responsibility for agent actions: if an agent can act more autonomously, who will take responsibility for its activities? Also arising in association with agents is the issue of privacy. If an agent 'knows' a lot about its employer, could that not pose problems when agents find they have to communicate with one another about their purposes and their owners?
Among other salient matters, like the tension between people wanting agents to do things they are not good at, but not wanting them to get too good at doing those things, this idea of inter-agent communication and a 'society of agents' is covered in an interesting interview with Marvin Minsky found in Reicken (1994b). Indermaur's 'Internet agents' raise other concerns regarding their behaviour in the networks which they roam, which are taken up in Eichmann (1994) and Markoff (1994). Markoff dramatises the concern thus:

Protoartificially intelligent creatures are already loose in the net, and in the future they will pose vexing ethical dilemmas that will challenge the very survival of cyberspace. (Markoff, 1994, p. 45.)

What he is concerned about is the load that uncontrolled 'robots', wandering the net, reviewing and harvesting its riches, place on the processors dispensing information through the Internet. David Eichmann has similar concerns about the impact of 'Web spiders', which he takes as far as to use as a motive for proposing a set of ethics for spider behaviour. In Eichmann (1994, p. 10) it is proposed that agents acting in the network for an individual user should adhere to the following guidelines, which are quoted verbatim:

• Identity - a user's agent's activities should be readily discernible and traceable back to its user.
• Moderation - the pace and frequency of information acquisition should be appropriate for the capacity of the server and the network connections lying between the agent and that server.
• Appropriateness - a user agent should pose the proper questions to the proper servers, relying upon service agents for support regarding global information and servers for support for local information.
• Vigilance - the user agent should not allow user requests to generate unanticipated consequences.
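The first two of Eichmann's guidelines, identity and moderation, lend themselves to a small code sketch. This is an illustrative assumption about how such an agent might be structured, not any actual spider implementation: the agent announces whom it acts for in a User-Agent-style request header, and it refuses to query any one server more often than a fixed interval allows.

```python
import time

class PoliteAgent:
    """Toy user agent honouring Eichmann-style identity and moderation.
    Class and header names are hypothetical."""

    def __init__(self, owner, min_interval=10.0, clock=time.monotonic):
        self.owner = owner                # identity: traceable to its user
        self.min_interval = min_interval  # moderation: seconds between hits
        self.clock = clock                # injectable for testing
        self.last_visit = {}              # host -> time of previous request

    def headers(self):
        """Identify the agent and its responsible user on every request."""
        return {"User-Agent":
                "PoliteAgent/0.1 (on behalf of %s)" % self.owner}

    def may_visit(self, host):
        """Grant permission only if enough time has passed since the last
        request to this host; record the visit when permission is given."""
        now = self.clock()
        last = self.last_visit.get(host)
        if last is not None and now - last < self.min_interval:
            return False
        self.last_visit[host] = now
        return True
```

Injecting the clock keeps the throttling logic testable without real waiting, and the per-host bookkeeping is what distinguishes moderation from a blanket crawl-speed cap.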
A similar set of ethics is suggested for those agents which gather information for the purpose of providing it in a generally available service, though it is clear that the basic concern is about whether the network infrastructure is capable of dealing with the volume of activity potentially generated. Employing the remote interface of an agent in preference to the direct interface of a Web browser could make a significantly greater impact on network servers.

Creativity/Humanity

Some apparently worry that agents will somehow undermine creative effort, if it eventually comes about that agents can 'understand' a material's meaning in terms other than correlating textual content with a query's content. 'Serendipity' and chance collisions of previously separate concepts sometimes create new ideas, and the thinking goes that somehow the programmed behaviour of agents could be counter to this activity. Boden (1994) takes the reverse position, finding potential assistance to creativity in agents being able to help by "suggesting, identifying, and even evaluating differences between familiar ideas and novel ones". Agents will be able to collaborate and compare 'ideas', and in any case, there will always be the potential for them to be set up to occasionally make random comments or suggestions to prompt human thinking. It seems very unlikely that human users will ever surrender their intellect to the agent, which is designed as a helper, not a replacement... but the connection made between creative process and agents is, nonetheless, a thought-provoking one.

Some are even more deeply troubled by the emergence of the software agent. Lanier (1995), putting an extreme position, considers intelligent agents "both wrong and evil". He suggests that in employing such mechanisms humans might be surrendering their humanity - "redefining themselves into lesser beings" - and altering their own psychology.
Such ideas are certainly worth considering, though it is difficult to imagine the sort of person who would abdicate responsibility so totally to what is, after all, merely a contrivance of machine and software: trust is one thing, surrender another.

Agent as virus?

One final concern, which has been indirectly felt in considering ethics and agents, is their potential to create effects similar to computer viruses. As a new breed of agents which actually 'leave' their home base and 'go places' comes into being, new challenges are faced. Telescript, the script language associated with the Magic Cap product and discussed in Davis (1994), is one product which is dealing with the potential threats of programs like itself. Such programs, which are actually transported across networks to operate on different hosts, are being designed to incorporate 'cyberspace passports' which carry their origin and authority. Telescript is also intentionally constrained by a vocabulary which disallows potentially dangerous functions, like the direct examination or modification of host system memory or file systems, for example.

Prospects

Even while it is clear there are legitimate concerns about agent behaviour, there is also great potential to be found in agents. Though they are relatively primitive at present, the development of agents will be significant in making far more accessible, current, comprehensive and simplified information available, based on the plethora available through electronic networks. If agents can be created which can gather and then reduce the mountain of information to its essential items, significant tedium will have been removed from the processes of research.

Interfaces for Agents

Agents can be valuable to, and carry the potential to add another dimension to, a human-computer interface. In some cases agents could even be considered to be the interface.
A case in point would be their shielding of the user from the underlying complexities of navigation through, and communication with, differing environments in the Internet, as discussed above. This act of protecting the user in gathering material, and then presenting summarised 'pictures' of it, can be seen as replacing the interfaces which would otherwise need to be faced.

The interface of an agent is in many ways no different to that which mediates communication between any computer-based artefact and its user, and is thus subject to the same sorts of constraints that are applied in many human-computer interaction design guidelines. Before looking at a selection of issues which some see as particularly pertinent to agents, it may be useful to review a set of general principles for machine design that seem to have application here.

Donald Norman (1988), in The Psychology of Everyday Things, provided a refreshingly practical and attractive set of ideas on design which can be applied to any interface, whether computer-based or otherwise. His ideas, in the context of building an agent interface and with the adoption of an appropriate metaphor as discussed below, can be significant in deciding whether an agent is effective and/or accepted in its role. Some of his central concepts which seem relevant to creating agents include: being aware of object affordances, or what the appearance of something implies about its utility; the importance of giving visibility to its functions; the power of making constraints clear and using users' in-built expectations to support them; the need for direct feedback and for providing evidence that the user has control of an object; and the great capital which can be made of human tendencies to build conceptual models of objects with which they interact.

Metaphor

Metaphor is a useful device in literature and computer interface alike.
It provides a mechanism whereby all manner of properties can be implied for an artefact by associating it with something else. This device, discussed by Cates (1994) in some detail for the purpose of applying it particularly to hypermedia, holds equal potential in regard to agents. Using a coherent set of cues, visual or aural, can be a key factor in effective human-computer communication, whether because the users understand 'intuitively' how better to work with an object, or because it helps them better to apprehend its value. Commonly, the agent is given expression in a human-like form, such as seen in the Apple Knowledge Navigator video or in products like those being developed by Pattie Maes' group at MIT. Human metaphors, like the 'assistant' casting often found in agents, are not, however, the only possibility. So long as what is used can be judged both appropriate and a coherent metaphor well implemented, it can assist communication.

Interacting with agents

Social concerns, mostly about people's need to feel in control of, and comfortable with, the mechanisms which they use, dominate the thoughts expressed in a more recent article by Norman - particularly given the fact that "some agents have the potential to form their own goals and intentions, to initiate actions on their own without explicit instruction or guidance" (Norman, 1994, p. 68). As might be anticipated from his previously mentioned guiding principles, Norman stresses the need for the interface with agents to provide reassurance to its user that the agent is technically reliable, and feedback that it is working according to plan. Furthermore, the outward face of the agent application should control expectations about its abilities. Norman is worried about the tendency to use anthropomorphic devices in the agent interface, as he feels that these could be interpreted as promises of performance which cannot be met by relatively primitive programs.
This betrays his basic wish that the 'system image' should accurately depict capabilities and actions, but probably underestimates the sophistication of the computer user. Privacy issues are considered in addition, with concerns being expressed about the potential for agents to exchange sensitive information about their users. Perhaps inter-agent interfaces will need further consideration with regard to this less technical matter. Finally, Norman raises concerns about the means by which agents are to be instructed or controlled. He expresses reservations about the practicality both of agents instructing themselves by 'watching' their users and of direct user programming of agents, suggesting that neither approach to communication is likely to be wholly satisfactory.

Norman highlights a further issue in interface which might sometimes be overlooked: the fact that there are a number of communications (and modes for them) possible in the user-agent interface. Some of these are explicit and some are implicit, but all are mediated through some form of interface. Instructions are given to the agent and responses are received from it. Instructions could be given directly by spoken word, through text input or through demonstration. They might, alternatively, be given implicitly, through the agent drawing conclusions based on user action, though in this case it could be said that the agent 'instructs' itself by 'observing' user behaviour. A 'conversation' might be necessary to refine unclear intentions and to ensure that goals are appropriate. Eventually an agent must report its findings to the user, and this could potentially be in one of several forms. Future interfaces are likely to be more complex than the current, mostly text-based, processes. They will also likely be rather more complex than a command and response model. These possibilities are worth keeping in mind.

Anthropomorphism

Interface agents radically change the style of human-computer interaction.
The user delegates a range of tasks to personalised agents that can act on the user's behalf. We have modelled an interface agent after the metaphor of a personal assistant. The agent gradually learns how to better assist the user by:

• Observing and imitating the user
• Receiving positive and negative feedback from the user
• Receiving explicit instructions from the user
• Asking other agents for advice. (Maes, 1994, p. 40)

Obviously Maes' team is working with interface metaphors which require a certain 'humanity' to be incorporated into them, but agent interfaces need not always be person-like. 'WebCrawlers', 'Knowbots' and the like offer examples of a potentially non-anthropomorphic nature: web search engines, for instance, commonly employ an approach to communication which has far more in common with doing a 'terms' search in a computerised library catalogue than with a discussion with a librarian... and this can be viewed as entirely appropriate and effective. Nonetheless, a tradition of human experts and assistants is a useful metaphor for communication, and people are capable of enjoying and interacting with the type of (exaggerated) character found in the Apple Knowledge Navigator video or the send-up of it found in Murie's (1993) CD. Laurel (1990, p. 358) suggests that anthropomorphic tendencies in an interface are acceptable providing there is no pretence that the agent figure actually is human. She feels that two distinctly anthropomorphic qualities are required of (and enjoyed by) computer users - responsiveness and a capacity to perform actions - and contends that these serve as the basis of the metaphor of agency. Similarly, Tognazzini (1992) suggests that designers should make no pretence that the computer is human, but instead should consider the creation of a character separate from, but within, the computer context which 'acts' as an agent.
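The learning loop Maes describes above (observation, explicit feedback and a confidence threshold before the agent acts) can be illustrated with a minimal sketch. This is purely illustrative: all names here are hypothetical, and the actual MIT systems use far richer, memory-based learning techniques than this simple tally.

```python
class AssistantAgent:
    """A toy 'personal assistant' agent that learns situation-action
    pairs by watching the user and accepting explicit feedback.
    Hypothetical sketch; not any real agent system's API."""

    def __init__(self, threshold=2):
        self.scores = {}           # (situation, action) -> confidence score
        self.threshold = threshold # confidence required before suggesting

    def observe(self, situation, action):
        # Learning by watching: record what the user did in a situation.
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0) + 1

    def feedback(self, situation, action, positive):
        # Explicit positive or negative feedback adjusts confidence.
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0) + (1 if positive else -1)

    def suggest(self, situation):
        # Offer the best-scoring action, but only once confident enough.
        candidates = [(score, action) for (sit, action), score
                      in self.scores.items() if sit == situation]
        if not candidates:
            return None
        score, action = max(candidates)
        return action if score >= self.threshold else None

agent = AssistantAgent()
agent.observe("new-mail-from-editor", "file-in-journal-folder")
agent.observe("new-mail-from-editor", "file-in-journal-folder")
print(agent.suggest("new-mail-from-editor"))  # file-in-journal-folder
```

The threshold plays the role of the 'do-it' confidence level Maes' agents build up: until the agent has seen enough consistent behaviour, it stays silent rather than intruding.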
User expectations of agent abilities, says Tognazzini, echoing Norman's idea, should be constrained. Further, the tasks which an agent is to perform should be limited to those it is conceivably capable of, which makes the form in which it is portrayed in software very important. Believability, a term not to be confused with realism, is the topic of an interesting paper by Bates (1994). It discusses agents in terms of the coherence of their expression and their need to be able to express their 'emotions' in order to be understood. When read alongside Maes' comments about the feedback which can be gained from visual representations of an agent's 'state of mind' (Maes, 1994, p. 36), this paper provides an interesting perspective. Both give some support to those seeking mechanisms and motivation to create agents which will be trusted by their users, and each uses agents based on cartoon forms.

Language is another field in which there are many agent interface possibilities to be explored. In future, agents may be required to talk or to 'understand' spoken language in different applications, though most computer interfaces continue to be text-based for now. Just how agents present the information they gather is another issue deserving attention. Several existing Internet search mechanisms, including Veronica, are able to rate numerically the 'relevance' of the articles being scanned against the set of criteria supplied to prompt the search. In future, information could be presented by agents which have first 'sub-contracted' its tailoring to individual needs by means of personal presentation engines or filters, as described by Bergeron (1994). Such engines might abbreviate or expand upon raw text according to the needs of the target user.
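The numeric 'relevance' rating mentioned above can be illustrated with a simple term-overlap score. This is a deliberate simplification under stated assumptions: the actual weightings used by tools such as Veronica are not specified here, and the corpus and query names below are invented for the example.

```python
def relevance(document_terms, query_terms):
    """Score a document by the fraction of query terms it contains.
    A deliberately naive term-overlap measure, for illustration only."""
    doc = {t.lower() for t in document_terms}
    query = {t.lower() for t in query_terms}
    if not query:
        return 0.0
    return len(doc & query) / len(query)

# Rank a small (hypothetical) corpus against a query, best match first.
corpus = {
    "agents-paper": ["intelligent", "agents", "interface", "design"],
    "web-guide":    ["world", "wide", "web", "browsing"],
}
query = ["intelligent", "agents"]
ranked = sorted(corpus, key=lambda name: relevance(corpus[name], query),
                reverse=True)
print(ranked)  # ['agents-paper', 'web-guide']
```

An agent could then present only documents above some relevance cut-off, or hand the ranked list to a presentation engine of the kind Bergeron describes for tailoring to the individual reader.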
Concluding questions

Of necessity, this discussion has largely avoided very detailed consideration of the individual devices of agent interfaces, preferring instead to look to a bigger picture which is also incomplete. Numerous questions arise in relation to agents, a sample of which is now offered as prompts to further and future investigations:

• What types of aural and visual cues and representations would make an agent more effective?
• What specific functions may be needed in agents?
• What actions/other actions might be imagined for agents?
• How could all of the above be synchronised with the underlying structures of the agent and of its setting and client?
• Could mind-maps be useful devices for manipulating user understanding of the agent, its behaviour or its domain?
• How else might agents access internalised cognitive structures of their users?
• To what extent would it be feasible to have agents adjust their communication to suit preferred styles among their users?
• Just how far should agents go in simplifying complexities for their users?
• How useful is it to talk in generic terms about agents, anyway?
• Is the specific context very significant to decisions on the mode through which communication between agent and user is mediated?
• To what extent can agent interfaces capitalise on existing interaction clichés, and to what extent might they require the development of new and distinct modes of communication?

Agents are an interesting area of development in computer software, and one in which expectations, particularly with regard to assisting people with managing the growing masses of networked information, are high. While many can see much utility in their advent, it remains to be seen whether the expectations being generated by agents will be fully delivered upon.

References

Bates, Joseph (1994). The role of emotion in believable agents. Communications of the ACM, 37(7), 122-125. (July, 1994.)
Bergeron, Bryan (1994).
Personalised data representation: Supporting the individual needs of knowledge workers. Journal of Educational Multimedia and Hypermedia, 3(1), 93-109.
Berkun, Scott (1995). Agent of change. Wired, 3(4), 116-117. (April, 1995. An interview with Pattie Maes.)
Cates, W. M. (1994). Designing hypermedia is hell: Metaphor's role in instructional design. 16th Annual Proceedings of AECT, 95-108. Ames, Iowa: Iowa State University.
Davis, Arnold (1994). The digital valet, or Jeeves goes online. Educom Review, 29(3), 44-46. (May/June, 1994.)
Dawe, Russell T. and Baird, Jeanette H. (1995). WWW, researchers and research services. Proceedings of AusWeb'95. Lismore, NSW: Southern Cross University. http://www.scu.edu.au/sponsored/ausweb/ausweb95/papers/sociology/dawe/
December, John (1994). Electronic publishing on the Internet: New traditions, new choices. Educational Technology, 34(6), 32-36. (September, 1994.)
Eichmann, David (1994). Ethical web agents. Second International World-Wide Web Conference: Mosaic and the Web, pp. 3-13. (Held in Chicago, Ill., Oct 18-20, 1994.)
Falk, Jim (1995). The meaning of the web. Proceedings of AusWeb'95. Lismore, NSW: Southern Cross University. http://www.scu.edu.au/sponsored/ausweb/ausweb95/papers/sociology/falk/
Goodman, Danny (1995). Living at light speed? Random House Electronic Publishing. (Extract found 'somewhere' on the Web, under title 'Myths'.)
Harmon, Paul (Ed.) (1995). Software agents. Intelligent Software Strategies, 11(1), 1-13. (January, 1995.)
Indermaur, Kurt (1995). Baby steps. Byte, 20(3), 97-104. (March, 1995.)
Kawamoto, Kevin (1994). Wired students: Computer-assisted research and education. Educational Technology, 34(6), 43-48. (September, 1994.)
Kay, Alan (1990). On the next revolution. Byte, 15(9), 241. (September, 1990.)
Lanier, Jaron (1995). Agents of alienation. Interactions, 11(3), 66-72. (July, 1995.)
Laurel, Brenda (1990).
Interface agents: Metaphors with character. In Laurel, B. (Ed.), The art of human-computer interface design, pp. 355-365. Reading, MA: Addison Wesley. (In general, as well as for the particular reference.)
Maddux, Cleborne D. (1994). The Internet: Educational prospects - and problems. Educational Technology, 34(6), 43-48. (September, 1994.)
Maes, Pattie (1994). Agents that reduce work and information overload. Communications of the ACM, 37(7), 31-40. (July, 1994.)
Markoff, John (1994). The fourth law of robotics. Educom Review, 29(2), 45-46. (March/April, 1994.)
Murie, M. (1993). Macintosh multimedia workshop. Carmel, Indiana: Hayden Books.
Negroponte, Nicholas (1970). The architecture machine: Toward a more human environment. Cambridge, MA: The MIT Press.
Negroponte, Nicholas (1990). Hospital corners. In Laurel, B. (Ed.), The art of human-computer interface design, pp. 347-353. Reading, MA: Addison Wesley.
Negroponte, N. (1995). Being digital. Rydalmere, NSW: Hodder and Stoughton.
Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
Norman, Donald A. (1994). How might people interact with agents. Communications of the ACM, 37(7), 68-71. (July, 1994.)
Oren, Tim (1990). Cognitive load in hypermedia: Designing for the exploratory learner. In Ambron, Sueann & Hooper, Kristina (Eds.), Learning with interactive multimedia, pp. 125-136. Redmond, Washington: Microsoft Press.
Reicken, Doug (1994a). Introduction to intelligent agents special issue. Communications of the ACM, 37(7), 20-21. (July, 1994.)
Reicken, Doug (1994b). A conversation with Marvin Minsky about agents. Communications of the ACM, 37(7), 23-29. (July, 1994.)
Roesler, M. and Hawkins, D. T. (1994). Intelligent agents: Software servants for an electronic information world (and more!). Online, 18(4), 18-32. (July, 1994.)
Tognazzini, Bruce (1992). Tog on interface. (Especially chapters 21, 22.) Reading, MA: Addison Wesley.

Please cite as: Meek, J. (1995).
Intelligent agents, Internet information and interface. Australian Journal of Educational Technology, 11(2), 75-90. http://www.ascilite.org.au/ajet/ajet11/meek.html