Editorial (Alt-J, Volume 2 Number 2)

Mind synthesis

The President of the USA and I have something in common: we play the saxophone. And having briefly heard Clinton play, I would say we are at roughly the same acceptable, though unmistakably amateur, standard. Nevertheless, we are no doubt both so attuned to the sound of the saxophone that it would not be easy to fool either of us into believing that a real instrument is being played if in reality the sound had been electronically synthesized. Certain instruments are easier to synthesize than others. For instance, it is hard even for sensitive ears to distinguish between a genuine church organ and a church-organ sound coming out of a good synthesizer. This is partly because the sound quality of the organ is not dependent on the organist: a two-year-old child who presses an organ key achieves no better or worse sound quality with that key than the most gifted organist. On the other hand, a synthesized solo saxophone (like most synthesized instruments) tends to have a certain unnatural, mechanical quality which is difficult to pinpoint but which to the saxophonist immediately rings false. It was therefore with astonishment that I recently listened to two solo saxophone pieces, one played on a real tenor saxophone by a top-flight jazz saxophonist, the other the result of some keyboard input and clever programming by computer scientists who may well have had only a rudimentary knowledge of music and perhaps no playing skills whatsoever. I was quite unable to say which of the two was synthesized, even after repeated listenings. The fact is that the perceived perfection achieved by a professional saxophonist is not perfection at all but an amalgam of skill, flexible reactions and numerous little human errors; and provided that this skill, these reactions and these errors can be identified and quantified - clearly, they can - they can all be coded so that the result is perceived perfection.
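The claim that small human imperfections can be quantified and coded can be sketched in a few lines. What follows is a toy illustration only, not a description of the techniques the programmers actually used: it generates a mathematically pure sine tone and a "humanized" version of the same tone, with a slow random pitch drift and a slight loudness wobble of the kind a player's breath and embouchure introduce. The function name and all parameter values are invented for the illustration.

```python
# A minimal sketch: coded "human error" added to a synthesized tone.
# All names and parameter values here are illustrative assumptions.
import math
import random

def synth_tone(freq_hz, seconds, rate=8000, humanize=False, seed=0):
    """Return raw samples of a sine tone, optionally 'humanized'
    with a random-walk pitch drift and slight amplitude wobble."""
    rng = random.Random(seed)
    samples = []
    phase = 0.0
    drift = 0.0  # accumulated pitch deviation, in Hz
    for _ in range(int(seconds * rate)):
        if humanize:
            drift += rng.uniform(-0.02, 0.02)   # wandering intonation
            amp = 1.0 + rng.uniform(-0.03, 0.03)  # breathy loudness wobble
        else:
            amp = 1.0
        phase += 2 * math.pi * (freq_hz + drift) / rate
        samples.append(amp * math.sin(phase))
    return samples

clean = synth_tone(440.0, 0.5)                  # the sterile, "mechanical" tone
played = synth_tone(440.0, 0.5, humanize=True)  # the same tone with coded errors
```

The point of the sketch is only that once a deviation has been measured, it becomes one more number in the program; the fine-tuning the editorial describes is the patient accumulation of such numbers.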
In short, perceived perfection in instrumental synthesis seems to be merely a matter of how long a programmer is willing to spend on fine-tuning the program code. Can the same sort of thing be applied to the sole use of educational technology as opposed to using human teachers? The problem for educational technologists writing CAL programs is one of producing a level of flexible reaction and interaction equal to that which an experienced human teacher can offer. Is it thus only a matter of the amount of care and effort put into the programming? Can a brilliant, patient programmer, drawing on the expertise of brilliant human teachers, produce the software equivalent of a brilliant human teacher? If not, might this be possible in the foreseeable future? The answer to all these questions is - I would have thought obviously - No, except within certain categories of training software involving repetitive tasks and/or motor skills (and even then . . .). Of course, there is hardly an educationist who will not add, when theorizing about the use of technology, that a human teacher will always be necessary, but I often get the feeling when reading academic papers concerned with the theory of learning technology that remarks about the need for human intervention are there either as comforters or as safety nets to catch the crystal ball if it eventually has to be discarded as junk. Indeed, I detect among many academics writing about software for use in education a deeply held belief that technology will soon reach - in certain cases, has already reached - a stage at which the human teacher can be dispensed with.
Yet the number of potential variables in most teacher-learner situations is vastly greater than the number involved in accurately reproducing the sound of a musical instrument - so great, in fact, that even imagining the technology of 10, 20 or more years from now, it is hard if not impossible to envisage a teaching machine so advanced that it could entirely replace a human being for all but those learning situations where understanding concepts is of secondary importance. Medical diagnosis entirely by computer, pilotless passenger jets, robots that do the housework . . . all are conceivable within the lifetime of today's young people. But despite the uncanny accuracy of some of Arthur C. Clarke's predictions of thirty or so years ago in 2001: A Space Odyssey, the chances of our producing by the beginning of the 21st century (or even well beyond) a computer like HAL, able to react more or less like a human being, are infinitesimally small; and it would take a HAL to make human teachers completely redundant. It is always dangerous to predict the pace of technological progress, but the teacher-student relationship is potentially so complex that encompassing all its possible subtleties in a teaching machine seems as out of reach as a manned mission to the nearest star or accurately forecasting next week's weather. Two things flow from this. The first has become almost a cliché: human teachers who fear redundancy because of educational technology are needlessly concerned. In the 1960s, the language laboratory appeared to many language teachers as a threat to jobs, when in fact there was no more possibility of their being replaced by a tape recorder than there is now by a computer. The second is that we must always approach with a degree of healthy circumspection the value of educational software in the comparatively inflexible form it has now and will certainly have for a long time to come.
In the 1950s, the behaviourist psychologist B.F. Skinner believed that the teaching machine had such obvious advantages - infinite patience, consistent reaction to student input, the offer of self-paced learning, the fascination of youth with technology and so on - that any disadvantages were far outweighed, and this at a time when teaching machines were rudimentary and cumbersome affairs by modern standards. Skinner's ideas about programmed learning were later toppled from first place in the educational popularity league by Bruner and other advocates of discovery-based learning; but again, with the advent of hypertext and hypermedia encouraging browsing and guesswork, the by now more manageable teaching machine still fitted nicely into the scheme of things. Today, if anything, the pendulum is swinging back to the apparent security of the step-by-step approach, though neural networks look set to prolong the popularity of hypermedia as a discovery-based learning tool. The teaching machine will thus doubtless continue to be promoted, if only to relieve pressure on teachers faced with ever larger numbers of students. It is right to promote it, but we should also continually re-assess the extent to which it can successfully be used in the absence of a human teacher. In my view, it can very successfully be used, provided that two basic conditions are met. The first is that human teachers are fully aware of the dangers of non-intervention in the teaching environment. The second is that both teachers and students are fully aware of the limitations of CAL software, and therefore prepared to question computer-generated responses and assessments. A computer program is the result of decisions initially made by human beings, and it is human not only to err, but also to be so unimaginably complex that no machine yet designed can come anywhere near an acceptable synthesis of the way our minds work. 
Those of us involved in educational technology should therefore be blowing our own trumpets but not before having pushed some mutes into their bells. What I say here is not new. But I believe it is certainly worth repeating.

Gabriel Jacobs