Editorial

Technology and plagiarism

In many disciplines within higher education, there has been a steady move over the last decade or so away from traditional examinations at the end of courses. Such examinations are seen as inherently unfair, partly because only in rare circumstances can a single set of timed tests genuinely reflect the content of an entire course, and partly because factors extraneous to normal intellectual capabilities, such as a headache, may unexpectedly depress a student's mark. Modularization may go some way to easing educationists' anxieties on this score, but will not in itself completely dispel the perceived problems.

Other than dispensing with testing altogether (there are advocates of such an approach), there are only two ways of overcoming, or at least cushioning, the potentially unrepresentative effects of a final examination on which all, or a significant part, of the final mark depends. The first is to test in the traditional manner but at intervals throughout a course, with the consequent periodic examination results making up the final assessment, or counting towards it. The second way - which has recently gained considerable ground - is to introduce continuous assessment of work done outside the examination room (essays, dissertations, projects, assignments, group work and so forth), either as the sole set of criteria for the final mark or, again, as forming part of it.

The overriding arguments in favour of this second form of continuous assessment in undergraduate courses, and therefore the principal reasons for its increasing usage, are twofold. First, it keeps a moderate amount of steady pressure on students, as opposed to forcing them into occasional bursts of frantic revision (or frantic attempts to start more or less from scratch as an examination approaches). Secondly, it leads students into the basics of scholarship and research methodology. But it is not without its disadvantages.
For example, unless marks are kept secret until after the end of a course (which means limited feedback for the student), poor early results may demotivate. Furthermore, a choice of topics may encourage an adherence to areas for which at least a moderately good mark is likely, thus limiting that exploration which, many would maintain, should be at the very heart of education.

Continuous assessment by project also has, however, a potential downside beyond any strict pedagogical disadvantages, namely the temptation to plagiarize. There is of course a grey area between blatant plagiarism on the one hand, and the inclusion of a certain amount of unacknowledged source material on the other. I am sure that many students who write dissertations but who are not yet fully familiar with scholarly procedures fail to provide source references not out of an intention to cheat but simply because they have not yet learned how to handle the referencing process, a process which involves not only using appropriate mechanisms but also judgement about when to include and when to omit a reference (indeed, the reverse of missing references - over-referencing to the point of preventing a natural flow of the text - is almost as common in student assignments within certain disciplines). Nevertheless, as one moves out of the grey towards the black, there eventually comes an undeniable point at which an unacknowledged source represents sheer plagiarism.

ALT-J VOLUME 2 NUMBER 2

Identifying such plagiarism is necessarily a hit-and-miss affair. An examiner, however expert in a subject, cannot be expected to recognize every unacknowledged source. In the majority of subject-areas, a student has such an enormous range of sources available (large numbers of published articles in numerous journals, little-known specialist reports, electronic information culled from obscure databases) that no single examiner, or even a set of examiners, could possibly know the details of them all.
Including proper references to such material may be to the student's advantage, since finding things out for oneself and showing how one has done so is something most examiners are keen to reward, but the temptation to omit references and thus to pass ideas off as one's own is probably even more common among undergraduates than among researchers. After all, having an original idea is seen by most students - indeed most people - as even better than showing how good one is at winkling out the ideas of others, whatever the high moral ground taken in the rubrics and guidelines for assessed assignments.

Then there is plagiarism by copying the work of fellow students (difficult in the examination room but all too easy outside it), so-called 'syndicating' (sharing the workload on an assignment without the knowledge of the examiner), and even buying the work of professional scribes. While such professionals are no doubt few and far between, it is an open secret in places of higher education that written work on a wide variety of topics can be bought, off the shelf or even tailor-made. There is a small but worrying black market in essays, dissertations, and numerical analyses of all kinds. One can apparently order a First, an Upper Second or - if the disparity between one's former Third-Class work and the work one is about to submit would perhaps appear too great - a Lower Second. The chances are that if the required level is judiciously chosen and the work carried out to the right specifications, most examiners tackling a daunting pile of assignments, in many cases from mostly anonymous students, will be oblivious to the dishonesty lurking behind perhaps one or two of them.

Now, it goes almost without saying that student plagiarism in one form or another has always existed, but the advent of information in digital form has given the activity a new dimension.
Digital information can not only be copied very rapidly, but also does not degrade from one generation of copying to the next: revealing little errors do not creep in during the copying process as they can - and often do - when re-typing or copying by hand. Indeed, there is no foolproof way of detecting digital plagiarism. The plagiarism-detection tool briefly described by Benford et al. in this issue of ALT-J sounds fascinating, and clearly has some interesting potential for certain applications within certain subject-areas, but obviously can provide only an approximate pointer towards possible cheating. Other methods of detecting plagiarism, such as those sometimes employed in courts of law to prove authorship or otherwise, are of little or no use when it comes to numerical analyses, and in any event are far too cumbersome (and too contentious) to be used on a regular basis in places of higher education.

Nor is there any practical way of discouraging plagiarism other than by exhorting academic honesty or by giving an accompanying written or oral test under examination conditions, either to all students or to a random sample. There is much to recommend the use of such tests, since copying without understanding may lead to being subsequently caught out, and though actually proving plagiarism may be beyond examiners, the threat of a test will tend to deter all but the most desperate or foolhardy of students. But accompanying tests add to the marking load, to a certain extent detract from the very object of project work, and in any case may well do nothing to distinguish between original work on the one hand and, on the other, material which has been understood but which happens to have been plagiarized as well.
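To see why automated detection can give only an approximate pointer, consider the simplest family of techniques: measuring textual overlap between a submission and a candidate source. The sketch below (not the method of Benford et al., whose tool is not detailed here, but a generic illustration using word n-grams and Jaccard similarity) shows both the appeal and the limits of the approach - a high score flags a passage for human inspection, but proves nothing by itself, and a modest rewording depresses the score.

```python
# A crude n-gram overlap check: an approximate pointer, not proof of plagiarism.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a lower-cased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = ("Digital information does not degrade from one "
          "generation of copying to the next.")
submission = ("As is well known, digital information does not degrade "
              "from one generation of copying to the next.")

score = similarity(source, submission)
print(f"similarity: {score:.2f}")  # high overlap merits a look, nothing more
```

Note that a verbatim copy scores 1.0, while light paraphrase or translation from another language can drive the score towards zero without reducing the intellectual debt - which is precisely why such tools cannot settle the grey-area cases discussed above.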
Leaving aside the problem of catching out students who copy assignments (or parts of them) written by other students, or who buy assignment work from professionals, does it matter that material plagiarized from respectable sources is regurgitated as if it were original? Clearly, at present it does matter, since the perception of examinees that examiners are favourably disposed towards originality is no doubt accurate (and indeed inadvertently reinforced by teachers who warn students against doing no more than throwing back lecture notes at them). How, then, is one to deal with what the increasing use of computer technology in education will inevitably bring with it: ever-easier access to an abundance of source material in digital form? Without an accompanying test or viva voce examination, how does one judge an assignment which, one suspects, may be a compilation of chunks of unacknowledged material lifted from encyclopedias held on CD-ROM and data sets held in electronic libraries?

It may be that, having given students the tools in the form of learning technology with which to access and manipulate digital information, we need to re-think our examining criteria when it comes to project work, at least at the undergraduate level. It may be that in terms of marks we should no longer attempt to distinguish, as most of us have tended to do in the past, between an assignment which has involved original thought and one which is simply a montage of unaltered source material. Original thought, if one can be sure of recognizing it, must of course be rewarded, but should a student necessarily be penalized for having sourced material, then having presented it unmodified and even unacknowledged? By how much must it be altered and/or reorganized to demonstrate at least originality of form?
This latter question is far from new, but the use of learning technology makes it all the more pressing, since many computer-based learning tools can also be used to lift source material and deposit it in pristine condition into assignments. Some will argue that students have always been over-concerned with assessment by examiners, and that the kind of self-testing facilities often incorporated into learning-technology applications are what they should really be concentrating on. This may be a valid point from the standpoint of a teacher, but try selling it to a student whose future career or academic progress depends on gaining a required pass mark. Human nature being what it is, from the moment one grades assignments, and/or passes or fails them, the undoubted pedagogical value of self-assessment, however well appreciated, is bound to take something of a back seat in the mind of the average student.

One way or another, then, those of us involved in examining project work - and it looks as if we shall be increasing in number despite those who are currently forecasting a move back to tests and examinations - have to come to terms with the new problems of assessment highlighted by the general availability of computer technology. Perhaps the way forward is to turn a necessity into a virtue by thinking of term-time assessed assignments primarily as exercises in information-gathering, and therefore less as projects intended to demonstrate understanding and/or learning. But that is a perilous stance indeed ...

Gabriel Jacobs