

Australasian Journal of
Educational Technology

2007, 23(3), 408-434

A Bridge too Far or a Bridge to the Future?
A case study in online assessment at
Charles Sturt University
Janet F. Buchan and Michael Swann
Charles Sturt University

The in house development of an online assessment tool, OASIS, has
provided a unique opportunity to research the use of online assessment in
teaching and learning across the university. The developing relationship
between IT staff, educational designers and academics serves as a model for
integrated and evolving management systems which demonstrate how
academic research is informing improvements in applying educational
technology. A framework, the Bridge Support Framework for Online Learning,
is proposed for the support and implementation of online learning systems.
A case study in online assessment in a microeconomics subject describes the
development of this framework in response to a ‘systems’ failure when
using the online assessment software tool during a major assessment event
for a large external student cohort. As the university moves towards an open
source learning management system for 2008, the range of online assessment
methods will increase. It is here that our case study and the proposed Bridge
Support Framework have potential value in learning from history to improve
processes and procedures for the future.

Introduction

At Charles Sturt University (CSU) the nature of in house software
development and cross-divisional support of online technologies has given
the authors a unique opportunity to contribute to the evolution of the
online assessment software itself, to play an integral role in the
development of associated support processes and, through research, to
gain an understanding of exactly how online multiple choice assessment is
being used in teaching and learning throughout the university. A study
supported by a CSU Scholarship in Teaching Fund grant is enabling the
researchers to investigate more deeply the effectiveness of this form of
assessment in enhancing learning and teaching.



To date CSU has had a limited number of mainstream online assessment
tools available to staff and students. These include online forums
(discussion boards) for collaborative discussion; online submission of
assignments (through the CSU developed Electronic Assessment Submission
Tracking System, EASTS) and online marking. On a smaller scale, individual
disciplines and subjects use a variety of online (or e-learning) technologies
such as interactive CDs and online courses to achieve specific learning
outcomes such as collaborative learning, critical reflection and problem
solving (Donnan, 2006; Sivapalan & Cregan, 2005; Kerka & Wonnacott,
2000). This discussion will confine its focus to certain aspects of our recent
experience with online multiple choice assessment. As our university
moves rapidly towards the implementation of an open source learning
management system (LMS) in 2008, this range of online assessment
methods will increase with the introduction of new tools such as wikis,
blogs, gradebooks and more integrated interactivity. Along with this
increase in availability of assessment tools (amongst other enhanced
features) comes the increased responsibility for supporting the
implementation and ongoing development of a university wide, open
source learning management system (Wheeler, 2007). It is here that our
case study and the proposed Bridge Support Framework have potential value,
learning from experience to improve processes and procedures for the
future.

The evolution of online assessment at CSU
CSU introduced online multiple choice assessment in 1998 with a pilot trial
of an in house developed quiz/test tool, OASIS (Online Assessment
Submission Information System) (Buchan & Pitcher, 2005). Since then, this
mode of assessment has been used successfully by staff and students (see
later) to enhance learning through a variety of largely formative tasks,
i.e. done for feedback rather than marks, and some summative assessment
tasks, i.e. done for both feedback and marks (Nichols, 2003). Following
substantial technical upgrades to the software in 2003 and 2005, increased
support from the software developers and the Centre for Enhancing
Learning and Teaching (CELT), along with the promotion of online
assessment through staff development, the use of this mode of assessment
has increased very rapidly over the last three years (Figures 1 and 2) and
extended to all faculties in a wide variety of disciplines. It is also being
trialled at a divisional level by Student Services and the Library (Fry, 2006).
As lecturers are discovering the benefits of online multiple choice
assessment, so they are beginning to push the boundaries; the boundaries
of online technology itself, as well as the boundaries of the appropriate
educational application of the technology (Northcote, 2002).



Figure 1: Growth in the number of individual OASIS test submissions per
month (January to December), 2003 to 2006; vertical axis 0 to 8,000
submissions. (Data includes multiple submissions by single users).

Figure 2: The usage of OASIS by comparison of subject groups (total per
year), distinct OASIS tests submitted per year, and distinct submitting
users per year, 2003 to 2006; vertical axis 0 to 4,000.

The increased uptake of this particular application (Figures 1 and 2) has
been due to the promotion by champions of this mode of assessment (Ely,
1999). These include academics and a nominated OASIS coordinator, who
is an educational designer working closely with software developers,
academics and other educational designers to promote the technology,
provide professional development, and to ground online assessment in
appropriate pedagogy. Academics research and share their experiences in
online assessment through the Foundations of University Learning and
Teaching program, the Tertiary Teaching Colloquium, and other forums
(Wilkinson & Dawson, 2000).

It is argued in this paper that the evolution of online assessment at CSU is
more than just a simple linear exercise in software development. It
encompasses and must necessarily integrate the simultaneous
development of a variety of key support processes, professional
development and associated research that collectively feed back into the
improvement processes to produce a more sustainable and reliable online
system (Buchan, 2004; Buchan, 2003).

Figure 3 provides a Venn diagram approach to illustrating the three major
components of the integrated online system operating across a multi-
campus university, contributing to the effective delivery of online learning
services.

Figure 3: Major components of an integrated support
system for online technologies

A: Division of Information Technology - providing and maintaining the IT
services and support staff.

B: Centre for Enhancing Learning and Teaching (CELT) - providing
Educational Designers and specialist technology coordinators

C: School based academic staff - mediating student assessment through
online systems such as OASIS



Figure 3 identifies the critical intersection of components A, B and C, which
must function optimally if large scale online assessment is to work. Our
case study explores the consequences of a failure in the online system and
is used to highlight the need for a fully integrated support system for
online technologies. The case study follows the assessment experiences of a
microeconomics subject offered in the School of Commerce at CSU. In 2006,
during a major summative assessment event, a technical failure caused
serious problems for the distance education cohort despite the careful
planning of several years, including successful formative assessment tasks
(Swann, 2004), the support from different divisions, and the preparation of
the microeconomics students for the single major assessment event. In the
ensuing reflection the question is posed: Is this ‘A bridge too far’ (Ryan,
1974) or a Bridge to the future?

The following extract from Michael Swann’s report (2006) sets the
background for the case study.

Summative Assessment Test – 6th June 2006

The (online assessment) OASIS Summative Test was set to commence at
precisely 1.45 pm on Tuesday 6th June AEST and to finish at precisely 3.00
pm. Overseas students from the UK (London) to Japan (Tokyo) had to set
their clocks by NSW Tuesday 6th June AEST even though this meant sitting
for their OASIS Test at for instance, 2.00 am or 4.00 am in their local time
zones. Brave souls!

Somewhere just after 1.45 pm as 171 [microeconomics]… students attempted
to simultaneously log-on and access their Summative Test online… the
OASIS Server at CSU-Albury campus sustained a critical overload failure…
Students in droves reported by email … a [series of problems]… after
attempting to log on to the test:

The immediate consequences of the server failure were certainly dramatic:

1. My office phone went into meltdown
2. The Microeconomics Forum was deluged with around 300 postings from
distressed DE students
3. The IT Help Desk received a tidal wave of calls from anxious, concerned
and distressed DE students
4. Student Services were also deluged with calls from stressed and
concerned DE students
5. The School of Commerce General Office phones rang continuously for an
hour and half.

By any measure this was a very disappointing exercise… especially with the
test counting for 20% of formal assessment in a B.Bus. foundation subject…
The angst and sense of frustration across the … DE student cohort was
considerable. (Swann, 2006)



This caused an immediate response from students - not only in NSW and
across Australia but also from international students from London to
Tokyo. Fortunately, the University’s student support systems, from the IT
Service Desk staff to the General Office staff in the School conducting the
online test, responded swiftly and efficiently. The immediate response was
to reschedule the online summative test for later in the same week; by
dividing the students into two smaller sub-cohorts, the test was run twice
with complete success. In terms of our Venn diagram in Figure 3, the
rescheduled test performed as planned because the critical DIT-CELT-
School interface worked efficiently to produce a successful online test - but
only in its rescheduled second attempt.

The technical reason for the systems failure was traced to a temporary
server overload: the server could not support the load placed on it when
the large cohort attempted simultaneous access (Sefton, 2006). The uptake
of this form of online assessment
was far greater than anticipated, and therefore supported, by the particular
server. Such a hardware failure is probably inexcusable today but this
incident highlights the critical importance of communication within the
integrated online support system illustrated in Figure 3, and the other
components of support for online learning environments as proposed in
our Bridge Support Framework below.

A framework for integrated support – A bridge to the future

In the first conversation after the 6 June 2006 systems failure, Michael
Swann made the ‘historic’ comment, “It is disappointing: what we wanted
was another D-Day success but what we got appears to be more like a
bridge too far”. We had planned as well as possible, and factors largely
beyond our immediate control were ultimately the undoing of the event on
the day. Using student surveys (Table 1), investigations into the technical
problems (Sefton, 2006) as well as detailed reports on these and other
online assessment events (Buchan 2006; Buchan, 2005; Swann, 2006; Swann,
2004) we have been able to reflect on and use this situation and other
instances of online assessment at CSU to improve our system and to build
a bridge to the future.

Although the case study dwells on a particular incident and IT
infrastructure failure, there is a myriad of other areas which can
significantly affect the outcomes of an online assessment event. The
authors introduce here their Bridge Support Framework for online learning.
Developed within the context of online assessment, the framework has a
more general application in mind and is intended for use in all areas of
technology enhanced learning. The word ‘framework’, rather than ‘model’,
has been carefully chosen because a framework per se is the structure one
uses to support something and on which one builds the final product. A
‘model’ provides an exact replica of something that others can copy, and in
the strict, scientific, application of a model it would be used to predict
outcomes and consequences of actions.

The initial framework was first defined in an internal discussion paper
dated June 2006 (Buchan, 2006) in response to the case study events,
incorporating one of the authors’ years of experience in developing online
assessment systems at CSU. A study of the literature has helped to refine
the framework, and at the time of final submission of this paper, the
authors have been able to review the success of the framework in the light
of the (successful) Autumn 2007 offering of the microeconomics subject in
our case study.

In comparison, the RIPPLES model for integrating instructional technology
into higher education (Surry, Ensminger & Haab, 2005; Surry, Ensminger &
Jones, 2003) addresses the barriers to integrating instructional technology
into education. It has been described as a “…useful tool for analysing
institutional innovations” (Benson & Palaskas, 2006). We found significant
similarities between our proposed initial framework and this model, in
particular the academic focus. Surry, Ensminger and Jones (2003, p.2) note
that their model “…is unique from most other technology integration
models in that it includes facilitating factors that are specific to academic
settings.” The RIPPLES model provides an institutional focus and more
depth to factors such as fiscal resources (budget) and infrastructure in its
broadest institutional sense. Moreover, a gap in our original framework
identified from the RIPPLES model is evaluation. Management of, and
support for, online learning at an institutional level is necessarily inter-
divisional and requires a systems approach (Camp et al., 2007; Uys, 2000).
Thus the overarching communication is critical to the success of online
learning. Ely (1990) in his seminal work cites eight conditions that appear
to facilitate the implementation of technology innovations and reasons for
adoption resulting in the diffusion of innovation. Our framework moves
beyond implementation and focuses on support for online learning with its
strong pedagogical foundation. It is intended for use at a variety of levels,
from individual lecturer to institution.

While many institutions may appear to be well advanced in online
learning, the demand for Web 2.0 technologies (Alexander, 2006) and the
uptake of open source learning management systems (Wheeler, 2007) make
the authors’ Bridge Support Framework all the more important. The ACODE
benchmarks for the use of technology in learning and teaching in
universities (ACODE, 2006) confirm the importance of key issues such as
support to promote continuous quality improvement in universities,
through both staff development and student training as well as through
institutional policy and IT infrastructure.



Figure 4: The Bridge Support Framework for Online Learning
1. IT infrastructure
2. Software/tools
3. Pedagogical (instructional) design and academic research
4. IT support for end users
5. Evaluation and research
6. Budget
7. Communication
8. Institutional/administrative support and development of protocols and guidelines

The structure of the Bridge Support Framework is now described in more
detail. The graphic representation of the authors’ four ‘pillars’ and two
‘cross-beams’, with a communication arch and a boss at its apex, is not
intended to define a fixed prioritisation; rather, it suggests that all
components are of equal importance. There can be no weak link, or the
whole structure may fail. The four main pillars provide a foundation, and
the two cross-beams draw attention to the importance of certain links in
the ‘system’ that ensure successful support for online learning.
Individual institutions will have their own specific needs, strengths and
weaknesses, and our Bridge Support Framework provides a flexible blueprint
for supporting online learning and, more specifically, online assessment.

1. IT infrastructure

Camp et al (2007, p.26) note that “Managing IT infrastructure for higher
education today is a balancing act. Institutions require high performance,
reliability, scalability, agility, and a platform for innovation.” The
organisation’s IT infrastructure, i.e. the servers and networks supporting
individual online assessment tools, needs to be reliable and capable of
supporting the projected use and loading of the system. Determination of
this potential use requires effective communication between IT staff,
developers and all end users.



2. Software

The online assessment tool needs to be reliable. The software must be ‘bug
free’, user friendly, and must meet the needs of the users (Wheeler, 2007).
Wheeler also suggests that software needs to be flexible and adaptable to
institutional needs (open source, in house), a need that must be balanced
by institutional resourcing and costs. Professional development is required
to ensure that users have a realistic expectation of what the online system
or tool is capable of, are competent in its use (see below, 4. Support) and
that appropriate risk management strategies are in place.

3. Pedagogical (instructional) design and academic research

Following a successful implementation of technology into a learning
environment, the ongoing use of online technology to enhance learning
needs to be grounded in pedagogy and research (Hardy & Benson, 2002;
Ely, 1999). Barone (2003) suggests that a collaborative association between
instructional (educational) designer, specialist technology coordinator and
academic is needed not only to enhance learning but to help to create a
new learning environment. Our discussion here is confined to the use of
online assessment.

Assessment is recognised as one of the key elements of the teaching and
learning process (Nichols, 2003; McAlpine, 2002; James & McInnis, 2001;
Ramsden, 1992). Byrnes & Ellis (2006) quote Kendle and Northcote to
conclude that: “as such, assessment should be one of the first design
considerations when preparing [a] … course, and be seamlessly integrated
into the course, not ‘tacked on’ as an afterthought.” Assessment design will
draw on a number of the key elements of the Bridge Support Framework and
is closely allied with research, planning, institutional policy and guidelines.
Because of its centrality in the learning process, optimal assessment design
should also take ongoing cognisance of student feedback and evaluation.

Planning typically might involve the academic, educational designer and
specialist technology coordinator. Planning for the microeconomics
assessment event is outlined below with more specific detail under the
‘support’ pillar of our framework. A model that is working well at CSU on
a small scale is the designation of a specialist technology coordinator to
provide support for pedagogical design, research and liaison with key
stakeholders in online technologies (evidenced by the growth in uptake of
online assessment from 2003 onwards, Figures 1 and 2). We perceive this
role as pivotal in an integrated support system (Figure 3) for online
technologies. A specialist technology coordinator can be described as an
expert in the use of the technology and a ‘champion’ for the educational
application of the technology. In this case study the (informal) role of
OASIS coordinator is held by a CELT Learning Media Laboratory
coordinator (educational designer). Similarly, other technologies at CSU
are supported informally by educational designers (academic developers)
with interests and expertise in their use. As demonstrated through OASIS, this
model has been successful in providing an active point of liaison between
academic staff, IT developers and the administration (Figure 3) and will
continue to be used in the expansion of CSU’s online learning environment.
Most importantly, it helps to ground the use of the educational technology
firmly in the realm of learning and teaching, ensures that academic
research is conducted, and that communication of this research reaches the
right people for appropriate action.

Research
With the overabundance of different technologies today, users are in
danger of being caught up in the pressure to use the technology according
to the ‘Mallory Principle’, simply ‘because it is there’, without
sufficient consideration for the quality of learning. It has been noted (Byrnes & Ellis,
2006; Northcote, 2002) that, with the need to remain competitive, the move
to online teaching and learning ‘coupled with the importance of
assessment as a critical component of the teaching and learning process,
may in and of itself be sufficient justification for the use of online
assessment’ (Byrnes & Ellis, 2006, p.105).

We have observed that those developing and implementing the online
assessment technology are not usually the end users, and thus sound,
situated research into the use of technology becomes essential to inform the
development of the software itself (Buchan, 2006; James & McInnis, 2001)
as well as to inform the future acquisitions of online assessment tools in the
new CSU online learning environment. Feedback from the users of CSU’s
online assessment technology has helped to position the technology for
future developments and upgrades and to inform the acquisition of new
systems to better serve the needs of the clients (Fry, 2006; Swann, 2006,
Buchan & Pitcher, 2005; Jenkins, 2005).

4. IT support for end users: Staff and students

In an educational environment, support for any online system or tool is
important. In the case of online assessment the support becomes vital
because, by its nature, the process of assessment is a critical part of a
student’s learning experience in a subject (Donnan, 2006). Any problems
concerning the type, execution or mode of delivery of assessment can mean
the difference between success and failure for a student, with potentially
serious consequences for all parties concerned. The importance of support
(or lack thereof) is also recognised as a barrier to the adoption of new
technologies (Hagner, 2001; Ely, 1999). The evolution of support systems
for online assessment at CSU has been gradual, and generally demand
driven in response to perceived need, or as a direct reaction to problems
associated with different situations.

Three different levels, or types, of support required for end users of
online technology are identified here (Buchan, 2006): training and
professional development; troubleshooting; and self help.

Live/real time training, professional development (workshops, one on one specialist
help, etc.)
The delineation of responsibility for the training of users (staff and
students) in using specific online technologies is a ‘fuzzy’ area in most
institutions because of the nature of the type of support required. This
ranges from technical competency (familiarity) with the software itself, to
appropriate educational application in the learning environment (Figure 5).
This delineation might be pedagogically separated according to Gagné and
Glaser’s (1987) five learned capabilities. Training deals largely with motor
skills required to become competent in new technology, supported by
intellectual and verbal information skills. Professional development might
be seen to deal with the ‘higher order’ skills such as intellectual skills and
cognitive (problem solving) strategies that empower users to apply the
technology appropriately in their learning and teaching. There is no clear
separation in training or professional development, but a continuum,
which in itself is challenging for an organisation to manage where training
and professional development are provided by different divisions.
Depending on the structure of the organisation, responsibility for training
can be placed on the (equivalent) division of information technology, a
training or staff development division, or an academic development unit
(educational designers, as in CSU’s CELT). The regional, multi-campus
nature of CSU presents particular challenges in providing adequate, real
time training in the use of online technologies.

Troubleshooting - Help Desk oriented; immediate or delayed response (Help Desk
sources appropriate support and, if needed, refers to specialists, available
information, etc.)
Help Desk style troubleshooting for online tools comes from several
sources at CSU: IT Services Help Desk, the CELT specialist technology
coordinator, individual educational designers, and finally, academic staff
for their own subjects.

The IT Services Help Desk is often the first point of call for students and
staff using online assessment tools, because many problems with any
online system manifest themselves as technical errors (even if it is user
‘error’, the users do not know this). The problems encountered by users of
the in house software are very specific, which presents challenges for the
Help Desk. Many of these problems are solved by pointing users to the
relevant information on the web or, where needed, by direct contact with
known ‘experts’ in the particular technology. However, the scale of
operations at CSU, with some 36,000 students of whom over half are
distance education students, is challenging, and we are still developing
the necessary online resources and working towards achieving this level of
inter-divisional communication. IT Customer Services is currently
implementing the IT Infrastructure Library (ITIL) service management
system, which should potentially solve many of these problems (Director,
Customer Services, personal communication, 8 July 2006).

Providing on call Help Desk support for all online assessment events
on the scale now being used at CSU (Figure 1) is, however, unrealistic and
undesirable. An online system should be robust enough to support the
usage it gets. Furthermore, prior training and professional development for
users, as well as planning the use of technology for learning and teaching
should minimise major demands on the Help Desk services.

Self help - web information and print resources; if set up correctly, these can cover
the functions of training/professional development and troubleshooting
Self help support for online assessment tasks comes from a variety of
different areas and should be tailored to individual learning needs and
styles (Gagné & Glaser, 1987) and the specific cohort, i.e. distance
education or on campus students (Hicks, Reid & George, 1999). Support
includes general information on the application made available on the web
(help documents, how to guides, etc.); pre-prepared subject specific
information provided in the Subject Outline; and familiarisation or practice
tests.

Although there has been no coordinated system for the development of web
based guidelines for the various in house software applications, the model
of having a single specialist technology coordinator for individual
applications, such as OASIS, is helping to streamline the development of
‘self help’ resources. This is also being expedited in the implementation of
the CSU online learning environment, CSU Interact, through a coordinated,
interdivisional approach to developing training and support material.

In summary, direct support for staff users of online assessment tools at
CSU comes largely from CELT. There is minimal support available for
student users of OASIS except for information on the Student Services
website (CELT initiative) and any support provided by subject lecturers.
The OASIS coordinator deals directly with staff and, occasionally, students
on an individual needs basis.



5. Evaluation and research
Surry, Ensminger & Jones (2003) describe evaluation as the need for
continual assessment of the technology. They identified four areas of
evaluation to be considered by administrators in the integration of
technology into learning environments. Keeping the focus on learning and
adapting these for our support framework, we identify three key areas in
online technology that need to be evaluated to facilitate continual
improvement in learning outcomes within a technology enhanced learning
environment. Firstly, evaluation of technology in relation to learning goals;
secondly, evaluation of the technology including an ongoing assessment of
technology alternatives and, finally, a cost/benefit evaluation to determine
the return on investment for any technology expenditure.

The cost/benefit evaluation is perhaps controversial in an academic
setting, but in the highly competitive contemporary educational
environment, where the users (staff and students) are effectively the
‘clients’ of those providing the technology, consideration of dollar
benefits does become important. However, the perceived benefit of the
technology does not equate simply to the number of users; quality,
achievement of learning outcomes, innovation and other factors can also be
incorporated into the calculation of benefit.

6. Budget
Funding has been identified as a top ten IT issue (Camp et al., 2007) for the
past eight years. No institution can afford the ad hoc acquisition of
individual applications without adequate financial planning for
maintenance and future developments. Budget considerations in managing
the online learning environment operate at a number of levels:
organisational, divisional, faculty, school and individual. Budget
considerations are also dependent on the (perceived) responsibilities each
area has for sustaining specific aspects of the online learning environment
(Jackson, 2007). In 2006 the authors identified a number of budget related
shortfalls in planning that have impacted on online assessment, including
a lack of training for students in the use of software (no division had
staff designated to this task), IT infrastructure limitations (since
addressed in a recent server upgrade) and a lack of coordinated
development of self help support materials.

7. Communication
Working with innovative, exciting and sophisticated new online
technologies not only expands the frontiers of the educational services
offered by the University in this age of web based learning, but also
reinforces our acute awareness of and dependence on the central role of
team effort in the emerging e-learning environment. (Swann, 2004)



The development of reliable online learning systems is acknowledged to be
a team effort (Kerka & Wonnacott, 2000; Uys, 2000). Inter-divisional, intra-
divisional and staff-student communication have all been important in the
evolution of online assessment at CSU. In the Venn diagram of Figure 3,
the critical intersection of the three components for successful online
assessment is where communication is paramount (Hagner, 2001). The
RIPPLES model identifies people as a key element. However, we use
‘Communication’ purposefully, where the important human interactions
become implicit. Our bridge framework has attempted to emphasise this
aspect by making communication a component which connects and
informs all the other components of the bridge.

8. Institutional and administrative support and development of policy,
protocols and guidelines

There is a need for organisational policies and procedures to be developed
for new technologies (Surry, Ensminger & Jones, 2003). The issues
associated with the administrative and institutional support of online
assessment include policy development (institutional to school level), and
equity of access and accessibility issues (Sim, Holifield & Brown, 2004). For
administrative purposes, in 2005 CSU mandated Internet access for all
students in its admission requirements (University Handbook 2005 Section
5.1), and minimum IT access requirements are currently (June 2007) under
review. The University now “…assumes that all on campus and distance
education students at CSU will have ongoing access to an internet
connected computer capable of communicating with CSU online systems”
(Senate, 2007).

At CSU a coordinated approach to the development of university
assessment and admission policy, regulations, protocols and guidelines
and recommendations for IT access is essential, as we move to a more
extensive online learning environment with the introduction of both an
LMS and a digital object management system. Clearly no single policy
concerning online access and use of technology in assessment can cover all
uses of all applications without limiting innovation and use of the
technology. A theoretical ‘model’ of how policy, protocols and guidelines
in relation to IT access can be developed and introduced at appropriate
institutional levels is shown in Figure 5.

As the use of online assessment has increased, it has become clear that
academic staff are using the technology in a variety of innovative ways,
not all of which are necessarily educationally appropriate; more recently,
issues associated with equity and accessibility have emerged for discussion. In
determining the appropriate use of this mode of online assessment, our
case study has been able to draw on past experiences and the consequent
development by CELT of ‘protocols and guidelines’ in response to a
perceived need. However, as is the case in many institutions (W. Jackson,
UNSW@ADFA, personal communication, September 2006; Donnan, 2006),
the development of overarching policy, protocols and guidelines and
institutional strategies (Sim, Holifield & Brown, 2004) to support new
technologies is lagging behind the technology itself.

Figure 5: A model of a ‘levels of IT access’ approach to the development
of policy, protocols and guidelines

Level 1 - CSU level: for official communication with CSU. Required by all
CSU students; mandated in the University Admissions Regulations.

Level 2 - CSU and/or Faculty level: to permit access to the OLE and
subject materials. Required by all students in faculties supporting
widespread use of the OLE; mandated in the Faculty Admissions Regulations.

Level 3 - Course level: for access to multimedia/online subject materials
and specialised software associated with a course. Required by students
enrolled in certain courses; IT software and hardware requirements
described in the course handbook.

Level 4 - Subject level: for specialised software associated with a
specific subject. Required by students enrolled in the subject; IT
software and hardware requirements described in the subject outline.

Historically, online assessment through OASIS began as a small local, inter-
divisional pilot project. The rapidly increasing use of this form of
assessment has meant that online assessment outgrew its ‘pilot mode’
before the support processes and product development could be fully
mainstreamed with adequate funding and institutional recognition. The
move from pilot to mainstream and associated support processes is a
potential weak link in the success and scalability of any online tool or
system and highlights the need for institutions to have a coordinated
approach to managing their online learning environment.

Microeconomics case study
Why online summative assessment in Autumn semester 2006? In
microeconomics, students are exposed to multiple choice questions in both
their session assignments and in the end of semester final exam. The nature
of microeconomics as a subject with its regular use of mathematical
concepts, tables of data, diagrams and frequent technical terms lends itself
to at least partial assessment by use of multiple choice questions (Judge,
1999). Well constructed multiple choice questions can be effective in testing
everything from simple cognitive skills related to a memorised economics
vocabulary and application of technical terms through to data generated
problems requiring analytical and well defined problem solving skills to be
demonstrated by the student (Clarke et al., 2004; McKenna & Bull, 2000).

The transition from online formative assessment modes to a first attempt at
summative assessment in a large distance education student cohort
(around 200) evolved over the years 2003 to 2006, in tandem with the
evolving in house online assessment software. After experimenting and
working with formative assessment tests between 2003 and 2005, it was felt
that the move to large cohort summative assessment could be trialled in the
Autumn semester of 2006. The costs involved were minimal in terms of test
preparation and informing students through subject outlines (hard copy)
and subject forum postings. The benefits to academic staff and students
were significant and fell into two broad categories.

a. The pedagogical benefits from online assessment included both student
focused gains in the learning experience and lecturer focused gains in
teaching methodology. Formative assessment, through a series of four
multiple choice question tests run in conjunction with the students’
subject modules in microeconomics, had received generally positive
feedback from student surveys, as indicated by the results shown in
Table 1. Distance education students gained an opportunity for
immediate learning feedback on their test performance (Benson, 2003;
Kerka & Wonacott, 2000; Northcote, 2002), supplemented by additional
test solution sets posted on the subject forum immediately after the
close of the summative test. With assignment turn round time
eliminated from the semester schedule, the timing of the summative test
could be optimally placed to assess the maximum number of subject
topics with online feedback prior to the final exam.

b. The administrative benefits included time savings in administering
large student cohort assignments, time savings for the School in managing
academic markers, and the avoidance of clerical error through the
substitution of computer generated marking and spreadsheet functions.

The lecturer focused gains in teaching methodology concentrate on an
application of the model of constructive alignment established by John
Biggs in his seminal work Teaching for Quality Learning at University (2005).
Essentially, students complete the online microeconomics tests consisting
of multiple choice questions, specifically aligned to learning objectives
embedded in the set of subject modules. The online assessment software
then permits a formal statistical analysis of large cohort performance,
identifying through quantitative indicators how well aligned the learning
objectives are with the specific test questions set to assess student
understanding of each particular learning objective.

Through use of a longitudinal study commencing in Autumn 2006 and
going through to Autumn 2007 and beyond, it has been possible to use the
online assessment tool to better align the student learning objectives with
the assessment tasks themselves, i.e. the multiple choice test. The lecturer
can identify areas of poor to weak student performance in tests, suggesting
the need to improve how well the learning objectives, assessment modes
and subject modules align in order to optimise the learning experience for
the students in future semesters.
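
One plausible form such quantitative indicators might take is classical
item analysis: for each multiple choice question, a difficulty index (the
proportion of the cohort answering correctly) and a simple discrimination
index (the difference in correct rates between the top and bottom scoring
groups). The short script below is an illustrative sketch only, not the
OASIS implementation; the CSV export layout and column names (Q1, Q2, ...)
are assumptions for the example.

    # Illustrative sketch: classical item analysis for a multiple choice test.
    # Assumed input: one row per student, columns Q1..Qn holding 1 for a correct
    # response and 0 otherwise (not the actual OASIS export format).
    import csv
    from statistics import mean

    def item_statistics(rows, item_ids):
        # Rank students by total score to form upper and lower groups (~27% each).
        totals = [(row, sum(int(row[i]) for i in item_ids)) for row in rows]
        totals.sort(key=lambda pair: pair[1])
        cut = max(1, round(len(totals) * 0.27))
        lower, upper = totals[:cut], totals[-cut:]

        stats = {}
        for i in item_ids:
            difficulty = mean(int(row[i]) for row in rows)              # 0..1; higher = easier item
            discrimination = (mean(int(row[i]) for row, _ in upper)
                              - mean(int(row[i]) for row, _ in lower))  # -1..1; higher = better
            stats[i] = (round(difficulty, 2), round(discrimination, 2))
        return stats

    if __name__ == "__main__":
        with open("test_responses.csv", newline="") as f:   # hypothetical export file
            rows = list(csv.DictReader(f))
        items = [name for name in rows[0] if name.startswith("Q")]
        for item, (diff, disc) in item_statistics(rows, items).items():
            print(f"{item}: difficulty {diff}, discrimination {disc}")

Read this way, an item that most students answer correctly but that fails
to separate stronger from weaker students (low discrimination) would flag
a question whose wording, or alignment with its stated learning objective,
may need revision in a later offering.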

Table 1: Student perceptions of the value of online multiple choice
assessment in enhancing learning. Numbers of respondents: total survey
group (microeconomics).*

Question 1: The use of multiple choice questions per se (i.e. both print
or online modes) is an appropriate way of conducting assessment in this subject.
Strongly agree 52 (8); Agree 61 (19); Unsure 6 (1); Disagree 4 (2); Strongly disagree 0 (0)

Question 2: I found that the online multiple choice assessment exercises
available as formative assessment in my subject helped me achieve the
subject objectives.
Strongly agree 25 (4); Agree 59 (17); Unsure 17 (6); Disagree 3 (3); Strongly disagree 19 (0)

Question 3: I found that the online multiple choice assessment exercises
available as formative assessment in my subject were appropriately
designed for this level of study.
Strongly agree 28 (4); Agree 56 (16); Unsure 17 (8); Disagree 2 (2); Strongly disagree 20 (0)

Question 4: The use of online multiple choice assessment as a summative
assessment tool (i.e. counts towards your final grade) was an appropriate
way of conducting assessment for this subject.
Strongly agree 33 (5); Agree 39 (13); Unsure 11 (5); Disagree 9 (9); Strongly disagree 25 (1)

* The first number in each cell indicates the total number of student respondents
across all subjects other than microeconomics, while the number in brackets
indicates the number of microeconomics students responding to each question.

At the end of the Autumn (first semester) session in 2006, students from a
number of different subjects that used online multiple choice assessment in
some form were surveyed. It is beyond the scope of this paper to report on
this research, although the provisional results of the study (Table 1) are
encouraging in support of this form of assessment for both formative and
summative purposes.

Aspects of the Bridge Support Framework are discussed below with reference
to the case study.

Institutional and administrative support
The guidelines developed for using OASIS in assessment recommend that:

OASIS is best used as a formative assessment tool …. There are significant
issues concerning plagiarism and security concerns associated with online
submission of tests. Staff also need to be mindful of equity in student access
to online facilities. However, with careful planning OASIS can be used to
support summative assessment tasks. (Centre for Enhancing Learning and
Teaching, 2005)

Faculty of Business policy requires that all students will have access to a
computer for the primary purpose of assignment writing. The use of online
assessment has been in place in microeconomics since 2003 and as part of
summative assessment since 2006. Students are advised from the
commencement of each Autumn semester that online assessment is a part
of the subject. To date we have had no experience of any distance
education student claiming disadvantage on equity grounds due to lack of
online access. Alternative assessment arrangements are put in place for
individual students on request.

Pedagogical (instructional) design, planning and research
Supported by a Scholarship in Teaching fund grant we are hoping to move
beyond the practicalities of online assessment, with the aims of our study
being ‘to measure the effectiveness of online multiple choice assessment as
a teaching and learning tool and to develop a model of constructive
alignment between learning outcomes and multiple choice questions’
(Buchan, Wilkinson & Swann, 2006; Biggs, 2005). It is beyond the scope of
this paper to report fully on this study although some of the preliminary
findings have been included (Table 1 and student feedback).

Online assessment design fits within a pre-existing assessment framework.
In the first year microeconomics subject the online tests represent only one
component of the total subject assessment. Students are expected to
provide written assignment work during the semester as well as to sit a
three hour closed book, invigilated final examination in the subject. The
final examination receives a minimum of 50% weighting and must be
passed as one of the conditions governing the award of subject grades at
CSU. The microeconomics online tests are designed with a range of
questions which test student knowledge of technical concepts as well as
longer, more data and/or graphics based questions which test student
problem solving ability in the subject. Not only do students receive
immediate feedback on submission of their tests, but the subject forum is
also used to post solution sets to assist students in analysing which
answer was correct, and why, in terms of microeconomic models and their analysis.

Finally, this online assessment mode provides a technical solution that
permits the lecturer to explore ways in which student learning outcomes
can be evaluated, revised and improved over time. For large student
cohorts, it is only the comparatively recent advent of online assessment
technology that makes such pedagogical research tasks feasible with
limited resources in both time and funds.

Budget
On a practical level, one of the more tangible benefits of the move to online
summative assessment modes has been the budget savings at the School
(and ultimately the Faculty) level. For large student cohorts, assignment
marking is a labour intensive, time consuming and regular budget item for
the School. In 2006, by moving from a written second assignment to a one
hour online summative test with computer generated student results,
several thousands of dollars were saved from the School budget. It is
suggested that these funds can then be allocated to the development of
additional and upgraded online facilities at CSU or reallocated to other
more urgent purposes within the organisation. The funds saved from
School budgets by the increasing adoption of online tests across the
University can also be viewed as a potential model of transfer pricing
within the University – sensitising the central administration to the
growing demand for online services across Schools and campuses and
identifying a flow of funds from School based budgets that would
determine the ‘transfer price’ of online services as provided by DIT to
Faculties/Schools within the organisation.

At a University level, the final recommendations of a working party are
now informing the expansion of the University’s Online Learning
Environment (OLE) (Virtual Learning Environment Working party, 2005;
ILSC, 2004) through a specially funded OLE program. This has led to the
selection of Sakai as an open source learning management system (Tulloch,
2005). This strategic approach to the development of the VLE makes use of
available funding to position the University strongly in its focus on flexible
learning with the university wide implementation of Sakai as part of CSU’s
new OLE, CSU Interact, in 2008. Funding has covered not only the actual
software, but also new support positions in CELT and the Learning
Materials Centre (LMC) for the implementation of CSU Interact.



Communication
The Bridge Support Framework identifies communication as an essential
part in the provision of online learning systems. A weak point identified
through our case study has, however, been a lack of inter-divisional
communication in some respects, and inadequate procedures to ensure the
robustness of the technology for current usage (Sefton, 2006).
Communication is particularly important in real time events such as the
microeconomics summative assessment event. We argue that the
microeconomics case study is an exemplar of successful communication in
many aspects and has led to the development of a number of processes and
procedures that have been adopted throughout the University (guidelines
for Subject Outlines, feedback to the developers for upgrades to software,
protocols for the appropriate implementation of online assessment in a
subject). Improvements that have already occurred as a consequence of the
initial summative test experience include DIT now being advised, on a
semester by semester basis, when School based subject coordinators are
scheduling large cohort summative tests; improved user-developer
communication has also seen upgrades to both software and hardware.

IT infrastructure
Although the CSU system has been used extensively for a variety of large
cohort assessment tasks in other subjects (Figure 2), load testing of the
system under the particular circumstances had not been done prior to the 6
June 2006 microeconomics summative test (Sefton, 2006), despite technical
assurances that the system could handle the load. We have
instituted improved communication between the developers, IT
infrastructure maintenance staff, and the end users of the technology
(academic staff and educational designers) to ensure the relevant IT staff
understand the potential use of the online systems, so that appropriate
tests and changes can be made. Similarly, through professional
development, end users are made aware of the limitations of the system,
and their own individual responsibilities in using technology for live,
online assessment events, so that they do not use it for purposes which the
IT infrastructure cannot realistically support and which may result in a
systems failure at a critical juncture.
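
The kind of rehearsal this implies can be illustrated with a minimal load
check: fire a burst of concurrent requests at a test deployment shortly
before a major event and confirm that success rates and response times are
acceptable. The sketch below is illustrative only; the URL, concurrency
figure and timeout are assumed values, not CSU’s actual infrastructure or
test harness.

    # Illustrative sketch: a pre-event load check that issues a burst of
    # concurrent requests and reports the success rate and slowest response.
    # TEST_URL and CONCURRENT_USERS are assumed values for the example.
    import concurrent.futures
    import time
    import urllib.request

    TEST_URL = "https://staging.example.edu/oasis/login"   # assumed staging endpoint
    CONCURRENT_USERS = 200                                  # a little above the expected peak

    def timed_request(_):
        started = time.perf_counter()
        try:
            with urllib.request.urlopen(TEST_URL, timeout=30) as response:
                ok = response.status == 200
        except Exception:
            ok = False
        return ok, time.perf_counter() - started

    if __name__ == "__main__":
        with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            results = list(pool.map(timed_request, range(CONCURRENT_USERS)))
        successes = [elapsed for ok, elapsed in results if ok]
        print(f"{len(successes)}/{CONCURRENT_USERS} requests succeeded")
        if successes:
            print(f"slowest successful response: {max(successes):.1f} s")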

IT support for end users
Live/real time training, professional development
In the case of microeconomics the lecturer received initial exposure to
online assessment software training through regular interaction with both
School based (the Educational Designer) and relevant CELT staff. As with
most software, much expertise had to be acquired through ‘learning by
doing’, by working with the software through a few semesters of purely
formative assessment tests.



A noticeable weak point in our ‘bridge’ is that there is generally no formal
training available for student users of online learning technologies. This
puts the responsibility for familiarising the students with the online
assessment system on the lecturers. Initial feedback from students in our
study confirms the current inadequacy of student support. This weak point
is being addressed in CSU’s implementation of its new learning
management system, through a more holistic approach to IT support,
development of resources for students by Student Services, and an
integrated professional development plan (Gill & Hardham, 2007).

Troubleshooting - Help Desk oriented
Following the de-briefing on our microeconomics case study, it was
recommended that staff familiar with the application (DIT staff and/or the
OASIS coordinator) be available on call to monitor the IT infrastructure
during major summative assessment tasks, and that the IT Service Desk be
notified of major assessment events. The subsequent introduction of this
coordinated approach has already seen success.

In the microeconomics subject, individual support is available to students
through email and phone calls made directly to the lecturer. This works
reasonably well when only a few students seek help at random times and
in relation to minor problems associated with formative tests. But the
support system is not designed to cope with the level of student demand
for help that occurred on our 6 June summative test. On reflection, under
such one off circumstances the Subject Forum provides the best means of
group response, more effective and potentially more reassuring than
individual phone calls or emails.

The Subject Forum (discussion board) was also enlisted as a useful ongoing
means of identifying OASIS issues and addressing ad hoc questions and
problems that emerged. The Forum proved to be a valuable adjunct to the
Subject Outline, providing students with ongoing support while they
familiarised themselves with the new technology. Most of the issues
identified by students related to online access and the avoidance of routine
navigational errors in moving through the online test. Through Forum
postings students became adept at helping each other with minor software
navigational problems. Email also proved a useful tool for dealing with
individual student queries concerning access and navigation.

Student Self Help
In the first year microeconomics subject, student self help was largely
provided by a variety of means as outlined below.

• Practice tests - Intense use was made of formative tests prior to the
introduction of a summative test. Typically, students attempted four
formative tests per semester in line with completing their four modules
in the subject.

• ‘Print’ resources - The Subject Outline was updated to include a new
section on introducing the OASIS online software and outlining the
benefits of moving to online assessment. Students were able to access
the online assessment tasks through a hyperlink in their online subject
outline.

• Web resources - Students and staff have access to a variety of web based
resources: the Student Guide to OASIS available on the CSU Student
Services website, the staff guidelines on OASIS, and the online Help
guide available with the OASIS application itself.

Conclusion
There are certain inherent ‘frailties’ in online technology that imply the
presence of a certain amount of ‘risk’ associated with the use of the
technology (Northcote, 2002). These frailties are not limited to the
technology itself (hardware and software concerns); they also extend to
the administrators and organisational (cross-divisional) support of the
technology, and to the users of the technology, whose competence and
familiarity with the technology (for lecturers, setting up appropriately
designed online assessment tasks; for students, accessing and completing
the tasks) are also critical to its successful use.

Initial indications and student reaction during the case study event were
that using online multiple choice assessment for summative assessment may
be a ‘bridge too far’. This is, at least partially, supported by the preliminary
results of our research (Table 1) in which some 23% of microeconomics
students (and 25% of our total survey group) disagreed with the use of
online multiple choice assessment as appropriate for summative
assessment purposes. While we would like to explain away this feedback
as a ‘once off’ unfortunate experience with technology, the realities of the
operations of online systems and their inherent ‘instabilities’ will remain
and need to be factored into any consideration of the use of online
assessment technologies. We cannot dismiss the very personal feedback
from students.

The technical difficulties relating to our online summative assessment…
created a lot of unnecessary stress and anxiety for myself and many
students, who organised their lives around being available for this test. We
had 4 month's notice for this online test, and due to it not being available on
the day, we had 24 hours' notice to complete it again on another day, around
working full-time etc with no preparation. … For formative assessment,
OASIS was excellent. There is potential for this technology to be an excellent
summative assessment tool, as long as it works! (microeconomics student A,
Research survey feedback, 6 June 2006)

I found having access to the on-line OASIS formative (if that's the right word
- the non-assessable ones!) tests invaluable as a self-test resource during the
semester, and a good way for a new student like myself to get a feel for the
way that questions are phrased in this subject.  I'm actually looking forward
to doing other subjects that may have the same capability (microeconomics
student B, Research survey feedback, 10 June 2006).

In this paper we have reflected on our experiences with online assessment
over a number of years, considered student feedback from our research,
and examined the literature to develop a bridge to the future: the Bridge
Support Framework for Online Learning, a framework for the integrated
support of online learning. It is hoped that this framework, together with
the developing model of integrated support for online systems at CSU
(Figure 3) and our ongoing research, will go some way towards guiding
institutions in developing robust and reliable online learning systems, and
specifically online assessment systems. CSU faces challenging and exciting
online developments as it moves towards the implementation of its new,
open source learning management system from 2007 onwards. The
principles developed in this paper will, we hope, play a constructive part
in the immediate future of CSU Interact, the University’s new online
learning environment.

Acknowledgements
We are indebted to all academics pursuing innovative ways of teaching
their subjects: to our colleague Dr Jenny Wilkinson for her commitment to
our online assessment research, to Lincoln Gill for his sound educational
design and support of online technologies, and to those DIT developers
and staff who took on the challenge of programming for online assessment
and who continue to support our research, particularly Matt Morton-Allen,
Vicki Pitcher and their team.

References
ACODE (2006). Benchmarks for the use of technology in learning and teaching in
universities. Australasian Council on Open, Distance and E-learning.
http://www.acode.edu.au/projects/acodebenchmarksfullfinalnov06.pdf

Alexander, B. (2006). Web 2.0: A new wave of innovation for teaching and learning?
Educause Review, 41(2), 32-44. http://www.educause.edu/ir/library/pdf/erm0621.pdf

Barone, C. A. (2003). The changing landscape and the new academy. Educause
Review, 38(5), 41-47. http://www.educause.edu/ir/library/pdf/erm0353.pdf




Benson, A. D. (2003). Assessing student learning in online environments. New
Directions for Adult and Continuing Education, 23(100), 67-78.

Benson, R. & Palaskas, T. (2006). Introducing a new learning management system:
An institutional case study. Australasian Journal of Educational Technology, 22(4),
548-567. http://www.ascilite.org.au/ajet/ajet22/benson.html

Biggs, J. (2005). Teaching for quality learning at university, 2nd edition, Society for
Research into Higher Education & Open University Press, UK.

Buchan, J. (2006). Online assessment at Charles Sturt University: A bridge too far?
An internal discussion paper on the use of OASIS and online assessment for
summative purposes. June 2006. Centre for Enhancing Learning and Teaching.
Charles Sturt University.

Buchan, J., Wilkinson, J. & Swann, M. (2006). Education driving technology: An
investigation into the effectiveness of online multiple choice assessment as a
teaching and learning tool. Scholarship in Teaching Fund application. Charles
Sturt University.

Buchan, J. & Pitcher, V. (2005). Online assessment at Charles Sturt University:
Education driving technology. Poster paper presented at CSU Learning and
Teaching Conference, Bathurst, September 2005.

Buchan, J. (2004). Report detailing suggestions for improvements to the current
OASIS system, and a ‘wish list’ for functionality of future online testing tools at
CSU. 4 November 2004. Centre for Enhancing Learning and Teaching, Learning
Media Laboratory. Charles Sturt University.

Buchan, J. & Buchan, A. (2003). Lessons from nature: Developing an adaptive
management model for sustaining quality learning environments. Sustaining
Quality Learning Environments. Proceedings 16th ODLAA Biennial Forum. Open and
Distance Learning Association of Australia.
http://odlaa.une.edu.au/publications/2003Proceedings/pdfs/buchan.pdf

Byrnes, R. & Ellis, A. (2006). The prevalence and characteristics of online
assessment in Australian universities. Australasian Journal of Educational
Technology, 22(1), 104-125. http://www.ascilite.org.au/ajet/ajet22/byrnes.html

Camp, J. S., Deblois, P. B. & the 2007 Educause Current Issues Committee (2007).
Top 10 IT Issues 2007. Educause Review, 42(3), 12-32. [verified 22 Jul 2007]
http://www.educause.edu/ir/library/pdf/erm0730.pdf

Centre for Enhancing Learning and Teaching. (2005). Guidelines and protocols for
using OASIS in assessment. Centre for Enhancing Learning and Teaching,
Learning Media Laboratory. [viewed 19 Sep 2006]
http://www.csu.edu.au/division/landt/resources/oasis.htm#protocols

Clarke, S., Lindsay, K., McKenna, C. & New, S. (2004). Inquire: A case study in
evaluating the potential of online MCQ tests in a discursive subject. ALT-J, 12(3),
249-260.

Donnan, P. (2006). Supporting e-assessment. ODLAA Times, 14(2), 7-9. Open and
Distance Learning Association of Australia.




Ely, D.P. (1999). New perspectives on the implementation of educational
technology innovations. Evaluative Report. ERIC. http://www.eric.ed.gov/
ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/17/5c/c0.pdf

Ely, D. (1990). Conditions that facilitate the implementation of educational
technology innovations. Journal of Research on Computing in Education, 23(2), 298-305.

Fry, G. (2006). Library OASIS tutorial trial: Evaluation. Internal memorandum.
Greg Fry, Manager, Client & Information Services, Albury Wodonga Campus,
Charles Sturt University.

Gagné, R. M. & Glaser, R. (1987). Foundations in learning research. In Instructional
technology: Foundations.  Lawrence Erlbaum Associates. New Jersey. pp.49-83.

Gill, L. & Hardham, G. (2007). CSU Interact. Professional Development plan.
Version 1.5. 31 May 2007. Centre for Enhancing Learning and Teaching. Charles
Sturt University.

Hardy, L. & Benson, R. (2002). Through the looking glass: Roles, relationships and
reflective practice to improve the use of educational technology. In S.
McNamara & E. Stacey (Eds.), Untangling the web: Establishing learning links.
Proceedings ASET Conference 2002. Melbourne, 7-10 July.
http://www.ascilite.org.au/aset-archives/confs/2002/hardy.html

Hagner, H. R. (2001). Interesting practices and best systems in faculty engagement
and support. Final report. NLII White paper. University of Hartford.
http://www.educause.edu/ir/library/pdf/NLI0017.pdf

Hicks, M., Reid, I. C. & George, R. (1999). Designing responsive online learning
environments: Approaches to supporting students. Proceedings AARE
Conference. http://www.aare.edu.au/99pap/hic99172.htm

ILSC (2004). Information and Learning Systems Committee. Action sheet. 27 July
2004. Charles Sturt University.

Jackson, G. A. (2007). Compartments, customers or convergence? Evolving
challenges to IT progress. Educause Review, 42(3), 35-49.
http://www.educause.edu/apps/er/erm07/erm0731.asp

James, R. & McInnis, C. (2001). Strategically re-positioning student assessment. A
discussion paper on the assessment of student learning in universities.
University of Melbourne, Centre for the Study of Higher Education.

Jenkins, S. (2005). Survey of use of OASIS in human resources second year subject.
(Unpublished). School of Commerce. Charles Sturt University.

Judge, G. (1999). The production and use of on-line Web quizzes for economics.
Computers in Higher Education Economics Review, 13(1). [verified 22 Jul 2007]
http://www.economicsnetwork.ac.uk/cheer/ch13_1/ch13_1p21.htm

Kerka, S. & Wonnacott, M. E. (2000). Assessing learners online. Practitioner file. Office
of Educational Research and Development. Washington DC. [verified 22 Jul
2007] http://www.cete.org/acve/docs/pfile03.htm

McAlpine, M. (2002). Principles of assessment. Bluepaper Number 1. February 2002.
Robert Clark Centre for Technological Education. University of Glasgow. CAA
Centre TLTP Project. Published by The CAA Centre.




McKenna, C. & Bull, J. (2000). Designing objective test questions: An introductory
workshop. [viewed 1 May 2003, verified 23 Jul 2007]
http://www.caacentre.ac.uk/dldocs/otghdout.pdf

Nichols, M. (2003). Using eLearning tools for assessment purposes. 16th ODLAA
Biennial Forum Conference Proceedings Sustaining Quality Learning Environments.
Open and Distance Learning Association of Australia. [verified 23 Jul 2007]
http://odlaa.une.edu.au/publications/2003Proceedings/pdfs/nichols.pdf

Northcote, M. (2002). Online assessment: Friend or foe? Colloquium. British Journal
of Educational Technology, 33(5), 623-625.

Ramsden, P. (1992). Learning to teach in higher education. Routledge, London.

Ryan, C. (1974). A bridge too far. Hamish Hamilton, London.

Sefton, P. (2006). Report on the failure of the OASIS application. Division of
Information Technology, Customer Services. Internal report July 2006. Charles
Sturt University.

Senate (2007). Charles Sturt University Senate decision. Reported in M. Tulloch.
Centre for Enhancing Learning and Teaching, Director’s report June 2007.
Internal document.

Sim, G., Holifield, P. & Brown, M. (2004). Implementation of computer assisted
assessment: Lessons from the literature. ALT-J, Research in Learning Technology,
12(3), 215-229.

Sivapalan, S. & Cregan, P. (2005). Value of online resources for learning by distance
education. CAL-laborate. Uniserve Science, University of Sydney. pp. 23-27.
[verified 23 Jul 2007] http://science.uniserve.edu.au/pubs/callab/vol14/cal14_sivapalan.pdf

Surry, D. W., Ensminger, D. C. & Haab, M. (2005). A model for integrating
instructional technology into higher education. British Journal of Educational
Technology, 36(2), 327-329.

Surry, D.W., Ensminger, D. C. & Jones, M. (2003). A model for integrating
instructional technology into higher education. [viewed 30 May 2007, verified
23 Jul 2007] http://www.iphase.org/papers/RIPPLES.rtf

Swann, M. (2004). Online assessment using OASIS: Strategies for a flexible learning
environment: the recent experience from the economics discipline. December
2004. School of Commerce. Faculty of Commerce. Charles Sturt University.

Swann, M. (2006). Report on OASIS performance: Some reflections on an evolving
technology. July 2006. Internal report. School of Commerce. Charles Sturt
University.

Tulloch, M. (2005). Report on current VLE projects for Learning & Teaching Plan
2007-2011. Internal memorandum to the Deputy Vice-Chancellor (Academic).
Director, Centre for Enhancing Learning and Teaching, Charles Sturt University.

University Handbook (2005). Charles Sturt University Academic Manual.
http://www.csu.edu.au/acad_sec/manuals/contm.htm

Uys, P. M. (2000). Towards the virtual class: Key management issues in tertiary education.
Unpublished PhD thesis, Victoria University of Wellington, New Zealand.
[verified 23 Jul 2007] http://www.globe-online.com/philip.uys/phdthesis




Virtual Learning Environment Working Party (2005). Towards the enhancement of
the CSU Virtual Learning Environment. Report to the ILSC of the VLE Working
Party. September 2005. Charles Sturt University.

Wheeler, B. (2007). Open source 2010: Reflections on 2007. Educause Review, 42(1),
48-67. http://www.educause.edu/ir/library/pdf/erm0712.pdf

Wilkinson, J. & Dawson, S. (2000). Use of OASIS, a self assessment tool, to enhance
student learning. Presentation at Centre for Enhancing Learning and Teaching,
Charles Sturt University Forum.

Janet F. Buchan (author for correspondence)
Educational Designer/Learning Media Laboratory Coordinator
Centre for Enhancing Learning and Teaching
Charles Sturt University, PO Box 789, Albury NSW 2640, Australia
Email: jbuchan@csu.edu.au
Michael Swann, Lecturer in Economics
School of Commerce, Faculty of Business
Charles Sturt University
PO Box 588, Wagga Wagga NSW 2650, Australia
Email: mswann@csu.edu.au