

 

Australian Journal of Educational Technology
2000, 16(3), 239-257

 

"I hope this goes somewhere": Evaluation  
of an online discussion group 

 

Wendy McKenzie and David Murphy 
Monash University 

 
This article outlines the application of a particular model of content analysis 
(Henri, 1992; 1993) to the evaluation of an online discussion group. The 
discussion group was part of the learning environment for a subject of the 
Graduate Certificate in Higher Education offered by the Centre for Higher 
Education at Monash University, Australia. Henri’s model focuses on the 
level of participation and interaction in the discussion group, as well as 
analysing the content of the messages according to a cognitive view of 
learning. Overall, the analysis confirmed the success of the discussion 
group, and provided a useful conceptual lens with which to study the 
online environment. 

 
Introduction 
 
There is currently intense interest in online learning, and in particular the 
role that online discussion groups can play in promoting interactivity and 
collaboration among learners. This use of computers to enable 
communication between learners separated by time and distance is one of 
the fastest growing uses of technology in education (Bates, 1995). 
Typically, online discussion groups, newsgroups, or computer conferences 
as they are variously called, are used to support interaction between any 
number of persons in an asynchronous, text-based environment. Messages 
in the discussion forum are displayed as ‘threads’ organised according to 
subject referents and allow the history of the online conversation to be 
easily followed. 
 
Advocates of the use of online discussion groups in education are 
enthusiastic about the benefits this opportunity for interaction affords the 
learner. Harasim (1989) describes the part that online interaction can play 
in collaborative learning environments, emphasising the positive effects of 
being actively engaged in learning, sharing information and perspectives 
through interaction with other learners. The asynchronous nature of the 
communication affords extra advantages in terms of promoting reflective 
thinking, as well as more practically offering increased flexibility of time 
and place of learning (eg, Bates, 1995; Harasim et al., 1995). As well as 
allowing flexibility, online discussion groups can help to reduce the 
isolation of learning at a distance, and play an important role in the social 
aspects of learning (Harasim et al., 1995, Mason & Weller, 2000). 
 
Developments in methods for evaluating the quality of online discussion 
have trailed the enthusiastic adoption of computer mediated 
communication technologies; these methods range from highly 
quantitative approaches, such as the use of tracking software (eg, Pitman 
et al., 1999), to more qualitative methods, such as discourse analysis 
(Owen, 2000). Just as the increasing availability of online discussion 
groups offers challenges to teachers designing learning environments, this 
technology also offers unique opportunities for evaluation. This article 
outlines the analysis of an online discussion group that operated as part of 
a flexible learning subject offered over one semester. In particular, the tool 
used for evaluation was Henri’s (1992; 1993) method for analysing the 
content of discussion transcripts. This approach was found to provide a 
useful conceptual lens for coming to an understanding of the success of 
the group. 
 
Setting the context 
 
The Centre for Higher Education Development (CHED) at Monash 
University began offering its Graduate Certificate in Higher Education to 
academic staff of the University in 1999. As Monash is a multi-campus 
institution, the course was designed and developed flexibly, so that for the 
most part participants could access and work on study materials at times 
and locations of their choice. The first subject, Designing for Learning, has 
three components which comprise the learning environment: face to face 
workshops (the equivalent of about 4 days), printed study materials (a 
study guide and book of readings) and an online environment built 
around CHED’s purpose designed software tool, InterLearn (Murphy & 
Webster, 1999). 
 
Integrated with the web materials for the subject is an online discussion 
group, using Netscape Communicator. The decision to use this, rather 
than more specialised and elaborate software, was based on the notion 
that, as Monash University's default application, it was readily accessible 
to academics, could be set up very simply, and was likely to be the 
software they would use for their own teaching. 
 
In planning for use of the discussion group, one aim was to ensure that it 
was not just an ‘add on’, but an integral part of the learning environment. 
At the same time, it was felt that there should be sufficient flexibility in its 
use so that participants would use it proactively. Prompting for use of the 
discussion group was provided by links in the online materials, specific 
requirements of some online activities and a weekly global email from the 
subject coordinator. So, although members of the teaching team did post 
messages to the group, it was most often in response to questions or issues 
raised by participants. It is interesting that even though there was an 
introductory message from a teacher, it was not the first message to the 
group; this came from a student who was keen to get started. 
 
The key role that the teacher or moderator can play in an online 
environment is a topic of particular interest in the literature, with a key 
recent contribution being that of Salmon (2000). Based on her extensive 
experience and research, Salmon has developed a five stage, research 
based model for what she calls ‘e-moderation’. This model of teaching and 
learning online includes a range of suggestions for teaching in a variety of 
online contexts, ideas for the evaluation of online discussion groups, along 
with a list of useful questions for evaluators to explore (Salmon, 2000, p. 
121). 
 
As mentioned, in the current context, the discussion group was not the 
only means of interactivity within the online materials. Use of the 
InterLearn software, which is based on a database structure, enabled 
participants to view each other’s online activities and at times collaborate 
on such activities, thus encouraging a constructivist perspective of 
learning. In this case most of the activities focussed on their practice as 
teachers in higher education, and how the material they studied could be 
applied in the learning environments for which they were responsible. It 
was also envisaged that, by spending time both posting and searching the 
online activities, participants would be stimulated to use the discussion 
group to follow up on issues that arose and raise points of debate and 
special interest. To aid navigation, a link to the discussion group was 
included as a standard icon on each page of the participants’ individual 
‘worksites’, and reminders to visit the group regularly were interspersed 
in the materials. Overall, though, the group was not ‘teacher-led’, but was 
a forum where participants shared their experiences, concerns and 
opinions. In this way the online environment deviated somewhat from 
that propounded by Salmon (2000), as in this context (notably that the 
participants were all academic staff) it was considered that the online 
moderator would play a lesser role. 
 
The analysis of the online discussion group that follows was part of an 
overall evaluation strategy for the subject. Other aspects included focus 
groups, discussions at face to face tutorials and both qualitative and 
quantitative questionnaires. It should also be noted that participation in 
the discussion group was not part of the formal assessment of the subject. 
However, seven of the activities submitted via the online worksite did 
contribute to the final mark (an ungraded assessment of ‘unsatisfactory/ 
satisfactory’). 
 
Evaluating online discussion 
 
With increasing interest in the evaluation of online discussion groups, 
researchers have been applying a number of tools to tease out key aspects 
of the interaction that can lead to improvements in online learning 
environments (Pitman et al., 1999). Methods for describing and analysing 
the effectiveness of online discussion will be of interest to practitioners for 
different reasons (eg, see Mason, 1992). The evaluation of the online 
discussion probably most often occurs as part of a routine subject 
evaluation, with various methods used to determine students’ perceptions 
of the experience, such as surveys, interviews and focus groups. The 
unique features of the online environment itself, however, invite other 
approaches to evaluation. One is the availability of information about the 
level of participation in a newsgroup based on statistics on the number of 
users, frequency of access, number of messages per student, the number of 
threads and messages per thread (Harasim, 1989). Although this 
information can be useful, there is a danger here in concluding that the 
level of activity in a newsgroup reflects the level of learning (Mason, 1992). 
 
The transparency of online discussion, the fact that all communication is 
easily organised, stored and retrieved, suggests that analyses of the 
discussion records themselves would be a useful approach. Despite the 
wealth of data that is available in transcript analysis, few researchers 
attempt an in-depth analysis of the content of the discussion record 
(Gunawardena, Lowe, & Anderson, 1998; Romiszowski & Mason, 1996). 
One reason for this reluctance may be the time- and labour-intensive nature 
of such an undertaking. Another reason, perhaps, is the lack of availability 
of tried and tested frameworks for evaluating the effectiveness of online 
discussion using transcripts. 
 
Of the models that have been proposed, the preferred method of analysis 
varies according to the purpose of the evaluation and the interests of the 
researchers. For example, Levin, Kim, and Riel (1990) focus on the need to 
understand the nature of the interaction among participants. This 
analytical approach leads to the construction of ‘message maps’ that 
represent the flow of communication within the group, but are not 
concerned with message content. Other researchers are primarily 
interested in evaluating the effectiveness of online discussion in terms of 
the learning process. Each takes a similar approach to analysing the 
discussion record, first breaking the transcript down into small units and 
then classifying these units according to content. In some cases the 
categories are defined retrospectively, and are tailored to capture the 
flavour of a particular forum (eg, Mowrer, 1996). Others have taken a more 
theoretical perspective, where the categories for analysis are designed a 
priori to reflect evidence about the learning process in which the 
participants are engaged. It is this level of analysis, Henri (1992) argues, 
that is needed to evaluate and guide the use of online discussion 
environments. In Henri’s model (1992, 1993) the transcripts are analysed 
according to five dimensions, these being participative, interactive, social, 
cognitive and metacognitive. Her approach is grounded in a cognitive 
view of learning, focusing on the level of knowledge and skills evident in 
the learners’ communications.  Aspects of this model have been taken up 
and expanded upon by others interested in comparing the level of critical 
thinking in face to face seminars and computer conferences (eg, Newman, 
Johnson, Webb, & Cochrane, 1997). 
 
Henri’s approach, however, will not be suited to all evaluation purposes. 
For example, Gunawardena et al. (1998) judged Henri’s model to be 
inappropriate for their analysis of an online debate. It was suggested that 
Henri’s analysis of interaction did not reflect the ‘gestalt’ of the entire 
online discussion, but rather focused on links between specific messages. 
A gestaltist approach to analysing the interaction of the entire online 
conference was central to Gunawardena et al.’s purpose to evaluate 
evidence for the social construction of knowledge. Their own preferred 
method of content analysis was developed to capture the progression of 
ideas as they were reflected at different phases of the debate: 
sharing/comparing information; identifying areas of disagreement; 
negotiating meaning and co-construction of knowledge; testing and 
modification of proposed synthesis, and agreement statements.  As the 
nature of the online discussion group that is the subject of our analysis 
was more an informal sharing of ideas and experiences, Gunawardena et 
al.’s approach did not seem to fit our purpose. 
 
Therefore, in the current context, the Henri (1992; 1993) framework was 
chosen to evaluate the effectiveness of the online discussion group because 
it allowed for analysis of a range of aspects of an online discussion: the 
level of participation in the form of usage statistics, the nature of the 
interaction between contributors, and an indication of the learning process 
through an analysis of the cognitive activity evident in the message 
content. 
 
Method 
 
Using a hard copy of the transcript from the online discussion forum, each 
message was first coded using unique identifiers for all staff and student 
contributors. Each message was then divided into ‘message units’, as per 
Henri’s (1992; 1993) method. This process proved to be somewhat 
problematic, as the definition of a message unit given by Henri is quite 
vague (ie, “as in a table of contents”, 1993, p. 9). Others have also found 
this aspect of Henri’s model difficult (Gunawardena, et al., 1998; Howell-
Richardson & Mellar, 1996).  In keeping with Henri’s (1993) argument that 
a count of the total number of messages alone does not adequately capture 
a measure of participation, a working definition of a ‘message unit’ was 
adopted here to refer to that which represented one ‘idea’. In practice, it 
turned out that message units tended to correspond to paragraphs, as this 
is how people tend to organise written communication. This 
operationalisation of a message unit allows for an analysis of differences 
between participants that communicate a little or much information in one 
message. 
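
As a rough illustration (and not the authors' actual tooling), the following 
Python sketch implements this paragraph-based working definition of a 
message unit; the sample message is hypothetical.

    import re

    def split_into_units(message_text):
        # Split on runs of blank lines, so that each paragraph of the
        # message becomes one 'message unit' (one 'idea').
        paragraphs = re.split(r"\n\s*\n", message_text)
        return [p.strip() for p in paragraphs if p.strip()]

    message = ("Hi, my name is X and I teach first year classes.\n\n"
               "The reading on learning outcomes raised a question for me.")
    print(len(split_into_units(message)))  # -> 2 message units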
 
Each message unit was classified according to the categories defined by 
Henri’s (1992; 1993) model. This approach yields both quantitative and 
qualitative data, according to five broad dimensions: participation, social, 
interaction, cognitive and metacognitive aspects. Some alterations to this 
method of analysis were made to include additional information in some 
categories. The categories used to describe level of participation (including 
social) and interaction are described in Table 1. Table 2 outlines the 
categories used to classify the message content relevant to the cognitive 
and metacognitive dimensions. 
 
Participation / social 
The participation dimension included measures of the level of 
participation, structure and type of participation. The level of participation 
was indicated by the number of messages, number of message units, and 
length of each message unit. The structure of the online discussion was 
observed by recording the day of the week and time of day for each 
message. The analysis also allowed tracking of the ‘threads’ of the 
discussion according to the subject heading of the messages. 
 
Table 1: Summary of the classifications used in the transcript analysis to 
measure participation and interaction, based on a slightly modified 
version of Henri's model (1992; 1993). 

Participation 
  Level of participation: number of messages; number of message units; 
  length of message unit (lines). 
  Structure*: day and time of posting; subject thread. 
  Type of participation*: 
    Administrative (A), eg, a question about submission of work. 
    Technical (T), eg, technical problems with access. 
    Social (S), directed at self ('Hi, my name is X and …') or at others 
    ('Hope you all have a good Easter …'). 
    Content (C), either direct ('The reading on learning outcomes …') or 
    indirect ('ideas to help with noisy students'). 

Interactivity 
  Explicit interaction: 
    Direct response (DR), eg, 'Hello X, in response to your question 
    about …'. 
    Direct commentary (DC), eg, 'I agree with X that …'. 
  Implicit interaction: 
    Indirect response (IR), eg, 'I think that the answer might be …'. 
    Indirect commentary (IC), eg, 'I agree that students …'. 
  Independent statement (IS): relates to the subject under discussion, but 
  is not in reference to a prior contribution. 

Note: * indicates modification to Henri's model (1992) 
 
An indication of the type of participation was gained by coding message 
units as either referring to some aspect of course administration (A), the 
use of the technology to access and use the online site (T), or a reference to 
the content of the subject (C). A further category was added to distinguish 
between content messages (C) that drew on subject material specific to the 
course (direct) and those relevant to the topic of teaching and learning in 
general (indirect). The fourth type of participation reflected a social purpose (S), 
and is equivalent to Henri’s second dimension (1992; 1993). The social 
category was further classified as social expression about one’s self (ie, 
personal introduction), or an expression of sociability directed toward 
others (eg, asking about others’ well-being). 
 
Interaction 
Henri’s (1992; 1993) interactivity dimension differentiates between 
contributions to the online discussion that are explicit, implicit or 
independent. Explicit interactions can be either in response to a question 
posed (DR) or a commentary on someone else’s message (DC).  In explicit 
interactions the person to whom the communication is directed is 
indicated in the message. For these explicit interactions a record of the 
sender and the person to which the message is directed was noted so that 
patterns of communication between participants might be observed. 
Implicit interactions were defined as including a response to (IR) or 
commentary on (IC) a prior message, but without indicating specifically to 
which message the contribution referred. The independent statement (IS) 
category was reserved for cases where a message contained new ideas not 
connected to others previously expressed in the discussion forum. 
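
How the codes might be recorded for each message unit is sketched 
below; the code labels follow Table 1, but the record structure and 
identifiers are hypothetical illustrations rather than the study's actual 
instrument.

    # Valid codes, following the labels in Table 1.
    PARTICIPATION = {"A", "T", "S", "C"}
    INTERACTION = {"DR", "DC", "IR", "IC", "IS"}

    # One coded message unit; 'directed_to' is recorded only for the
    # explicit interactions (DR and DC), so that patterns of communication
    # between participants can be traced.
    unit = {
        "author": "student_07",
        "participation": "C",            # content-related unit
        "interaction": "DC",             # direct commentary
        "directed_to": "student_03",     # None for IR, IC and IS units
    }
    assert unit["participation"] in PARTICIPATION
    assert unit["interaction"] in INTERACTION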
 
Cognitive and metacognitive dimensions 
The message units defined as relevant to the content of the subject (C) 
were then classified according to Henri’s cognitive and metacognitive 
dimensions. Examples of how each of these categories was interpreted in 
the current analysis are given in Table 2 (see Henri, 1992; 1993). 
 
The cognitive skills dimension is based on a taxonomy of cognitive 
processes and skills thought to reflect the nature of the learning process 
(see Henri, 1993). The first classification outlines five levels of critical 
thinking: elementary clarification (introducing a problem and its parts); in 
depth clarification (analysis indicates added insight and understanding of 
the nature of the problem); inference (evidence of inductive or 
deductive reasoning); judgement (making a judgement, summing up); and 
strategies (proposing what is needed to implement a solution). 
 
Table 2: Summary of the classifications used in the transcript analysis to 
assess the cognitive and metacognitive dimensions of Henri’s model (1992; 
1993). 
 

COGNITIVE 

Critical thinking 
  Elementary clarification: introduce a problem; pose a question; pass on 
  information without elaboration. 
  In depth clarification: analyse a problem; identify assumptions. 
  Inference: concluding based on evidence from prior statements; 
  generalising. 
  Judgement: expresses a judgement about an inference, or about the 
  relevance of an argument, theory, or solution. 
  Strategy: proposes a solution; outlines what is needed to implement the 
  solution. 

Information processing 
  Surface: repetition without adding new information; statement without 
  justification; suggesting a solution without explanation. 
  In depth: brings in new information; shows links; solutions proposed 
  with analysis of possible consequences; evidence of justification; 
  presents a wider view. 

METACOGNITIVE 

Knowledge 
  Person: comparing self to others as a cognitive being, eg, student 
  perspective vs. teacher perspective. 
  Task: showing an awareness of one's approach to a cognitive task, eg, 
  preparing a lecture. 
  Strategy: comment on strategies used to reach an objective and assess 
  progress, eg, 'I find I do X when trying to …'. 

Skills 
  Evaluation: question about the value of one's ideas or way of going 
  about a task, eg, 'I do not have a good understanding of …'. 
  Planning: evidence of organising steps needed and prediction of what is 
  likely to happen. 
  Regulation: evidence of implementing a strategy and assessing progress. 
  Self awareness: recognising one's feelings and thoughts about the task, 
  eg, 'I know I feel …'; 'I found learning about … interesting'. 

 
The second aspect of cognitive skill to be evaluated according to Henri’s 
(1992; 1993) method was the level of information processing evident in the 
message content, classified according to the dichotomy of surface versus 
deep processing. In depth processing reflects organisation and critical 
evaluation of information, the opposite of this being surface processing 
indicated by repetition and the absence of evidence of elaboration and 
justification. 
 
Evidence of participants' metacognitive processes was observed by 
including Henri’s (1992; 1993) categories of metacognitive knowledge and 
metacognitive skills. Metacognitive knowledge refers to declarative 
knowledge about the person (what is known about the person as a 
‘cognitive being’); the task (appreciation of the task and information 
available); and strategies used (how a cognitive task is successfully 
completed). Expression of metacognitive skills reflects knowing how to 
assess one’s knowledge, skills and strategies (evaluation), predict and 
organise what is needed to complete a cognitive task (planning); initiate 
and supervise progress toward reaching one’s objectives (regulation); and 
recognise and understand one’s feelings and thoughts about the task (self-
awareness). 
 
Scoring 
The first author completed the initial coding of data. A reliability analysis 
was undertaken, with a random sample of one-third of the messages being 
coded by an independent researcher. A comparison of the results showed 
the level of agreement between the two scorers was 95% on type of 
participation, 76% on type of interaction, 44% on critical thinking skills, 
and 95% on information processing. The reliability of classifying message 
content for the five levels of critical thinking was quite poor. On closer 
inspection the majority of discrepancies were between neighbouring 
categories, in particular between levels of clarification (elementary and in-
depth), and between the categories showing evidence of drawing 
conclusions (inference and judgement). A reanalysis of the scoring 
collapsing the categories into three (clarification, conclusion, strategy) 
resulted in 68% agreement between scorers. The number of messages 
showing evidence of metacognitive aspects in the sample was very small, 
and even for these messages there was poor agreement. Limitations of this 
approach are taken up further in the discussion. 
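
A minimal sketch of this reliability check, assuming the two scorers' 
codings are held in parallel lists: simple percentage agreement, 
recomputed after collapsing the five critical thinking categories into 
three. The example codings are invented for illustration.

    def percent_agreement(coder_a, coder_b):
        # Percentage of units assigned the same category by both scorers.
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return 100.0 * matches / len(coder_a)

    # Collapse neighbouring categories that the scorers often confused.
    COLLAPSE = {"elementary clarification": "clarification",
                "in depth clarification": "clarification",
                "inference": "conclusion",
                "judgement": "conclusion",
                "strategies": "strategy"}

    a = ["inference", "judgement", "elementary clarification"]
    b = ["judgement", "judgement", "in depth clarification"]
    print(percent_agreement(a, b))                       # 33.3 (fine-grained)
    print(percent_agreement([COLLAPSE[x] for x in a],
                            [COLLAPSE[x] for x in b]))   # 100.0 (collapsed)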
 

Results 
 

Level of participation 
 

From 157 messages, a total of 271 message units were defined. On average, 
messages contained 1.8 units, each unit being on average approximately 
eight lines (or 70-80 words). Out of a total enrolment of 38, 25 students 
posted at least one message to the discussion forum, five of these students 
posting more than 10 messages. Five staff contributed to the discussion 
forum, with two staff members in the top four most frequent contributors.  
Overall, eleven (out of 30) contributors, including two staff members, 
accounted for 80% of the total number of messages posted. Students 
contributed 74% of the messages posted. 
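
These participation figures are simple tallies over the coded messages; a 
sketch of the computation, assuming each message records its author and 
its message units (the sample data are hypothetical):

    from collections import Counter

    messages = [{"author": "student_07", "units": ["u1", "u2"]},
                {"author": "student_03", "units": ["u1"]},
                {"author": "staff_01", "units": ["u1", "u2"]}]

    total_units = sum(len(m["units"]) for m in messages)
    print(total_units / len(messages))   # mean units per message (about 1.7 here)

    per_author = Counter(m["author"] for m in messages)
    print(per_author.most_common())      # ranks the most frequent contributors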
 
Structure 
 
The majority of 
responses were posted during usual business hours (8.00 am – 6.00 pm) 
between Monday and Friday.  Tracking the pattern of messages over the 
semester showed that the highest number of postings was seen in Weeks 1 
and 2 (n = 20 and 27, respectively), with the number of contributions 
dropping to about 12 per week for Weeks 3 – 11, and sharply declining 
thereafter. Although not formally analysed, the turn-around time for 
responses to messages was quite short (eg, in a day or so, especially by 
staff). Overall, there was a clear impression of the 'lifespan' of a subject 
lasting about two weeks. Later in the semester, a specific posting by staff 
that it was acceptable to post to ‘old’ messages may have encouraged 
extension of the ‘life’ of some subjects. 
 
A total of 44 new subject threads were posted to the discussion group, but 
27% of these were never referenced further. Eight threads received five or 
more referents and accounted for almost 50% of the total discussion. The 
most discussed subjects were Bloom's taxonomy (n = 6), Learning 
Outcomes (n = 6), Comparing responses (n = 6), Humanistic theories (n = 
7), Teachers’ stories (n = 9), Kolb’s Learning cycle (n = 9), Objectives (n = 
13), and Phenomenography (n = 21). These subjects track quite closely the 
subjects listed in the module outline for the course, with most topics being 
obviously relevant to the weekly activities being conducted in the 
InterLearn environment. It is worth noting that writing about the 
phenomenography topic in the online discussion forum was listed as one 
of the online learning module activities. 
 
Type of participation 
 
Five per cent of the message units were administrative in nature (eg, 
question about readings; staff comment on course related information). 
Only about 10% of the message units were considered to be social in 
nature; two thirds of these relayed information about the person posting 
the message, and the other third were directed to other members of the 
group. Eight per cent of postings were either questions or answers 
about the use of the technology needed to access the discussion forum or 
the InterLearn online environment. 
 
The majority of the message units (76%) were related to the content of the 
course. Of these content message units, 75% were judged to be direct, 
addressing the subject material covered in the learning modules. Most of 
the indirect messages brought in other work-related problems (eg, 
latecomers to classes) or items of general interest, such as newspaper 
articles. 
 
Interactivity 
 
Using Henri’s (1992; 1993) framework, an analysis of the message units 
shows that 74% of the message units posted could be classified as an 
interactive response to a prior contribution. Of these, just under half (47%) 
were classified as explicit interactions, 17% were implicit interactions, and 
36% were classified as independent statements. Therefore, most of the time 
respondents were directly engaging with the messages posted by others, 
and about two thirds of these direct responses were commentaries rather 
than responses to questions. Teaching staff in particular tended to direct 
their messages in explicit interactions. (Note: Most of the interactions were 
evidenced by reference to a message in text or by pasting in part of the 
message to the sender’s posting.) 
 
For those explicit interactions, an analysis of “who was responding to 
whom” was undertaken to determine if there were any distinctive patterns 
of communication (ie, either between staff and students or between 
students themselves). These interactions were fairly evenly spread across 
participants, with the most active participants responding to a range of 
other participants. Five students had between 6 and 13 referents, and three 
of these students were in the top six contributors to the conference. In 
other words, some students were reinforced for their contributions more 
than others were. As you would expect, the more often you contribute, the 
more reinforcement you get (although this is not always the case). 
trends emerged in terms of some students responding quite often to a 
particular student’s messages and vice versa (creating ‘interaction dyads’). 
The teaching staff appeared to respond to a range of participants, to the 
most active more often, but also seemed to make an effort to respond to 
new participants. On only four occasions did students directly address a 
staff comment. 
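
The 'who was responding to whom' analysis amounts to tallying (sender, 
addressee) pairs over the explicitly directed message units; a sketch with 
hypothetical identifiers:

    from collections import Counter

    # (sender, addressee) pairs recorded for explicit interactions (DR/DC).
    explicit = [("student_07", "student_03"), ("student_03", "student_07"),
                ("student_07", "student_03"), ("staff_01", "student_07")]

    dyads = Counter(explicit)
    print(dyads.most_common(1))   # repeated pairs suggest 'interaction dyads'

    referents = Counter(addressee for _, addressee in explicit)
    print(referents)              # how often each participant was responded to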
 
Content analysis 
 
The message units defined as related to the content of the subject were 
analysed according to Henri’s (1992; 1993) cognitive and metacognitive 
dimensions. 
 
Table 3: Summary of analysis of content message units by percentage of 
units at each level of cognitive skill, and number of units demonstrating 
metacognitive knowledge and skills. 
 
Cognitive skill: critical thinking / reasoning 
  Elementary clarification: 29% 
  In depth clarification: 24% 
  Inference: 7% 
  Judgement: 19% 
  Strategies: 10% 
  Unclassified: 11% 

Cognitive skill: level of information processing 
  Surface: 22% 
  Deep: 67% 
  Unclassified: 11% 
 
Cognitive skill 
Table 3 shows a breakdown of these message units into the levels of 
critical thinking and information processing categories, expressed as a 
percentage of the total number of content message units. Approximately 
one half of the message units were classified as either elementary or in-
depth clarification. This result reflects the way in which the participants 
used the online discussion forum to bring in examples from their own 
teaching and other sources, to pose problems and discuss their 
experiences. The inference and judgement categories mainly reflect 
responses that were summative in nature, or reflected a statement of the 
author's view on a particular issue, rather than an exploration of a 
problem. These messages tended to communicate the authors’ evaluation 
of the usefulness or validity of a particular theoretical approach or 
concept. The strategies category often represented practical solutions 
offered by participants to others on the problems raised in their own 
teaching. The breakdown of classifications is representative of both staff 
and students’ contributions. 
 
Information processing 
This category was used to classify responses as examples of either 
superficial or in depth processing (see Table 3). In the context of the 
present analysis this classification was not found to be very 
discriminating. Given the advanced level of discussion evident in the 
forum, the surface processing category (22%) was used much less often 
than the in depth processing category (67%). Messages classified as 
evidence of surface level processing involved mostly examples where 
participants contributed information about extra resources without 
elaboration. Messages demonstrating deeper levels of processing involved 
relating new information to their experiences, critically evaluating ideas, 
and exploring strategies. 
 
Metacognitive knowledge and skills 
As Henri (1993) observed, metacognitive processes are difficult to assess 
with more traditional teaching methods, but with online discussion it is 
possible that participants will contribute more of their reflections on their 
own learning. Some evidence of this kind of metacognitive activity was 
seen in the discussion forum, although relatively infrequently (16%). The 
majority of metacognitive statements occurred in the categories for 
knowledge about the person (n = 12) or evaluation of skills (n = 23). Those 
responses 
that recognise the person as a cognitive agent in comparison to others 
tended to be reflections on personal learning style. Evaluation statements 
communicated the assessment of the person’s approach to a task and the 
efficacy of that approach (eg, reflections on the effectiveness of teaching 
strategies). The self-awareness category (n = 9) represented participants' 
reflection on how much they felt they were learning with the activities, 
and their feelings associated with these learning experiences. However, 
the low incidence of messages reflecting metacognitive awareness, and the 
poor reliability in scoring these aspects suggest that this part of the 
analysis should be viewed with caution. 
 
Discussion 
 
Overall, the approach based on Henri’s (1992; 1993) model of analysis 
provided a useful way of coming to an understanding of how the online 
discussion forum was operating in the Designing for Learning subject. The 
analysis indicated that the forum was successful in providing a medium 
for ‘lively’ interaction between students and staff, verifying the optimism, 
as expressed by a quote from one student used in the title of this article, 
that the discussion did indeed ‘go somewhere’. The discussion was highly 
interactive as evidenced by the majority of direct responses and 
commentaries to messages, rather than many postings of independent 
statements. The forum was used primarily for discussion about topics 
raised by the content of the subject Designing for Learning. The structure of 
the online discussion closely paralleled the module map for the course and 
the topics reflected the weekly activities being conducted by students in 
the InterLearn environment. In that sense, the evaluation supports the idea 
that the interaction inherent in the InterLearn activities promoted further 
interaction between participants in the online discussion forum. Generally 
speaking, participants were using the discussion forum in two ways: to 
explore content covered in the course, and to discuss practical problems 
and swap strategies for improving the participants’ own teaching practice. 
The discussion forum was evidently not used for social interaction, nor 
very often for administrative or technical support. 
 
The analysis of participation levels indicated that the discussion forum 
was used often by a core group of students who contributed regularly. If 
engaging in discussion using this forum had been part of the formal 
assessment, then this level of participation would have been more of a concern 
(eg, 11 contributors, including two staff members, posted 80% of the 
messages). It was interesting that directions to contribute to the forum 
about Phenomenography as part of the InterLearn online activities led to 
this subject being the most frequently discussed. Therefore, one way of 
increasing the interaction in the discussion forum may be to introduce 
more of these explicit links between the online activities and the 
discussion forum. 
 
Another influence on the level of activity may have been the stance 
adopted by the teachers of the subject, which as mentioned was more 
reactive than proactive. That is, teachers for the most part responded to 
issues and queries raised by students, including both content and 
administrative matters. In fact, as the discussion group progressed, the 
teachers decided to usually delay posting their messages, especially to 
content matters, as these sometimes had the effect of closing off a topic 
rather than feeding it. 
 
The lack of participation by some students may have been due to technical 
difficulties, and there was some indication that a few students found 
access to the discussion forum possible for the first time quite late in the 
semester, with access from home being a problem for a small minority. It 
is also possible that, given that the structure of the discussion closely 
followed the module timetable, lack of participation may have been due to 
some students finding it hard to keep pace with this timetable, and 
therefore with the subject under discussion. It is interesting that a message from a staff 
member that it was appropriate to post to an old topic quite late in the 
semester led to a spurt of ‘revisiting’ old ground. Also, there were a few 
occasions where students would logon and respond to quite a few 
messages at once, in a manner of ‘catching up’ with the discussion. 
 
It is worth noting that feedback from a quantitative evaluation exercise 
conducted at the conclusion of the subject indicated high levels of student 
satisfaction. In particular, over three quarters of the participants reported 
that the online learning materials stimulated their interest most or almost 
all of the time, with a slightly higher proportion agreeing that ‘The way 
this subject is taught is appropriate for the material’. Further, over 60 per 
cent reported that ‘The interactivity of the online materials fostered the 
sense of a learning community’ for most or almost all the time. 
 
Conclusion and implications for practice 
 
For our context and purposes, the application of Henri’s (1992; 1993) 
framework for analysing the transcripts of computer conferences 
demonstrated that this method can be a useful part of evaluating the 
effectiveness of online discussion. Assessing the level and nature of the 
participation, and in particular the kind of interactivity, provided a sense 
of how the participants were using the discussion forum. The analysis of 
the content of the discussion also contributed to this understanding, 
although some classifications were more problematic than others. For 
example, as the participants were evidently contributing at an advanced 
level to the discussion, discrimination between the deep and surface level 
of information processing seemed less relevant in this context. The 
analysis of the critical thinking skills showed that students were using the 
discussion to seek clarification about their ideas and analyses of problems, 
as well as exchanging views on the value of different learning theories, 
and suggesting strategies to overcome problems. In the context of the 
current subject, the nature of the activities designed for the InterLearn 
environment encouraged reflection on participants' own teaching and 
learning experiences. Some evidence of these reflections did show up in 
the discussion in terms of comparing personal learning styles and self-
evaluation of approaches to teaching and learning, although as noted these 
kinds of metacognitive aspects of the discussion were not easy to reliably 
classify. 

Even though the process of transcript analysis can provide useful data for 
exploring the way in which participants are contributing to an online 
conference, it is not without its problems. The classification of message 
content, in particular, is necessarily subjective. In this case, low reliability 
in making finer distinctions between levels of critical thinking and 
metacognitive aspects as defined by Henri (1992; 1993), limits the 
conclusions that can be drawn. That one scorer was familiar with the 
content of the subject whereas the other was not may have contributed to 
the discrepancies. Neither person was involved in the teaching of the 
subject, nor was either a participant in the forum itself. These issues highlight the 
need to consider the purpose of the analysis, the experience and role of the 
persons analysing the transcripts, and the suitability of the framework 
chosen for analysis, as potentially impacting on the interpretation of the 
results. On the latter point, it is possible that the cognitive focus of Henri’s 
model is more easily applied to an online environment where the 
discussion focuses on structured problem solving activities, compared to 
the unstructured nature of this forum. 
 
Although the discussion group was unstructured in itself, it needs 
emphasising that it was part of a highly structured study environment. 
Thus, for example, some of the messages that might in other contexts be 
posted by the moderator to the group were in this context sent to 
participants in a weekly global email. This was designed to overcome a 
basic potential weakness of discussion groups, in that participants have to 
first of all visit them to begin interaction! However, most people regularly 
check their email, and thus this method of communication provides 
an excellent means of motivating and encouraging students to visit the 
group, and of alerting them to key aspects of the discussion. Other parts of the 
overall evaluation confirmed the value of the weekly global email 
messages. The other aspect of the context that needs to be emphasised was 
that the discussion group was what might be called a second level of 
online interaction. The primary (and assessable) component was the 
shared online activities – in general the discussion group was there for 
participants to discuss issues that arose in attempting the online activities. 
 
The lesson from this is the clear reminder that a discussion group, to be 
effective, must be a key and integral part of the learning environment. 
Students will visit and use such a group only if they perceive that it helps 
their learning and adds value to their course of study. This may be even 
more important than, or at least of equal importance to, the role of the 
moderator of the group. If the students aren’t motivated to visit an online 
discussion group, the moderator who relies solely on posting to the group 
will not be ‘heard’. 

Overall, analysing the transcripts from the discussion forum provided 
useful feedback to the course organisers in the ongoing improvement of 
the subject. Importantly, it affirmed the impression that the discussion 
group had been an effective part of the learning environment. It also 
helped in the process of making adjustments to the subject for its second 
intake. These adjustments included modifications to the print based study 
guide as well as changes to the online environment. Improvements to 
InterLearn are also taking place, on the basis of both student feedback and 
staff experience. Finally, the analysis became part of an evaluation 
database available to the developers of the other subjects in the course. 
 
Acknowledgments 
 
The authors wish to acknowledge Dr Karola von Baggo, Monash 
University, for her contribution to the analysis of data in this project. 
 
References 
 
Bates, A. T. (1995). Technology, open learning and distance education. London: Routledge. 

Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1998). Transcript analysis of computer-mediated conferences as a tool for testing constructivist and social-constructivist learning theories. In Distance Learning 1998: Proceedings of the Annual Conference on Distance Teaching & Learning (pp. 139-145). August 5-7, Madison, WI. 

Harasim, L. (1989). Online education: A new domain. In R. Mason & A. Kaye (Eds.), Mindweave: Communication, computers and distance education (pp. 50-57). Oxford: Pergamon Press. http://www-icdl.open.ac.uk/mindweave/chap4.html [verified 11 Nov 2000] 

Harasim, L., Hiltz, S. R., Teles, L., & Turoff, M. (1995). Learning networks: A field guide to teaching and learning online. Cambridge, Massachusetts: The MIT Press. 

Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden Papers (pp. 117-136). Berlin: Springer-Verlag. 

Henri, F. (1993). The Virtual University: Collaborative learning through computer conferencing. Workshop, Monash University, July 1993. 

Howell-Richardson, C., & Mellar, H. (1996). A methodology for the analysis of patterns of participation within computer mediated communication courses. Instructional Science, 24, 47-69. 

Levin, J. A., Kim, H., & Riel, M. M. (1990). Analyzing instructional interactions on electronic message networks. In L. M. Harasim (Ed.), Online education: Perspectives on a new environment (pp. 185-213). New York: Praeger. 

Mason, R. (1992). Evaluation methodologies for computer conferencing applications. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden Papers (pp. 105-116). Berlin: Springer-Verlag. 

Mason, R. & Weller, M. (2000). Factors affecting students' satisfaction on a web course. Australian Journal of Educational Technology, 16(2), 173-200. http://www.ascilite.org.au/ajet/ajet16/mason.html 

Mowrer, D. E. (1996). A content analysis of student/instructor communication via computer conferencing. Higher Education, 32, 217-241. 

Murphy, D. & Webster, L. (1999). Partnership in learning: An interactive online software tool. Paper presented at the CREAD Conference on Education and Partnerships, University of British Columbia, Canada, September 21-23. http://cread.cstudies.ubc.ca/proceedi.htm [viewed 8 Jan 2000, verified 11 Nov 2000] 

Newman, D. R., Johnson, C., Webb, B., & Cochrane, C. (1997). Evaluating the quality of learning in computer supported cooperative learning. Journal of the American Society for Information Science, 48, 484-495. 

Owen, M. (2000). Structure and discourse in a telematic learning environment. Educational Technology & Society, 3(3). http://ifets.ieee.org/periodical/vol_3_2000/b04.html [verified 11 Nov 2000] 

Pitman, A. J., Gosper, M. & Rich, D. C. (1999). Internet based teaching in geography at Macquarie University: An analysis of student use. Australian Journal of Educational Technology, 15(2), 167-187. http://www.ascilite.org.au/ajet/ajet15/pitman.html 

Romiszowski, A. J., & Mason, R. (1996). Computer-mediated communication. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 438-456). New York: Simon & Schuster Macmillan. 

Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page. 
 

Wendy McKenzie, Department of Psychology, Monash University 
 

David Murphy, Centre for Higher Education Development, Monash 
University