Article

“Practising What We Preach”: Social Constructivism and Course Design

Rita Headington and Alison Hales
School of Education

When we teach students, do we always practise what we preach? Do we use what we know about our students, and our discipline, to structure courses and engage and support our learners?

 
In primary initial teacher education (ITE) we have identified a contradiction between what we say and what we do. We teach our students about teaching, learning and assessment in the primary school and emphasise the importance of building on children’s previous learning and using collaborative techniques in their classrooms. We tell them to engage children in learning through activity and discussion. In other words, we teach them to use a ‘social constructivist’ approach. And yet our course design has used traditional lectures and seminars, where students have been required to work independently and show their learning through summative assessments. This is not social constructivism in action. We have not been using what we know about effective teaching and learning. We have not practised what we preached.
 

Our students: what we know and how they learn

We expect student teachers to know about the children they teach and their learning needs. But how much do we know about our students and their learning needs?

 
We know it is not unusual for students now to enter higher education with a diverse range of prior experiences and academic qualifications (Yorke and Longden 2007). Student employment often runs in parallel, and sometimes competes, with academic studies. Students use a range of technologies to study individually and communicate with peers and tutors. They draw upon the escalating global resources available at the click of a mouse.

 
Building upon the diversity of individual experience, a social constructivist approach seeks to develop prior understanding. It engages students in deep learning (Biggs 2003) by providing clear links with previous learning and giving opportunities for study and research. Interaction is essential to this. Through interaction with ‘able others’, such as tutors and fellow learners, students are supported in moving beyond the learning they can achieve as individuals towards their potential, through the ‘Zone of Proximal Development’ (Vygotsky 1978) (see figure 1).



Compass: The Journal of Learning and Teaching at the University of Greenwich Issue 2, 2010 

 
Actual Development: the learning that can be achieved without support

The Zone of Proximal Development: the zone between actual and potential development

Potential Development: the learning that can be achieved with the support of an ‘able other’

Figure 1: The Zone of Proximal Development (Vygotsky 1978)

 

We know that learners gain motivation and encouragement through involvement with decision-making processes and that frequent and constructive feedback on learning enables their formative development (HEA 2009a; Nicol 2008). ‘Assessment for learning’, where assessment is used formatively, as a vehicle for learning rather than solely as a summative end goal, is based on social constructivist principles. This has been evidenced in schools (QCA 2009) and in higher education through projects and centres of excellence including REAP, ASKe and ESCalate (HEA 2009a). The approach promotes greater use of feedback and feed-forward (Brown 2007) by tutors, and through student peer- and self-assessment.

 
However, the National Student Survey has highlighted problems with the quality of assessment in higher education, such as the clarity of assessment criteria, the promptness of feedback and whether the quality of feedback is supporting learning (Unistats 2009). Tutors’ written comments, in their language, clarity, presentation and relationship with the criteria, are increasingly under scrutiny from students, who question their usefulness (Weaver 2006; University of Greenwich 2009).

 
We know that technology is now embedded in the lives of students, who recognise the importance of discriminating when sifting and reviewing electronic materials (JISC 2006; SEEL 2008). While Prensky’s (2001) digital divide between tutors and students may be narrowing, Wesch (2007) and Drexler (2008) have demonstrated that Web 2.0 technology can be used more effectively in higher education by exploiting the wealth of available materials and opportunities for interaction. The increased availability of Web 2.0 technology, and its ease of use, can offer direct synchronous and asynchronous student-tutor interaction to suit the working patterns of both parties and support the dialogue fundamental to the social constructivist approach to learning. Similarly, the effective use of technology has the potential to enhance aspects of feedback and assessment, through virtual



 

 

 
learning environments, online submission and marking, audio-feedback and personal response systems (Nicol 2007; University of Greenwich 2009).

 
Alongside these developments, Race (2006) tells us that students are becoming more strategic in their approach to completing higher education qualifications, and it is evident that, where conflicts between studies and the demands of personal life prevail, a goal-orientated approach drives students’ engagement with assessment (Boud 2000). The criticality that Barnett (1997) identifies as essential to degree studies can give way to superficiality. Is higher education adding to this by relying on traditional approaches to learning and assessment without acknowledging the diversity of learners’ needs and experiences and the potential of technology in learning?

 
The revalidation of an undergraduate initial teacher education (ITE) programme (University of Greenwich 2008) gave us the opportunity to address this question. We wanted to critique Course A in order to develop teaching and assessment methods that would provide the new Course B with academic rigour, building upon what our students know and supporting how they learn.
 

A critique of Course A

Course A was taken by all first year undergraduates during 2007–2008. Its assessment was wholly summative and occurred after the taught element, which followed a traditional model of lectures, seminars and self-study (see figure 2).

 
 

Lecture slides available and key texts identified on Online Campus (OLC) virtual learning environment (VLE) up to a week before lecture*

Lecture input (with printed articles)

Tutor-led seminar

Student self-study and maintenance of file

Course A assessment:
● Peer and tutor assessment of file (mid Term 3)
● Timed examination (late Term 3)

* Students were not required to print lecture slides but most chose to do so and commented unfavourably if this facility was late or not provided.

Figure 2: Course A model 2007–2008



 

 

 
The course required student attendance one day per fortnight and focused on 10 different topics within primary education theory and practice. Slides, and details of key texts, were made available via a virtual learning environment, the ‘Online Campus’ (OLC), a week before the lecture, for students to download and read. Handouts of articles were provided at the beginning of each lecture, for reference in the seminar and/or as follow-up material. Ten whole-cohort lectures, of approximately 120 students, gave introductions to the topics of study and, in seminar groups of 30, students reviewed each lecture’s key points, with a tutor leading discussion and the students drawing upon their limited experience of theory and practice. The timetabling of a one-hour lecture at 9am and a two-hour seminar later in the day impacted on students’ engagement with the course. Half the cohort moved immediately from lecture to seminar, without time to read the literature provided. The other half had a three-hour break between the two sessions but were not always motivated to use this time to acquaint themselves with the handouts. Students who did not attend the lecture were likely to find the slides and the seminar of limited use without undertaking further reading.

 
Learning within the course relied upon progressive engagement with the 10 separate but linked topics, through attendance and personal study. Students were required to demonstrate their engagement by maintaining a file of notes from lectures, seminars and readings. The file was presented as an assessment item for Course A towards the end of the course. During 2006–2007 the item was deemed a pass if the content of all sections was complete. It was evident that students who had missed sessions photocopied notes from their peers to ensure their files were complete, often with little indication of engagement with the notes.

 
Tutors’ evaluations in previous years had identified that the assessment of the file, with its pass/fail outcome, had led some students to work at a surface level, and consequently a peer assessment component was introduced during 2007–2008. This required students, mid-way through the year, to share their files and work together to identify ‘success criteria’, that is, specific evidence of achievement that exemplified the file’s assessment criteria (Headington 2010a). From this, a sample of student work was tutor-marked to provide generic feedback for the cohort. Towards the end of the year students worked in pairs to identify the content of each other’s files and give written feedback on their quality. Tutors became responsible for moderating a sample of work and any items in dispute, and for awarding a pass/fail judgement on the basis of the evidence provided. This was well received by most students. The completed marking proformas demonstrated that the majority of students offered honest, constructive and developmental feedback to their peers and that, by working in pairs, a high level of discussion had occurred about each file. A minority of students were dissatisfied with the procedures and one student urged changes to enable students to give anonymous feedback to peers in groups other than their own. Interestingly, this request came from the seminar group that had involved the largest proportion of tutor moderation, due to disputes between those giving and receiving feedback. The same group had found difficulties in working together during seminars through the year. There seemed to be a lack of trust, which became increasingly problematic when students were given a perceived position of power over another’s assessed work (Bloxham and West 2004; Boud and Falchikov 2006).

 
Course A had, for several years, culminated in a two-hour written examination (100% of the course grade). It comprised one seen question, given to students a few weeks beforehand, and one unseen question from a choice of four to five topics. Students were encouraged to discuss the seen question and revise together for the examination. A lecture provided examination-specific guidance



 

 

 
and a seminar exemplified revision methods and expectations. Students were also directed towards the university’s study skills support services. For some, this was the first examination taken for a number of years.

 
Student evaluations of Course A identified their overall satisfaction with its content and progression through topics. However, weaknesses in the content and quality of end-of-year examination scripts provoked discussion amongst tutors, who questioned students’ depth of study and level of engagement with session content and the literature. Tutors remarked that students chose to abandon rather than confront demanding texts, and the summative model of assessment left no opportunity to redevelop provision to benefit the cohort concerned.

 
A review of the students’ and tutors’ evaluations of Course A, alongside the literature of effective teaching, learning and assessment, showed us that we were not practising what we preached. This provided sufficient argument for the reframing of a new course (Course B) within the revalidated degree.
 

How Course B was developed to meet students’ learning needs

We wanted the course to provide our ITE students with a pedagogical model that could be analysed, used and applied to the primary classroom. Course B was therefore conceived with the view that it should enable students, new to higher education, to engage with its content by building upon personal experience, encouraging research and promoting interaction with, and feedback from, ‘able others’ (Vygotsky 1978). It aimed to achieve what Biggs (2003) called “constructive alignment”, where learning activities and assessment tasks complement the course’s learning outcomes. We intended “students to take responsibility for their own learning ... establishing trust between student and teacher” (HEA 2009a), by providing authentic learning experiences that would reflect their professional contexts and enable students to work with one another towards a common goal.

 
Reflecting practice in the professional environment, there would be an explicit use of ‘assessment for learning’ (QCA 2009), where the students’ progress as learners was fundamental. In essence, Course B aimed to ‘practise what we preached’: a social constructivist approach to learning.

 
It was vital to build a supportive and open learning community which acknowledged and built upon the strengths of its individuals (Boud 2000), who came from a wide range of experiences and starting points. We felt this approach would be both appropriate and productive to embed within a Level 4 course, providing a foundation for future years.

 
Course A had been a 15 credit Level 4 course, but the revalidated programme facilitated the birth of Course B, a 30 credit Level 4 course. The consequent increase in study hours in this area, formed by decreasing the number of Level 4 courses, provided an opportunity to extend or deepen the content of the previous course. Staff consensus was to opt for depth of understanding to provide a firm foundation to the students’ degree studies and school-based experiences. This enabled a substantial redevelopment of the course’s structure (see figure 3).

 
Course B, described by the Head of Department as the ‘trunk and branches’ of the undergraduate ITE programme, provides a framework for studying curriculum areas, undertaking school-based placements and exploring key pedagogical issues. It offers a place where education theory can be explored independently of subject demands.



 

 

 
In contrast to the more traditional model of Course A, Course B was developed to engage students with their learning through activity and interaction, within and between taught sessions. It was formed of 10 units delivered using a ‘blended learning’ approach, where each unit comprised an introductory lecture, readings, online workshops and directed activities, formative submissions and a seminar in which students presented and discussed their research and studies.
 

 
 

Introductory lecture with slides, readings and workshop activities available to view/download beforehand

Group presentation: research and preparation, including group tutorial

Reading group activities

Online workshop, readings and activities, including written submissions by groups/individuals (reading logs, reflections on school-based tasks)

Tutors’ written feedback to groups and individuals on online submissions

Seminar (usually two weeks after introductory lecture):
Group presentation
Tutor-led activities and discussion

Course B assessment:
● Joint report on school-based observations/learning theory (early Term 2)
● Peer assessment of portfolio/reflective commentary (late Term 2)
● Timed examination (late Term 3)

Figure 3: Course B model 2008–2009



 

 

 
Assessment methods were critical to the course’s constructive alignment, and the doubling of its credit rating gave us the chance to develop this aspect of the course’s design. At revalidation, the programme team had determined that a summative, end-point examination would remain a course requirement. However, the ‘high stakes’ nature of an examination, which tends to favour learners who can express themselves clearly and concisely within its parameters, was complemented by two more creative summative assessment items at intervals during the course, and by ongoing formative assessment. This would provide an opportunity for different learning styles, enable ‘catch up’ for those who missed sessions and identify students with difficulties at points where errors and misconceptions could be addressed.

 
The first item, a joint report (worth 25% of the course grade), was designed to provide an authentic learning and assessment experience; pairs of students were required to describe, compare and contrast specific aspects of their school placements in relation to their academic studies. The second item, a portfolio of evidence (worth 25%), mirrored the peer-assessment pilot of the previous year but was enhanced to ensure that no student was disadvantaged by subjective peer comments. The submission required students to take part in the peer assessment activity and to complete an evaluation of the experience of giving and receiving peer feedback. The final examination (worth 50%) would remain as in previous years, with one seen and one unseen question.

 
To further support students’ learning, formative assessment items, which focused on readings, presentations and directed activities, were woven through the course to build an environment where feedback from tutors, peers and self was a regular and accepted feature that would help to construct and shape the learning community. During the first seminar, students were asked to form ‘reading groups’ (of two to three students) and ‘presentation groups’ (of four to six students). These were not necessarily related. The reading group members were required to work together after each lecture to tackle the readings provided, by completing a ‘reflective log’ which included a full Harvard reference, a summary statement and the main points learned. Samples of these were submitted to tutors for formative feedback via the OLC Submission Centre as part of the students’ directed activities. This provided the students with immediate peer group support, and the tutors were able to diagnose difficulties, provide small group feedback to students and use issues arising to inform generic feedback and further teaching. For example, we were able to address students’ difficulty with Harvard referencing, an area that had consistently caused problems in previous cohorts, before students were penalised for errors in formal submissions.

 
The presentation groups required students to work together to lead a 15–20 minute presentation, on an area of their choice within the unit’s topic of study, to the rest of their seminar group, who became the audience. The students needed to work together to research, design and present, and were entitled to a tutorial with the seminar tutor to guide their work. When presentation groups were formed, students discussed the areas for feedback and comments that they would value from their peers. Ideas from across the cohort were collated to form a ‘presentation group feedback’ proforma: success criteria identified by peers. This was distributed to the audience, completed and returned to presenters at the end of their presentations. This peer assessment provided initial experiences of giving and receiving feedback against criteria in a supportive group situation. It aimed to alleviate anxiety and build trust between students in advance of the peer assessment of the portfolio of evidence, in which students would need to comment on the work of others in their seminar group.



 

 

 
The OLC submission centre provided individual students and groups with the opportunity to gain direct feedback from tutors on their writing and reflection. The directed activities, built into most of the online workshops, included a submission of approximately 300 words. The submissions, based on readings or links between theory and practice, enabled students to identify Level 4 expectations and be guided by ongoing constructive criticism from their tutor. As formatively assessed items, the submissions were chosen to cover key issues within the course and overtly build towards students’ summative assessments, for example, requiring students to read and summarise key texts or comment on the design of classroom environments. The submissions enabled tutors to gain an early insight into students’ needs (Yorke 2006), both in the structuring of their writing and their understanding of the course content, and to address concerns at an individual, group or cohort level. Students in need of individual support were identified and directed to the university’s study skills centre.
 

Are we now practising what we preach?

A mid-year review of progress, via an online survey of students in February 2009, captured qualitative and quantitative responses. The results verified tutors’ anecdotal evidence that students had responded positively to the Course B model of teaching, learning and assessment. There were 65 respondents (54.6% of the cohort), of whom the majority felt ‘comfortable’ (63.1%) or ‘excited’ (10.8%) that Course B used a blended learning approach, saying:

“I really liked working through lectures, seminars and online, it kept the work varied and interesting.”

“It was very well structured, good levels of feedback of work, relevant topics, helpful and challenging session leaders.”

 
Most respondents felt the design of Course B had, ‘to some extent’ (56.9%) or ‘a great deal’ (29.2%), helped them to become part of a community of learners, an essential feature of an assessment for learning approach that builds on formative feedback from tutors, peers and the self (Boud 2000). The presentation groups were instrumental in this, as they enabled students to work collaboratively, develop their presentation skills, give and receive feedback from peers and gain in confidence:

 
“The presentation groups really boost your confidence.”

“It’s much easier to give encouraging, positive feedback to your peers about their presentations. I think confidence building is important in the first year.”

 
However, the reading groups appeared less successful in developing a community of learners after the initial submissions:

“To be honest, after the first reading group submission the group stopped meeting to discuss, and we read separately.”

“The reading groups whilst, an excellent idea just haven’t been practical to carry out to any useful degree due to pressure of work and different levels of commitment etc!”

“With regards to the reading groups: sometimes it’s easier to get on and do it yourself.”



 

 

 
We had initially planned to use the reading group submissions as a means of ensuring peer support and discussion when students tackled demanding readings. As the course progressed, professional activities were used as submissions and this may have impacted on the students’ perception of the value of the reading groups. It may be advantageous for us to form presentation groups by merging two reading groups, building on the success of the former and emphasising the importance of the latter.

 
The OLC discussion boards did less to bring a sense of community to such a large cohort, tending to operate as notice boards rather than ‘discussion’ areas. Tutors led by providing questions and personal anecdotes to encourage students to share, reflect upon and analyse their experiences as primary school learners. Despite tutors’ concerns that layout and accessibility within the discussion boards hampered opportunities for effective dialogue, students commented that:

 
“As a novice in this area of online learning etc, I surprisingly enjoyed contributing to the discussions and reading fellow students’ responses.”

“It is helpful to share a range of perspectives on experiences.”

 
This suggests that they had used them to learn more about the experiences of individuals within the community. Several students suggested that the discussion board comments might have been used more productively by tutors in seminars, providing a starting point for the interaction essential to learning. One tutor, for whom this was a first experience with blended learning, responded to these observations by saying, “Fair point! It didn’t occur to me to use the discussion points made in seminars! This may have been due to my lack of experience with the model – I saw this as a bit of a bolt-on and wasn’t sure of the purpose other than a type of notice board. Had it been used in seminars it would have given a purpose to the students and so encouraged greater use and discussion.”

The OLC ‘coffee bar’, a student-only social area, was used very little, but as one student noted, “Everybody seems to be on Facebook”, echoing findings from the LEX study (JISC 2006) that different levels of student engagement operate online.

 
Students moved easily between the face-to-face and online elements of Course B and this helped individuals to feel supported by tutors and peers:

“Having the online elements, I felt like I had support with Course B 24/7.”

 
Only a handful of students contacted tutors for technical support with online elements when the course began, and most rated these elements as ‘easy’ or ‘very easy’ to use (e.g. workshop directed activities, 50.8% and 41.5%; submission centre, 50% and 40.6%). This was not surprising, as recent studies have identified students’ growing confidence in and access to technology (e.g. JISC 2006). Students’ concerns appeared to centre on issues related to the first year of operation. For example, some slippage occurred as tutors developed the content of the online workshops; some technical difficulties arose with the students’ use of the submission centre; and there were some disparities between university and home computer software. Whilst none of these problems was major, they were annoying and problematic for those who were less confident in using a blended approach, echoing findings of the SEEL project (2008).



 

 

 
However, this did not discourage use of the OLC, as spot checks throughout the year showed students were using it daily between 7am and 1am. As one student put it:

“I love the structure that the online element brings and the accessibility.”

 
The online medium enabled direct access to appropriate web-based materials and appeared to engage students in directed activities, though not always positively. For example, while one commented that “The use of video is more accessible to all types of learners”, another suggested that some of the online readings used were “quite heavy going … and long”. The latter was recognised by tutors as an area in need of immediate review, as the original intention had been to provide ‘bite-size’ information to students via well-selected websites, building on the studies of Drexler (2008) and Wesch (2007).

 
There was a widely held view amongst students that:

“Having all the information online proved really helpful. I always printed off the different units and this helped to me organise my work through the tasks.”

although one student felt that it:

“Takes up our time in reading and writing. It comes across as complicated, confusing and frustrating and feels like a burden.”

 
The area of feedback and assessment raised several issues. The course’s online formative feedback from tutors was seen as ‘somewhat’ (31.3%) or ‘very’ (40.6%) beneficial when it operated successfully, and students felt it was ‘somewhat’ (29.7%) or ‘very’ (54.7%) useful to receive written group feedback from the tutor. However, tutors did not always provide timely feedback, and this drew criticism as students recognised that it moved away from the pedagogical model advocated and impacted on their learning experience.

 
“I was very disappointed and concerned with the continuous lateness of feedback, after all students would not be allowed to hand in work late at all. Most importantly, if feedback were given on time, it could have further improved my learning throughout the course.”

 
Tutors commented that they found it difficult to keep up to date with giving feedback in the submission process adopted, where students submitted work when they were ready to do so. The most successful approach was achieved by a tutor who checked and responded to submissions on the same day each week. However, this was not common practice. One tutor felt that students should submit by a given date so tutors could respond at a set point:

“The haphazard nature of the submissions – not easy to keep checking ‘just in case’ something was submitted after the last time you looked.”

 
It is questionable whether tutors’ difficulties rested solely with the lack of structure, as the formative nature of this feedback seemed to lower its status in relation to other ‘more pressing’ diary commitments. The principles of formative feedback need to be acknowledged and its practice valued in order to develop tutors’ approaches (Nicol 2008).



 

 

 
Students were initially concerned about the paired assessment tasks, with fewer feeling ‘comfortable’ (50%) or ‘excited’ (9.4%). This may have been due to the summative implications of the product, as one student noted: “I was concerned that our grade relied so heavily on the other person making an effort.”

 
A full analysis of the joint report has been reported separately (Headington 2010b). It is based on student evaluations, completed following the assignment’s submission, the majority of which suggested the process had been beneficial. Joint submissions appeared to be new ground for the students, and initial apprehensions did not always persist. A positive attitude and approach reaped dividends:

 
“It was good to have someone to work with as it wasn’t so bad when getting to grips with the more harder or longer readings (for the assignment).”

“I thought the joint report would be difficult but on the first meeting we agreed a ‘to do’ list and met regularly where we gave each other feedback and ideas. We would follow these up and ended with a report we were both happy with.”

 
Overall, the mid-year online survey results suggested that Course B had provided students with a social constructivist model of learning that built upon previous experiences and was appropriately facilitated through peer and tutor interaction. The course’s structure provided progression, from one unit to the next, that was not apparent in Course A, and this appeared to contribute to the students’ construction of learning.

 
Tutors felt that blended learning provided an effective model of practice for the students. The 
units enabled face-to-face tutor input via lectures and seminars, and online tutor support through 
formative feedback and participation in discussions. Students were encouraged to seek and give 
support to peers via face-to-face and online activities. 

 
In terms of outcomes, the joint reports and portfolios showed a high level of engagement and 

analysis, with very few failures. Tutors identified examples of deep learning in students’ discussions 

and assessments: 

 
“I was surprised at the level and depth of engagement with Year 1 students so early on in the 

course. They really got into the core issues and were critically analysing them.” 

 
The end-of-year examination results testified to a depth of engagement with course content, 
surpassing those of the previous year (see figure 4). Whether this was due to improved student 
understanding, the ability of the cohort or the depth of study offered by a 30-credit course is more 
difficult to ascertain. 

 
The course was not without its difficulties and the need for staff commitment and motivation cannot 

be denied. It necessitated frequent staff meetings to discuss pedagogical approaches, review 

students’ learning, devise new units and provide mutual support. Tutors reported that without direct 

experience of learning through a blended approach, it was difficult to identify with the students’ 

learning experience. Some Course B tutors extended their knowledge of course content, others their 
skills in using the technology. Although these gains were advantageous to the individuals concerned, 
and critical to the strength of the provision, the development of the course took longer than initially 
anticipated. 



Tutors needed to focus on the development of course units, possibly to the detriment of regular 
formative feedback to students, creating a dilemma in respect of course expectations. This should 
ease in the future for Course B as units are reviewed rather than created; but, unless checked, 
the same issue could move with the cohort into their second and third years, where similar course 
models are now being developed following the success of Course B. We have, nonetheless, been 
gratified that the students’ and tutors’ engagement in this full-scale pilot has enabled the approach 
to become embedded within the programme’s provision and has impacted on the quality of 
students’ learning experiences. 

 
We have, it seems, found a way to practise what we preach. 

 
 

 
 
 

Figure 4: Examination results (%) in Course A (2008) and Course B (2009) 



Acknowledgement 

We wish to thank Ian McNay, Emeritus Professor of Education, University of Greenwich, who 

provided us with encouragement, guidance and critical feedback on drafts of the paper. 
 

References 

Barnett, R. (1997). Higher education: a critical business. Buckingham: SRHE and OUP. 

Biggs, J. (2003). Teaching for quality learning at university. Second edition. 

Bloxham, S., and West, A. (2004). Understanding the rules of the game: marking peer assessment as 
a medium for developing students’ conception of assessment. Assessment and Evaluation in Higher 
Education, 29(6): 721–733. 

Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in 
Continuing Education, 22(2): 151–167. 

Boud, D., and Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment and 
Evaluation in Higher Education, 31(4): 399–413. 

Brown, S. (2007). Feedback and feed-forward. Centre for Bioscience Bulletin, 22:1. 
 

Drexler, W. (2008). The networked student. [Online]. Available at: 
http://teachweb2.blogspot.com/2008/11/cck08-connectivism-networked-studentthe.html (accessed 30 March 2009). 

HEA. (2009a). Feedback. [Online]. Available at: 
www.heacademy.ac.uk/ourwork/learning/assessment/senlef (accessed 30 March 2009). 

Headington, R. (2010a). Clear in advance: a case study of first year undergraduate students’ 

engagement with assessment criteria. Paper presented at EARLI/Northumbria Assessment SIG 

Conference, Newcastle, September 2010. 

Headington, R. (2010b). Assessment in higher education: supporting students’ first experiences. 

Paper presented at European First Year Experience Conference 2010, Antwerp, May 2010. 

JISC. (2006). LEX: learner experiences of e-learning. [Online].  

Available at: www.jisc.ac.uk/elp_lex.html (accessed 30 March 2009). 

Nicol, D. (2007). Assessment for learner self-regulation: achievement in the first year using learning 

technologies. Paper submitted to Assessment and Evaluation in Higher Education. 

Nicol, D. (2008). Assessment as a driver for educational change. Escalate News, Issue 10, Spring 
2008: 21–23. 

Prensky, M. (2001). Digital natives, digital immigrants. [Online]. 

Available at: www.marcprensky.com/writing (accessed 1 October 2006). 
 

QCA. (2009). Assessment for learning guidance. [Online]. 

Available at: www.qca.org.uk/qca_4334.aspx (accessed 30 March 2009). 
 

Race, P. (2006). Learning, teaching and assessing: study text for the PGCert/PGDip/MA in higher 

education. London: University of Greenwich. 

SEEL. (2008). Student experience of e-learning. London: University of Greenwich. 
 

Unistats. (2009). University of Greenwich, National Student Survey 2008. [Online]. 

Available at: www.unistats.com (accessed 30 March 2009). 



University of Greenwich. (2008). BA QTS in primary education validation document. London: University of 

Greenwich. 

University of Greenwich. (2009). Feedback and assessment project: final report. London: University of 

Greenwich. 

Vygotsky, L. (1978). Mind in society. Cambridge MA: Harvard University Press. 
 

Weaver, M. (2006). Do students value feedback? Student perceptions of tutors’ written responses. 

Assessment and Evaluation in Higher Education, 31(3): 379–394. 
 

Wesch, M. (2007). Vision of students today. [Online]. 

Available at: www.youtube.com/watch?v=dGCJ46vyR9o (accessed 30 March 2009) 
 

Yorke, M. (2006). Formative assessment and employability: some implications for higher education practices. 
In: McNay, I. (ed.). Beyond mass higher education: building on experience. Maidenhead: Open University Press. 

Yorke, M., and Longden, B. (2007). The first-year experience in higher education in the UK. [Online]. Available 

at: www.heacademy.ac.uk/FYEsurvey.htm (accessed 17 May 2007). 
 

 
