Case Studies Compass: Journal of Learning and Teaching, Vol 9, No 13, 2016

Improving reading compliance with whole-class qualitative quiz questions

Arron Phillips, Martin Compton
University of Greenwich

Abstract

“Have you done your reading?” If you are a teaching academic who always gets positive responses to this question, then you are in a very fortunate (or talented) minority. This small case study draws on existing research into why students do not read and evaluative research into strategies designed to combat this phenomenon. It reflects on an ad hoc trial of quiz questions randomly targeted at individuals in two seminar groups of first-year undergraduates within the Business Faculty. The trial spanned seven weeks and sought to improve previously poor levels of reading compliance. The study found that, within a short period, the technique employed significantly increased levels of reading compliance, when measured across the whole group through qualitative comprehension questions.

Introduction

So-called ‘reading compliance’ is a broad umbrella term that refers to actual or claimed confirmations of suggested, recommended and essential reading by undergraduate and postgraduate students. The term itself, although apparently the most common for the phenomenon in the literature, connotes conformity, regulation and scrutiny, though the counter-strategies to non-compliance are not always mandatory. The rates of non-compliance set out below and the breadth of strategies deployed to combat it suggest there are parallel phenomena of ‘reading relevance’ and ‘reading significance’ that need to be considered simultaneously. Non-compliance, when it comes to set reading, is widely recognised amongst teaching staff (Burchfield and Sappington, 2000; Starcher and Proffitt, 2011; Hatteberg and Steffy, 2013) and lecturers’ perceptions of it appear to be reflected in the reality.
Hatteberg and Steffy (ibid.), for example, cite multiple studies since the early 1970s showing that no more than 30% of students complete reading tasks for any purpose. It also seems to be an increasing trend (Burchfield and Sappington, op. cit.). Indeed, Lei et al. (2010) describe it as an ‘epidemic’. Our small case study within this urban, post-’92 university was one of both convenience and opportunity. The study aimed to address a first encounter with this common challenge, as experienced by one of the collaborators (a PhD student with seminar-leading responsibilities, henceforth ‘the tutor’), and drew on the expertise of the other (a Senior Lecturer in Learning, Teaching and Professional Development with twenty-five years’ teaching experience, ten of those as a teacher educator). As we shall set out below, due consideration of a range of approaches culminated in a strategy ‘with a twist’ that, in this context, has had a remarkably satisfying impact on levels of compliance. Our collaboration commenced after an impromptu conversation about difficulties faced when teaching a class in which the majority had not completed a required reading task. Prior to this, the tutor had done some teaching as co-tutor within the faculty, on a course involving a lot of student presentations; engagement levels were high and preparation was impressive. Previous positive experiences as a teaching assistant in smaller groups at another post-’92 institution, the co-tutoring role and the levels of engagement all served to emphasise the dissimilarity of the subsequent experience. Some faculty members suggested that we should not expect students to prepare, as this was rare, and little could be done to motivate them.
Since this disappointing view ran counter to impressions of these same students in a different setting, we committed ourselves to developing a strategy to change the behaviour and attitudes of the students. In a structured approach, we analysed contextual specifics and considered various strategies. Awareness of the levels of non-compliance reported in the literature did lessen the shock of facing a large group of blank-faced first-year undergraduates and made us all the more determined to challenge the problem. Below, we set out a consideration of key literature on why students do not read and what can be done to overcome this reluctance, before detailing the specifics of our case study.

Why don’t they read?

Explanations for the phenomenon itself and its rise include a growing disinclination to read, or even respect, hard-copy material in a digital era (Jolliffe and Harl, 2008). Scepticism about the value and purpose of the assigned readings is also common. Brost and Bradley (2006), for example, ‘judged the lectures to be accessible to students whether or not the reading had been done’ (p.104). Conversations with colleagues suggest that in our faculties, when setting reading, we sometimes succumb to the assumptions that a) students will not do it and b) this disinclination is down to laziness. Despite there being no empirical connection between the latter assumption and reality, it is persistent and worrying. Logically, we might next ask: if we do not expect them to do it and anticipate having to compensate in class for that, then why do we set reading at all? Clump et al. (2004), in a relatively large study amongst undergraduates within a single institution (n=423), found that reading compliance before coming to class leapt from 27.5%, when there was no incentive other than requested preparation, to just under 70% when the material was directly related to a quiz or test.
They report, with evident disappointment, that nothing would appear to raise compliance to 100%; however, what seems more significant here is the connection between motivation and reading and the impact this has on assessments. Self-reporting studies such as this may need to be viewed with a degree of scepticism: Hoeft (2012) observed, for example, that of the 46% of students who reported that they had completed the reading, only 55% were able to answer simple questions; Sappington et al. (2002) found similar results and connected this with academic dishonesty. In short, asking students whether they have done the reading is not likely to elicit a reliable response. Once again, this range of inaccurate reporting roughly approximates to our own experiences within one first-year compulsory course. On appearance alone, it seemed that, although some had engaged with the texts, many had either skimmed or were assuming they would not be tested on their claims. We were consequently keen to embrace a strategy that measured comprehension rather than claims of compliance. In a fairly large study of Business undergraduates (n=394) (Starcher and Proffitt, 2011), almost 50% of the students cited ‘lack of time’ as a reason for non-compliance with set reading; ‘boring’, ‘not meaningful’ and ‘professor rarely refers to the texts’ were the next most commonly offered reasons. These were from pre-defined categories, however. In a smaller, more qualitative study, Brost and Bradley (op. cit.) were concerned that emphases on student-focused solutions to non-compliance and assumptions about motivation were a potential distraction and might cause other factors to be ignored. Instead, they focused on advanced-level students studying for an elective module in which they were exposed to thirteen different lecturers.
Their student sample was small, but most interesting in their findings was the sense of the strategic amongst student decisions as to whether or not to read. Time factors and content relevance/interest were cited, but, where students guessed, realised or assumed the material would be covered in class, the likelihood of in-depth reading of the set texts was low. This also reflects Pecorari et al.’s (2012) study, which showed that a significant majority of students valued attendance and lecture notes more highly than textbook content and set reading. Their apparent strategic reckoning was: “if the objective is to pass a course and attendance is sufficient to achieve that goal, the textbook is superfluous” (p.249). The inherent dangers here of limited vistas on content and the resultant reduction in opportunities to engage with deeper learning are made clear; we were keen to avert them. One of the most frequently cited problems is a lack of adequate study skills, or what Brost and Bradley (op. cit.) describe as ‘unpreparedness’, owing either to student mindset, to confidence issues (Tuckman, 1991) or, increasingly, to frailties in pre-undergraduate education and consequent weaknesses in language and comprehension (Lei et al., 2010).

What can be done?

Hoeft (2012) notes that there have been few university-based studies on strategies to combat reading non-compliance. Hatteberg and Steffy (op. cit.) state that, despite the ubiquity of the issue, there is relatively little research on it and large-scale comparative studies are a notable gap. Their evaluative study drew on student perceptions of the effectiveness of a range of methods to foster reading compliance (which they had, in turn, filtered from existing case studies) and this informed our specific choice of technique to implement first. Of seven strategies, they found the students reported ‘announced reading quizzes’ as the most effective.
In fact, all the open and inclusive strategies were more popular than the ‘surprise’ or exclusive strategies, such as unannounced quizzes and random questioning. Hoeft (op. cit.) also reports ‘quizzes’ as the reading motivation students most frequently requested and, in a follow-up study, shows that quizzes had a significant impact on both compliance and comprehension. Johnson and Kiviniemi (2009) also connect frequent quizzes on reading to increased success in summative examinations, a finding replicated by Sappington et al. (op. cit.). Uskul and Eaton (2005) had similar results when students were given graded, long-answer questions on set reading. These three studies illustrate degrees of blurring of the distinction between formative and summative assessments, though they do not advocate the use of quiz scores as part of summative grades. Perhaps the biggest problem with this approach, however, is the additional workload it entails. In contrast, Roberts and Roberts (2008) argue that the quiz approach does not foster deep learning or understanding of content. This suggests to us that the types of questions asked need to be carefully considered. Additionally, their argument assumes that the quiz is the principal method of developing knowledge, whereas other studies and our own approach regard the use of quizzes much more as a threshold to deeper understanding, emerging later in the sessions. Another suggested reason for non-use of mandatory motivators such as quizzes is that they might provoke resentment towards lecturers and result in poor evaluations (Sappington et al., 2002). Such cynical reasoning, based solely on supposition, adds little to our understanding of why students do not read. However, it is somewhat revealing about the stance lecturers take on this issue, and is perhaps indicative of more widespread perceptions of students by academic staff. Sappington et al. (ibid.)
state in their conclusion: “Faculty who reject quizzing on the basis of students’ ill will may want to reconsider the practice of giving exams on the same rationale” (p.274). Lei et al. (op. cit.) claim limited confidence as a key reading de-motivator, one that becomes more influential as readings are attempted but not understood. This suggests to us that quizzes offer the opportunity to tackle simultaneously the compliance and self-confidence issues, with a potentially wider impact on students’ studies. In interviews with Business Faculty colleagues, Starcher and Proffitt (op. cit.) identified the reading quiz (in many forms, but usually multiple choice) as the most commonly cited in-class strategy used to encourage pre-reading of the material. Other suggested strategies included presentations to the class and an exam question based only on the reading. Pre-class strategies suggested by their faculty colleagues included chapter summary tasks, online postings or quizzes and reading journals. In the same paper, Starcher and Proffitt (op. cit.) criticised their faculty colleagues for the inherent extrinsic motivation factors at play in the design of some strategies used. These may have embarrassment potential which, they argue, could have serious long-term consequences. However, this assumes that the quiz responses and results are necessarily open and visible to others. They seem to ignore alternative, less open ways of managing quizzes which can have an intrinsic potential. For example, students might be encouraged to consider their responses ipsatively, the lecturer could collect responses or the students might self-mark. Such approaches would then draw on the inherent formative potential of questioning.
Having said that, we opted for a series of oral questions posed randomly to individuals in a group setting; though an individual would be asked a question and might be embarrassed if s/he did not know the answer, we nevertheless felt that this was legitimate in not exceeding the usual expectations of classroom interaction. Praise for the whole class if they did well, or advice to read more and deeper if the class score was poor, would follow, encouraging a sense of group responsibility rather than creating discomfort at individual exposure. Despite some reservations, the literature pertaining to studies of both students and lecturers suggests that quizzes have the potential to increase reading compliance. A multi-faceted approach, including both academics and students (Starcher and Proffitt, 2011) and strategies that enable both surface and deeper levels of comprehension (Hatteberg and Steffy, 2013), should always be part of the wider learning design. The impact of not reading material should not be underestimated; it is, of course, only one strand of the varied notions of ‘student engagement’, but a significant one nonetheless. The evidence that such things as engagement with studies, levels of preparedness and time spent on studies out of class have a positive impact upon achievement is now unequivocal (Quaye and Harper, 2014). In addition, frequent ‘low stakes’ tests on reading improve not only reading compliance but also class attendance (Schrank, 2016). For both mature students and school leavers, changing reading habits is often a significant challenge, but, given the continuation of established pedagogic frameworks or even the adaptation and implementation of new ways of teaching and learning, it remains both a behaviour and a skill that they need to develop quickly.
If we enable non-compliance by reflecting our assumptions and attitudes or by implementing in-class measures to compensate for it (because we expect it), we produce students who learn NOT to read. This makes addressing the issue in year 1 all the more important; it can be part of the wider development of active and independent study habits in a non-grade-dependent setting, which will help prepare the students for the following years (Cottrell, 2013).

The sample

This study focused on two seminar groups taught by the same tutor. The two classes were on a first-year compulsory course within an undergraduate programme at the Business Faculty. Group A had an average attendance of twenty students whilst Group B had an average of eight. Group A was a mix of mature students and school leavers, was ethnically diverse and comprised both UK-born and international students. Group B consisted entirely of school leavers, mostly UK-born and with a more homogeneous ethnic profile. Gender is a factor that features in some studies, but, as this was a study of ‘opportunity and convenience’, no distinctions or contrasts were made. As a convenience sample, the two cohorts mirror a large proportion of other cohorts in a university which has a wide and celebrated ethnic diversity alongside a significant mature student population: a diversity noted in the recent QAA higher education review (QAA, 2015).

Context

The programmes are taught in the conventional form of a weekly lecture and seminar across one term. The seminar material gives students the opportunity to explore the weekly topic in depth. The lecture, in this structure, comes after the seminar, which meant that students did not have the grounding knowledge that a lecture can provide. In terms of learning design, the lecture endeavours to provide students with an exploration and understanding of the fundamental underpinning principles of the course topics.
Whilst the seminars provide students with the opportunity to explore these principles in more detail, they also provide students with the context to, and reality of, these principles in the workplace. The course leader provided the teaching team with materials for the class, but left it open to the tutors as to how these were covered in the seminar. In this case, the tutor used small-group discussions based on texts directly relevant to the weekly topics, reflecting the majority approach across the seminar team. As such, reading prior to the seminar was essential. In preparation for each seminar, the reading involved one academic article or a case study and a short portion of the recommended textbook, usually no more than ten pages. In the first three weeks of the seminar course, the majority of students appeared to have done little or no preparation. Since the tutor had minimal responses to general questions and efforts to engage the students in discussion were to no avail, the first twenty minutes of the seminar had to be spent on remedial activity. The tutor either encouraged the students to read the case study or provided a short introduction to the topic, which, as the literature above reinforces, had the potential to continue the cycle of non-compliance and legitimise the students’ tendency not to read.

The quiz approach

The following quiz-based approach was then adopted as a means of engaging students and getting them to undertake the reading. Each week, the tutor came up with a series of questions based purely on the reading material. The questions were varied in terms of potential responses. Some of the questions asked for surface-level responses:

- What four criteria did the author claim were needed?
- Who is the leading researcher in…?
- Which motivational theory does the theory in this text develop from?
- Name four of the eight types mentioned.
Other questions gave students the opportunity to explore their understanding, such as:

- What did you understand the author to mean when s/he said…?
- Illustrate theory X by giving an example.
- How does theory Y correlate to theory Z?

The surface-level questions gave the tutor an instant indication of whether the reading had been done, even superficially. The other questions, challenging Roberts and Roberts (op. cit.), were interpretative and could (and did, in later stages) lead to vibrant discussions and deeper understanding. The questions were randomly targeted, risking, as suggested above, student discomfort, but the non-compliance rates were so high that it would soon be apparent that most would not be able to answer even the superficial questions. This was a deliberate and considered deviation from the approaches suggested in our reading, but one we were keen to trial, as it had the potential to kick-start discussion and get to the deeper levels of understanding more efficiently. This approach was unannounced in the first week, but announced thereafter. The students were informed at the end of each session of the reading required for the next session and reminded that a quiz would happen then. Between weeks 9 and 10 of the teaching term, as the students had a break for Easter, they were also reminded by a further message via the virtual learning environment messaging service. In practical terms, the quiz worked as follows: as the register was completed, each student would be given a number. Once all students had been allocated a number, the total would be entered into a random number generator on the projected computer screen. The free online software would then select a number and that student would be asked a question. Should the student not answer the question, it would then be opened up to the class. Use of a random number generator aimed to remove any bias from the selection process.
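The selection procedure described above, in which students are numbered as the register is taken and software then draws a number at random, can be sketched in a few lines of Python. (This is an illustrative sketch only; the trial itself used free online randomiser software, and the names and function below are hypothetical.)

```python
import random

def pick_student(register):
    """Pick one student at random from the class register.

    `register` is a list of names recorded as students arrive;
    each student is implicitly numbered by their position in the list.
    Returns the student's 1-based number and name.
    """
    if not register:
        raise ValueError("No students present")
    index = random.randrange(len(register))  # unbiased, uniform selection
    return index + 1, register[index]

# Example with a hypothetical seminar group of four:
students = ["Amina", "Ben", "Chen", "Dana"]
number, name = pick_student(students)
print(f"Student {number} ({name}) answers the next question")
```

Because every present student has an equal chance of selection each week, no prior preparation record can influence who is asked, which is exactly the neutrality the randomiser was meant to signal.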
The randomiser was a deliberate effort to show that there was no inclination to ask students who had not previously tended to prepare, nor to ask those who had, as the most likely to provide the correct answer. Averting alienation was at the forefront of our thinking and we felt that the time it would have taken to administer individualised questions could not be justified. The desire to make the questions qualitative and serve as prompts for later discussion was also an important consideration.

What was learnt?

Data was collected informally, the tutor recording alongside the questions whether each question had been answered correctly by the group; thus there was no record of individual changes or developments. (This would be an interesting future project at a more systematic and formalised level, especially in this context.) Fundamental here is that the record is of first responses, giving an indication of the breadth of increased compliance. Eventual correct responses, even if initial answers were incorrect or flawed, were higher and, in weeks 8 and 9 with Group B, were in fact 10/10. Tutor observations of responses to the process and of the trends also form part of the findings set out below. In the final session, students were asked what they thought of the quizzes. From a tutor perspective, it is clear that the continued use of the quiz approach has been successful with these cohorts of students. This can be seen from the improvements in class scores over the course duration (see fig. 1).

Session (by course week)            Group A score   Group B score
Week 4 (unannounced intervention)        2/10            3/10
Week 6 (announced hereafter)             2/10            2/10
Week 7                                   3/10            4/10
Week 8                                   6/10            8/10
Week 9                                   8/10            9/10
Week 10                                  6/8             7/8
Week 12                                 13/18           15/18

Figure 1: Number of correct responses by group

Weeks 1-3 of the course were where the problem was first identified, but where no action was taken, other than remedial strategies.
Weeks 5 and 11 were sessions for which no reading was required. As can be seen, first-time correct responses increased almost every week. The dip in results in week 12 can be explained by the fact that the quiz was a) much longer and b) an exam revision session, so the questions referred to previous reading, including that set in the first three weeks, when no quizzes took place and preparatory reading had not been done. Some improvement in preparation for weeks 6 and 7 was noted, but it was still insufficient to make a positive impact upon class discussion. During the first few weeks, as reading compliance across the group grew, it was not possible to perceive many clear patterns. The tutor allowed sufficient ‘thinking time’ for a student to answer the question and resisted efforts by other students to step in. Some students would look down at their notes or there would be long silences in response to those questions. However, as the ‘pattern’ of the approach became embedded, so the students adapted their behaviours and clearly demonstrated a growing sense of engagement and even enjoyment. In the first three weeks, the students who prepared were mostly non-UK and, of those that had prepared, the majority went on to have near-perfect attendance. The students who prepared from the outset responded in a positive manner and appeared to embrace the quizzes much earlier in the process than other students. In terms of attendance by the groups during the study, there was no decline in Group A, though there was a small decline in Group B (twelve in week 3 and eight on average thereafter) after a significant drop-off in the weeks prior to the start of the quiz trial. Further research would be needed to determine links between attendance and the quiz approach, so reasons for the slowing of the rate of non-attendance cannot be determined or claimed at this stage.
Towards the end of the trial, responses to questions were faster; instances of a chosen student being unable to answer continued, but to a much more limited extent and, significantly, the overall quality of engagement with the topics at hand improved. The sense of fluidity and general ‘success’ of the sessions was also tangible. As the term progressed, students would, unprompted, apologise for not having done the reading and ask to be excluded from the quiz. This unexpected honesty enabled the tutor to provide an alternative activity for those who had not prepared. The general tenor of the responses was that students found the quizzes helpful in getting them to read, though some clearly saw them as a necessary evil and were reluctant to engage with the texts. Most of the students spoken to were positive, as indicated by this characteristic response:

“I like the quizzes… they encourage me to do the reading which enables me to get more from the class activity.” (Group A)

One was blunt about disliking the method, but the ‘compliance despite…’ nature of this response is important to note:

“I hate quizzes but they force me to do the reading for what is in my view a rather boring subject, so I s’pose it’s great”. (Group B)

It was good to hear that the effort with the randomiser was noted too, though, interestingly, this student saw it as a motivator in itself:

“The fact we can see that you are not picking on us and it’s random definitely motivates me to get involved, to be better prepared”. (Group A)

Formalised, deeper responses in the form of focus groups or interviews would no doubt offer richer insights. The slow start over the first few weeks could indicate that the students were waiting to see whether this was a one-off or was going to become a regular occurrence. By the fourth week of the trial, and in both seminars, those students who were asked the questions were showing clear evidence of reading and this continued throughout the remainder of the seminars.
A possible perceived drawback to running a quiz on the reading material is that the tutor must spend time preparing questions. However, it was found that, as the tutor needs to read the material prior to teaching it anyway, the increase in preparation time was minimal. The quiz itself took around ten minutes of the seminar, which was already limited in time. The tutor nevertheless felt that taking ten minutes to do this was beneficial, as it framed the subject matter and focused attention on the relevant theory, concept or study. In contrast, in the first few seminars, twenty minutes or more was being spent remedially explaining things or waiting for students to read the document. Thus the quiz in fact increased the time available for group discussions and more in-depth application and analysis.

Conclusions

Whilst this was a small sample, the results suggest that a quiz based on mandatory, relevant reading can be a suitable method of engaging students to prepare for the class. We were delighted with the result and the apparent ease with which the group culture was modifiable. One non-deliberate manifestation was the way in which the quizzes began to feel like a competitive ‘me against them’ challenge. The anticipation of the number from the randomiser and the collegiality amongst the students will therefore form part of the way such quizzes will be set up and ‘sold’ in future. The tutor was at times frustrated that the benefits evident as a consequence of the trial came at the cost of a fuller reckoning of individual depth of understanding and patterns of compliance. Given the same circumstances in future, a similar strategy would be implemented from week 1, though with perhaps an additional single question for each student to be answered on paper or via mobile devices and submitted before the whole-group random questions.
One of the main benefits of this approach is that students become accustomed to reading prior to attending class. The quiz leads to positive learning behaviours that will be expected of them going into their second and third years of study. Sadly, it does not mean that they will like it more, but at least it indicates wider motivational factors. A study on reading content and engagement, perhaps linked to student choices at undergraduate level, would be interesting follow-up work. The in-class benefit of minimising the ‘mini lectures’, allowing time to focus on clarifying issues, enables academics to have a greater sense of where to pitch the learning activities for that session. Often, students’ responses to the more open questions would provide material for a more in-depth discussion when the class broke up into small groups. Unexpected outcomes, such as when these activities actually improve the fluency and coherence of the session, bode well for the way we might manage the approach in future. The whole process leads us to wider conclusions that we have expressed as questions we feel all academics should consider before setting texts for reading preparation:

- Is the reading actually essential or even important? If so, what mechanism will you have in place to ensure its contents have been understood? If not, why are you setting it?
- How closely tied is the set reading to the seminar or lecture content? Have you made the nature of this connection clear?
- What assumptions do you have about students’ ability to read, process and understand what they are reading, and what support is in place both immediately and more widely within the faculty or institution?
- Will your quick-witted and strategic students know (or feel) that the material will be covered in class even if they haven’t read it?
- Are your strategies for encouraging (or forcing) reading potentially shaming or embarrassing?
- Will the benefits of gauging comprehension at an individual level (e.g. through individual response mini-papers) outweigh the whole-class developmental and deeper discussion benefits of approaches similar to those used in this study?
- How could you demonstrate that you are not ‘picking on’ likely non-compliers or, perhaps worse, choosing the ‘usual suspects’ of keen compliers?
- Do the strategies ‘preach to the converted’, i.e. do they benefit those who read anyway?
- If we use in-class activities, how much can we tap into intrinsic motivational forces and what else can be done to make the reading something pleasurable rather than dutiful?

Above all, we found that this trial challenged assumptions about what students are willing and able to do. If extrinsic motivators like compliance-boosting quizzes are coupled with an assumption that students will do the reading and a clear connection between the material and seminar content is made, then it is not the number of students reading that is important but their starting points in the seminar and the individual progress that can then be made from there.

Reference list

Burchfield, C. M. and Sappington, J. (2000) ‘Compliance with Required Reading Assignments.’ Teaching of Psychology, 27(1), 58-60.

Brost, B. D. and Bradley, K. A. (2006) ‘Student Compliance with Assigned Reading: A Case Study.’ Journal of Scholarship of Teaching and Learning, 6(2), 101-111.

Clump, M. A., Bauer, H. and Bradley, C. (2004) ‘The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum.’ Journal of Instructional Psychology, 31(3), 227-232.

Cottrell, S. (2013) The study skills handbook (4th edn.). Hants: Palgrave Macmillan.

Hatteberg, S. J. and Steffy, K. (2013) ‘Increasing Reading Compliance of Undergraduates: An Evaluation of Compliance Methods.’ Teaching Sociology, 41(4), 346-52.

Hoeft, M. E.
(2012) ‘Why university students don't read: What professors can do to increase compliance.’ International Journal for the Scholarship of Teaching and Learning, 6(2), 12.

Jolliffe, D. A. and Harl, A. (2008) ‘Texts of our institutional lives: Studying the “reading transition” from high school to college: What are our students reading and why?’ College English, 70(6), 599-617.

Johnson, B. C. and Kiviniemi, M. T. (2009) ‘The effect of online chapter quizzes on exam performance in an undergraduate social psychology course.’ Teaching of Psychology, 36(1), 33-37.

Lei, S. A., Bartlett, K. A., Gorney, S. E. and Herschbach, T. R. (2010) ‘Resistance to reading compliance among college students: Instructors’ perspectives.’ College Student Journal, 44(2), 219.

Pecorari, D., Shaw, P., Irvine, A., Malmström, H. and Mežek, Š. (2012) ‘Reading in tertiary education: Undergraduate student practices and attitudes.’ Quality in Higher Education, 18(2), 235-256.

Quality Assurance Agency for Higher Education (2015) Higher Education Review of University of Greenwich. Available at: http://www.qaa.ac.uk/en/ReviewsAndReports/Documents/University%20of%20Greenwich/University-of-Greenwich-HER-15.pdf (Accessed: 26 April 2016).

Quaye, S. J. and Harper, S. R. (2014) Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations. London: Routledge.

Roberts, J. and Roberts, K. (2008) ‘Deep Reading, Cost/Benefit, and the Construction of Meaning.’ Teaching Sociology, 36(2), 125-40.

Sappington, J., Kinsey, K. and Munsayac, K. (2002) ‘Two studies of reading compliance among college students.’ Teaching of Psychology, 29(4), 272-274.

Schrank, Z. (2016) ‘An Assessment of Student Perceptions and Responses to Frequent Low-stakes Testing in Introductory Sociology Classes.’ Teaching Sociology, 44(2), 118-127.

Starcher, K. and Proffitt, D.
(2011) ‘Encouraging Students to Read: What professors are (and aren't) doing about it.’ International Journal of Teaching and Learning in Higher Education, 23(3), 396-407.

Tuckman, B. (1991) ‘Motivating college students: A model based on empirical evidence.’ Innovative Higher Education, 15(2), 167-176.

Uskul, A. K. and Eaton, J. (2005) ‘Using graded questions to increase timely reading of assigned material.’ Teaching of Psychology, 32(2), 116-118.