Australasian Journal of Educational Technology, 2021, 37(1).

Effectiveness, efficiency, engagement: Mapping the impact of pre-lecture quizzes on educational exchange

Tanya Evans, Barbara Kensington-Miller, Julia Novak
University of Auckland

Our study addresses a systemic issue facing higher education – a lack of rigorous educational research alongside new technology-assisted ways of teaching and learning. The issue highlights a disciplinary disconnect: many academics do not research outside their discipline, yet are tasked with educational modernisation through trying out new educational technology. Addressing this issue, we present our conceptual framework, the course transaction space (CT-space), and use it to analyse the impact of an intervention we designed involving regular online pre-lecture quizzes in a university mathematics course. The aim of the intervention was to optimise the effect of distributed (spaced) practice on long-term retention. Our findings suggest that a relatively small change in course instruction can improve the efficiency and effectiveness of educational exchange. Our analyses of data from multiple sources provide evidence that the intervention resulted in a sustained increase in the frequency of students’ engagement with the content, increased lecture attendance, and improved grades. Additionally, we discuss the impact of the intervention on the quality of student engagement with reference to competence-related beliefs and self-efficacy. Finally, we discuss how the intervention can be used in other contexts to support an evidence-based approach to teaching and learning.

Implications for practice or policy
• For teachers designing an intervention with the aim of improving students’ learning engagement during a course of tertiary study, we advise incorporating a series of frequent, low-stakes online quizzes with a low level of difficulty.
• For students, these quizzes act as an incentive, improving both the frequency and the quality of their learning engagement.
• The course transaction space (CT-space) model can be used to explore and analyse the impact of a variety of interventions introduced in tertiary courses through the lens of engagement.

Keywords: blended learning, student engagement, online quizzes, impact mapping, course transaction space, mixed methods

Introduction

Globally, the higher education sector is challenged to keep up with the times and reassess its sustainability in a technological era. Blended learning, the integration of face-to-face and online instruction, is now widely adopted as the new normal in course delivery across tertiary institutions (Heinrich et al., 2016; Montgomery et al., 2015). At research-intensive universities, teaching and learning are primarily facilitated by academics who do not normally research educational issues outside their discipline or spend time acquiring knowledge about learning theories. Yet, there is an expectation that these academics will drive innovation in teaching and learning at the tertiary level, which often involves trying out new educational technology. Naturally, there is increasing experimentation with new modes of delivery by enthusiastic innovators, but many lack the skills required to conduct rigorous evidence-based research as part of their innovative endeavours. Innovations are often based on the integration of new technological gadgets with only anecdotal evidence about their merits. A systemic lack of rigorous educational research alongside these new technology-assisted ways of instruction in higher education presents a major issue that can have dire consequences. A recent pivotal study by Roy et al. (2017) serves as a cautionary tale about educational technology. In their study of university mathematics students, the researchers evaluated a multimedia resource called e-Proof, designed to support learning from written proofs.
E-Proof presented proofs with audio commentary and visual animations to focus attention on logical relationships. Contrary to the hypothesis, the researchers observed that the group of students using e-Proof exhibited poorer retention compared to the control group, which relied on the traditional way of learning. They accounted for this unexpected result through eye-movement analysis, concluding that the extra support offered by e-Proof disrupts the processes by which students organise information, thus restricting the integration of new understanding with existing knowledge. As a result, this newly developed multimedia resource was deemed detrimental to student learning and was not adopted for use. Such implications are serious, raising the question: Who is responsible for the evaluation of the plethora of new interventions involving educational technology? With the rapidly accelerating advances of emerging technologies, the importance of evaluating these in higher education settings is now paramount. The field of educational technology focuses on students, teachers and systems, such that:

These three components are often hard to disentangle, which has sometimes resulted in research over-claiming the impact of one to the exclusion of the other, such as the significance of a system implementation (e.g., flipped classroom model or a MOOC) rather than look to the complex interplay of learning / teaching and learners / teachers. (Heinrich et al., 2017, p. iv)

In this article, we present a study involving an online intervention in an undergraduate mathematics course and use mixed methods to analyse the impact it had on student learning. We present our conceptual framework, which we call the course transaction space (CT-space), to illustrate how small changes in course instruction can improve the effectiveness and efficiency of the educational exchange between student and teacher.
The main utility of the CT-space is that it provides a structure for isolating factors that influence the educational exchange, thus helping to disentangle the complex interplay of teaching and learning.

Instructional design theory, motivation to learn, and engagement

Historically, instructional design theories were first developed for face-to-face tertiary education and then later, separately, for education by correspondence (or distance education). The latter was always characterised as an early adopter of technological advances, relying heavily on identifying the most efficient ways to engage learners at a distance. As a result of recent developments, these two modes of delivery have been blended together in new instructional design theories (Keller, 2010; Merrill, 2013; Spector & Merrill, 2008), which combine the body of knowledge from distance education (e.g., Moore, 2013) with face-to-face instructional principles (e.g., Merrill, 2002). Over the last decade, as technology has become an integral part of higher education, technology-assisted teaching and learning are no longer regarded as separate or even new phenomena. Some authors use the single term e3-learning, as introduced by Spector and Merrill (2008), to refer to these systems collectively, with the emphasis on effectiveness, efficiency, and engagement. Central to e3-learning is learner motivation, based on Keller’s (2010) motivational theory, which dominates instructional design and has become known as the ARCS (attention, relevance, confidence, and satisfaction) model. This model illuminates the effect of motivation by explaining the attitudes, beliefs, and behaviours that help a learner first to initiate engagement with the learning and then to sustain that engagement by overcoming obstacles to accomplish the learning goals. The third ARCS category, confidence, includes the construct of self-efficacy, which was developed by Bandura (1997) as part of his social cognitive theory.
Self-efficacy refers to learners’ expectations and beliefs about their own ability to learn new material, to develop new skills, and to master tasks. To date, researchers have identified several primary sources of self-efficacy, with the most influential being mastery experiences (successful performance on tasks). This means that in educational settings previous success contributes to improvement in self-efficacy while failure undermines it (Skaalvik et al., 2015; Usher & Pajares, 2009). Although engagement has been considered in relation to motivation, they are generally viewed as distinct constructs (Fredricks et al., 2016; Reeve, 2012). While numerous definitions of engagement exist in the literature, they tend to refer to “the extent to which a student is actively involved with the content of a learning activity” (Helme & Clarke, 2001, p. 133). Moreover, many studies distinguish three dimensions of engagement – behavioural, cognitive, and emotional – that align with doing, thinking, and feeling. These dimensions do not occur in isolation but instead synergistically facilitate and complement each other (Bond & Bedenlier, 2019; Helme & Clarke, 2001; Kahu & Nelson, 2018; Reeve, 2012). The theoretical foundation of engagement stems from expectancy-value theory (Wigfield & Eccles, 2000), which posits that personal beliefs about one’s competence in learning can determine specific actions and behaviours. In summary, the dominant instructional design theories recognise the critical importance of confidence, and in particular self-efficacy, and posit that enhancing self-efficacy leads to improved support for student learning and achievement. Thus, if students believe they are able to learn and have control over their own outcomes, this will promote their motivation and increase their engagement with learning.
Distributed (spaced) practice and long-term memory retention

To facilitate effective mastery experiences, we drew on research from experimental cognitive psychology, accumulated over the last 100 years, pertaining to the effect of distributed practice (or spaced practice) on long-term retention. The main finding to date is that separating learning episodes of the same content by a period of 1 day is extremely effective for maximising long-term retention (Cepeda et al., 2006). The distributed practice effect (also known as the spacing effect) refers to the effect of the interstudy interval (ISI) upon learning, as measured by test performance. Essentially, the ISI is the time interval between two separate study episodes of the same material. In a typical experimental study, the ISI is varied between the two study episodes to measure the impact of different time intervals on retention. This is followed by a fixed retention interval concluding with a test. In an integrative review of the distributed practice literature, Cepeda et al. (2006) examined the degree of benefit produced by shorter and longer ISIs on retention in verbal recall tasks. Their meta-analysis points to an optimal ISI of 1 day (for our context), regardless of whether retention is measured by a test after 1 day or after 2 to 28 days. Learner retention significantly increased as the ISI increased from 1-15 minutes to 1 day and dropped when the ISI increased beyond 1 day.

Conceptual framework: Course transaction space model

Oriented by the theory, together with our aim to improve the effectiveness and efficiency of the educational exchange in a university course, we designed a conceptual framework, which we call the course transaction space (CT-space), comprising three dimensions as illustrated in Figure 1.
In choosing the term, we drew on the instructional design research in distance education (Moore, 2013), where transactional distance captures students' involvement in courses offered at a distance by focusing on transactions between a student and an instructor (online/mail interactions and dialogues). The researchers argue that an increased frequency of interactions leads to decreased transactional distance, indicating students' improved involvement. By expanding the context to blended learning environments that include face-to-face instruction, our aim was to capture the transactions that occur between a teacher and a student for the duration of a course.

X-axis: Frequency of engagement

The x-axis records the frequency of students' engagement with the course material, capturing the number of episodes that students engage in as prescribed by the instructional design of the course. For example, in our context, this variable depends on the number of lectures and tutorials (practical sessions), tests/exams, written homework assignments, and online assessments (quizzes) in the course, as well as the distribution (spacing) of these course components throughout the semester. The higher the number of spaced learning episodes that students engage with, the further to the right the course is positioned along the x-axis.

Y-axis: Quality of engagement

The y-axis represents the quality of students’ engagement as a reciprocal action (contextual transaction) in response to interactions prompted by an instructor. The variance along this dimension corresponds to the various levels of behavioural, emotional and cognitive engagement of learners. This variable is influenced
by the competence beliefs and expectancies of learners (including self-efficacy), as these shape students’ behavioural choices, effort and persistence, self-regulatory strategies, emotions, and achievement (e.g., Bandura, 1997; Fredricks et al., 2016; Helme & Clarke, 2001; Kahu & Nelson, 2018; Pajares & Miller, 1994; Reeve, 2012; Skaalvik et al., 2015). With numerous influencing factors, the y-axis represents values of a multivariable function recording a multidimensional construct of quality of student engagement. Operationalising this multidimensional construct is complex and beyond the scope of this article. We elaborate on this in the “Future research” section.

Z-axis: Pedagogical practice

The z-axis refers to the pedagogical practice of a teacher as a transaction with a particular cohort of learners. This is determined by factors such as an instructor’s level of content knowledge, pedagogical content knowledge, technological pedagogical content knowledge, and their practical manifestation in the educational setting. Similar to the y-axis, the pedagogical practice direction represents a multidimensional construct. For the purpose of this study this variable was deliberately held constant to focus solely on the changes within the engagement plane spanning the x- and y-dimensions (Figure 1).

Figure 1. Course transaction space (CT-space): Three dimensions of educational exchange. Note: The term “course” refers to a single unit of tertiary study, such as a module, paper, or subject.

Utility of the model

The effectiveness and efficiency of the educational exchange in a course depend on the three variables represented by the x, y, and z axes, and are affected by the distance from the origin and the position in the CT-space. Understood this way, different tertiary courses in a programme of study can be plotted as a sequence of points in the CT-space and compared. For example, consider a student’s experience going through a mathematics major programme at a university.
Assume the student starts in a stage I course taught by a teacher with a high level of pedagogical expertise who structures the course to have regular marked homework and weekly group tutorials, both designed to elicit high-quality student engagement. However, in the subsequent stage II course, a different, less pedagogically skilled teacher structures the course to have only one assessment – the final exam. The experience is quite different and, with the lack of incentives for regular engagement, the student starts to skip lectures and begins to struggle. Prior to the exam, the student spends 3 days cramming, trying to memorise content, but without conceptual understanding. Mapping this progression in the CT-space, there is a substantial difference in the coordinates of the two courses presented. Assuming a basic gradation on the three axes from very low to very high, we observe that the coordinates of the stage I course are most likely in the high to very high range, whereas the coordinates of the stage II course are almost certainly close to zero. The large distance between the consecutive courses displayed in the CT-space reflects a marked change in student engagement for the stage II course, thus illuminating potential difficulties ahead. In this way, an entire programme of study for a university student can be viewed as a directed graph in the CT-space. A long distance between vertices of the graph can signpost a problematic change in the student learning environment between consecutive courses, thus prompting the teacher to consider modifications required to mitigate this. Alternatively, the CT-space can be used to gauge what happens following an intervention in a course, the focus of this present study.
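The comparison of consecutive courses described above can be sketched computationally. The following Python fragment is a minimal illustration, not part of the study: the 0-4 coordinate scale (very low to very high) and the specific coordinates assigned to the two hypothetical courses are our own assumptions, used only to show how a distance between vertices in the CT-space might be computed.

```python
import math

# Illustrative CT-space coordinates on an assumed 0-4 scale
# (0 = very low ... 4 = very high), ordered (x, y, z) =
# (frequency of engagement, quality of engagement, pedagogical practice).
stage_I = (4, 3, 4)   # regular marked homework, weekly tutorials, skilled teacher
stage_II = (1, 1, 1)  # single final exam, less pedagogically skilled teacher

def ct_distance(a, b):
    """Euclidean distance between two courses plotted in the CT-space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# A large gap between consecutive courses flags a marked change
# in the student learning environment.
gap = ct_distance(stage_I, stage_II)
print(round(gap, 2))  # → 4.69
```

Any distance metric could be substituted; Euclidean distance is simply the most direct reading of "distance between vertices" for points in a three-dimensional space.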
By examining relative changes along the three axes, it is possible to consider the impact of the intervention on the educational exchange occurring in the course between teachers and learners, relative to the individual axes.

Research aims and questions

In our study we ensured that pedagogical practice (the z-axis) remained unchanged (same course materials, contents of lectures, course book, assessor, teacher, and course coordinator) in order to restrict the CT-space to a plane spanning the x- and y-dimensions (Figure 1), and introduced a change targeting the frequency of students’ engagement (the x-axis) by incorporating a series of short multichoice online quizzes that were to be completed by students frequently (before every lecture). Prior research found that blended learning environments with quizzes were in general more objectively effective and attractive to learners, as reported in a recent meta-analytic synthesis of numerous studies (Van Alten et al., 2019). Moderator analyses found that quizzes positively affect the effectiveness (measured by grades) and attractiveness (measured by student satisfaction) of blended learning. However, these studies did not explore the impact of quizzes through the lens of learners’ engagement. In this research, a mixed-methods study was carried out with concurrent quantitative and qualitative data analyses to examine the potential effects of quizzes on students through the lens of engagement via the CT-space model. The following research questions were posed comparing the previous semester with the trial semester:
1. Would the position of the course in the CT-space change (indicating a more efficient and effective educational exchange)?
2. Would the final grades be significantly different (favouring the trial semester)?
3. Would the pass rate be significantly higher?
4. Would the proportion of students achieving A grades be significantly higher?
5. Would students report an increase in their lecture attendance and attribute it to the impact of the intervention?

The importance of prior achievement and the impact this can have on many educational variables prompted us to also investigate whether there were any differences between the groups of students with respect to their grades in a prerequisite course. Two further questions were therefore posed:
6. Would there be significant between-group differences in students’ perceptions about the value of the intervention (as contributing to their understanding of the course material) based on prior achievement?
7. Would there be significant between-group differences in behavioural engagement with the intervention based on prior achievement?

Method

Research site and participants

The study was conducted at a large research-intensive university (University of Auckland, New Zealand) in an undergraduate mathematics course covering Calculus II, Linear Algebra II, and Introduction to Ordinary Differential Equations, serving the needs of students majoring in a variety of disciplines. For a large proportion of the non-mathematics majors taking this course, lack of interest in the subject contributes to low intrinsic motivation, skewed attitudes, and deficient engagement with the course. An additional challenge is the size of the course: enrolment numbers range from 350 to 550 students per semester. The course is delivered over 12 teaching weeks with the following weekly structure: three 1-hour lectures (traditional-style instruction) and one 1-hour tutorial (student-centred approach with 25 to 30 students per room). It is also noteworthy that, due to a mandatory policy, all lectures are video-recorded and made available to students on the same day. After this policy was rolled out, attendance at lectures dropped significantly, in many cases to below 30%.
In the trial semester, 393 students were enrolled in the course and were expected to take the online pre-lecture quizzes. Their final grades were compared to those in the previous semester, when 518 students took the same course.

Intervention: Online quizzes

The design of the intervention was informed by findings from experimental educational psychology pertaining to the effect of distributed (spaced) practice and was based on a somewhat similar successful implementation reported by Novak et al. (1999). A bank of multiple-choice questions was developed and delivered as online quizzes, using Canvas, the university’s learning management system. The students were expected to complete the quizzes prior to attending each lecture, starting from week 2. Each quiz contained two multiple-choice questions, which assessed the two main learning outcomes from the previous lecture. These were designed by the first author, who had been teaching this course for 7 semesters. The students were allowed two attempts at completing each quiz, and their highest score was recorded. Each question was randomly selected from a bank of questions containing 2 to 3 versions (for example, with different numerical values). The marking scheme awarded one mark for each correctly answered question, with a maximum mark per quiz of 2 and a minimum of 0. The time limit was set at 30 minutes from the moment a student began a quiz, to provide enough time to revise the material if needed while completing it. Figure 2 shows an example question. The quizzes contributed 7% of the final grade, with the best 28 scores (of the 32 provided) recorded.

Figure 2. Quiz question with instructor’s view of the responses using a learning analytics tool
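The marking scheme described above (best of two attempts per quiz, best 28 of 32 quiz scores, 7% weighting) can be sketched as a short computation. The function name and data layout below are our own illustrative choices, not part of the course software; the scheme itself follows the description in the text.

```python
# Sketch of the quiz marking scheme: each quiz is marked out of 2,
# the best of two attempts counts, the best 28 of 32 quiz scores are
# kept, and the component is worth 7% of the final grade.

def quiz_component(attempt_pairs, kept=28, weight=7.0):
    """attempt_pairs: list of (first_attempt, second_attempt) marks, each 0-2.
    Returns the contribution to the final grade, in percentage points."""
    best_per_quiz = [max(a, b) for a, b in attempt_pairs]
    kept_scores = sorted(best_per_quiz, reverse=True)[:kept]
    max_marks = 2 * kept  # 56 marks when 28 quizzes are kept
    return weight * sum(kept_scores) / max_marks

# A student with full marks on 30 quizzes who misses 2 still earns the full 7%.
scores = [(2, 2)] * 30 + [(0, 0)] * 2
print(quiz_component(scores))  # → 7.0
```

Note that under this scheme each kept quiz is worth 7% / 28 = 0.25% of the final grade, the per-quiz incentive referred to later in the Results section.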
Data collection and analysis

Data was collected from:
• The learning management system (Canvas), providing grades and learning analytics;
• A student survey (Likert 4- and 5-point scale and Likert-type scale questions) at the end of the trial semester (paper-based, anonymous, conducted in class, 10 minutes allocated);
• A focus group interview with 5 students (who volunteered); and
• An interview with a tutor (a teaching assistant facilitating tutorials).

Ethics approval was granted by the University of Auckland Human Participants Ethics Committee for conducting the study (Reference number 017236). The two interviews were semi-structured and conducted by the second author, who was not involved with the Department of Mathematics. They were held at the end of the semester, after all grades were finalised. Questions were prepared in advance and were guided by the framework of themes to be explored based on the CT-space model. Both interviews were audio-recorded and transcribed. Various statistical tests were conducted using IBM SPSS version 26 to analyse the quantitative data. Raw data from the survey can be found at https://doi.org/10.17608/k6.auckland.8330465.v1.

Study limitations

Several limitations of this study may affect the interpretation and generalisation of the results. One limitation of the surveying process was non-probability sampling: as the survey was conducted in class, it selected students who are more likely to attend lectures rather than a random sample. Focus group participants were also not chosen randomly, as they self-selected to participate. Lastly, the comparison of final grades is between two different cohorts of students, which may differ slightly.

Results and discussion

At the design stage, we were unsure whether introducing online pre-lecture quizzes would increase the frequency of students’ engagement with the content taught in class.
The result, however, was notably surprising, not only in terms of the uptake but also in the consistency of engagement for the duration of the semester. Completion rates for the 32 quizzes ranged from 81.2% to 96.45%, with a monotonically decreasing pattern: Quiz 1 was completed by 96.45% of enrolled students, Quiz 14 by 92.13%, and Quiz 28 by 87.3%. We had hoped for good engagement with the quizzes but had not anticipated this level to be so high and to remain continually high as the semester pressure tightened. It is noteworthy to reiterate that the incentive for students to attempt a quiz was only a 0.25% contribution towards their final grade, yet it acted as an effective extrinsic motivator. Previously, assessment consisted of 3 assignments, 10 tutorials, 1 test, and 1 exam. In the trial semester, we added 32 quizzes. Keeping all previous assessments unchanged allowed us to monitor the relative change to the right along the x-axis in the CT-space. To investigate whether the students engaged with the content while taking the quizzes, we first report on the data from the survey. The response rate was 98% of students who were present in class, with 140 individual responses. Our initial interest focused on how students approach a quiz and, more importantly, if their first attempt is incorrect (the online system automatically provides instant feedback alerting the student), would they try again? Also, would they take the time to refer to their notes and other resources? Our data revealed that over 68% of students reported spending time studying before their first attempt. Furthermore, over 88% of students stated that they spent time studying before the second attempt. In order to answer our research question 7, we investigated students’ studying behaviour during the first and second attempts at the quizzes (self-reports) with respect to the different A-, B-, and C-grade bands in the prerequisite mathematics course.
Students had reported their grades in the survey, which asked: “What was your grade in the stage I prerequisite mathematics course that you took?” Each grade band represents an aggregate of the corresponding range of the grade. For example, the A-grade band represents students who achieved A+, A, or A- in the prerequisite mathematics course. A Kruskal-Wallis H test was conducted to determine if there were differences in time spent studying before the first attempt at quizzes between groups that differed in their prior achievement: C-grade band (n = 25), B-grade band (n = 47), and A-grade band (n = 61). We found the time spent studying was lower for the C- and B-grade bands (Mdn = 1-5 mins) than for the A-grade band (Mdn = 6-10 mins), but the differences were not statistically significant, χ2(2) = 4.947, p = .084 (Figure 3).

Figure 3. Time spent studying before first attempt at quizzes per grade band. Item: How much time did you usually spend studying before taking a quiz?

In contrast, examining the second attempt at quizzes, a Kruskal-Wallis H test demonstrated that the time spent studying before the second attempt was statistically significantly different between groups, χ2(2) = 9.991, p = .007 (Figure 4). Subsequently, pairwise comparisons were performed using Dunn's procedure with a Bonferroni correction for multiple comparisons. Adjusted p-values are presented. This post hoc analysis revealed statistically significant differences in the time spent studying between the C-grade band (Mdn = 1-5 mins) and B-grade band (Mdn = 6-10 mins) (p = .020), as well as the C-grade band and A-grade band (Mdn = 6-10 mins) (p = .007), but not between the B- and A-grade bands.

Figure 4. Time spent studying before second attempt at quizzes per grade band. Item: If you got a question wrong during the first attempt, how much time did you study before the second attempt?
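For readers unfamiliar with the Kruskal-Wallis H statistic used above, the following pure-Python sketch shows how it is computed on ordinal (ranked) data, including the standard correction for tied ranks. The implementation and the example responses are illustrative only; they are not the study's raw survey data, which were analysed in SPSS.

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with tie correction for ordinal/interval data."""
    data = sorted(chain.from_iterable(groups))
    n = len(data)
    # Assign mid-ranks: tied values share the average of their rank positions.
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and data[j] == data[i]:
            j += 1
        ranks[data[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        i = j
    # H = 12 / (N(N+1)) * sum(R_g^2 / n_g) - 3(N+1)
    h = 12 / (n * (n + 1)) * sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    # Divide by the tie-correction factor C = 1 - sum(t^3 - t) / (N^3 - N).
    tie_sizes = [data.count(v) for v in set(data)]
    c = 1 - sum(t ** 3 - t for t in tie_sizes) / (n ** 3 - n)
    return h / c if c else h

# Made-up ordinal responses coded 1-5, one list per grade band:
c_band = [1, 1, 2, 2, 2]
b_band = [2, 3, 3, 3, 4]
a_band = [3, 3, 4, 4, 5]
print(round(kruskal_wallis_h(c_band, b_band, a_band), 3))  # → 9.957
```

Under the null hypothesis, H is compared against a χ² distribution with (number of groups − 1) degrees of freedom, which is how the χ²(2) values reported above arise for three grade bands.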
Competence beliefs, self-efficacy, and mastery experience

As explicated earlier in the literature section, students’ competence-related beliefs play a central role in shaping their behavioural and cognitive engagement. To investigate the impact of our intervention on competence building, we analysed the data from students’ responses to the question: “How much did the quizzes contribute to your understanding of the course material?” (not at all; a little; somewhat; significantly). The majority of students (80%) reported somewhat (51%) or significantly (29%). Interestingly, there was no significant difference between the groups that differed in their prior achievement (Figure 5). This was established by a Kruskal-Wallis H test comparing the responses of students from the C-grade band (n = 25), B-grade band (n = 47), and A-grade band (n = 61). All three groups valued the contribution of quizzes to their understanding of the course material similarly (Mdn = somewhat), with the differences being not statistically significant, χ2(2) = .514, p = .774. This test indicates that the answer to our research question 6 is that there is no difference between the groups.

Figure 5. Contribution to understanding of material. Item: How much did the quizzes contribute to your understanding of the course material?

Thus, regardless of their prior achievement, the large majority of students attributed their improvement in understanding of the course material to the impact of the quizzes.
This suggests not only that the quizzes enabled competence building generally but also that they had an impact on the C-grade band students – a group that often conforms to type on many measures, including undesirable behavioural choices (e.g., Dibbs, 2019). Furthermore, the perspective of the expectancy-value theories (Eccles & Wigfield, 2002) posits that academic engagement, educational choices, and ultimately achievement are influenced by two categories of beliefs: (1) a learner’s expectations of success, and (2) a learner’s perception of the value of the tasks. It has been shown that learners’ subjective task value beliefs are strong predictors of engagement (e.g., Wigfield & Eccles, 2000). In our intervention study we introduced new tasks (online quizzes) and collected evidence demonstrating that the majority of students perceived these tasks as value-adding (contributing to their understanding). Thus, by extrapolation from empirical research based on expectancy-value theory, it is plausible to suggest that the introduction of quizzes enabled a higher level of academic engagement. As previously noted, self-efficacy is a central construct among competence-related beliefs and is a powerful predictor of students' achievement and learning outcomes, as believing that one has the capacity to perform promotes academic engagement. The most influential source of self-efficacy has been identified as mastery experience, as previous successful performance contributes to improvement in self-efficacy while failure undermines it (Usher & Pajares, 2009).
Considered from this perspective, the impact produced by our intervention is evident in the data, not only as contributing to student understanding of the material (as reported above), but also more broadly in developing their academic self-efficacy through successful mastery experience.

[Figure 5 data (counts): not at all / a little / somewhat / significantly – C-band: 2, 5, 11, 7; B-band: 2, 6, 23, 15; A-band: 0, 11, 34, 16]

Our evidence comes from the data about students’ performance on the quizzes during the semester. The large majority of students successfully completed the quizzes with a mean score of 90.76% (SD = 13.04%), equivalent to an A+ score, and 133 students (out of 393) received 100% on all quizzes. This suggests that, as the students received positive feedback on their performance consistently throughout the semester (three times a week before every lecture), it had a reinforcing effect, signalling that they were performing successfully and gaining mastery experience. Again, it seems plausible to suggest that the incorporation of quizzes, in addition to the standard assessment components, made a positive impact on student learning by amplifying students’ mastery experience. The quizzes provided validation for the students’ successful learning efforts and, as they were taken frequently, this enabled the accumulation of their mastery experience.

Lecture attendance

In order to answer our research question 5, we examined student attendance at lectures during the trial semester in comparison to their previous mathematics course, a prerequisite Stage I course. However, as student attendance at lectures at the University of Auckland is not compulsory, we were not able to easily analyse any impact our intervention had on attendance.
Carrying out a head-count of students attending lectures during the trial semester was also of limited value, as we had no comparable data from previous semesters. We decided that a way forward might be to survey the students using a comparative set of three questions:
• In previous mathematics courses (e.g., Stage I MATHS), how many lectures did you attend? (none; only some; about half; most; almost all)
• In the MATHS course this semester, how many lectures did you attend? (none; only some; about half; most; almost all)
• Did the pre-lecture quizzes affect your attendance of lectures in the MATHS course this semester? (Yes, I attended more; No, I attended the same; Yes, I attended less; Don’t know)
The self-reported increase in attendance was confirmed by a paired-samples t-test, used to determine whether there was a statistically significant mean difference between students’ attendance of lectures in the trial semester and their attendance in a prerequisite course (N = 135). Inspection of outliers did not reveal them to be extreme, so they were kept in the analysis. Participants reported attending more lectures during the trial semester (M = 4.56, SD = 0.708) than in a prerequisite Stage I course (M = 4.29, SD = 1.112), a statistically significant mean increase of 0.274, 95% CI [0.092, 0.456], t(134) = 2.982, p = .003, d = .256. Moreover, in response to the third question, 27% of students reported that the incorporation of pre-lecture quizzes in the trial semester made them attend more lectures. Notably, the largest effect was reported by the B-grade band: 36% attributed their increased attendance to the introduction of quizzes, whereas only 23% of A-grade and 20% of C-grade band students reported increased attendance. Only 3 students said that they attended less – all from the A-grade band. Overall, the evidence suggests that the introduction of regular online quizzes increased lecture attendance.
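The paired-samples t-test reported above can be sketched in a few lines of standard-library Python. The raw survey data are not published, so the attendance scores below are illustrative placeholders only; the final line is an internal-consistency check that uses nothing but the reported statistics (for a paired test, Cohen's d = t / √n).

```python
import math

# A minimal sketch of a paired-samples t-test; NOT the study's analysis code.
def paired_t(after, before):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the difference scores.
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    cohens_d = mean_d / sd_d  # standardised mean difference for paired data
    return t, n - 1, cohens_d

# Illustrative attendance scores on the survey's 1-5 scale (placeholder data).
trial = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]
prereq = [4, 5, 4, 4, 4, 5, 4, 4, 5, 4]
t, df, d = paired_t(trial, prereq)

# Internal-consistency check on the published summary: for a paired test
# d = t / sqrt(n), so t(134) = 2.982 with n = 135 implies d of about .257.
print(round(2.982 / math.sqrt(135), 3))
```

The check gives d ≈ 0.257, consistent with the reported d = .256 up to rounding of the published summary values.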
It is notable that a fifth of the C-grade cohort attributed their choice to attend lectures to the impact of the quizzes, particularly as this cohort includes at-risk students who generally tend to skip lectures.

Focus group interview

The focus group interview was an opportunity for the students to express their thoughts more freely. The discussion mainly focussed on whether the quizzes had any impact on changing their study behaviour and whether there was an increased frequency in their engagement with the course material. One student explained that the quizzes galvanised practice in order to achieve full marks.

I think the quizzes force you to learn that. Such as, if you don’t know how to do this question, even if you know the method you would need… you also want to know the theory. […] I think it helps a lot for a subject like maths because it helps you do practice every day, so it makes you practice every day. […] Oh yeah, if someone just doesn’t care about the quizzes then it is a separate issue, but I think that if I want to score 7 out of 7 then I need to practice every day.

Another student further elaborated how the quizzes helped to obtain clarity.

I mean understanding is basically what we did in class, but it [the quizzes] just made me review all of the things that we did in class, so it made it clearer. …. Yeah, I think the quizzes just help you to review the stuff. Yeah.

A third student described how the quizzes were motivational.

I think that, for some subjects, it is so easy to just turn up to lectures and to just turn up to tutorials and do the work and then if you forget about it until the next week, then that’s fine. But the quizzes in some way – you get home in the evenings and you like ‘Oh, I got to do my quiz’. So, you map stuff out…so you are more inclined to do things. I would probably do stuff anyway but, yeah, it motivates me a bit more, yeah.
Although a small group, and not necessarily representative of the whole class, it was interesting to note how committed they were to completing the quizzes for the duration of the entire semester. The students unanimously agreed that the incorporation of online quizzes was a good innovation for the course and suggested it should be used more widely.

Tutor interview

It was fortuitous that one of the regular teaching assistants for the course, Jack (not his real name), was employed to supervise 10 of the 13 tutorial streams for the course in the trial semester. This equated to him tutoring approximately 300 students per week, affording him a unique opportunity to observe whether the intervention had any impact on student learning practices. Jack had worked as a tutor for the same course for four semesters, so (although anecdotal) he was able to provide a perceptive comparison. He commented that the students during the first tutorial were different from previous years (note that the tutorials began in the same week as the start of the quizzes).

I did definitely notice that there was a smaller proportion of people coming into tutorials that had absolutely no idea what any of the questions were. So, a lot of the time in previous semesters it would become quite clear to me that people did not know much about what was going on, because either they've been to class but did not really pay attention or haven't gone over their notes ... And so, often I would explain a few things on the board at the beginning to get them started - that was last year ... But this semester ... it was easier on me. I could help them more in terms of the level that they should be at instead of starting from scratch.

Jack also remarked that the quizzes helped him as a tutor to focus on the concepts students needed for the exams rather than always going over basics.

I think it was a good idea.
It kept people more on track than usual and it made tutorials a bit easier for me: in that I could sort of talk to them more about the stuff they need to be doing in the exam, instead of having to go over the basics with them like [the lecturer] would have done in the lectures anyway.

Jack’s observations about the change in student behaviour between semesters provide further evidence of the positive impact of the introduction of the quizzes on educational exchange. Viewed through the lens of the CT-space, which captures transactions that occur between an instructor and a student for the duration of a course, the data from both the focus group and tutor interviews illuminate not only an increased frequency of interactions, but also students’ improved involvement with the course through staying on top of things. The students’ transactions in the educational exchange were now timely and up-to-date.

Final grades

In order to answer our first research question, we draw on the evidence from the comparison of grade distributions in the trial semester and the previous semester of the same course. While admitting the limitations of this approach, we note many similarities in the delivery of the courses: the same course coordinator, the same teacher, the same course book and all other materials, an identical split into lecture topics, and the same external assessor. The role of the assessor is to benchmark and calibrate the test and exam (contributing 20% and 60%, respectively, to the final grade) against previous semesters. A content analysis of both the test and exam was conducted by the assessor to ensure that the level of difficulty was at least as hard as in the previous semester. Before the teaching team marked the final exam, a mark-norming calibration was employed to ensure that partial marks were awarded consistently with the previous semester’s marking rubric.
Figure 6 demonstrates the shift in course grades across the grade bands: a reduction in the proportion of fails and an almost identical increase in A-band grades. A Mann-Whitney U test was run to determine whether the difference between the trial semester (N = 393) and the previous semester (N = 518) was significant. The final grades were found to be statistically significantly higher in the trial semester (Mdn = B-grade band) than in the previous semester (Mdn = C-grade band), U = 90664.5, z = -2.923, p = .003, which answered our research question 2 in the affirmative, favouring the trial semester. In order to answer research question 3, as to whether the pass-rate would be significantly higher, we conducted a test for two proportions (chi-square test of homogeneity): 303 students (77.1%) passed the course in the trial semester compared to 362 students (69.9%) in the previous semester. This was a statistically significant difference in proportions of .07, p = .015, answering question 3 in the affirmative – the pass-rate was indeed higher in the trial semester. Similarly, in order to answer research question 4, whether the proportion of students achieving A grades would be significantly higher, we conducted another test for two proportions: 106 students (27.0%) achieved A grades in the trial semester compared to 99 students (19.1%) in the previous semester. Again, this was a statistically significant difference in proportions of .08, p = .005, answering question 4 in the affirmative. To address a possible limitation, we checked the result on the adjusted grades, accounting for the fact that the 7% allocated to quizzes in the trial semester came from a reduction in the proportion allocated to other assessment components (excluding the final exam). The result remains significant even after the adjustment.

Figure 6.
Final grades comparison between the trial semester and previous semester

[Figure 6 data (% of enrolled): A-band / B-band / C-band / Fail – trial semester (393 students): 26.97, 27.23, 22.90, 22.90; previous semester (518 students): 19.11, 28.38, 22.39, 30.12]

Conclusion

As a result of the modernisation of higher education, there is a strong need for new frameworks for conducting evaluation research on technology-assisted innovations. To that end, we developed our CT-space framework and used it to guide us in evaluating the impact of the introduction of online quizzes. This intervention was designed according to meta-analytic findings from experimental educational psychology (optimising the effect of distributed practice on long-term retention) and implemented as a manifestation of the theoretical principles of the CT-space. The utility of the CT-space lay in providing a structure for isolating factors that influence educational exchange in order to map the impact on learning. In our study, we ensured that pedagogical practice remained unchanged (same course materials, contents of lectures, course book, assessor, teacher, and course coordinator) in order to restrict the CT-space to a plane (Figure 1). We then introduced a change targeting the x-axis (frequency of students’ engagement) by incorporating a series of short multiple-choice online questions assigned three times a week. Our evidence demonstrates that a large majority of students consistently engaged with this new educational transaction, confirming an increase along the x-axis in the CT-space. Furthermore, we have reason to believe that our intervention improved the quality of student engagement (y-axis) with reference to competence-related beliefs and self-efficacy.
The introduction of pre-lecture quizzes enabled competence building for students across all grade bands, but importantly this effect extends to the C-grade band students – a group often associated with undesirable behavioural choices and low levels of engagement. We also mapped the impact of the quizzes through the lens of self-efficacy, which is developed through gaining mastery experience. In the trial semester, we observed that the majority of students were coming to the lectures prepared – having revised the material from the previous lecture in order to pass the quiz assessment. The quizzes were spaced uniformly throughout the semester, thus frequently reinforcing the message that students were performing successfully. Hence, the quizzes provided validation for students of their successful learning efforts and, given their frequency, enabled the accumulation of their mastery experience. It is therefore plausible to suggest that this, in turn, increased the students’ self-efficacy, leading to higher-quality engagement. The comparison of the grade distribution in the trial semester with the previous semester provided further evidence of a positive impact on the students’ engagement with the course, showing significant overall improvement. Additionally, there was a significant increase in attendance at lectures, despite lecture capture videos being made available daily. In particular, 26% of the students attributed their improved attendance of lectures to the introduction of quizzes. This, combined with the data from the focus group and tutor interviews, and viewed through the lens of the CT-space, not only illuminates an increased frequency of interactions, but also students’ improved involvement with the course by staying on top of the material. The students’ transactions in the educational exchange have become timely and up-to-date.
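In the spirit of the evidence-based approach advocated here, the headline pass-rate result can be verified from the published counts alone. The following standard-library Python sketch (not the study's analysis code) uses the two-proportion z-test, which is equivalent to the reported chi-square test of homogeneity on a 2 × 2 table (z² = χ² with 1 degree of freedom).

```python
import math

# A minimal sketch of a two-proportion z-test on the published pass counts.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))        # two-tailed normal p-value
    return p1 - p2, z, p

# Trial semester: 303 of 393 passed; previous semester: 362 of 518 passed.
diff, z, p = two_proportion_z(303, 393, 362, 518)
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.3f}")
```

Running this reproduces the reported difference of about .07 with p ≈ .015.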
Overall, our findings suggest that a relatively small change in course delivery, utilising the affordances of technological advances, can improve a course’s coordinates in the CT-space (Figure 1), indicating a more efficient and effective educational exchange between teachers and learners.

Future research

The design principle of our intervention is generalisable and transferable to other educational settings as a heuristic for instructional design. Stylianides and Stylianides (2013) propose three dimensions for the evaluation of classroom interventions: (1) how amenable it is to scaling up, (2) how practicable it is for curricular integration, and (3) how capable it is of producing long-lasting effects. Evaluated this way, our intervention can arguably be deemed effective on the first two criteria: the number of students utilising online quizzes is unlimited, and it is practicable for incorporation into existing curricular structures at any level, as most contemporary learning management systems provide the capability for setting up online quizzes. Determining long-lasting effects is more difficult, and ongoing research with more nuanced theoretical and practical considerations is needed. Further related theoretical and empirical research could seek to operationalise the multidimensional constructs of the quality of student engagement and pedagogical practice, which are represented by values of multivariable functions on the y and z axes in the CT-space. This could be achieved by utilising empirical and theoretical findings based on influential theoretical frameworks that conceptualise student engagement.
For example, Kahu and Nelson’s (2018) sociocultural engagement framework offers a powerful lens for unpacking the complex interactions occurring within the educational interface by utilising the explanatory power of the four key psychosocial mechanisms that influence the quality of student engagement (academic self-efficacy, emotions, belonging, and well-being). Reeve (2012) also investigated variations in students’ quality of engagement which, he posits, arise out of the quality of one’s inherent and acquired sources of motivation. His student-teacher dialectical framework is based on self-determination theory and stresses the importance of agentic engagement. Grounded in empirical observations, the framework demonstrates that the quality of a teacher’s motivating style affects midsemester changes in students’ motivation relating to their psychological needs (autonomy, competence, and relatedness), which in turn affects students’ quality of engagement. This is very fitting for framing future investigations of the effect of changes along the z-axis (pedagogical practice) through adjusting a teacher’s motivating style and analysing its impact within the student engagement plane in the CT-space.

References

Bandura, A. (1997). Self-efficacy: The exercise of control. Freeman.
Bond, M., & Bedenlier, S. (2019). Facilitating student engagement through educational technology: Towards a conceptual framework. Journal of Interactive Media in Education, 2019(1), 1-14. https://doi.org/10.5334/jime.528
Cepeda, N. J., Pashler, H., Vul, E., & Wixted, J. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354-380. https://doi.org/10.1037/0033-2909.132.3.354
Dibbs, R. (2019). Forged in failure: Engagement patterns for successful students repeating calculus. Educational Studies in Mathematics, 101, 35-50. https://doi.org/10.1007/s10649-019-9877-0
Eccles, J. S., & Wigfield, A. (2002).
Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109-132. https://doi.org/10.1146/annurev.psych.53.100901.135153
Fredricks, J. A., Filsecker, M., & Lawson, M. A. (2016). Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learning and Instruction, 43, 1-4. https://doi.org/10.1016/j.learninstruc.2016.02.002
Heinrich, E., Henderson, M., & Dalgarno, B. (2016). Editorial: From tinkering to systemic change. Australasian Journal of Educational Technology, 32(2). https://doi.org/10.14742/ajet.3219
Heinrich, E., Lee, C. B., & Henderson, M. (2017). Editorial. Australasian Journal of Educational Technology, 33(1). https://doi.org/10.14742/ajet.3732
Helme, S., & Clarke, D. (2001). Identifying cognitive engagement in the mathematics classroom. Mathematics Education Research Journal, 13(2), 133-153. https://doi.org/10.1007/BF03217103
Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research & Development, 37(1), 58-71. https://doi.org/10.1080/07294360.2017.1344197
Keller, J. M. (2010). Motivational design for learning and performance: The ARCS model approach. Springer. https://doi.org/10.1007/978-1-4419-1250-3
Merrill, M. D. (2002). First principles of instruction. Educational Technology, Research and Development, 50(3), 43-59. https://doi.org/10.1007/BF02505024
Merrill, M. D. (2013). First principles of instruction: Identifying and designing effective, efficient and engaging instruction. John Wiley & Sons.
Montgomery, A. P., Hayward, D. V., Dunn, W., Carbonaro, M., & Amrhein, C. G. (2015). Blending for student engagement: Lessons learned for MOOCs and beyond. Australasian Journal of Educational Technology, 31(6). https://doi.org/10.14742/ajet.1869
Moore, M. (2013). The handbook of distance education (3rd ed.). Routledge. https://doi.org/10.4324/9780203803738
Novak, G., Patterson, E. T., Gavrin, A.
D., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Prentice Hall.
Pajares, F., & Miller, M. D. (1994). Role of self-efficacy and self-concept beliefs in mathematical problem solving: A path analysis. Journal of Educational Psychology, 86(2), 193-203. https://doi.org/10.1037/0022-0663.86.2.193
Reeve, J. (2012). A self-determination theory perspective on student engagement. In S. Christenson, A. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement. Springer. https://doi.org/10.1007/978-1-4614-2018-7_7
Roy, S., Inglis, M., & Alcock, L. (2017). Multimedia resources designed to support learning from written proofs: An eye-movement study. Educational Studies in Mathematics, 96(2), 249-266. https://doi.org/10.1007/s10649-017-9754-7
Skaalvik, E. M., Federici, R. A., & Klassen, R. M. (2015). Mathematics achievement and self-efficacy: Relations with motivation for mathematics. International Journal of Educational Research, 72, 129-136. https://doi.org/10.1016/j.ijer.2015.06.008
Spector, J. M., & Merrill, M. D. (2008). Editorial: Effective, efficient and engaging (E3) learning in the digital age. Distance Education, 29(2), 123-126. https://doi.org/10.1080/01587910802154921
Stylianides, A. J., & Stylianides, G. J. (2013).
Seeking research-grounded solutions to problems of practice: Classroom-based interventions in mathematics education. ZDM Mathematics Education, 45(3), 333-341. https://doi.org/10.1007/s11858-013-0501-y
Usher, E. L., & Pajares, F. (2009). Sources of self-efficacy in mathematics: A validation study. Contemporary Educational Psychology, 34(1), 89-101. https://doi.org/10.1016/j.cedpsych.2008.09.002
Van Alten, D. C. D., Phielix, C., Janssen, J., & Kester, L. (2019). Effects of flipping the classroom on learning outcomes and satisfaction: A meta-analysis. Educational Research Review, 28, 100281. https://doi.org/10.1016/j.edurev.2019.05.003
Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68-81. https://doi.org/10.1006/ceps.1999.1015

Corresponding author: Tanya Evans, t.evans@auckland.ac.nz

Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.

Please cite as: Evans, T., Kensington-Miller, B., & Novak, J. (2021). Effectiveness, efficiency, engagement: Mapping the impact of pre-lecture quizzes on educational exchange. Australasian Journal of Educational Technology, 37(1), 163-177. https://doi.org/10.14742/ajet.6258