Australasian Journal of Educational Technology, 2021, 37(6).

Student achievement emotions: Examining the role of frequent online assessment

Kaitlin Riegel, Tanya Evans
University of Auckland

The rapid inclusion of online assessment in higher education has left a void in investigating the relationship this form of assessment has with student emotions. This study examines the influence of frequent online assessment on student emotions in a university setting using a mixed-methods approach. Students' emotions in an online quiz and a traditional classroom test in a second-year mathematics course (n = 91) were analysed using both quantitative and qualitative approaches, through the lens of the control-value theory. The study used an adaptation of the Achievement Emotions Questionnaire (AEQ) to collect data on reported student emotions in both assessments, as well as qualitative data on students' views of the frequent online assessment. Students reported higher levels of positive emotions and lower levels of negative emotions in an online quiz compared to the test, and we attempted to identify sources of these differences. The findings are discussed together with implications for the habitualisation of assessment emotions. Practically grounded generalisations are outlined as opportunities for disrupting negative emotions and reaffirming positive emotions, which are suitable for implementation in higher education on a broad scale.

Implications for practice or policy:
• For educators designing tertiary assessment aimed at promoting positive and reducing negative emotions, we advise incorporating features that students perceive as allowing them greater control over obtaining success.
• Specifically, we advise incorporating frequent low-stakes online quizzes into tertiary courses.
• These present opportunities for students to habitualise positive assessment-related emotions, which correlate with performance and constructs such as self-efficacy.
• The Achievement Emotions Questionnaire (AEQ) can be adapted to investigate achievement emotions in different forms of assessment.

Keywords: emotions, assessment, online quizzes, control-value theory, mixed-methods

Introduction

The incorporation of online tools in courses has become widespread practice in most domains of higher education. This type of instructional blend is increasingly considered the future conventional model in higher education (Brown, 2016; Heinrich et al., 2016). With the improved efficiency in administration afforded by technological advances, the use of online computer-based assessment is on the rise. Major reductions in marking costs through automated instant marking (e.g., multiple-choice online quizzes) have opened higher education to new possibilities for increasing the frequency of assessment. For example, traditional assignments, typically due weekly at most, can be replaced by online quizzes that are due before every lecture. As a myriad of higher education courses featuring online assessment are developed worldwide, we must evaluate the impact these different assessments have on students. Emotions play a critical role in student academic interest, engagement, performance, and overall wellbeing (Kahu & Nelson, 2018; Pekrun & Stephens, 2010; Schukajlow & Rakoczy, 2016). Historically, however, higher education theorists and empirical researchers have focused on cognition and motivation, leaving the role of emotions in students' academic experience largely unacknowledged. While major research efforts have been undertaken in the domain of blended learning, most studies focus on students' experiences from cognitive and motivational perspectives (for a review, see Brown, 2016).
Similarly, existing studies of online assessment concentrate on its relationship with motivation and achievement in large summative assessments but neglect the relationship with student emotions (Di Meo & Marti-Ballester, 2020; Gikandi et al., 2011). An overdue change is reflected in numerous recent publications in the field, including a special issue of Studies in Higher Education on "The Role of Emotions in Higher Education Teaching and Learning Processes" (2019), signalling the recent emergence of this research frontier in higher education (e.g., Lincoln & Kearney, 2019; Peterson et al., 2015; Xing et al., 2019). In addressing this gap in the literature on computer-based assessment-related emotions, this study set out to examine student emotions around frequent online assessment in a university setting.

Theoretical framework

Achievement emotions

Emotions that are experienced by learners in relation to achievement activities and outcomes are called achievement emotions (Pekrun, 2006). While there is a growing body of literature on emotions in education, research on achievement emotions other than anxiety is still limited. In general, positive emotions, such as enjoyment, have been shown to correlate positively with engagement, attention, and flow while increasing motivation, effort, and academic performance (Mega et al., 2014; Pekrun et al., 2017; Pekrun et al., 2019; Schukajlow & Rakoczy, 2016). Conversely, negative emotions, such as boredom and hopelessness, have been shown to correlate negatively with the above (Peixoto et al., 2017; Pekrun et al., 2017; Pekrun et al., 2019). High-activation negative emotions, such as anxiety, can be more variable in their effects on learning and performance, but have been shown to associate negatively with attention, performance on complex tasks, and student achievement (Sotardi et al., 2020; Steinmayr et al., 2016).
Achievement emotions are conceptualised as either state emotions, which occur in a specific moment, or trait emotions, which individuals tend to experience in certain recurring scenarios. Although state emotions are more powerful predictors of these relationships, it is also valuable to investigate the trait emotional experiences reinforced in students around assessments.

Control-value theory

This study was conceived through the lens of the control-value theory (CVT) proposed by Pekrun (2000, 2006), which posits that achievement emotions are products of an individual's appraisals of the subjective control and value of activities and their outcomes. Subjective control refers to the individual's perceived control over the activity and outcome (e.g., believing that studying will lead to success). Subjective value refers to the individual's perceived importance of the activity and outcome (e.g., the importance of passing a test). Through the lens of the CVT, assessments that allow students more control over obtaining success or avoiding failure will result in higher levels of positive emotions, like joy, and lower levels of negative emotions, like hopelessness. Pekrun (2006) commented that individuals may experience "higher emotion intensity with subjectively more important success or failure" (p. 320). Therefore, a low-stakes summative assessment would likely correspond with lower emotional intensity, whereas an assessment worth a large portion of the final grade would correspond with greater emotional intensity. Additionally, assessments that are controllable and positively valued as an activity will elicit more positive and fewer negative emotions. Unpacking student assessment emotions by investigating their subjective control and value can inform how effectively an assessment design promotes useful affect. Since its conception, the theory has been widely accepted and has received support from empirical studies (Pekrun et al., 2011).
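The prospective branch of this appraisal logic (anticipated success or failure crossed with perceived control) can be illustrated as a small lookup, following Pekrun's (2006) summary. This is our own illustrative sketch, not part of the theory or of the study's analysis; the function and key names are hypothetical:

```python
# Illustrative mapping of CVT prospective-outcome appraisals to emotions,
# following Pekrun's (2006) summary: anticipated outcome crossed with
# perceived control yields a characteristic emotion.
PROSPECTIVE_EMOTIONS = {
    # (anticipated outcome, perceived control): emotion
    ("success", "high"): "anticipatory joy",
    ("success", "medium"): "hope",
    ("success", "low"): "hopelessness",
    ("failure", "high"): "anticipatory relief",
    ("failure", "medium"): "anxiety",
    ("failure", "low"): "hopelessness",
}

def prospective_emotion(outcome: str, control: str) -> str:
    """Return the characteristic prospective emotion for an appraisal."""
    return PROSPECTIVE_EMOTIONS[(outcome, control)]

# A quiz that students feel in control of passing maps to anticipatory joy;
# a test where failure feels possible and control is uncertain maps to anxiety.
print(prospective_emotion("success", "high"))    # anticipatory joy
print(prospective_emotion("failure", "medium"))  # anxiety
```

On this reading, shifting students' appraisal from possible failure towards controllable success is precisely what moves the predicted emotion from anxiety towards hope and anticipatory joy.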
Table 1 outlines outcome-based and activity-based achievement emotions by their appraisals as posited by the CVT. The Achievement Emotions Questionnaire (AEQ) is a validated measurement instrument for achievement emotions (Pekrun et al., 2011), developed within the CVT. The AEQ contains a test-related section designed to measure emotions around taking a test or exam. However, to quantitatively consider the variations between students' emotions in different forms of assessment, such as tests and online quizzes, it is necessary to ensure the measure is reliable for each context.

Table 1
The control-value theory: Basic assumptions on control, values, and achievement emotions

Object focus            Value appraisal       Control appraisal   Emotion
Outcome/prospective     Positive (success)    High                Anticipatory joy
                                              Medium              Hope
                                              Low                 Hopelessness
                        Negative (failure)    High                Anticipatory relief
                                              Medium              Anxiety
                                              Low                 Hopelessness
Outcome/retrospective   Positive (success)    Irrelevant          Joy
                                              Self                Pride
                                              Other               Gratitude
                        Negative (failure)    Irrelevant          Sadness
                                              Self                Shame
                                              Other               Anger
Activity                Positive              High                Enjoyment
                        Negative              High                Anger
                        Positive/negative     Low                 Frustration
                        None                  High/low            Boredom

Note. Reprinted from Pekrun (2006, p. 320).

Habitualised achievement emotions

Pekrun (2000) argues that repeated experience of an emotion can cause the cognitive appraisals to be bypassed, leading the emotion to become habitualised in recurring scenarios. For example, test anxiety may initially be experienced by way of a control-value appraisal but can become proceduralised so that the situation is no longer evaluated, with anxiety induced at the mere prospect of formal assessment. Zeidner (1998) stated, "anxiety is generally shaped by repeated failure during critical developmental periods" (p. 167).
More broadly, Crossman (2007) highlighted how assessment-related emotions are influenced by previous experiences, commenting, "student references to emotions and relationships were particularly rich when they linked past experiences of assessment with current perceptions" (p. 318). Further, research has shown there are reciprocal effects between achievement and emotions (Ahmed et al., 2013; Pekrun et al., 2017; Pekrun et al., 2019). Understanding the emotions students experience around frequent online assessment is therefore critical, both to avoid reinforcing negative assessment emotions and to seize the opportunity to habitualise positive ones.

The relationship between online assessment and emotion

While there is research on emotions relating to assessment, there is little on student emotion in online assessment or on comparing emotions across different forms of assessment. Historically, most research has focused on anxiety, demonstrating that students experience less stress and anxiety in summative assessment that takes place online (Cassady & Gridley, 2005; Dermo, 2009; Engelbrecht & Harding, 2004; Stowell & Bennett, 2010). However, Stowell et al. (2012) reported that students with high classroom-test anxiety experienced similar anxiety levels in online assessment, while students with low classroom-test anxiety experienced more anxiety online. A few studies have examined positive affect, for example, confidence (Cassady & Gridley, 2005) and comfortability (Dermo, 2009), and found a positive association with online assessment. More recently, Daniels and Gierl (2017) found students experienced more positive than negative emotions upon completing a computer-based exam, while Harley et al. (2020) reported that students experienced fewer negative emotions during computer-based assessment than their typical test-taking emotions. Harley et al.
(2020) commented, "research is needed to understand how non-traditional environments affect students' emotions and educational outcomes" (p. 3). To better unpack the influence of online assessment on student emotions, it is also necessary to control for differences between students. Existing research has demonstrated that females are more likely to experience assessment-related anxiety than males (Zeidner, 1998). The few studies that have investigated other emotions have shown that males tend to report higher positive emotions and lower negative emotions (Frenzel et al., 2007; Harley et al., 2020). As outlined previously, positive emotions tend to correlate positively with achievement, and negative emotions negatively. There is also some evidence that anxiety has a weaker correlation with poor performance in online assessment (Stowell et al., 2012; Stowell & Bennett, 2010). As research on emotions in online assessment settings is still growing, studies need to include variables such as gender and prior achievement to understand which groups of students online assessments support or neglect. The scarcity of existing research on emotions in online assessment, and the lack of attention paid to positive emotions in this context, demonstrate a need for deeper investigation. This study is not limited to investigating assessment anxiety; it examines the complex emotional landscape of university students' assessment experiences by disentangling the relationship between their reported emotions in an online quiz and a traditional classroom test. Moreover, the online quizzes in this study differ substantially from those in existing studies in grade weight, time available per question, and frequency, reflecting current mainstream trends.
Combining the complementary powers of a mixed-methods approach, we examined the role online assessment played in students' assessment-related emotions in a large sample of undergraduate students, allowing for more generalisable and transferable conclusions. In this way, we add to investigations into effective and efficient ways of conducting online assessment – a central aspect of today's higher education.

Research questions

The main goal of our study was to examine and explain student achievement emotions in relation to frequent online assessment in today's university context through the theoretical lens of the CVT. To that end, we identified three sequential research questions:
(1) How do students' emotional perceptions of an online quiz compare to the benchmark of a traditional test?
(2) What relationships exist between student assessment-related emotions, prior achievement, and gender?
(3) Through the lens of the CVT, how can we explain differences in students' emotional perceptions around frequent online assessment and invigilated forms of assessment through what they identify as the prevalent differences?
Through the lens of the CVT, we expected to see higher levels of positive emotions and lower levels of negative emotions in a quiz compared with the test. We additionally predicted that both gender and prior achievement would contribute to explaining student assessment emotions. This study aimed to systematically integrate quantitative and qualitative data within a single investigation, using the CVT as a theoretical foundation to reach a plausible conclusion. We expected to inform theoretical considerations of assessment design and opportunities for students to habitualise positive assessment-related emotions through frequent online assessment, which in turn correlate positively with academic performance and constructs such as self-efficacy.
Method

Study site

The study was conducted at the University of Auckland (New Zealand) in a standard second-year service mathematics course with around 400 students enrolled each semester. The course is designed to support other majors, such as computer science, finance, economics, physics, chemistry, biology, and other sciences. We selected this course as our study site because its students represent a diverse sample of majors commonly found in conventional universities worldwide. Further, mathematics is often viewed as a challenging obstacle by non-mathematics majors who are required to complete this course for their degree; many would not choose to enrol in it otherwise. Investigating the affective domain in such a context is of particular importance as, for example, mathematics-related anxiety is a well-documented phenomenon. The course was changed in 2016 to include frequent online assessment outside the classroom. Online quizzes were introduced between lectures throughout the term. Each quiz consisted of two multiple-choice questions, which assessed two key points from the previous lecture. Students needed to submit each quiz before the start of the next lecture and were given 30 minutes to complete it once opened. Students had two attempts at each quiz, with their best result kept, and were given instant feedback on incorrect responses after submission. Each quiz drew on a pool of questions that were randomly selected, both to reduce the possibility of cheating and to provide different questions if students reattempted. There were 31 quizzes throughout the 12-week term, and the best 27 results were recorded, making up 13% of the final grade. The course also had a 1-hour test held under exam conditions, consisting of 20 multiple-choice questions and worth 20% of the final grade.
The remaining marks for the course came from assignments, tutorials, and a final exam worth 50%. The test occurred halfway through the semester, outside normal lecture hours. The content and style of the questions were similar for both the online quizzes and the test. The impact of the online quizzes was analysed in a previous study by Evans et al. (2021). They concluded that the intervention had optimised the impact of distributed (spaced) practice on long-term memory retention. They further argued that the online quizzes enhanced the quality of student engagement and claimed that the frequency of successful learning efforts allowed an accumulation of mastery experience, thus enhancing students' self-efficacy.

Measures

A mixed-methods approach to data collection was taken to utilise the complementary powers of quantitative and qualitative perspectives and to mitigate the limitations of each (Ercikan & Roth, 2009). Specifically, this approach allowed us to answer a "more complete range of research questions" (Johnson & Onwuegbuzie, 2004, p. 21), both detecting quantitative differences in emotional experiences and qualitatively analysing students' subjective experience of each assessment, which in turn supports stronger conclusions. The questionnaire developed for this project adapted the test-related section of the AEQ, using the statements pertaining to before and during taking the test. The AEQ consists of statements that measure achievement emotions through the lens of the CVT on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The statements in these sections pertained to three positive emotions (enjoyment, hope, and pride) and four negative emotions (anger, anxiety, hopelessness, and shame). These statements were adapted to allow paired responses to each item for both the test and an online quiz, as illustrated in Figure 1. The questionnaire can be found at https://doi.org/10.17608/k6.auckland.9975608.
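The internal consistency of adapted subscales like these is conventionally checked with Cronbach's alpha, α = (k/(k−1))(1 − Σ item variances / variance of totals). A minimal pure-Python sketch, assuming rows are respondents and columns are the Likert items of one subscale (the data shown are hypothetical, not from the study):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for one subscale.

    scores: list of rows, one per respondent; each row holds that
    respondent's Likert ratings (e.g., 1-5) for the subscale's items.
    """
    k = len(scores[0])  # number of items
    n = len(scores)     # number of respondents

    def variance(xs):   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings for a four-item subscale (rows = respondents):
ratings = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.93
```

Values of α near 0.8 or above are conventionally read as good consistency, which is the benchmark against which the subscale reliabilities below can be judged.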
The Cronbach's alpha reliabilities demonstrated good internal consistency in both the test and a quiz for anxiety (test α = 0.87; quiz α = 0.89), enjoyment (test α = 0.84; quiz α = 0.85), hope (test α = 0.83; quiz α = 0.81), hopelessness (test α = 0.91; quiz α = 0.92), and shame (test α = 0.79; quiz α = 0.76). Anger (test α = 0.63; quiz α = 0.53) and pride (test α = 0.65; quiz α = 0.67) showed more questionable internal consistency. This could be explained by the focus of the study on emotions before and during an assessment, not after, resulting in the subscales for these two emotions having the fewest items, as the AEQ generally features more items for after the assessment.

Figure 1. An example of a statement and response options in the questionnaire

Additionally, we collected categorical information, including students' prior achievement (measured by their grade in a prerequisite course) and students' gender (with male, female, gender diverse, and decline-to-answer options; however, we only received male and female responses). At the end of the questionnaire, students were given the opportunity to identify what they perceived to be the major difference between an online quiz and the test. The question was phrased as follows: "From your point of view, what is the main difference between taking the test and an online quiz?" A semantic inductive thematic approach was employed to code the responses and identify themes (Braun & Clarke, 2012). First, a qualified researcher (the first author) with postgraduate-level research experience read, typed, and re-read all the participant responses to become familiar with the data. Next, the researcher annotated the responses with codes derived from the explicit content of the responses. The codes were then grouped into initial themes.
These themes were reviewed, with some consolidated and some split into subthemes. An experienced researcher (the second author) confirmed that they assigned the same themes to the participant responses. Finally, the themes were defined and named.

Procedure

The questionnaire was distributed on paper at the beginning of a lecture in the last week of the semester. Participants were recruited by convenience sampling of students in attendance at this lecture (N = 94). While probability sampling is more powerful in terms of generalisability, convenience sampling is the dominant sampling method within developmental science (Bornstein et al., 2013). The limitations of this approach are discussed later in the paper. Approval to conduct the study was granted by the University of Auckland Ethics Committee (approval number 022987). The response rate was 99% of the students present in the lecture. The completed questionnaires were collected, and the results were anonymously entered into a spreadsheet for analysis. Of the questionnaires collected, 83 students wrote at least one relevant sentence in response to the open-ended question, all of which were included in the qualitative analysis. After data cleaning, there were 91 responses to the questionnaire items. Table 2 presents a summary of the students included in the quantitative analysis. Raw data and analysis scripts are available at https://doi.org/10.17608/k6.auckland.9975608.

Table 2
A summary of the participating students in the study

          Prior achievement
Gender    A-range   B-range   C-range   Total
Female    26        8         2         36
Male      31        14        10        55
Total     57        22        12        91

Results

Quantitative analysis

Table 3 shows the results of paired-samples t-tests for each emotion on the test and a quiz. Inspection of plots of the differences showed normality was a reasonable assumption for the positive emotions. The positive emotions were all slightly negatively skewed, and the negative emotions all demonstrated positive skew.
Due to the robustness of the t-test to mild deviations from normality, we continued with the analysis. Removal of extreme outliers did not change the significance levels and only slightly changed the effect sizes, so outliers were kept in the analysis. All positive emotions were reported to be experienced more in a quiz than the test, while all negative emotions were experienced more in the test than a quiz. The effect size was large for anxiety (d = 0.88), medium for anger, hopelessness, and shame (d = 0.65, 0.58, and 0.52, respectively), and small to medium for enjoyment, hope, and pride (d = 0.31, 0.47, and 0.31, respectively). The effect sizes for the differences in negative emotions between the assessments were larger than those for the positive emotions.

Table 3
Descriptive statistics and t-test results for emotions by assessment type

              Test          Quiz          95% CI for
Emotion       M     SD      M     SD      mean difference   t       p      d
Enjoyment     3.01  0.76    3.17  0.83    -0.28, -0.05      -2.92   .004   0.31
Hope          3.40  0.69    3.59  0.70    -0.27, -0.10      -4.44   .000   0.47
Pride         2.96  0.84    3.11  0.88    -0.26, -0.05      -2.97   .004   0.31
Anxiety       3.00  0.83    2.34  0.85    0.50, 0.82        8.35    .000   0.88
Hopelessness  2.16  0.85    1.86  0.79    0.19, 0.41        5.52    .000   0.58
Anger         2.65  0.84    2.28  0.76    0.25, 0.49        6.22    .000   0.65
Shame         2.31  0.84    2.01  0.75    0.18, 0.43        4.92    .000   0.52
Note. N = 91

A two-way ANOVA was conducted with prior achievement and gender as variables on the difference between emotions reported in each assessment. The interaction between prior achievement and gender was not statistically significant, and analysis of the main effects showed prior achievement and gender were not significant for any emotion. Additionally, we ran a two-way ANOVA with prior achievement and gender as variables on the emotions reported in each assessment. In both assessments, the interaction between prior achievement and gender was not statistically significant for any emotion.
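For concreteness, the paired-samples statistics of the kind reported in Table 3 combine a t-test on within-student differences with a standardised effect size. A minimal pure-Python sketch on hypothetical scores (not the study's data, whose analysis scripts are at the DOI above); here Cohen's d is standardised by the SD of the differences, one common variant for paired designs, which need not match the standardiser used in the paper:

```python
import math

def paired_t_and_d(x, y):
    """Paired-samples t statistic and Cohen's d for matched score lists.

    x, y: same-length lists of one emotion's subscale scores per student
    (e.g., test-related vs quiz-related anxiety). d is computed on the
    within-student differences: d = mean(diff) / sd(diff).
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in diffs) / (n - 1))
    t = mean / (sd / math.sqrt(n))  # t statistic with n - 1 df
    cohens_d = mean / sd
    return t, cohens_d

# Hypothetical paired anxiety scores (test vs quiz) for six students:
test_scores = [3.2, 2.8, 3.5, 3.0, 2.6, 3.4]
quiz_scores = [2.4, 2.5, 2.6, 2.3, 2.2, 2.8]
t, d = paired_t_and_d(test_scores, quiz_scores)
print(round(t, 2), round(d, 2))  # → 6.52 2.66
```

A positive t (and d) here indicates higher scores in the test than the quiz, matching the sign convention of the anxiety, hopelessness, anger, and shame rows in Table 3.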
The main effect of gender was statistically significant for quiz-related anxiety (F(1, 83) = 6.840, p < .05, partial η2 = .076) and quiz-related shame (F(1, 83) = 6.219, p < .05, partial η2 = .070). The unweighted marginal means of quiz-related anxiety for females and males were 2.76 ± .158 and 2.25 ± .115, respectively. The unweighted marginal means of quiz-related shame for females and males were 2.38 ± .136 and 1.96 ± .099, respectively. Overall, males reported lower quiz-related anxiety (95% CI, 0.12 to 0.90, p < .05) and quiz-related shame (95% CI, 0.09 to 0.76, p < .05) than females. Finally, prior achievement was not found to have a significant effect in most cases; however, there were several significant results, the details of which can be found at https://doi.org/10.17608/k6.auckland.9975608.

Qualitative analysis

From the 83 responses, 71 distinct codes were assigned across the statements, and from these codes, 22 themes were identified. It was necessary to consider the themes as two-dimensional, defined by interactions between students' experience and the features of the assessment. When considering which differences between an online quiz and the test were important to the students, it was valuable to our research to retain this interplay. For example, two statements from our data read, "The stress and panic of a test" and "There is more pressure and stress placed on me for the test because it is weighted more than the quiz." The first statement is an example of the theme anxiety, stress, and pressure in general, whereas the second is an example of the theme anxiety, stress, and pressure with respect to importance. Table 4 is presented in matrix form to reflect the two axes and contains a summary of counts for each theme. The boundaries of each theme are defined below with excerpts from student responses.
Anxiety, stress, and pressure

The most common theme, with 41 counts, was the higher level of anxiety, stress, and pressure in the test and the lower level in an online quiz. This aspect was frequently described without a reason being given; for example, one participant stated: "I feel more nervous taking the test than online quiz." Responses such as "There is much less pressure during the online tests because there is less weighting on them. Therefore, I am less nervous during the quizzes" further identified the aspect of the assessment that students saw as causing their anxiety. As seen in Table 4, stress and pressure were sometimes mentioned in relation to specific features, such as the relative importance of the test or the difference in time available.

Open book

One participant responded that the major difference between the two forms of assessment was: "Test is closed book online quiz is open book", and another: "Test tests your knowledge/memory better since you cannot rely on the coursebook." This theme is one of the most prominent, with 35 references to the open book nature of the quizzes. The theme includes participant comments on the need to recall or memorise content for the test and the ability to use other resources, like the internet, during the quizzes.

Difficulty

Students commented on the difference in difficulty of the assessments, with a consensus that the test was more difficult. For example: "Quiz is much easier and better." One student indicated the two assessments were difficult for different reasons, and only one person regarded the test as a "challenge".

Importance

Students discussed the difference in the weight of each assessment in relation to their final course grade.
In particular, one participant stated: "Since the test accounts for a greater amount of the overall final grade, it makes it more serious." Another responded: "The test is worth a lot more and is harder than the quizzes. I see the quizzes as more of revision after class for the test and exam." There were further general comments from participants about their perception of the test as more serious than the quiz, and of the online quiz as serving more as revision.

Volume and time

The difference in the volume of content, the number of questions, and the average time per question was identified as a main difference by students. These differences were not always negatively framed: "Test = integrated concepts and questions. Online quiz = single concept is tested each time - each quiz." However, as seen in Table 4, these responses were commonly related to how stress-inducing the assessment was, through comments such as: "Time Limit (sic). Most of time, I feel stressed cos no time to cover all questions."

Table 4
Two-dimensional themes and counts for student responses on the difference between an online quiz and a test

                                  Assessment                          Open   Quiz    Volume
                                  conditions  Frequency  Importance  book   perks   and time  General  Other  Total
The main difference is:           4           7          14          26     11      12        -        6      80
Anxiety, stress, and pressure     1           -          8           6      3       6         17       -      41
Difficulty                        2           1          2           2      -       3         16       -      26
Positive affect                   5           -          -           1      -       -         7        -      13
Total                             12          8          24          35     14      21        40       6      160

Positive affect

Statements about the absence of anxiety were placed within the theme of anxiety, stress, and pressure, but comments pertaining to positive affect were coded separately, since we theoretically perceive them as separate constructs. The comments all related to greater positive affect (e.g., comfort, enjoyment, confidence) in relation to a quiz, such as: "Online quizzes made me relax and it is enjoyable to me." However, one participant regarded the test as being more satisfying.
Assessment conditions, frequency, and quiz perks

Assessment conditions encapsulates comments about the environment or context of the assessment (e.g., at home or invigilated). Only one student expressed a preference for test conditions; another stated:

    I feel much more comfortable sitting the online quiz in my own workspace combined with the fact I have access to my notes in the quizzes. Whereas in the test the fact it is much more of my grade compared to a quiz, while being in test conditions makes it feel much harder.

Most comments about the frequency of the assessment labelled the quizzes as inconvenient or annoying. Quiz perks included features exclusive to the quiz (apart from being open book), such as having multiple attempts and the recency of the content. Two students referenced collaboration as a quiz perk. The online quiz is meant to be done independently, and while there are measures in place to prevent cheating (e.g., randomising question presentation), this highlights a problem for summative assessment that is meant to be completed independently.

Discussion

Explaining emotional differences between the assessments through the CVT

As anticipated through the lens of the CVT (Pekrun, 2006), our results demonstrated that students report higher levels of positive emotions and lower levels of negative emotions in an online quiz than in the test. These findings are in line with recent research on online assessment emotions (e.g., Harley et al., 2020). Our study particularly demonstrated that not only do students report different levels of emotions in each assessment, but many participants identify their subjective emotional experience as the main difference between the two assessments. Table 5 shows which emotions were reported to be significantly higher or lower in an online quiz than the test.
The findings highlighted that students perceived themselves as having more control and being less likely to fail in an online quiz, accounting for the most prominent difference, which occurred in anxiety. In this sense, "I might fail" becomes "I will likely pass". The higher levels of reported pride and lower levels of reported shame in a quiz suggested that a larger number of students felt they were succeeding on a quiz. Finally, we saw a difference in how the students perceived the assessment itself, with an online quiz being positively valued as an activity.

Table 5
An outline of how achievement emotions differ between an online quiz and a test

Object focus          | Appraisals: Value   | Appraisals: Control | Emotion
Outcome/prospective   | Positive (success)  | High                | Anticipatory joy
                      |                     | Medium              | Hope
                      |                     | Low                 | Hopelessness
                      | Negative (failure)  | High                | Anticipatory relief
                      |                     | Medium              | Anxiety
                      |                     | Low                 | Hopelessness
Outcome/retrospective | Positive (success)  | Irrelevant          | Joy
                      |                     | Self                | Pride
                      |                     | Other               | Gratitude
                      | Negative (failure)  | Irrelevant          | Sadness
                      |                     | Self                | Shame
                      |                     | Other               | Anger
Activity              | Positive            | High                | Enjoyment
                      | Negative            | High                | Anger
                      | Positive/Negative   | Low                 | Frustration
                      | None                | High/Low            | Boredom

Note. Higher in an online quiz; lower in an online quiz. Adapted from Pekrun (2006, p. 320).

Contrary to our expectations for the second research question based on previous research (Frenzel et al., 2007; Harley et al., 2020), we found no significant effects of gender on test-related emotions, although this is not the first study to report null findings (e.g., Luo et al., 2016). However, we did find that males reported less quiz-related anxiety and quiz-related shame than females. In most cases, prior achievement did not significantly influence the emotions experienced, which was again surprising given the established relationship between emotions and achievement (e.g., Pekrun et al., 2019; Schukajlow & Rakoczy, 2016). Finally, our study aimed to discover possible explanations for the difference between students’ reported emotional experiences around a quiz and a test.
Our thematic analysis brought to light that many students perceived the main difference between the two types of assessment to be the aspects of a quiz that allow them control over succeeding, such as being open book and the time per question, as well as the difficulty of the assessment. The CVT suggests this results in the higher levels of enjoyment and hope, as well as the lower levels of anxiety and hopelessness, reported in a quiz than in the test. This interpretation is clearly supported in cases where participants identified the open-book nature of a quiz as the reason for the difference in, for example, their anxiety and stress. The participant responses provided evidence that the online quiz difficulty was more manageable and the assessment environment more favourable, which could explain why students positively valued engaging in the quizzes as an activity and, consequently, the difference we discovered in the enjoyment and anger levels between the two assessments. The qualitative feedback on the importance of the test in comparison to an online quiz supported a difference in the intensity of the achievement emotions experienced by the students. It is reasonable to infer that the difference in reported anxiety levels, both from the t-test (d = 0.88) and the qualitative data, was, in part, due to the relative significance of the test in comparison to a quiz. In fact, the results show medium to large effect sizes for the lower levels of all negative emotions in a quiz than in a test. However, if the intensity of emotion experienced due to the relative importance of the assessment were the only factor, we would have expected to see fewer negative emotions but not the greater levels of positive emotions identified in the quantitative analysis: enjoyment (d = 0.31), hope (d = 0.47), and pride (d = 0.31), which were somewhat supported by the qualitative data.
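For reference, the effect sizes reported here are Cohen's d values, the standardised difference between the mean emotion ratings in the two assessments. Assuming the standard formulation (the exact variance-pooling variant used for the paired quiz and test ratings is not specified here):

```latex
d = \frac{\bar{x}_{\text{quiz}} - \bar{x}_{\text{test}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{s_{\text{quiz}}^{2} + s_{\text{test}}^{2}}{2}}
```

By Cohen's conventional benchmarks, values of |d| around 0.2, 0.5, and 0.8 correspond to small, medium, and large effects, which is the sense in which the negative-emotion differences above are described as medium to large.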
Consequently, we suggest that it was a combination of a lower intensity of emotion, more control, and a higher likelihood of success. That is, there are features of online quizzes that impacted the emotions experienced in this assessment which a traditional invigilated environment would not provide. Thereby, for educators designing tertiary assessment aimed at promoting positive and reducing negative emotions, the incorporation of online quizzes provides an example of how to alter students' control over, and experiences of, success, while still maintaining a positive valuation.

Habitualising positive affect
One outcome of this research is the opportunity for educators to use the online quiz intervention to disrupt negative emotions and reaffirm positive emotions related to assessment. Previous literature supports that higher levels of positive emotions and lower levels of negative emotions are associated with higher achievement, motivation, effort, engagement, and attention. Our data showed that, on average, every positive emotion was experienced at a higher level, and every negative emotion at a lower level, in the online quiz than the test. Zeidner (1998) argues, "changing habitualized emotions by breaking up procedural schemes is assumed to be critical for any kind of educational intervention wanting to reduce negative emotions" (as cited in Pekrun, 2006, p. 324). Through completing the online quizzes 31 times throughout the semester, students built up positive, and reduced negative, emotional responses to this type of assessment. Though some annoyance with the frequency of the quizzes was identified through the qualitative analysis, this did not seem to carry over into the emotional experiences within the assessment. Additionally, the frequency of the quizzes has been shown to be one of the beneficial aspects of the intervention. In an earlier study involving online quizzes, Evans et al.
(2021) suggested that the increased student engagement resulting from frequent online assessment led to the accumulation of learners’ mastery experience, thus increasing students’ self-efficacy. Self-efficacy is defined as a person’s judgement of their ability to organise and execute the courses of action required to achieve a desired outcome (Bandura, 1997) and is a well-known predictor of academic achievement (Bandura, 2010; Ferla et al., 2009; Richardson et al., 2012). Research has demonstrated that self-efficacy is associated positively with positive emotions and negatively with negative emotions (Luo et al., 2016; Pekrun et al., 2011). Further, it has been shown that high anxiety can undermine self-efficacy (Usher & Pajares, 2009). In the context of education, experiences of success contribute to developing self-efficacy, while failure impairs it (Usher & Pajares, 2009). The different levels of reported emotions related to an online quiz support the suggestion by Evans et al. (2021) that frequent online quizzes may enable the development of self-efficacy.

Limitations and future research
A limitation of our study was recruiting participants through convenience sampling in a lecture at the end of the semester, when attendance was around 25%. It is possible that this sample consisted of, for example, keener students, and may not accurately represent the population. Therefore, the results should be used cautiously, and future studies should aim to ensure the use of a representative sample. Though previous research using the AEQ has not treated the temporal components of emotions (before-, during-, and after-assessment emotions) as separate constructs, the theoretical distinction between prospective outcome, retrospective outcome, and activity object focus suggests a need for such an investigation. For this study, there were not enough items to separate into two scales while remaining reliable measures.
A final limitation of the research is that we did not control for student emotions prior to the quizzes being introduced. It is possible that regular engagement with quizzes featuring the same style of questions and content as expected on the test altered the emotions experienced during the test through increased confidence and familiarity, thereby reducing, for example, anxiety levels. We theorise that assessment-related emotions and self-efficacy could become habitualised in one form of summative assessment and influence the affective response to another. This is a question for future research, which could focus on the analysis of repeated measures by observing assessment-related emotions at multiple time points during a semester.

A critical outcome of this research is not only a glimpse into what students find problematic and beneficial about assessment, but also a finding that contradicts the intuitive assumption that frequent assessment is detrimental to students’ wellbeing as a result of additional stress and negative assessment-related emotions. This finding underscores the importance of future directions of inquiry that examine the complex interaction between emotional, cognitive, and motivational aspects of learning. Despite the conclusive comparisons, the suggestion in this paper is not to abandon traditional testing in favour of frequent online quizzes. Rather, we suggest that as technology and research progress, so must the ways we implement and use assessment in higher education. Future research can focus on the investigation of non-traditional forms of assessment already in use. For example, some universities utilise take-home tests, and it would be valuable to see how this practice impacts student achievement emotions.
Future research should also continue to investigate student assessment emotions at a more individual level, for example, by collecting data on personality traits, which has been addressed in the case of anxiety by Deshler et al. (2019).

Conclusion
Emotions are inseparable from student learning and achievement, to the extent that students themselves identify them as a distinguishing feature of different assessments, as we have shown in this study. We have argued in this paper that considering student emotions when designing assessment is necessary. Our study has demonstrated that, by utilising technological advances, a different form of summative assessment can be introduced into a university mathematics course that stimulates higher levels of positive emotions and lower levels of negative emotions than a traditional invigilated test. This results from increasing students' control and perceived likelihood of success, maintaining a positive valuation of the activity, and decreasing emotional intensity through lower grade weighting. Ultimately, when approaching future course design, we must consider how different forms of assessment can be combined to optimise the student experience. This study has demonstrated the potential for frequent online assessment to interrupt habitualised negative assessment-related emotions, build positive assessment experiences, and, in turn, change how students view and approach assessment by improving their affective experiences.

References
Ahmed, W., van der Werf, G., Kuyper, H., & Minnaert, A. (2013). Emotions, self-regulated learning, and achievement in mathematics: A growth curve analysis. Journal of Educational Psychology, 105(1), 150–161. https://doi.org/10.1037/a0030160
Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman.
Bandura, A. (2010). Self-efficacy. In I. B. Weiner & W. E. Craighead (Eds.), The Corsini encyclopedia of psychology (Vol. 4, pp. 1-3). Wiley.
https://doi.org/10.1002/9780470479216.corpsy0836
Bornstein, M. H., Jager, J., & Putnick, D. L. (2013). Sampling in developmental science: Situations, shortcomings, solutions, and standards. Developmental Review, 33(4), 357–370. https://doi.org/10.1016/j.dr.2013.08.003
Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology: Vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 57-71). American Psychological Association. https://doi.org/10.1037/13620-004
Brown, M. G. (2016). Blended instructional practice: A review of the empirical literature on instructors' adoption and use of online tools in face-to-face teaching. The Internet and Higher Education, 31, 1-10. https://doi.org/10.1016/j.iheduc.2016.05.001
Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative and summative assessment on test anxiety and performance. Journal of Technology, Learning, and Assessment, 4(1), 4-30. https://ejournals.bc.edu/index.php/jtla/article/view/1648
Crossman, J. (2007). The role of relationships and emotions in student perceptions of learning and assessment. Higher Education Research and Development, 26(3), 313-327. https://doi.org/10.1080/07294360701494328
Daniels, L. M., & Gierl, M. J. (2017). The impact of immediate test score reporting on university students' achievement emotions in the context of computer-based multiple-choice exams. Learning and Instruction, 52, 27-35.
https://doi.org/10.1016/j.learninstruc.2017.04.001
Dermo, J. (2009). e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. British Journal of Educational Technology, 40(2), 203-214. https://doi.org/10.1111/j.1467-8535.2008.00915.x
Deshler, J., Fuller, E., & Darrah, M. (2019). Affective states of university developmental mathematics students and their impact on self-efficacy, belonging, career identity, success and persistence. International Journal of Research in Undergraduate Mathematics Education, 5(3), 337-358. https://doi.org/10.1007/s40753-019-00096-3
Di Meo, F., & Martí-Ballester, C. (2020). Effects of the perceptions of online quizzes and electronic devices on student performance. Australasian Journal of Educational Technology, 36(1), 111-125. https://doi.org/10.14742/ajet.4888
Engelbrecht, J., & Harding, A. (2004). Combining online and paper assessment in a web-based course in undergraduate mathematics. Journal of Computers in Mathematics and Science Teaching, 23(3), 217-231. https://www.learntechlib.org/primary/p/11487/
Ercikan, K., & Roth, W. (2009). Rethinking the relationship between the general and the particular. In K. Ercikan & W. Roth (Eds.), Generalizing from educational research: Beyond qualitative and quantitative polarization (pp. 207-261). Routledge. https://doi.org/10.4324/9780203885376
Evans, T., Kensington-Miller, B., & Novak, J. (2021). Effectiveness, efficiency, engagement: Mapping the impact of pre-lecture quizzes on educational exchange.
Australasian Journal of Educational Technology, 37(1), 163-177. https://doi.org/10.14742/ajet.6258
Ferla, J., Valcke, M., & Cai, Y. (2009). Academic self-efficacy and academic self-concept: Reconsidering structural relationships. Learning and Individual Differences, 19(4), 499-505. https://doi.org/10.1016/j.lindif.2009.05.004
Frenzel, A. C., Pekrun, R., & Goetz, T. (2007). Girls and mathematics - A "hopeless" issue? A control-value approach to gender differences in emotions towards mathematics. European Journal of Psychology of Education, 22(4), 497-514. https://doi.org/10.1007/BF03173468
Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2333-2351. https://doi.org/10.1016/j.compedu.2011.06.004
Harley, J. M., Lou, N. M., Liu, Y., Cutumisu, M., Daniels, L. M., Leighton, J. P., & Nadon, L. (2020). University students’ negative emotions in a computer-based examination: The roles of trait test-emotion, prior test-taking methods and gender. Assessment & Evaluation in Higher Education, 1-17. https://doi.org/10.1080/02602938.2020.1836123
Heinrich, E., Henderson, M., & Dalgarno, B. (2016). Editorial: From tinkering to systemic change. Australasian Journal of Educational Technology, 32(2), i-iii. https://doi.org/10.14742/ajet.3219
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26. https://doi.org/10.3102/0013189X033007014
Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research & Development, 37(1), 58-71. https://doi.org/10.1080/07294360.2017.1344197
Lincoln, D., & Kearney, M. (2019). The role of emotions in higher education teaching and learning processes. Studies in Higher Education, 44(10), 1707-1708. https://doi.org/10.1080/03075079.2019.1665301
Luo, W., Ng, P.
T., Lee, K., & Aye, K. M. (2016). Self-efficacy, value, and achievement emotions as mediators between parenting practice and homework behaviour: A control-value theory perspective. Learning and Individual Differences, 50, 275-282. https://doi.org/10.1016/j.lindif.2016.07.017
Mega, C., Ronconi, L., & De Beni, R. (2014). What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. Journal of Educational Psychology, 106(1), 121-131. https://doi.org/10.1037/a0033546
Peixoto, F., Sanches, C., Mata, L., & Monteiro, V. (2017). "How do you feel about math?": Relationships between competence and value appraisals, achievement emotions and academic achievement. European Journal of Psychology of Education, 32(3), 385-405. https://doi.org/10.1007/s10212-016-0299-4
Pekrun, R. (2000). A social-cognitive, control-value theory of achievement emotions. In J. Heckhausen (Ed.), Motivational psychology of human development (pp. 143-163). Elsevier. https://doi.org/10.1016/S0166-4115(00)80010-2
Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18(4), 315-341.
https://doi.org/10.1007/s10648-006-9029-9
Pekrun, R., Goetz, T., Frenzel, A. C., Barchfeld, P., & Perry, R. P. (2011). Measuring emotions in students’ learning and performance: The Achievement Emotions Questionnaire (AEQ). Contemporary Educational Psychology, 36(1), 36-48. https://doi.org/10.1016/j.cedpsych.2010.10.002
Pekrun, R., Lichtenfeld, S., Marsh, H. W., Murayama, K., & Goetz, T. (2017). Achievement emotions and academic performance: Longitudinal models of reciprocal effects. Child Development, 88(5), 1653-1670. https://doi.org/10.1111/cdev.12704
Pekrun, R., Murayama, K., Marsh, H. W., Goetz, T., & Frenzel, A. C. (2019). Happy fish in little ponds: Testing a reference group model of achievement and emotion. Journal of Personality and Social Psychology, 117(1), 166-185. https://doi.org/10.1037/pspp0000230
Pekrun, R., & Stephens, E. J. (2010). Achievement emotions in higher education. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 25, pp. 257-306). Springer. https://doi.org/10.1007/978-90-481-8598-6_7
Peterson, E. R., Brown, G. T., & Jun, M. C. (2015).
Achievement emotions in higher education: A diary study exploring emotions across an assessment event. Contemporary Educational Psychology, 42, 82-96. https://doi.org/10.1016/j.cedpsych.2015.05.002
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students' academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353-387. https://doi.org/10.1037/a0026838
Schukajlow, S., & Rakoczy, K. (2016). The power of emotions: Can enjoyment and boredom explain the impact of individual preconditions and teaching methods on interest and performance in mathematics? Learning and Instruction, 44, 117-127. https://doi.org/10.1016/j.learninstruc.2016.05.001
Sotardi, S. A., Bosch, J., & Brogt, E. (2020). Multidimensional influences of anxiety and assessment type on task performance. Social Psychology of Education, 23(2), 499-522. https://doi.org/10.1007/s11218-019-09508-3
Steinmayr, R., Crede, J., McElvany, N., & Wirthwein, L. (2016). Subjective well-being, test anxiety, academic achievement: Testing for reciprocal effects. Frontiers in Psychology, 6, 1994. https://doi.org/10.3389/fpsyg.2015.01994
Stowell, J. R., Allan, W. D., & Teoro, S. M. (2012). Emotions experienced by students taking online and classroom quizzes. Journal of Educational Computing Research, 47(1), 93-106. https://doi.org/10.2190/EC.47.1.e
Stowell, J. R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171. https://doi.org/10.2190/EC.42.2.b
Usher, E. L., & Pajares, F. (2009). Sources of self-efficacy in mathematics: A validation study. Contemporary Educational Psychology, 34(1), 89-101. https://doi.org/10.1016/j.cedpsych.2008.09.002
Xing, W., Tang, H., & Pei, B. (2019). Beyond positive and negative emotions: Looking into the role of achievement emotions in discussion forums of MOOCs. The Internet and Higher Education, 43, 100690.
https://doi.org/10.1016/j.iheduc.2019.100690
Zeidner, M. (1998). Test anxiety: The state of the art. Springer. https://doi.org/10.1007/b109548

Corresponding author: Kaitlin Riegel, krie235@aucklanduni.ac.nz

Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under a Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.

Please cite as: Riegel, K., & Evans, T. (2021). Student achievement emotions: Examining the role of frequent online assessment. Australasian Journal of Educational Technology, 37(6), 75-87. https://doi.org/10.14742/ajet.6516