Australasian Journal of Educational Technology, 2020, 36(6).

Students’ sense-making of personalised feedback based on learning analytics

Lisa-Angelique Lim, Shane Dawson
University of South Australia

Dragan Gašević
Monash University

Srećko Joksimović, Anthea Fudge, Abelardo Pardo, Sheridan Gentili
University of South Australia

Although technological advances have brought about new opportunities for scaling feedback to students, there remain challenges in how such feedback is presented and interpreted. There is a need to better understand how students make sense of such feedback to adapt self-regulated learning processes. This study examined students’ sense-making of learning analytics–based personalised feedback across four courses. Results from a combination of thematic analysis and epistemic network analysis show an association between student perceptions of their personalised feedback and how these map to subsequent self-described self-regulated learning processes. Most notably, the results indicate that personalised feedback, elaborated by personal messages from course instructors, helps students refine or strengthen important forethought processes of goal-setting, as well as reduce procrastination. The results highlight the need for instructors to increase the dialogic element in personalised feedback in order to reduce defensive reactions from students who hold to their own learning strategies. This approach may prompt reflection on the suitability of students’ current learning strategies and achievement of associated learning goals.

Implications for practice or policy:
• Personalised feedback based on learning analytics should be informed by an understanding of students’ self-regulated learning.
• Instructors implementing personalised feedback should align this closely with the course curriculum.
• Instructors implementing personalised feedback in their courses should consider the relational element of feedback by using a positive tone.
• Personalised feedback can be further enhanced by increasing the dialogic element and by including more information about learning strategies.

Keywords: learning analytics, feedback, self-regulated learning (SRL), student perspectives, epistemic network analysis

Introduction

Learning in a university has always presented complex challenges, especially for first-year students trying to navigate a rigorous academic culture that requires them to acquire new knowledge and skills independently, manage competing deadlines and prepare for assessments across multiple courses. To succeed in such an environment, it is critical for students to engage in self-regulated learning (SRL).

With the development of online learning technologies in the 21st century, the landscape of higher education has been transformed. Today, learning happens across multiple platforms, with many courses favouring blended approaches to support students’ learning in both online and physical spaces (Allen & Seaman, 2013). Many university courses now comprise elements of online learning to different degrees. As learning takes place increasingly outside of formal settings, it is even more critical for students to be able to regulate their own learning; that is, to maintain sufficient motivation, organise their resources and operate suitable learning strategies, in order to succeed in such an environment. However, students face many challenges in learning how to self-regulate their learning to optimise study outcomes (Hadwin et al., 2019). While the incorporation of feedback has been shown to be central in the development of student SRL proficiency (Butler & Winne, 1995), there remain challenges in how such process feedback can be personalised and provided at scale.
Over the past few decades, the vast uptake of digital technology in education has provided for more flexible teaching practices while also generating new modes of data about students’ learning activities and strategies. The technologies and associated trace data can be utilised to address the challenges related to the provision of feedback at scale and aid the development of SRL. This makes learning analytics (LA) an attractive option for feedback provision in contemporary higher education settings. To date, research has seen a steady stream of LA-based feedback interventions being developed; in particular, learning analytics dashboards (LADs) have increased in popularity over the last decade (Bodily & Verbert, 2017; Jivet et al., 2018). While LADs are fully automated systems, other instructor-mediated LA-based feedback interventions have also featured in the literature, such as the Student Relationship Engagement System (Liu et al., 2017) and OnTask (Pardo et al., 2018). The goal of these and other LA-based feedback systems is to develop students’ SRL by increasing awareness of their learning (Pardo et al., 2017). Feedback targeted at SRL is deemed to be the most effective (Butler & Winne, 1995; Hattie & Timperley, 2007; Nicol & MacFarlane-Dick, 2006); hence LA-based feedback presents a promising solution to the challenge of scaling personalised and timely feedback.

Despite the growing number of LA-based systems designed to provide feedback to students, there has been minimal work investigating how students perceive and act on feedback to alter their behaviour to optimise their learning outcomes (Ferguson et al., 2016; Jivet et al., 2018). In short, if students do not understand the feedback they receive, they are not well positioned to adopt appropriate actions that can better support their learning (Ryan et al., 2019). Students’ recipience of feedback is critical for improving SRL proficiency.
Feedback is only effective when students make sense of it and are willing to act on it (Price et al., 2010; Winstone et al., 2017). This underscores the need to examine students’ sense-making of feedback as pivotal to its effectiveness (Carless, 2016). Presently, little is known about students’ sense-making of personalised feedback derived from LA, nor how students adapt SRL processes in response (Jivet et al., 2018; Klein et al., 2019). Such insights can aid the design of LA-based feedback technologies to address the noted deficiencies in student recipience and impact on learning (Wise, 2014).

In this paper, we position sense-making as central to feedback. We draw on contemporary understandings of feedback as a dialogical process, “whereby learners make sense of information from various sources and use it to enhance their work or learning strategies” (Carless, 2016, p. 1). This process firstly involves the communication of information about students’ processes or products of learning and secondly, how students interpret that information in relation to their learning progress. This sense-making phase is essential to the feedback process. As noted by Henderson et al. (2019), “unless learners can identify, interpret and act upon [their feedback information] then the feedback process is thwarted” (p. 21). Therefore, sense-making sits at the intersection of interpretation and action. It is a process of comprehension, serving as a “springboard to action” (Weick et al., 2005, p. 409).

Student online learning data are increasingly used in LA research and as a scalable form of feedback. The number of LA-based systems being created continues to grow; however, comparatively little research demonstrates how these new modes of feedback are interpreted and actioned by students.
In view of the research gap around students’ sense-making of LA-based feedback, the present study aimed to understand the interplay between students’ perceptions of their personalised LA-based feedback and their self-described enactment of it. This understanding will bear implications for the design and implementation of this innovative approach to feedback.

Conceptual framework

This study was framed by two key concepts that are fundamental to effective feedback processes, namely student perceptions of feedback, and SRL. This section reviews the literature on these two concepts and discusses their significance in the feedback domain.

Student perceptions of feedback

Research has found that students’ engagement with feedback, that is, their willingness to receive and enact feedback, is mediated by their perceptions of it. In a widely cited synthesis of feedback research, Winstone et al. (2017) proposed four groups of interpersonal communication variables influencing the way students receive, understand and act on their feedback; these are characteristics of the receiver, sender, message and context. Characteristics of the receiver refers to the motivation and attitudes of the student towards feedback, as well as skills and self-efficacy to enact the feedback. Characteristics of the sender refers to students’ perceptions of their feedback givers, typically, the instructor. Research findings indicate that when students trust the accuracy of the feedback and perceive their instructors to care about their learning, then they will be more likely to engage with their feedback (Klein et al., 2019; Winstone et al., 2017). Characteristics of the message includes students’ perceptions of the quality, focus, amount and tone of the feedback.
Engagement with feedback tends to be fostered when progress is reported, when there is sufficient detail, when there is a focus on future tasks (rather than what has been done in the past), and when the tone is positive. Characteristics of the context captures features of the teaching context in terms of whether students were trained in feedback use, assessment and curriculum design, and timing of the feedback. Engagement with feedback is more likely when students have been trained in feedback use, where the curriculum comprises progressive assessment and where there is regular, rather than one-off, feedback being communicated. The present study examined students’ perceptions of their feedback; therefore, we focused on the characteristics of the sender, message and context.

Given that LA-based feedback is still relatively new (Pardo et al., 2017), few studies have systematically examined students’ perceptions of this mode of feedback. Perception studies have mainly indicated students’ satisfaction with their feedback (e.g., Khan & Pardo, 2016; Pardo et al., 2019) but not whether satisfaction influenced students’ enactment of feedback or further efforts to learn. More recently, Lim et al. (2020) carried out a study analysing students’ perceptions of feedback message, sender and context as described in Winstone et al.’s (2017) framework. The findings indicate that students held mostly positive perceptions of their personalised feedback, especially the quality, task focus and the instructor’s care towards their learning. This exploratory sense-making analysis revealed that students’ perception of their feedback as providing useful information about their progress was accompanied by increased motivation to learn. However, negative perceptions of feedback quality – for example, that feedback did not accurately reflect effort – were accompanied by feelings of frustration, suggesting an unwillingness to enact their feedback.
While the study provided a base of research in how students perceived LA-based feedback, it did not provide a complete picture of students’ sense-making, as it did not address how students enacted personalised feedback through subsequent adaptations of SRL. Given that LA-based feedback intends to target students’ SRL (Pardo et al., 2017), the present study continued this line of sense-making research to include enactment of feedback in terms of SRL.

SRL and feedback

SRL is a commonly referenced construct in LA and feedback research (Viberg et al., 2020). Broadly, it refers to the control that students have over their thoughts, feelings and actions towards the attainment of learning goals. Although different models of SRL have been proposed (e.g., Winne & Hadwin, 1998; Zimmerman, 2000), they share a common understanding of the learner as an active agent in their learning and of learning as a staged, iterative process. The present study adopted the sociocognitive SRL model (Zimmerman, 2000; Zimmerman & Moylan, 2009) as a theoretical lens to frame students’ discourse about their learning processes in response to feedback. The model discusses the SRL processes that happen at a task-specific level, while portraying them at a macro level (Efklides, 2011). This broadness is particularly suited for analysing students’ descriptions of their learning derived from their discourse.

Zimmerman (2000) theorised SRL as a three-phase cyclical process. The forethought phase involves processes of task analysis, goal-setting and strategy selection to achieve those goals. These processes are further influenced by personal motivational beliefs. Forethought processes, especially goal-setting, set the stage for effective self-regulation (Cosnefroy et al., 2018; McCardle et al., 2017), because goal-setting automatically generates a feedback loop for self-evaluation at the end of a learning session.
During the performance phase, self-control processes, comprising volitional and learning strategies, enable the learner to focus and execute the forethought phase productively. In effective self-regulation, learners’ ongoing self-observation processes provide metacognitive awareness of progress, critical for informing their adaptation of learning strategies during this phase. Self-observation provides a feedback loop for the reflection phase. In this phase, the learner self-evaluates progress, with input from self-observation in the performance phase and the set goal(s) during the forethought phase. Self-judgement is an evaluation of progress, while self-reaction is an emotional response to the judgement, in the form of self-satisfaction, or adaptive or defensive reactions. Feedback arising from processes in the self-reflection phase serves as input for the next iteration of SRL, by influencing motivation and task analysis processes.

Although LA feedback interventions are aimed at supporting the development of students’ SRL processes (Pardo et al., 2017), evidence of how these feedback interventions inform students’ SRL processes is still needed (Ryan et al., 2019). Most recent empirical studies have indicated that personalised, LA-based feedback yields observable, positive benefits for students’ SRL in terms of learning strategies and performance (Lim et al., 2019; Matcha et al., 2019), as well as time management (Ahmad Uzir et al., 2019). However, what is less known is how students make sense of their received feedback, that is, how they translate their feedback information into decisions for learning. For example, Lim et al. (2019) found positive impact of personalised feedback on students’ SRL operations as well as performance. However, the performance benefits did not extend to lower-achieving students as defined by their programme entry scores.
This nuanced finding underscores the need to investigate how students make sense of LA-based feedback to adapt SRL. In particular, there is a need to know how such feedback updates less visible internal processes, such as forethought processes of motivation or goal-setting, as well as reflective processes of self-evaluation. Therefore, Zimmerman’s (2000) SRL model presents an appropriate lens to examine students’ sense-making of personalised feedback.

In considering students’ sense-making of their feedback, it is important to take into account the learning context. The curriculum design plays a significant role in students’ SRL and, accordingly, their academic performance (Matcha, Gašević, et al., 2020). However, to date research is limited in exploring how students make sense of and enact their feedback in different contexts.

To sum up, Figure 1 illustrates the conceptual framework of this study. The figure shows the two concepts of feedback perceptions and SRL, with their respective elements as discussed in this section, comprising students’ sense-making of personalised feedback. Drawing from the literature review, the black lines in Figure 1 indicate a possible association between these two concepts in students’ sense-making.

Figure 1. Conceptual framework for the examination of students’ sense-making of personalised feedback

Purpose of the study

The present study aimed to explore students’ sense-making of personalised, LA-based feedback in four large-enrolment undergraduate courses. Sense-making in this study was operationalised in terms of Winstone et al.’s (2017) perceptions of the interpersonal communication factors of feedback and Zimmerman’s (2000) theory of SRL. From the above review, it is evident that students hold perceptions of their received feedback, which influence their motivation to continue learning efforts, as well as their willingness to engage with their feedback.
The review also discussed the role of LA-based feedback in affecting specific SRL processes, but noted that to date, empirical evidence is lacking to show how students make sense of this feedback to update their internal forethought or reflection processes. The study extends the findings of Lim et al. (2020) by studying how students make sense of personalised LA-based feedback to modify their SRL processes. This study was framed by the following research questions:
• RQ1. How do students describe their SRL adaptations in their sense-making of personalised feedback?
• RQ2. Does students’ sense-making of personalised feedback—that is, the association between their perceptions and self-described SRL adaptations—differ across contexts, and if so, how?

Method

Contexts

The study was carried out over 2017 and 2018 in four courses at two research-intensive universities in Australia. Three of the courses (A, B and C) were situated in Institution 1; Course D was situated in Institution 2. Table 1 summarises the features of these four courses and the personalised feedback provided therein. Although all four courses comprised a large student cohort, the courses were taught across different disciplines. Courses A, B and C adopted a blended learning teaching model (on-campus teaching supplemented by online materials), while Course D used a flipped classroom approach. For Course D, this involved students completing specific learning tasks (watching a topic video and completing an associated quiz) each week prior to attending lectures. Courses A, C and D were carried out over the typical duration of 13 weeks; Course B had a shorter duration of 7 weeks. Although Courses A, B and D were first-year courses, Course C was a foundational course in a university pathway programme involving non-traditional students.
These students tend to be from equity groups with limited exposure to higher education in comparison with traditional university students.

Using OnTask for personalised feedback

Instructors in the four courses used LA-based software called OnTask (Pardo et al., 2018) for personalising feedback to all students. OnTask automates the collection of learner data from various sources (e.g., learning management system activity and engagement, assessment and attendance) to enable instructors to generate and send personalised feedback to all students in their course. In this study, each instructor selected relevant indicators of engagement specific to their course. The collected data informed the design of personalised feedback messages (see Table 1). Instructors made their own decisions on the frequency of the feedback based on their own contexts, as well as on additional content to the feedback, such as motivational messages (Courses B and C). Therefore, the content, style and frequency of the feedback were different for each course.

Participants and procedure

Ethics approval for the study was obtained, prior to any data collection, from the lead author’s institution. Students from the four courses were invited to take part in course-based focus groups, with an incentive of an AUD $20 gift card. A total of 21 focus groups were conducted, with 86 participants across the four courses. The size of the groups varied from two to 10, due to the voluntary participation and students’ availability after classes. Each focus group session took between 30 and 60 minutes. To ensure trustworthiness, more than one focus group was engaged for each course, thereby allowing for a fuller understanding of the issues examined (Morgan, 1997). To protect students’ privacy, no information about participants was recorded apart from gender. Two weeks before the final assessment, students were invited to take part in the focus group discussions. Students were informed that discussions were recorded.
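The indicator-to-message workflow described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of OnTask, not its actual API: the indicator names, the rule format and the build_message helper are all invented for this example.

```python
# Hypothetical sketch of rule-based personalised feedback assembly,
# in the spirit of OnTask (NOT its real API). Each rule pairs a
# condition on a student's engagement indicators with a message snippet.

def build_message(student, rules, greeting="Hi {name},"):
    """Assemble a personalised email body by evaluating each rule
    against one student's indicator record."""
    parts = [greeting.format(name=student["name"])]
    for condition, snippet in rules:
        if condition(student):
            parts.append(snippet.format(**student))
    return "\n".join(parts)

# Illustrative indicators and rules (names are assumptions, not OnTask's).
rules = [
    (lambda s: s["logins"] < 2,
     "We noticed few visits to the course site this week; the Week {week} materials are now available."),
    (lambda s: not s["quiz_passed"],
     "Your last quiz suggests Topic {week} needs another look; the e-book module may help."),
    (lambda s: s["quiz_passed"],
     "Well done on passing this week's quiz. Keep up the momentum."),
]

student = {"name": "Alex", "logins": 1, "quiz_passed": False, "week": 4}
print(build_message(student, rules))
```

In practice, instructors author the conditions and snippets themselves, which is how the content, style and frequency of feedback came to differ across the four courses.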
In order to facilitate comparison across contexts, a semi-structured interview schedule was used. In line with the study’s aim, questions centred around students’ perceptions and enactment of their personalised feedback. The following guiding questions were used for all focus groups:

1. Please tell me more about the feedback emails that you received from your course instructor.
2. Did you follow the recommended actions in the feedback? Why or why not?
3. Did you find that the emails motivated you to study in the course, or not? Why?
4. How did the emails affect the way you learn in this course?
5. Overall, what did you like about this method of providing feedback and support?

Table 1
Course features and personalised, data-informed process feedback messages in four courses

Course A – HS (242 students); blended; SA: 2 quizzes (20%), practicals (25%), final exam (55%); 2 feedback emails. Metrics used: logins to course site; e-book assignment completion; tutorial & workshop attendance; quiz marks (pass/fail). Additional message content: tips for boosting learning skills; offer of assistance from instructor.

Course B – Arch (83 students); blended; SA: assignments (50%), final project (50%); 3 feedback emails. Metrics used: access to assessment-related links; access to course resources; quiz marks (pass/fail). Additional message content: motivational messages; reminders; offer of assistance from instructor.

Course C – FS (215 students); blended; SA: quiz (15%), progressive assessments (20% + 20% + 35%), participation (10%); 13 feedback emails. Metrics used: logins to course site; access to assessment-related links; access to course resources; tutorial attendance; Assessment 1 & 2 marks. Additional message content: motivational messages; reminders; tips for boosting learning skills; offer of assistance from instructor.

Course D – CE (601 students); flipped; SA: midterm exam (20%), weekly online prep (20%), project (20%), final exam (20%); 8 feedback emails. Metrics used: completion of weekly online prep; outcome of weekly video quiz; midterm grades. Additional message content: recommended strategy to improve outcome.

Note.
SA – summative assessment; HS – Health Science; Arch – Architecture; FS – Foundation Studies; CE – Computer Engineering

Data analysis

All focus group sessions were transcribed verbatim and imported into NVivo version 12 for analysis. The transcript for each focus group was segmented into meaning units; that is, phrases, sentences or series of sentences, each pertaining to feedback perceptions or SRL processes. With regard to feedback perceptions, we used the 11 codes generated from Lim et al. (2020), based on Winstone et al.’s (2017) framework. To understand how students adapted their SRL in response to their personalised feedback (RQ1), individual meaning units were coded in alignment with the subprocesses in Zimmerman’s (2000) SRL model. To ensure coding reliability, a second coder classified 40% of the meaning units in a first attempt, resulting in a moderate level of agreement (Cohen’s kappa = .51). The codes were refined and a second attempt undertaken, resulting in a high level of agreement (Cohen’s kappa = .93). Where there were disagreements, these were negotiated in order to reach consensus. Table 2 provides the list of all 20 codes used in the analysis.

To explore associations between students’ perceptions of feedback and their adaptations of SRL, and how these differed by context, we employed epistemic network analysis (ENA; Shaffer & Ruis, 2017). ENA is a graph-based method for analysing associations between coded data (Sinclair et al., 2019). Connections between codes were discovered through their co-occurrence within the same stanza, that is, the entire transcript for each focus group. We chose this method to analyse the data because of its ability to quantify and visualise networks of relationships between students’ perceptions of their personalised feedback and their adaptations to SRL processes.
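The co-occurrence counting at the heart of ENA can be sketched as follows, before ENA’s subsequent normalisation and dimensional reduction steps. This is a minimal illustration: the code labels follow Table 2, but the stanza contents are invented for the example.

```python
# Minimal sketch of the stanza-level co-occurrence counting that
# underlies ENA edge weights: codes appearing in the same stanza
# (here, one focus-group transcript) add weight to the edge
# between them. Stanza contents below are invented for illustration.
from itertools import combinations
from collections import Counter

def edge_weights(stanzas):
    """For every unordered pair of codes, count the number of
    stanzas in which both codes appear."""
    weights = Counter()
    for codes in stanzas:
        for a, b in combinations(sorted(set(codes)), 2):
            weights[(a, b)] += 1
    return weights

stanzas = [
    ["PQUALp", "Ftags", "Fmotp"],   # focus group 1
    ["PQUALp", "Ftags"],            # focus group 2
    ["PTONEn", "Rsrd", "PQUALp"],   # focus group 3
]
w = edge_weights(stanzas)
print(w[("Ftags", "PQUALp")])  # -> 2: the pair co-occurs in two stanzas
```

In ENA proper, such per-unit co-occurrence vectors are normalised and projected by singular value decomposition, which yields the SVD1/SVD2 dimensions used to plot the networks.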
The protocol for ENA (Shaffer & Ruis, 2017) was followed for the study. Co-occurrences at the level of the student were analysed, then aggregated to course level. The dimensions that explained the highest value of variability (SVD1 and SVD2) were then used to present the epistemic networks and perform further analyses. The result of ENA is a network graph that visualises relationships between codes; codes are represented as nodes, and relationships as edges. For instance, one node might represent the code Ftags (goal-setting) and another the code PQUALp (positive perceptions of feedback quality); the edge between them would indicate that students in this focus group tended to express positive perceptions of feedback quality together with goal-setting. Node size indicates frequency of occurrence of the code, while thickness of edges indicates the strength of the relationship. In our analysis, each point in the sense-making space presents the summary of each student’s network within the space (Shaffer & Ruis, 2017). Thus, ENA quantifies connections between coded entities and creates models of these relationships.

Results

RQ1. How do students describe their SRL adaptations in their sense-making of personalised feedback?

A total of 209 meaning units were coded into themes relating to Zimmerman and Moylan’s (2009) SRL model. Figure 2 shows the presence of each code and its distribution across the four courses. Forethought processes (Fmotp, Ftags, Ftaps and Fmotn) comprised half of all theme groupings (50%), followed by the reflection processes (Rse, Rsrd, Rsrp, Rsrn) at 31% and the performance processes (Pctrl) at 19%. The most frequently observed theme in the forethought phase was enhanced motivation (Fmotp, 27%). As illustrated in Exemplar 12 (Table 2), students felt more motivated to perform better in response to feedback. This theme was present in all courses.
The second most frequently expressed theme was a reduction in procrastination habits (Pctrl, 19%). The feedback messages served as a reminder for students about tasks that were due, bringing these to students’ attention and prompting subsequent action (Exemplar 16, Table 2). The third most frequently expressed theme was an enhancement of students’ goal-setting (Ftags, 14%). Students used information in the feedback to set proximal goals for themselves (Exemplar 14, Table 2). Examples of some goals were to set up a planner of due dates (Course C), to keep up to date with the course materials each week (Course A), to complete the associated e-book module for the week (Course A), and to complete the weekly preparation activities (Course D). There were also goals to study a specific topic ahead of the next week, or to revise topics that needed review, according to the feedback (Course D). Figure 2 shows that the Pctrl and Ftags codes were particularly evident for Courses A and C.

Table 2
List of codes to classify perceptions of, and SRL adaptations to, personalised feedback

Student perceptions

Message: Quality
1. PQUALp – Describes the quality of feedback content in a positive way. Exemplar: “If you’re confused and stuck, it kind of helps you stay on track.”
2. PQUALn – Describes the quality of feedback content in a negative way. Exemplar: “But I’m not sure about the accuracy of this, because one time I could not answer all the question the first time… but the feedback came like you have done well in that thing.”

Message: Task
3. PTASKp – Describes the feedback as being “task-focused” in a positive way. Exemplar: “Helped me remember I had something to do especially that week, in case I forgot I had an assignment due.”
4. PTASKn – Describes the feedback as being “task-focused” in a negative way. Exemplar: “It was a kind of oh you're not doing this, not doing this.”

Message: Tone
5. PTONEp – Describes the tone of the feedback in a positive way. Exemplar: “It's sort of like a friendly reminder.”
6. PTONEn – Describes the tone of the feedback in a negative way. Exemplar: “it was automated, it's not someone genuinely emailing me, saying hey, well done.”

Message: Amount
7. PAMTn – Describes the amount of feedback given in a negative way, as being too long, too much or too little. Exemplar: “I find it a little too wordy that’s why I skim through it.”

Sender: Care
8. PATTp – Describes the attitude of the sender in a positive way. Exemplar: “Just I feel supported, that someone actually cares for you to do well and get into your undergrad with a good grade.”

Context: Time
9. PTIMEp – Describes feedback as being timely. Exemplar: “I think it’s fine because it usually comes after a quiz or something like that, to encourage you on to the next one.”
10. PTIMEn – Describes feedback as not being timely. Exemplar: “I think lots of my emails came in late, because he would say something, but I already did it.”

Context: Curriculum
11. PCurr – Describes the feedback emails as being integrated with the course. Exemplar: “I guess the feedback is… always suggesting that you go back and revise and the course itself is well-structured so it supports the effect of the feedback.”

Students’ SRL adaptations

Forethought: Motivation
12. Fmotp – Feedback increases motivation to learn in the course. Exemplar: “I think they did motivate me because I didn’t want to fail anything.”
13. Fmotn – Feedback reduces motivation to learn in the course. Exemplar: “Facilitator: If it tells you that you have to do all this then it makes you feel not as motivated? R40: I’m not as motivated I guess.”

Forethought: Goal-setting
14. Ftags – Feedback enhances students' goal-setting; the goal might be proximal (immediate task) or distal (course grade). Exemplar: “The email shows me what I need to learn for the upcoming week so that I don’t fall behind.”

Forethought: Task analysis
15. Ftaps – Feedback enhances students' planned strategies for study. Exemplar: “Because I’ve got a bit of a habit of just looking at what’s on the learnonline site, and not fully reading it properly. But from the reminder emails I’ve seen that I need to do these things, and I would have missed that if I was just skimming over the learnonline site. It’s been reminded in the emails that I need to do them.”

Performance: Self-control
16. Pctrl – Feedback was associated with general task strategies for self-control during learning, e.g., reducing procrastination or improving time management. Exemplar: “When I got the emails, I was kind of procrastinating my assignment, it gave me a reminder to do my work. So I kind of sat down and did my work after I got the email.”

Reflection: Self-evaluation
17. Rse – Feedback makes students reflect on their learning. Exemplar: “…it still makes you think whether you're up to date or not.”

Reflection: Self-reaction
18. Rsrp – Feedback makes students feel positive about their progress and/or performance. Exemplar: “I guess if you’re walking from A to B it lets you know you haven’t strayed off track and that’s comforting in a way.”
19. Rsrn – Feedback makes students feel negative about their progress and/or performance. Exemplar: “I’ll be up to date with everything and I would have covered all the content and there will still be these emails saying that I need to work harder, need to do more.”
20. Rsrd – Feedback invokes efforts to protect the student's self-image, e.g., by stating confidence in their own ability or study strategies rather than actions recommended in the feedback. Exemplar: “I don't think I would change my study habit because it was working pretty well for me.”

Figure 2. Presence of SRL codes from focus group discussions in four courses

Themes observed in the reflection phase were predominantly related to self-evaluation (Rse, 13%).
Students expressed that the feedback prompted them to reflect on whether they were on track (Exemplar 17, Table 2) or on topics which required more work to strengthen understanding. Topical reflection featured particularly in Course D, for example:

So, you get individual score, individual feedback on which subject you are doing well … If I am doing something wrong, then you get feedback like you are not doing it right, so I go back and revise it so I can get my hand on it. (Focus group 7)

In regard to the reflection phase, a proportion of comments related to defensive self-reactions (Rsrd, 8%), observed mainly in Course D and Course A (53% and 47% within Rsrd respectively). Students who expressed this theme felt that the recommended action – completing e-book assignments or reviewing videos – was not their preferred learning strategy. These students remarked that their preferred strategy worked better; thus, they were not likely to change their study approaches regardless of the feedback (Exemplar 20, Table 2). Other self-reactions were also present to a lesser extent. Positive self-reactions (Rsrp, 5%) were observed mainly for Courses D and C (64% and 27% respectively). Comments in this theme expressed how information in feedback made students feel positive about their progress and/or performance, fostering persistence in their studies. A smaller proportion of negative self-reactions (Rsrn, 4%) were observed mainly for Courses A and D (50% and 38% respectively). Within this theme, students commented on how, upon reading the feedback, they felt they were “not quite up to standard” (Focus group 2) even after having completed a number of learning tasks (Exemplar 19, Table 2).
Additional comments suggest that this reflective response may have discouraged further engagement with the feedback, as evidenced by the quote below:

Because it’s saying, make sure you do the [e-book] but, you know, as we said it’s not the best. So that’s saying specifically you’ve got to do that and, yeah, the specific things, [so] you dismiss the whole email a bit. (Focus group 1)

Overall, this analysis provides evidence that students enacted their personalised feedback with respect to all three phases of Zimmerman’s (2000) model. The predominant response was in terms of forethought, namely, an increased motivation to learn, followed by reflection, and lastly, self-control processes. This enhanced motivation is noteworthy: other evaluations of LA-based feedback via dashboards (e.g., Matcha, Ahmad Uzir, et al., 2020) found a negative impact of LA-based feedback on motivation, particularly when individual progress was compared with that of peers. In contrast, the personalised feedback in this study did not compare students’ progress or performance with that of peers; rather, an achievement frame of reference was used, that is, a comparison of current progress against course goals. The comments also highlight that students were active agents in their learning process and as such were actively making decisions about whether to adapt their learning strategies based on their feedback (Winne, 2006).

RQ2. Does (and if so, how) students’ sense-making of their personalised feedback differ across contexts?

Figure 3 shows the course-average networks of the 20 codes describing students’ perceptions of feedback and their self-reported subsequent learning processes mapped to SRL in the sense-making space. We focus on the results showing how the two groups of coded elements are associated, as well as on lines with greater thickness, as these reflect more frequently occurring connections (Shaffer & Ruis, 2017).
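Epistemic network analysis quantifies such connections by counting, within each unit of analysis, how often pairs of codes co-occur; the full method then normalises these counts and projects them into a low-dimensional space (Shaffer & Ruis, 2017). As a rough illustration of only the co-occurrence counting that underlies the edge weights, the sketch below uses hypothetical coded utterances (the code names follow the scheme in Table 2, but the data are invented):

```python
from itertools import combinations
from collections import Counter

# Hypothetical coded focus-group utterances: each utterance carries the set
# of codes assigned to it (code names follow the paper's coding scheme).
utterances = [
    {"PTASKp", "Ftags"},
    {"PQUALp", "Fmotp"},
    {"PTASKp", "Ftags", "Pctrl"},
    {"PQUALn", "Rsrd"},
]

# Count how often each pair of codes appears in the same utterance; in ENA,
# these co-occurrence frequencies (after normalisation and dimensional
# reduction) determine the thickness of the network edges.
edge_weights = Counter()
for codes in utterances:
    for pair in combinations(sorted(codes), 2):
        edge_weights[pair] += 1

print(edge_weights.most_common(2))
```

In this toy data, PTASKp and Ftags co-occur most often, so that edge would be drawn thickest, mirroring how the strong PTASKp–Ftags connection reported below for Courses A and C arises.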
Course A

As shown in Figure 3A (top left), a strong connection is evident between positive perceptions of the task-focused nature of the feedback (PTASKp) and goal-setting (Ftags). Students’ comments indicate that they used their feedback as a checklist and set goals to complete the tasks on which feedback was given. For example, one student noted that their feedback prompted them to make “a manual schedule so I don’t forget so I kept on track” (Focus group 2).

Figure 3. Group (course)-average epistemic networks between the coded dimensions of feedback perception and SRL enactment in four courses

Course A’s network shows a range of SRL processes connected to positive perceptions of feedback quality (PQUALp), especially that feedback provided useful progress information. Together with these perceptions, several forethought processes were expressed: goal-setting (Ftags), planned strategies (Ftaps), and to a lesser extent motivation (Fmotp). Furthermore, self-control during the performance phase (Pctrl) and self-evaluation in the reflection phase (Rse) were observed to be connected to positive perceptions of feedback quality. As shown in Figure 3A, negative perceptions of feedback quality (PQUALn) were connected with two types of reflection processes: defensive self-reactions (Rsrd) and self-evaluation (Rse). Students who expressed PQUALn disagreed with the recommendation to complete e-book assignments, preferring instead their own learning strategies. For example:

I think some of the ways it might have recommended might not have been part of the normal peoples' way of studying. … personally I didn't do any of the [e-book] because that's not how I like to do it. (Focus group 2)

The above comment highlights how the student’s negative perception of their personalised feedback, having triggered a defensive reaction, led to a failure to enact the feedback, that is, to engage with the e-book assignments.
Course B

Figure 3B (top right) shows a strong association between positive perceptions of feedback quality (PQUALp) and increased motivation (Fmotp), indicating that these two codes were frequently tied together in this course. As students appreciated feedback on their progress, they were motivated to persist in their study in order to maintain or even improve their results. The excerpt below illustrates this association:

I think for me it was, when I got the email that said you’re on the right track, oh yeah, I want to continue doing well. … Or if he said you’re doing well, you just want to continue to keep on track and do your best or keep doing well. (Focus group 13)

Increased motivation (Fmotp) was connected with other perceptions as well. Similar to Course A, increased motivation was associated with positive perceptions of the task-focused nature of the feedback (PTASKp). In addition, positive perceptions of tone (PTONEp) and sender attitude (PATTp) were observed to foster motivation, as exemplified by the following comment:

It’s actually nice to have emails to encourage people to continue. Because maybe people will do the quiz and that’s it, there’s no feedback. But if they’re like doing them and someone says “Good job, you did well on your assignment, here’s another one coming up”, it puts you back on track. (Focus group 19)

For students who expressed this and similar comments, the perception that the lecturer cared enough to acknowledge their efforts elicited greater motivation to keep up with their coursework. Similar to Course A, the SRL codes Ftags and Pctrl were located close together; however, no association was observed between PTASKp and Ftags or Pctrl. Instead, goal-setting was associated with positive perceptions of timeliness (PTIMEp).
Because this course, at 7 weeks’ duration, was shorter than the three other courses in this study, students received personalised feedback on their progress every alternate week. Students expressed that the feedback emails “were spaced out well, when they were needed”, and that without the regular feedback, they “didn’t think [they] would have been as organised” (Focus group 14).

Course C

Figure 3C (bottom left) shows strong connections, evidenced by thicker lines, between several pairs of codes. Similar to Course B, a key feature of this network is the association between enhanced motivation (Fmotp) and PQUALp. As noted earlier, this course was a pathway programme characterised by students with less experience in formal academic environments than typical first-year students. Students appreciated the quality of their feedback for providing useful and actionable information about how to improve their work. For example, the feedback messages informed students about their progress on assignments and regularly nudged students to prepare drafts of written work for instructors’ review prior to final submission. Equipped with knowledge of effective study skills, they were subsequently more motivated to persist in their efforts. The following comment illustrates the PQUALp–Fmotp association:

It was really motivating because the points we were lacking in, they described how to improve it. So by making more drafts and attending more consult hours we could build up from low grades to higher grades. (Focus group 19)

Similar to Course A, a strong connection was observed between PTASKp and Ftags. Notably, this course was built around progressive assessment pieces (see Table 1), requiring students to complete small assignments leading up to a complete essay. Students appreciated the extra reminders via their feedback messages, using them to set proximal learning goals around learning tasks.
The following is an example of this PTASKp–Ftags association:

The emails helped put my thoughts in a clear order and help me out to do this, do that. It’s the first time I did a calendar to plan everything. (Focus group 20)

An interesting association was observed between positive perceptions of tone (PTONEp) and self-control processes (Pctrl). Notably, both themes were observed most frequently in this course relative to the other courses (41% and 33% respectively). Students in this course frequently described their feedback as having an encouraging tone, being a recognition of their efforts. For other students, the feedback prompted them to “do things instead of just sitting back and procrastinating” (Focus group 21). Encouragement was especially important as some students had enrolled in the course after having a long break from study and therefore lacked confidence in their academic abilities. The following comment illustrates this point:

From being from a country kid and moving to the city and doing this kind of stuff, it was very daunting going into study. So, getting the recognition that hey you attended this, or good work on submitting your assignments, it was just nice having that feedback. (Focus group 20)

Course D

The network for Course D (Figure 3D, bottom right) clearly differs from those of the other courses. First, this network shows many more connections between nodes. Second, the network structure is affected by the presence of PCurr; that is, students’ perceptions of their personalised feedback being embedded within the curriculum. This signifies that the weekly feedback was seen as an essential component of the flipped learning course. Enhanced motivation (Fmotp) was most frequently observed in this course compared to the other courses (52%) and was connected with a number of perception codes. Similar to Courses B and C, associations between Fmotp and PQUALp were evident.
However, beyond providing useful information, students also felt motivated by the perception that the feedback gave them a sense of personal accountability for their own learning, for example:

It’s not the email itself that motivates me, but it’s what it represents … the fact that there was that sort of accountability for your own learning and there is that motivation to know that you need to stay on top of things. (Focus group 6)

Unexpected associations were observed between enhanced motivation (Fmotp) and negative perceptions of feedback quality (PQUALn) and tone (PTONEn). Some students perceived their feedback as being automated (PTONEn), thereby offering little actionable intelligence (PQUALn). However, as noted above, the emails fostered a sense of accountability. This sense of accountability was motivating for the student, as explained below:

It’s the amount of time that’s gone into the systems, which for me is like sort of most important, because then it makes me want to get the most out of them, and the fact that it is just so instantaneous and so regular it’s really helpful, because in other units you wouldn’t get that kind of feedback straight away. (Focus group 7)

Similar to Fmotp, planned strategies (Ftaps) were associated with both positive and negative perceptions regarding quality, task and tone (PQUALp, PTASKp and PTONEn). Students expressed how the feedback helped them to clearly structure their learning process, for example:

For other subjects, I know for my stuvac [study vacation] period I’m going to go and just get a past paper and try and do all the practice problems. For this one I need to go through all of the content and make sure I’m familiar and understand it thoroughly before I’m starting and able to do problems. (Focus group 6)

The comment above demonstrates a clarity about the strategies to be undertaken for final exam preparation, that is, to revise the content for a thorough understanding before undertaking practice problems.
This specificity is missing from comments in other courses. Finally, perceptions of feedback as embedded in the curriculum (PCurr) were linked to reflection processes of self-evaluation (Rse) and defensive self-reactions (Rsrd). As noted earlier, this course recorded the highest proportion of Rse comments (71%). Furthermore, there was a qualitative difference in the focus of the evaluation for students in Course D. Although students in the other courses discussed self-evaluation with respect to progress of learning tasks, Course D students related their self-evaluation to mastery of learned topics. Considering how feedback was perceived as being embedded in the curriculum, the weekly cycle of preparatory work and feedback meant that students could review all their feedback messages to help them ascertain the topics they were weaker in. This PCurr–Rse association is exemplified in the following comment:

In an individual week, the quizzes that we do are worth maybe 1% or something, and it’s very easy to get through ten, 15 minutes, but I think it makes more sense in the longer aspect, if you go back and see it as sort of a reflection of the whole semester. Then you go back and say, “Oh, this is what I struggled with the first time I learned it. This probably what I should focus on now for the final exam.” (Focus group 7)

Like self-evaluation, defensive self-reactions also featured most in this course (53%). This linked to PCurr in the following way: the feedback reported students’ activity and performance on the preparatory quizzes as their learning; however, some students felt that they were learning “in a way that the automated system didn’t account for” (Focus group 4), thereby making the feedback less useful. This association was similarly observed in Course A. Finally, as with Course A, negative perceptions of feedback quality (PQUALn) were associated with Rsrd.
Some students felt that the recommended action in the feedback, for example, to rewatch the video and reread a section, was “a little overwhelming” (Focus group 6); and therefore, they did not enact their feedback.

Discussion

Implications for personalised feedback based on LA

In this study, we examined personalised feedback based on LA and further elaborated upon by instructors. The findings demonstrate that this form of feedback, from the students’ perspective, increased motivation and kept students on task. The findings also demonstrate how personalised feedback can be aligned with flipped learning design to possibly enhance students’ reflection in particular. The results of this study indicate the features of LA-based feedback that are influential in students’ subsequent adaptation of SRL processes. Specifically, this feedback should inform students about their progress, be clear about future tasks, and have a positive tone. These features are aligned with Winstone et al.’s (2017) review and, in fact, illustrate principles of effective feedback defined by Nicol and Macfarlane-Dick (2006). Furthermore, the task-focused nature of this feedback seems to help with goal-setting, an important forethought process that is key to effective SRL (Cosnefroy et al., 2018; McCardle et al., 2017). Finally, having feedback embedded in the curriculum and regularly informing students about their mastery of content may be important for enhancing reflection processes. This embeddedness was a unique feature of Course D, which used a flipped learning design. This highly structured form of blended learning comprised cycles of self-exploration of topics prior to actual instruction. Feedback provided at the end of one cycle is intended to improve subsequent learning cycles (Pardo et al., 2019). Students were able to recognise this iterative cycle of activity and feedback through their perception of feedback as being embedded in the course rather than an addition to the course.
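Operationally, feedback of this kind is typically produced by instructor-authored if-then rules evaluated over each student's learning indicators, the approach taken by tools such as OnTask. The sketch below is a minimal, hypothetical illustration of that rule-based assembly; the indicator names, thresholds and message fragments are invented and are not those used in the study:

```python
# Hypothetical weekly indicators for one student; names, thresholds and
# message fragments are illustrative only, not those used in the study.
student = {
    "name": "Alex",
    "videos_watched": 2,
    "videos_total": 5,
    "quiz_score": 9,
    "quiz_max": 10,
}

def build_feedback(s):
    """Assemble a personalised message from instructor-authored if-then rules."""
    parts = [f"Hi {s['name']},"]  # personal greeting supports a positive tone
    # Rule 1: acknowledge progress (progress information, positive framing)
    if s["quiz_score"] / s["quiz_max"] >= 0.8:
        parts.append("great work on this week's quiz; you are on track.")
    else:
        parts.append("your quiz result suggests revisiting this week's topic.")
    # Rule 2: be clear about the next task (task-focused element)
    if s["videos_watched"] < s["videos_total"]:
        remaining = s["videos_total"] - s["videos_watched"]
        parts.append(f"You still have {remaining} preparation videos to watch before the lecture.")
    return " ".join(parts)

print(build_feedback(student))
```

The two rules correspond to the features identified above: informing about progress with a positive tone, and being clear about future tasks; in practice the instructor writes many such rules and each student receives the union of the fragments whose conditions they trigger.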
This contextual feature of LA-based feedback illustrates the kind of sustainable, dialogic feedback recommended by feedback researchers (e.g., Carless & Boud, 2018). In contrast, features of personalised feedback that may hinder SRL adaptation are negative perceptions of feedback quality with respect to accuracy of data and a perceived misalignment with preferred learning strategies, as these lead to defensive self-reactions. Research has noted students’ perceptions of the accuracy of their reported data as an important consideration for LA-based interventions, affecting students’ engagement with learner dashboards (Klein et al., 2019). To mitigate this, personalised feedback could occasionally include metacognitive prompts that explicitly direct students to “reflect, monitor, and control their own learning process” (Sonnenberg & Bannert, 2015, p. 76). In its current state, LA-based feedback fosters a reliance on external feedback without building students’ own capacity for generating accurate judgements of their own learning (Griffin et al., 2013). The relatively small presence of comments relating to learning strategies suggests that LA-based feedback, even in the form of personalised feedback messages, may still offer limited intelligence to students to inform adaptation of learning strategies. Alternatively, students may lack the language to talk about their specific learning strategies. This finding echoes criticisms of visualised feedback through learner dashboards (e.g., Matcha, Ahmad Uzir, et al., 2020). Therefore, this is one area in which personalised feedback could be improved: by incorporating explicit information about effective learning strategies. A number of such strategies have been described in detail by Dunlosky et al. (2013), including distributed practice and self-testing. In this study, the recommended actions in the personalised feedback related to course learning tasks. Some of these
recommendations were, in fact, effective learning strategies. For example, performing activities in the e-book (Course A) gave students a way to practise their understanding through self-testing. Likewise, the weekly preparatory tasks (Course D) were an opportunity for students to set a regular goal for self-learning, as well as to test their understanding. Students perceived their feedback to be task-focused, which helped them fulfil immediate course requirements, but were less able to recognise the recommended actions as examples of effective learning strategies. Consequently, the feedback may be less helpful for developing strategies for SRL at a broader level (Hattie & Timperley, 2007). To help students build a repertoire of broader learning strategies, personalised feedback messages could be framed as such. In this way, students are not only equipped to stay on task in the respective course, but also develop effective SRL skills and strategies for use beyond the course. Another reason for the current limitation of personalised feedback in addressing learning strategies is the lack of ready information about the kinds of learning strategies that students followed. With very recent advancements in LA, it is now possible to detect students’ dynamic learning strategies from trace data (e.g., Ahmad Uzir et al., 2019; Matcha et al., 2019). For example, Ahmad Uzir et al. (2019) used a combination of process mining and ENA to discover different combinations of time management strategies, such as starting ahead of scheduled topics, preparing, catching up or revisiting. If this information were made available to instructors, they could offer feedback directly on strategies and potentially even suggest changes in strategy use. This is another possible direction for the enhancement of personalised feedback. The present study should be discussed in view of recent related studies on personalised feedback.
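As a rough illustration of how time management tactics like those named above might be labelled from timestamped trace data, the sketch below compares each activity date against a course schedule. This is a drastic simplification of the process mining approach used by Ahmad Uzir et al. (2019): the schedule, thresholds and label boundaries here are all invented for illustration.

```python
from datetime import date

# Hypothetical schedule: week w's materials are meant to be studied in week w.
week_start = {1: date(2020, 3, 2), 2: date(2020, 3, 9), 3: date(2020, 3, 16)}

def tactic(topic_week, when):
    """Label one trace event relative to the schedule (labels are invented)."""
    delta = (when - week_start[topic_week]).days
    if delta < 0:
        return "ahead"        # engaged with material before its scheduled week
    if delta < 7:
        return "preparing"    # engaged during the scheduled week
    if delta < 21:
        return "catching-up"  # engaged within a few weeks after schedule
    return "revisiting"       # returned to the material much later, e.g., revision

print(tactic(1, date(2020, 3, 1)), tactic(2, date(2020, 3, 12)))
```

Aggregating such labels per student over a semester would yield the kind of tactic profile that, if surfaced to instructors, could inform feedback directly targeted at strategy use.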
Other studies have been conducted in two of the same courses. Lim et al.’s (2019) study examined the impact of this feedback on students’ e-book activity in Course A by comparing levels of this activity between the intervention cohort (feedback condition) and a matched sample from earlier cohorts who did not receive the feedback (no-feedback condition). They found that the feedback group demonstrated an increase in e-book activity after the first emailed feedback, which was sustained throughout the rest of the course, while the no-feedback group showed classic cramming behaviour. However, it was also noted that the feedback group displayed overall lower levels of activity with the e-book compared to the no-feedback group, yet they still obtained higher course grades. The insights from the analysis of focus group discussions in the present study may help to explain these conflicting findings. Students expressed an increased motivation and improved self-control processes, which likely translated into consistent engagement with the learning activities. As noted in the Results section, there were also a few comments showing defensive self-reactions in response to the feedback; because these students had their preferred learning methods, they did not enact the recommended learning strategy (the e-book), but continued with their own, albeit with more consistency. Therefore, for Course A, the evidence from the present study indicates that students responded to their feedback and adapted their motivation and self-control processes to optimise their learning in the course. Two other studies were conducted recently on the impact of personalised feedback on SRL in Course D (Ahmad Uzir et al., 2019; Matcha et al., 2019). Both these studies found that, compared to earlier cohorts without feedback, the cohort that did experience the weekly feedback displayed more effective SRL strategies and time management tactics. Moreover, Matcha et al.
(2019) found that the proportion of students using more optimal SRL processes was higher. This resonates with students’ comments in this study regarding how feedback helped them to clearly structure their learning process. The pre-lecture preparation tasks informed their forethought processes as planned strategies for learning. Furthermore, Ahmad Uzir et al. (2019) found that the proportion of students who used comprehensive time management strategies – revisiting previous topics and short preparing in addition to engaging in pre-lecture preparation activities – was higher when feedback was present. Again, this resonates with students’ comments illustrating how their feedback informed self-evaluation of their mastery of the topic. The feedback on the weekly topic quizzes that were part of the online preparatory work highlighted to students the specific topics which required stronger understanding. Students then revisited their feedback to identify specific topics for review. Although these two studies were not carried out in the same cohort of students as the present study, the contextual elements (course and feedback design, instructor) were entirely the same. Thus, the present study corroborates the evidence of impact on SRL and provides further insight through the analysis of focus group discussions. This study, as well as critical reviews of feedback delivered via learner dashboards (e.g., Jivet et al., 2018; Matcha, Ahmad Uzir, et al., 2020), highlights that merely reporting students’ learning data back to them as feedback may be insufficient for truly developing SRL and does little to encourage motivation and engagement (Jivet et al., 2018). The data element in such feedback demands sense-making, which, without sufficient guidance, may have adverse effects on SRL.
In comparison, the present study shows that, when such feedback is further elaborated with advice from the course instructor, this more personalised feedback helps students to refine or strengthen important forethought processes of goal-setting or task analysis to some extent, as evidenced by the presence of codes relating to these processes. As noted above, the implementation of LA-based feedback in this study can foster SRL by supporting students in self-observation processes so that they know how they are progressing, motivating them to continue in their studies and keeping them on task with the course requirements. The feedback could be further enhanced by making it more dialogic and fostering an increased sense of agency in students, for example, by getting them to set their own goals for learning and prompting them to monitor their progress towards those goals.

Limitations of the study

We acknowledge that this study is not without its limitations. Firstly, as the analysis relied solely on students’ accounts of their experiences with personalised feedback, students may have had difficulty articulating or recalling how their learning changed in response to the feedback. However, we argue that obtaining students’ accounts was necessary for an understanding of this feedback from the experience of these key stakeholders. Notably, self-reports of SRL have been criticised as being inaccurate when compared with the use of learning traces (e.g., Zhou & Winne, 2012). We argue that the use of verbal self-reports in this study has shed some light on students’ internal forethought and self-reflection processes, elements not readily captured through the analysis of trace data.
While acknowledging that these retrospective accounts could be limited by students’ fragmented or biased memories of their actual experience, we see our study as supplementing findings from other research that examined learning traces to detect SRL, allowing for a fuller understanding of SRL adaptations in response to feedback. Secondly, due to the scope of this study, we did not examine or collect data on learner characteristics, which constitute one category of variables affecting students’ engagement with feedback (Winstone et al., 2017). Zimmerman and Moylan (2009) also posited that students’ goal orientations play an influential role in the forethought processes of strategic planning. Future studies could build on the current study by adding the consideration of learner variables such as goal orientations, self-efficacy or baseline self-regulatory levels, in order to capture a fuller understanding of how such learner characteristics affect students’ recipience of personalised feedback.

Conclusion

Our results indicate possible benefits of instructor-elaborated feedback over highly visualised reports of learner data. Perhaps most important is that such feedback offers a frame of reference that avoids social anxiety through peer comparison; instead, the standard provided to students is based on achievement, that is, progress towards course goals. At the same time, the results from this study highlight a further area of development for this innovative form of feedback: to increase the dialogic element in order to reduce defensive self-reactions from students who hold to their own learning strategies, helping them to reflect further on the suitability of their current learning strategies. Importantly, our study contributes to filling a notable gap in LA research through qualitative research to understand the influence of LA-based feedback on students’ learning.
Future research on LA-based feedback interventions should continue to examine their impact both in terms of learning behaviours, as measured through learning traces, and in terms of student perspectives on their experiences with such interventions.

Acknowledgements

This work was supported by the Australian Government Office for Learning and Teaching under Grant number SP 16-5264. The authors would also like to thank the two anonymous reviewers for their constructive feedback and suggestions to enhance the paper.

References

Ahmad Uzir, N. A., Gašević, D., Matcha, W., Jovanović, J., Pardo, A., Lim, L.-A., & Gentili, S. (2019). Discovering time management strategies in learning processes using process mining techniques. In M. Scheffel, J. Broisin, V. Pammer-Schindler, A. Ioannou, & J. Schneider (Eds.), Lecture notes in computer science: Vol. 11722. Transforming learning with meaningful technologies (pp. 555–569). Springer. https://doi.org/10.1007/978-3-030-29736-7_41

Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Sloan Consortium. http://www.onlinelearningsurvey.com/reports/changingcourse.pdf

Bodily, R., & Verbert, K. (2017). Review of research on student-facing learning analytics dashboards and educational recommender systems. IEEE Transactions on Learning Technologies, 10(4), 405–418. https://doi.org/10.1109/TLT.2017.2740172

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245

Carless, D. (2016). Feedback as dialogue. In M. A. Peters (Ed.), Encyclopedia of educational philosophy and theory (pp. 1–6). Springer. https://doi.org/10.1007/978-981-287-532-7_389-1

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback.
Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Cosnefroy, L., Fenouillet, F., Mazé, C., & Bonnefoy, B. (2018). On the relationship between the forethought phase of self-regulated learning and self-regulation failure. Issues in Educational Research, 28(2), 329–348. http://www.iier.org.au/iier328/cosnefroy.pdf

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266

Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46(1), 6–25. https://doi.org/10.1080/00461520.2011.538645

Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., Rienties, B., Ullman, T., & Vuorikari, R. (2016). Research evidence on the use of learning analytics: Implications for education policy (Joint Research Centre Science for Policy Report, EUR 28294 EN). European Union. https://doi.org/10.2791/955210

Griffin, T. D., Wiley, J., & Salas, C. R. (2013). Supporting effective self-regulated learning: The critical role of monitoring. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (Vol. 28, pp. 19–34). Springer. https://doi.org/10.1007/978-1-4419-5546-3_2

Hadwin, A. F., Davis, S. K., Bakhtiar, A., & Winne, P. H. (2019). Academic challenges as opportunities to learn to self-regulate learning. In H. Askell-Williams & J. Orrell (Eds.), Problem solving for teaching and learning (pp. 34–47). Routledge. https://doi.org/10.4324/9780429400902-4

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
https://doi.org/10.3102/003465430298487

Henderson, M., Ajjawi, R., Boud, D., & Molloy, E. (2019). Identifying feedback that has impact. In M. Henderson, R. Ajjawi, D. Boud, & E. Molloy (Eds.), The impact of feedback in higher education: Improving assessment outcomes for learners (pp. 15–34). Springer International Publishing. https://doi.org/10.1007/978-3-030-25112-3_2

Jivet, I., Scheffel, M., Drachsler, H., & Specht, M. (2018). License to evaluate: Preparing learning analytics dashboards for educational practice. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 31–40). Association for Computing Machinery. https://doi.org/10.1145/3170358.3170421

Khan, I., & Pardo, A. (2016). Data2U: Scalable real time student feedback in active learning environments. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 249–253). Association for Computing Machinery. https://doi.org/10.1145/2883851.2883911

Klein, C., Lester, J., Nguyen, T., Justen, A., Rangwala, H., & Johri, A. (2019). Student sensemaking of learning analytics dashboard interventions in higher education. Journal of Educational Technology Systems, 48(1), 130–154. https://doi.org/10.1177/0047239519859854

Lim, L.-A., Dawson, S., Gašević, D., Joksimović, S., Pardo, A., Fudge, A., & Gentili, S. (2020). Students' perceptions of, and emotional responses to, personalised LA-based feedback: An exploratory study of four courses. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2020.1782831

Lim, L.-A., Gentili, S., Pardo, A., Kovanović, V., Whitelock-Wainwright, A., Gašević, D., & Dawson, S. (2019). What changes, and for whom? A study of the impact of learning analytics-based process feedback in a large course. Learning and Instruction. https://doi.org/10.1016/j.learninstruc.2019.04.003

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J.
(2017). Data-driven personalization of student learning support in higher education. In A. Peña-Ayala (Ed.), Learning analytics: Fundaments, applications, and trends: A view of the current state of the art to enhance e-learning (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Matcha, W., Ahmad Uzir, N. A., Gašević, D., & Pardo, A. (2020). A systematic review of empirical studies on learning analytics dashboards: A self-regulated learning perspective. IEEE Transactions on Learning Technologies, 13(2), 226–245. https://doi.org/10.1109/TLT.2019.2916802

Matcha, W., Gašević, D., Jovanović, J., Pardo, A., Lim, L., Maldonado-Mahauad, J., Gentili, S., Pérez-Sanagustín, M., & Tsai, Y.-S. (2020). Analytics of learning strategies: Role of course design and delivery modality. Journal of Learning Analytics, 7(2), 45–71. https://doi.org/10.18608/jla.2020.72.3

Matcha, W., Gašević, D., Uzir, N. A. A., Jovanović, J., & Pardo, A. (2019). Analytics of learning strategies: Associations with academic performance and feedback. In Proceedings of the 9th International Learning Analytics and Knowledge Conference (pp. 461–470). Association for Computing Machinery. https://doi.org/10.1145/3303772.3303787

McCardle, L., Webster, E. A., Haffey, A., & Hadwin, A. F. (2017). Examining students' self-set goals for self-regulated learning: Goal properties and patterns. Studies in Higher Education, 42(11), 2153–2169. https://doi.org/10.1080/03075079.2015.1135117

Morgan, D. L. (1997). Focus groups as qualitative research (2nd ed.). SAGE.

Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090

Pardo, A., Bartimote-Aufflick, K., Buckingham Shum, S., Dawson, S., Gao, J., Gašević, D., Leichtweis, S., Liu, D. Y.-T., Martínez-Maldonado, R., Mirriahi, N., Moskal, A. C.
M., Schulte, J., Siemens, G., & Vigentini, L. (2018). OnTask: Delivering data-informed personalized learning support actions. Journal of Learning Analytics, 5(3), 235–249. https://doi.org/10.18608/jla.2018.53.15

Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology, 50(1), 128–138. https://doi.org/10.1111/bjet.12592

Pardo, A., Poquet, O., Martinez-Maldonado, R., & Dawson, S. (2017). Provision of data-driven student feedback in LA and EDM. In C. Lang, G. Siemens, A. F. Wise, & D. Gašević (Eds.), Handbook of learning analytics (1st ed., pp. 163–174). Society for Learning Analytics Research. https://doi.org/10.18608/hla17.014

Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289. https://doi.org/10.1080/02602930903541007

Ryan, T., Gašević, D., & Henderson, M. (2019). Identifying the impact of feedback over time and at scale: Opportunities for learning analytics. In M. Henderson, R. Ajjawi, D. Boud, & E. Molloy (Eds.), The impact of feedback in higher education: Improving assessment outcomes for learners (pp. 207–223). Springer International Publishing. https://doi.org/10.1007/978-3-030-25112-3_12

Shaffer, D. W., & Ruis, A. (2017). Epistemic network analysis: A worked example of theory-based learning analytics. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (1st ed., pp. 175–187). Society for Learning Analytics Research. https://doi.org/10.18608/hla17.015

Sinclair, A. J., Ferreira, R., Gašević, D., Lucas, C. G., & Lopez, A. (2019). I wanna talk like you: Speaker adaptation to dialogue style in L2 practice conversation. In S. Isotani, E. Millán, A. Ogan, P. Hastings, B. McLaren, & R. Luckin (Eds.), Lecture notes in computer science: Vol. 11626.
Artificial intelligence in education (pp. 257–262). Springer. https://doi.org/10.1007/978-3-030-23207-8_48

Sonnenberg, C., & Bannert, M. (2015). Discovering the effects of metacognitive prompts on the sequential structure of SRL-processes using process mining techniques. Journal of Learning Analytics, 2(1), 72–100. https://doi.org/10.18608/jla.2015.21.5

Viberg, O., Khalil, M., & Baars, M. (2020). Self-regulated learning and learning analytics in online learning environments: A review of empirical research. In Proceedings of the 10th International Conference on Learning Analytics and Knowledge (pp. 524–533). Association for Computing Machinery. https://doi.org/10.1145/3375462.3375483

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16(4), 409–421. https://doi.org/10.1287/orsc.1050.0133

Winne, P. H. (2006). How software technologies can improve research on learning and bolster school reform. Educational Psychologist, 41(1), 5–17. https://doi.org/10.1207/s15326985ep4101_3

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 27–30). Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410602350

Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. https://doi.org/10.1080/00461520.2016.1207538

Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 203–211). Association for Computing Machinery. https://doi.org/10.1145/2567574.2567588

Zhou, M., & Winne, P. H. (2012).
Modeling academic achievement by self-reported versus traced goal orientation. Learning and Instruction, 22(6), 413–419. https://doi.org/10.1016/j.learninstruc.2012.03.004

Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25(1), 82–91. https://doi.org/10.1006/ceps.1999.1016

Zimmerman, B. J., & Moylan, A. R. (2009). Self-regulation: Where metacognition and motivation intersect. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 311–328). Routledge.

Corresponding author: Lisa-Angelique Lim, lisa.lim@unisa.edu.au

Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under the Creative Commons Attribution-NonCommercial-NoDerivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.

Please cite as: Lim, L.-A., Dawson, S., Gašević, D., Joksimović, S., Fudge, A., Pardo, A., & Gentili, S. (2020). Students' sense-making of personalised feedback based on learning analytics. Australasian Journal of Educational Technology, 36(6), 15–33. https://doi.org/10.14742/ajet.6370