Australasian Journal of Educational Technology, 2021, 37(6).

Repositioning students as co-creators of curriculum for online learning resources

Aaron McDonald, Heath McGowan, Mollie Dollinger
La Trobe University

Ryan Naylor
The University of Sydney

Hassan Khosravi
The University of Queensland

Amid increasing calls for universities to transition to online learning, there is a need to explore how platforms and technology can provide positive student experiences and support learning. In this paper, we discuss the implementation of an online peer learning and recommender platform in a large, multi-campus, first-year health subject (n = 2095). The Recommendation in Personalised Peer Learning Environments (RiPPLE) platform supports students' co-creation of learning resources and allows students to provide feedback on and rate their peers' submissions. Our results indicated that both student engagement and academic performance were positively impacted for users by the introduction of the RiPPLE platform, but that academic preparedness, in the form of students' ATAR scores, strongly influenced their engagement and the benefits received.

Implications for practice or policy:
• We explored whether students were willing to co-create learning resources online.
• Our study piloted an online platform known as Recommendation in Personalised Peer Learning Environments (RiPPLE).
• Critical analysis provides insights into fostering online engagement and peer learning.
• We further offer recommendations for future practice on how to embed online student co-creation of curriculum.

Keywords: peer learning, online learning, co-creation, student engagement, learning resources, quantitative

Introduction

Even before the COVID-19 global pandemic, many scholars had argued that higher education was likely to shift towards online learning delivery (Ellis & Bliuc, 2016; Hanna, 1998; Lee, 2017; Robinson & Hullinger, 2008). Through online platforms, universities can extend their market audience to geo-diverse learners, cater to the growing cohorts of part-time and mature-age students, and potentially adopt cost-efficient and scalable learning support systems that reduce expenditure and staffing demands (Dollinger, Cox et al., 2020; Stone, 2017). However, alongside the discussion of the benefits of online learning loom questions of how online learning can support many of the established benefits of a face-to-face learning environment, including student engagement and peer interaction (Dawson, 2006; Dumford & Miller, 2018; Gillett-Swan, 2017; Muir et al., 2019). To address this gap, a variety of online platforms and systems have been introduced that aim to enhance the online learning experience and support engagement. These include platforms such as OnTask and the Student Relationship Engagement System (SRES), which use traceable student data to provide personalised feedback, as well as platforms that support group interaction or peer-assisted learning (e.g., Breakout) (see Calacci et al., 2016; Dollinger et al., 2019; Pardo et al., 2018; Vigentini et al., 2020). However, there is currently a gap in the literature exploring the ways in which students and staff can jointly co-create curriculum and learning resources at scale, particularly in regard to how student co-creation can occur within online platforms and systems.
Therefore, in this study we discuss a platform known as RiPPLE (Recommendation in Personalised Peer Learning Environments) that aims to allow students to co-create content during the subject and to recommend content to their peers through an advanced recommender system. To explore this topic, we piloted RiPPLE in a large first-year anatomy subject (n = 2095) to understand how online co-creative platforms, such as RiPPLE, can improve student learning outcomes and engagement. Engagement in this study is conceptualised through a behavioural frame and is measured through factors such as students' usage of the platform and the creation of, or responses to, student-generated questions (see Axelson & Flick, 2010, and Kahu, 2013, for varying definitions of engagement).

Unique to the RiPPLE platform is the ability to allow students and staff to be joint co-creators of any given subject (Khosravi et al., 2019). Student and staff co-creation can be defined as a process where student and staff resources (e.g., ideas, feedback, platforms) interact to support improved student learning and/or experiences (Dollinger et al., 2018). Examples to date have included students and staff co-creating rubrics, study support resources, and social media (Fraile et al., 2017; Fujita et al., 2017; Knight et al., 2020). Previous studies have shown that facilitating student-staff co-creation can lead to improved academic outcomes for students as well as enhanced perceptions of employability, self-efficacy, and ownership (Dollinger & Lodge, 2019; Mercer-Mapstone et al., 2017). By supporting students to become co-creators of content and course delivery, staff can also support scalable peer learning (Blau & Shamir-Inbal, 2017). Co-creation also provides benefits for staff, as they gain a better understanding of students' perspectives and needs (Marquis et al., 2017; Matthews et al., 2018). Also, while studies have highlighted that student co-creation or partnership can initially increase academic workload (Coombe et al., 2018), as interventions and programs become more established, allowing students to co-create content may decrease academic workload in the long term.

Background on peer learning and student co-created content

A rich body of literature in psychology and education (Boud et al., 2014; Topping, 2005) increasingly recognises peer learning as an important form of learning, feedback, and assessment (Carless et al., 2011; Liu & Tsai, 2005). While the benefits of peer and social learning have been well established and widely accepted, methods for their effective facilitation among large communities of diverse learners still remain a challenge (Chiu & Hew, 2018; Potts et al., 2018). Previous research has explored the impact of using technology to promote peer learning in many contexts, including computer conferencing and learner communities (Wise & Cui, 2018; Zhao et al., 2014), recommender systems (George & Lal, 2019; Zheng & Yano, 2007), and as a mechanism to utilise students' untapped information and communication technology (ICT) skills while building student-led learning practices (Lang et al., 2017). In this study, we focused on exploring the impact of using technology to promote peer learning in the form of partnering with students as co-creators of content.
While, historically, students have typically been able to provide feedback on curriculum, there is a growing focus on how students can co-create curriculum in real time to support peer-informed learning (Cook-Sather, 2014; Könings et al., 2014). Bovill and Woolmer (2019) highlighted a range of recent initiatives, from students co-creating whole courses or subjects (see Woolmer et al., 2016) to students co-creating curriculum alongside staff through committees (Mihans et al., 2008). Bovill and Woolmer (2019) also pointed out an important distinction between co-creation in the curriculum, whereby students have the opportunity to modify or add to an existing structure and design, and co-creation of the curriculum, where students co-create before the subject takes place. In the initiative discussed here, students had the opportunity to co-create in the curriculum through the creation of learning resources.

As scholars have previously discussed, supporting students as co-creators of the curriculum shifts traditional power imbalances between students and staff (Cook-Sather, 2014). Staff are able to democratise the educational experience, which can have transformative effects on students that support deeper engagement (Bergmark & Westman, 2016). The benefits of student co-created content and of rating one another's content also extend to students' assessment literacy. By actively taking on the role of producer, students may develop a greater understanding of the language of assessment (Rust et al., 2003) and the practice of giving and receiving feedback (Mulder et al., 2014). Deeley and Bovill (2017) also found that co-creation of assessment and feedback enhanced motivation and engagement and helped to develop and strengthen learner communities.

Yet scholars have also noted the barriers to supporting student co-created content, including concerns over quality, pressure on time and budget, and discomfort with changed roles among both the staff and students involved (Bovill, 2014; Dollinger et al., 2018). Therefore, it is important to consider that student co-created content does not replace the need for teacher expertise, or even oversight, but rather offers an alternative learning process. Further, while there is growing research into how students can co-create the curriculum through initiatives such as workshops, focus groups, and committees, there are few fully online examples (for an exception, see Pee [2019]). Therefore, amid ongoing calls to improve student engagement in online learning environments (e.g., Davey et al., 2019), it is a timely endeavour to explore how student co-created content can enhance the quality of the learning experience.

Overview of RiPPLE platform

RiPPLE (refer to Khosravi et al., 2019) is an online learning platform that employs learner-centred and pedagogically supported approaches to engage students in authentic learning experiences. The platform further aims to harness the creativity and evaluation power of students as experts-in-training to develop a repository of high-quality learning resources. In the current version of the platform, students use a set of learning themes or topics, which have been created by the course staff, to create a range of learning resource types, including multiple-choice questions, multi-answer questions, matching questions, worked examples, and open-ended notes.
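To make these resource types concrete, the following is a minimal, hypothetical sketch of how a co-created resource might be represented. The class and field names are illustrative assumptions only, not RiPPLE's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ResourceType(Enum):
    MULTIPLE_CHOICE = "multiple-choice question"
    MULTI_ANSWER = "multi-answer question"
    MATCHING = "matching question"
    WORKED_EXAMPLE = "worked example"
    OPEN_ENDED_NOTE = "open-ended note"

@dataclass
class LearningResource:
    author_id: str                  # pseudonymous student identifier
    topic: str                      # staff-defined learning theme the resource belongs to
    resource_type: ResourceType
    body: str                       # question stem, worked example, or note text
    options: List[str] = field(default_factory=list)       # empty for notes and worked examples
    correct_options: List[int] = field(default_factory=list)
    approved: Optional[bool] = None  # set later, after peer and staff moderation

# Example: a student-authored multiple-choice question on a staff-defined topic.
question = LearningResource(
    author_id="anon_1042",
    topic="Musculoskeletal system",
    resource_type=ResourceType.MULTIPLE_CHOICE,
    body="Which bone articulates with the glenoid fossa of the scapula?",
    options=["Humerus", "Radius", "Clavicle", "Ulna"],
    correct_options=[0],
)
```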
As students are developing their expertise, it is likely that some of the learning resources created will be ineffective, inappropriate, or incorrect. Hence, RiPPLE uses an evaluation process that again partners with students, this time as moderators who judge the quality of their peers' work (Abdi et al., 2021). Figure 1 illustrates the evaluation interfaces used by the platform. Because the decision made by an individual might be incorrect, the platform employs a redundancy-based strategy and assigns the task of evaluating a resource to multiple students. RiPPLE then applies a suite of consensus algorithms (see Zheng et al., 2017) to decide whether or not the resource should be added to the repository. Additionally, RiPPLE uses a spot-checking algorithm (Wang et al., 2020) to enable academics to facilitate students' content creation and evaluation contributions with minimal oversight: the algorithm identifies resources that would benefit the most from an expert judgement and presents them to academics.

Figure 1. The current interface for student evaluation of learning resources in RiPPLE

Based on students' interactions with resources that have been added to the course repository, RiPPLE uses AI algorithms to estimate each student's level of knowledge on a topic and recommends personalised learning activities based on their mastery level (Abdi et al., 2020). To help students regulate their learning, RiPPLE uses transparent and explainable AI models that allow students to understand how their mastery is computed and why particular resources have been recommended to them. Figure 2 shows one of the main pages in RiPPLE. The upper part of the figure contains an explainable model of the student's knowledge state using an open learner model, as outlined in Abdi et al. (2019). The lower part of the figure displays learning resources from the repository that are recommended to the student based on their learning needs, using the recommender system outlined in Khosravi et al. (2017).

Figure 2. The current interface of the student modelling and recommendation page in RiPPLE

Material and methods

This study explored how the RiPPLE platform could support online peer learning to drive engagement and positively impact students' academic achievement. To pilot the platform, we selected a large first-year human anatomy subject (HBS1HBB). The subject is taught in the second semester of the first year and has students from multiple campuses. Students who take the subject come from several discipline areas, including allied health, nursing, and biological science. In the year of the intervention (2019) there were 2095 students enrolled: 25% regional and 75% metropolitan. In the results section we further discuss the specific demographics and cohorts who utilised the platform compared to non-users. Other than the introduction of the platform, the subject's 12-week learning design remained consistent to aid comparison of the impact of RiPPLE.

The research project was approved by the Human Ethics Committee (HEC19283). The platform was available to students throughout the semester, and students could choose to use the platform but opt out of data collection. When students signed up for the platform (using their student ID) they could also choose an anonymous username to protect their privacy and potentially avoid embarrassment if they incorrectly answered questions.
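The redundancy-based moderation strategy described in the platform overview above can be illustrated with a deliberately simplified sketch: a plain majority vote with a referral rule. RiPPLE's actual consensus algorithms (Zheng et al., 2017) and spot-checking (Wang et al., 2020) are considerably more sophisticated, and the thresholds and labels below are assumptions for illustration only.

```python
from collections import Counter
from typing import List

def aggregate_judgements(judgements: List[str], min_votes: int = 3) -> str:
    """Decide a resource's fate from several independent peer judgements.

    A simple majority vote: production consensus methods weight judges by
    estimated reliability, but the redundancy idea is the same -- no single
    student's decision is taken on its own.
    """
    if len(judgements) < min_votes:
        return "needs more reviews"
    counts = Counter(judgements)
    decision, votes = counts.most_common(1)[0]
    if votes / len(judgements) < 0.7:
        # Close calls are flagged for expert spot-checking by an academic.
        return "refer to academic"
    return decision  # e.g., "add to repository" or "reject"

# Example: four peers review one student-authored question.
votes = ["add to repository", "add to repository", "reject", "add to repository"]
print(aggregate_judgements(votes))  # -> "add to repository"
```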
At the end of the semester, the research team extracted usage analytics from the software, such as frequency of logins and type of engagement (e.g., creating new questions). All data were then matched to data from the learning management system (LMS) (i.e., grades) and de-identified by a member of the research staff before further analysis took place. The research questions for this study were:
(1) How do students choose to engage with the platform?
(2) How does student engagement with the platform impact students' academic performance?
(3) Are there any trends or patterns in which student cohorts choose to engage with the platform?

The study collected several types of data linked to students' IDs, including Australian Tertiary Admission Rank (ATAR), gender, campus location, and final grade in the subject. ATAR is a measure of academic preparedness (similar to other measures used internationally, such as school grade point average or Scholastic Assessment Test [SAT] scores). ATAR is calculated from an individual's study scores in each subject taken in upper secondary school, where those scores are modified according to academic rigour and the degree to which that subject contributes to the assumed knowledge and academic literacies required for tertiary studies (University Admissions Centre, 2020). Students with a high ATAR are more likely to have been accepted into a course of their first preference (Helal et al., 2019), which may suggest a higher intrinsic interest in the subject matter. However, it should be noted that ATAR has also been shown to be a poor predictor of overall student university performance and is potentially linked more to students' socioeconomic status than to academic aptitude (Craft, 2019; Win & Miller, 2005). Through the platform, we were also able to collect data on whether students created a profile, logged in, created questions, answered questions, and rated questions. Students were also given the option to provide standard feedback on the platform itself via an embedded survey. Below we present these data with analysis.

Results

In this section we provide an overview of the major findings from the intervention, organised by research question.

Research question 1: Student engagement and usage

In 2019, there were 2095 students in total enrolled in HBS1HBB. While all students were invited to use the platform to review learning content and prepare for quizzes and exams, a total of 1379 students (66%) logged onto the RiPPLE platform. The remainder were designated non-participants in this study, and their data were included only in total-cohort analyses when exclusion would have resulted in misleading comparisons. Of the students who chose to engage with the supplementary platform, 635 (46% of RiPPLE users, or 30% of the total student numbers) logged into the platform but then did not create or answer questions. We refer to this group as disengaged in later analyses. The remaining 744 students (54% of RiPPLE users, or 35% of the total class cohort) were designated the engaged group. Note that this group was self-selected on the basis of student behaviour. Within the engaged group, the majority of users (n = 687, 92%) chose to engage with the platform through answering questions and providing ratings, rather than creating materials to share.
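In principle, the grouping into authors, responders, and disengaged users described above can be derived from the platform's usage extract along the following lines. This is a hypothetical sketch; the column names, thresholds, and toy data are assumptions, not the study's actual extraction scripts.

```python
import pandas as pd

# Hypothetical per-student usage extract; values are toy data for illustration.
usage = pd.DataFrame({
    "student_id":         ["anon_01", "anon_02", "anon_03", "anon_04"],
    "logins":             [5, 1, 12, 3],
    "resources_created":  [2, 0, 0, 0],
    "resources_answered": [40, 0, 85, 17],
})

def classify(row: pd.Series) -> str:
    if row["resources_created"] > 0:
        return "author"          # created at least one learning resource
    if row["resources_answered"] > 0:
        return "responder"       # answered or rated peers' resources only
    if row["logins"] > 0:
        return "disengaged"      # logged in but neither created nor answered
    return "non-participant"

usage["group"] = usage.apply(classify, axis=1)
print(usage["group"].value_counts())
```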
A total of 57 of the 744 engaged students (8%) authored a total of 566 questions, indicating a high level of engagement within that select cohort. This may indicate that students interested in becoming co-creators of learning content constitute only a subgroup of the total student cohort. However, given that 54% of RiPPLE users still engaged by answering questions, it is possible that having fewer lead users can still support wider engagement. Note that within the author category, four students created a substantial number of learning resources (130, 90, 47, and 39 resources respectively), constituting over half of all submitted content (54%).

As can be seen in Table 1, engagement in the platform was often linked to key semester dates, with the number of resources answered by students often corresponding to upcoming assessments. For example, in Week 1 just 816 resources were answered by students, compared to 58,703 in the week leading up to the final examination.

Table 1
Breakdown of student engagement (resource creation and answers) by week, compared to 2018 online engagement answering practice tests

Week | Practice test questions answered (2018) | RiPPLE resources answered (2019) | RiPPLE resources created (2019) | Assessments
1 | 1,036 | 816 | 16 |
2 | 1,954 | 3,125 | 79 |
3 | 2,010 | 2,161 | 40 |
4 | 2,016 | 3,141 | 54 | Individual online assessment 20%
5 | 1,478 | 2,110 | 48 |
6 | 864 | 2,400 | 40 |
7 | 2,191 | 2,049 | 41 |
8 | 1,597 | 1,629 | 12 | Team report 15%
9 | 1,998 | 581 | 17 |
Mid-semester break | 654 | 747 | 51 |
10 | 2,214 | 2,204 | 41 |
11 | 1,281 | 5,284 | 53 |
12 | 818 | 13,966 | 48 | Team report 15%
Study vacation | 4,624 | 58,703 | 26 |
Post examination | 13 | 0 | 0 | Examination 35%
Total | 24,748 | 98,916 | 566 |

As described earlier, the previous learning design for the subject provided online practice tests. Comparing the number of attempts under the old practice design with the resources co-created by students through RiPPLE, the use of RiPPLE appears to have led to a significant increase in engagement with the subject revision materials. In 2018, the online practice questions yielded just 24,748 attempts by students, compared to 98,916 through the RiPPLE platform in 2019. This was particularly the case from Week 11 into the examination period, when resource usage increased approximately 11.5-fold, but even during Weeks 1 to 10 of semester, resource usage was approximately 15% higher each week on average in 2019 than in 2018.

The team also used end-of-semester student feedback surveys as a proxy to understand whether student satisfaction was affected by the introduction of the platform. While new technology can often meet resistance (see Porter et al., 2016), we found no significant negative impacts from introducing the RiPPLE platform between the 2018 and 2019 cohorts (Table 2). However, future evaluation would also need to include a specific question about the RiPPLE platform to better ascertain student satisfaction with the platform.
Table 2
HBS1HBB university-run student feedback survey (rating out of 5)

Student feedback question | 2018 | 2019
The learning outcomes of the subject were made clear to me | 4.20 | 4.18
The subject enabled me to achieve the learning outcomes | 4.08 | 4.05
I found the subject to be intellectually stimulating | 4.28 | 4.28
I found the resources provided for the unit to be helpful | 3.81 | 3.88
I received constructive feedback on my work | 3.79 | 3.87
The feedback I received was provided in time to help me improve | 3.76 | 3.90
The overall amount of work required of me for this subject was appropriate | 3.79 | 3.74
Overall I was satisfied with the quality of this subject | 3.92 | 4.07
Subject overall (all questions) | 3.95 | 4.00

Student feedback following the implementation of the RiPPLE platform in 2019 showed that ratings for the resources provided, satisfaction with feedback received and its timeliness, and overall satisfaction with the subject all improved. In 2018, 428 students responded (26.9% response rate) and in 2019, 428 students also responded (27.1% response rate). Note that these were sub-samples of the total student cohorts. Due to the anonymous nature of student satisfaction surveys, it was difficult to assess how many survey participants had engaged with RiPPLE during the semester. It is possible that highly motivated or engaged students were more likely to engage with both RiPPLE and the student satisfaction survey, leading to the relative enrichment of this group in the survey results.

Research question 2: Impact on students' academic performance

Our second research question centred on whether greater usage of the platform, and the type of usage (authoring or responding to questions), improved students' academic achievement and/or supported deeper learning. To attempt to answer this question, we evaluated the same students across two similar subjects. HBS1HBA is a first-year subject with a similar learning design to HBS1HBB (particularly the 2018 version of HBS1HBB). Its content covers introductory human physiology rather than human anatomy, and it is taken by students in Semester 1. Almost all students (82 ± 3.1%) who have studied HBS1HBA continue to HBS1HBB in Semester 2. We were therefore able to use the Semester 1 physiology subject as a negative control when comparing total cohort results in 2018 and 2019 to identify the academic impact of RiPPLE in HBS1HBB. When we compared the results of students who studied both subjects in 2018 with students who studied both subjects in 2019 (with the introduction of the RiPPLE platform), we found that:
(1) students performed better in HBS1HBB relative to HBS1HBA, independent of the year (p < 0.0001);
(2) no significant increase in average grade was observed in HBS1HBA between the 2 years;
(3) in HBS1HBB, however, mean student marks for the total cohort significantly increased by 2.5 percentage points (p < 0.01) in 2019 compared to 2018; and
(4) there was no significant difference in ATAR between the 2018 (69.8 ± 0.41) and 2019 (70.9 ± 0.48) student cohorts, indicating that there were no significant differences in academic preparation between the 2 years.

This suggests that the increase in average grade was most likely due to the introduction of RiPPLE. These results are displayed in Figure 3. Note that the 2.5 percentage point increase in grades observed was an average taken from the total cohort (i.e., including the non-participating and disengaged user groups as well as the group that engaged with RiPPLE).
Comparing only the RiPPLE users with the total 2018 cohort was considered methodologically unsound because it would introduce further potentially confounding variables; however, if a higher proportion of students had engaged with the RiPPLE platform, a larger average increase in grades might have been observed.

Figure 3. Comparison of marks for students completing two first-year subjects (± SEM)

To explore how usage of RiPPLE interacted with students' academic performance, we compared the number of resources answered with students' final grades (Figure 4). We found a positive exponential correlation between resources answered and final grade (A: 122.8 ± 7.2; B: 56.4 ± 5.4; C: 27.7 ± 4.0; D: 15.4 ± 4.2; N: 5.7 ± 3.4). The gradient of the natural logarithm of resources answered against the grade distribution was 0.701.

Figure 4. Overall student marks compared to resources answered in RiPPLE

We also explored how RiPPLE usage related to students' ATAR. As described in the introduction, higher academic preparation may indicate that students have higher intrinsic motivation or interest in the subject material, or better assessment literacy. It is well established that measures of academic preparation correlate with academic outcomes (Schneider & Preckel, 2017), and we therefore wanted to investigate the relationships between ATAR, resource usage, and academic outcomes. As expected, students who answered more questions through RiPPLE were also more likely to have a higher ATAR than disengaged users. To illustrate (see Table 3), students with an ATAR of 80 or above answered over 100 resources during the semester, compared to those in the 70-79 (a: p < 0.001), 60-69 (b: p < 0.0001), and 50-59 (c: p < 0.0001) ranges. Further, students with a higher ATAR typically had a higher final mark as well. As also seen in Table 3, students with an ATAR of 80+ had a significantly higher final mark relative to those in the 70-79, 60-69, and 50-59 ranges (d, e, f: p < 0.0001 for each).

Table 3
Student ATAR compared to resources answered and final marks (± SEM)

ATAR range | Resources answered | Final mark (%)
80+ | 101.2 ± 7.8 a,b,c | 82.3 ± 0.5 d,e,f
70-79 | 62.1 ± 7.3 a | 74.4 ± 0.7 d
60-69 | 46.2 ± 6.8 b | 68.2 ± 0.9 e
50-59 | 37.6 ± 5.5 c | 65.8 ± 1.0 f

However, interaction with RiPPLE improved student marks independently of other variables. Although the number of resources answered also increased exponentially with ATAR (similar to the relationship between resources answered and subject grades shown in Figure 4), the relationship was not as strong: the gradient of the natural logarithm of resources answered against the ATAR distribution was 0.323. This suggests that RiPPLE usage had an additional effect on grades, beyond that predicted by student ATAR.

Finally, we investigated the impact of different types of engagement with RiPPLE on academic performance. To do so, we compared the average ATAR and marks achieved by the author, responder, and disengaged user groups (Table 4).

Table 4
HBS1HBB ATAR and mark distribution across user types (± SEM)

Measure | Author | Responder | Disengaged
ATAR (0-100) | 79.4 ± 2.3 a | 73.4 ± 0.6 b | 68.7 ± 0.7 a,b
Mark (%) | 86.0 ± 1.8 c,d | 77.9 ± 0.4 c,e | 68.7 ± 0.6 d,e

Authors had a significantly higher ATAR than disengaged users (a: p < 0.001) but not than responders. This may be due to the relatively small number of students in the author group.
Post-hoc power analysis indicated β = 0.71 for this comparison; the author group would need to contain 70 people, and the total cohort would need to be expanded to 2366, for this difference to reach statistical significance. Responders also had a significantly higher ATAR than disengaged users (b: p < 0.0001). Authors had significantly higher marks than responders (c: p < 0.001) and the disengaged (d: p < 0.0001), and responders performed significantly better than the disengaged (e: p < 0.0001). Authors obtained bigger gains in academic performance relative to their ATAR than responders, who in turn benefited more than disengaged users. This indicates that more active types of engagement benefited students more.

Research question 3: Student cohort differences

Our final research question sought to explore how the platform was used across locations. The study took place at a multi-campus university with two metropolitan campuses and four regional locations. While there was at least some engagement from students across all campuses, the platform was especially popular with metropolitan students (mean resources answered: 71.6 ± 3.8 SEM for metropolitan versus 50.0 ± 6.6 SEM for regional students; p < 0.01) (Figure 5).

Figure 5. Metropolitan student versus regional student RiPPLE usage (± SEM)

Our results indicated that metropolitan students were more likely to use the platform. This may have been due to differences in promotion across the multi-campus teaching team, as well as regional students having issues with internet connectivity. In a study by Dollinger, D'Angelo et al. (2020), regional Victorian students expressed concerns about internet connection in their communities. Additionally, students at regional campuses on average have lower ATARs than metropolitan students, due to the correlation between ATAR and socioeconomic status (Cardak & Ryan, 2009). The relationship between resource usage and ATAR described above may therefore also have contributed to lower resource usage among regional students.

Discussion

Previous research has identified many potential benefits of co-creation of curriculum between students and academic staff, including reduced power disparities, deeper engagement, deep learning, increased motivation to study, better academic literacy, and more developed peer communities (Bergmark & Westman, 2016; Cook-Sather, 2014; Deeley & Bovill, 2017; Khosravi et al., 2021; Rust et al., 2003; Shibani et al., 2020). In this study, we investigated the interaction between types of student engagement with a platform for co-creation of learning resources, and student satisfaction and academic performance. Our results indicated that both student engagement and academic performance were positively impacted for users by the introduction of the RiPPLE platform, but that academic preparedness, in the form of students' ATAR scores, strongly influenced their engagement and the benefits received.

Student engagement with co-created learning materials

Student engagement with the RiPPLE platform was clearly complex, involving a wide range of individual reactions moderated by contextual factors, including time during semester, and individual factors, including ATAR and (we hypothesise) confidence and academic self-efficacy. The student cohort broke down roughly into thirds. Thirty-four percent either chose not to engage with RiPPLE at all, or not to participate in this study. A further 30% logged on to the platform at least once during semester but did not answer any questions.
Without more detailed qualitative analysis, it is difficult to assess the motivations of these disengaged users. Their potential involvement with RiPPLE may have ranged from a lack of interest after they had seen the platform, to intimidation or dissatisfaction with the interface, to exclusion from the learning community created, to a deeper engagement in which questions were viewed as a study or confidence aid but not directly answered. Time of usage was also an important factor. Students who logged on to RiPPLE in Week 1 of semester may have seen only 16 or fewer resources on the platform (Table 1). In this case, it is perhaps easier to understand students disengaging on their first experience with the platform. However, as Table 1 also shows, student usage of co-created resources increased dramatically at the end of semester in preparation for final exams. Disengaged users who logged on during this time may have had 566 resources to learn from, a much richer repository, and a context in which it makes more sense to speculate that students may find value in the platform even if they did not directly answer questions. For that reason, we designated this group disengaged users rather than non-participants, since a group of this size was likely to contain a multitude of motivations and perspectives.

The final groups, those who actively engaged with the platform, comprised 36% of the total cohort. Three percent, the authors, created learning resources, while 33% responded to questions written by peers but did not create resources themselves. Authors and responders appeared to find the co-created resources more engaging than those written by academic staff, as demonstrated by the 15% increase in resources used during Weeks 1 to 10 between the 2018 (academic-written) and 2019 (student co-created) cohorts. This difference was dramatically amplified in the immediate lead-up to the exam period, when the 2019 students had 11.5 times the number of interactions seen during the same period in the previous year. This demonstrates the high value, particularly for just-in-time learning, that students found in the RiPPLE platform and the co-created resources. Again, the ongoing development of new resources by their peers may have contributed to this. Even in the week immediately prior to the exam, students were still creating new resources, meaning that users were rewarded with novel questions as well as familiar study aids, which would not otherwise have been possible had all learning resources been developed by academic staff.

The relatively small subgroup of student authors or co-creators presented here may be disheartening for academics interested in scalable, peer-learning approaches to online learning. However, it is consistent with other online or technology-enabled research finding that the majority of users only read and observe in online spaces, rather than contribute new content. In a study of online crowdsourcing behaviour, Stewart et al. (2010) build on a phenomenon known as participation inequality to discuss the 90-9-1 rule, "…where (a) 90% of users are 'lurkers' (i.e., they read or observe, but don't contribute), (b) 9% of users contribute from time to time, but other priorities dominate their time, (c) 1% of users participate very often and account for most contributions" (p. 30).
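As a quick check, the cohort split described above can be recovered directly from the group counts reported in the results. The short sketch below uses only figures reported in this paper.

```python
# Group counts reported in the results section (total enrolment 2095, 1379 logins).
cohort = 2095
groups = {
    "non-participants (no login or opted out)": cohort - 1379,  # 716 students
    "disengaged (logged in only)": 635,
    "responders (answered or rated only)": 687,
    "authors (created resources)": 57,
}

for name, count in groups.items():
    print(f"{name}: {count} students ({count / cohort:.0%} of cohort)")
# Prints roughly 34%, 30%, 33%, and 3%, matching the breakdown described above.
```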
While further research is needed in education-specific spaces, this may signify that the motivation to co-create new content is limited to particular student subgroups or is shaped by educational and/or learning and teaching strategies. Future research could explore specifically the motivations of these users, as well as how specific teaching strategies may encourage usage.

Student engagement and academic performance

Overall, we found evidence that introducing RiPPLE led to an increase in academic performance, and that more active involvement (authoring rather than responding) led to greater improvement. However, the role of academic preparation complicates this analysis somewhat. As the total cohort analysis indicates, the 2019 cohort demonstrated an average increase of 2.5 percentage points compared to the 2018 cohort. This increase was not seen in the average grades observed in the subject used as a negative control, and there was no difference observed in average ATAR between the years. Note that this increase was in the average obtained by the total cohort, rather than just RiPPLE users. Because of the interaction between RiPPLE usage and ATAR, and the well-observed interaction between academic preparation and academic performance (Schneider & Preckel, 2017), it was not methodologically valid to compare the 2018 cohort's performance with RiPPLE users only. Further, RiPPLE users were perhaps more motivated or intrinsically interested in the subject matter than others. Even within ATAR bands, subject-level motivation has been shown to be an important factor in achievement and retention, leading to a third of students overperforming or underperforming relative to their grade expectations regardless of academic preparation (Baik et al., 2015; Naylor et al., 2018; Schneider & Preckel, 2017). For these reasons, we decided to compare average grades from the total cohorts. Taking these factors into account, the impact of using RiPPLE may have been higher than this 2.5 percentage point increase indicates, given that only around a third of the cohort actively engaged with the platform.

Given that the introduction of student co-creation appeared to have a positive impact on academic performance, the limited uptake of the platform (with only around a third of students actively using it) raises questions about how it can be better promoted to other students. To promote usage (particularly authoring questions) in this cohort, students were told that several questions from the student-created resources would be used on the final exam, and that if their submitted question was chosen for the exam, they would receive a 2% bonus on their exam mark. It is possible that this bonus was not enough to overcome students' reticence or fear of loss of face, or that students did not believe the bonus marks were realistically attainable (a key aspect of goal-setting theory [Locke & Latham, 2002]). More consideration is likely needed of how to encourage less competitive and lower-achieving students to engage with the platform. One mechanism could be requiring students to create at least one question during the semester, and perhaps to rate or provide feedback on (and thereby answer) at least two of their peers' questions, although imposing extrinsic constraints such as this may undermine the impact of engaging with the technology.
Unfortunately, student motivation and conceptions of success remain very diverse (Naylor et al., 2016), particularly in large cohorts such as this. While attempts to build intrinsic motivation to use the platform will still likely leave some students who engage only minimally, exposure and encouragement to use the platform may lead to greater engagement.

A mediating role for academic preparation?

As described above, academic preparation, in this case indicated by a student's ATAR score (similar to high school grade point average in other national contexts), is closely related to academic performance (Schneider & Preckel, 2017). Because students with higher ATAR scores are more likely to receive university offers in line with their first preferences, it is also linked to intrinsic motivation and interest in the subject matter. Under these circumstances, it was difficult to separate the impact of RiPPLE as an educational technology from academic preparation, and hence from other factors such as academic self-efficacy, assessment literacy, and intrinsic motivation.

It is clear that resource usage (and indeed, resource creation) increased with ATAR, and that academic performance increased with both ATAR and resource usage; in both cases the relationship was exponential. Although there may have been other factors involved, such as internet access or differences in promotion, this may account for the lower resource usage seen among regional students compared to metropolitan students, since regional students on average have lower ATAR scores than urban students. To attempt to disentangle the relationships between preparation, performance, and resource usage, we calculated the natural logarithm of resource usage and compared the resulting gradients. The gradient of the relationship between resource usage and subject grades was over twice that with ATAR, indicating that using the student co-created resources had a positive relationship with performance beyond that expected from the interaction between ATAR and performance. That is, within each ATAR band, students who engaged with RiPPLE were more likely to do well than students who did not.

On a similar note, authors tended to have a slightly higher ATAR than responders (although, perhaps because of low statistical power, this difference was not statistically significant), who in turn had a higher ATAR than disengaged users. Authors also appeared to obtain higher performance outcomes than responders, and both achieved more than disengaged users, even after accounting for ATAR. Therefore, more active engagement with RiPPLE, by creating resources rather than just responding to them, appeared more beneficial.

A limitation of this study was that we were unable to establish causal relationships between these factors, although there were strong theoretical reasons to believe that engagement with more learning resources should have led to better academic performance if those resources were well aligned. However, we cannot exclude the possibility that higher-achieving students had better academic self-efficacy or assessment literacy, which may have influenced their willingness to access the resources in the first place or the benefits they gained from using them. Certainly, it is not surprising that students who were more academically prepared, and therefore more likely to be confident in their academic abilities and performance, were more likely than others to author resources.
It is likely that these factors all positively reinforced each other. Although more must be done to ensure all students can benefit from student co-creation platforms such as RiPPLE, one benefit of the platform may be the added capability for gifted or deeply engaged students to challenge themselves and/or showcase their knowledge. For less prepared cohorts, it may be necessary to build confidence and self-efficacy as a precursor to engagement with these platforms.

Another limitation of the study was the limited ability to control for other factors that may have affected our results, especially in regard to our comparisons between the 2018 and 2019 cohorts. However, the teaching team made no other intentional changes to the subject's learning design (e.g., instructional materials, assessment), nor was there any change in university policies affecting admissions or assessment.

Implications and recommendations

While this study focussed on the implementation of student-led assessment in a large cohort of first-year anatomy students at La Trobe University, these findings have implications for all subjects with a significant online presence, or those transitioning to an online delivery model. In particular, our analysis, which highlighted the positive outcomes for students, is an encouraging finding on the importance of supporting digital peer-to-peer learning technologies that allow students to create, share, and provide feedback on study resources. Interaction with RiPPLE improved student grades independently of all other factors, including ATAR, in an exponential fashion. This may indicate a greater understanding of the subject material and an enhanced readiness to undertake second-year studies. Improved grades should be a goal for all students; in the health sciences at La Trobe University, students' weighted average mark (WAM), similar to a GPA, is used to determine admission into highly competitive clinical and Graduate Entry by Masters (GEMs) degrees for registration as a practising clinician. Improved marks, even at first-year level, will therefore lead to more students achieving their goal of becoming a clinician. As such, we make the following recommendations when implementing platforms that position students as co-creators of curricula:
1. Encourage students to use the platform as often as possible.
2. Show students data on how engagement with the platform improves student outcomes.
3. Provide incentives for using the platform, in the form of extra credit or other prizes.
4. Use the platform in every class and lecture.

Our study also points to implications for staff in regard to their workload. The implementation of the platform caused minimal extra work for the staff, with the bulk of responsibilities relating to the promotion of the platform during lectures and loading sample questions to guide students' contributions. We also hypothesise that workload will decrease in subsequent teaching periods, as the platform can be rolled over from semester to semester. However, some teacher moderation may be necessary to ensure students' uploaded resources are correct.

Future research

Given the success of the implementation of RiPPLE, the platform is currently being implemented in other subjects across several year levels; user surveys indicated that only 20.5% of students were opposed to the use of RiPPLE in their other subjects (data not shown).
Given the significant improvement in student outcomes associated with engagement with the platform, more students need to be encouraged to move from the disengaged category to the engaged category, while engaged students are simultaneously encouraged to become authors. One mechanism to improve participation in the platform is to link activity to assessment, which the authors will explore in future research. There is also a need to continue to explore users' behavioural profiles, to better understand which user subgroups exist and how to nuance support across all groups. Finally, given that online and blended learning environments have negatively impacted academic workloads, especially during COVID-19 restrictions and lockdowns, resources such as RiPPLE can significantly improve student outcomes while decreasing staff workloads.

References

Abdi, S., Khosravi, H., & Sadiq, S. (2020). Modelling learners in crowdsourcing educational systems. In I. Bittencourt, M. Cukurova, K. Muldner, R. Luckin, & E. Millán (Eds.), Artificial intelligence in education. Lecture notes in computer science (Vol. 12164, pp. 3-9). Springer. https://doi.org/10.1007/978-3-030-52240-7_1
Abdi, S., Khosravi, H., Sadiq, S., & Demartini, G. (2021). Evaluating the quality of learning resources: A learner sourcing approach. IEEE Transactions on Learning Technologies, 14(1), 81-92. https://doi.org/10.1109/tlt.2021.3058644
Abdi, S., Khosravi, H., Sadiq, S., & Gasevic, D. (2019). A multivariate Elo-based learner model for adaptive educational systems. In Proceedings of the Educational Data Mining Conference (pp. 462-467). Montréal. arXiv:1910.12581
Axelson, R. D., & Flick, A. (2010). Defining student engagement. Change: The Magazine of Higher Learning, 43(1), 38-43. https://doi.org/10.1080/00091383.2011.533096
Baik, C., Naylor, R., & Arkoudis, S. (2015). The first year experience in Australian universities: Findings from two decades, 1994-2014. Melbourne Centre for the Study of Higher Education.
Bergmark, U., & Westman, S. (2016). Co-creating curriculum in higher education: Promoting democratic values and a multidimensional view on learning. International Journal for Academic Development, 21(1), 28-40. https://doi.org/10.1080/1360144X.2015.1120734
Blau, I., & Shamir-Inbal, T. (2017). Re-designed flipped learning model in an academic course: The role of co-creation and co-regulation. Computers & Education, 115, 69-81. https://doi.org/10.1016/j.compedu.2017.07.014
Bovill, C. (2014). An investigation of co-created curricula within higher education in the UK, Ireland and the USA. Innovations in Education and Teaching International, 51(1), 15-25. https://doi.org/10.1080/14703297.2013.770264
Bovill, C., & Woolmer, C. (2019). How conceptualisations of curriculum in higher education influence student-staff co-creation in and of the curriculum. Higher Education, 78(3), 407-422. https://doi.org/10.1007/s10734-018-0349-8
Boud, D., Cohen, R., & Sampson, J. (Eds.). (2014). Peer learning in higher education: Learning from and with each other. Taylor & Francis. https://doi.org/10.4324/9781315042565
Calacci, D., Lederman, O., Shrier, D., & Pentland, A. S. (2016). Breakout: An open measurement and intervention tool for distributed peer learning groups. arXiv preprint arXiv:1607.01443
Cardak, B. A., & Ryan, C. (2009). Participation in higher education in Australia: Equity and access. Economic Record, 85(271), 433-448. https://doi.org/10.1111/j.1475-4932.2009.00570.x
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. https://doi.org/10.1080/03075071003642449
Chiu, T. K., & Hew, T. K. (2018). Factors influencing peer learning and performance in MOOC asynchronous online discussion forum. Australasian Journal of Educational Technology, 34(4). https://doi.org/10.14742/ajet.3240
Cook-Sather, A. (2014). Multiplying perspectives and improving practice: What can happen when undergraduate students collaborate with college faculty to explore teaching and learning. Instructional Science, 42(1), 31-46. https://doi.org/10.1007/s11251-013-9292-3
Coombe, L., Huang, J., Russell, S., Sheppard, K., & Khosravi, H. (2018). Students as partners in action: Evaluating a university-wide initiative. International Journal for Students as Partners, 2(2), 85-95. https://doi.org/10.15173/ijsap.v2i2.3576
Craft, J. A. (2019). Academic performance of low SES students at an Australian university satellite campus. Studies in Higher Education, 44(8), 1372-1385. https://doi.org/10.1080/03075079.2018.1440382
Davey, B., Elliott, K., & Bora, M. (2019). Negotiating pedagogical challenges in the shift from face-to-face to fully online learning: A case study of collaborative design solutions by learning designers and subject matter experts. Journal of University Teaching and Learning Practice, 16(1), 3. https://eric.ed.gov/?id=EJ1213950
Dawson, S. (2006). A study of the relationship between student communication interaction and sense of community. The Internet and Higher Education, 9(3), 153-162. https://doi.org/10.1016/j.iheduc.2006.06.007
Deeley, S. J., & Bovill, C. (2017). Staff student partnership in assessment: Enhancing assessment literacy through democratic practices. Assessment & Evaluation in Higher Education, 42(3), 463-477. https://doi.org/10.1080/02602938.2015.1126551
Dollinger, M., Cox, S., Eaton, R., Vanderlelie, J., & Ridsdale, S. (2020). Investigating the usage and perceptions of third-party online learning support services for diverse students. Journal of Interactive Media in Education, (1), 14. https://doi.org/10.5334/jime.555
Dollinger, M., D'Angelo, B., Naylor, R., Harvey, A., & Mahat, M. (2020). Participatory design for community-based research: A study on regional student higher education pathways. The Australian Educational Researcher, 1-17. https://doi.org/10.1007/s13384-020-00417-5
Dollinger, M., & Lodge, J. (2019). Student-staff co-creation in higher education: An evidence-informed model to support future design and implementation. Journal of Higher Education Policy and Management, 42(5), 532-546. https://doi.org/10.1080/1360080X.2019.1663681
Dollinger, M., Lodge, J., & Coates, H. (2018). Co-creation in higher education: Towards a conceptual model. Journal of Marketing for Higher Education, 28(2), 210-231. https://doi.org/10.1080/08841241.2018.1466756
Dollinger, M., Liu, D., Arthars, N., & Lodge, J. (2019). Working together in learning analytics towards the co-creation of value. Journal of Learning Analytics, 6(2), 10-26. https://doi.org/10.18608/jla.2019.62.2
Dumford, A. D., & Miller, A. L. (2018). Online learning in higher education: Exploring advantages and disadvantages for engagement. Journal of Computing in Higher Education, 30(3), 452-465. https://doi.org/10.1007/s12528-018-9179-z
Ellis, R. A., & Bliuc, A. M. (2016). An exploration into first-year university students' approaches to inquiry and online learning technologies in blended environments. British Journal of Educational Technology, 47(5), 970-980. https://doi.org/10.1111/bjet.12385
Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76. https://doi.org/10.1016/j.stueduc.2017.03.003
Fujita, M., Harrigan, P., & Soutar, G. (2017). A netnography of a university's social media brand community: Exploring collaborative co-creation tactics. Journal of Global Scholars of Marketing Science, 27(2), 148-164. https://doi.org/10.1080/21639159.2017.1283798
George, G., & Lal, A. M. (2019). Review of ontology-based recommender systems in e-learning. Computers & Education, 142, 103642. https://doi.org/10.1016/j.compedu.2019.103642
Gillett-Swan, J. (2017). The challenges of online learning: Supporting and engaging the isolated learner. Journal of Learning Design, 10(1), 20-30. https://eprints.qut.edu.au/102750/
Hanna, D. E. (1998). Higher education in an era of digital competition: Emerging organizational models. Journal of Asynchronous Learning Networks, 2(1), 66-95. https://pdfs.semanticscholar.org/57a5/ecaf5aa0ff45f78072c4522475640d09d524.pdf
Helal, S., Li, J., Liu, L., Ebrahimie, E., Dawson, S., & Murray, D. J. (2019). Identifying key factors of student academic performance by subgroup discovery. International Journal of Data Science and Analytics, 7(3), 227-245. https://doi.org/10.1007/s41060-018-0141-y
Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38(5), 758-773. https://doi.org/10.1080/03075079.2011.598505
Khosravi, H., Cooper, K., & Kitto, K. (2017). RiPLE: Recommendation in peer-learning environments based on knowledge gaps and interests. Journal of Educational Data Mining, 9(1), 42-67. https://doi.org/10.5281/zenodo.3554627
Khosravi, H., Demartini, G., Sadiq, S., & Gasevic, D. (2021). Charting the design and analytics agenda of learner sourcing systems. In Proceedings of the 11th International Conference on Learning Analytics and Knowledge, online (pp. 32-42). https://doi.org/10.1145/3448139.3448143
Khosravi, H., Kitto, K., & Williams, J. J. (2019). RiPPLE: A crowdsourced adaptive platform for recommendation of learning activities. Journal of Learning Analytics, 6(3), 91-105. https://doi.org/10.18608/jla.2019.63.12
Knight, S., Gibson, A., & Shibani, A. (2020). Implementing learning analytics for learning impact: Taking tools to task. The Internet and Higher Education, 45, 100729. https://doi.org/10.1016/j.iheduc.2020.100729
Könings, K. D., Seidel, T., & van Merriënboer, J. J. (2014). Participatory design of learning environments: Integrating perspectives of students, teachers, and designers. Instructional Science, 42(1), 1-9. https://doi.org/10.1007/s11251-013-9305-2
Lang, C., Craig, A., & Casey, G. (2017). A pedagogy for outreach activities in ICT: Promoting peer to peer learning, creativity and experimentation. British Journal of Educational Technology, 48(6), 1491-1501. https://doi.org/10.1111/bjet.12501
Lee, K. (2017). Rethinking the accessibility of online higher education: A historical review. The Internet and Higher Education, 33, 15-23. https://doi.org/10.1016/j.iheduc.2017.01.001
Liu, C. C., & Tsai, C. M. (2005). Peer assessment through web-based knowledge acquisition: Tools to support conceptual awareness. Innovations in Education and Teaching International, 42(1), 43-59. https://doi.org/10.1080/14703290500048838
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705-717. https://doi.org/10.1037/0003-066X.57.9.705
Marquis, E., Black, C., & Healey, M. (2017). Responding to the challenges of student-staff partnership: The reflections of participants at an international summer institute. Teaching in Higher Education, 22(6), 720-735. https://doi.org/10.1080/13562517.2017.1289510
Matthews, K. E., Cook-Sather, A., & Healey, M. (2018). Connecting learning, teaching, and research through student-staff partnerships: Toward universities as egalitarian learning communities (pp. 23-29). UCL Press. https://doi.org/10.2307/j.ctt21c4tcm.7
Mercer-Mapstone, L., Dvorakova, S. L., Matthews, K. E., Abbot, S., Cheng, B., Felten, P., Knorr, K., Marquis, E., Shammas, R., & Swaim, K. (2017). A systematic literature review of students as partners in higher education. International Journal for Students as Partners, 1(1), 1-23. https://doi.org/10.15173/ijsap.v1i1.3119
Mihans, R., Long, D., & Felten, P. (2008). Student-faculty collaboration in course design and the scholarship of teaching and learning. International Journal for the Scholarship of Teaching and Learning, 2(2), 1-9. https://digitalcommons.georgiasouthern.edu/ij-sotl/vol2/iss2/
Muir, T., Milthorpe, N., Stone, C., Dyment, J., Freeman, E., & Hopwood, B. (2019). Chronicling engagement: Students' experience of online learning over time. Distance Education, 40(2), 262-277. https://doi.org/10.1080/01587919.2019.1600367
Mulder, R., Pearce, A., & Baik, C. (2014). Peer review in higher education: Student perceptions before and after participation. Active Learning in Higher Education, 15(2), 157–171. https://doi.org/10.1177/1469787414527391
Naylor, R., Baik, C., & Arkoudis, S. (2018). Identifying attrition risk based on the first year experience. Higher Education Research & Development, 37(2), 328-342. https://doi.org/10.1080/07294360.2017.1370438
Naylor, R., Coates, H., & Kelly, P. (2016). From equity to excellence: Reforming Australia's national framework to create new forms of success. In A. Harvey, C. Burnheim, & M. Brett (Eds.), Student equity in Australian higher education. Springer. https://doi.org/10.1007/978-981-10-0315-8_15
Pardo, A., Bartimote, K., Buckingham Shum, S., Dawson, S., Gao, J., Gašević, D., Leichtweis, S., Liu, D., Martínez-Maldonado, R., Mirriahi, N., Moskal, A. C. M., Schulte, J., Siemens, G., & Vigentini, L. (2018). OnTask: Delivering data-informed, personalized learning support actions. Journal of Learning Analytics, 5(3), 235–249. https://doi.org/10.18608/jla.2018.53.15
Pee, L. G. (2019). Enhancing the learning effectiveness of ill-structured problem solving with online co-creation. Studies in Higher Education, 45(11), 1-15. https://doi.org/10.1080/03075079.2019.1609924
Porter, W. W., Graham, C. R., Bodily, R. G., & Sandberg, D. S. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education. The Internet and Higher Education, 28, 17-27. https://doi.org/10.1016/j.iheduc.2015.08.003
Potts, B. A., Khosravi, H., Reidsema, C., Bakharia, A., Belonogoff, M., & Fleming, M. (2018). Reciprocal peer recommendation for learning purposes. Proceedings of the 8th International Conference on Learning Analytics and Knowledge, Sydney, 226–235. https://doi.org/10.1145/3170358.3170400
Robinson, C. C., & Hullinger, H. (2008). New benchmarks in higher education: Student engagement in online learning. Journal of Education for Business, 84(2), 101-109. https://doi.org/10.3200/JOEB.84.2.101-109
Rust, C., Price, M., & O'Donovan, B. (2003). Improving students' learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28(2), 147-164. https://doi.org/10.1080/02602930301671
Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600. https://doi.org/10.1037/bul0000098
Shibani, A., Knight, S., & Shum, S. B. (2020). Educator perspectives on learning analytics in classroom practice. The Internet and Higher Education, 46, 100730. https://doi.org/10.1016/j.iheduc.2020.100730
Stewart, O., Lubensky, D., & Huerta, J. M. (2010, July). Crowdsourcing participation inequality: A SCOUT model for the enterprise domain. Proceedings of the ACM SIGKDD Workshop on Human Computation, 30-33. https://doi.org/10.1145/1837885.1837895
Stone, C. (2017). Opportunity through online learning: Improving student access, participation and success in higher education. The National Centre for Student Equity in Higher Education, Curtin University.
Topping, K. J. (2005). Trends in peer learning. Educational Psychology, 25(6), 631-645. https://doi.org/10.1080/01443410500345172
University Admissions Centre. (2020). Australian Tertiary Admission Rank. https://www.uac.edu.au/future-applicants/atar
Vigentini, L., Liu, D. Y., Arthars, N., & Dollinger, M. (2020). Evaluating the scaling of a LA tool through the lens of the SHEILA framework: A comparison of two cases from tinkerers to institutional adoption. The Internet and Higher Education, 45, 100728. https://doi.org/10.1016/j.iheduc.2020.100728
Wang, W., An, B., & Jiang, Y. (2020). Optimal spot-checking for improving the evaluation quality of crowdsourcing: Application to peer grading systems. IEEE Transactions on Computational Social Systems, 7(4), 940-955. https://doi.org/10.1109/TCSS.2020.2998732
Win, R., & Miller, P. W. (2005). The effects of individual and school factors on university students' academic performance. Australian Economic Review, 38(1), 1-18. https://doi.org/10.1111/j.1467-8462.2005.00349.x
Wise, A. F., & Cui, Y. (2018). Learning communities in the crowd: Characteristics of content related interactions and social relationships in MOOC discussion forums. Computers & Education, 122, 221-242. https://doi.org/10.1016/j.compedu.2018.03.021
Woolmer, C., Sneddon, P., Curry, G., Hill, B., Fehertavi, S., Longbone, C., & Wallace, K. (2016). Student-staff partnership to create an interdisciplinary science skills course in a research-intensive university. International Journal for Academic Development, 21(1), 16–27. https://doi.org/10.1080/1360144X.2015.1113969
Zhao, H., Sullivan, K. P., & Mellenius, I. (2014). Participation, interaction and social presence: An exploratory study of collaboration in online peer review groups. British Journal of Educational Technology, 45(5), 807-819. https://doi.org/10.1111/bjet.12094
Zheng, Y., Li, G., Li, Y., Shan, C., & Cheng, R. (2017). Truth inference in crowdsourcing: Is the problem solved? Proceedings of the Very Large Data Base (VLDB) Endowment, 10(5), 541–552. https://doi.org/10.14778/3055540.3055547
Zheng, Y., & Yano, Y. (2007). A framework of context-awareness support for peer recommendation in the e-learning context. British Journal of Educational Technology, 38(2), 197-210. https://doi.org/10.1111/j.1467-8535.2006.00584.x
Corresponding author: Aaron McDonald, A.McDonald@latrobe.edu.au

Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.

Please cite as: McDonald, A., McGowan, H., Dollinger, M., Naylor, R., & Khosravi, H. (2021). Repositioning students as co-creators of curriculum for online learning resources. Australasian Journal of Educational Technology, 37(6), 102-118. https://doi.org/10.14742/ajet.6735