Journal of Teaching and Learning with Technology, Vol. 11, Special Issue, pp. 3–17. doi: 10.14434/jotlt.v11i1.34594

Teaching Experiences of E-Authentic Assessment: Lessons Learned in Higher Education

Audrey Raynault, Université Laval
Géraldine Heilporn, Université Laval
Alice Mascarenhas, Université de Sherbrooke
Constance Denis, Université de Sherbrooke

Abstract: The realities of the 21st century have led professors and lecturers to renew their learning assessment practices so that they are more adapted to and contextualized in the current professional world. Despite advances in teaching and learning, assessment methods may still deviate from practice in authentic contexts. Although some instructors are already familiar with more authentic assessments, most are accustomed to using exams as standard practices to test students’ achievement of course objectives and essays to prepare students for research or written argumentation. Nevertheless, such typical assessments often lack authenticity and do not develop the full potential of students’ 21st-century learning or literacy skills such as communication, creativity, or working with technologies. The past decade has seen the beginnings of a broader reflection on teaching, learning, and evaluating with technologies, including more authentic assessments. In this reflective essay we present how technologies make it possible to diversify assessment methods, resulting in enhanced authenticity and development of 21st-century learning and literacy skills. Authentic assessment methods with technologies (e.g., recorded video presentations, explanatory interviews with descriptive assessment grids, PechaKucha presentations, blog posts, social media, and e-portfolios) are illustrated with examples from several disciplines. We also explain how proposing a number of methods to students for the same assessment may help answer their various needs and preferences without increasing instructors’ grading load.
Furthermore, we discuss how diversifying assessment methods with technologies often results in a transformation of assessment modalities. Beyond assessments as an evaluation of knowledge and/or skills at a fixed time, authentic assessments with technologies may become continuous or iterative processes with multiple feedback occasions from instructors, thereby combining synchronous interactions and/or discussions with asynchronous reflections to improve students’ involvement and active learning.

Keywords: e-assessment, educational technologies, authentic assessment, higher education, pedagogical alignment

Approximately 2 years before the COVID-19 pandemic, about 40% of instructors had used e-assessment¹ in their practices and half of all students had been evaluated using e-assessment, according to an international survey that mainly took place in Portugal, the United States, the United Kingdom, Canada, Norway, and Australia (Rolim & Isaias, 2019). Typically, e-assessment has relied on multiple-choice questions (Fluck, 2019; Rolim & Isaias, 2019) or other forms of e-exams including case studies, long essays, or computer coding activities (Fluck, 2019), and automatic grading and immediate feedback for students have been cited as important benefits (e.g., M. Brady et al., 2019; Fluck, 2019; Rolim & Isaias, 2019; Stödberg, 2012). However, such typical e-assessment relies on indirect proxy items—efficient and simplistic substitutes—from which instructors think valid inferences can be made about the students’ performance with respect to certain prioritized challenges. Unfortunately, most typical e-assessments, and especially quizzes, are not authentic assessments.

¹ E-assessment is widely defined as the “use of a computer as part of any assessment-related activity” (Jordan, 2013, p. 88).
Indeed, according to Wiggins (1990), authentic assessments should engage students in tasks similar to those in the workplace setting or in everyday life; they are led by the student or a group of students and allow students the freedom to create according to their interests; they lead to an outcome or product resulting from problem solving or cocreation; and they are characterized by the learning processes generated and the mobilization of skills and knowledge, as well as by the development of unique responses. Supported by digital technology, authentic evaluation must allow for latitude in the choice of tool while keeping traces of the process (Wiggins, 1990). In the digital age, teaching, learning, and assessment need to be rethought to align with real-world considerations, while contributing to the development of students’ 21st-century skills, including communication and collaboration, creativity and innovation, and working with technologies (Redecker et al., 2012). Time-limited exams or exams without reference materials may no longer be relevant, especially in distance education. In the case of online training, Siemens (2005) proposed the concept of connectivism, according to which learning takes place through connections between people, between platforms, and between types and levels of knowledge (Chekour et al., 2015). In a world where changes are unpredictable, teaching, learning, and assessment need to be relevant and adapted to the large range of possibilities offered by technologies. As Gulikers et al. (2004) indicated, authenticity is difficult to define but allows for reflection and demonstration of learning. The authors suggested a return to pedagogical alignment according to Biggs (1996), that is, alignment of content, pedagogical methods, and assessments, while taking into consideration the limits of available technologies in each context.
E-assessments must therefore allow for critical, collaborative, and complex content applications, and/or learning in authentic situations with temporal or technological constraints such as those of the job market. According to St-Onge et al. (2022), e-assessments are authentic when students have time to consult and reflect on any information sources they need for the assessment, as they would do in actual professional practice. Performance is no longer prioritized; rather, a combination of process, progress, and production is paramount. Although some shift toward more authentic assessments had already begun prior to the pandemic, the forced transition to online teaching during the pandemic has been a catalyst for deeper reflection on pedagogical and assessment practices. Furthermore, enhancing the authenticity of assessments also reduces the risks of cheating and plagiarism, while better preparing students for professional practice (Sotiriadou et al., 2020). In a digital age, all teachers need to understand that e-assessments go far beyond quizzes with multiple-choice questions for assessing low-level cognitive skills. Wikis, blogs, simulations, and scenarios are only a few examples of e-assessment opportunities in which higher level cognitive skills can be assessed (Appiah, 2018). In the next section, we present five authentic e-assessment methods that we have used with our students, approaches that support the development of 21st-century learning and literacy skills. We also highlight benefits and challenges so that teachers can reflect on implementing these in their own courses.

Diversifying Assessment Methods With Technology

This section illustrates several authentic assessment methods that help students develop 21st-century skills.
These are (1) collaborative exams, (2) recorded video presentations and/or podcasts, (3) PechaKucha presentations, (4) blog posts and social media, and (5) e-portfolios.

Collaborative Exams

Several North American and European universities have been implementing collaborative exams in nursing, science, health or psychosocial science, and engineering programs (Bezerra, 2018). Collaborative exams can be authentic assessments and may be useful in eliciting higher levels of abstraction and deeper understanding of content than other types of exams that promote lower level cognitive skills, use of rote learning strategies, and knowledge retention (Gilley & Clarkston, 2014; Mahoney & Harris-Reeves, 2019). Thus, authentic collaborative exams avoid multiple-choice questions assessing low-level cognitive objectives, as is sometimes the case in two-stage exams, and instead meet the criteria outlined above. In a two-stage exam (Kapitanoff, 2009; Leight et al., 2012; Stearns, 1996; Wieman et al., 2014, as cited in Cozma, 2021, p. 3), the exam begins with a solo attempt and continues with a collaborative test consisting of the same or similar questions as in the individual stage. When authentic, the second test transforms the exam situation into a learning situation that enhances students’ understanding of the exam content through discussions with their peers. Generally, collaborative exams engage the active use of high-level cognitive processes (Krathwohl, 2002), such as cocreation, analysis, or complex problem solving (Dahlström, 2012; Mahoney & Harris-Reeves, 2019). They also reduce assessment anxiety for students (Beilock, 2008; Lusk & Conklin, 2003; Zimbardo et al., 2003), increase the performance and academic results of both struggling and high-achieving learners (Woody et al., 2008), and improve students’ perception of the course and their motivation to study for such exams (Knierim et al., 2015).
However, there are conflicting results regarding knowledge retention after completing collaborative exams. Some studies pointed to an improvement in retention (Cortright et al., 2003), whereas others were unable to uncover any difference between completing the exam alone and working collaboratively (Leight et al., 2012; Sandahl, 2010). No studies that reported negative impacts were identified. Several factors may account for these discrepancies: the characteristics of the intended student population, the content and type of course, the complexity of the concepts covered, the format and conditions of the exam, and the research methodology used. Other studies looked at fully collaborative exams, another type of collaborative assessment (Cozma, 2021; Muir & Tracy, 1999; Zimbardo et al., 2003). As Cozma described (our translation):

in this case, the collaboration is not intended to provide feedback following a traditional examination, but to radically transform the design of the examination itself. This type of examination moves away from the idea that test results are able to account for the merit and knowledge of the students and seeks to develop a particular stance, involving the sharing of knowledge, the negotiation of ideas, the justification of beliefs, and a relationship of mutual aid rather than competition. (Cozma, 2021, p. 3)

The first, individual stage requires students to turn in an individual exam paper at the end of the allotted time. The collaborative stage then begins, with a reduction in the number of questions to allow time for discussion. It concludes with students handing in either an individual or a group paper. The success of the collaborative stage may be dependent on the group composition, especially if there are dominant students (Zipp, 2017).
Zipp concluded that collaboration benefits only weaker students, but studies by Gilley and Clarkston (2014) and Leight et al. (2012) found higher team scores on the collaborative exams than those achieved individually by each team member. In addition, Dahlström (2012) found that weaker students increased their ability to produce high-level cognitive responses (Biggs, 1996), whereas stronger students’ scores remained similar. However, the strongest students performed better with new questions asked on the collaborative exams than with questions repeated in the two stages (individual and collaborative). This suggests that pooling knowledge provides better overall content understanding, as demonstrated in studies by Bezerra (2018) and Mahoney and Harris-Reeves (2019). In a software engineering course in a Montreal engineering university, a mix of both types of collaborative exams (two-stage and fully collaborative) was conducted online in a teaching and learning platform, in three stages. This experience highlights the positive relationship between digital technology and the mobilization of collaboration during a learning assessment situation, as the studies we had previously identified on collaborative exams did not take place in a digital context. To complete these collaborative exams, the students first carried out a preparatory stage (Stage 1): 1 month before the exam, they created teams of four or five students and prepared independently, in synchronous and asynchronous modes, in light of the teacher’s learning objective grid. On the day of the collaborative exam, they first took an individual exam in the form of open questions on the Moodle platform (Stage 2). In the following hour, the students gathered in teams to carry out the collaborative phase (Stage 3) according to a prescribed schedule, in synchronous mode by videoconference in their respective channels on the MS Teams platform. Results show that Stages 1 and 3 were complementary.
Thus, the teams developed a high level of collaborative performance, which resulted in a high level of general performance and good results at the collaborative stage. The students also testified to having improved the quality of their collaboration (communication, synchronization, and coordination; Chiocchio et al., 2012; Raynault et al., 2020) between the preparatory phase (1 month before the collaborative exams) and the collaborative stage. The students mentioned on many occasions that they did not need to talk to each other to move forward and take risks during the collaborative stage and that they trusted each other thanks to the group cohesion developed during the preparatory stage. Finally, according to the teammates, digital literacy enabled them to carry out all stages of the collaborative exam system while developing collective knowledge and an understanding of the dimensions of collaboration, learning how to form an expert team of engineers in an authentic context.

Recorded Video Presentations and/or Podcasts

Recorded video presentations represent a simple way to implement authentic e-assessment in higher education, while developing students’ communication, collaboration, creativity, and digital literacy skills, as well as their ability to work with technologies. Students are provided with one or several themes to explore and, when deemed necessary, a starting list of pedagogical resources (e.g., professional or scientific publications, links to web content), along with a detailed list of instructions on the goal and expected content of the recorded video presentation. Of course, the more realistic the proposed problem or context, the more authentic the recorded video presentation is for the student (e.g., a presentation on how an online course unfolds and what is expected in an educational technology course, a presentation of operations management to the CEO in a business course).
Then, instead of the teacher presenting the content to them, students explore and present the content on their own. They are actively engaged in the process of constructing knowledge, which involves searching for and critically analyzing information, synthesizing content, and creating ways to present it to their peers and the instructor. Although very short videos (under 4–5 minutes) may be produced in individual assessments, students and instructors will usually benefit from working collaboratively on longer videos. Collaborative assessments reduce the workload for both students and the instructor, in addition to fostering the development of collaboration skills that are now essential in the professional world. A challenge in such e-assessments is to ensure that students watch the videos produced by other teams, since they often cover complementary themes and content. To this end, instructors may plan peer feedback between teams or a subsequent learning activity necessitating that students watch several videos and then answer related questions. Where possible, implementing peer feedback accompanied by a detailed evaluation grid encourages students to watch the videos with a critical and objective eye (e.g., providing audio or video feedback to their peers using the instructor’s assessment grid, along with improvement suggestions for future work). Another challenge concerns students’ technological skills, or lack thereof, for producing video content (Belt & Lowenthal, 2021). Therefore, instructors must be aware that some students might need technological support along the way, or at least tips on how to produce a video of satisfactory quality.
(Readers interested in implementing recorded video presentations in their courses are encouraged to consult He and Huang, 2020, or Belt and Lowenthal, 2021, for a more general synthesis about video use in teaching, learning, and assessment.) Podcasts provide a variation on recorded presentations that can be even more authentic, particularly in communication or language courses. First, students could be invited to explore and discuss existing podcasts in teams, thus combining sociocultural information, oral comprehension, and discussion. The instructor can draw attention to a number of potential interests in a language course, among them different accents for the same language, speed of speech, vocabulary used (informal or formal), slang words, and local cultural traits, such as commemorative festivals, cuisine, or politics. The creation of podcasts also allows students to develop communication, collaboration, and organization skills. Whether individually or in teams, this initiative requires a personal commitment from the students (Catterall & Davis, 2013). Podcast creation projects go through several stages: choice of topic (preferably chosen by students themselves), research on the topic, construction of a terminology grid about the subject, and planning of the podcast (number of episodes, subject and duration of each episode), during which students must follow the instructions and constraints associated with the activity. In distance language courses, students practice oral expression by interacting with each other about the podcast creation (Catterall & Davis, 2013), deepen a specific vocabulary according to the chosen topic, and reinforce previously acquired knowledge. Finally, podcasts published on the same online platform may lead to subsequent peer-review and/or discussion activities. However, one must not forget the importance of considering ethical issues (Capelle, 2018) related to the creation of videos or podcasts in an educational context.
Whether the podcast is created on a voluntary basis or as a mandatory course activity, students should be aware of netiquette so that no one ever feels uncomfortable or threatened during participation. The students must not fear that their productions will be used maliciously by others (Capelle, 2018). From a professional perspective, students can also discuss the benefits of publishing their videos and podcasts to build and manage their digital identity and enrich their e-portfolio (Capelle, 2018; Ollier-Malaterre, 2018). Therefore, potential public distribution of videos or podcasts should be decided by the students. Otherwise, it is preferable to use a closed and secure platform, accessible only to members of the same class, and to discuss with students the netiquette to follow (Bates, 2015).

PechaKucha Presentations

The PechaKucha format represents another interesting alternative to traditional video or audio presentations. This storytelling format uses a maximum of 20 slides of 20 seconds each, for a total of 6 minutes and 40 seconds (Lison, 2020). The traditional oral presentation is thus transformed to engage the learner in an authentic task; in the labor market, there are very few occasions when a person has more than 5 to 10 minutes to present a point of view. Hence, the format increases awareness of the time restrictions and the value of content covered during this period. Students are also encouraged to prioritize graphics and limit text (University of British Columbia, 2020), allowing for the evaluation of the students’ ability to synthesize and understand a given subject while working on professional development. Furthermore, the visual appearance of each slide is important, given the total number of slides. Each second needs to be used wisely to ensure the objective is achieved within a limited time frame.
Moreover, the image and the audio must be properly aligned. Creativity can be maximized using technology to ensure that the format is respected. A PechaKucha presentation also requires individual or collaborative planning of content and imagery, and students need to master the content to synthesize and popularize it effectively. Opportunities for plagiarism are also minimal given the need to reduce writing or diagramming to extract only the essential information. Finally, PechaKucha presentations can be delivered in class or online (synchronously or not). This could be a productive activity to facilitate online discussion, where students can critique and debate their positions (University of British Columbia, 2020). A PechaKucha works well as a synthesis activity, especially at the graduate or postgraduate level. This was the case for one of us, who proposed it as a postgraduate activity at the Université de Sherbrooke (Quebec, Canada). Students were asked to formulate their advice to a peer regarding research supervision. Rather than summarizing the entire course content, they were asked to distill, in less than 7 minutes, the main conclusions they drew from it, that is, from readings, discussions, personal reflections, and forum exchanges (Lison, 2020). Students appreciated that the presentation was authentic and useful. Some students addressed the PechaKucha to a colleague, new or otherwise, and were encouraged to share it in their department. In continuation of the program, some of them used the PechaKucha as a “business card” to recruit a potential supervisor. Among the disadvantages noted, some students mentioned that using the technology in this highly restricted format provoked anxiety. In some cases, the resulting production was not a reflection of the learner’s full potential but rather a result of their anxiety in using a new form of e-assessment.
To overcome this problem in the current session, we have published several tutorials and have encouraged the students to share their work with each other and ask for feedback before submitting it. Some of them used their peers’ comments to improve their PechaKucha, and certain students also asked for advice on how to record their presentations.

Blog Posts and Social Media

Like the authentic and familiar genre of podcasts, social media and blogs provide an interesting environment within which students can perform authentic assessments. Indeed, social media and blogs provide a space for real-world interactions, with students sharing information and reacting with “likes” and comments. The interactions themselves are the basis of the learning experience, for both lifelong learning and informal learning. For a long time, the use of social networks in an academic context has been feared, discouraged, and even prohibited in some institutions. Such fear is not unfounded because the ethical issues related to the use of social networks in education are important. Privacy, data sharing, digital identity, intellectual property, and copyright are just a few of the many ethical concerns to consider (Anderson, 2019). However, there is no denying the importance of social media in the lives of 21st-century students. The concept of connectivism is today naturally applied in everyday life. The creation of networks between people is a fact, as is the sharing of knowledge. In this case, the academic use of social media naturally echoes what is already present in the lives of a large majority of students and instructors. According to Anderson (2019), social media in education offer, among other things, opportunities and support for collaborative learning, for strengthening motivation, and for integrating formal and informal learning.
Thus, as part of an educational activity, students may be called upon to share their discoveries through Twitter using a pre-established hashtag and to interact with each other. It is also common to create a Facebook group in which students can discuss and share content. However, because of ethical issues related to the protection of privacy, it may be more prudent to use a closed, secure platform that allows similar tasks to be performed, such as through a Google or Teams account. Another specific example of the use of social media in authentic e-assessments consists of asking students to develop and publish blog posts on a digital platform. The creation of a blog involves several steps, whether it be public or open only to students of the same group. First, it is important to understand what a blog is and to identify the tone to use in writing a post that is accessible and appealing to the target audience. This understanding may also contribute to students’ sense of the authenticity of the assessment, since they will be writing and developing digital content for a larger audience than the instructor alone (Waycott et al., 2013). Then comes the topic of the blog, preferably chosen by the students. Blog posts can be used several times in a course as e-assessments for the students, with the goal being to present new ideas, to reflect on assigned readings or other pedagogical material, and to synthesize and present important content. The third step involves the form of presentation. Creating a blog gives a great deal of freedom to students, who can use their preferred way to express themselves, whether through texts, diagrams, drawings, or concept maps (Duplàa & Talaat, 2011).
However, the instructor may need to provide technological support to students who are less familiar with technologies or building digital content, especially when students use various digital formats such as images, videos, graphics, or other embedded content (Alruwais et al., 2018; Spector et al., 2016). If used with undergraduate students or in very competitive university programs, another challenge may consist of mitigating the risks of plagiarism as well as some students’ sense of vulnerability (Waycott et al., 2013). As for the digital platform, our experience has shown that the easiest way is to create a single blog for the whole class and add students as authors. This reduces the organizational workload for the instructor, who can also introduce students to the use of categories and hashtags to manage all posts. Furthermore, it is important that the blog be built on an easy-to-use digital platform (e.g., Google Sites, WordPress). When choosing the platform, it is important to ensure that blog posts are easily accessible to other students in the course (whether publicly visible or on a university restricted platform) to foster a sense of connectedness and collaboration between students. Instructors will then be able to ask or encourage students to visit other students’ posts and comment on them (saying what they liked, suggesting improvements, etc.), thereby promoting the development of collaboration skills (e.g., M. Brady et al., 2019). In the world of social media, interactions between peers of course make the experience more engaging and profitable for all. By commenting on their peers’ posts, students can check whether the message is clearly presented and help improve it when necessary.

E-Portfolios

Over a whole semester (or even over several semesters, when several instructors collaborate in this regard), e-portfolios can be used for interesting and authentic assessments.
In these, students are asked to reflect on their own learning path in a course, to collect and present digital traces of important course content, to provide evidence of what they have learned through their own productions, and so on. To be authentic for students’ professional development, e-portfolios need to be presented in such a way that they can be provided along with a curriculum vitae to a potential employer (e.g., for future teachers). Even without this contingency, e-portfolios are authentic to the students in the sense that they develop them for a large audience, minimally including their peers and the instructor, similarly to real-world blogs or websites they are familiar with. In our own experience with e-portfolios in a graduate-level course in educational technology, we asked students to use them recurrently throughout the semester so that they could reflect on their learning. In addition to using e-portfolios for collecting reflections about readings and digital tools they explored, students had to build and present two e-assessments of the course in separate sections within the e-portfolio. The first consisted of a critical reflection about integrating technologies in higher education, with several leading subquestions, and the second focused on how they would improve a teaching and learning activity sequence, from the problem faced to the planning calendar for preparing the new sequence. In contrast with written essays, they had to think about how to present information in a digital medium. We found that students made significant progress from the first e-assessment to the second in terms of critical analysis, synthesis, and digital presentation skills. They truly reflected on finding ways to synthesize important information and to present it in a clear and analytical manner.
However, we recognize that although these e-assessments within e-portfolios were very relevant in an educational technology course, technological barriers or time constraints may represent a challenge in other disciplines. To overcome this challenge, we suggest asking students to work on digital productions on several occasions during a course (e.g., digital and/or interactive presentations or posters, infographics, blog posts), inviting them to get off the beaten track of readings and writings, while at the same time helping them develop creativity and digital communication skills and practice working with technologies on real-world problems. Like blogs, e-portfolios should be built on an easy-to-use digital platform, possibly one designed for this purpose (e.g., Bulb). Students should be encouraged to visit each other's e-portfolios to make suggestions and as an additional strategy to improve their own. In addition to fostering the development of 21st-century skills, an important benefit of e-portfolios is that they make students’ learning paths more transparent, thus offering a way of continuously monitoring their progress throughout a semester, provided students regularly contribute to their own portfolios. However, challenges for instructors using e-portfolio assessments concern the workload of visiting all students’ e-portfolios, providing relevant and timely feedback, and finally grading them (M. Brady et al., 2019; Spector et al., 2016). As is the case for blogs, technological support for students may also be required. Since e-portfolios may have important benefits for students but also involve significant drawbacks for instructors, we suggest that instructors carefully consider their implementation in a course depending on the size of the group, the technological support that students would need, and the expected benefits for developing learning and 21st-century skills.
Where advantageous, such as in teacher education, educational technology, design, or arts programs, the use of e-portfolios has to be well planned and thoughtfully integrated into course teaching, learning, and assessment activities so that students actually get involved in such coursework throughout a semester.

Considerations for Applying Authentic E-Assessments

As illustrated in the previous section, technologies enable instructors to diversify their assessment methods and to involve students in authentic e-assessments while developing essential professional skills. Authentic assessments with technologies help bridge the gap between teaching and learning and professional practice (Sotiriadou et al., 2020). As new assessment opportunities continue to increase with the progress of digital technology, instructors will have to reflect on the techno-pedagogical alignment between course objectives and teaching, learning, and assessment activities. As St-Onge et al. (2022) mentioned, this seems to have been a preoccupation of instructors while transforming their courses during the COVID-19 pandemic, and it should stay at the center of any transformation of assessment methods in the future.

Lessons Learned

The importance of giving students the tools to go as far as possible in the development of their full potential while learning in a fair, equitable, and transparent manner was recognized by the Scientific Committee of the International Summit on ICT in Education in 2019 (https://edusummit2019.fse.ulaval.ca) and on numerous other occasions.
Tier 1 equity issues, those related to access to digital technologies, are decreasing, but Tier 2 equity issues, those related to classroom uses of digital technologies and resources, are increasing, including underuse or overuse of all things digital, challenges arising from shallow or deep individual or collaborative learning, and uses for play and for learning (Resta et al., 2018). Higher-performing students easily adjust to new assessment methods. As Black and Wiliam (2018) noted, “any approach to the improvement of classroom practice that is focused on assessment must deal with all aspects of assessment in an integrated way” (p. 552). Considering our real-life e-assessment experiences, Figure 1 presents a synthesis of our lessons learned.

Figure 1. Synthesis of lessons learned about authentic e-assessment digital equity in a higher education setting.

Offering Choices in Assessments With Technology

Several assessment methods can be proposed to students as part of a single general assessment so that they can choose the one they prefer. For instance, an instructor could offer students the possibility of recording a video presentation or a podcast or preparing a blog post for a given assessment. Similarly, an instructor using e-portfolios would be very flexible in terms of the formats used by students to present their contents. Hence, instructors can better answer students’ various needs and preferences by offering them choices or allowing them to personalize their work within boundaries imposed by the instructions and expectations related to a given assessment.
This idea includes two important points concerning (1) students’ needs and preferences and (2) e-assessment instructions and expectations, which we detail in the following subsections. Answering Students’ Diversified Needs and Preferences. Providing students with a certain degree of choice in an assessment fosters student engagement and participation (Rose et al., 2018). These choices could be as simple as selecting the assessment topic from a predefined list or as broad as allowing several e-assessment methods (Heilporn et al., 2021). The choice of an assessment topic supports students’ interests and motivation, thereby providing multiple means of engagement according to the Universal Design for Learning (UDL) framework (Meyer et al., 2014). Moreover, the decision to allow several delivery formats for the same assessment is tantamount to providing multiple means of action and expression, as suggested by UDL. By enabling students to select their own e-assessment method, those preferring to express themselves through oral communication could record a podcast whereas others might choose to write a blog post, reflecting their diverse needs and preferences. Whatever the level of flexibility that an instructor is ready to offer, the mere fact of showing flexibility makes the assessment more authentic for students, because they can find ways to connect the assessment with their interests or their personal or professional life, and it promotes their engagement in the assessment activities.
Technologies offer a vast range of opportunities for students to express themselves and demonstrate their knowledge and skills with accessible and easy-to-use applications; therefore, it really is up to instructors to imagine different ways of assessing their course objectives with technologies and to allow students some flexibility and control over e-assessments. Clarifying E-Assessment Instructions and Expectations. Introducing authenticity and flexibility in e-assessments can be stressful for instructors, especially the first time they do so, since they do not know what results to expect from the students; that is, they lose some control over the final assessment production, to the benefit of the students. St-Onge et al. (2022) found that instructors considered potential increases in their workload when reflecting on changes in course assessments. Furthermore, they were concerned about how they would ensure equity between students and/or provide formative feedback. From our experience, the biggest challenge instructors face when transforming their course assessments into more authentic e-assessment methods consists in letting the students have more control over the final assessment production and in trusting their own ability to guide and support students along the assessment production process. First, clear instructions should be presented and explained to the students to avoid potential confusion and misunderstandings. These instructions determine the boundaries of the assessment: explanations regarding the expected content, guiding steps and/or questions, suggested or possible delivery formats, and so on. Second, instructors will benefit from accompanying the instructions with a descriptive assessment grid detailing how the evaluation criteria will be applied.
By communicating their expectations to students as transparently as possible from the outset, instructors provide students with important information that helps them self-regulate throughout the assessment production process and deliver high-quality work. The detailed descriptive assessment grid should be broad in scope so that it can be applied to any delivery format chosen by the students, which does not rule out including evaluation criteria regarding the visual and/or audio quality of the presentation. Focusing the descriptive assessment grid on course and assessment objectives, rather than on the specific topic or e-assessment method chosen by the students, will also ensure that instructors’ grading workload remains stable. Finally, because instructions and evaluation grids are broad enough in scope to provide students with some flexibility and choice in the e-assessments, certain students may ask questions to better understand what the assessment consists of and what is expected of them. This often happens when students experience flexibility and choice in assessments for the first time, especially if the instructor has allowed them to select their preferred e-assessment method. In that situation, instructors may help students by discussing the assessment goals, expectations, and boundaries with them, asking them what they would like to present and how they would do it, sometimes providing examples of what could be done while encouraging them to be creative. Such clarifications regarding e-assessment goals and expectations are part of the formative feedback that supports students striving to present authentic work while developing 21st-century skills such as creativity, working with technologies, communication, and collaboration.
This also marks the beginning of a dialogue between students and the instructor about a specific assessment, which we discuss in the next section.

Interrelating Learning and E-Assessment Activities

All the assessment methods described above share one essential element: they strongly interrelate learning and e-assessment activities. Instead of a fixed-schedule evaluation of students’ learning, assessment activities are designed and integrated within learning activities over an extended period, as recommended by several authors (e.g., Black & Wiliam, 2018; Redecker et al., 2012; Romeu Fontanillas et al., 2016). They are assessment for and as learning (Black & Wiliam, 2018), in which instructors can provide feedback to help students progress, and students themselves can reflect on their learning and self-regulate to enhance their competencies. Authentic e-assessment methods such as those described above then become continuous assessment processes, during which instructors provide formative feedback and students iteratively improve their work (Romeu Fontanillas et al., 2016), a process we describe next.

Implementing Continuous Evaluation Processes With Instructor Feedback at Multiple Time Points

Some of the e-assessment methods described above, such as e-portfolios and blog posts, foster communication between students and instructors during the learning and assessment process, as recommended by Redecker et al. (2012). In the case of e-assessments such as video presentations, infographics, and podcasts, which are often produced in student teams, we recommend that each team have an online collaborative discussion channel that is also accessible to the instructor; this facilitates collaboration between team members and makes the assessment production process more transparent for the instructor. As Lafuente Martínez et al.
(2015) put it, “the more the instructor knows about the student’s learning process, the better he or she will be able to support it” (p. 11). By establishing a line of communication during the e-assessment process rather than considering only the final production, instructors and students enter into an interactive and ongoing discussion about the assessment, often referred to as a dialogic approach to feedback (Lafuente Martínez et al., 2015). As students inform each other (when in teams) and the instructor of their current learning and of where they are in the assessment process, the instructor provides formative feedback that allows students to adjust their work to the assessment goals and expectations. Also, having students explicitly write about where they are in the assessment process promotes the development of self-regulation in learning, a form of self-assessment and feedback (Nicol & Macfarlane-Dick, 2006). Lafuente Martínez et al. (2015) also found that having students work in teams for e-assessments increased the transparency of the assessment production process, thereby enhancing opportunities for instructors to provide ongoing feedback to improve the final assessment production. Furthermore, they advised that when an e-assessment is implemented in a blended or face-to-face learning environment, instructors should make use of the full potential of online collaborative discussion channels to provide relevant and meaningful feedback to the teams of students instead of relying on face-to-face feedback alone (which the students could interpret as a lack of support). Therefore, instructors should be aware that monitoring the student assessment process and providing meaningful feedback requires more time than simply evaluating a final production.
Since this could be a challenge for large groups of students, instructors will benefit from planning how they wish to monitor their students’ assessment processes and from clearly communicating to the students what kind and level of feedback they can expect, to prevent any disappointment or misunderstanding.

Conclusion

In this essay, we have presented five e-assessment methods that we have used in our classrooms and that focus on authenticity and 21st-century-skill development. The assessment tasks approximate those that students will face in their future careers, but they also promote student learning and mastery of higher order skills. The lessons learned from authentic e-assessments in our practice underscore the importance of considering the values of social justice, equity, and equal opportunity to succeed. This requires providing students with opportunities and choices for digital activities, topics, and/or tools that meet their diverse needs and preferences for engagement and motivation to learn. Ongoing use of digital tools, as well as technology support when needed, must be available throughout the semester. In addition, authentic e-assessments must incorporate clear instructions and meet planned and preannounced objectives, as well as be aligned with preparatory activities completed with digital tools throughout the semester (aligning technology, pedagogy, and context). Finally, digital technologies facilitate opportunities for exchange and interaction among students and between teachers and students; authentic e-assessments must allow for multiple and varied opportunities for synchronous and asynchronous feedback (between students and from the instructor) so that teachers and students can monitor their learning progress.

References

Alruwais, N., Wills, G., & Wald, M. (2018). Advantages and challenges of using e-assessment. International Journal of Information and Education Technology, 8(1), 34–37. https://doi.org/10.18178/ijiet.2018.8.1.1008
Anderson, T.
(2019). Challenges and opportunities for use of social media in higher education. Journal of Learning for Development, 6(1), 6–19.
Appiah, D. M. (2018). E-assessment in higher education: A review. International Journal of Business Management and Economic Research, 9(6), 1454–1460.
Beilock, S. L. (2008). Math performance in stressful situations. Current Directions in Psychological Science, 17(5), 339–343. https://doi.org/10.1111/j.1467-8721.2008.00602
Belt, E. S., & Lowenthal, P. R. (2021). Video use in online and blended courses: A qualitative synthesis. Distance Education, 42(3), 410–440. https://doi.org/10.1080/01587919.2021.1954882
Bezerra, J. D. M. (2018, October 21–23). Collaborative testing strategies in a computing course [Oral presentation]. International Association for Development of the Information Society conference, Budapest, Hungary.
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.
Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25(6), 551–575. https://doi.org/10.1080/0969594X.2018.1441807
Brady, M., Devitt, A., & Kiersey, R. A. (2019). Academic staff perspectives on technology for assessment (TfA) in higher education: A systematic literature review. British Journal of Educational Technology, 50(6), 3080–3098. https://doi.org/10.1111/bjet.12742
Capelle, C. (2018). Bilan d'expérimentation sur l'éducation au numérique. IMS Laboratory, University of Bordeaux. https://hal.archives-ouvertes.fr/hal-01897409
Catterall, J., & Davis, J. (2013). Supporting new students from vocational education and training: Finding a reusable solution to address recurring learning difficulties in e-learning.
Australasian Journal of Educational Technology, 29(5), 640–650.
Chekour, M., Laafou, M., & Janati-Idrissi, R. (2015). L’évolution des théories de l’apprentissage à l’ère du numérique. Revue de l’EPI (Enseignement Public et Informatique), 1–8.
Chiocchio, F., Grenier, S., O’Neill, T. A., Savaria, K., & Willms, J. D. (2012). The effects of collaboration on performance: A multilevel validation in project teams. International Journal of Project Organisation and Management, 4(1), 1–37. https://doi.org/10.1504/IJPOM.2012.045362
Cortright, R. N., Collins, H. L., Rodenbaugh, D. W., & DiCarlo, S. E. (2003). Student retention of course content is improved by collaborative-group testing. American Journal of Physiology—Advances in Physiology Education, 27(3), 102–108. https://doi.org/10.1152/advan.00041.2002
Cozma, A.-M. (2021). L’examen collaboratif : étude de cas en contexte universitaire finlandais. Revue internationale de pédagogie de l’enseignement supérieur, 37(2). https://doi.org/10.4000/ripes.3116
Dahlström, Ö. (2012). Learning during a collaborative final exam. Educational Research and Evaluation, 18(4), 321–332.
Duplàa, E., & Talaat, N. (2011). Connectivisme et formation en ligne. Distances et savoirs, 9(4), 541–564.
Fluck, A. E. (2019). An international review of eExam technologies and impact. Computers & Education, 132, 1–15. https://doi.org/10.1016/j.compedu.2018.12.008
Gilley, B. H., & Clarkston, B. (2014). Collaborative testing: Evidence of undergraduate students. Research and Teaching, 43(3), 83–91.
Gulikers, J., Bastiaens, T., & Kirschner, P. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52, 67–86. https://doi.org/10.1007/BF02504676
He, J., & Huang, X. (2020). Using student-created videos as an assessment strategy in online team environments: A case study. Journal of Educational Multimedia and Hypermedia, 29(1), 35–53.
Heilporn, G., Lakhal, S., & Bélisle, M. (2021).
An examination of teachers’ strategies to foster student engagement in blended learning in higher education. International Journal of Educational Technology in Higher Education, 18(1), 1–25. https://doi.org/10.1186/s41239-021-00260-3
Jordan, S. E. (2013). E-assessment: Past, present and future. New Directions in the Teaching of Physical Sciences, 9, 87–106.
Kapitanoff, S. (2009). Collaborative testing: Cognitive and interpersonal processes related to enhanced test performance. Active Learning in Higher Education, 10(1), 56–70.
Knierim, K., Turner, H., & Davis, R. K. (2015). Two-stage exams improve student learning in an introductory geology course: Logistics, attendance, and grades. Journal of Geoscience Education, 63(2), 157–164.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41, 212–218.
Lafuente Martínez, M., Álvarez Valdivia, I. M., & Remesal Ortiz, A. (2015). Making learning more visible through e-assessment: Implications for feedback. Journal of Computing in Higher Education, 27(1), 10–27. https://doi.org/10.1007/s12528-015-9091-8
Leight, H., Saunders, C., Calkins, R., & Withers, M. (2012). Collaborative testing improves performance but not content retention in a large-enrollment introductory biology class. Life Science Education, 11, 392–401. https://doi.org/10.1187/cbe.12-04-0048
Lison, C. (2020). La présentation orale en contexte de formation à distance : évaluer un Pecha Kucha. Évaluer. Journal international de recherche en éducation et formation, Numéro Hors-série, 1, 173–180.
Lusk, M., & Conklin, L. (2003).
Collaborative testing to promote learning. Journal of Nursing Education, 42(3), 121–124. https://doi.org/10.3928/0148-4834-20030301-07
Mahoney, J. W., & Harris-Reeves, B. (2019). The effects of collaborative testing on higher order thinking: Do the bright get brighter? Active Learning in Higher Education, 20(1), 25–37.
Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal design for learning: Theory and practice. CAST Professional Publishing. http://udltheorypractice.cast.org/
Muir, S. P., & Tracy, D. M. (1999). Collaborative essay testing: Just try it! College Teaching, 46(1), 33–35. https://doi.org/10.1080/87567559909596077
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Ollier-Malaterre, A. (2018). La compétence numérique de gestion des frontières sur les réseaux sociaux numériques : un capital culturel technologique à la Bourdieu. Lien social et Politiques, (81), 121–137. https://doi.org/10.7202/1056307ar
Raynault, A., Lebel, P., Brault, I., Vanier, M. C., & Flora, L. (2021). How interprofessional teams of students mobilized collaborative practice competencies and the patient partnership approach in a hybrid IPE course. Journal of Interprofessional Care, 35(4), 574–585.
Redecker, C., Punie, Y., & Ferrari, A. (2012). E-assessment for 21st century learning and skills. In A. Ravenscroft, S. Lindstaedt, C. D. Kloos, & D. Hernández-Leo (Eds.), 21st century learning for 21st century skills (Lecture Notes in Computer Science, Vol. 7563, pp. 292–305). Springer. https://doi.org/10.1007/978-3-642-33263-0_23
Resta, P., Laferrière, T., McLaughlin, R., & Kouraogo, A. (2018). Issues and challenges related to digital equity: An overview. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), Second handbook of information technology in primary and secondary education.
Springer International Handbooks of Education. https://doi.org/10.1007/978-3-319-71054-9_67
Rolim, C., & Isaias, P. (2019). Examining the use of e-assessment in higher education: Teachers and students’ viewpoints. British Journal of Educational Technology, 50(4), 1785–1800. https://doi.org/10.1111/bjet.12669
Romeu Fontanillas, T., Romero Carbonell, M., & Guitert Catasús, M. (2016). E-assessment process: Giving a voice to online learners. International Journal of Educational Technology in Higher Education, 13(1), 1–20. https://doi.org/10.1186/s41239-016-0019-9
Rose, D. H., Robinson, K. H., Hall, T. E., Coyne, P., Jackson, R. M., Stahl, W. M., & Wilcauskas, S. L. (2018). Accurate and informative for all: Universal design for learning (UDL) and the future of assessment. In S. N. Elliott, R. J. Kettler, P. A. Beddow, & A. Kurz (Eds.), Handbook of accessible instruction and testing practices (pp. 167–180). Springer International Publishing. https://doi.org/10.1007/978-3-319-71126-3_11
Sandahl, S. S. (2010). Collaborative testing as a learning strategy in nursing education. Nursing Education Perspectives, 31(3), 142–147.
Siemens, G. (2005). Connectivism: Learning as network-creation. ASTD Learning News, 10(1), 1–28.
Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2020). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132–2148. https://doi.org/10.1080/03075079.2019.1582015
Spector, J. M., Ifenthaler, D., Sampson, D., Yang, L. (Joy), Mukama, E., Warusavitarana, A., Dona, K. L., Eichhorn, K., Fluck, A., Huang, R., Bridges, S., Lu, J., Ren, Y., Gui, X., Deneen, C. C., Diego, J. S., & Gibson, D. C. (2016).
Technology enhanced formative assessment for 21st century learning. Journal of Educational Technology & Society, 19(3), 58–71. http://www.jstor.org/stable/jeductechsoci.19.3.58
Stearns, S. A. (1996). Collaborative exams as learning tools. College Teaching, 44(3), 111–112.
Stödberg, U. (2012). A research review of e-assessment. Assessment & Evaluation in Higher Education, 37, 591–604.
St-Onge, C., Ouellet, K., Lakhal, S., Dubé, T., & Marceau, M. (2022). COVID-19 as the tipping point for integrating e-assessment in higher education practices. British Journal of Educational Technology, 53(2), 349–366. https://doi.org/10.1111/bjet.13169
University of British Columbia. (2020, August 17). Interview with Dr. Roberta Borgen (Neault). Learning in a Pandemic. https://ets.educ.ubc.ca/learning-in-a-pandemic-roberta-borgen/
Waycott, J., Sheard, J., Thompson, C., & Clerehan, R. (2013). Making students’ work visible on the social web: A blessing or a curse? Computers & Education, 68, 86–95. https://doi.org/10.1016/j.compedu.2013.04.026
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(2).
Woody, W. D., Woody, L. K., & Bromley, S. (2008). Anticipated group versus individual examinations: A classroom comparison. Teaching of Psychology, 35(1), 13–17. https://doi.org/10.1080/00986280701818540
Zimbardo, P. G., Butler, L. D., & Wolfe, V. A. (2003). Cooperative college examinations: More gain, less pain when students share information and grades. Journal of Experimental Education, 71(2), 101–125. https://doi.org/10.1080/00220970309602059
Zipp, J. F. (2017). Learning by exams: The impact of two-stage cooperative tests. Teaching Sociology, 35(1), 62–76.