Journal of Teaching and Learning with Technology, Vol. 11, Special Issue, pp. 51-54. doi: 10.14434/jotlt.v11i1.34436

Using Google Docs to Administer Synchronous Collaborative Assessments

Harold Olivey
Indiana University Northwest
holivey@iun.edu

Abstract: Collaborative learning increases student achievement of learning outcomes in a wide range of disciplines, including the natural sciences, and is a hallmark of authentic assessment. To help students collaborate more effectively, I have used Google Docs, a free, online word-processing program accessible from almost any internet-connected device. Assessments that include real-world application problems are composed in Google Docs and shared with students via links. Google Docs has proven more efficient than pencil-and-paper assessment, encourages greater collaboration within student groups than is possible with tools embedded in a learning management system, and provides opportunities to give students just-in-time instruction and to examine student metacognition, all of which are foundational for authentic assessment. Post-assessment grading is rapid, and corrected documents with instructor feedback can easily be shared with students. Students have adapted readily to the platform and have learned on their own how to use the software beyond my original conception. I describe how I have used Google Docs successfully in a molecular biology course, offer considerations for grading and distributing corrections, and report on students' perceptions of the assessments themselves.

Keywords: collaborative learning, online learning tools, synchronous teaching

There exists a diversity of thought on what constitutes authentic assessment. An editorial on the components of authentic learning promoted four basic themes: "real-world problems that engage learners in the work of professionals; inquiry activities that practice thinking skills and metacognition; discourse among a community of learners; and student empowerment through choice" (Rule, 2006). Another group's analysis of the literature led them to propose eight critical elements of authentic assessment: challenge, performance or product (outcome), transfer of knowledge, metacognition, accuracy, fidelity, discussion, and collaboration (Ashford-Rowe et al., 2014). One element common to these frameworks is collaboration. Because collaborative learning techniques have increased student success in biology courses (Tessier, 2007; Hacisalihoglu et al., 2018; reviewed in Rutherford, 2015), I have been using collaborative assessments in my molecular biology (BIOL-L 211) course since 2017. The majority of these are low-stakes assessments that expose students to real-world applications of principles learned in the classroom.

Administering collaborative assessments on paper is problematic. It is difficult to get all students involved in finding solutions when a group has only one sheet of paper on which to record answers. Assisting students during the assessment is awkward because the instructor must take the group's work product away from them to review it. Finally, handing back graded assessments is cumbersome, requiring either copying the original document for each member of the group or scanning the documents to files and distributing them. Returning graded assessments is further complicated because group membership is not static (e.g., a student who was ill on Monday might attend the Wednesday class instead).
Although learning management systems (LMSs) have tools for administering assessments online, these are usually designed for individual assessments. Using LMS tools for collaborative assessments is not straightforward and often requires confusing logistical steps.

Because of social distancing requirements in fall 2020, my students met over Zoom and used Google Docs to complete collaborative assessments. I chose Google Docs because others had reported using it successfully to facilitate online real-time collaboration (Roberts et al., 2019; Spaeth & Black, 2012). Google Docs allows synchronous editing of documents by multiple users and runs on almost any laptop, tablet, or smartphone, eliminating a technology barrier for most students. Preparing, delivering, grading, and returning assessments in Google Docs is a straightforward process.

Helping students figure out "what they know that they know" (i.e., metacognition) is important for authentic assessment. With paper-and-pencil assessments, I had to move from one group to the next, trying to read over shoulders or taking students' work away from them to assess it. With Google Docs, one can monitor multiple groups simultaneously and intervene when necessary to help students perform better on the assessment. This is far more efficient than walking from table to table, picking up a group's work (halting their progress), and then having a discussion. Assessment tools built into LMSs generally offer neither the synchronous editing of Google Docs nor the ability to monitor student work in real time, making Google Docs a far better choice for collaborative assessment. Although I have used Google Docs in a medium-sized class (enrollment varies between 30 and 50 students each fall), the approach is easily scalable using the file-sharing tools within Google.

Methodology

Assessments in Google Docs can contain text, images, tables, and links to outside sources. Once a template file is created, copies are made for each independent group in the course. This is done quickly with CopyDocs (available at http://tinyurl.com/copydocscript; for a helpful article on how to use this tool, see https://go.iu.edu/4hcb), a freely available script that generates multiple copies of any file in Google Drive (Google Docs automatically stores files on Google Drive). Files can be renamed but do not have to be, since they are shared as links. To share files with students, it is easiest to put the files in a Google Drive folder that is not shared ("Restricted") and then set each file's access to "Anyone with the link" with the "Editor" role. (If your institution has an agreement with Google that allows access to be limited to individuals within your institution, that setting may be preferable to "Anyone with the link.") Links are shared with students online. To keep students from accessing links early, I pasted the links into an announcement and used my LMS's "delay posting" feature to hide the announcement until the start of class.

During class I opened each active file in a separate tab of my web browser. I could easily switch from tab to tab, monitor the progress of each group in real time, and intervene when needed. If a group had a simple error in their work, I could type into the body of their document (using a distinctive font color) or add a comment to the document. If the issue seemed metacognitive (i.e., the students did not know what they did not know), I could intervene in person to guide the students' discussion.
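Instructors who prefer to script the copy-and-share setup described above, rather than run CopyDocs by hand, could do so with the Google Drive v3 API. The sketch below is a minimal illustration in Python, not the tool used in the course; the template file ID, the number of groups, and the credentials file are placeholders.

```python
# Minimal sketch: copy an assessment template once per group and share each
# copy by link with editor access (Google Drive v3 API; IDs are placeholders).
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/drive"])
drive = build("drive", "v3", credentials=creds)

TEMPLATE_ID = "YOUR_TEMPLATE_FILE_ID"  # hypothetical placeholder
NUM_GROUPS = 8                         # adjust to the number of groups

links = []
for group in range(1, NUM_GROUPS + 1):
    # Make one copy of the template per group.
    copy = drive.files().copy(
        fileId=TEMPLATE_ID,
        body={"name": f"Assessment - Group {group}"}).execute()
    # "Anyone with the link" as Editor ("writer" in API terms).
    drive.permissions().create(
        fileId=copy["id"],
        body={"type": "anyone", "role": "writer"}).execute()
    links.append(f"https://docs.google.com/document/d/{copy['id']}/edit")

# These links are what would be pasted into the delayed LMS announcement.
print("\n".join(links))
```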
I chose to give some feedback in person because of the size of my class, but in larger classes all feedback could be given within Google Docs. It would also be simple to have teaching assistants (TAs) monitor groups in this way. At the end of the class period, selecting all the files in use and changing access from "Editor" to "Viewer" ends the assessment and ensures that all students have equal time to work on it.

Graded assessments are distributed using the "Add people and groups" option in Google Drive sharing. For each file, the email address of each group member is pasted into the window, and access is set to "Commenter" so that students can see the instructor's comments. Students receive an email letting them know that they have been given access to the file; the message contains a direct link. Setting up a simple spreadsheet that lists the names and email addresses of each group makes this process much faster. For larger classes, the spreadsheet could be kept in a Google Drive folder that TAs can access, so that they can grade and distribute materials to the students they are responsible for. Sharing by email address makes it simple to add extra members to, or omit absent members from, those given access.
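These two steps, locking files at the end of the period and sharing graded copies with each group member as a commenter, can likewise be scripted. The sketch below parallels the Drive v3 example above; the helper names and the roster dictionary are hypothetical, and "reader" and "commenter" are the API's terms for the Viewer and Commenter roles.

```python
# Sketch: lock assessment files, then distribute graded copies (Drive v3 API).
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/drive"])
drive = build("drive", "v3", credentials=creds)

def lock_file(file_id):
    """Demote link sharing from Editor to Viewer when time is up."""
    perms = drive.permissions().list(
        fileId=file_id, fields="permissions(id,type,role)").execute()
    for perm in perms.get("permissions", []):
        if perm["type"] == "anyone":  # the "anyone with the link" permission
            drive.permissions().update(
                fileId=file_id, permissionId=perm["id"],
                body={"role": "reader"}).execute()

def distribute_graded(file_id, emails):
    """Give each group member Commenter access; Google emails a direct link."""
    for email in emails:
        drive.permissions().create(
            fileId=file_id,
            body={"type": "user", "role": "commenter", "emailAddress": email},
            sendNotificationEmail=True).execute()

# Roster kept in a simple spreadsheet, loaded here as a dict (illustrative).
roster = {"group1_file_id": ["studentA@example.edu", "studentB@example.edu"]}
for file_id, members in roster.items():
    lock_file(file_id)
    distribute_graded(file_id, members)
```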
Results and Discussion

Google Docs makes administering assessments and returning graded work much easier. Additionally, it allows all students in a group to access and participate in the assessment process, promoting the "discourse among a community of learners" highlighted by Rule (2006). Allowing closer monitoring of work by instructors and TAs also helps foster metacognition and transfer of knowledge (Ashford-Rowe et al., 2014). Few barriers were encountered in using Google Docs; in all semesters, students have always had at least one device that could be used. Students used Google Docs in ways I did not anticipate. For example, they learned on their own that they could use their phones to take pictures of information they had written on scratch paper and paste those pictures directly into their document.

There are some minor drawbacks to using Google Docs. First, deciding what types of questions to use requires careful consideration. For example, although it is possible to require students to draw as part of their answer, drawing inside the Google Docs environment is difficult. However, as mentioned earlier, students can upload pictures of pencil-and-paper drawings into the document; this may require the instructor to demonstrate how. There is also the risk of students looking up information on their devices, a risk that is greater in nonproctored or difficult-to-proctor environments (e.g., hybrid learning scenarios, large classrooms). However, because authentic assessment should avoid relying on information that can easily be looked up, instead asking students to apply core principles to real-world tasks, careful composition of the assessment should minimize this issue.

I surveyed students (n = 19) in fall 2021 about their perception of the value of collaborative assessments. Table 1 provides the questions asked ("group discussion questions" are low-stakes assessments; "group exams" are high-stakes assessments) and a synopsis of students' responses. Students overwhelmingly agreed that the authentic collaborative assessments used in the course benefitted them. At least 50% of students strongly agreed that collaborative assessments increased their understanding of concepts learned in the classroom and improved their grade in the course. These data suggest that students find value in authentic collaborative assessments delivered via Google Docs.

Table 1. Student perceptions of the value of collaborative assessments.

Question                                                       Agreed    Agreed strongly   Agreed somewhat
Working on the group discussion questions helped me
understand concepts covered in the lecture.                    88.89%    50.00%            38.89%
I feel that the group discussion questions have improved
my grade in this course.                                       68.42%    57.89%            10.53%
I feel that the group exams have improved my grade in
this course.                                                   78.95%    57.89%            21.05%

As noted throughout, using Google Docs to deliver authentic assessments is easily scalable. For instance, I have started using it to deliver collaborative assessments in my human anatomy and physiology course, which had 186 students in the spring 2022 semester. Grading and distribution are easily handled by changing sharing permissions on files and sharing only with the students who participated on a particular document. Because Google Docs looks and functions like other word-processing software, very little new learning is required of either the instructor or the students. The multiuser editing experience has been excellent, with very few reports from students of errors or bugs. Because Google Workspace (of which Google Docs is a part) includes spreadsheet and presentation tools, there are opportunities to adapt what I describe to courses that require complicated mathematical calculations or presentations. These attributes, combined with compatibility with devices nearly all students already own, make Google Workspace a remarkably robust means of incorporating authentic collaborative assessments.

Acknowledgments

All human subjects research was approved by the Indiana University Institutional Review Board, protocol #13603. The author wishes to acknowledge Drs. Kris Huysken and Mark Hoyert for helpful comments and advice during the preparation of this manuscript.

References

Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222. https://doi.org/10.1080/02602938.2013.819566

Roberts, B. S., Roberts, E. P., Reynolds, S., & Stein, A. F. (2019). Dental students' use of student-managed Google Docs and other technologies in collaborative learning. Journal of Dental Education, 83(4), 437-444. https://doi.org/10.21815/jde.019.053

Rule, A. C. (2006). Editorial: The components of authentic learning. Journal of Authentic Learning, 3(1), 1-10.

Rutherford, S. (2015). E pluribus unum: The potential of collaborative learning to enhance microbiology teaching in higher education. FEMS Microbiology Letters, 362(23), Article fnv191. https://doi.org/10.1093/femsle/fnv191

Spaeth, A. D., & Black, R. S. (2012). Google Docs as a form of collaborative learning. Journal of Chemical Education, 89(8), 1078-1079. https://doi.org/10.1021/ed200708p

Tessier, J. (2007). Small-group peer teaching in an introductory biology classroom. Journal of College Science Teaching, 36(4), 64-69.
Journal of Teaching and Learning with Technology, Vol. 2, No. 2, December 2013, pp. 21-42.

Undergraduate Students' Perceptions of Electronic and Handwritten Feedback and Related Rationale

Ni Chang1, Bruce Watson2, Michelle A. Bakerson3, and Frank X. McGoron4

1 Department of Elementary Education, Indiana University South Bend, 1700 Mishawaka Ave., South Bend, IN 46634, nchang@iusb.edu
2 Department of Professional Educational Services, Indiana University South Bend, 1700 Mishawaka Ave., South Bend, IN 46634, watsonbr@iusb.edu
3 Department of Secondary Education and Foundations of Education, Indiana University South Bend, 1700 Mishawaka Ave., South Bend, IN 46634, mbakerso@iusb.edu
4 Department of Elementary Education, Indiana University South Bend, 1700 Mishawaka Ave., South Bend, IN 46634, fmcgoron@iusb.edu

Abstract: Some instructors, besides awarding grades, provide comments/feedback on students' assignments. Students' views on feedback help frame effective and efficient teaching and learning, so it is important to delve into this topic. In the 2012-2013 academic year, all undergraduate students at a Midwestern university were invited to complete a survey sharing which form of feedback they preferred, handwritten or e-feedback, and the rationale behind their preferences. Their rationales fell under five themes: accessibility, timeliness, legibility, quality, and personal. The data were analyzed quantitatively and qualitatively and show that the majority of the respondents preferred e-feedback. With respect to rationale, more respondents, and higher ratings overall, favored e-feedback for timeliness, accessibility, and legibility. Although more respondents overall favored e-feedback, ratings were higher for handwritten feedback on the quality and personal themes. Age and class standing were positively associated with students' desire for feedback in general and for e-feedback; however, there was a negative association between students' GPA and feedback in general and e-feedback. This article also addresses limitations, educational implications, and suggestions for future research.

Keywords: feedback, electronic feedback, handwritten feedback, instructors, students

I. Introduction.

Feedback is information that fosters deep learning (Denton, Madden, Roberts, & Rowe, 2008; Higgins, Hartley, & Skelton, 2002). It is a vital component of effective and efficient teaching and learning in higher education (Ackerman & Gross, 2010; Ball, 2009; Hounsell, 2003; Matthews, Janicki, He, & Patterson, 2012; Parkin, Hepplestone, Holden, Irwin, & Thorpe, 2012). Good teaching is represented by helpful comments on students' assignments (Ramsden, 2003). With the rapid development of technology, some instructors have shifted the way they provide feedback from a conventional handwritten approach to a technological format, specifically typing feedback and delivering it electronically. Students' views on feedback help frame both effective and efficient instruction and learning in higher education (Denton et al., 2008; Higgins et al., 2002; Parkin et al., 2012). It is important to know students' perceptions of feedback,
including handwritten and electronic feedback (e-feedback) (Ackerman & Gross, 2010; Carless, 2006; Higgins et al., 2002). Therefore, a survey was conducted at a regional campus of a large Midwestern university during the 2012-2013 academic year. The purposes of this survey study were to explore undergraduate students' perceptions of two forms of feedback, e-feedback and handwritten feedback, and to explore the reasons behind the students' varied preferences. The research questions underlying this study were "What do undergraduate students prefer: handwritten feedback or e-feedback?" and "What is their related rationale?"

A. Theoretical Framework.

Students desire to receive feedback, as it can better their learning (Hyland, 2000). However, feedback needs to be easily accessible to students. Accessibility is a general expectation of students in the millennial generation (Morrissey, Coolican, & Wolfgang, 2011). A survey study conducted by Di Costa (2010) found that accessibility was the component most often recognized by students as defining useful feedback. Bridge and Appleyard (2008) and Sadler (2010) noted that students appreciated the permanence and safety of feedback that could be accessed electronically. In contrast, Chang et al. (2012) found that one reason given by handwritten feedback supporters was that they could conveniently access feedback through professors in class; that is, students did not need to rely on computers to access feedback.

Besides accessibility, timeliness has been identified as an important element in benefiting student learning. The National Union of Students (NUS; 2008) survey found students were unhappy with the timing of their feedback. Although students want feedback that is constructive, they have a strong preference for feedback that is prompt (Scott, 2006) and timely (Ferguson, 2011). If feedback is received late, it becomes useless to students, as many have already moved on (Denton et al., 2008). For receiving feedback early, electronically delivered feedback gets the majority of student support (Chang et al., 2012). When Bridge and Appleyard (2008) asked students to consider the issue of online feedback, 88% reported that they favored online feedback because they received it faster than through the more conventional format of hand delivery. Bai and Smith (2010) cited the automated nature of e-learning as contributing to the benefit of timely feedback.

When feedback is typed rather than handwritten, it is readable. Denton et al. (2008) reported that students considered legibility a feature that would significantly improve the feedback they received. Legibility is therefore a significant element in supporting student learning (Ferguson, 2011). Price, Handley, Millar, and O'Donovan (2010) reported that students' general criticism of feedback was mainly due to illegible writing. Illegible feedback is unclear, leaving students both disappointed and frustrated, a finding also supported by Chang et al. (2012).

In aiding students to learn, feedback also needs to be constructive and helpful. Its content needs to be understood by students. Feedback should also enable students to know what needs their attention, and where, and whether or not their work is on the right track.
Furthermore, allowing students to engage in revisions according to received feedback benefits them as well. All of the above defines the operational term quality. According to the National Union of Students (2008), students are dissatisfied with the quality of feedback. Case (2007) also identified poor, low-quality feedback as an issue in the feedback students received. When considering the quality of online instruction, Yang and Durrington (2010) found the quality of instructors' feedback to be the aspect mentioned most often in student course evaluations. When time and quality were considered as competing aspects of feedback, students were happy to wait a little longer for feedback if quality increased (Chang et al., 2012; Ferguson, 2011).

Quality feedback also needs to contain language that is positive and relational, which may help establish the relationship between instructors and students. When such feedback is received, students may think their professors care about their learning. Students appreciate the time and effort instructors spend providing feedback on assignments; they are thus likely to read the feedback and, in turn, improve their performance. All of the above defines the operational term personal. Krause and Stark (2010) found that feedback is most useful to students when it is perceived to be personal. Students responding to Ferguson's (2011) study wanted feedback to be both positive and personal. When the tone of feedback is overly negative, students often feel that instructors do not care about their learning (Price et al., 2010). Without feedback that is personal, students may view assignments as mere products, leaving them feeling alienated and disengaged (Di Costa, 2010; Mann, 2001; Price et al., 2010). With respect to personal feedback, one interesting finding by Chang et al. (2012) was that respondents who supported handwritten feedback perceived that type of feedback as more personal than did those who supported e-feedback. The handwritten supporters also recognized that handwritten feedback enabled them to build close rapport with their instructors.

Accessibility, timeliness, legibility, quality, and personal, as mentioned above, are the five themes identified by Chang et al. (2012) in a prior study conducted in the 2011-2012 academic year. Two hundred sixty students from the university's School of Education participated in that study, which explored which form of feedback the students preferred, handwritten or electronic, and the rationale behind their preferences. E-feedback was defined as all feedback delivered to students electronically. Chang et al. (2012) found that the majority of participants (68%) preferred e-feedback, while 32% preferred handwritten feedback. Among the rationales for preferring e-feedback, 38% of respondents cited its easy accessibility, 30% favored its timeliness, and 16% supported its legibility. Fewer e-feedback supporters mentioned the quality (10%) and personal (1%) aspects than mentioned timeliness and legibility. In contrast, many more handwritten feedback supporters endorsed quality (40%) and personal (32%); fewer favored handwritten feedback for accessibility (25%) and timeliness (3%).
No handwritten feedback supporters indicated legibility as a rationale. The present study further explored the same two questions: What form of feedback did the students prefer, handwritten or electronic? And what was the related rationale?

II. Methods.

A. Participants.

All undergraduate students at a Midwestern university were invited to participate in a survey asking about handwritten and e-feedback and the related rationale. Of the approximately 7,200 students, 763 undergraduate students responded, a return rate of almost 11%. Respondents who skipped questions are noted in the results. Among both female and male respondents, almost twice as many preferred e-feedback (n = 475 overall) as handwritten feedback (n = 273). The predominant age range was 18-24 (n = 423). Class standing was for the most part evenly distributed. The predominant GPA range was 3.01-4.00 (n = 470), and the College of Liberal Arts and Sciences (CLAS) had the most respondents (n = 301) (see Table 1).

B. Instrument.

The online survey, hosted on SurveyMonkey, was used to collect data. The survey questions were modified and revised from the previous study to obtain more valid information from students across the entire campus. That is, based on the five themes derived from the previous study (Chang et al., 2012): accessibility, timeliness, legibility, quality, and personal, the present study expanded and extended each theme with several corresponding items on a 7-point Likert scale. For example, there were four factors under the theme of accessibility: (a) allows me to get information easily, (b) allows me to receive and send information conveniently, (c) allows me to ask questions easily, and (d) makes me feel secure to receive feedback from the professor. The survey instrument consisted of thirteen closed-ended questions, each with multiple factors, and four open-ended questions.

Table 1. Demographics in terms of handwritten and e-feedback preference.

                      Handwritten       E-feedback        Blank           Total
Variable              n      %          n      %          n      %        n      %
Gender
  Male                74     35.24      135    63.98      1      0.47     210    100
  Female              199    36.18      340    61.93      10     1.82     549    100
  Total               273    36%        475    62%
Age
  18-24               180    42.55      239    56.50      4      0.95     423    100
  25-34               53     29.78      122    68.54      3      1.69     178    100
  35-44               26     26.26      71     71.72      2      2.02     99     100
  45-54               11     25.58      31     72.09      1      2.33     43     100
  55+                 5      27.78      12     66.67      1      5.56     18     100
  Total               275    36%        475    62%
Class standing
  Freshman            74     46.84      81     51.27      3      1.90     158    100
  Sophomore           74     43.27      95     56.21      2      1.18     171    100
  Junior              62     32.80      125    66.14      2      1.06     189    100
  Senior              65     27.20      170    71.13      4      1.67     239    100
GPA
  3.01-4.00           161    34.26      302    64.26      7      1.49     470    100
  2.01-3.00           78     36.62      134    62.91      1      0.47     213    100
  1.01-2.00           4      25.00      12     75.00      0      0        16     100
  0.00-1.00           1      100        0      0          0      0        1      100
  Unknown             31     56.36      23     41.82      1      1.82     55     100
School
  Arts                23     34.33      42     62.69      2      2.99     67     100
  Business            31     27.68      80     71.43      1      0.89     112    100
  Education           57     43.18      74     56.49      1      0.76     132    100
  CLAS                118    39.20      181    60.13      2      0.66     301    100
  Health              34     28.81      85     71.43      1      0.84     120    100
  Technology          12     44.44      14     51.85      1      3.70     27     100

Note. Percent ranges refer to the partitioned group or n. Some of the ns do not add up to 763, as some respondents skipped questions.

C. Procedure.
After institutional review board approval, the survey link was sent via an email invitation to all undergraduate students in attendance at the university. On SurveyMonkey, students were first presented with a study information sheet, which informed them of the purpose of the study, ensured confidentiality, and made clear that participation was voluntary. Potential respondents who agreed to participate continued on to complete the survey. All potential participants received a first follow-up letter electronically two weeks after the initial invitation was sent out. A second follow-up letter was emailed to all potential participants two weeks later. The study was closed two weeks following the second follow-up letter.

D. Data Analysis.

To answer the research questions of whether the undergraduate students preferred e-feedback or handwritten feedback, nonparametric and parametric tests were utilized; SPSS 20 was used to analyze why either of these options was preferred over the other. A crosstabs procedure using the chi-square test of independence was applied to the nominal variables. A chi-square test of independence assesses whether two categorical variables are related by testing whether the observed frequency counts fit the distribution of counts expected if the variables were independent (Bakerson, 2009; Mertler & Vannatta, 2005; Rosenberg, 2007; Stevenson, 2007). This issue was addressed through the use of Pearson's chi-square procedure (Bakerson, 2009; Mertler & Vannatta, 2005; Rosenberg, 2007). Independent t-tests were conducted to compare feedback preference on all factors under the five themes: accessibility, timeliness, legibility, quality, and personal (Charmaz, 2000; Creswell, 2002). Correlations of demographic variables with feedback preferences were run to establish patterns in the variables (Creswell, 2002). In addition, all responses to open-ended questions were analyzed with respect to their justifications of preferences for handwritten or e-feedback, providing a purposeful examination of detailed actual experience (Creswell, 2002).
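For readers who wish to reproduce this style of analysis without SPSS, the sketch below shows the same three procedures (a chi-square test of independence, an independent-samples t-test, and a correlation) in Python with scipy. The contingency counts mirror the gender rows of Table 1; the Likert ratings and demographic codes are illustrative stand-ins, not the study's data.

```python
# Illustrative re-creation of the study's analyses using scipy (not SPSS).
import numpy as np
from scipy import stats

# Chi-square test of independence: is feedback preference related to gender?
# Rows: male, female; columns: handwritten, e-feedback (counts from Table 1).
observed = np.array([[74, 135],
                     [199, 340]])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Independent-samples t-test: compare ratings of one Likert factor between
# the two preference groups (1 = strongly agree ... 7 = strongly disagree;
# the values here are fake, for illustration only).
handwritten_pref = np.array([2, 1, 3, 2, 1, 2, 3])
efeedback_pref = np.array([5, 4, 6, 5, 4, 5, 6])
t, p = stats.ttest_ind(handwritten_pref, efeedback_pref)
print(f"t = {t:.3f}, p = {p:.3f}")

# Pearson correlation: a numerically coded demographic variable against a
# preference rating (again, illustrative values).
age_code = np.array([1, 1, 2, 2, 3, 4, 5])
desire_for_feedback = np.array([4, 3, 3, 2, 2, 2, 1])
r, p = stats.pearsonr(age_code, desire_for_feedback)
print(f"r = {r:.3f}, p = {p:.3f}")
```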
III. Results and Discussion.

A. Preference for the Form of Feedback.

With respect to the first research question, "What do the participants prefer: handwritten feedback or electronic feedback?", the majority of the participants (n = 476, 63.3%) preferred e-feedback (see Figure 1). The studies conducted by Chang et al. (2012), Denton et al. (2008), and Parkin et al. (2012) yielded similar findings, with more students preferring e-feedback than handwritten feedback.

Figure 1. Feedback preference. [Bar chart of the percentage of respondents preferring handwritten versus electronic feedback.]

B. Degrees of Preference for Both Forms of Feedback.

In addition to the question on preference, the respondents were also asked to rate their degree of preference for e-feedback and handwritten feedback in general, and then for all factors under the five main themes: accessibility, timeliness, legibility, quality, and personal. Table 2 details the results of the question concerning the degree to which a respondent preferred e-feedback or handwritten feedback. Whichever form the respondents preferred, handwritten or e-feedback, they rated their preferred form higher than the other.

Table 2. T-tests comparing degree of preference for handwritten and e-feedback, based on choice of feedback.

                               n      Mean    SD      t         df     p
Preference for handwritten
  Handwritten                  276    1.95    1.01    -24.596   745    0.00
  E-feedback                   471    4.46    1.51
Preference for e-feedback
  Handwritten                  274    4.33    1.39    29.33     748    0.00
  E-feedback                   476    1.86    0.92

Note. Likert scale: 1 = very much prefer to 7 = not preferred at all; the lower the mean, the stronger the preference.

C. The Usefulness of the Two Forms of Feedback.

The respondents were also asked to rate the degree of usefulness of each form of feedback (see Table 3). Respondents who preferred handwritten feedback also thought handwritten feedback was more useful than e-feedback; respondents who chose e-feedback as their preferred form rated e-feedback as much more useful than handwritten feedback.

Table 3. T-tests comparing usefulness of feedback.

                               n      Mean    SD      t         df     p
Usefulness of handwritten
  Handwritten                  275    1.644   0.878   -16.147   748    0.000
  E-feedback                   475    3.324   1.591
Usefulness of e-feedback
  Handwritten                  274    3.518   1.435   20.127    747    0.000
  E-feedback                   476    1.787   0.916

Note. Likert scale: 1 = very useful to 7 = not useful at all; the lower the mean, the stronger the preference.

D. Accessibility.

There were four factors under the theme of accessibility: (a) allows me to get information easily, (b) allows me to receive and send information conveniently, (c) allows me to ask questions easily, and (d) makes me feel secure to receive feedback from the professor. On each factor there was a statistically significant difference between the perceptions of handwritten feedback supporters and e-feedback supporters. That is, respondents who chose handwritten feedback as their preferred form rated all factors under handwritten feedback more strongly than e-feedback supporters did (see Table 5), and respondents who chose e-feedback as their preferred form rated all factors under e-feedback more strongly than the same factors under handwritten feedback (see Tables 4 and 5). Overall, however, respondents gave higher ratings to e-feedback than to handwritten feedback regardless of preferred feedback form (see Tables 4 & 5).

Table 4. T-tests comparing accessibility factors for e-feedback.

                                    n      Mean    SD      t        df     p
(a) Allows me to get information easily
  Handwritten preference            270    2.722   1.595   13.858   736    0.000
  E-feedback preference             468    1.511   0.773
(b) Allows me to receive and send information conveniently
  Handwritten preference            269    2.100   1.307   9.668    733    0.000
  E-feedback preference             466    1.380   0.703
(c) Allows me to ask questions easily
  Handwritten preference            269    2.877   1.815   9.770    734    0.000
  E-feedback preference             467    1.803   1.164
(d) Makes me feel secure to receive feedback from the professor
  Handwritten preference            267    3.240   1.664   12.912   729    0.000
  E-feedback preference             464    1.882   1.167

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.
The justifications provided by the e-feedback supporters for (a) allows me to get information easily include: "I'm always online, always, even on my phone, so it makes things easier for me." "[N]o matter where you are, you usually have access to the internet, therefore you can get it anywhere at any time." Denton et al. (2008) and Parkin et al. (2012) found similar data: technology enabled students to access their grades and feedback at a time and place of their choosing. In commenting on (b) allows me to receive and send information conveniently, some e-feedback supporters wrote: "Easily accessible as it only requires one or two clicks of the mouse." "Very helpful because I can log on whenever it is convenient for my schedule to check on things." Similar conclusions about conveniently receiving and sending information over the internet were reached by Chang (2011) and Chang et al. (2012). Students recognized and appreciated the flexibility and convenience that technology could provide in facilitating their learning (Denton et al., 2008; Parkin et al., 2012).

In contrast, handwritten feedback supporters had their own reasons to endorse factors (a) and (b). As one respondent put it, "It does not require a computer to read." To some students, finding or logging on to a computer required effort. One noted, "If it's an email or electronic, I have to take the time to log in to the computer, which at home is slow and in a dark corner." This rationale is consistent with the studies conducted by Chang (2011) and Chang et al. (2012): handwritten feedback was independent of the internet, which made access convenient. To avoid redundancy, factor (c) allows me to ask questions easily is discussed in the section on the personal theme.

With respect to why e-feedback supporters endorsed (d) makes me feel secure to receive feedback from the professor, some explanations were: "I don't have to worry about losing it!" "It's nice that you can always go back to refer to it when it's saved online." The handwritten feedback supporters countered: "[It] does make me feel secure with having the actual feedback in my hands." "This is also good for keeping me secure because I can always keep and lock the feedback from it being deleted." Although Chang et al. (2012) identified and supported this category, few other studies have examined it; future research is therefore warranted to better facilitate student learning.

Table 5. T-tests comparing accessibility factors for handwritten feedback.

                                    n      Mean    SD      t         df     p
(a) Allows me to get information easily
  Handwritten preference            274    2.449   1.465   -17.526   728    0.000
  E-feedback preference             456    4.568   1.648
(b) Allows me to receive and send information conveniently
  Handwritten preference            271    3.989   1.623   -10.838   518    0.000
  E-feedback preference             454    5.286   1.447
(c) Allows me to ask questions easily
  Handwritten preference            266    2.872   1.680   -12.335   579    0.000
  E-feedback preference             454    4.504   1.770
(d) Makes me feel secure to receive feedback from the professor
  Handwritten preference            268    1.720   1.206   -14.100   718    0.000
  E-feedback preference             452    3.489   1.832

E. Timeliness.
There is only one factor under the theme of timeliness: (e) [feedback] allows me to receive feedback fast. On this factor there was a statistically significant difference between the views of the handwritten feedback supporters and the e-feedback supporters. Respondents who chose handwritten feedback rated the timeliness of handwritten feedback more strongly than e-feedback supporters did, and respondents who chose e-feedback rated the timeliness of e-feedback more strongly than handwritten feedback supporters did. Overall, however, respondents' ratings for e-feedback were stronger than for handwritten feedback regardless of preferred form (see Table 6).

Table 6. T-tests comparing the timeliness theme for handwritten and e-feedback.

                                    n      Mean    SD      t         df     p
Handwritten: (e) allows me to receive feedback fast
  Handwritten preference            266    3.624   1.581   -12.220   570    0.00
  E-feedback preference             451    5.135   1.631
E-feedback: (e) allows me to receive feedback fast
  Handwritten preference            267    2.277   1.461   8.927     731    0.00
  E-feedback preference             466    1.504   0.883

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.

Regardless of the respondents' preferences between the two forms of feedback, it is apparent that they rated e-feedback as timelier than handwritten feedback; the mean difference in views on timeliness is notably large (see Table 6). Similar findings were reported by Chang et al. (2012) and Dennen, Darabi, and Smith (2007). When feedback is delivered electronically, students do not have to wait until the next class or another week, as one student wrote: "…I don't have to wait a week to hear back on how well I did or what I need to improve on." Another student pointed out, "If I receive feedback that is very late, I usually disregard it because it is irrelevant." These findings are consistent with Parkin et al. (2012), who found that if students did not receive feedback in time for it to be meaningful to the task assessed, the relevance of the feedback could be reduced. Feedback needs to be timely to appropriately promote student learning (Chang et al., 2012; Dennen et al., 2007; Di Costa, 2010; Ferguson, 2011; Parkin et al., 2012; Rowe & Wood, 2008).

From the perspective of those who supported handwritten feedback, however, timeliness did not seem to be a concern. One respondent reasoned that feedback regularly delivered in class would enable students to predict when they could receive it: "With handwritten feedback, you know when you can expect to receive it (i.e., in class or other scheduled meeting time)." Another reason for not being concerned about timeliness was the view, held by many handwritten feedback supporters and even some e-feedback supporters, that the delayed return of feedback reflects instructors spending time reading students' work; as one student put it, "It takes longer to get a handwritten feedback … because the professor took the time and effort to read it [your work]." Feedback could thus be shaped by individual student assignments as a means of individualized instruction (Chang et al., 2012). As such, the respondents perceived that they were likely to receive detailed and constructive feedback, as some commented: "I am willing to wait longer for and prefer to wait for detailed handwritten feedback as opposed to electronic feedback." "If constructive feedback [is] given, time isn't too much a factor."
"It's okay if they take a little longer because the quality is better." Chang et al. (2012) and Ferguson (2011) similarly found that students would be willing to wait longer for quality feedback.

Throughout the entire survey, neither the students who preferred e-feedback nor those who preferred handwritten feedback related timeliness to the size or type of assignment. None mentioned, for instance, that a longer assignment might reasonably be turned around more slowly than a shorter one, even though a short essay can obviously be evaluated more quickly than a longer paper. The data therefore offer no answer to this question. Nonetheless, feedback that arrives late does not help deepen or maximize student learning (Chang et al., 2012; Dennen et al., 2007; Di Costa, 2010; Ferguson, 2011; Parkin et al., 2012; Rowe & Wood, 2008).

F. Legibility.

There were two factors under the theme of legibility: (f) [feedback] enables me to read the feedback and (g) [feedback] enables me to understand what the professor writes. There were statistically significant differences between the perceptions of the handwritten feedback supporters and the e-feedback supporters. Respondents who chose handwritten feedback rated both legibility factors under handwritten feedback more strongly than e-feedback supporters did (see Table 7). The same holds true for respondents who chose e-feedback, who rated the two legibility factors under e-feedback more strongly than under handwritten feedback (see Table 8).

Table 7. T-tests comparing legibility factors for handwritten feedback.

                                    n      Mean    SD      t         df     p
(f) Enables me to read the feedback
  Handwritten preference            266    2.959   1.510   -11.912   716    0.000
  E-feedback preference             452    4.522   1.800
(g) Enables me to understand what the professor writes
  Handwritten preference            267    3.079   1.450   -12.404   717    0.000
  E-feedback preference             452    4.601   1.675

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.

Even though there are statistically significant differences within each factor, overall more students preferred e-feedback on both factors and gave it higher ratings regardless of their particular feedback preference (see Tables 7 and 8). Chang et al. (2012), Denton et al. (2008), Ferguson (2011), and Price et al. (2010) reported similar findings. Common justifications provided by the respondents include: "Since it is typed, it is legible[,] [i]f their spelling and grammar is good at least." "… electronic feedback wins in this category [legibility]." Denton et al. (2008) and Parkin et al. (2012) found that many students were more likely to read or use feedback if it was returned to them in a typed, legible format. Chang (2011), Chang et al. (2012), and Ferguson (2011) also confirmed the finding that typed feedback enabled students to read feedback without difficulty.
With respect to (g) enables me to understand what the professor writes, some respondents held that even typed e-feedback is of little use if it does not make sense or is full of spelling errors. As one respondent expressed it, "You will always be able to read typed [feedback], but that doesn't matter if [it] is not necessarily comprehensible and more subject to misspellings." On the contrary, if the quality of the feedback was good, respondents were willing to take time to decipher it. One student put it this way: "If the quality of what is written is high enough, student time to making out the writing is worth it." The linkage between legibility and quality appears to suggest that students care about their learning and hope to act on feedback to better their work (Chang et al., 2012; Ferguson, 2011). However, further research is needed for a deeper look at this factor.

Table 8. T-tests comparing legibility factors for e-feedback.

                                    n      Mean    SD      t        df     p
(f) Enables me to read the feedback
  Handwritten preference            267    1.846   1.316   6.707    728    0.000
  E-feedback preference             463    1.324   0.788
(g) Enables me to understand what the professor writes
  Handwritten preference            265    1.996   1.242   5.886    726    0.000
  E-feedback preference             463    1.495   1.021

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.

G. Quality.

There were seven factors under the theme of quality: [feedback] (h) offers constructive criticism or comments, (i) is helpful, (j) allows me to understand the content of the professor's comment, (k) allows for revisions and improvement, (l) provides detailed information that I would like to know in text, (m) provides detailed information that I would like to know at the end of the paper, and (n) allows me to feel and touch the feedback, which is conducive to my reading and understanding. There were statistically significant differences between the views of the handwritten feedback supporters and the e-feedback supporters on all quality factors. That is, respondents who chose handwritten feedback rated all factors under handwritten feedback more strongly than e-feedback supporters did (see Table 9). The same holds true for those who chose e-feedback, who rated the quality factors under e-feedback statistically more strongly than under handwritten feedback (see Table 10). Overall, however, more respondents rated factors (h) and (n) higher under handwritten feedback than under e-feedback (see Tables 9 & 10).
Table 9. T-tests comparing quality factors for handwritten feedback.

                                    n      Mean    SD      t         df     p
(h) Offers constructive criticism or comments
  Handwritten preference            268    1.679   1.126   -9.792    718    0.000
  E-feedback preference             452    2.799   1.659
(i) Is helpful
  Handwritten preference            267    1.588   1.098   -10.137   717    0.000
  E-feedback preference             452    2.741   1.656
(j) Allows me to understand the content of the professor's comment
  Handwritten preference            267    1.970   1.214   -10.962   716    0.000
  E-feedback preference             451    3.268   1.695
(k) Allows for revisions and improvement
  Handwritten preference            265    1.951   1.228   -10.375   712    0.000
  E-feedback preference             449    3.229   1.770
(l) Provides detailed information I would like to know in text
  Handwritten preference            266    2.139   1.382   -9.426    711    0.000
  E-feedback preference             447    3.333   1.770
(m) Provides detailed information I would like to know at the end of a paper
  Handwritten preference            263    1.658   1.036   -10.914   708    0.000
  E-feedback preference             447    2.904   1.672
(n) Allows me to feel and touch the feedback, which is conducive to my reading
  Handwritten preference            265    1.676   1.258   -11.655   707    0.000
  E-feedback preference             444    3.205   1.902

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.

Handwritten feedback supporters perceived that handwritten feedback was always of higher quality than e-feedback. One student said, "Handwritten feedback from my courses has been consistently higher quality and more thought out comments than any electronic feedback I have received." Most handwritten feedback supporters also agreed with the notion that handwritten feedback was "more apt to explaining mistakes." When feedback enabled students to see and understand their mistakes, students were likely to perceive it as high quality; handwritten feedback was therefore seen as helpful and comprehensible, enabling students to know specifically where further improvement was needed. In addition, when instructors write feedback by hand, various colors of pen can serve different purposes, as one respondent explained: "Some teachers use different colored ink which helps distinguish whether the written comment refers to a mistake or simply a constructive comment. An example would be red ink for errors like [grammar]. Blue ink could mean a [constructive] comment or constructive [criticism]." Chang et al. (2012) found that handwritten feedback supporters appeared to attach much greater importance to feedback that was detailed and specific than to feedback that was typed and sent electronically.
Table 10. T-tests comparing quality factors for e-feedback.

                                    n      Mean    SD      t        df     p
(h) Offers constructive criticism or comments
  Handwritten preference            263    2.970   1.604   8.656    725    0.000
  E-feedback preference             464    2.070   1.180
(i) Is helpful
  Handwritten preference            265    2.608   1.580   8.053    727    0.000
  E-feedback preference             464    1.819   1.057
(j) Allows me to understand the content of the professor's comment
  Handwritten preference            264    3.136   1.549   10.844   727    0.000
  E-feedback preference             465    2.039   1.159
(k) Allows for revisions and improvement
  Handwritten preference            263    2.875   1.492   8.024    721    0.000
  E-feedback preference             460    2.078   1.148
(l) Provides detailed information I would like to know in text
  Handwritten preference            261    3.111   1.561   8.787    719    0.000
  E-feedback preference             460    2.174   1.259
(m) Provides detailed information I would like to know at the end of a paper
  Handwritten preference            259    3.290   1.567   9.676    714    0.000
  E-feedback preference             457    2.230   1.310
(n) Allows me to feel and touch the feedback, which is conducive to my reading
  Handwritten preference            261    4.667   1.817   8.708    715    0.000
  E-feedback preference             456    3.384   1.943

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.

Many handwritten feedback supporters also showed their propensity toward handwritten feedback by explaining their disapproval of e-feedback. One respondent noted that e-feedback "[i]s usually based on a scale rather than the professor leaving actual comments." Miscommunication was another reason many handwritten feedback supporters felt disinterested in e-feedback. A respondent wrote, "It is particularly hard to fully understand nuance via electronic communication. [Thus], miscommunication is so easy." A lack of nonverbal cues could easily lead readers to misinterpret or misunderstand instructors' intended comments or messages (Chang, 2011). In terms of caring for student learning, some respondents felt that e-feedback did not show the sincerity of professors: e-feedback was "[n]ot always the best advice because it seems like they just threw it together." These reasons indirectly convey a view that e-feedback is not useful and does not allow students to improve their learning.

E-feedback supporters offered a different rationale for preferring e-feedback on all quality factors. From their perspective, e-feedback was specific and offered useful explanations: "I've noticed that most of the electronic feedbacks are more in-depth in their explanations and reasons." Parkin et al. (2012) echoed this; the participants in their study felt that online feedback was thoughtful. Additional reasons given by e-feedback supporters include: "The clarity I receive from electronic feedback has been better than written. I suspect that is because thoughts can be edited and organized in such a way that handwritten examples do not allow." Parkin et al. (2012) also reported that their respondents recognized that editing and revising feedback was fairly easy for tutors using electronic tools. Apparently, technology has made teaching more effective, as instructors are able to edit and reorganize feedback that has been composed.
In contrast, instructors who chose to write feedback by hand did not seem able to do so as frequently or conveniently. An e-feedback supporter commented, "Handwritten comments tend to be abbreviated more often and leaves you occasionally wondering if you missed something or if you correctly understand the abbreviations." Decoding abbreviations and wondering whether their interpretation matched the instructor's intended meaning made respondents uneasy and generated a sense of uncertainty; such feelings could plausibly be a reason some respondents supported e-feedback. However, these aspects were not found in the studies conducted by Chang (2011) and Chang et al. (2012); as such, further investigation is warranted to deepen the understanding of how to facilitate student learning via assessment feedback.

The qualitative data given above point to the specific, detailed, clear, thoughtful, and comprehensible feedback that respondents generally desired, as such feedback offers information for improvement. In other words, the data showed that irrespective of their particular feedback preferences, the respondents viewed handwritten feedback as able to provide constructive feedback. This might explain why, in general, more respondents gave higher ratings to handwritten feedback than to e-feedback on (h) offers constructive criticism or comments.

H. Personal.

There were four factors under the personal theme: [feedback] (o) allows me to establish rapport with my professor, (p) encourages me to read feedback, (q) shows that the professor cares about me, and (r) makes me appreciate my professor's time and attention. Respondents who chose handwritten feedback as their preferred form rated all factors significantly more strongly than e-feedback supporters did (see Table 11). The same holds true for those who chose e-feedback, who rated all factors under electronic feedback significantly more strongly than the same factors under handwritten feedback (see Table 12). Overall, however, more respondents rated factors (q) and (r) higher under handwritten feedback than under e-feedback (see Tables 11 & 12).

One of the main reasons handwritten supporters gave may be that "[h]andwritten feedback … always seems personal …," as one respondent stated. A common feeling among the respondents was that e-feedback appears to distance instructors from students psychologically (Chang, 2011), as some students noted: "There seems to be a distance between you and the professor if all feedback is just electronic." The respondents explained: "Electronic is usually more of a summary…" "… they … just copy and paste a generic statement." Similarly, Chang et al. (2012) found that "… sometimes electronic feedback feels generic and impersonal" (p. 12). If feedback is handwritten, it is difficult for instructors to "duplicate" it; as one respondent pointed out, "I feel like an instructor is much less likely to copy and paste when the feedback is handwritten." If feedback is copied and pasted onto a student's assignment, the student is made to "[a]lmost feel as if I'm simply a part of a mass email that is sent out to a lot of students." The implication is that instructors care very little about student learning if e-feedback is delivered in this fashion.
therefore, handwritten feedback seems a well-suited candidate for instructors to show care about student learning, as a respondent remarked, “i think that having a professor hand write their comments not only shows that you[’re] not just another number but that they actually care about your improvements in their classes.” this might also explain why, overall, the respondents in the present study gave higher ratings on the factors of (q) shows that the professor cares about me and (r) makes me appreciate my professor's time and attention, irrespective of their particular preferred feedback forms. in fact, the respondents’ view of the care rendered by instructors had already been expressed in the section on timeliness. that is, handwritten feedback supporters were willing to wait a bit longer to receive handwritten feedback because they perceived that instructors took time to provide thoughtful and constructive feedback, which demonstrated that the instructors cared about their academic enhancement.
table 11. t-tests comparing personal factors for handwritten feedback.
                                                   n     mean    sd      t         df    p
(o) allows me to establish rapport with my professor
    handwritten preference                         265   1.751   1.114   -9.940    710   0.000
    e-feedback preference                          447   2.953   1.772
(p) encourages me to read the feedback
    handwritten preference                         265   1.381   0.871   -10.945   710   0.000
    e-feedback preference                          447   2.651   1.765
(q) shows that the professor cares about me
    handwritten preference                         263   1.464   0.923   -9.164    707   0.000
    e-feedback preference                          446   2.498   1.686
(r) makes me appreciate my professor's time and attention
    handwritten preference                         264   1.337   0.778   -9.007    707   0.000
    e-feedback preference                          445   2.256   1.546
note. likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.
in this sense, handwritten feedback seems to have a tendency to make students feel personally connected with instructors. “[h]andwritten feedback seems more human than electronic feedback,” commented a respondent. chang et al. (2012) also reported that when all feedback was received electronically, it became easy for a student to feel like a number, and that when feedback was handwritten it would encourage students to ask instructors for clarification of comments. this can also address (c) allows me to ask questions easily in the section on accessibility. when feedback was written by hand and delivered in class, asking instructors questions became quite easy. “handwritten feedback makes it more welcoming to ask the professor questions about their feedback face-to-face and encourage[s] building a student-instructor relationship with the instructor,” commented a respondent. chang et al. (2012) echoed that it was convenient to approach instructors for explanations if feedback was delivered in class. easy and immediate responses from instructors also represent gestures that instructors care about students’ improvement. asking instructors questions face-to-face could promote a positive relationship between instructor and student, which seemed, in turn, to encourage students to read feedback.
otherwise, reading feedback is unlikely to happen, as a respondent shared, “[m]y professor does not get to know me this way …, if it can be all uniform and not unique to each student, the connection is not there so reading the "comments" is much less likely to happen.” it is apparent that students’ emotions, derived from the relationship between instructor and student, play a very important role in student learning. “the personal relationship between a professor and myself is very important to me.” “i love to feel the connection between the professors,” remarked the respondents. di costa (2010) and rowe and wood (2008) also reported that students wanted instructors to consider their feelings; they wanted instructors to be empathetic and understanding.
table 12. t-tests comparing personal factors for e-feedback.
                                                   n     mean    sd      t        df    p
(o) allows me to establish rapport with my professor
    handwritten preference                         262   4.053   1.780   9.777    718   0.000
    e-feedback preference                          458   2.769   1.647
(p) encourages me to read the feedback
    handwritten preference                         261   3.874   1.914   14.769   717   0.000
    e-feedback preference                          458   2.109   1.280
(q) shows that the professor cares about me
    handwritten preference                         260   3.862   1.804   10.461   714   0.000
    e-feedback preference                          456   2.540   1.516
(r) makes me appreciate my professor's time and attention
    handwritten preference                         261   3.671   1.860   11.240   715   0.000
    e-feedback preference                          456   2.318   1.342
note. likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the stronger the preference.
some e-feedback supporters disagreed with their peers and believed that e-feedback was capable of establishing rapport with professors. they countered that e-feedback was “[m]ore one on one [than] the classroom,” and “… was speaking directly to me.” in the view of e-feedback supporters, e-feedback was “[m]ore personal.” these findings are consistent with rowe and wood (2008), whose students requested that feedback be more personal, as it could motivate student learning and guide students in the right direction. i. correlations among demographic factors. the second research question, “what are their related rationales?,” was also examined through correlations of demographic variables. table 13 shows positive correlations between students’ age and feedback preference, meaning that the older the students were, the more they preferred feedback. the finding is consistent with the findings of chang (2011) and chang et al. (2012). in addition, a positive correlation was also observed between class standing and feedback preference, meaning that the higher the class standing, the more the students desired feedback. this finding is incongruent with the reports by siew (2003) and chang et al. (2012). with regard to gpa, however, gpa and feedback preference were negatively correlated, meaning that those whose gpa was between 1.01 and 2.00 craved feedback more than those whose gpa ranged between 2.01 and 3.00. this finding is inconsistent with the reports by chang (2011) and chang et al. (2012) that the higher the respondents’ gpa, the more eagerly they wished to receive feedback. however, further research is needed, as there were many more respondents whose gpa ranged between 3.01 and 4.00 (62.4%) than between 2.01 and 3.00 (28.1%) or 1.01 and 2.00 (2.1%).
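the demographic correlations reported in table 13 (reproduced in the next section) are pairwise coefficients computed over coded variables. the article does not name the coefficient used, so the sketch below, with hypothetical coded data, uses pearson's r as a stand-in:

```python
# pairwise correlations over coded demographic variables, in the spirit of
# table 13; all values are hypothetical, and pearson's r is a stand-in since
# the article does not name the coefficient used.
import pandas as pd

df = pd.DataFrame({
    "age": [19, 22, 25, 31, 20, 28, 23, 21],
    "class_standing": [1, 2, 3, 4, 1, 4, 3, 2],       # 1 = freshman ... 4 = senior
    "gpa": [3.2, 2.8, 3.6, 3.9, 2.1, 3.4, 3.0, 3.7],
    "feedback_preference": [0, 1, 1, 1, 0, 1, 1, 0],  # 0 = handwritten, 1 = e-feedback
})

print(df.corr(method="pearson").round(3))  # full correlation matrix
```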
in terms of preference for a particular form of feedback, a crosstabs procedure using the chi-square test of independence was run on the variables of interest. the results failed to reveal a statistically significant difference in terms of gender, χ2(2, 752) = 3.543, p = 0.170, between handwritten and e-feedback. this means that regardless of gender there was no preference between handwritten and e-feedback.
table 13. feedback correlations among demographic variables.
                       gender    age      class standing   gpa       college   feedback preference
gender                 1.000     -.088*   -.041            -.033     -.020     -.003
age                              1.000    .272**           -.050     .008      .147**
class standing                            1.000            -.258**   -.044     .165**
gpa                                                        1.000     -.005     -.072*
college                                                              1.000     -.004
feedback preference                                                            1.000
*. correlation is significant at the 0.05 level (2-tailed). **. correlation is significant at the 0.01 level (2-tailed).
however, the chi-square test of independence indicated a statistically significant difference for age, χ2(5, 752) = 16.792, p = 0.005: the older the students were, the more they preferred e-feedback. the chi-square test of independence also indicated a statistically significant difference for class standing, χ2(3, 746) = 21.020, p = 0.000. e-feedback was preferred by 72.3% of seniors and 66.8% of juniors, while for freshmen and sophomores the preference for e-feedback was about even. a crosstabs procedure with the chi-square test of independence also revealed a statistically significant difference for gpa, χ2(4, 752) = 13.511, p = 0.009. in the 3.01–4.00 gpa group, 65.4% preferred e-feedback; in the 2.01–3.00 gpa group, 63.4% preferred e-feedback; and respondents in the 1.01–2.00 gpa group preferred e-feedback 75.0% of the time. there was a statistically significant difference among colleges as well, χ2(5, 751) = 11.719, p = 0.039. the biggest preference difference was found in the college of health sciences, with 71.4% of these respondents preferring e-feedback. all other colleges preferred e-feedback as well, although the differences were much smaller. j. educational implications. the findings offer useful insights into the respondents’ preferred feedback forms and the rationales behind their preferences. as such, it is time for instructors and concerned administrators to start contemplating how to compose and deliver feedback, be it handwritten or e-feedback, in order to genuinely facilitate student learning. to be more specific, it is time to change how e-feedback is developed and delivered to bolster its quality and personal attributes, and to change how handwritten feedback is developed and delivered to better its timeliness, accessibility, and legibility. the need for change also implies that the form of feedback may not matter much if the feedback, be it handwritten or electronic, is useful and beneficial to student learning and/or addresses all five themes. therefore, in providing feedback, instructors need to “engage with students, consider their responses and offer individualized challenges” (rushkoff, 2013). perhaps basic training or professional development for instructors would enable them to establish a better understanding of what kind of e-feedback, for example, is needed by students.
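the crosstabs procedure described above pairs a contingency table of preference counts with the chi-square test of independence. a minimal sketch, with hypothetical counts for preference by class standing (not the study's raw data):

```python
# chi-square test of independence on a preference-by-class-standing
# crosstab; the counts are hypothetical, not the study's raw data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: handwritten preference, e-feedback preference
# columns: freshman, sophomore, junior, senior
observed = np.array([
    [40, 38, 30, 25],
    [42, 40, 60, 65],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
```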
in addition, the delivery style impacts student learning, as a student pointed out: “the few times i have received feedback in these ways [electronically] (especially through annotations), i found it [e-feedback] immensely helpful. as such, i think this problem is more of one of education on the part of professors; if they are aware of this method of giving feedback and how to provide it in this way, then maybe they would be more likely to do so. professor training would be very helpful.” professional training focused on how to provide and deliver feedback, be it handwritten or e-feedback, is of great significance. k. future research. this study demonstrated that handwritten feedback and e-feedback supporters each appeared to hold clearly to their own positions. to facilitate student learning via assessment feedback, future research could usefully examine specifically what content of handwritten feedback is desired by respondents and when and how instructors should deliver this feedback to students; the same examination is necessary for e-feedback supporters’ views. further research may also focus on whether “a hybrid approach” to providing and sending feedback to students is helpful from the students’ point of view, e.g., a tablet pc or iannotate pdf on an ipad; these approaches would allow for handwriting feedback and delivering it electronically. future research might also take up the following question: “do students prefer feedback provided with the use of voicethread, the software that allows for recording feedback orally and delivering it electronically?” in addition, future research may look into whether feedback provided through various electronic means, such as email, websites, oncourse, and phones, would result in different student perceptions or even a different impact on their learning. interested researchers could also delve into the extent to which e-feedback or handwritten feedback can really improve teaching and learning. l. limitations. the following limitations were identified: (1) even though the survey instrument was modified and improved from the previous study, 2% of the respondents thought the survey was a bit too long; thus, some respondents might not have completed the survey in earnest or honestly conveyed their insights. (2) the survey was conducted at the beginning of the spring semester, so some students might not have had much experience receiving or reading feedback. (3) some respondents’ perceptions might not fully reflect their views, considering that they might not have comprehended certain survey questions and/or might have been distracted by their surroundings while taking the survey. (4) lastly, since no clear definition of e-feedback was given, this might have influenced respondents’ answers to some survey questions. nonetheless, with the large number of respondents involved in this study, the findings can still make useful contributions to teaching and learning in higher education, generating a stimulating topic in the best interest of students. iv. conclusion. feedback preferences of undergraduate students at a midwestern university were explored with regard to handwritten feedback and e-feedback, along with the rationales behind these preferences. it was found that about two thirds of the respondents preferred e-feedback.
however, each group of supporters appeared to hold its own distinct reasons for its perceptions in terms of the five themes: accessibility, timeliness, legibility, quality, and personal. irrespective of their distinctive preferences, the respondents’ ratings favoring handwritten feedback under some factors of quality and personal were stronger than those for e-feedback. likewise, there were stronger ratings, and more respondents regardless of preference, supporting e-feedback for its timeliness, accessibility, and legibility. the justifications that backed up the respondents’ expressed preferences could also explain why there were higher ratings for the usefulness of handwritten feedback than for that of e-feedback. in addition, the respondents’ feedback preferences were found to be positively correlated with age and class standing and negatively correlated with gpa: those whose gpa was between 1.01 and 2.00 desired feedback more than those whose gpa fell between 2.01 and 3.00 or between 3.01 and 4.00. the findings indicate that the majority of students long for assistance from instructors to better their learning via assessment feedback. it is important for instructors to be mindful when providing feedback on students’ assignments in terms of what, why, how, and when. the literature recognizes feedback as having a significant effect on student learning (case, 2007; chang, 2011; ferguson, 2011; krause & stark, 2010) and as fundamental in supporting and regulating the learning process (ifenthaler, 2010); it is therefore time for all faculty concerned with effective student learning to understand more about the provision of feedback via the assessment process. awarding a single grade is not welcomed by students and is not conducive to improving learning. students do desire to receive feedback (chang, 2011; siew, 2003). however, the feedback should truly help advance their learning.
references
ackerman, d. s., & gross, b. l. (2010). instructor feedback: how much do students really want? journal of marketing education, 32(2), 172-181. doi: 10.1177/0273475309360159
bai, x., & smith, m. b. (2010). promoting hybrid learning through a sharable elearning approach. journal of asynchronous learning networks, 14(3), 13-24.
bakerson, m. (2009). persistence and success: a study of cognitive, social, and institutional factors related to retention of kalamazoo promise recipients at western michigan university. proquest dissertations & theses database: a&i. western michigan university, united states.
ball, e. (2009). a participatory action research study on handwritten annotation feedback and its impact on staff and student. systemic practice and action research, 22, 111-124.
bridge, p., & appleyard, r. (2008). a comparison of electronic and paper-based assignment submission and feedback. british journal of educational technology, 39(4), 644-650.
carless, d. (2006). differing perceptions in the feedback process. studies in higher education, 31, 219-233.
case, s. (2007). reconfiguring and realigning the assessment feedback processes for an undergraduate criminology degree. assessment & evaluation in higher education, 32(3), 285-299.
chang, n. (2011). pre-service teachers’ views: how did e-feedback through assessment facilitate their learning? journal of the scholarship of teaching and learning, 11(2), 16-33.
chang, n., watson, b., bakerson, m., williams, e., mcgoron, f., & spitzer, b. (2012). electronic feedback or handwritten feedback: what do undergraduate students prefer and why? journal of teaching and learning with technology, 1(1), 1-23.
charmaz, k. (2000). grounded theory: objectivist and constructivist methods (2nd ed.). london: sage.
creswell, j. w. (2002). research design. london: sage.
dennen, v. p., darabi, a., & smith, l. j. (2007). instructor-learner interaction in online courses: the relative perceived importance of particular instructor actions on performance and satisfaction. distance education, 28(1), 65-79.
denton, p., madden, j., roberts, m., & rowe, p. (2008). students' response to traditional and computer-assisted formative feedback: a comparative case study. british journal of educational technology, 39(3), 486-500. doi: 10.1111/j.1467-8535.2007.00745.x
di costa, n. (2010). feedback on feedback: student and academic perceptions, expectations and practices within an undergraduate pharmacy course. paper presented at the atn assessment conference 2010, university of technology, sydney.
ferguson, p. (2011). student perceptions of quality feedback in teacher education. assessment & evaluation in higher education, 36(1), 51-62.
higgins, r., hartley, p., & skelton, a. (2002). the conscientious consumer: reconsidering the role of assessment feedback in student learning. studies in higher education, 27, 53-64.
hounsell, d. (2003). student feedback, learning, and development. berkshire, uk: srhe & open university press.
hyland, p. (2000). learning from feedback on assessment. manchester, uk: manchester university press.
ifenthaler, d. (2010). bridging the gap between expert-novice differences: the model-based feedback approach. journal of research on technology in education, 43(2), 103-117.
krause, u., & stark, r. (2010). reflection in example- and problem-based learning: effects of reflection prompts, feedback and cooperative learning. evaluation & research in education, 23(4), 255-272.
mann, s. (2001). alternative perspectives on the student experience: alienation and engagement. studies in higher education, 26(1), 7-20.
matthews, k., janicki, t., he, l., & patterson, l. (2012). implementation of an automated grading system with an adaptive learning component to affect student feedback and response time. journal of information systems education, 23(1), 71-83.
mertler, c. a., & vannatta, r. a. (2005). advanced and multivariate statistical methods (3rd ed.). glendale, ca: pyrczak publishing.
morrissey, g., coolican, m., & wolfgang, d. (2011). an intersection of interests: the millennial generation and an alternative world language teacher education program. paper presented at the american educational research association annual conference, new orleans, la.
national union of students. (2008). student experience report. http://aces.shu.ac.uk/employability/resources/nusstudentexperiencereport.pdf
parkin, h., hepplestone, s., holden, g., irwin, b., & thorpe, l. (2012). a role for technology in enhancing students’ engagement with feedback. assessment & evaluation in higher education, 37(8), 963-973.
price, m., handley, k., millar, j., & o'donovan, b. (2010). feedback: all that effort, but what is the effect? assessment & evaluation in higher education, 35(3), 277-289. doi: 10.1080/02602930903541007
ramsden, p. (2003). learning to teach in higher education (2nd ed.). london: routledgefalmer.
rosenberg, k. m. (2007). the excel statistics companion. belmont, ca: thomson higher education.
rowe, a. d., & wood, l. n. (2008). student perceptions and preferences for feedback. asian social science, 4(3), 78-88.
rushkoff, d. (2013, january 15). online courses need human element to educate. retrieved from http://www.cnn.com/2013/01/15/opinion/rushkoff-moocs/index.html
sadler, d. r. (2010). beyond feedback: developing student capability in complex appraisal. assessment & evaluation in higher education, 35(5), 535-550. doi: 10.1080/02602930903541015
scott, g. (2006). accessing the student voice: a higher education innovation program project. canberra, australia: department of education, science and training.
siew, p. f. (2003). flexible on-line assessment and feedback for teaching linear algebra. international journal of mathematical education in science & technology, 34(1), 43-52.
stevens, j. p. (2007). applied multivariate statistics for the social sciences (5th ed.). new york, ny: routledge.
yang, y., & durrington, v. (2010). investigation of students' perceptions of online course quality. international journal on e-learning, 9(3), 341-361.
journal of teaching and learning with technology, vol. 10, special issue, pp. 158-163. doi: 10.14434/jotlt.v9i2.31437 “into the unknown”: supervising teacher candidates during the 2020 covid-19 pandemic steffany maher indiana university southeast stmaher@iu.edu alan zollman indiana university southeast alanzoll@ius.edu abstract: in mid-march 2020, our public schools ended classroom instruction because of the 2019 coronavirus disease (covid-19) pandemic. the timing of the suspension of face-to-face instruction was in the middle of the student teaching clinical experience for our secondary education teacher candidates. without preparation, teacher candidates were to guide their middle and high school students through online learning. university faculty were experiencing a similar challenge: how to support and direct their teacher candidates in mid-experience. this was a change in the logistics of teaching and in the focus of education. unlike in previous years, the first priority of schools was not high-stakes standardized testing, nor daily pacing guides, but rather the emotional and social health of students. this change in the schools’ priorities fit well with the preparation of our teacher candidates at our midwest regional teaching university. our focus was to prepare teachers of students, not teachers of mathematics or english. while our secondary teacher candidates did not have all the tools and technological skills needed to switch to online teaching immediately, they did know it was the relationship with the learner that was most important. the second change in the schools’ priorities was a time-allocation switch from mostly teaching to mostly planning, communicating, and supporting one another as teachers. our teacher candidates already knew that effective communication makes everyone a better instructor and benefits student learning.
to help our teacher candidates make these transitions, we developed a clinical practice interview protocol and used it with our candidates regularly online. in addition to these changes in priorities in schools, another helpful experience our teacher candidates possessed was prior experience as students in online classes themselves. many had taken several online or hybrid university courses and knew what worked well and what they needed to avoid as online teachers. in this reflective essay, we discuss how the university academic clinical educators (university supervisors) supported and facilitated our teacher candidates in preparing and implementing quality instruction during the covid-19 pandemic. keywords: teacher candidates, covid-19 pandemic, teacher supervision, student teaching, clinical practice. the 2019 coronavirus disease crisis in this reflective essay, we discuss how we, the university academic clinical educators (university supervisors), supported and facilitated our secondary education teacher candidates (student teachers) in preparing and implementing quality instruction through the 2019 coronavirus disease (covid-19) pandemic. what began as an extended spring break for p–12 (prekindergarten to 12th grade) and postsecondary schools turned into a complete shutdown when, in mid-march of 2020, public schools in our midwest state ended classroom instruction because of the pandemic. this suspension of face-to-face instruction took place in the middle of our secondary education teacher candidates’ clinical experience. thus, without any preparation, our teacher candidates were asked to guide their middle and high school students through online learning. our university teacher educators were experiencing a similar challenge: how do we support and direct our teacher candidates, mid-experience, through this transition to distance teaching? covid-19’s effect on national education priorities as covid-19 spread across the united states, we began to see a change in the logistics of teaching and in the focus of education in general. the first priority of schools all over the nation was not high-stakes standardized testing, as it had been before the pandemic (we saw states cancel these tests), nor daily pacing guides through scripted curriculum, but rather the emotional and social health of students. as our focus at our university was to prepare teachers of students rather than teachers of mathematics or english, this change fit well with the preparation of our teacher candidates. while our secondary candidates did not have all the tools and technological skills needed to switch to online teaching immediately, they did know it was the relationship with the learner that was most important. our approach to the crisis our approach was to supervise and support our teacher candidates following the three stages of new-teacher development described by griggs, sullivan-losey, and zollman (2018). they proposed that new teachers and teacher candidates progress through three stages of development: first being concerned about oneself, next being concerned about the subject content, and last being concerned about their students’ learning. for example, teacher candidates' first priority is themselves: where should i park, which bathroom should i use, how should i dress? in a non-covid-19 year, this stage is fairly short in duration.
when these concerns are allayed, the candidates move to a focus on the subject content. some candidates (and teachers) never progress beyond this stage: they view themselves as teachers of the subject, not teachers of students. when candidates are secure in the content knowledge they are teaching, they move to a focus on their students’ learning of the content. in our initial approach to teacher candidates after the school shutdown, we communicated with our candidates on this first stage of new-teacher development, concerns about oneself. we checked in with our candidates regularly, asking how they were doing physically and emotionally. we also followed abraham maslow’s hierarchy of needs (1954) in our remote communications strategy. maslow's hierarchy describes a person’s most fundamental needs as physiological, followed by safety, belongingness and love, esteem, and ultimately, self-actualization. although there is criticism of the original hierarchy’s argument that a lower level must be completely satisfied before one moves on to a higher level, this model is widely used in business and education motivational training, and we believed it would be useful for our students. for our purpose of connecting and communicating with our teacher candidates, we found this hierarchy to cover most of our topics in a sequential process. in fact, we developed our own clinical practice interview protocol (see appendix), based on both griggs et al.’s (2018) new-teacher development stages and maslow’s (1954) hierarchy. thus, our first response as teacher educators was to reach out to our students to offer reassurance that they would get through this and that they would be able to graduate on time. this correlates with concerns about oneself as a teacher and with maslow’s (1954) need for safety. this initial reaction also stemmed directly from our philosophy of teaching: at our university, our teacher educator program believes in educating the whole child. we teach the student, not the content. our second focus was to help alleviate our teacher candidates’ concern about the subject content. our teacher candidates transitioned from mostly teaching in their classrooms to mostly planning, communicating, and supporting one another. it was at this point, to successfully monitor this transition, that we developed a formal clinical practice interview protocol and asked our candidates to complete it at intervals during their distance-teaching experience and share it with their university academic clinical educator. for this stage, we built on the foundation our candidates had from their own online education and encouraged use of the technology available to them. throughout this ever-evolving pandemic experience, our third and ultimate priority was to care for our teacher candidates as best we could from a distance and to ensure their development, as we were concerned about our students’ learning (griggs et al., 2018). we modeled this to our teacher candidates in our education methods courses, encouraging them to embrace this philosophy of education in their own teaching practices. meeting with our secondary teacher candidates through the video conferencing tool zoom was one way that we checked in with them to ensure they were staying physically, mentally, and emotionally healthy, had the basic content knowledge, and were focused on their students.
during these meetings we were pleased to hear our candidates sharing how they were checking in with their own students. they often were worried about students with whom they were not able to connect during their distance teaching. as teacher educators, we were pleased to see our candidates focused on their students rather than on themselves, the content, test scores, or curricula. they shared our ultimate priority in education. our teacher candidates’ personal challenges as educators ourselves, we also worried about our candidates. some of them were living at “home” in states apart from where they were teaching; some had limited access to wi-fi; others struggled to stay connected with their p–12 clinical educators (mentor teachers) and their students; one served in the national guard and was called up for active duty while completing his clinical practice through distance teaching. our weekly zoom “check-in” meetings were a vital part of staying connected, troubleshooting problems, and continuing to live in relationship with one another from a distance. many of our students worked part-time in the service industry, for instance, as waitstaff in restaurants. these jobs disappeared when the state ordered the mandatory closure of nonessential businesses, and money for food and rent disappeared with them. to respond to the psychological and safety needs of our students, the university initiated a student emergency assistance fund, supported by faculty and staff. we publicized the availability of these funds to our teacher candidates. several candidates applied and were awarded monies within just a few days. along with the assistance fund, the university’s grab & go emergency food pantry provided nonperishable food items, as well as gift cards to grocery stores and gas stations, to our candidates in need. our teacher candidates’ transition to distance teaching similar to the change in the public schools’ priorities was our candidates’ time-allocation switch from mostly teaching to mostly planning, communicating, and supporting one another as teachers. our teacher candidates already knew that effective communication makes everyone a better instructor and benefits student learning. to help our candidates make this transition, we developed the clinical practice interview protocol and met with them weekly online. the protocol asked our teacher candidates questions about their distance-teaching clinical practice experiences. the completed protocol and subsequent zoom meetings took the place of our final two formal observations of their teaching. the protocol questions were divided into three categories: logistics, teaching, and closure, following our teaching philosophy. one aspect the protocol revealed was a feeling of disconnect from us and from fellow teacher candidates. additionally, our candidates had been developing relationships with their p–12 clinical educators and students and wanted to continue to foster those relationships, even if they had to do it from a distance. to help with this dilemma, educators all over the united states were quickly learning how to teach online synchronously through media such as zoom and google meet, another video-communication platform. most of our candidates had never taken part in a zoom meeting before.
thus, our university academic clinical educators intentionally modeled how to conduct an online class meeting and student check-in via zoom, skills that were easily transferable to other platforms, such as google meet. most of our candidates engaged with google classroom, as this was their placement schools’ online learning platform. some candidates were able to teach online along with their classroom teachers because their school provided access. others had to send their lessons and materials to their p–12 clinical educators because their school did not provide access to them as teacher candidates. whatever access our candidates had to their students, we did our best to guide them through distance teaching. in fact, the university set up internet “hot spots” for our students, and several of the school districts did the same for their students, using school buses, so they would have internet access. as part of the university system in our midwest state, our university offers free use of kaltura for video creation, and several of our candidates utilized this in their teaching. our candidates also were well versed in kahoot and other online “game” applications, such as quizizz and quizlet, to keep students motivated and connected; our teacher candidates also used these apps for formative and summative content assessments. in addition to these apps, for content-specific teaching, as in mathematics, our teacher candidates needed to learn and then use such programs as equatio, jamboard, geogebra, symbolab, photomath, and wolframalpha. in a change from “normal” interactions with teacher candidates, the university faculty set up text messaging through remind, a messaging service built especially for education. in this venue, the teacher candidates felt comfortable communicating concerns, fears, depression, and content questions directly with us. it also made it possible for us to quickly and privately check in with our teacher candidates, asking them about their specific circumstances and caring for them from a distance. our teacher candidates’ advantages while our candidates had much to learn in this abrupt transition, they also had several unique advantages. one was that our teacher candidates had been in the classroom with their p–12 clinical educators during the fall semester before clinical practice. they had an established rapport with their classroom teachers, and these teachers trusted our teacher candidates as co-teachers in planning and teaching. several teachers allowed our candidates to take the lead in the online teaching. a second advantage was the rapport our teacher candidates felt among themselves as a cadre. they shared fears, frustrations, ideas, and assistance, and they felt a responsibility to support one another through this crisis. this was evident in our zoom meetings and in the protocol. a third advantage was teacher candidates’ prior experience as students in online classes themselves. many had taken several online or hybrid university courses and knew what worked well and what they needed to avoid as online teachers. as mentioned previously, our candidates understood that building relationships with students to foster student learning was the ultimate objective. final reflection what did we learn from this experience?
reflecting upon the semester, we, as university academic clinical educators, successfully stayed focused on our philosophy of preparing teachers of students, not teachers of mathematics or english. while our secondary teacher candidates did not have all the tools and technological skills needed to switch to online teaching immediately, they did know it was the relationship with the learner that was most important. throughout the semester, the main concept that we reinforced during the crisis was the importance of trust and respect. our teacher candidates must establish a rapport of trust, respect, and communication with their students; our candidates must establish a similar rapport with their p–12 clinical educators; and we must establish a rapport with our candidates. with distance learning, this is challenging. our candidates learned that it is important to take the time during zoom and google meet meetings to include activities, such as ice breakers, that develop rapport. through these activities, teachers create community with their students, even in online spaces, so students see by their actions that teachers value them as individual learners. our teacher candidates faced an unprecedented crisis while finishing their education degrees. because of our rapport with our candidates, while they struggled, they maintained the teaching objective of educating the whole student. they did not view themselves as online teachers; they viewed themselves as teachers of students. we finished their clinical experience pleased with how our candidates approached this crisis, and we believe this bodes well for them as future teachers in the ever-evolving field of education.
appendix
appendix 1. secondary education covid-19 clinical practice interview protocol.
we understand that these are strange times and that you are working with your p-12 clinical educator to offer the best learning opportunities possible for your students. because you are all teaching from a distance, we have developed this protocol in order to best understand your teaching practices and clinical experiences for the remainder of your clinical practice. please answer each question to the best of your ability. once complete, please email this to your academic clinical educator. your academic clinical educator will review it before meeting with you via zoom to discuss your experiences. we plan to use this protocol and the subsequent zoom meeting to take the place of our two remaining observations of your clinical practice. please contact your academic clinical educator with any questions.
part a. logistics
1. how many weeks of clinical practice have you completed?
2. is your school using e-learning, packets, or both to teach from a distance?
3. if so, since we last observed, please explain how you are expecting to use these tools for teaching and learning.
   a. planning:
      i. how will you co-plan or plan with your p-12 clinical educator?
      ii. what resources are you using to create meaningful learning experiences?
   b. instruction:
      i. how will you deliver the instructional material in a meaningful way?
      ii. what will be the process for delivering the material?
   c. evaluation:
      i. how will you know if this material has created knowledge?
      ii. formative/summative?
   d. experience:
      i. how will you make this a meaningful experience for your students, who are already suffering from angst due to the disruption of everyday life?
4. if not, how do you and your p-12 clinical educator plan to teach from a distance? please explain in as much detail as possible.
5. how much of your unit were you able to teach before your school changed to distance learning? were you able to get data to use in the student learning project? what is the status of this project?
6. what ideas do you have for showing student growth if you were not able to complete the data-gathering portion of a unit (pre-assessment, post-assessment), either with a unit you created or with a section of distance learning? feel free to brainstorm ideas and ask questions here.
7. has anything changed since we last spoke to one another? if so, what is new?
part b. teaching
please email your lesson plan along with this completed form and any questions you may have for your academic clinical educator.
1. how are you and your p-12 clinical educator adjusting to changes?
2. how much contact do you have with your students? what efforts are you making to encourage students in their distance learning?
3. how are you assessing your students’ work?
4. what data do you plan to gather to show student growth? what data have you already gathered?
5. have you been able to upload any of the taskstream assignments (e.g., ability to plan your unit plan, impact evidence of student learning) onto canvas? if so, which ones have you done and which ones have you not done?
part c. closure
1. how are you practicing self-care?
2. what are your current struggles?
3. what concerns do you have for yourself/your loved ones?
4. what do you need from me (your academic clinical educator)?
5. how have you focused on your students’ learning?
references
griggs, b., sullivan-losey, d., & zollman, a. (2018). more swim—less sink: co-teaching advantages for middle and secondary teacher candidates. in j. hollenbeck (ed.), the american process: uniting all in one (2nd ed., pp. 153-157). dubuque, ia: kendall hunt.
maslow, a. (1954). motivation and personality. new york, ny: harper.
journal of teaching and learning with technology, vol. 3, no. 1, june 2014, pp. 72-89. doi: 10.14434/jotlt.v3n1.3944 faculty perceptions of webcasting in health sciences education barbara a. gushrowski1 and laura m. romito2 abstract: pre-recorded lectures (podcasts) and recordings of live lectures (lecture capture) are now everyday occurrences on many college campuses. student use and opinions of these technologies have been frequently studied. however, there has been little reported on how faculty perceive these technologies. this article reports the results from a 2010 survey of dental, medical, and nursing faculty about their experiences with podcast/lecture capture technologies as teaching tools. a 46-item survey was distributed electronically to full-time faculty at the schools of dentistry, medicine, and nursing on the campus of an urban university in fall 2010 to determine their experiences and perceptions of podcast/lecture capture technologies as teaching tools. of the 398 respondents, 32% employed lecture capture while only 2% used podcasting. of those faculty not currently recording materials, 83 (68%) stated that they plan to do so in the next 2 years. lack of time (26, 24%) and lack of training (24, 22%) were the major reasons stated for not recording course content. although a large number of faculty believe student learning has improved through the use of these technologies (74%, n=86), few stated that test scores have improved following implementation of electronic delivery of course materials (29%, n=34).
there was no correlation between the use of podcast/lecture capture technologies and faculty gender, school, or years of teaching. a wide array of technologies to record lectures and present additional course materials electronically are in use in the health sciences programs on the campus. overall, faculty view these technologies in a favorable light. keywords: podcasting, lecture capture, health sciences, faculty perceptions introduction currently the term “podcast” describes both audio and/or video files that can be downloaded and played on a personal computer or mobile device. for example, a lecture based on a microsoft powerpoint slideshow, along with a recording of the instructor’s narration, can be downloaded and played on a laptop computer. such video podcasts can be pre-recorded and distributed in advance or in lieu of class, or they can be generated during the class session as “lecture capture” and made available subsequent to the class session. for the purposes of this paper, a podcast is defined as any presentation that is pre-recorded, and lecture capture refers to a presentation that is recorded live. pre-recorded lectures, supplemental and study materials, recordings of live lectures, and streaming live video feeds of a lecture are now everyday occurrences on many college campuses (owston, lupshenyuk, & wideman, 2011). the use of podcasting, lecture capture, and other electronic delivery mechanisms has, in a relatively short period of time, become an accepted practice of instructional delivery in health science programs (nast, schafer-hesterberg, zielke, sterry, & rzany, 2009; walmsley, lambe, perryer, & hill, 2009; zanussi, paget, tworek, & mclaughlin, 2011). 1 associate librarian, indiana university school of dentistry, bgushrow@iu.edu 2 associate professor, department of oral biology, indiana university school of dentistry, lromitoc@iu.edu these materials are made available to students via itunes, school websites, or proprietary courseware products. despite the expanded use of these new technologies and their popularity with many students, there is a dearth of information regarding faculty use and perceptions of these instructional delivery methods. thus, the purpose of this study was to gather data about how faculty across various health science professions at one large urban midwestern university campus perceive these new technologies. specifically, we posed the following research questions:
1. to what extent are health sciences faculty using these technologies?
2. is there a difference in use and perceptions of webcasting technologies among faculty based on health science program, gender, or years of teaching experience?
3. what do faculty perceive to be the advantages/disadvantages for themselves and for their students in using these technologies?
4. what, if anything, do faculty perceive as barriers to using these technologies?
literature review the increasing use of these technologies in education is reflected in the growing number of articles devoted to the topic. rainsbury and mcdonnell (2006) reported that a search of the pubmed database in 2006 found only 3 articles about podcasting in health sciences education. in 2010, “webcasts as topic” was added as a mesh term by the national library of medicine, and ‘webcast’ is now listed in pubmed as a publication type.
a 2011 pubmed search on podcasting in health sciences education yielded over 100 articles. many of the articles published over the past 5 years fall into 3 broad categories: basic how-to, student satisfaction, and student learning. articles in the how-to category, many of which were published from 2006-2008, define key terms, describe the technologies, and outline methods of producing podcasts and distributing recordings (cain & fox, 2009; corl, johnson, rowell, & fishman, 2008; elkind, 2009; hopp, 2010; jham, duraes, strassler, & sensi, 2008; kennedy, gray, & tse, 2008; long & edwards, 2010; mccartney, 2006; rowell, corl, johnson, & fishman, 2006; ruiz, mintzer, & leipzig, 2006). authors have used a variety of theoretical frameworks to explain student satisfaction with the technology and the method of content delivery. kardong-edgren and emerson (2010) use constructivist theory to explain that students who download and listen to a podcast may expect this activity to improve their grade, thereby making the lecture recording more meaningful. they further use five constructs to "explain a user's motivation for seeking, using, and continuing to use an electronic media technology: cognitive needs, affective needs, personal integrative needs, social integrative needs, and entertainment needs" (kardong-edgren & emerson, 2010). stiffler et al. state that educational podcasting is consistent with "…siemens' digital age orientation to learning and other connectivism theorists." connectivism theorists assert that knowledge exists outside the individual and that in order for students to learn, this knowledge must connect "…to the right people at the right time and in the right context" (stiffler, stoten, & cullen, 2010). vogt et al. also discuss siemens' connectivism theory along with mayer's multimedia learning theory – that students will learn through several avenues, including visual and auditory – to frame their study of undergraduate nursing students' learning and satisfaction with podcasting (vogt, schaffner, ribar, & chavez, 2010). others have reported survey results that focused on student satisfaction with the technology and the method of content delivery, though without a theoretical framework (bollmeier, wenger, & forinash, 2010; forbes & hickey, 2008; lymn & bowskill, 2010; mckinney & page, 2009; nast et al., 2009; patasi, boozary, hincke, & jalali, 2009; pilarski, alan johnstone, pettepher, & osheroff, 2008; reynolds, mason, & eaton, 2008; schlairet, 2010; shantikumar, 2009; walmsley et al., 2009). in addition, papers have reported students’ perception of the value of podcasts as learning and exam preparation tools. more recently, efforts have been undertaken to assess these technologies in light of student learning outcomes. bollmeier et al. posit that cognitive load theory may explain why recorded lectures may improve learning. cognitive load theory describes learning taking place at three levels – short-term, working, and long-term memory. information is first processed through short-term memory into working memory. when too much information, or poorly organized information, is processed, the constraints on working memory do not allow the information to be fully processed into long-term memory.
recorded lectures allow students to review and process the information in smaller chunks and allow time for students to reflect on the information and thus transfer it into long-term memory (bollmeier et al., 2010). studies by bhatti et al., greenfield, o'neill et al., and schreiber et al. demonstrated that, compared to standard instructional methods such as lecture, learning outcomes for students viewing podcasts are improved (bhatti et al., 2009; greenfield, 2011; o'neill, power, stevens, & humphreys, 2010; schreiber, fukuta, & gordon, 2010). however, hadley et al., nagler et al., and vogt et al. reported no significant differences in exam scores between students receiving in-person and online content delivery (hadley et al., 2010; nagler, andolsek, dossary, schlueter, & schulman, 2010; vogt et al., 2010). while the student viewpoint and opinions of these technologies have been frequently studied, to date there has been little reported on how faculty perceive these technologies. one faculty concern is decreased student attendance in class. some investigators did not find this to be a significant issue (copley, 2007; forbes & hickey, 2008; lymn & bowskill, 2010; meade, bowskill, & lymn, 2009; nast et al., 2009; pilarski et al., 2008); however, kardong-edgren and emerson (2010) found that faculty reported increasing student absenteeism after increased availability of podcasts. bhatti et al. (2009) discussed the demands on faculty time in learning and implementing these technologies. another concern that has been noted is the ease with which online materials can be broadly disseminated, which may result in the inadvertent or intentional violation of faculty intellectual property rights by students (johnson & grayden, 2006; read, 2007). one recent paper does report on some aspects of faculty views on webcasting in the classroom. a survey of 66 north american dental schools was conducted about the use of lecture recordings in dental education (horvath et al., 2013). several questions on the survey related specifically to faculty preparation for using the technologies and barriers experienced by faculty in implementation. nearly half of those responding to questions about faculty preparation (13) reported that formal training was available for faculty on the use of the recording technology, while 26% (7) reported no preparation or training prior to implementation. the barriers most reported were faculty resistance, technology problems, concerns about intellectual property, and fears that attendance at face-to-face lectures would decline. according to adoption-diffusion theories, faculty acceptance of a new instructional technology such as webcasting is a complicated, multidimensional process involving cognitive, emotional, and contextual factors. the adoption process involves the individual faculty member’s decision to utilize the technology, while diffusion refers to adoption by a collective, such as at the school or campus level. a faculty member’s perception of the new technology is influenced by numerous factors, including their perception of whether the innovation is useful and whether they would be capable of successfully employing it, as well as their observation of others’ success (or failure) with use of the technology (straub, 2009).
our study was grounded in adoption-diffusion theories such as the technology acceptance model (tam) and the unified theory of acceptance and use of technology (utaut), which purport that a faculty member’s adoption of new technology is based on his/her perceptions of the ease of use and utility of that technology. additionally, the utaut considers whether faculty feel social/environmental pressure to use the technology and the extent to which they perceive institutional support for its use. other factors moderating the decision to adopt a new technology such as podcasting that are also addressed by this theory are the age, gender, and experience of the faculty (straub, 2009). as such, these constructs were incorporated into our survey. in an effort to elucidate faculty perceptions of podcast/lecture capture technologies as teaching tools in health sciences education, we conducted a campus-wide survey of dental, medical, and nursing faculty about their experiences with these technologies. this article reports the results of the survey, in which faculty were asked about the following: the extent to which they use these technologies, the system/software used, perceived advantages and disadvantages for themselves and their students, and the effects on student learning outcomes. methods study population the study participants were comprised of full-time faculty from the schools of dentistry, medicine, and nursing on the campus of an urban midwestern university. this campus is predominantly a health sciences campus. courses in health sciences programs such as medicine, dentistry, and nursing tend to have traditional content-dense lectures, which are amenable to these webcasting technologies. upon our request, a list of names and campus email addresses of all full-time health sciences faculty was compiled by each of the respective schools and sent to us. a total of 1454 names and email addresses were submitted, and all were contacted by email and asked to participate in the voluntary, confidential survey. survey instrument in 2009, we conducted a pilot survey at the school of dentistry, and many of the items from that survey were included in the current study. the 2009 survey consisted of 37 items, including multiple-choice and yes/no questions as well as open-ended questions, that focused on the following: advantages and disadvantages for students and faculty in employing podcasts and lecture capture, barriers to implementation, future interest in using these technologies, and student learning outcomes. in the current study, additional yes/no questions were added, such as "does your school use any type of lecture capture or podcasting system?" additional multiple-choice questions asked about the specific technology systems available at the schools and how these systems are managed. demographic questions about the number of years of teaching experience and number of years teaching at this campus were also included. with these additions, the current survey contained a total of 46 items. the survey software, qualtrics™ (provo, ut), enabled a branching mechanism wherein not all questions were delivered to all participants; rather, questions were delivered based on responses to key survey items. for example, based on the response to the question "how have you used the lecture capture or podcasting system?"
for example, based on the response to the question "how have you used the lecture capture or podcasting system?" the participant was directed to additional questions related to the ease or difficulty of developing podcasts or lecture captures, or to plans for future use of the technologies. the survey included several open-ended items that allowed participants to comment freely on the advantages and disadvantages to faculty and students of using podcast and lecture capture technologies. faculty who reported not already using the technology were asked to identify their perceived barriers to adoption and what would facilitate their use of webcasting technologies.

survey administration

following review and approval of the survey instrument and study protocol by the university institutional review board, we distributed the 46-item survey in the fall semester of 2010 via email. the initial email message described the purpose of the survey and invited participation by the 1454 full-time health sciences faculty. the email invitation indicated that the study was voluntary, participation implied consent, all responses were confidential, and results would be reported in aggregate and not linked to any individual respondent. the message also included a link to the survey. while we did collect limited demographic data, we did not gather any personally identifiable information in the survey, and faculty were not offered any incentives to participate. the survey was open for 3 weeks, and responses were password protected and stored on the qualtrics™ server. the qualtrics™ software is equipped to track non-responders, so we composed follow-up messages encouraging completion of the survey, which the software delivered to the non-responders in weeks 2 and 3. after collection, the data were cleaned, coded, and analyzed. descriptive statistics were obtained, and qualitative and quantitative analyses were performed. quantitative analysis included frequencies and percentages and somers' d, phi, and cramér's v tests of correlation. qualitative analysis of data from the open-ended survey items generated several response categories based on common themes. we analyzed the data using spss statistical software (v. 19.0, spss inc., chicago, il, 2011).

results

a total of 398 health sciences faculty completed the survey, for an aggregate response rate of 27%. the response rate varied by health science school as follows: dentistry, 64% (n=69); nursing, 57% (n=55); and medicine, 27.5% (n=274). males accounted for 60% of the participants and females for 40% (n=338). reported years of teaching experience ranged from less than 1 year to over 30 years, with approximately 50% of respondents having taught at least 15 years (n=341). reported years of teaching at this campus had an identical range, but with 70% reporting 15 years or less at this campus (n=334). of the total number of respondents, 128 (32%) used lecture capture software to record their live lectures, 9 (2%) pre-recorded podcasts, 27 (7%) used both methods of recording, and 121 (30%) did not use either recording method. the remaining 113 faculty (28%) did not answer this question. eighty-one respondents (20%) were not aware that their respective schools had podcast/lecture capture systems available. of those faculty reporting non-use of the technologies, 83 (68%) indicated they would consider recording a podcast or lecture in the next 2 years, 33 (27%) would not consider doing so, and 5 (4%) did not answer this question.
those considering making recordings indicated that lack of time (26; 24%) and lack of training (24; 22%) were the two biggest factors preventing them from adopting these technologies. the recording software used varies widely between health science schools. fifteen separate software packages were identified (see appendix a), and many respondents indicated they used more than one of these. recording systems used for lecture capture are available on fixed workstations (76; 49%), portable devices (45; 29%), or both (34; 22%), and are managed to a large degree by school or university information technology departments. lecture capture software is available in large lecture halls seating over 125 as well as small classrooms that seat fewer than 30. a relatively large number of faculty (98; 32%) did not know the name of the system used by their school. there is little standardization or consistency in the starting and stopping protocols for lecture capture. these procedures are carried out by school support staff (97; 36%), faculty (90; 33%), campus information technology staff (50; 18%), or students (33; 12%), or are automated by the system (30; 11%). in addition, 44 faculty (16%) indicated that the initiation and rendering of lecture capture was conducted by some means other than the aforementioned methods; furthermore, of these 44 respondents, 31 did not know how the recordings were started and stopped. podcasting software for pre-recording lectures was used by 36 respondents (13%), 9 of whom used this method exclusively, while 27 used podcasts in addition to lecture capture. the podcast software used was often the same as that used for lecture capture. only one exclusively podcast system was named. faculty who responded to questions concerning the most difficult aspects of webcasting (n=135) ranked technical issues (43; 32%) and learning the software (34; 25%) as the two greatest challenges; however, many faculty (47; 35%) indicated that there were no difficulties. distribution of the recordings is, for the most part, contained behind firewalls. course management systems are used as the repositories by 149 respondents (58%). other password-protected sites, such as an itunes private channel and departmental websites and wikis, are used by 31 respondents (12%). only 5 faculty (2%) reported public distribution of their recordings on itunes and youtube public channels. forty-eight respondents (19%) indicated they did not know how the recordings were distributed. faculty who reported using webcasting technologies (n=137) were asked if they believed that use of these technologies has improved student learning. of the 86 responses obtained, 12 (14%) believed learning was not improved, 10 (12%) were unsure, and 64 (74%) believed learning is improved by the use of these technologies. faculty were then asked if students performed better on exams since the introduction of podcasts/lecture captures than in years prior to use of these technologies. of the 34 respondents, 10 (29%) indicated scores had improved, 5 (15%) were unsure, and 19 (56%) indicated that using the technologies did not improve their students' exam performance. we performed correlation analyses to determine relationships between the use of podcasting or lecture capture technologies and the following four variables: school affiliation, faculty gender, total number of years teaching, and number of years teaching at this campus. our results indicate there was no correlation between any of these variables and the use of podcasting technologies.

table 1. relationship between use of podcast/lecture capture and faculty characteristics

school affiliation (n=280)
  use technologies:       dentistry 33, medicine 108, nursing 21
  don't use technologies: dentistry 27, medicine 73, nursing 18
  correlation*:           v = .050 (278), p = .704

gender (n=260)
  use technologies:       female 58, male 87
  don't use technologies: female 51, male 64
  correlation*:           rφ = .044 (258), p = .480

yrs. teaching (n=252)
  use technologies:       <1: 2, 1-5: 20, 6-10: 25, 11-15: 22, 16-20: 19, 21-25: 18, 26-30: 17, >30: 17
  don't use technologies: <1: 4, 1-5: 19, 6-10: 20, 11-15: 13, 16-20: 12, 21-25: 14, 26-30: 17, >30: 11
  correlation*:           d = -.021 (250), p = .624

yrs. teaching at this campus (n=119)
  use technologies:       <1: 6, 1-5: 28, 6-10: 28, 11-15: 15, 16-20: 12, 21-25: 10, 26-30: 8, >30: 6
  don't use technologies: <1: 2, 1-5: 38, 6-10: 37, 11-15: 21, 16-20: 19, 21-25: 10, 26-30: 8, >30: 6
  correlation*:           d = .006 (117), p = .566

*confidence interval of all correlations is 95%.

tables 2 and 3 summarize basic themes that we identified from content analyses of participants' free-text responses to open-ended survey items regarding the advantages and disadvantages to faculty and students in the use of podcast and lecture capture software. overwhelmingly (n=80), faculty reported that an advantage to students was the ability to use the recordings to review, as often as needed, difficult concepts for improved comprehension and exam preparation. one limitation of this study is the response rate. although the school of dentistry and school of nursing generated a 64% and 57% response rate, respectively, the school of medicine response rate was only 24.5%. this may be attributed to the email list provided by the school of medicine, which included all full-time faculty, many of whom are exclusively involved in research and/or clinical teaching. we were unable to separate these individuals from faculty who engage in classroom instruction. other factors may account for the low response rate. asch et al. (1997) reported a model predicting response rates which revealed that physicians have a 9.6% lower response rate on surveys than non-physicians, and anonymous surveys have a 9% lower response rate. there are several methods recounted in the literature that attempt to assess and minimize response bias, which can occur in even a high-response-rate survey (fillion, 1976; lin & schaeffer, 1995; hikmet, 2003; menachemi, 2011; asch, 1997; ford & bammer, 2009). two methods, comparing demographic characteristics of respondents to non-respondents and contacting non-respondents following completion of the survey, are not possible with anonymous surveys such as this one.
a third method, wave analysis (hikmet & chen, 2003; menachemi et al., 2006; montori et al., 2005), involves comparing the survey answers of respondents who complete the survey in identifiable time units. these groups can be identified as early and late responders, or as fast, medium, and slow responders (ford & bammer, 2009), based on whether they completed the survey following the initial call or following subsequent calls. we chose wave analysis to determine whether responses to the questions or demographic characteristics were significantly different among the three groups of respondents. following the initial email request we received 245 responses, following the first reminder 107 responses, and following the second and final reminder 53 responses. we performed a chi-square analysis comparing the characteristics of gender, school, and number of years teaching. we performed the same analysis on the attribute of use versus non-use of the technology, which may have affected participation. we found no statistically significant differences in the responses between the three groups (table 4). despite the low response rate, we have demonstrated that the characteristics of the respondents are similar to those of the non-respondents, and any bias that might be present is unlikely to meaningfully impact our conclusions.

table 2a. faculty perceptions of the benefits of podcast/lecture capture for students (advantage to students, n=100)

can review materials as often as needed (80): "students can listen as often and when they like." "allows students to hear and see the content for review or exam preparation purposes." "students can review the lecture for better understanding." "they can go back and review content they did not understand the first time." "students have reported they like to go back and listen to them again before exams."

allows asynchronous learning opportunities (20): "they manage their own time and repeat sessions when needed." "allow the students take the lecture at whatever time desired." "view on their own time."

table 2b. faculty perceptions of the benefits of podcast/lecture capture for faculty (advantage to faculty, n=102)

once recorded, lecture is widely available (36): "can record once and play for multiple classes." "good for snow days, in case class would be cancelled you still have a way to cover material." "distribute to larger audiences with less time." "wider distribution of our materials."

none (31)

improved lecture quality (20): "rather than spending time lecturing i can view outcomes, edit, enhance & adapt course material." "able to be consistent in the instruction across numerous sessions." "helps me refine what is important." "review and make improvements on delivery."

helpful to the students (15): "it gives the students another way of revisiting the lecture." "[students] have raved about these podcasts as adding richness." "they appreciate that we are trying to integrate technology for them into the presentation."

table 3a. faculty perceptions of the disadvantages of podcast/lecture capture for students (disadvantages to students, n=97)

none/don't know (29)

inability to interact with instructor (23): "loss of learner teacher interactions." "no interaction with lecturer." "they can't ask a question to clarify as they could during a live lecture." "i would assume it is less interactive for them."

less likely to attend class (18): "they will have incentive to skip live lectures." "it provides an outlet/excuse for students not to attend lecture." "reliance on the podcasts and thinking they do not need to attend class."

technology issues (17): "some students in rural areas have difficulty accessing them due to tech issues." "very large file size." "accessing another system to view the lectures." "some students do not have a computer at home."

missing material delivered that is not recorded (5): "miss questions asked in class; content before or after the recording is being made." "they miss any visual material not on the screen and student questions."

lack of student engagement (5): "they can pay less attention in class." "may not pay attention in lecture as they have a fallback option." "may not be as engaged if watching lecture remotely."

table 3b. faculty perceptions of the disadvantages of podcast/lecture capture for faculty (disadvantages to faculty, n=87)

none (28)

technology issues (28): "getting access to the equipment." "when equipment did not work, this was a nightmare." "cumbersome recording." "time consuming to get software up and running."

low class attendance (14): "may reduce class attendance." "students don't come to class." "students don't feel obligated to attend and are unable to participate in discussion."

little faculty-student interaction (9): "no audience interaction." "i like to give lectures that are interactive and can't do that with a recording." "lack of interaction with learners; can't gauge if there are problems with the message." "discourages the use of discussion in class."

time-consuming to produce (8): "finding time to record them if not done live." "pre-recorded podcasts can take a lot of time to produce." "time to do it and learn the software/hardware."

discussion

the purpose of this study was to assess faculty use of podcast and lecture capture technologies in this campus's health sciences education programs. much has been written about these instructional technologies from the student's point of view. we wanted to hear from faculty about their experiences with this relatively new method of delivering instruction and their perceptions of the advantages and disadvantages of doing so. specifically, our research questions were intended to determine the following: 1) the extent to which health sciences faculty are using these technologies, 2) differences in use and perceptions of webcasting technologies among faculty based on health science program, faculty gender, or years of teaching experience, 3) faculty perceptions of the advantages/disadvantages for themselves and for their students, and 4) perceived barriers to using the technologies. regarding the extent to which faculty use webcasting technologies, although lecture capture and podcasting software systems are in use at each of the health sciences schools represented by the survey, only about one third of the faculty respondents reported using them. furthermore, one in five faculty reported that they did not know these systems were available, and 30% do not use them as teaching tools. we found that of the 34% who do use these technologies, the majority are using lecture capture methods rather than pre-recording materials for their students. additionally, although many of these faculty did not know the name of the available software product or system, this did not deter them from producing the recordings. our study failed to reveal any correlation between faculty gender and the use of podcast or lecture capture technologies. much research has been conducted on the issue of gender and technology. studies have been conducted on gender differences in the perception of technology (brunner & bennett, 1998), confidence in using technology (hon keung & alison lai fong, 2012), acceptance of technology (padilla-meléndez, del aguila-obra, & garrido-moreno, 2013), and attitude toward technology (bain & rice, 2006).
many of these studies focus on students in k-12, though some recent work has been done on students in teacher education programs (naaz, 2012; su luan & hanafi, 2007). while these studies show there are some gender differences in approaches to technology, we found nothing conclusive in the literature about faculty gender differences in relation to their use of technology in the classroom.

table 4. comparison of three waves of respondents on attributes that may have influenced participation

                             fast (n=245)   medium (n=107)   slow (n=53)   p-value
gender
  male                       123 (61.5%)    51 (58.6%)       32 (62.7%)
  female                     77 (38.5%)     36 (41.4%)       19 (37.2%)
  total                      200 (100%)     87 (100%)        51 (100%)     .881
school
  dentistry                  38 (15.7%)     21 (20.6%)       9 (17.0%)
  medicine                   170 (70.2%)    65 (63.7%)       39 (73.6%)
  nursing                    34 (14.0%)     16 (15.7%)       5 (9.4%)
  total                      242 (100%)     102 (100%)       53 (100%)     .622
years teaching
  don't teach/didn't answer  50 (20.4%)     22 (20.6%)       2 (3.8%)
  <1                         1 (0.4%)       4 (3.7%)         1 (1.9%)
  1-5                        35 (14.3%)     19 (17.8%)       11 (20.8%)
  6-10                       33 (13.5%)     11 (10.3%)       11 (20.8%)
  11-15                      27 (11.0%)     18 (16.8%)       5 (9.4%)
  16-20                      26 (10.6%)     6 (5.6%)         8 (15.1%)
  21-25                      28 (11.4%)     10 (9.3%)        7 (13.2%)
  26-30                      29 (11.8%)     8 (7.5%)         4 (7.5%)
  >30                        16 (6.5%)      9 (8.4%)         4 (7.5%)
  total                      245 (100%)     107 (100%)       53 (100%)     .059
technology
  use                        96 (57.1%)     48 (64.9%)       18 (46.1%)
  don't use                  72 (42.9%)     26 (35.1%)       21 (53.8%)
  total                      168 (100%)     74 (100%)        39 (100%)     .265

we likewise were unable to find any correlations between use of these technologies and health science school, total years of teaching, or years of teaching at this campus. we found that faculty who have been teaching for only a few years are no more likely to use the technologies than faculty who have been teaching for 20 years or more. we hypothesized that faculty with a long history at this campus would be more likely to use these technologies due to the tradition and culture of this urban campus, which was an early adopter of learning technologies and strongly promotes and supports their use in the classroom. however, our findings did not support this hypothesis. overall, faculty perceived the webcasting technologies to be advantageous. the number of comments regarding advantages to students and faculty (171) outnumbered the comments on the disadvantages (127). one advantage listed numerous times is that recordings can be viewed by students anytime; this may also be a disadvantage, in that students who can view the recordings anytime may not attend class. other investigators (bhatti et al., 2009; long & edwards, 2010) have also cited as advantages the convenience and flexibility of podcasts as well as their ability to be widely disseminated. schreiber et al. (2010) noted that for students with certain learning styles, or slower learners, the ability to repeatedly review the material is a major benefit. some of the shortcomings of podcasts/lecture capture identified here were also identified by other authors, including a lack of student engagement (long & edwards, 2010; schreiber et al., 2010) and decreased motivation to attend class (schreiber et al., 2010). in the current study, technical issues were identified as problems for both faculty and students. similarly, bhatti et al. (2009) noted that students may have difficulty with online accessibility. previous studies have identified other drawbacks such as technical issues with hardware/software systems and production time (bhatti et al., 2009; jham et al., 2008).
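as a brief technical aside before turning to barriers: the association tests reported in tables 1 and 4 are ordinary contingency-table statistics that readers can recompute from the published counts. the authors used spss; the sketch below substitutes python with scipy (an assumption for illustration, not the authors' toolchain) and reproduces the gender test from table 1 (rφ = .044, p = .480).

```python
# recompute the gender association from table 1 using a chi-square test.
# counts are taken directly from table 1: rows = use / don't use the
# technologies; columns = female, male.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[58, 87],
                  [51, 64]])

# correction=False gives the uncorrected chi-square underlying the reported
# phi coefficient (scipy applies yates' correction to 2x2 tables by default).
chi2, p, dof, expected = chi2_contingency(table, correction=False)

n = table.sum()
phi = np.sqrt(chi2 / n)  # phi coefficient for a 2x2 table
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # equals phi for 2x2

print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}, phi = {phi:.3f}")
# expected output (approximately): chi2 = 0.498, df = 1, p = 0.480, phi = 0.044
```

the same chi2_contingency call, applied block by block to the counts in table 4, yields the wave-comparison p-values; cramér's v generalizes phi to tables larger than 2x2 and reduces to it in the 2x2 case.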
the current study found that a perceived lack of time and training were the principal barriers to faculty adoption of the webcasting technologies. though the wide variety of available software was not presented as a problem by respondents, such an array of choices may contribute to faculty perceptions that there is too much to learn for this to be a viable method of instructional delivery. there is currently no initiative at the campus level to standardize the software, hardware, or distribution mechanism for the recordings. such standardization may encourage more use by faculty, especially those reporting lack of time and training as barriers to implementation. an important aspect of incorporating podcasting technologies is their effect on teaching and learning outcomes. interestingly, in this study we found that more faculty than not believe use of these technologies enhances learning. however, relatively few faculty had evidence of improved test scores as a result of incorporating these instructional methods. this may indicate that faculty are not routinely measuring the impact of these technologies on student learning. their perceptions of enhanced learning may also be derived from subjective measures, such as student evaluations and comments. from a pedagogical viewpoint, multimedia learning theory suggests that podcasting technologies might enhance learning by allowing students to process auditory and visual information together and by enabling them to pause and replay podcasted material, thereby using repetition to activate memory circuits (mayer, 2001). however, a randomized controlled trial of learning outcomes in medical education found no significant difference in test performance between students receiving live lectures and podcasted lectures, although students found the live lecture format more engaging (schreiber et al., 2010). other studies have shown equal or better performance among students using these technologies compared to lecture alone (bhatti et al., 2009; vogt et al., 2010; zanussi et al., 2011). it has been argued that because podcasts are usually engaged by students in a passive and solitary manner, they may actually hinder learning. however, faculty can structure podcasts to encourage more active learning by incorporating questions, interactive games, assignments, or student group activities associated with the content objectives. in addition, the style, length, and delivery of podcasted content may affect student engagement and learning (long & edwards, 2010). alternatively, if pre-recorded podcasts are utilized, class time previously used for lecture may be restructured for more interactive learning activities. a recent survey of webcasting technologies in dental education found that, as a result of using these technologies, faculty alter the way they teach (horvath et al., 2013). therefore, it is critical that faculty development efforts keep pace with these instructional technologies so that faculty can learn techniques to enhance the effectiveness and utility of such teaching tools. horvath et al. concluded that webcasting technologies may serve as a useful adjunct to the classroom environment.
to maximize the effectiveness of these technologies, the authors offered several "best practices," which included the following: having sufficient preparation time and instructional objectives for faculty, complying with copyright/intellectual property laws, providing it support, combining recordings with other classroom activities, utilizing shorter content segments rather than full-length lecture recordings, and soliciting student evaluations regarding these technologies (horvath et al., 2013). the current study was limited to full-time faculty. although many part-time faculty teach in health sciences programs, we have found from previous attempts at surveying this group that they tend to be less responsive and more difficult to contact via email than their full-time colleagues. additionally, this survey did not distinguish between tenured/tenure-track and non-tenured/non-tenure-track faculty; this factor may have an impact on teaching load, support level, and availability of instructional technology resources. despite these limitations, we believe our study findings fill a void in the literature regarding the use and perceptions of podcasting technologies by health sciences faculty. ultimately, the goal of any instructional method should be to enhance learning, and future research will explore teaching and learning outcomes resulting from the use of these technologies.

conclusion

we found a wide array of technologies to record lectures and present additional course materials electronically in use across all three health sciences schools. of the 30% of faculty who reported that they are not currently webcasting, most indicated that they plan to do so in the next 2 years. faculty identified more advantages than disadvantages for themselves and their students in using these technologies. the software and hardware will undoubtedly continue to change and develop, but these methods of delivering instructional content have gained acceptance in health sciences education at this campus. further research is needed regarding the role that faculty status (i.e., full/part-time, tenured/tenure-track) plays in faculty use of technology in the classroom, as well as faculty motivation and institutional support for using such technology. future studies should attempt to identify whether the investment in and use of such instructional technologies varies by discipline (e.g., medical schools versus engineering or law schools) or by type and size of institution, as well as the impact that these technologies have upon student learning outcomes.

acknowledgements

the authors wish to thank david zahl and steve graunke for their assistance with statistical calculations and the faculty at the schools of dentistry, medicine, and nursing for their time in completing the survey.

ethical approval

the institutional review board at indiana university, bloomington, indiana reviewed and approved the protocol, irb # ex1008-31b.

references

bain, c. d., & rice, m. l. (2006). the influence of gender on attitudes, perceptions, and uses of technology. journal of research on technology in education, 39(2), 119-132. doi: 10.1080/15391523.2006.10782476

bhatti, i., jones, k., richardson, l., foreman, d., lund, j., & tierney, g. (2011). e-learning versus lecture: which is the best approach to surgical teaching? colorectal disease, 13(4), 459-462. doi: 10.1111/j.1463-1318.2009.02173.x

bollmeier, s. g., wenger, p. j., & forinash, a. b. (2010). impact of online lecture-capture on student outcomes in a therapeutics course. american journal of pharmaceutical education, 74(7), 127. doi: 10.5688/aj7407127
brunner, c., & bennett, d. (1998). technology perceptions by gender. education digest, 63(6), 56.

cain, j., & fox, b. i. (2009). web 2.0 and pharmacy education. american journal of pharmaceutical education, 73(7), 120. doi: 10.5688/aj7307120

copley, j. (2007). audio and video podcasts of lectures for campus-based students: production and evaluation of student use. innovations in education & teaching international, 44(4), 387-399. doi: 10.1080/14703290701602805

corl, f. m., johnson, p. t., rowell, m. r., & fishman, e. k. (2008). internet-based dissemination of educational video presentations: a primer in video podcasting. ajr. american journal of roentgenology, 191(1), w23-27. doi: 10.2214/ajr.07.2637

elkind, m. s. (2009). teaching the next generation of neurologists. neurology, 72(7), 657-663. doi: 10.1212/01.wnl.0000342516.08077.55

forbes, m. o., & hickey, m. t. (2008). podcasting: implementation and evaluation in an undergraduate nursing program. nurse educator, 33(5), 224-227. doi: 10.1097/01.nne.0000334775.98018.e8

greenfield, s. (2011). podcasting: a new tool for student retention? journal of nursing education, 50(2), 112-114. doi: 10.3928/01484834-20101230-06

hadley, j., kulier, r., zamora, j., coppus, s. f., weinbrenner, s., meyerrose, b., . . . khan, k. s. (2010). effectiveness of an e-learning course in evidence-based medicine for foundation (internship) training. journal of the royal society of medicine, 103(7), 288-294. doi: 10.1258/jrsm.2010.100036

hon keung, y., & alison lai fong, c. (2012). gender difference of confidence in using technology for learning. journal of technology studies, 38(2), 74-79.

hopp, l. (2010). designing podcasts for clinical nurse specialist education. clinical nurse specialist, 24(2), 106-109. doi: 10.1097/nur.0b013e3181d33d80

horvath, z., o'donnell, j. a., johnson, l. a., karimbux, n. y., shuler, c. f., & spallek, h. (2013). use of lecture recordings in dental education: assessment of status quo and recommendations. journal of dental education, 77(11), 1431-1442.

jham, b. c., duraes, g. v., strassler, h. e., & sensi, l. g. (2008). joining the podcast revolution. journal of dental education, 72(3), 278-281.

johnson, l., & grayden, s. (2006). podcasts--an emerging form of digital publishing. international journal of computerized dentistry, 9(3), 205-218.

kardong-edgren, s., & emerson, r. (2010). student adoption and perception of lecture podcasts in undergraduate bachelor of science in nursing courses. journal of nursing education, 49(7), 398-401. doi: 10.3928/01484834-20100224-04

kennedy, g., gray, k., & tse, j. (2008). 'net generation' medical students: technological experiences of pre-clinical and clinical students. medical teacher, 30(1), 10-16. doi: 10.1080/01421590701798737

long, s. r., & edwards, p. b. (2010). podcasting: making waves in millennial education. journal for nurses in staff development, 26(3), 96-101; quiz 102-103. doi: 10.1097/nnd.0b013e3181993a6f

lymn, j., & bowskill, d. (2010). learning on the move. nursing standard, 24(31), 61.

mayer, r. e. (2001). multimedia learning. new york: cambridge university press. doi: 10.1017/cbo9781139164603

mccartney, p. r. (2006). podcasting in nursing. mcn: american journal of maternal child nursing, 31(4), 270. doi: 10.1097/00005721-200607000-00014
mckinney, a. a., & page, k. (2009). podcasts and videostreaming: useful tools to facilitate learning of pathophysiology in undergraduate nurse education? nurse education in practice, 9(6), 372-376. doi: 10.1016/j.nepr.2008.11.003

meade, o., bowskill, d., & lymn, j. s. (2009). pharmacology as a foreign language: a preliminary evaluation of podcasting as a supplementary learning tool for non-medical prescribing students. bmc medical education, 9, 74. doi: 10.1186/1472-6920-9-74

naaz, s. t. (2012). attitude of prospective teachers towards computer technology: a study. golden research thoughts, 1(9), 1-3.

nagler, a., andolsek, k., dossary, k., schlueter, j., & schulman, k. (2010). addressing the systems-based practice requirement with health policy content and educational technology. medical teacher, 32(12), e559-565. doi: 10.3109/0142159x.2010.528809

nast, a., schafer-hesterberg, g., zielke, h., sterry, w., & rzany, b. (2009). online lectures for students in dermatology: a replacement for traditional teaching or a valuable addition? journal of the european academy of dermatology and venereology, 23(9), 1039-1043. doi: 10.1111/j.1468-3083.2009.03246.x

o'neill, e., power, a., stevens, n., & humphreys, h. (2010). effectiveness of podcasts as an adjunct learning strategy in teaching clinical microbiology among medical students. journal of hospital infection, 75(1), 83-84. doi: 10.1016/j.jhin.2009.11.006

owston, r., lupshenyuk, d., & wideman, h. (2011). lecture capture in large undergraduate classes: student perceptions and academic performance. internet and higher education, 14(4), 262-268. doi: 10.1016/j.iheduc.2011.05.006

padilla-meléndez, a., del aguila-obra, a. r., & garrido-moreno, a. (2013). perceived playfulness, gender differences and technology acceptance model in a blended learning scenario. computers & education, 63, 306-317. doi: 10.1016/j.compedu.2012.12.014

patasi, b., boozary, a., hincke, m., & jalali, a. (2009). the utility of podcasts in web 2.0 human anatomy. medical education, 43(11), 1116. doi: 10.1111/j.1365-2923.2009.03471.x

pilarski, p. p., alan johnstone, d., pettepher, c. c., & osheroff, n. (2008). from music to macromolecules: using rich media/podcast lecture recordings to enhance the preclinical educational experience. medical teacher, 30(6), 630-632. doi: 10.1080/01421590802144302

rainsbury, j. w., & mcdonnell, s. m. (2006). podcasts: an educational revolution in the making? journal of the royal society of medicine, 99(9), 481-482. doi: 10.1258/jrsm.99.9.481

read, b. (2007). how to podcast campus lectures. chronicle of higher education, 53(21), a32.

reynolds, p. a., mason, r., & eaton, k. a. (2008). webcasting: casting the web more widely. british dental journal, 204(3), 145-149. doi: 10.1038/bdj.2008.55

rowell, m. r., corl, f. m., johnson, p. t., & fishman, e. k. (2006). internet-based dissemination of educational audiocasts: a primer in podcasting--how to do it. ajr. american journal of roentgenology, 186(6), 1792-1796. doi: 10.2214/ajr.05.1315

ruiz, j. g., mintzer, m. j., & leipzig, r. m. (2006). the impact of e-learning in medical education. academic medicine, 81(3), 207-212.

schlairet, m. c. (2010). efficacy of podcasting: use in undergraduate and graduate programs in a college of nursing. journal of nursing education, 1-5. doi: 10.3928/01484834-20100524-08
schreiber, b. e., fukuta, j., & gordon, f. (2010). live lecture versus video podcast in undergraduate medical education: a randomised controlled trial. bmc medical education, 10, 68. doi: 10.1186/1472-6920-10-68

shantikumar, s. (2009). from lecture theatre to portable media: students' perceptions of an enhanced podcast for revision. medical teacher, 31(6), 535-538. doi: 10.1080/01421590802365584

stiffler, d., stoten, s., & cullen, d. (2010). podcasting as an instructional supplement to online learning: a pilot study. computers, informatics, nursing: cin. doi: 10.1097/ncn.0b013e3181fc3fdf

straub, e. t. (2009). understanding technology adoption: theory and future directions for informal learning. review of educational research, 79(2), 625-649. doi: 10.3102/0034654308325896

su luan, w., & hanafi, a. (2007). gender differences in attitudes towards information technology among malaysian student teachers: a case study at universiti putra malaysia. journal of educational technology & society, 10(2), 158-169.

vogt, m., schaffner, b., ribar, a., & chavez, r. (2010). the impact of podcasting on the learning and satisfaction of undergraduate nursing students. nurse education in practice, 10(1), 38-42. doi: 10.1016/j.nepr.2009.03.006

walmsley, a. d., lambe, c. s., perryer, d. g., & hill, k. b. (2009). podcasts--an adjunct to the teaching of dentistry. british dental journal, 206(3), 157-160. doi: 10.1038/sj.bdj.2009.58

zanussi, l., paget, m., tworek, j., & mclaughlin, k. (2011). podcasting in medical education: can we turn this toy into an effective learning tool? advances in health sciences education: theory and practice. doi: 10.1007/s10459-011-9300-9

appendix a. software packages identified by faculty respondents

software name             # of users
accordant                 3
adobe captive             8
adobe connect             62
adobe presenter           31
apple podcast producer    3
camtasia                  2
echo 360                  1
elluminate                1
ishowu, quicktime pro     1
lecturnity                1
mediasite                 13
perfect meeting           1
profcast                  14
snapkast                  10
wirecast                  2

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 56-61.

using virtual environments for synchronous online courses

gregory steel (assistant professor of fine arts and new media communication, indiana university kokomo, gsteel@iuk.edu) and scott l. jones (associate professor of fine arts and new media communication, indiana university kokomo, scotjone@iuk.edu)

keywords: online instruction; virtual reality; second life

framework

the dominant paradigm for online instruction focuses on asynchronous activity (where users communicate at different times, such as via electronic mail or recorded videos), whether in traditional online courses, such as described by russell and curtis (2013), or massive open online courses (moocs), such as described by rodriguez (2012). however, little research focuses on courses centered on synchronous online education technologies (where users communicate during the same time period). synchronous communication technologies offer potentially superior options in online education settings compared to asynchronous communication technologies. media richness theory argues that media offering more non-textual cues and the possibility of immediate feedback are more effective for communication, particularly in situations where ambiguity or confusion are more likely (daft, lengel, & trevino, 1987; rice, 1992; schmitz & fulk, 1991; trevino, lengel, & daft, 1987; zmud, lind, & young, 1990).
as educational environments hold potential for ambiguity and confusion, it is likely that course formats offering richer communication could improve learning. richer media can also increase the presence of an online instructor. student perception of the social presence of an instructor has been found to be highly influential to the success of online courses (hodges & cowan, 2012). this article describes how teachers can use virtual environments to teach synchronous online classes. virtual online environments offer a potential tool for supplying rich, synchronous online communication that comes close to mimicking the traditional classroom environment. virtual environments feature detailed, 3-d settings within which users, represented by avatars, can explore and interact. while many online virtual environments exist, this paper focuses on one such environment, second life, as at the time of this writing it is free and relatively simple to use. the use of second life as an example of an online virtual environment should not be construed as a product endorsement. second life is the most studied virtual environment for education. however, while second life has been studied as an online learning tool, its use has not been studied within mostly synchronous online courses. studies have focused on its use as a tool within traditional face-to-face courses (denoyelles & seo, 2012; mayrath, traphagan, heikes, & trivedi, 2011; sierra, gutierrez, & garzon-castro, 2012; sutcliffe & alrayes, 2012), for use in part-online, part-face-to-face hybrid courses (hornik & thornburg, 2010), or for use as an additional activity for traditional online courses (mansour, bennett, & rude-parkins, 2009). combining virtual environments with other web 2.0 tools can create a largely synchronous format for online interaction that mimics much of the rich interaction of face-to-face instruction without many of the limitations imposed by geography, allowing students anywhere to take the course.

making it work

there are a variety of requirements and preparations for teaching in virtual environments, including choosing which one to employ. while there are various options available, second life was used by the lead author. owned by linden lab and started in 2003, second life is free to use. to access it, one only needs to visit its website (secondlife.com), create an account, and download and install the viewer software (available for windows, mac, and linux operating systems). before starting the course, the teacher needs to become familiar with the software to learn how to create an effective course and prepare students. in addition, when entering the course into the university course enrollment system, the instructor needs to make sure the course is clearly described so students understand its nature before they enroll; the virtual environment can surprise students used to traditional online courses, as can the synchronous format. as a synchronous platform, the course needs scheduled days and times to meet. the course description should also specify that students need a computer, broadband access, and any other hardware the instructor requires.
if audio will be used, computers will need a microphone and speakers or headphones. the instructor should also talk with campus it support staff to learn whether they will provide help for students. if many of the students in the class live within driving distance of the campus, an optional face-to-face training session before or during the first class period could be helpful, particularly if students have access to computers during the training session, such as in a computer lab or through their own laptops connected to the campus wi-fi network. the instructor also needs to locate virtual locations for students to meet; a variety of public areas suitable for teaching exist, including many created by universities. instructors can choose spaces ranging from indoor classrooms (figure 1) to a park by the eiffel tower (figure 2). users can also pay to create custom spaces. the instructor should also choose a few alternate locations in case the course needs to move during a meeting. to minimize problems during the first class period, it is a good idea to give students practice using the second life interface before the initial class, such as a series of introductory tasks that require them to create accounts and become acquainted with the virtual environment. the lead author uses class periods in the virtual environment to conduct discussion of readings, much as one would in a face-to-face classroom. it is important to establish basic rules of classroom etiquette, particularly since many students are new to the environment. the lead author has a rule that only one member of the course at a time has permission to speak using audio, because simultaneous speakers using audio can create distortion effects. his students generally interact using text chat windows, thus preventing audio problems. most students are adept at communicating in this way; given the prevalence of texting in our society, many students are skilled at communicating via brief text messages. the lead author found that this format worked well for discussions; his impression was that some students were more willing to participate in discussions in a virtual environment than in a face-to-face one. second life also allows embedded files to be displayed; for example, the instructor could open powerpoint files and use them as visuals for online lectures and discussions. the rich media environment improved the social presence of both the instructor and the students, allowing for richer social interaction than many traditional online instruction methods and thus better facilitating relationship building between the instructor and students and among the students themselves.

figure 1. an avatar stands in a space in second life modeled on a traditional classroom. there are tables and chairs on tiered levels for the students to sit at and an overhead screen at the front for the instructor's use.

figure 2. an avatar stands in a space in second life modeled after paris circa 1900. the avatar is in a park, and the eiffel tower and buildings are visible in the background.

there are a few other elements of second life instructors need to consider. while many areas of second life conform to popular taste, some areas contain adult material that might offend some students.
one should warn students of this before they start exploring. in addition, anyone can walk into class and begin interacting with students, which in some cases can be disruptive or inappropriate. while one can report inappropriate behavior, this does not immediately eliminate it. it is a good idea, as noted above, for the instructor to prearrange alternate locations for class meetings. in the case of a disruptive visitor, the instructor can tell everyone to teleport (move instantly) to the alternate location. the disruptive user will not know where everyone went and thus will be left behind. there are other limitations to synchronous online instruction. as the class is vulnerable to technological disruptions, a good backup plan is a must, such as moving the class to a chat room. in addition, students are in front of a computer and may be more tempted to multitask, perhaps by playing games, surfing the web, watching videos, or chatting with friends. moreover, many students participate from home, and during class they can face real-world distractions, including unexpected visitors, children, roommates, and pets. this format also prevents students without broadband and suitable computers and hardware from participating, and, compared to asynchronous online courses, it offers less flexibility in scheduling. while online virtual environments can simulate much of the traditional classroom environment, they benefit from being supplemented by other online tools, such as traditional course management systems, as well as social media such as facebook, twitter, and youtube. in addition, synchronous video communication, such as via skype or google+'s video chat feature, can provide additional means of adding a synchronous, social presence to such an online course. the lead author conducted "skype" office hours and was frequently logged in to many social media channels to communicate with students, thus strengthening the students' sense of interacting with a real person. furthermore, an instructor can increase connectedness with students by conducting mandatory video conferences, either one-on-one or in small groups. the instructor could do this once during the first few weeks of the semester and could require one or more follow-up conferences during the semester. as the technology matures, these strategies can be integrated into the digital community and utilized in a seamless way.

future implications

going forward, instructors do not have to forgo rich, synchronous interaction when moving from the face-to-face classroom to online instruction; online instructors can use a virtual environment that simulates many of the benefits of the traditional classroom. in addition, instructors can combine the virtual-reality synchronous classroom with online asynchronous instruction techniques. faculty can also use the virtual environment to take classes on virtual "field trips" to recreations of real-life places and other environments, thus facilitating learning. for example, students could discuss roman history or shakespeare's julius caesar while visiting a recreation of ancient rome. the same shakespeare students could explore a virtual recreation of the globe theatre, where shakespeare's plays were performed. lastly, instructors can create their own virtual environments to illustrate lessons and facilitate discussion.
as the pedagogical discourse continues to evolve and further evidence and questions arise, the flexibility and diverse nature of digital communities with a virtual environment such as second life as the hub will remain a viable option and provide a proving ground for the future of higher education.

references

daft, r. l., lengel, r. h., & trevino, l. k. (1987). message equivocality, media selection, and manager performance: implications for information systems. mis quarterly, 11, 355-366.

denoyelles, a., & seo, k. k. (2012). inspiring equal contribution and opportunity in a 3d multi-user virtual environment: bringing together men gamers and women non-gamers in second life. computers & education, 58, 21-29.

hodges, c. b., & cowan, s. f. (2012). preservice teachers' views of instructor presence in online courses. journal of digital learning in teacher education, 28(4), 139-145.

hornik, s., & thornburg, s. (2010). really engaging accounting: second life as a learning platform. issues in accounting education, 25(3), 361-378.

mansour, s., bennett, l., & rude-parkins, c. (2009). how the use of second life affects e-learners' perceptions of social interaction in online courses. journal of systemics, cybernetics & informatics, 7(2), 1-6.

mayrath, m. c., traphagan, t., heikes, e. j., & trivedi, a. (2011). instructional design best practices for second life: a case study from a college-level english course. interactive learning environments, 19(2), 125-142.

rice, r. e. (1992). task analyzability, use of new media, and effectiveness: a multi-site exploration of media richness. organization science, 3, 475-500.

rodriguez, o. (2012). moocs and the ai-stanford like courses: two successful and distinct course formats for massive open online courses. european journal of open, distance and e-learning. retrieved from http://www.eurodl.org/?p=archives&year=2012&halfyear=2&article=516

russell, v., & curtis, w. (2013). comparing a large- and small-scale online language course: an examination of teacher and learner perceptions. internet and higher education, 16, 1-13.

schmitz, j., & fulk, j. (1991). organizational colleagues, information richness, and electronic mail: a test of the social influence model of technology use. communication research, 18, 487-523.

sierra, l. m. b., gutierrez, c. l., & garzon-castro, c. l. (2012). second life as support element for learning electronic related subjects: a real case. computers & education, 58, 291-302.

sutcliffe, a., & alrayes, a. (2012). investigating user experience in second life for collaborative learning. international journal of human-computer studies, 70, 509-525.

trevino, l. k., lengel, r. h., & daft, r. l. (1987). media symbolism, media richness and media choice in organizations. communication research, 14, 553-574.

zmud, r. w., lind, m. r., & young, f. w. (1990). an attribute space for organizational communication channels. information systems research, 1, 440-457.

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 73-76.

digital discourses: implementing technology within the public speaking classroom

andrea m. davis (assistant professor of communication studies, department of fine arts and communication studies, university of south carolina upstate, amdavis2@uscupstate.edu) and desiree d. rowe (assistant professor of communication studies, department of fine arts and communication studies, university of south carolina upstate, drowe@uscupstate.edu)

keywords: public speaking, digital citizenship, podcasting, digital storytelling

framework

in this semester-long project, students utilize various digital tools to meet four outcomes within the public speaking classroom. first, we focus on the student's ability to demonstrate critical consumption of media technologies. second, students should use these technologies to narrate and curate current events. third, technology should not hinder collaboration; rather, we seek to use technology to encourage collaborative efforts that may have been impossible before its implementation. finally, we place an emphasis on the student's investment in digital citizenship. for this project we emphasize the notion of participatory culture, where individuals are part of a larger sustained cultural project that creates and facilitates (rather than just observes) the cultural production of information. in the classroom, our emphasis on participatory culture is manifested in our use of technology in relation to public speaking. we insist that students critically engage their own experiences and reactions to others' experiences, both creatively and digitally. prior to our emphasis on technology, we felt that the public speaking classroom existed in a vacuum, where the ideas expressed were barely heard by other students and were rarely engaged in relation to the outside world. considering our location in the southeastern united states, over an hour from a large metropolitan city, we turned to technology in order for students to engage on a larger, more participatory scale. finally, this project also de-emphasizes the traditional public speaking ethos of truth; rather, we encourage students to work together to push the boundaries of thinking about topics and ideas, relying on their own experiences as meaning-making.

making it work

this project was developed through our faculty development institute's technology initiative. we were asked to redesign general education courses with a technology-intensive focus. in redesigning the public speaking course, we incorporated tools students already used as well as new tools to reinvent the traditional three-speech model of public speaking. we asked students to do a digital story, podcasting, and blogging, in addition to a traditional persuasive speech. leaving the traditional speech within the curriculum was a purposeful choice, one made to allow students to compare different communicative experiences and still get a "traditional" public speaking experience. to ease collaboration and communication across the course projects, some students used twitter outside of class to ask questions of the instructor and other students; this was not mandatory, and not all students participated.

phase 1. a digital story replaced the traditional introductory speech assignment. leopold's (2010) assignment on media stories for persuasion was adapted to fit the needs of an introductory assignment.
using microsoft photostory and/or imovie, students were asked to design a digital story that would introduce them to the class. particular emphasis was placed on not merely hearing about the photographs; students were encouraged to reflexively engage the photos in order to make a coherent narrative. further, this project emphasized audience analysis, asking students what narrative they wished to share with their classmates. we held a single 75-minute class-period workshop with photostory in which students learned the basic functions: how to add photographs, arrange them, and add music/voiceover. homework included completing additional tutorials on the program. we then had a question-and-answer day the following week to deal with issues regarding both the assignment and the software. from that point, i worked with students on an as-needed basis on the project. most students requested additional help with editing the recording, including adding effects. after students created a two- to three-minute digital story, we held a presentation day where students introduced their digital stories and played them for the class. in creating the story, the students were required to consider their audience's needs as well as prior knowledge. they had to consider the effects of their visual (photos) and audio (music/voice) choices in crafting a message. students were evaluated on content (45%), including a clear narrative and the significance of the narrative and the photos used, and on delivery (45%). effective use of the photostory/imovie software, as determined by the visual/audio product, accounted for 10% of the assignment grade.

phase 2. most semesters i ask students to critique an outside presentation. in the public speaking classroom, this allows students to apply the knowledge they have used to produce their own speeches to other, perhaps more experienced, orators in the public eye. in election semesters, i instead ask students to write a critique of one of the debates. to enhance the students' critical consumption of media through their own political discourse, i also created a course blog. i divided the class into two groups, asking one group to blog the second debate and the second group to blog the third debate. the goal was to get the students to apply course concepts, but with an awareness of a public audience and political discourse. the difference between the blog and the traditional assignment is its public nature. students were required to respond to statements publicly, support their answers to a group, and be aware of the effect their message had on the larger conversation. i asked the group that was not blogging for a given debate to read and respond to the posts of the bloggers. questions i posed to the students for the blog included:

• who is the audience for this debate?
• did one candidate "win" the debate? who? why do you think so?
• what was the most effective message you heard in the debate?
• what did the nonverbal communication of each candidate convey?
• was your opinion on any of the issues changed through the debate?

students' posts focused on argument, delivery, nonverbal communication, and debate content. the responses to the posts asked questions (about communication styles and preferences as well as politics) and provided counter-narratives to the original post. some posts ended up being very lively, with over half the class adding to the discussion. this assignment carried over into class (and pre-class) discussions about what candidates could do to be more appealing to likely voters.
while an in-class discussion alone would have helped achieve our goal of dialogic engagement, the blogging component added another layer of meaning. in our experience, the students who were more reticent to engage in the in-class conversation were vocal in the blog posts. this allowed for a more substantive discussion both in person and online.

phase 3. podcasting replaced the informative speech in order to offer students a chance to "play" with a different type of technology and further explore ways to communicate an informative message. students created a 4-minute informative podcast on an issue or topic of interest to them. this was a research-based assignment, so they were required to use a minimum of five sources for the presentation. after introducing the assignment, i introduced audacity and showed them a tutorial, which covered the basic functions. their homework was to download audacity, record one minute of audio, and edit that audio in some way. i also asked them to watch/listen to at least two additional tutorials. from that point, we worked individually and in groups on the podcasts. we had one individual work day where students brought in their laptops and we listened to works in progress and dealt with issues on a case-by-case basis. students problem-solved together and taught one another about the different editing tools they had learned. the podcasts were particularly helpful because students saw podcasting as useful to many courses they would take and mentioned how useful podcasts would be in their future careers. students were evaluated on a revised informative speaking rubric. i tried to keep much of the grading criteria the same as for an informative speech, as our goal was to incorporate technology while still maintaining the course objectives. in addition to using the podcasting software effectively for 10% of the grade (through recording, using at least two editing tools, and finalizing the podcast), students were graded on a clear argument, appropriate use of sources, delivery, and outline.

future implications
the greatest challenge was overcoming students' perceived difficulties in learning new technology. for example, they were concerned that, because they had never created a podcast, they could not do so. as the semester progressed and they learned each skill set, their confidence grew. in some cases they saw ready-made applications for the tools (e.g., podcasting), and in other cases (e.g., blogging) we had to discuss ways the tool could be applied in their work. students were decidedly more enthusiastic and driven after we discussed, and they saw, the practical application of all of the tools and their relationship to public speaking. the beginning of the semester was the most challenging; not all the students had bought into the process, and i had some technology difficulties in class that slowed the buy-in. while additional practice and preparation are always helpful, i found it useful to acknowledge that moments of difficulty are to be expected. it should be noted that students didn't self-select into this section and so were expecting a traditional public speaking experience. in the future, our university is looking to correct this by marking specific course sections as "technology intensive."

appendix
appendix 1. suggested readings.
lind, s.j. (2011). teaching digital oratory: public speaking 2.0. communication teacher, 26(3), 163-169.
stommel, j. (2012). the twitter essay. hybrid pedagogy: a digital journal of teaching and technology. available at: http://www.hybridpedagogy.com/journal/files/twitter_and_the_student2point0.html. date accessed: 26 jan. 2013.
vedantham, a., & hassen, m. (2011). new media: engaging and educating the youtube generation. journal of learning spaces, 1(1). available at: http://libjournal.uncg.edu/ojs/index.php/jls/article/view/218. date accessed: 26 jan. 2013.

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 1-14.

improving oral presentations: inserting subtitles in videos for targeted feedback

hanna yang, department of law, u.s. air force academy, 2354 fairchild drive, suite 4k25, u.s. air force academy, co 80840, hanna.yang@us.af.mil
lauren f.v. scharff, director, scholarship of teaching and learning, u.s. air force academy, 2354 fairchild drive, suite 4k25, u.s. air force academy, co 80840, lauren.scharff@usafa.edu
disclaimer: the views expressed in this document are those of the authors and do not reflect the official policy or position of the u.s. air force, the department of defense, or the u.s. government.

abstract: instructors are increasingly using videotaping in addition to written summarized feedback to develop oral presentation skills, but reviewing videotapes with students can be a time-consuming process. moreover, students may find that summarized feedback, which is displaced from the video itself, is vague and unhelpful. this project investigated a new way for instructors to deliver targeted feedback within video recordings and embedded the new approach within other best practices (e.g., rubrics, guided self-reflection). we compared two groups (n = 31) across two presentations: one group first received videotapes that included interjected feedback, much like subtitles, while the other group first received raw videotapes and met face-to-face with their instructor to review their performance. despite the significant student perception that face-to-face feedback was more useful, our results showed that interjected feedback was more helpful for developing students' style skills, and there was no difference in improvement across presentations for content, organization, and response to audience. across both groups, students reported great benefit from video feedback because it provided them with a third-party perspective on their own performance. furthermore, interjected feedback provided instructors with a substantial time savings compared to the face-to-face meetings.

keywords: oral presentations, feedback, videotaping, best practices

providing meaningful feedback to students while balancing the timeliness of the feedback with its quality is a familiar struggle for most educators. this balance is particularly difficult to strike in the context of helping students improve their oral communication skills due to the ephemeral nature of the presentation. to address these challenges, some educators have turned to technology, for example, videotaping student presentations. one relatively common way that instructors use video feedback to promote student development is to schedule meetings with students to replay the videotapes and analyze the students' performance together. unfortunately, this can pose an unsustainable burden of time and coordination for both parties, especially the faculty member. further, technology alone does not provide a complete solution (amirault & visser, 2009); it should be embedded within a course design that aids and incentivizes students to conduct meaningful self-analysis and promotes the development of targeted skills.
while the few published studies available regarding the use of videotaping oral presentations share positive views of the practice, none share data on the development of oral presentation skills, nor do they address how the use of videotaping fits within a course design that embeds other, known best practices. thus, the purpose of this project was to find and assess a way to help instructors provide timely, meaningful, and sustainable feedback to students about their oral communication skills that was also likely to be used by students.

literature review
feedback is a crucial aspect of the learning and development process because it helps target specific deficiencies and strengths and provides formative guidance for development (for an overview of the evidence, see chang et al., 2012). however, most instructors will readily admit that the process of grading and providing meaningful feedback is one of the least desirable aspects of their work. further, although some students are increasingly demanding more feedback from their instructors (chang et al., 2012), a large number of students also exhibit behaviors that indicate they do not value feedback (e.g., failing to collect feedback, or quickly glancing at their grades rather than taking time to read the feedback comments). to further complicate the "messages" received by faculty, some students indicate they prefer quality feedback over timeliness, whereas others indicate that they value timeliness over quality (chang et al., 2012; winter & dye, 2004). it was within this mixed context that we approached our goal of oral presentation skill development, using technology as a tool embedded within other best practices.

oral presentations—feedback challenges and a new approach. oral presentations pose several challenges for instructors with respect to their ability to provide meaningful, formative feedback. first, in contrast to written papers, oral presentations operate on a real-time basis, so without video capture, they leave no tangible artifact that students and instructors can review and assess. second, students may perceive a lack of clarity, reliability, validity, and fairness in the criteria used for assessing oral presentation skills (e.g., cooper, 2005; price, handley, millar, & o'donovan, 2010). for example, oral presentation assessments often emphasize content more than command of the oral medium, or command of the oral medium more than content, leading to an imbalanced assessment of oral presentation skills (cooper, 2005). the uneven focus is likely due to the fact that, without a videotape to allow multiple viewings, it is difficult to pay detailed attention to both aspects (content and style) of the presentation.
a third challenge is that the nature of oral presentations does not naturally lend itself to the type of accurate, targeted commenting that instructors often provide in specific parts or margins of papers (mckeachie & svinicki, 2006), which gives students subsequent opportunities for guided self-reflection. studies have shown that feedback needs to be specific to be effective (e.g., gibbs & simpson, 2004), but students often feel that instructor feedback is vague, difficult to follow, and not useful (price et al., 2010). with only summarized feedback provided separately from the oral presentation, it is easy to understand how the perception of vague and confusing feedback could be perpetuated in the context of oral presentation feedback. finally, a fourth challenge is the timeliness of the feedback. studies have shown that if students do not receive timely feedback, they will be likely to disregard the feedback they eventually receive, based on the perception that such feedback is now irrelevant (e.g., gibbs & simpson, 2004; winter & dye, 2004). our personal experience suggests that providing feedback for an entire class takes several days or, in some cases, weeks. these observations align with those of kovach (1996), who reported that efforts to capture oral presentations on video and provide instructor feedback require a formidable amount of time, administration, and cost.

the above workload issues might suggest that the drawbacks of videotaping oral presentations outweigh the benefits. however, video capture has increasingly been used in many disciplines to provide feedback for improving oral communication skills, for example in medicine (savoldelli et al., 2006; byrne et al., 2002) and law (kovach, 1996; legal research and writing listserv responses, 2011). however, as we considered our own incorporation of videotaping student oral presentations, we realized that even the published "successes" above had shortcomings. simply providing students with videotapes does not give them the targeted guidance and feedback they need to meaningfully reflect on their videos (cooper, 2005). further, although face-to-face feedback enables targeted commenting during the meeting between the instructor and the student, it does not provide a historical artifact of targeted comments for students to review on their own. what if we could give targeted feedback in a manner that also allows students to have a permanent record of their presentation, i.e., interjected video feedback? our new approach, interjected video feedback, is textual instructor feedback that is manually inserted into a video at specific timeframes of a student's performance, much like subtitles, thereby enabling a student to replay the video and see which specific moments in his or her presentation did or did not meet the assessment criteria, and in what manner. this is akin to comments interjected in a student's written paper, which allow instructors to pinpoint specific writing issues at the precise points at which they occur, rather than in a global summary at the end of the paper. moreover, interjected video feedback can be replayed by students at their leisure, providing them with multiple opportunities to review and self-assess their oral presentation skills.
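the following python sketch is our illustration of the core mechanism, not the authors' windows live moviemaker workflow (which used that program's built-in caption tool): given a list of time-stamped feedback marks, it writes a standard .srt subtitle file that most video players can overlay on a recording. the file name and the three-second display duration are assumptions for the example.

```python
# minimal sketch: turn (time_in_seconds, feedback_code) marks into an .srt
# subtitle file; codes such as "tr-" and "to+" follow the scoring key in table 1.

def to_timestamp(seconds: float) -> str:
    """format seconds as the hh:mm:ss,mmm timestamp that .srt files require."""
    whole = int(seconds)
    ms = int(round((seconds - whole) * 1000))
    return f"{whole // 3600:02d}:{(whole % 3600) // 60:02d}:{whole % 60:02d},{ms:03d}"

def write_srt(marks, path, duration=3.0):
    """marks: list of (seconds, code) pairs; each code is shown for `duration` seconds."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, code) in enumerate(marks, start=1):
            f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(start + duration)}\n{code}\n\n")

# usage: a weak transition at 0:42 and a strong tone at 1:35
write_srt([(42.0, "tr-"), (95.5, "to+")], "presentation_feedback.srt")
```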
we also acknowledge that technology, in and of itself, rarely provides a complete solution. therefore, we incorporated as many best practices about feedback into this project as possible in order to place our use of interjected videotaped feedback in a context that both supported student learning and skill development and maintained a manageable instructor workload.

a framework of best practices. the major challenges we hoped to address with our course design and new technique were those of clarity and reliability of assessment, of student use of feedback, and of time and workload. no one best practice addresses all of these challenges, so we incorporated multiple practices: the use of a developmentally oriented rubric combined with summarized feedback, student assignments requiring review of their videotapes and responses to guided self-reflection questions, and more than one oral presentation assignment so that skills could develop. in order to test the impact of the new, targeted, interjected feedback, we randomly assigned half the students to receive it for the first presentation, while the other half received it for the second presentation.

rubrics have been shown to be a helpful tool for providing timely yet detailed feedback, as well as for explicitly conveying the instructor's expectations to students (stevens & levi, 2005; andrade, 1997). our rubric was also "developmental" in tone, in order to emphasize the process of learning. whereas some rubrics evaluate students' demonstration of assignment components (e.g., "style" or "content") using end-state terms such as "poor," "good," or "excellent," our rubric evaluated students using terms denoting progression, namely: "not acceptable," "beginning," "intermediate," and "advanced." further, along with the rubric performance-level indications, we included several sentences of summarized comments at the end of the rubric feedback form. such summarized feedback provides more context, explanation, and in-depth insight about the student's performance, and it can help students understand the connection between their performance and scores on a standardized rubric. without the benefit of a rubric, summarized feedback may be perceived as unstructured and, therefore, unclear. our self-guided student reflections also encouraged students to make links between the rubric dimensions, i.e., instructor expectations, and their performance.

as noted above, many students do not deeply process feedback, and thus they do not use that feedback to shape their future efforts. by building guided self-reflection assignments into the course, we "forced" students to review their performance (watch their own video), identify specific behaviors that linked to each rubric component, and generate steps to improve each component in subsequent presentations. this guided reflection design follows from nicol and macfarlane-dick's (2006) conclusion that students can only learn from their self-reflection if their reflection is informed by, or measurable against, specific goals, criteria, or standards.
the third best practice we incorporated, multiple opportunities for development, reflects the long-standing understanding of the role of practice in skill acquisition (e.g., newell & rosenbloom, 1980) and further promotes student use of feedback. by requiring students to come up with self-reflected steps for improvement, we more explicitly framed the oral presentations as part of a developmental process, which framed the instructor's feedback from the first presentation as part of a feed-forward process. studies have shown that students will often dismiss feedback if they believe that the feedback only pertains to a discrete assessment (gibbs & simpson, 2004; price et al., 2010). thus, this aspect of our design was incorporated to increase the value that students placed on the feedback, increasing the likelihood that they would use it to guide their development, not just because they were required to as part of the self-guided reflection assignment.

justification for research. this project was designed to evaluate the impact of interjected video feedback on the development of students' oral presentation skills and on student attitudes about the value of oral presentation feedback. we believed this new type of feedback could provide specific, targeted guidance that would support student development as well as face-to-face meetings during which the instructor and student review the video together, which has been the standard way for instructors to share targeted presentation feedback with students. further, instructor load would be reduced somewhat; a pilot study indicated that it took about half as much time for the instructor to watch a video presentation and interject the comments as to meet face-to-face with a student and share the same points. however, we acknowledge that there are qualitative differences between the interjected feedback, which is completely instructor determined, and the feedback that can occur during a face-to-face meeting, where students can direct some of the focus and also request elaboration or clarification. this personal tailoring within the face-to-face feedback process might make it more likely that students and instructors reach a common understanding of the assessment goals. on the other hand, our pilot data also indicated that some students may feel uncomfortable meeting face-to-face with instructors about their performance and prefer to watch themselves in the privacy of their own rooms. therefore, this study was designed to compare the impact of interjected video feedback with face-to-face feedback, embedded within the best practices described above, on both student performance and student attitudes.

methods

participants
participants were 31 students from two sections of a core law course for sophomores at an institution in the midwest. while students are placed into course sections randomly by the registrar's office each semester, in this case the section of students receiving the face-to-face feedback first had an average academic composite (accomp) score of 3461.69, while the students receiving interjected feedback first had an average score of 3240.6 (the maximum possible is 4,400, and most of our admitted students have a score of at least 2,500).

research design
this study incorporated a two-group design with counterbalancing across two oral presentation assignments.
one of the two sections was randomly selected to receive interjected video feedback following the first presentation, while the other section first received the raw video plus engaged in a face-to-face meeting with the instructor to review the video (n = 16 interjected; n = 15 face-to-face). the opposite type of feedback was given to each section following the second presentation. both groups for both presentations received summarized written feedback plus rubric scores (see details below) and completed the reflection assignment (see details below). dependent variables included performance scores, reflection assignment responses, and subjective feedback collected with an end-of-course questionnaire (see details below). in order to control for possible experimenter bias, a blind grader (not the instructor, and someone who did not know which students had received interjected feedback or a face-to-face meeting with the instructor after their first presentation) used a rubric to assess the videotaped performances of the two student groups (the instructor graded the presentations separately for input into the course grade).

materials
equipment and software. currently, there is no software that allows instructors to accomplish video capture and interjected instructor feedback on a real-time basis, which would be most ideal and would alleviate the stresses of time, administration, and cost. thus, we investigated several current software applications that would allow instructors to insert comments post-production (e.g., camtasia, youseeu, screencast-o-matic, windows live moviemaker). additionally, we considered lecture capture systems that simultaneously capture a video and information written within a document shown on a screen, but then the comments are spatially displaced from the video. based on cost and ease of use, we chose windows live moviemaker 2011. this software application is free and intuitive to use for the interjection of short, tailored feedback in the form of subtitles at specific points within the videos. since we ran our study, a newer version of moviemaker, windows moviemaker 2.6, was released; compared to the old version, it requires a few additional steps to interject comments. a handheld camera was used to videotape the oral presentations.

video scoring key. to streamline the interjected commenting process and to minimize students' distraction level while they viewed their videos, the instructor created and used a video scoring key (see table 1). for example, instead of inserting lengthy phrases, paragraphs, or narrative, the instructor might type "tr-" to mark that a student transitioned poorly from one subject to the next, or "to+" to indicate that a student demonstrated a very appropriate tone while making his or her legal argument. the video scoring key was based on the rubric that students were provided prior to their first and second oral advocacy exercises.

table 1. video scoring key for interjecting comments in students' presentation videos.
key | skill being assessed
k | knowledge of subject matter
s | support (law/facts) for your points
tr | transitions
l | logic of sequence
ip | information's purpose
w | word choice
p | pace
v | volume
to | tone
a | articulation (grammar, enunciation)
i | inflection (of voice)
ec | eye contact
m | movements
r | responsiveness to audience's questions/answers
e | engagement level
note. the instructor used a "+" or "-" after interjecting a key letter to indicate whether the student's skill was strong or needed improvement.
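to make the shorthand concrete, here is a small, hypothetical python sketch (not part of the authors' procedure) that expands an interjected mark such as "tr-" into the full rubric language using the key in table 1.

```python
# hypothetical decoder for interjected marks, based on table 1.
SCORING_KEY = {
    "k": "knowledge of subject matter",
    "s": "support (law/facts) for your points",
    "tr": "transitions",
    "l": "logic of sequence",
    "ip": "information's purpose",
    "w": "word choice",
    "p": "pace",
    "v": "volume",
    "to": "tone",
    "a": "articulation (grammar, enunciation)",
    "i": "inflection (of voice)",
    "ec": "eye contact",
    "m": "movements",
    "r": "responsiveness to audience's questions/answers",
    "e": "engagement level",
}

def decode(mark: str) -> str:
    """expand a mark like 'tr-' into 'transitions: needs improvement'."""
    code, sign = mark[:-1], mark[-1]
    verdict = "strong" if sign == "+" else "needs improvement"
    return f"{SCORING_KEY[code]}: {verdict}"

print(decode("tr-"))  # transitions: needs improvement
print(decode("to+"))  # tone: strong
```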
rubric. a rubric was created to address the widespread student perception that oral presentations are graded too subjectively and to guide the blind grader's scoring. each component of the rubric (content, organization, style, and responds to audience) and each level of achievement (not acceptable, beginning, intermediate, and advanced) was derived from our institution's outcomes for oral communication skills. the specific expectations for each level of achievement were tailored to both the oral advocacy focus of the course and the sophomore level of the students. each level of achievement had a small range of possible scores, with a maximum of 10 points per component.

summarized feedback. the summarized feedback included the instructor's comments as well as a compilation of in-class peer critiquers' comments. written comments in the form of full sentences were provided under headings that aligned with the rubric components: content, organization, style, and responds to audience.

guided self-reflection assignment. the guided self-reflection required students to view their videotaped performance (half of them having interjected comments) and list specific instances of both strong and weak performances under each component (content, organization, style, and responds to audience). they were required to explain why their performance would have merited a certain level of achievement (not acceptable, beginning, intermediate, or advanced), using the language from the rubric. furthermore, students were required to describe specific steps they planned to take to improve in each component. this assignment helped ensure that the students would closely review their videos, because anecdotal feedback from prior semesters indicated that many students avoided watching themselves because it made them uncomfortable. by requiring students to incorporate the language from the rubric, we created a structured framework for students to self-reflect and increased the connection between the instructor's expectations and the students' understanding of the assessment's goals.

student subjective feedback questionnaires. to ensure a more comprehensive understanding of the role of interjected feedback in developing students' oral communication skills, we created an end-of-semester questionnaire that asked students for their perceptions about the usefulness of viewing the videos, of the instructor's written feedback (rubric scores and comments), of the interjected comments in the video, of the self-guided reflection, and of the rubric criteria. two additional questions asked about the clarity of the rubric criteria and the number of times students reviewed their videos beyond what was required for the self-reflection.

procedure
during the course of one semester, students in the course were required to deliver two oral arguments, each lasting 8 minutes. during each presentation, students presented their evaluation and advocacy of a legal problem to fictional justices of the court (role-played by fellow classmates). the handheld camera was placed on a tripod and positioned to capture the speaker at a podium (the speaker stayed at the podium for the entire presentation).
each observing student was given a copy of the peer review form, which they completed as the presentation occurred and then submitted to the instructor. following the presentations, the instructor transferred the media files of the students' presentations from the handheld camera to a pc, opened the media files in windows moviemaker, and used the "caption" function to insert comments using the shorthand letters from the video scoring key. it took the instructor about 10-15 minutes to interject comments into each student's presentation; similar to grading papers, interjecting comments into the weaker presentations took longer than into the stronger ones. the videos and feedback were given to students within 4 to 8 workdays following the first presentation, and within 6 to 14 workdays following the second presentation. the feedback included the summarized written instructor comments and the rubric evaluation. upon receiving their videos and feedback, students then had up to a week to complete the guided reflection. one section of students received interjected feedback, while the other section received only a raw video of their performance and individually met with the instructor in face-to-face meetings 1 to 4 workdays after receiving the videos. students were expected to bring their completed self-reflection to the face-to-face meeting. during these meetings, the instructor played and reviewed the videos with the students, stopping at specific points to discuss their performance. each of these meetings lasted about 20-30 minutes. the same procedure was followed for both presentations, except that the sections were reversed with respect to which received interjected feedback and which received face-to-face feedback after the second oral presentation. during the final lesson of the semester, students completed a paper version of the subjective feedback questionnaire in class. no names or other identifying information were collected with the feedback, and it took approximately fifteen minutes for students to complete.

data analysis
in order to test the impact of interjected feedback compared to face-to-face feedback, we compared the two groups with respect to their performance and subjective feedback. for the performance comparisons, we had a blind scorer use the rubric to assign a total score of up to 40 points, based on his analysis of four components (content, organization, style, and responds to audience, each scored up to 10 points). for the subjective likert-scale feedback, 1 point was assigned for "not useful," "disagree," and "not likely"; 5 points were assigned for "very useful," "strongly agree," and "very likely"; and intermediate scores (2, 3, 4) were given for the progressively intermediate response options (e.g., minimally useful, somewhat useful, and useful, respectively). we categorized the open-ended responses based on common themes that appeared.
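for readers who want to run this style of analysis themselves, the sketch below shows how the 2 x 2 mixed anova and the follow-up ancova on difference scores (reported in the results) could be computed with the pandas and pingouin libraries. this is our illustration under assumed column names and file layout, not the authors' actual analysis code.

```python
# illustrative analysis sketch; assumes a csv with one row per student per
# presentation and columns: student, group, presentation (1 or 2), total, accomp.
import pandas as pd
import pingouin as pg

df = pd.read_csv("rubric_scores.csv")

# 2 (group: interjected first vs. face-to-face first) x 2 (presentation: 1 vs. 2)
# mixed anova: group is the between-subjects factor, presentation the within factor.
print(pg.mixed_anova(data=df, dv="total", within="presentation",
                     between="group", subject="student"))

# difference (improvement) scores per student, then a one-way ancova with the
# academic composite (accomp) as the covariate, as described in the results.
wide = df.pivot(index="student", columns="presentation", values="total")
gains = (wide[2] - wide[1]).rename("gain").reset_index()
gains = gains.merge(df.drop_duplicates("student")[["student", "group", "accomp"]],
                    on="student")
print(pg.ancova(data=gains, dv="gain", covar="accomp", between="group"))
```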
results

performance data—blindly scored video presentations. for each rubric component, as well as the total score, we performed a 2 (group: interjected feedback first or face-to-face feedback first) x 2 (presentation: first or second) mixed anova, with group being the between variable and presentation being the within variable. for all components and the total score, there were significant main effects of presentation, p < 0.01, and there were no main effects of group or interactions. however, accomp was higher for the group receiving face-to-face feedback first, t(24) = 1.6, p = 0.06 (one-tailed), and it significantly correlated with the scores on the students' second presentation, r(26) = 0.52, p < 0.01. therefore, we calculated difference scores based on the students' amount of improvement for each of the four component scores and the total score, and then for each we performed a single-factor, two-level ancova using accomp as the covariate. in all cases, the adjusted means led to increases in the difference score for the interjected-feedback-first group, i.e., they showed more improvement between presentations, and decreases in the difference score for the face-to-face-feedback-first group. for the component of style, the adjusted difference between the groups was nearly significant, f(1, 25) = 3.43, p = 0.08, with the interjected-feedback-first group showing more improvement across the two presentations than the face-to-face-feedback-first group (mean improvement = 1.5 compared to 0.6, respectively).

student questionnaires: likert-scale responses. in most cases, the average likert response scores indicated no difference between the two groups regarding the usefulness of the rubric criteria, the usefulness of viewing the videos on their own, the usefulness of the summarized feedback, or the usefulness of the self-reflection. in all these cases, there was generally good agreement that each aspect of the course feedback process was useful, with average scores ranging from 3.8 up to 4.4 on the 5-point scale. however, both groups indicated that they watched their videos more times after the first presentation (mean = 1.53) than after the second presentation (mean = 1.28). a 2 (group: interjected first or face-to-face first) x 2 (presentation: first or second) mixed anova for the number of times students watched their videos beyond what was required to complete the reflection assignment and meet with the instructor showed no group difference and no interaction, but a significant effect of presentation, f(1, 29) = 5.24, p = .03. further, there was a clear indication that, regardless of group, the students believed the face-to-face feedback (mean = 4.5) was more useful than the interjected feedback (mean = 3.8). thus, we also performed a 2 (group: interjected first or face-to-face first) x 2 (type of feedback: interjected or face-to-face) mixed anova for reported usefulness of feedback. regardless of what type of feedback they received first, students rated face-to-face feedback as significantly more useful, f(1, 28) = 8.33, p < 0.01. there was no main effect for group, nor was there a significant interaction.

student questionnaires: open-ended responses. students' open-ended responses showed several clear trends that help us better understand the performance and likert-scale data, and that hint at pros and cons for both the interjected and the face-to-face feedback. these comments did not show different trends based on group (whether students received interjected feedback first or face-to-face feedback first). first of all, the vast majority of students indicated the general value of having the videos to review.
for example, several noted the helpfulness of being able to view themselves as if they were a member of the audience rather than the presenter: "seeing yourself is completely different sometimes than how actually you pictured yourself doing," and "i was able to put a critique to an actual picture and see what everyone else saw." many students also made generic comments about how watching their videos helped them improve: "i learn and improve better analyzing my own video on my own time," "a lot of the times you don't notice the mistakes or habits you make so the video allowed me to break bad habits and improve," and "i think [receiving a videotaped presentation] was the most useful feedback i have ever received on an oral presentation." as noted above, all students were required to watch their videos prior to answering the guided reflections. thus, for both presentations, all students watched their video in private first, and then half of them met with the instructor for face-to-face feedback. similar to our pilot study, many students in this study also found it uncomfortable to watch themselves, even if at the same time they noted how beneficial it was to have the video recordings. example comments include: "it's very difficult to watch yourself in the video when you're not presenting and it helped give insights that i otherwise would not have noticed," "it was awkward to watch myself, but it did help accentuate idiosyncrasies during the presentation," and "allowed me to see firsthand what i was doing wrong. but it was the most awkward thing ever."

more explicitly related to the interjected feedback, many students appreciated the targeted nature of the interjected comments. example responses include: "helps identify exactly where mistakes were made," "showed specific instances to focus on," "showed positive/negative things right as they were happening," and "that was the most useful part. i saw that i did something well or poorly and i was immediately notified from the instructor's point of view." less positively, a small number of students indicated that the interjected comments were distracting or that they struggled with the abbreviations used (table 1). for example, one noted that the interjected comments were a "little confusing had to go back and look up the symbol key a couple of times and it took away from watching the video." with respect to the face-to-face feedback, students especially appreciated the depth of explanation when they met with the instructor. one student stated, "i understood more when the feedback was face to face and more personal—i also learned more about the concepts," and another student echoed this sentiment in the following comment: "[face to face] was the best feedback, even better than the written feedback because we were able to really dissect my argument and discuss the pros/cons and how to improve on other points that could have been made." others noted that the face-to-face feedback "gave a chance to go deep into the reasoning behind deficiencies and find a way to fix them," and "helped explain in detail what i could do better."

discussion
our study was designed to investigate how interjecting comments into video recordings of student oral presentations would impact student presentation skill development relative to the use of a video recording plus face-to-face feedback sessions with the instructor.
a motivation for this work was to create effective practices for students' development while managing the load on the instructor. we carefully embedded the oral presentation feedback within several other best practices for student development (e.g., use of a rubric, guided reflection to link the feedback with the presentation objectives). overall, our data indicate significant positive effects of using video recordings, with respect to both the development of students' presentation skills and their self-reported attitudes. both groups improved between their first and second presentations. however, other than for the rubric component of style, where the group receiving interjected feedback first showed a strong trend for greater improvement, there were no significant differences between groups. the trend toward a difference in improvement for the style component may be due to the fact that this component focuses on more overt behaviors (e.g., "enunciation, pace, volume, eye contact, body movements") that can be targeted more precisely within the video recordings. in contrast, the rubric components of content and organization tap into higher-level aspects of the presentations that aren't easily targeted within a few frames. further, even when some aspect of organization or content was indicated using the interjected video comments, the nature of the comments, i.e., the use of short abbreviations such as "l" to indicate something about the logic of the sequence, meant that they were not deeply informative. this example highlights the inherent tension present when balancing instructor load and quality feedback; although short abbreviations are a time-saving mechanism for instructors, they can lead to the commonly held student perception that instructor feedback is vague and difficult to apply (price et al., 2010).

the students' self-reported feedback offers further insight into the relative benefits of the interjected and face-to-face feedback. regardless of whether they received the interjected feedback first or second, students reported great value in having the videos to review, and they showed an appreciation of the targeted nature of the interjected comments. thus, even though providing students raw videotapes without anything more may not help them to reflect as effectively as possible (cooper, 2005), the videotapes still serve as a tangible artifact that allows students to view themselves in the third person and thereby gain a new perspective on their performance. furthermore, students' positive reception of the interjected comments aligns closely with gibbs and simpson's (2004) point that feedback needs to be specific to be effective. many students also explicitly noted the discomfort they felt when watching themselves, which suggests another benefit of the interjected comments: the feedback review process can be private rather than shared with the instructor. however, these same students also clearly indicated that they especially appreciated the face-to-face feedback because of its depth and personalized nature. in fact, for both groups, face-to-face feedback was rated as significantly more useful than the interjected feedback. these preferences highlight, perhaps, an unstated assumption that face-to-face meetings resulted in more "quality" feedback as opposed to interjected feedback, which was merely "timely" (winter & dye, 2004; chang et al., 2012).
one reason why students may have felt that the face-to-face meetings resulted in higher-quality feedback is that they had the opportunity to direct the discussion and engage in a dialogue with the instructor, even if, ultimately, they would have gained the same information through both interjected and summarized comments. as we move forward in considering how best to use an instructor's time and resources, we should examine the disconnect between students' perceptions and performance. after all, what we as instructors ultimately want is an improvement in student performance. if the face-to-face feedback really was so much more useful, why didn't the group receiving face-to-face feedback on the first presentation show more improvement from the first to the second presentation than the group that first received interjected feedback, especially with respect to the areas of content and organization? is it really worth an instructor's time to meet individually with each student and review the videotapes? one interpretation is that the content and organization components of performance are more cognitively challenging and require more practice to improve. in contrast, the style components may be more tangible and easier for students to develop in a shorter time period. thus, even if the face-to-face feedback was more useful for students, the amount of improvement seen from one presentation to the next would not be significant. in future semesters, development of the content and organization components could be further enhanced by requiring more than two oral presentations in order to build in more opportunities for practice. alternately, the addition of writing assignments that specifically link to the presentations would allow instructors to give more detailed, interjected written feedback on the content and organization in the papers without needing to meet face-to-face with the students.

however, we don't want to forget the benefit of the interjected comments for the development of the style component. the style and real-time audience interaction aspects of oral presentations are what distinguish oral presentations from written papers, and they are the skills we hope to develop in our students. in the interest of not overloading instructors, perhaps the more overt nature of the style elements could be captured through a peer-review process. the benefits of peer review (e.g., engagement, greater depth of processing for both the reviewer and the receiver of the review) are well documented for aspects of assignments to which students can bring some expertise (e.g., lundstrum & baker, 2009). throughout their lives, students have watched many others give presentations, and they should be able to identify stylistic aspects of presentations that were less effective, especially if given specific guidance on behaviors to note. what most students are not practiced at is watching and analyzing their own performances, especially during more awkward moments where the human tendency is to look away. thus, students could be assigned to review a small number of classmates' video recordings and, using style guidelines, insert the interjected feedback, as sketched below. the students could then watch their own videos with interjected feedback in the privacy of their own rooms.
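one simple way to organize such a peer-review rotation (our hypothetical illustration, not a procedure from this study) is a fixed roster rotation in which each student reviews the next k classmates; this guarantees that no one reviews themselves and that the reviewing load is equal.

```python
# minimal sketch: assign each student k classmates' videos to review
# via a fixed rotation through the roster.
def assign_peer_reviewers(students, k=2):
    n = len(students)
    return {
        student: [students[(i + j) % n] for j in range(1, k + 1)]
        for i, student in enumerate(students)
    }

print(assign_peer_reviewers(["ana", "ben", "cai", "dev"], k=2))
# {'ana': ['ben', 'cai'], 'ben': ['cai', 'dev'], 'cai': ['dev', 'ana'], 'dev': ['ana', 'ben']}
```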
while instructors could still note stylistic aspects during face-to-face feedback, they would be able to focus the majority of their discussion on the higher-level aspects of content and organization. in this way, instructors could maximize their time and efforts, as well as leverage peer critiquing, to provide students a well-balanced assessment of oral presentation skills that does not unduly emphasize content over command of the oral medium, or the oral medium over content (cooper, 2005).

important to note is that all students received their feedback as part of an intentional course design that incorporated best practices, such as multiple presentations to support a developmental focus (gibbs & simpson, 2004; price et al., 2010), the integrated use of the rubric (stevens & levi, 2005; andrade, 1997), and structured reflection activities that "forced" students to watch the video at least once and explicitly state steps they would take for improvement (nicol & macfarlane-dick, 2006). in other words, the use of video technology in and of itself is not a complete solution (hooper & rieber, 1995). an intentional course framework ensures more explicit overlap between the students' and instructors' understanding of the same goals (nicol & macfarlane-dick, 2006). without this framework of best practices, it's likely that the positive impact of any feedback would be decreased. in fact, the significant decrease in the number of video viewings following the second presentation compared to the first suggests that students often only move beyond the required minimum when there is a follow-on assignment that could clearly benefit from use of the feedback (gibbs & simpson, 2004; price et al., 2010). all of our best practices helped ensure that our feedback process was not a one-way street from instructor to student but, rather, part of a process involving both traditional and non-traditional forms of feedback that required active engagement from the students as well as the instructor.

also with respect to technology use, it's important to acknowledge that the use of technology brings challenges (e.g., server space to store videos, purchase costs, time to learn to use applications) (kovach, 1996), and that, despite rapid evolution, technology resources are often not designed with instructors' goals in mind. in our study, the time it took the instructor to provide the interjected comments, post-production, using the abbreviations shown in table 1 was about half the amount of time taken when meeting face-to-face. thus, we did achieve a substantial time savings. however, at 10-15 minutes per video, the total amount of time was still substantial. thus, while we personally believe there is a benefit to recording student oral presentations and to interjecting comments to give feedback, especially for style elements, we cannot ignore some of the costs associated with the approach. in sum, students crave feedback (robert & anthony, 2003), and our study indicates that video feedback can help support student development of oral presentation skills. our results also suggest that, depending upon the specific skills an instructor wants to develop, i.e., style versus content and organization, different types of feedback might be more effective.
further, our student feedback responses suggest that access to even just the raw video, without comments or a face-to-face meeting, could provide some benefit, especially with respect to general aspects of the presentation, because the videos provide students with the perspective of a member of the audience. thus, an instructor might choose different feedback options for different oral presentations throughout the semester in order to balance developmental progress and the load on the instructor. alternately, through the use of interjected comments by peers (for style elements) and face-to-face feedback by instructors (for the higher-level content and organization elements), both types of components could be effectively developed without expecting an instructor to provide both types of feedback. crucially, we should all remember that feedback needs to be implemented with best practices in mind, so that students have reason to, and take the time to, review and process the feedback. without student engagement in the feedback and development process, no development will occur.

acknowledgements
this research was made possible by contributions from james "jeremy" marsh and john hertel in the department of law, u.s. air force academy.

references
amirault, r. j., & visser, y. l. (2009). the university in periods of technological change: a historically grounded perspective. the journal of computing in higher education, 21(1).
andrade, h. (1997). understanding rubrics. educational leadership, 54, 14-17. http://www.jcu.edu/academic/planassess/pdf/assessment%20resources/rubrics/other%20rubric%20development%20resources/rubric.pdf
bloom, b. (1956). taxonomy of educational objectives: handbooks 1 to 3: the cognitive, affective, and psychomotor domain. london: longman.
byrne, a.j., sellen, a.j., jones, j.g., aitkenhead, a.r., hussain, s., gilder, f., smith, h.l., & ribes, p. (2002). effect of videotape feedback on anesthetists' performance while managing simulated anesthetic crises: a multicentre study. anaesthesia, 57, 169-182. http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2044.2002.02361.x/pdf
chang, n., watson, a.b., bakerson, m.a., williams, e.e., mcgoron, f.x., & spitzer, b. (2012). electronic feedback or handwritten feedback: what do undergraduate students prefer and why? journal of teaching and learning with technology, 1(1), 1-23. http://jotlt.indiana.edu/article/view/2043/1996
cooper, d. (2005). assessing what we have taught: the challenges faced with the assessment of oral presentation skills. proceedings herdsa, university of sydney, australia. http://conference.herdsa.org.au/2005/pdf/refereed/paper_283.pdf
gibbs, g., & simpson, c. (2004). conditions under which assessment supports students' learning. learning and teaching in higher education, 1, 3-31. http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5
hooper, s., & rieber, l.p. (1995). teaching with technology. in teaching: theory into practice. needham heights: allyn and bacon. http://www.d11.org/lrs/personalizedlearning/documents/hooper+and+reiber.pdf
kovach, k. (1996). virtual reality testing: the use of video for evaluation in legal education. journal of legal education, 46(june), 233-251.
lundstrum, k., & baker, w. (2009). to give is better than to receive: the benefits of peer review to the reviewer's own writing. journal of second language writing, 18, 30-43.
mckeachie, w.j., & svinicki, m. (2006). mckeachie's teaching tips. boston: houghton mifflin.
newell, a., & rosenbloom, p. (1980). mechanisms of skill acquisition and the law of practice. computer science department paper 2387. retrieved 16 april 2013, from http://repository.cmu.edu/compsci/2387
nicol, d. j., & macfarlane-dick, d. (2006). formative assessment and self-regulated learning: a model and seven principles of good feedback practice. studies in higher education, 31(2), 199-218. http://www.tandfonline.com/doi/pdf/10.1080/03075070600572090
price, m., handley, k., millar, j., & o'donovan, b. (2010). feedback: all that effort, but what is the effect? assessment & evaluation in higher education, 35(3), 277-289.
robert, p., & anthony, h. (2003). a study of the purposes and importance of assessment feedback. university of technology, sydney. http://epress.lib.uts.edu.au/research/bitstream/handle/10453/6323/2003002119.pdf?sequence=1
savoldelli, g.l., naik, v.n., park, j., joo, h.s., & hamstra, s.j. (2006). value of debriefing during simulated crisis management. anesthesiology, 105, 279-285. http://journals.lww.com/anesthesiology/abstract/2006/08000/value_of_debriefing_during_simulated_crisis.10.aspx
stevens, d. d., & levi, a. j. (2005). introduction to rubrics: an assessment tool to save grading time, convey effective feedback and promote student learning. sterling, va: stylus publishing, llc.
responses on legal research and writing listserv (lrwprof-l@listserv.iupui.edu), december 2011.
winter, c., & dye, v.l. (2004). an investigation into the reasons why students do not collect marked assignments and the accompanying feedback. learning and teaching projects 2003/2004. university of wolverhampton. http://wlv.openrepository.com/wlv/bitstream/2436/3780/1/an%2520investigation%2520pgs%2520133-141.pdf

journal of teaching and learning with technology, vol. 11, special issue, pp. 57-61. doi: 10.14434/jotlt.v11i1.34352

using video simulations for assessing clinical skills in speech-language pathology students

abby hemmerich, university of wisconsin-eau claire
jerry hoepner, university of wisconsin-eau claire

abstract: a common challenge for clinical training programs is helping students apply academic knowledge to clinical settings. authentic assessment using simulation offers a unique approach to bridging this gap. miller's pyramid provides a framework for competency-based education that integrates formative assessment and feedback at each stage of student learning. a multi-part assignment that builds from gathering data following a specific protocol (i.e., the basic level), moves through interpretation of data (i.e., the intermediate level), and then uses that data to direct next steps (i.e., the advanced level) scaffolds student learning toward clinical practice. review of past student assignments indicated better performance on intermediate and advanced skills when using a video-based, multicomponent assignment as compared to the original assignment design. incorporating video components allows simulation of rare clinical populations, while also replicating current telepractice service provision. by simulating patient interactions, the instructor replicates real-world challenges, allowing the students to demonstrate in-the-moment problem solving and clinical responsiveness.
keywords: competency-based education, video, videoconferencing, formative assessment

like other clinical disciplines, education in the field of speech-language pathology is increasingly shifting toward competency-based education practices (hoepner & hemmerich, 2020). historically, students received knowledge-based education with descriptions of skills in their academic courses, often without practice implementing those skills. students were expected to apply their knowledge and skills in clinical practicum. clinical supervisors supported the application and implementation processes but did not always share the same approach or perspectives as the course instructors. competency-based education (cbe) systematically and incrementally implements knowledge and skills training, ultimately measuring readiness for clinical practice (mcallister et al., 2011). medical educators often use miller's pyramid as a framework for developing knowledge, skills, and preparedness for clinical practice (lockyer et al., 2017; miller, 1990). at the base of the pyramid, instruction focuses on building declarative knowledge (i.e., knowing). the next level, knowing how, prepares students to interpret and apply that knowledge through guided demonstrations and models. in level three, showing how, students demonstrate their knowledge and skills through formative and summative competencies. finally, students are transitioned to clinical contexts, where performance is integrated into practice (i.e., doing). of course, learning and refining one's approach still take place at that final level, but students have by then reached entry-level clinical competence. hoepner and hemmerich (2020) modified this framework to encompass the knowledge, skills, and professional dispositions necessary to competently enter clinical practice in speech-language pathology. competency-based instruction and assessment in courses addresses only the bottom three levels of miller's pyramid, since doing occurs in clinical practice. taking this one step further, figure 1 depicts how iterative assignments can move students through all three course-based levels of the pyramid. each level is addressed within each assignment (i.e., knowing, knowing how, and showing how); however, expectations for skills move from basic to advanced. feedback at all levels is formative, providing guidance for the levels that follow, alongside summative feedback in the form of grades on each assignment. formative feedback at the final level is intended to carry over to clinical practice.

figure 1. modified miller's pyramid mapped to three levels of course assignments.

mapping a field-specific example to miller's pyramid. within the field of speech-language pathology, like other clinical fields, there are populations that are rare, making it difficult for students to have hands-on contact. in these cases, providing a simulated experience can be a valuable substitute. in speech-language pathology, children with cleft lip/palate are one example of a challenging population to access, as many of these children are seen in large medical centers where few graduate students get placements. the role of the speech-language pathologist (slp) with children with cleft palate involves multiple steps. initially, slps must evaluate the child and their family to identify areas of concern.
this includes reviewing medical and educational documents, interviewing the family and other healthcare and educational professionals, and completing a hands-on examination of the child. once a plan of treatment is determined, the slp plays a role in feeding, swallowing, speech, and language development, as well as serving as a resource and counselor for the patient and family. students require opportunities to practice all of these skills and receive formative feedback to hone them. table 1 provides an example of a multi-part assignment to address these complex and interrelated levels required for student learning.

multi-part assignment components

following the protocol and gathering data

as students tackle a new topic, they first need a chance to demonstrate prerequisite, foundational knowledge and basic skills for gathering relevant data. this can take multiple forms, such as completion of fact-based assignments or quizzes, but can also span higher levels of miller's pyramid, such as demonstrating a skill with a standard case. in assignment 1 (see table 1), students first review materials and complete a protocol-based assignment by creating a plan for their oral mechanism exam and speech sound testing (i.e., planning data collection), and then implement that plan in a video-recorded submission where they complete the exam and testing on another person (i.e., actual data collection). this fits into the basic skills level for multiple reasons. first, the oral mechanism exam and speech testing students perform are relatively standardized; we are building skills in systematic data collection, ensuring the quality of their data for interpretation. second, the video-recorded submission demonstrates that they know how to complete these skills and does not require the higher-level skill of interpretation. multiple repetitions of this exam with individuals who have typical function provide a good baseline for assessing individuals who demonstrate deviations from that norm.

interpreting and reporting data

once students are comfortable with discipline-specific techniques for gathering information or data, they must learn what to do with those data (i.e., intermediate skill level). interpretation of data requires a deeper level of knowledge and the ability to compare results to expectations. in the example in table 1, this means applying their knowledge of normal oral motor function and typical speech to the results provided by the instructor. in this specific situation, the instructor provides a video recording with atypical findings because students do not have access to this clinical population. thus, this becomes a simulation out of necessity and provides the option of viewing the video multiple times. students review the video of the clinical exam and the speech testing results, simulating the interpretation that would occur if the patient were present and distinguishing typical from atypical performance. their interpretations lead them to final clinical decisions (i.e., conclusions), and they create a clinical report following disciplinary guidelines.

using data to direct next steps

once students have skills in data collection and interpretation, the next step is applying that knowledge to new or more complex situations. in some fields, this may entail designing a new experiment to test new hypotheses.
in other fields, like speech-language pathology, this entails acting on their findings and remediating patient skills. using role play to complete this step pushes students to a more advanced level, where they must carry out intervention and parent education while responding in the moment to human variability enacted by the instructor (see table 1). this spans nearly all levels of miller's pyramid: collecting data, interpreting data, and making adjustments based on those data in a live interaction.

table 1. assignment components by levels of miller's pyramid

basic: following the protocol & gathering data (video submission)
- knows: structures of head & neck | knows how: oral mechanism exam plan | shows how: demonstrates oral mechanism exam on a partner | assessment: feedback on process of exam
- knows: speech sound characteristics | knows how: speech sound testing plan | shows how: implements speech sound testing on a partner (no scoring) | assessment: feedback on speech sounds used and techniques for eliciting sounds

intermediate:* interpreting & reporting the data (video review, written submission)
- knows: parent input | knows how: interview plan | shows how: interprets findings from listening to parent input | assessment: feedback on summary of parent concerns
- knows: speech sound errors & patterns | knows how: speech sound testing plan | shows how: interprets findings from listening to child's speech | assessment: feedback on speech sound summary (did you hear what you should have heard?)

advanced:* using data to direct next steps (role-play telesession)
- knows: techniques for remediating speech sound errors | knows how: techniques & therapy plan | shows how: role play (teach instructor to make sounds) | assessment: (1) in-the-moment adjustments based on what instructor does; (2) coaching by instructor on alternative approaches
- knows: parent education | knows how: parent education plan | shows how: verbalizes parent education & responds to questions | assessment: (1) in-the-moment responsiveness to instructor questions; (2) coaching by instructor on alternative topics or ways to explain

*higher levels implicitly subsume prior levels

evaluating assignment approach

prior iterations of this course included a similar competency-based assignment compressed into a single live meeting with the instructor. students received client information (i.e., case history and demographics) and planned a brief assessment to carry out in a role-play simulation with the instructor. immediately following the assessment role-play, they interpreted their results and implemented a parent education and treatment simulation in the same meeting. this approach required some data collection skills but omitted a critical element—the oral mechanism examination—given time constraints. compressing all elements into a single interaction put students under tremendous pressure to perform efficiently and did not always allow them to show their full skillset. student performance on both iterations of this assignment was compared through a review of common errors (see figure 2). the multi-part assignment provided more opportunities for formative feedback regarding clinical skills employed in assessment and intervention. this led to fewer errors in interpretation and parent education as compared to the time-constrained condition. the expanded treatment role-play allowed the instructor to identify more nuanced challenges, evidenced by more support required during the live interaction (figure 2), and to provide formative, in-the-moment feedback.
figure 2. student performance summary on assignment iterations. [bar chart comparing the percentage of students with errors in interpretation or diagnosis, parent education errors, and required support for treatment in the live interaction, for the single-session versus multi-part versions of the assignment.]

implications

video and video chat simulations are an innovative approach to implementing competency-based instruction. students are engaged in video development, along with data collection and interpretation, which provides a realistic representation of clinical workplace contexts. the use of multi-component assignments allows instructors to divide content into manageable segments. these segments are linked to a single case, allowing learners to make connections across contexts. using multiple segments provides repeated opportunities for formative assessment of knowledge and skills prior to the final portion of the assignment. instructor-student interactions within simulations via video conferencing provide exposure to the teleservice context, which is integral to contemporary service provision. the competency-based framework ensures development and assessment of skills for entry-level clinical work. these skills are measured authentically in the context of a simulated clinical experience, allowing assessment of in-the-moment problem solving.

references

hoepner, j.k., & hemmerich, a.l. (2020). using formative video competencies and summative in-person competencies to examine preparedness for entry-level professional practice. seminars in speech and language, 41(04), 310-324. https://doi.org/10.1055/s-0040-1713782

lockyer, j., carraccio, c., chan, m.k., hart, d., smee, s., touchie, c., holmboe, e.s., frank, j.r., & on behalf of the icbme collaborators. (2017). core principles of assessment in competency-based medical education. medical teacher, 39(6), 609-616. https://doi.org/10.1080/0142159x.2017.1315082

mcallister, s., lincoln, m., ferguson, a., & mcallister, l. (2011). a systematic program of research regarding the assessment of speech-language pathology competencies. international journal of speech-language pathology, 13(6), 469-479. https://doi.org/10.3109/17549507.2011.580782

miller, g.e. (1990). the assessment of clinical skills/competence/performance. academic medicine, 65(9), s63-s67. https://doi.org/10.1097/00001888-199009000-00045

journal of teaching and learning with technology, vol. 10, special issue, pp. 365-372. doi: 10.14434/jotlt.v9i2.31412

reducing uncertainty and podcasting engagement: an hr classroom response to covid-19

jared law-penrose, indiana university southeast, jlawpen@ius.edu

abstract: the rapid spread of 2019 coronavirus disease (covid-19) has radically reshaped human resource (hr) management policies and practices in organizations of all sizes across the country. additionally, covid-19 has had a major impact on the way in which faculty members teach our classes. in this case study, i discuss the way in which i responded to these changes in the courses i teach related to hr. i start with a description of the way in which covid-19 has impacted not only the course content but also the pedagogical approach i use to engage students across my classes. i describe my attempt to foster trust despite the uncertainty associated with individual experiences related to covid-19. i also explain the process for rapidly transitioning to a virtual classroom setting.
i describe how i combined courses for instructional purposes and the way in which i pivoted the curriculum for each course. specifically, i created time-relevant podcasts for students to use across different courses while maintaining distinct learning outcomes for each course. a sample podcast will be provided upon request for those interested.

keywords: podcast, covid-19, uncertainty, engagement, non-traditional students

challenges

as a professor of hr, i have the students in my classes track the national unemployment rate as an important factor in developing effective employee recruitment and compensation practices. early in the spring 2020 semester, unemployment rates hovered at near record low levels (bls, 2020). according to the bureau of labor statistics (bls), the overall unemployment rate was 4.4% in march of 2020 (bls, 2020). one month later, the unemployment rate had increased to 14.7%. for context, this is the highest unemployment rate since the bls started reporting unemployment rates in 1948 (bls, 2020). in a matter of 30 days, more than one out of every ten working individuals lost their job (the jump from 4.4% to 14.7% is 10.3 percentage points, or roughly one in ten workers). for hr professionals, this has far-reaching policy implications for organizations of all sizes. for me, a professor at a regional institution, this change created a high degree of uncertainty within my students. my institution is a regional university with approximately 4,500 students. additionally, 22% of the undergraduate student body is non-traditional (25 years or older), and 29% of the student body are first-generation college students. in practical terms, this means that many of my students are employed in full-time jobs as the primary income earner for their families. moreover, it is not uncommon for my students to have children of their own who are in primary or secondary school. on top of this, the public school districts in the area moved to asynchronous classes, causing an even greater degree of uncertainty and anxiety amongst the non-traditional students in my classes. when my institution made the decision to move fully online for the remainder of the spring semester, more than a quarter of my students expressed their concern about maintaining their course work in light of the economic, social, and schedule changes necessitated by the spread of the 2019 coronavirus disease (covid-19). while this is purely anecdotal, it was clear that i had to proactively respond to these changes across all of my classes and pivot not only the content but also the context of my classes, to ensure that students in my courses would be able to meet the learning outcomes of the course and be prepared to enter a radically different work context in light of the high degree of uncertainty we were all facing.

i began to consider the ways in which i could rapidly change my courses to respond to the situation. the major challenge i had to address was how i would help my students through an incredibly ambiguous and uncertain period while preparing them to effectively respond to an uncertain work context. as an hr professor, i had to realize that my role was not to provide information but to foster an environment where my students could creatively respond to changing situations. whether a result of increased physical distance or fewer opportunities for real-time interaction, greater distance allows for fewer social cues and norms (antonakis & atwater, 2002).
the move to an online classroom format undoubtedly increased the ambiguity my students were facing. berger and calabrese (1974) argue that individuals are motivated to decrease uncertainty in all situations. in fact, when individuals are not able to reduce uncertainty, it can lead to greater emotional and physical withdrawal (berger, 1986). completely reducing uncertainty for my students, however, would negatively impact their ability to develop creative solutions to their individual and collective situations. berger and calabrese (1974) hypothesize a positive relationship between uncertainty and information seeking: as ambiguity is reduced, motivation to seek information is also reduced. similarly, increasing ambiguity increases reliance on individual heuristics and decreases the willingness or motivation to take the risks necessary to succeed in the classroom and beyond (frishammar, floren, & wincent, 2011). ultimately, it seemed as though i was facing a dichotomy: i could continue with the same curriculum—though now outdated as a result of the changed economic and social context—to reduce the uncertainty my students were facing and risk decreasing their motivation, or i could modify the curriculum to reflect the new economic and social context, potentially increasing the uncertainty in my students and risk decreasing their motivation. thankfully, this is a false dichotomy. both too much and too little uncertainty are indeed associated with declines in performance. the challenge is finding the appropriate level of ambiguity. individuals are motivated to "make sense" of the world around them and consequently attempt to reduce uncertainty in their relationships by retroactively and proactively constructing an interpretable pattern of behavior of their partner (heider, 1958; berger & calabrese, 1974). however, on the basis of activation theory, complete disambiguation does not always lead to positive outcomes (gardner, 1986; gardner & cummings, 1988). a critical concept of the theory is activation level, which gardner (1986) defines as "the degree of neural activity in the reticular activation system, a major part of the central nervous system." activation theory hypothesizes an inverted u-shaped relationship between an individual's experienced activation level and task performance (gardner & cummings, 1988). the result is that performance suffers when the activation level is either high or low, and performance is optimal at a moderate amount of activation. thus, the most effective performance of my students would be found in the 'sweet spot' of moderate levels of ambiguity (a toy numerical sketch of this inverted-u relationship appears at the end of this section).

changes

during the spring 2020 semester, i was teaching two separate courses within the same plan of study. the first course is a 400-level hr course that operates like an in-depth survey of each of the functional areas of hr. this course is usually taken in the second semester of students' sophomore year or later. this course is also the prerequisite for all other advanced hr courses. the second course i was teaching is a 400-level wage and salary administration course where we cover topics that include labor market economics, benefits administration, employee motivation, and wage administration. while there is some overlap between these courses, they have very distinct learning outcomes. against the backdrop of covid-19, and having made the decision to adjust the curriculum to respond to the changes necessitated by the pandemic both logistically and economically, i had to determine my next steps.
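before turning to those steps, it is worth making the inverted-u claim above concrete. the following toy model is a minimal, illustrative sketch only: the quadratic form, the assumed optimum of 0.5, and the sensitivity constant are my assumptions for illustration, not gardner and cummings's (1988) actual formulation.

```python
# toy illustration of activation theory's inverted-u hypothesis:
# performance is hypothesized to peak at a moderate activation level
# and to fall off when activation is too low or too high.
# the quadratic form and all constants here are illustrative assumptions,
# not gardner and cummings's (1988) model.

def predicted_performance(activation, optimum=0.5, sensitivity=4.0):
    """return a 0-1 performance score that peaks at a moderate activation level."""
    return max(0.0, 1.0 - sensitivity * (activation - optimum) ** 2)

for level in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"activation {level:.2f} -> performance {predicted_performance(level):.2f}")

# output: 0.00 at both extremes, 0.75 at the quarter points, and 1.00 at
# the moderate optimum of 0.50 -- the 'sweet spot' described above.
```

the only point of the sketch is that predicted performance falls off on both sides of a moderate optimum, which is precisely the 'sweet spot' argument being made here.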
i decided to combine the courses for instructional purposes only while maintaining separate learning outcomes for each course. the key change i made to the curriculum was to reach out to professionals across a broad spectrum of industries and record podcast-style interviews. these interviews would then serve as a significant instructional component of the course. my hope was that engaging working professionals across a variety of industries would demonstrate that the experiences of uncertainty and ambiguity were not unique to my students and thus moderate their activation level by normalizing the amount of ambiguity they were facing. in other words, hearing from working professionals could moderate my students' perception of the amount of uncertainty they were facing. this would then hopefully lead to improved efficacy and performance as we worked toward accomplishing the learning outcomes in each course through an updated curriculum. to be clear, both courses had unique learning outcomes that had to be maintained despite the fact that i combined the two courses for instructional purposes. what this meant is that i had to manage each course independently from an administrative role. in other words, each class section used the changes in organizational hr policies and practices as a result of covid-19 as the case study for its particular learning outcomes. this is similar to using the adventures of huckleberry finn in both an american literature course and a course focusing on pedagogy in secondary education. alternatively, it would be similar to using an archeological sample in both a geology course and an anthropology course. this is important in that the students from each individual class had the opportunity to explore the content provided by the unique situation of covid-19 through the lens of the learning outcomes for each course. this could have been done with each class individually, since the podcasts were recorded and available asynchronously. however, based on the uncertainty theory discussed previously, combining the courses for meetings and instructional purposes created an environment designed to foster a higher level of engagement in light of the challenges students were facing in their personal lives. combining the classes for meetings did not eliminate the need to take the other course, as the outcomes and lens applied to each course were unique. both courses were originally scheduled to meet on mondays and wednesdays. as we moved fully online, it was clear based on communication from many non-traditional students that they would not be able to attend two weekly synchronous class sessions as a result of issues with child care, changed work schedules, or internet access. what i decided to do was to hold two synchronous sessions via zoom each week. the first was scheduled during one of the course time slots on monday, and the second live session was scheduled for the other course time slot on wednesday. each zoom session was recorded and posted in canvas. students were responsible for content provided in both live sessions and were expected to attend at least one of the live sessions each week for the remainder of the semester.
logistically, this had the effect of providing a variety of times for students to engage with the content and allowed each student flexibility in attending the live session that worked best for them given their individual constraints. combining the courses for instructional purposes also had the unintended positive outcome of further normalizing the individual experiences of students based on the structure of the live zoom sessions. i would start each live zoom session with a brief personal story about my experience during the week and my own experience with uncertainty. for example, during the first live zoom session, i shared my own challenges as a working parent during a time when my children's school closed. i then dedicated the first seven minutes of each class to zoom breakout rooms and asked my students to share a similar story with two or three of their classmates. because the courses had been combined for instruction, these breakout rooms would occasionally group students together who did not know one another. since i first modeled the behavior as the instructor and then introduced new students to one another, students from both classes began to show a collective interest in the experiences of one another. while i did not have the tools to measure "the degree of neural activity in the reticular activation system" (gardner, 1986), it became clear that this short activity stoked student curiosity about the experience of others during an unprecedented time. this led to what i believe is an increase in affective and cognitive trust amongst my students, which resulted in a higher overall tolerance for ambiguity, in turn improving their activation level. put differently, students began to trust one another and became genuinely interested in the experience of others as a way to normalize their own experiences. ultimately, my students became increasingly curious about each other, which seemingly led to an increase in curiosity about the course content. this was made manifest in the discussions around the podcasts i recorded with working professionals. i pivoted the curriculum by recording a series of weekly podcasts with working professionals from a variety of industries. these podcasts then became the topic of conversation for each of the subsequent zoom class sessions. the synchronous time in class via zoom was dedicated to analyzing the changes that had to be made in each of the industries as a result of covid-19 and the impact these changes had on hr policies and practices. using podcasts as an educational tool is not an innovative development. in fact, podcasts have been utilized in the classroom for at least the last decade (goldman, 2018). the podcast has become one of the most ubiquitous forms of information since the turn of the century. google currently has over 2 million registered podcasts (jovic, 2020). as of 2018, there had been 50 billion podcast downloads (locker, 2018). the appeal of podcasts for younger generations is exploding. according to billboard, podcasts now represent more than 10% of everything millennials listen to (cirisano, 2019).
consider these statistics published by pr newswire (2019): 60% of gen z and 52% of millennials usually listen to podcasts that are at least 26 minutes in length… millennials and gen z were also 5% more likely to listen to podcasts for professional reasons often or very often compared to older generations… a third of millennials listen to podcasts daily. this trend is only likely to grow given that three-quarters of gen z respondents have a paid subscription for a streaming audio/music service compared to 60% of millennials and only 52% of those over 35. to be clear, the appeal of podcasts in a college classroom setting is not their ability to replace the traditional powerpoint-driven lecture. in fact, podcasts are "associated with little increase in performance…and little increase in learning gains" when used solely as a lecture replacement tool (moravec, williams, aguilar-roca, & o'dowd, 2010). rather, the use of podcasts is effective when they are used to augment the learning process, not replace it. as a professor facing an uncertain classroom dynamic, i had to find a way to engage with my students in a way that is native to their technological lifestyles. recording a high-quality podcast has a relatively low barrier to entry. simply creating the podcast is not difficult; rather, the challenge lies in effectively using the podcast to help students achieve the learning outcomes.

podcasts and learning outcomes

i first had to identify various professionals who would be willing to share their experiences via a recorded podcast. for this process, i relied mostly on my existing professional network. guests were made up of colleagues from a previous career, former students, and other professionals with whom i had an existing relationship. while this is not necessarily ideal for all disciplines, it allowed me to rapidly create the content needed for the podcasts. the podcasts included the following guests: an airline cargo pilot for ups, a corporate recruiter for yum! brands, a city commissioner, the foreman for a construction company, a recent hr graduate working as a trainer for a manufacturing company, and an engineer. these podcasts followed the same general outline. we would start with a brief introduction where the guest would talk about themselves personally and professionally. i would then ask the guest to talk about their professional career and how they ended up in their current position. we would then transition into talking about what their job duties entailed before covid-19. as part of this discussion, i would ask follow-up questions and probe more deeply if needed. next, the guest would share how their job had changed as a result of covid-19 and the impacts the pandemic had had on their professional life. these were general questions that would guide the majority of the conversation. finally, i asked each guest two to three questions pertaining to hr and the individual learning outcomes from both courses. these questions were specifically tailored to the modules from each class. once the conversation was complete, i would export the audio file, upload it to canvas, and send out an announcement to my students that the podcast had been uploaded. by utilizing the metrics provided by canvas, i was able to monitor the time each student spent listening to the podcast (a sketch of how such a completion check might be automated follows below). overall, more than 80% of the students listened to the entire podcast each week.
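for instructors who want to automate that sort of completion check, here is a minimal sketch. it assumes that listening-time data can be exported from the lms analytics page as a csv file; the file name, the column names student and minutes_listened, the episode length, and the 90% completion threshold are all hypothetical choices for illustration, not a documented canvas workflow.

```python
# minimal sketch of a weekly podcast-completion check.
# assumes a csv export from the lms analytics page with columns
# "student" and "minutes_listened"; the file name, column names,
# episode length, and threshold are illustrative assumptions.
import csv

EPISODE_MINUTES = 26.0      # assumed length of the week's episode
COMPLETION_THRESHOLD = 0.9  # count a student as finished at 90% listened

def completion_rate(path):
    """return the share of students who listened to (nearly) the whole episode."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0.0
    finished = sum(
        1 for row in rows
        if float(row["minutes_listened"]) >= COMPLETION_THRESHOLD * EPISODE_MINUTES
    )
    return finished / len(rows)

if __name__ == "__main__":
    rate = completion_rate("week5_podcast_analytics.csv")  # hypothetical export
    print(f"{rate:.0%} of students listened to the entire podcast")
```

run weekly against each episode's export, a script like this would reproduce the "more than 80% listened to the entire podcast" figure reported above without opening each student's analytics page by hand.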
as noted earlier, the weekly podcasts provided the content for the synchronous zoom sessions each week. because two different courses were combined for instructional purposes and each class maintained its separate learning outcomes, i had to ensure that we were able to relate the podcast back to the content of the course learning objectives. i accomplished this by crafting specific discussion questions that related the podcast back to the unique content of each course. this was perhaps the most challenging aspect of combining two distinct courses. prior to each synchronous zoom session, students from the different courses were given distinct discussion questions based on the specific podcast from that week. these discussion questions were directly related to each module's learning outcomes for the student's specific course. as previously noted, the first 7-10 minutes of each synchronous class were devoted to normalizing uncertainty and fostering trust across the class. the next 15 minutes of the live session were devoted to providing a brief summary of the podcast and asking students to share the aspect of the podcast they found the most interesting. following this, students were randomly placed into discussion rooms of three to five students. one student in each room was assigned to be the discussion lead. discussion rooms lasted approximately 25-30 minutes. during this period, i would join each room to monitor and observe the conversation. i would typically spend one or two minutes in each discussion room to ensure that the students were engaging. after checking in on each discussion room, i would return to each room for approximately five minutes. during this time, i would take a few handwritten notes on the discussion. after the discussion rooms concluded, we would come back together as a group and spend the remaining class time debriefing the conversation as it related to the specific discussion questions. i would end each class session with a short two- to three-minute reflection on what i observed about the class conversation overall.

charting a course

through this process, i learned a great deal about fostering engagement in the classroom and using technology to pivot the delivery mechanisms. as i reflect on the experience, there are a number of things that worked well. first, as noted earlier, there seemed to be a greater degree of trust and connection fostered across the courses. while i had hoped to normalize the experiences of each student in the class, i was surprised by the intensity of the relationships developed amongst groups of students. despite the random assignment of discussion groups across both courses, students began showing up early to the zoom room to socialize with one another. at least three times throughout the remainder of the semester, around seven students were logged into the zoom room 30 minutes before class officially started. when i talked with the students about why they were showing up to class early, their response was that they valued connecting with one another and that having a chance to socialize, even via zoom, provided a sense of normalcy that they otherwise lacked. as i probed further, i found out that while some of these students had taken classes together previously, they had never socialized in a formal manner with each other. essentially, this group of students built a connection with one another despite the physical distance between them.
i take this as qualitative evidence of the course format fostering high levels of trust among students. this independent, non-required interaction seemed to also foster more complex in-class conversations. within the field of hr, i often tell students that there are very clear wrong answers but rarely a clear right answer. instead, student work in the hr courses i teach is frequently assessed on the complexity of responses, the ability to explain one's position, and the justification for stances on a particular topic. the changes implemented in these courses required a greater depth of thought about specific topics. this differed significantly from the typical discussion-based classroom lecture i had conducted in the past. while students were still responsible for the course content, the shift in pedagogy away from a text-based class setting toward a 'real-world' discussion seemed to engage a different level of critical thinking in my students. as we made the shift to addressing the way in which aspects of hr were manifested in real organizations, students seemed to suddenly begin thinking through more complex responses. rather than relying on a 'textbook' answer, students were considering implications of scenarios that i, as the instructor, had not given thought to. as i move forward, i absolutely plan to use aspects of this approach in my classes. specifically, i plan to continue to record podcasts with professionals and use these as important content/case studies across multiple classes. i plan to continue devoting a portion of the beginning of each class session to building and fostering trust by sharing and inviting conversation about relevant personal experiences. similarly, there are changes that i made as a result of covid-19 that i do not plan on incorporating in future classes. while combining courses for instructional purposes was necessitated by some of the schedule challenges faced by a non-traditional student body, i am not eager to combine different courses in the future. combining students from different courses into the same class session posed challenges related to maintaining distinct course learning outcomes. overall, this made it challenging to focus the in-class conversation as specifically as i had hoped on the topics relevant to each course. instead, i found the conversation bouncing between content from both courses. the result was a conversation that was relevant to half the class part of the time and to the other half of the class the rest of the time. a further question that remains is the marginal value of using podcasts over recorded video lectures. in this particular case, i opted to use podcasts based on the demographics of my students and the fact that many of my students were employed full time and/or cared for children as a result of schools being closed. i believed that providing access to podcasts would create more accessible content for my students. this is entirely speculative, however, and i am unaware of any research that specifically compares podcasts to recorded video lectures as a learning tool. while i suspect that there are differences in learning outcomes based on the use of podcasts versus recorded video lectures or interviews, this is an open question that future research should explore. one of the other inevitable challenges i faced in pivoting to a technology-reliant course so quickly was the lack of preparation time.
by the end of the semester, i was completely worn out from spending hours contacting guests, recording and editing the podcasts, and preparing class discussions and relevant examples based on each podcast. each week was an almost endless cycle of preparation. while the pandemic necessitated such a change, it is not one that i am eager to repeat. i have become a firm believer that podcasts can be an incredibly effective educational tool, but to effectively execute and incorporate podcasts within the curriculum, it would be ideal to have at least one semester in advance to prepare and relate the podcast content back to the course content. overall, podcasts are an effective educational tool when students have a high degree of trust in one another and are able to tolerate a moderate degree of ambiguity. i will likely be using some form of podcast in each of my future courses, and i encourage other instructors to do the same. i would be happy to share an example podcast upon request.

epilogue

now that almost 12 months have passed since moving entirely online, i find that i have continued to incorporate podcasts into my classes. i originally noted that one of the challenges was the speed at which i had to pivot from face-to-face to an entirely virtual setting. as i am now teaching these two courses again, i do in fact find that having additional time to prepare has made the use of podcasts more effective. i have now been able to tailor specific discussion questions for each class based on the podcasts instead of combining both courses together. this has allowed me to more easily distinguish between the learning outcomes for each course and keep synchronous discussions of the material more targeted. while i faced some challenges with the combining of classes and the use of podcasts, covid-19 has forced me to reevaluate important aspects of my pedagogy and has provided an opportunity to rapidly experiment with new tools.

references

antonakis, j., & atwater, l. (2002). leader distance: a review and a proposed theory. the leadership quarterly, 13(6), 673-704. https://doi.org/10.1016/s1048-9843(02)00155-8

berger, c. r. (1986). uncertain outcome values in predicted relationships: uncertainty reduction theory then and now. human communication research, 13(1), 34-38. https://doi.org/10.1111/j.1468-2958.1986.tb00093.x

berger, c. r., & calabrese, r. j. (1974). some explorations in initial interaction and beyond: toward a developmental theory of interpersonal communication. human communication research, 1(2), 99-112. https://doi.org/10.1111/j.1468-2958.1975.tb00258.x

bls. (2020). unemployment rate rises to record high 14.7 percent in april 2020. retrieved from https://www.bls.gov/opub/ted/2020/unemployment-rate-rises-to-record-high-14-point-7-percent-in-april-2020.htm

cirisano, t. (2019). rise in podcast listening by millennials, the biggest 'audio generation'. billboard. retrieved from https://www.billboard.com/articles/business/8516818/ipsos-study-podcast-listening-audio-millennials-gen-z-radio

frishammar, j., floren, h., & wincent, j. (2011). beyond managing uncertainty: insights from researching equivocality in the fuzzy front end of product and process innovation projects. ieee transactions on engineering management, 58(3), 551-563.

gardner, d. g. (1986). activation theory and task design: an empirical test of several new predictions. journal of applied psychology, 71(3), 411-418. https://psycnet.apa.org/doi/10.1037/0021-9010.71.3.411
gardner, d. g., & cummings, l. l. (1988). activation theory and task design: review and reconceptualization. in b. m. staw & l. l. cummings (eds.), research in organizational behavior (vol. 10, pp. 81-122). greenwich, ct: jai press.

goldman, t. (2018). the impact of podcasts in education. scholar commons. https://scholarcommons.scu.edu/engl_176/29

heider, f. (1958). the psychology of interpersonal relations. new york: wiley.
jovic, d. (2020). 40 powerful podcast statistics to tune into. small biz genius. retrieved from https://www.smallbizgenius.net/by-the-numbers/podcast-statistics/#gref

locker, m. (2018). apple's podcasts just topped 50 billion all-time downloads and streams. fast company. retrieved from https://www.fastcompany.com/40563318/apples-podcasts-just-topped-50-billion-all-time-downloads-and-streams

moravec, m., williams, a., aguilar-roca, n., & o'dowd, d. k. (2010). learn before lecture: a strategy that improves learning outcomes in a large introductory biology class. cbe—life sciences education, 9(4), 473-481. https://doi.org/10.1187/cbe.10-04-0063

pr newswire. (2019). americans listening to podcasts at work more than doubles from 2018. retrieved from https://www.prnewswire.com/news-releases/americans-listening-to-podcasts-at-work-more-than-doubles-from-2018-300815494.html

journal of the scholarship of teaching and learning, vol. 12, no. 4, december 2012, pp. 57-58.

book review

how to design and teach a hybrid course: achieving student-centered learning through blended classroom, online and experiential activities

gordon hensley1

citation: caulfield, j. (2011). how to design and teach a hybrid course: achieving student-centered learning through blended classroom, online and experiential activities. sterling, va: stylus publishing. isbn: 9781579224226

publisher description: this practical handbook for designing and teaching hybrid or blended courses focuses on outcomes-based practice. it reflects the author's experience of having taught over 70 hybrid courses and having worked for three years in the learning technology center at the university of wisconsin-milwaukee, a center that is recognized as a leader in the field of hybrid course design. jay caulfield defines hybrid courses as ones where not only is face time replaced to varying degrees by online learning, but also by experiential learning that takes place in the community or within an organization, with or without the presence of a teacher; and as a pedagogy that places the primary responsibility of learning on the learner, with the teacher's primary role being to create opportunities and environments that foster independent and collaborative student learning. starting with a brief review of the relevant theory – such as andragogy, inquiry-based learning, experiential learning, and theories that specifically relate to distance education – she addresses the practicalities of planning a hybrid course, taking into account class characteristics such as size, demographics, subject matter, learning outcomes, and time available.
she offers criteria for determining the appropriate mix of face-to-face, online, and experiential components for a course, and guidance on creating social presence online. the section on designing and teaching in the hybrid environment covers such key elements as promoting and managing discussion, using small groups, creating opportunities for student feedback, and ensuring that students' learning expectations are met. a concluding section of interviews with students and teachers offers a rich vein of tips and ideas.

how to design and teach a hybrid course: achieving student-centered learning through blended classroom, online and experiential activities by jay caulfield offers a summary of effective pedagogy one can apply to any classroom and proposes practical design tips for teachers of hybrid courses.

1 associate professor and theatre education coordinator, department of theatre and dance, appalachian state university, gordonhensley@att.net

effective teachers often find themselves redesigning curriculum, researching new pedagogical approaches, and seeking refreshing changes to their courses. this helpful text provides an introduction to the hybrid model, teaching pedagogy, and designing a hybrid course. the book also highlights interview data about hybrid learning, teaching, and best practices. caulfield concludes with actual interview data from hybrid course students and teachers, which is a clever way to investigate both sides of the hybrid experience. the language of this book is directly aimed at teachers. jay caulfield succinctly explains hybrid learning and teaching, compares pedagogy styles and learning theories with a focus on experiential learning, and compares traditional teaching to hybrid teaching. the book includes useful charts and samples of key components for visual learners. caulfield dedicates two entire chapters to hybrid course learning strategies: discussion and small groups. in each chapter, caulfield clearly previews the information to be covered, gives the information, provides examples, and ends by summarizing the information. the text is broken up into small, easily digestible, one-to-two-paragraph sections. the book is absolutely accessible and not written solely for the tech-savvy expert, as one might expect. this text is applicable to all teachers. from this book, teachers can expect to learn:
• the concept of hybrid learning and teaching;
• skills and ideas for effectively creating hybrid learning experiences; and
• data-driven justification for hybrid teaching and learning.
this text would be most useful as a resource to consider when preparing or redesigning a course, whether it be online, hybrid, or face-to-face. teacher training programs could also recommend this book to their students because of the survey of pedagogical styles. reading this book with time to reflect on each chapter is thought provoking, as there are opportunities for practical application throughout. caulfield seems to genuinely want to share her depth of knowledge and experience. the final reflection reveals her intentions with this book: "what i've written is, in essence, a reflection and culmination of my life experiences as a learner and a teacher." her years of teaching and curriculum design experience are humbly reflected in this practical text.

journal of teaching and learning with technology, vol. 11, special issue, pp. 55-56.
doi: 10.14434/jotlt.v11i1.34458

creating authentic assessments through controversy

lamia scherzinger, indiana university-purdue university indianapolis

abstract: we are surrounded by controversy—politics, religion, diets, and even science are all up for debate in our 24/7 world of social media and the internet. with this controversy comes a lot of misinformation that competes with what our students might otherwise be learning in our classrooms. i know this intimately, since i teach fitness and nutrition courses, two topics widely addressed by internet "experts" who continually contradict what i teach in my class. whereas some may say this makes an instructor's job more difficult, i have decided to rise to the challenge and use controversy to enhance my students' learning. by using an assortment of technologies and platforms—web searches, twitter, tiktok, and more—i have been able to move beyond the classroom to engage my students in real-world problems, a strategy that results in more authentic assessments.

keywords: authentic assessments, controversy, technology, misinformation, impact

controversy is only dreaded by the advocates of error. —benjamin rush

a brief background on my teaching situation: all my classes are 100% online and asynchronous, so my use of technology for assessments is a necessity. i teach general education courses, so my students are all types of majors and come with all levels of knowledge (or lack thereof) on exercise and nutrition. this means what i teach is competing with whatever nutrition and fitness information they receive daily from social media influencers, unscientific online articles, and hearsay, which, for today's social-media-obsessed students, can be a lot! one of my course learning objectives can help illustrate how i use technology and a controversial topic in an authentic assessment: evaluate the relationship between food intake and physical health. we know the quality of one's food intake is tied to one's health; foods high in saturated fats increase the risk of heart disease, for example. one way i assess this is by posing the following question in a discussion: is access to healthy foods a privilege or a basic human right? this is controversial for a few reasons: first, one cannot talk about healthy food access without addressing race, since studies show that black americans have a much higher rate of poverty and therefore much less access to fresh foods than white americans (drewnowski & eichelsdoerfer, 2010). second, it asks the students to share their thoughts on what they philosophically consider basic human rights, which differs depending on one's personal beliefs. and finally, i ask that once they are done posting their contribution to the discussion, they reply to two other students who chose a different stance from their own. technology is a great help in this, and i use several technologies to facilitate this discussion. the most obvious one here is the use of my learning management system's discussion board. i also use youtube to share a mini-documentary entitled "divided cities: the food deserts of memphis" that illustrates the way race, socioeconomic status, and basic human rights come together to cause a huge dichotomy of haves and have-nots when it comes to fresh food access. i then have the students use governmental websites to look up statistics on poverty levels, food deserts, and health markers of americans to support their claim.
they also must use their class-assigned e-text to cite nutritional facts and food access information. finally, students have the option to reply via a regular discussion post or to record their response and replies to each other, allowing them to have more of a "conversation" than just a typed discussion. therefore, in one assignment, i might use up to five different technologies to authentically assess one course learning objective in a real-world situation. another example of this, and one where i address the obsession many students have with acquiring much of their health and fitness knowledge from social media, is to approach our diet-obsessed culture with science. i first have them list two popular diets they have seen on social media apps and write out a brief explanation of what the diets' guidelines and restrictions are. they then try to find, via the university's library databases, at least three scientific articles that support each particular diet. finally, they use what they have learned in our class via the e-text and class notes and assignments to compare these diets to the u.s. department of agriculture's dietary guidelines for americans. this assignment provides a way for the students to learn a few things: (1) not everything you learn on social media is true; (2) overall, scientific research does not support most fad diets; and (3) when compared to the dietary guidelines, many of these diets fall woefully short. participating in assignments such as these allows my students to use the information they learned in class to combat possible misinformation, participate in a respectful, engaged discussion that is supported by facts, and achieve the course learning objectives. by facing these topics head-on, i have provided my students with not only the power to overcome the inaccuracies they see every day but also a chance to engage in real-world situations that may arise or questions they may be faced with in their futures as health professionals or even in everyday life. and although in my examples i used multiple technologies, this does not have to be the case for all topics. for example, when discussing a diet that is popular among the students in a particular class, an online diet analysis tool might be the only technology necessary to illustrate that by nearly cutting out an entire food group (on a low-carb diet, for example), the student is actually missing some much-needed nutrients that food group provides. thus, while the controversy might be involved and multilayered, the assessment of it can be quite simple yet meaningful. here, i am showcasing my topic, but the applications for this approach are numerous—in political science, theology, history, philosophy, and law, for instance. instead of just teaching the material and assessing whether students learned it through multiple-choice and true/false questions, with this approach teachers give their students the chance to apply their knowledge to real-world situations and come away with a firmer grasp on how to overcome misinformation. in a journalism class, for example, whether photojournalists should be allowed to capture and then share images of dead soldiers has been debated since the civil war. this can lead to a robust discussion on the first amendment and journalists' efforts to show the true cost of war.
or in a political science class, the long-debated question of when life begins can be analyzed from the perspectives of different religions and personal belief systems, highlighting how these can influence lawmaking. both are "hot topics" on which students can use a variety of technologies to research the subject and formulate informed, factual arguments. many people have been taught that to have civil conversations, they need to ignore certain topics or "agree to disagree." however, this does not have to be so in the classroom. with the right technologies, careful assessment construction and instruction, and the selection of topics that can be useful in students' possible real-world work and/or life situations, controversy can be impactful.

references

drewnowski, a., & eichelsdoerfer, p. (2010). can low-income americans afford a healthy diet? nutrition today, 44(6), 246-249. https://doi.org/10.1097/nt.0b013e3181c29f79

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 77-80.

book review

the online teaching survival guide: simple and practical pedagogical tips

shradha kanwar1

citation: boettcher, j.v., & conrad, r.-m. (2010). the online teaching survival guide: simple and practical pedagogical tips. jossey-bass, a wiley imprint (pbk).

publisher's description: the online teaching survival guide provides an overview of theory-based techniques for online teaching or for a technology-enhanced course, including course management, social presence, community building, and debriefing. based on traditional pedagogical theory, this resource integrates the latest research in cognitive processing and learning outcomes. from a practical approach, this guidebook presents instructional strategies in a four-phase timeline, suitable for any online or blended course. faculty with little knowledge of educational theory and those well-versed in pedagogy will find this book a key to developing their practical online teaching skills.

the advent of digital classrooms and online learning has transformed the educational ecosphere. the exponential growth in information has augmented the importance of technology in classrooms. teachers across the globe are experiencing this driving force and are exploring diverse ways of harnessing the potential of online teaching. the book brilliantly deals with this most fascinating yet challenging issue of online learning and gives an orientation to its various facets. rightly presented as a survival guide with simple and practical pedagogical tips for online teaching, the book showcases an array of strategies to structure an online course, design the pedagogy, and formulate an assessment plan. the book reinforces the significance of pedagogical theories in establishing the framework on which online teaching practices are orchestrated. "innovative communication technologies often drive pedagogical change," and the book highlights this transition from face-to-face instruction to online. the pedagogical practices for online teaching are useful to learners with different learning styles and ability levels. as mentioned by the authors, "tips comprising the heart of this book were crafted to meet the needs of actual faculty from veteran classroom instructors to novice teachers." the suggestions could be incorporated in fortifying one's own teaching practice or to support the extended academic community.
the book noticeably demonstrates its intent as a forerunner of active and ongoing support for online faculty to ensure an effective and efficient teaching-learning experience. the authors highlight the challenges posed by the exponential growth in information and the blistering speed at which the environment is becoming technologically immersive. a primary theme of the text is that an effective teacher will be equally effective in all formats of teaching, be it face-to-face or online, but this evolution is neither mechanical nor sudden. therefore, an orientation to the process of online teaching and to the content is critical. the first chapter of the book gives a holistic perspective on the macro picture of learning and effectively illustrates the distinction between a face-to-face and an online course plan. it also sets the context for the subsequent chapters. the chapter focuses on creating and continuously improving online courses, with a constant emphasis on the unique style and orientation needed for an online course plan. the component dealing with “how are online courses unique?” sets the foundation for various facets of online teaching, which stand out as very important references for constructing the course. the authors offer a variety of inputs on the importance of a real-time learning environment for creating well-designed asynchronous interactions, thus leading to improvement of the teaching-learning experience. further, the uniqueness of the online course plan exemplifies the role of the learner in the process as being more dynamic and purposeful and conspicuously engaged in the creation of knowledge. chapter two of the book is structured around the theoretical foundations of pedagogy and their significance for practitioners. it appropriately draws attention to the evolving educational scenario, where traditional teaching practices no longer suffice to meet learning objectives. the authors introduce readers to ten core learning principles – the foundation on which the online course plan is designed. these core principles act as guidelines in designing and managing the online teaching environment. they reinforce the role of faculty as mentors, directing the learning experience with emphasis on learning processes to ensure the different learning outcomes. the insights from this section of the book reiterate the role of the learner as the pivot point around which all processes are activated. it also points out how varied experiences, accumulated over a period of time, result in new learning. the context around which the learning event takes place is critical, and there is adequate emphasis on the advantages of the dynamic digital learning space in ensuring richness of perspective and effective learning outcomes. the theoretical foundations expounded in the book act as important references for developing metacognitive abilities. the authors constantly reinforce the need to develop higher order thinking skills of deep understanding and lifelong learning, so noteworthy in today's learning context. chapter 3 begins by familiarizing readers with the practical aspects of online teaching. it draws attention to the importance of preparation, presence, and participation in both synchronous and asynchronous scenarios.
the best practices highlighted in this section provide an end-to-end course-plan structure, putting emphasis on customized and personalized learning. the second part of the book, comprising eight chapters, extends the discussion on useful strategies for online teaching. from putting the right foot forward in course beginnings, through an appropriate selection of tools, avant-garde pedagogical suggestions, essential course pieces, and defining quality standards, the discussion leads to interesting cognitive revelations around the “zone of proximal development.” additional precepts are shared to hone the talent of interested faculty members, with a focus on framing the right kind of questions, rubrics for evaluation, discussion forums, and posting to create an immersive learning experience. the tips provide immediate and relevant references to create a stimulating course and handle intensive engagements. the themes and tools projected by the authors are useful in developing good practices for learning. these practices act as useful guidelines in ensuring the engagement and progress of learners. the book critically examines the array of offerings in the digital space and emphasizes the importance of selecting the right tool based on the requirements of the learners. the ultimate goal of any teaching process is to ensure meaningful learning and stimulate the intellectual curiosity of students; this book characterizes this very aspect of learning. another important aspect of technology customization brought out in the book is the necessity of a learner-centric course management system (cms). according to the authors, the cms should support deep learning processes and promote collaborative learning experiences. community building to improve teaching processes is also a focus area, where the authors share best practices for online course design and delivery. interesting and readily available tools are shared for the benefit of online teachers; these include simple tools for collaboration and communication as well as more refined applications. the authors continuously advocate the need to reinforce cognitive presence in the classroom through intelligently crafted discussion sessions and projects. peer collaboration is strongly encouraged through conversations and assessment interventions. self-development of teachers is an area where the tips induce reflective practices and a sense of accountability amongst teachers. the chapters progress in a coherent manner, and the reader surfaces with new ideas from every chapter. phase 3 of the book focuses on leveraging the power of questions and inculcating inquiry as a reflective practice. these are indeed essential prerequisites for today's millennial generation, who are so used to obtaining responses to their queries through a simple google search. the authors insist on the new and emerging role of teachers in linking students' new information and concepts with previous knowledge through the art of questioning. as the authors take us to the tips for the “late middle,” the emphasis shifts to integrating knowledge in anticipating and solving problems. feedback is an important indicator for taking stock of the course objectives and understanding the progression of the course so as to realign it with the learners' knowledge.
tips on feedback strategies that deal with the prospect of improving learning outcomes are illustrated effectively. the suggestions on creating a feedback mechanism that is personal, formative, timely, and efficient are very pragmatic. concept mapping for authentic problem solving, collaborative project discussions, and tips for conducting them are hugely relevant in today's learning space, where team building and synergy are the key success differentiators. the book delineates the need to energize learners and maintain a flow to ensure that students are neither underwhelmed nor intimidated in mapping the content. social networking sites, which are more common as personal interaction forums, are presented in the book as useful cognitive tools for co-constructing knowledge and building a learning community. phase 4 of the book gives the modus operandi for embellishing and presenting the neatly designed final product and is directed toward making the learner independent and self-initiated. finally, the book explores future problem areas that might interfere with the smooth conduct of a course. suggestions pertaining to other formats of online learning, such as mobile platforms, could have added further value to the practical advice section, and some inputs on technology as a liberating mechanism for learners could be added in future editions. the book is a useful reference for teachers who are beginning their online teaching journey. it is equally useful for teachers who have attained a certain degree of proficiency in this area. the challenge of teaching in unfamiliar territory is gradually erased and replaced with excitement about designing the course plan. the uniqueness with which the book focuses on leveraging technology to provide differentiated instruction in creating an inimitable learning experience is noteworthy.

1 area director, educational technology, at niit university, shradha.kanwar@gmail.com

journal of teaching and learning with technology, vol. 2, no. 2, december 2013, pp. 1–4.

invited essay

moocs and me. donald a. coffin1

abstract: this invited essay explores one emeritus faculty member's experience as a student in a mooc.

keywords: mooc, learning, elearning, massive open online course

i recently enrolled in, and completed, a mooc, partly because i have been following the discussion about them (i have, at the moment, compiled a library of 27 articles, blog posts, and other materials that have appeared since the middle of april), but perhaps primarily because i had volunteered to make a presentation about moocs to an organization of which i am a member.2 our meeting was in mid-may, and i thought i would be able to talk more intelligently about moocs in general if i had some experience with at least one of them. my plan was to enroll in a course outside my discipline (economics), and in which i was interested, so that i would be able to experience the course as a student, not as someone with a considerable store of knowledge. unfortunately, given my time frame, i was unable to find such a course that began in early-to-mid april. i wound up in a course that explored the economic development of the world from roughly the mid-18th century to the present. the course, which began near the end of april, ran through about the end of june. according to an end-of-course communication from the instructor, 28,922 people enrolled.
during the eight weeks of the course, the instructor presented eleven video lectures, subdivided into a total of 63 individual videos totaling about 26 hours and 40 minutes of material.3 in general, each section of the lectures (all 63 of them) had one or more ungraded learning activities, for which sample answers were provided.4 the lectures, which were well-organized, well-sourced, and generally well-presented, were (essentially) a talking head accompanied by powerpoint slides. copies of the slides were available as .pdf documents. according to the end-of-course communication to the class from the instructor, “there were 164,946 viewings” of the lectures, or an average of over 2,600 for each lecture part. in addition, we were provided with an extensive (25-page) list of suggested readings, of which over 100 items were available online. i obviously do not know how much anyone else read, but (given that i am actually quite interested in this material) i downloaded what was available and have so far read about half of it. the course also had, for each week's lectures, online discussion forums. the forums were created (and their topics determined) by the enrollees; in general, the discussions were wholly among the students, although the instructor and a couple of assistants would add comments that dealt with issues that the enrollees really could not provide answers for. the discussions were generally civil and well conducted, although my estimate is that well over half the posts were personal opinions of the posters, often with little support beyond the personal experience of the poster. most posters did so using their own names, which i think contributed to the civility of the conversations. we were informed at the end of the course that there were 423 discussion threads, with a total of 2,370 posts and 1,413 comments. so the discussions were fairly active.5 but, as usual, a small number of enrollees accounted for most of the activity. based on the list of most active discussion forum participants, i calculate that only about 90 people were really active in the forums; they created about 44% of the threads and made about 44% of the posts; the 10 most active posters accounted for about 19% of the posts. (as i note below, only about 500 people completed all of the course activities.) (i don't completely trust this estimate, because the list includes people who created 0 or 1 threads and made only a single post; i don't see how it's possible to be less active than that.) there were three peer-assessed activities; if one submitted a response to the activity, one was expected to grade a minimum of three posts by fellow students. many participants graded more than three.
two of these had word limits (activity 2 had a limit of 1,000 words and activity 3 a limit of 1,500), which were widely ignored.6 we were provided with rubrics for assessing the work, but i'm not sure that they were detailed enough. also, we were not provided with examples of what an assessment should look like.7 grading was on a 1 (inadequate) to 4 (outstanding) scale; to receive a certificate of completion, one had to complete all three activities with a minimum score of 2. while there was clearly a large variance in how people approached grading, most of the feedback that i received was reasonable and included comments that i agreed with about the strengths and weaknesses of what i submitted. my judgment is that the course content is similar to what would be provided in a campus-based course, either in-class or online. but the assessment of learning was, i think, wholly inadequate for course credit to be awarded for the class in any college or university. the inadequacy of the learning assessment is due partly to the extremely limited amount of the course material for which assessments were done, but also to the peer assessment system. as i noted above, my experience with it was fairly positive. but, in fact, the assessment was handled by people who were not fully capable of assessing the validity of the arguments made in the activities. so, as hard as people worked, it is unreasonable to expect them to have been able to provide the kind of assessments and feedback that someone with extensive disciplinary knowledge could provide. and how did the students in the course do? again, based on the end-of-course information provided by the instructor, out of the (roughly) 29,000 registrants, only 12,917 did anything in the course, only about 700 completed the first activity, and only about 500 completed the second and third activities. we were told that the average scores on activities 1 and 2 were 2.86 and 3.06, respectively; no data were provided for the third activity. at most, then, the completion rate was somewhere under 2% of registrants and under 4% of those who participated at all (a quick check of this arithmetic appears in the sketch below). granted, in a free course with no real payoff at the end (unless you think a certificate of completion is a payoff), we can expect a lot of “drive-by” registrants. but it does suggest that encouraging and maintaining effort in such a course will be an even greater problem than it is in campus-developed and campus-run online courses.
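to make the arithmetic behind those percentages explicit, here is a minimal sketch in python (the language is my choice for illustration; every count is a figure reported above from the instructor's end-of-course communication):

# sanity check of the participation figures reported in the essay;
# all counts are taken from the instructor's end-of-course communication.
enrolled = 28922    # total registrants
active = 12917      # registrants who did anything in the course
completed = 500     # approximate number completing the second and third activities

print(f"completion rate among registrants: {completed / enrolled:.1%}")       # ~1.7%, i.e., under 2%
print(f"completion rate among active participants: {completed / active:.1%}") # ~3.9%, i.e., under 4%

# concentration of forum activity attributed to the ~90 "really active" posters
threads, posts = 423, 2370
print(f"44% of {threads} threads is about {round(0.44 * threads)}; 44% of {posts} posts is about {round(0.44 * posts)}")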
what can i take away from my experience in this one course for an assessment of moocs in general? first, at the current state of development, attempting to use an existing mooc as a substitute for a course developed and managed on campus is clearly not a good idea.8 based on my experience, i would suggest that, at a minimum, using a mooc as the basis for a credit-bearing course will require a strong on-campus component: a supervising faculty member on campus; locally designed (and graded) learning assessments; and locally operated and staffed discussion sections or online discussion forums. (depending on the institution, the discussion groups/forums could be staffed by graduate teaching assistants, advanced undergraduate students, or a mix of the two. whatever the mix, providing adequate training to the discussion leaders seems to me to be an essential component of making this work.) using a mooc in this way, of course, will not lead to the sort of cost savings that is one of the strongest lures for university administrators. in addition, even in introductory-level courses, whatever online component the course has will need to be updated often; in the case of (for example) an introductory macroeconomics class, annual updates would almost certainly be necessary because the macro economy will get you otherwise. this also reduces any cost savings that may occur. but done this way, i can easily see moocs playing a major role in courses now being taught on campus in large lecture sections. there is, it seems to me, little difference between a talking head with powerpoint slides on video and a talking head with powerpoint slides at the front of a large classroom. whether this means outsourcing course content (and learning objectives), or keeping control and creating your own mooc, is the issue, i think. done correctly, moocs may well kill the large lecture class. it is also clear that, judiciously used, moocs might be able to add a dimension to existing courses that continue to meet in classrooms or operate as single-campus online courses. this, of course, will work only if the schedule of the mooc and the class schedule can be synchronized. an alternative is to take the power of video and online instruction much more seriously than many people seem to do. indeed, i have yet to see a mooc that seriously goes beyond an online talking head (but i have not seen even a significant fraction of everything that is currently available). alex tabarrok, one of the creators of a mooc site emphasizing economics courses, has written that “to take full advantage of the online format, an online lecture has to be different from an in-class lecture. different mediums demand different messaging.”9 doing this—finding and incorporating interesting, informative images, sound, and video clips—would surely enhance the educational process. but it would also be likely to increase considerably the cost of producing the videos for an online course.10
finally, i would encourage anyone who wonders about the implications of moocs for higher education to enroll in one, participate fully in it, and see how it works, either for something you want to learn about or for your own discipline. it's clear that they are not going away soon, and knowing, from the inside, how they work—and don't work—will be an essential part of making sure that, however they wind up being incorporated into our lives, it will not be any worse for us than it has to be.

1 emeritus associate professor of economics, school of business and economics, indiana university northwest, dcoffin@iun.edu
2 indiana university's faculty colloquium on excellence in teaching.
3 in a typical 15-week semester, for a class meeting 150 minutes per week and (in the case of a typical class that i would teach) losing 150 minutes for in-class tests, there would be about 35 hours of class time, to be used for all in-class activity. so this looks to be roughly equivalent in terms of time to a regular semester course.
4 all this is from the course website.
5 the most active thread appears to have been one on the gold standard (which i initiated), with 112 posts.
6 my own judgment is that these limits were considerably too strict.
7 my approach was to list all of the items in each rubric and “score” each assignment on each component of the rubric. i then tried to provide fairly detailed comments based on each part of the rubric. i generally spent somewhere around 30 minutes on each submission that i graded. (once the course was over, i wrote to the instructor to tell him that i thought people needed more guidance on assessing student work, and included a description of what i did. at that time, i also told him why i had been in the class and what my background was.)
8 recent commentary on the experience at san jose state university would suggest that at least one effort to use an existing mooc as a substitute for on-campus courses has not worked out well (see http://chronicle.com/blognetwork/tenuredradical/2013/07/f-is-for-failure-or-dont-invest-your-pension-in-moocs-yet/ and http://www.slate.com/blogs/future_tense/2013/07/19/san_jose_state_suspends_udacity_online_classes_after_students_fail_final.html).
9 “why online education works,” cato unbound (http://www.cato-unbound.org/2012/11/12/alex-tabarrok/why-online-education-works). in an ironic twist, the course that tabarrok and one of his colleagues (tyler cowen) developed for their mooc site (marginal revolution university, at http://mruniversity.com/) uses still images with voice-over narration. while the images are often appealing, they do not add information to the process and are static.
10 such material is often available on publisher websites as supplements to textbooks. my conversations with publishers' representatives suggest that these are expensive to produce (including rights fees) and to keep current.

journal of teaching and learning with technology, vol. 10, special issue, pp. 117–126. doi: 10.14434/jotlt.v9i2.31580

emergency remote studio teaching: notes from the field. tara winters, the university of auckland, t.winters@auckland.ac.nz

abstract: the creative arts use primarily visual, kinesthetic, and somatic modes of teaching that depend on face-to-face communication, in contrast to many other university subjects that rely more heavily on the written word. the hands-on, practice-based nature of art education makes it perhaps one of the least transferable subjects to a fully online model. what can be learnt, then, from the forced situation of teaching and supervising studio-based learning in a higher education context under the 2019 coronavirus disease lockdown conditions? this reflective essay draws on the writer's experience as a fine arts lecturer involved in emergency remote teaching of studio-based visual arts courses during the first half of the 2020 academic year. organized as a series of “fieldnotes,” it aims to capture those fleeting, yet significant, thoughts and reflections so easily lost once things quickly reach a level of “new normal.” notes from the field include the effects of the shifted social dynamic of online communications in a teaching and learning context; the challenges of the video call as a dialogic space for the studio critique; the impact of the more structured nature of online systems with regard to documenting and recording creative work in progress; and the affordances of the dynamic, multimodal nature of the digital medium for working with contextual research material for creative practice. developed as a pedagogical perspective combining reflection in action and reflection on action, this essay offers firsthand observations and discussion, in the context of relevant literature, as a contribution to urgent conversations on the shape of the future learning environment.

keywords: emergency remote studio teaching, education during covid-19, studio education, art and design education, online teaching and learning.

on march 25, 2020, the university at which i teach moved into full online delivery of courses following a government announcement that aotearoa, new zealand was shifting to a “level 4 lockdown” to combat the spread of the 2019 coronavirus disease (covid-19).
like many educators, prior to this i had had no significant experience with online teaching. familiarity with a digital learning management system (lms), used largely to post course information, offer asynchronous discussion forums, and provide access to lecture recordings, represented the extent of my “online” experience. we shifted from an on-campus, in-person teaching and learning environment to a fully online, remote learning situation under urgency and with little time for preparation. the following notes are based on my own experience as a fine arts lecturer at the university of auckland, aotearoa, new zealand. our practice-based studio programs are, under normal circumstances, taught in small student groups, on campus, and with access to dedicated physical studio spaces and specialist workshops. my observations here are offered in the context of emergency remote teaching (ert). ert has emerged as a term to differentiate between courses designed in advance to be delivered online and those that would normally be delivered on campus but have been shifted online due to unforeseen circumstances so that students can continue their learning. ert has been described as: a temporary shift of instructional delivery to an alternate delivery mode due to crisis circumstances. it involves the use of fully remote teaching solutions for instruction or education that would otherwise be delivered face-to-face or as blended courses and that will return to that format once the crisis or emergency has abated. (hodges, moore, lockee, trust, & bond, 2020, para. 13) in response to the covid-19 pandemic, universities worldwide shifted to using digital technologies to continue teaching in early 2020. what we experienced was remote learning, not online learning. it is important not to equate ert with online learning with respect to evaluations of our teaching in 2020 (hodges et al., 2020). furthermore, greene (2020) suggests a shift from an evaluative approach to assessing ert to a documentary one, proposing a move toward narrative and reflection: “curiosity rather than critique might be the most appropriate, and informative, response” (p. 4). mindful of this, my approach here is searching and speculative rather than critical and evaluative. the notion of a “fieldnote” provides a positive archetype for the content and organization of my searching, speculative thinking. fieldnotes are qualitative notes recorded during or shortly after observation of the phenomenon under study. they can be descriptive and reflective (bogdan & biklen, 2007). reflective fieldnotes capture the impressions and ongoing analytic processes of the researcher (brodsky, 2008). in qualitative education research, fieldnotes are an aid to documenting observations, descriptions, and interpretations and can also “provoke critical processes for facilitating reflexivity and situating researcher positionality and subjectivity” (burkholder & thompson, 2020, p. 1). the vignettes that follow are offered in this spirit. they offer descriptions, observations, and reflections that may inspire further conversation and critical inquiry.

reflection in action and reflection on action

this work engages an autoethnographic method in the sense of using a researcher's personal experience to describe and reflect on practice and experience (adams, holman jones, & ellis, 2015).
more specifically, it uses a first-person approach—a method that relies on the experiences of the researcher. the first stage of learning from and through a first-person method requires an unprejudiced openness to the details of experience (roth, 2012). this level of openness provides insight into the taken-for-granted everyday activities and experiences that may go unreported (patton, 2015) yet contain rich and meaningful potential—material that raises new questions and drives further research. my fieldnotes combine reflection in action and reflection on action (schön, 2016). reflection in action suggests that we can think not only about doing but also about what we are doing while we are doing it. descriptive, narrative content notates reflection in action: looking to our experiences, connecting with our feelings, and building new understandings that inform our actions as a situation unfolds (smith, 2001). this approach is in line with warnings about “evaluating” our teaching under the tumult of pandemic conditions. greene (2020) suggested “a cautious and compassionate evaluation of what—if anything—we have learned about specific technological tools and flexible teaching practices” (p. 2). reflection on action (schön, 2016) is done later, after the encounter, allowing us time to explore what we did and how/why things turned out the way they did. by doing this we develop sets of questions and ideas about our activities and practice (smith, 2001). interwoven throughout my fieldnotes is material from the surrounding literature that extends first interpretations, prompts questions, and points to sites for the further work of analysis and inquiry.

fieldnote 1: a first response

as announcements of a pending nationwide lockdown were broadcast, our university quickly made plans for a teacher-only week that would involve planning for remote teaching, including a rapid upskilling of technical know-how to enable staff to operate in a fully digital teaching environment. we managed 2.5 days working collectively on site before we found ourselves working from home, full time. each of us swiftly prepared new learning planners, updated course outlines, prepared new project briefs, and adjusted project requirements so that teaching and learning could simply continue in some form, a form we had no idea would work at all for our studio-based subjects. we were 3 weeks into a 12-week semester when we shifted to online delivery. i came away from my 1st week of emergency online teaching exhausted, but also thinking, “this is not too bad; in fact, this seems to be working ok!” many of my colleagues felt the same way. we may have just been relieved that we made something work at all. the students turned up, the tech didn't break, and we could see and hear each other live in the zoom-sphere, the online video conferencing space we used. we worked through our updated studio project plans with our 1st- and 2nd-year students, explaining our thinking and decisions for their adjusted first-semester course. attendance was high, goodwill was high, and a new at-a-distance collective connectedness was taking shape. a heightened sense of community was apparent, and our efforts were immediately acknowledged by students. things seemed ok in our emergency online-studio bubble. we learned that we could be together, and get things done, at a distance.
perhaps our initial feelings of success were a response to the fact that things didn't appear to be completely falling apart? after all, we had no idea quite how this would go and very little experience, for the most part, with the tools and technologies we were now relying on. we relished the feeling that it was at least possible to do something, to get on with things, albeit in a very different way. the short-term outlook also seemed to help. maybe this would only be for 4 weeks, and then we would return to campus? that would be ok; we could do that. it wasn't until we were told that we would be teaching and learning remotely for the remainder of the semester (9 weeks), and possibly into semester 2, that our response started to shift gear. we had been running on an initial burst of energy, an adrenaline rush, a fight-or-flight response. everything had been happening at a very fast pace. it was crisis stuff. once we moved a little beyond that first period, and with the knowledge that this was to continue for some time, things slowed down, and a deeper kind of rumination set in. i began to think more closely about what was happening in our online classroom.

fieldnote 2: shifted social dynamics

the role of the teacher as a facilitator of learning was immediately heightened in the emergency online learning space. the change from in-person studio classes, where students and staff are present in each other's company for long periods of time and where there is extended opportunity for dialogue, effected a shift in emphasis away from the teacher as the first port of call for help. students needed to be more resourceful without the immediate and continuous contact offered in the face-to-face classroom. dialogue—between students and students and between students and teachers—is of central importance in studio education settings, with the teacher holding a particular role in the group, that of subject expert (ashton & durling, 2000). this places teachers at the center of the learning experience (lave & wenger, 1991). ashton and durling (2000) noted that this apprentice–master model, with its high degree of contact between individual students and staff, is quickly becoming unsustainable in today's educational environment. education technology research has noted a distinct change associated with online learning, where the teacher becomes a secondary figure in the learning process: face-to-face education is teacher centered. we are subject matter experts in the same physical space as students; they are our audience. this is not happening in online education. here, the student is at the centre. we are promoting their active learning and engagement. we are facilitating and enhancing their learning process by providing them all the necessary tools. (vlachopoulos, 2020) in online studio environments, social interaction and peer learning are things that are actively constructed and sought by students, depending on the usefulness of this experience as perceived by students: in the absence of immediate “expert” feedback in the studio, students make use of (and develop) their own expertise through their prior knowledge, the guidance and cues provided by the module material and prior engagement with tutors outside the studio. (lotz, jones, & holden, 2015, p. 22) it was easy to notice an increased self-reliance in the online classroom.
the online environment seemed to shift the dynamic from students being reliant, to an extent, on staff for responses to their questions, to students finding alternative ways to find things out and test their knowledge. using the breakout rooms feature in zoom to facilitate student-to-student discussion proved helpful for collaborative learning and interaction, and for peer-to-peer feedback. students regularly commented on how useful breakout sessions had been and routinely asked for more of these. they valued the possibilities for learning together in this way. there also seemed to be fewer distractions in a zoom breakout room compared to the physical studio space. time was precious and focus was high. student survey information from our creative arts faculty revealed that 72% of students agreed they felt part of a community of learners during the time our course was online, and 77% of students agreed that the online learning environment allowed effective communication between teaching staff and students. self-directed learning and peer learning are central learning concepts in studio education. one of the primary goals of studio education is to help students become self-operating learners. there may be good potential here for a review of current studio pedagogy to take advantage of online formats in further stimulating self-reliant and self-directed ways of working. the studio critique is a signature pedagogical tool in the creative arts, characterizing the strong position of the dialogic approach in studio education compared with other disciplines. the critique is framed around the open participation of staff and students, who share different perspectives about the work that is being critiqued. while online video calls offered us the means to see and hear each other live, not being in the same physical space for our critique sessions was an immediate challenge. as expected, the loss of intimacy that in-person exchanges provide impacted the quality of our felt experience. the all-important gestures, body language, and physical interactions that communicate so much were missing. research tells us that at least half of how we communicate with others is through nonverbal cues (mehrabian, 2008). without the richness of this information available to us, we needed to work harder, which took its toll, and it was difficult to sustain the extended periods of critique time that we would normally manage in a face-to-face setting. we quickly learned that shorter sessions with more breaks were necessary. what we had attempted to do was transfer, wholesale, the structure and time frame of a regular, in-class critique session to an online setting. in hindsight this was problematic. digital learning expert dimitrios vlachopoulos pointed out the importance of not comparing, and not trying to imitate, online and face-to-face teaching: they are different. they are two autonomous, high-quality pedagogical models that can provide equally high-quality education if they are implemented correctly. the traditional strategies of face-to-face teaching probably won't work as well as we would like in an online environment. (vlachopoulos, 2020) dimensions of sensory affect and social interaction are fundamental to practice-based studio learning.
the social interchange of ideas in the physical space of the studio, whether as part of organized class events or just from being on site in the studio, is a critical part of the pedagogy. sensory experiences of space affect the people working in them, how they feel about their learning (the social-emotional aspects of learning), and what meaning they are able to make of it (marshalsey, 2015). orr and shreeve (2018) noted that: a space may not seem like pedagogy, but in its widest sense the studio helps structure what can and does take place when students learn, and it has been a central part of organised learning in visual arts for more than a century. (p. 90) emulating the real-world artist's studio, the physical environment of the on-campus studio provides students and staff with social and intellectual cues for working and thinking like an artist. daniels (2011) described the studio as “a canonical site of creativity, ‘imagination's chamber’” (p. 137). the studio space is a physical and conceptual laboratory for making, testing, exploring, risk taking, reflecting, evaluating, and critiquing. entering the studio space, we are transported to a particular geographical venue for knowledge and imagination (daniels, 2011). relationships are set for particular types of knowledge to come into focus, including somatic, tacit, and embodied knowing. on-site, in-person studio critique dialogues make full use of these physical-social dimensions. at the same time, there were some interesting outcomes of the altered social space of the online studio and the equalizing plane of the zoom screen in our critique sessions. there seemed to be a different social dynamic in the more anonymous space of the online classroom, a less public space, perhaps a less exposing one. the option to turn one's computer camera off seemed to help reduce the anxiety of public speaking for some students. students seemed to find this situation more amenable to contributing to critique discussion, with many more students, including those who are usually quiet, speaking up more often. this offered a distinct advantage over face-to-face formats, where often the same voices are regularly heard and it can be difficult for some students to find ways to contribute. the online format seemed to offer a more balanced and even space for participation in this regard. studies of the practice of critique as a form of feedback and assessment for learning in fully online art and design courses have revealed that online critique can lead to higher levels of participation and collaboration from students (see mcintyre, 2007). feedback from our student surveys also indicated that many students felt more comfortable asking questions during live, online classes, especially during small-group sessions. they also found the chat facilities useful, offering a relatively low-risk way to participate in class discussion. social media conventions influence peer interaction and learning when studio pedagogies move from proximate to online worlds (lotz et al., 2015). students were able to bring their online communication skills into the online studio, which seemed to further energize critique dialogue. e-learning research has regularly turned to the topic of the social context in studio education as one of the biggest challenges to online education (ashton & durling, 2000; lotz et al., 2015; marshalsey & sclater, 2018; wragg, 2020). wragg (2020) observed that “the barriers to online design education relate to interaction and the social environment” (p.
2295) and concluded that, while online studio education should not try to replicate the on-campus educational experience, it is possible to create an equivalent experience conducive to experiential learning and iterative development by recognizing and reprioritizing the social component of the studio. this is said to be achieved through creating inherently social activities that build a community of practice (wragg, 2020). interestingly, the author reviewed the history of the studio education model still used by most departments of art and design, based as it has been for almost 100 years on the bauhaus model, commenting that “the social aspect of the university that was once taken for granted is no longer guaranteed” (p. 2296). with increased and competing demands on students' time, it has become harder to spend long hours in the studio, a situation that used to be more common and contributed to the social aspect of the art school learning culture. similarly, lotz et al. (2015) pointed to the potential of developing online tools and technologies for social interaction and peer learning. from their detailed analysis of social engagement in online design pedagogies, the authors concluded that “social learning mechanisms represent one of the oldest and most natural pedagogies, and online studios, one of the newest forms of human interaction, offer novel opportunities in which such learning can take place” (p. 22). these opportunities make use of the way the online environment facilitates sharing and discussing work asynchronously with peers at a distance. the authors identified a number of themes that were found to have a positive effect on student outcomes, including time on task, listening in, quick social engagement, comment on conversation, a core stable network, and spectrum of engagement (see lotz et al., 2015). these themes offer interesting potential for integrating online features into studio courses and programs.

fieldnote 3: documenting creative work in progress

students are encouraged, and usually expected, to keep a record of their thinking and making as part of active documentation of their creative work in art and design courses. workbooks, journals, notebooks, and visual diaries serve an important function in the life of the artist and their creative practice and are a key part of studio pedagogy. they are often required to be presented at assessment points, regularly accounting for a percentage of a student's grade. a workbook (or other record) typically contains all kinds of notations as a record of influence and inspiration, reflection, and evaluation. these involve visual, textural, and other kinds of documentation of work in progress, as well as notes from critiques, responses to advice, collections of research material, and so on. a range of modalities is used—writing, drawing, photography, collage, detritus, and video, in analogue and digital formats. students typically use a mix of physical notebooks and digital systems for this purpose. the definition of a workbook is left purposefully broad in the art school setting, allowing students to create and curate these personal workspaces with few restrictions. this freedom allows the necessary room for students to develop their own working methods and ways of recording developments. they are remarkable in their diversity, deeply personal, and rich in content.
continued access to this material is important in a studio teaching context. workbooks function as a central resource for conversations between students and staff, a practice at the center of studio-based pedagogy. we worked with this component of studio learning differently in the emergency online classroom. i noticed shifts in the way that the workbook component was structured and made use of—workbooks played an even more critical role than usual. used as part of establishing a routine and as a means of checking in (which seemed more necessary than usual in the online teaching space), the presentation of workbook material by students was set as a regular requirement of our zoom classroom. it was a way of encouraging continued engagement in projects, was used for goal setting, and centered studio teaching around what the students did between classes in a more direct and focused way. ordinarily, workbooks are often kept relatively private and can sit in the background for long periods of time before students reveal their existence at assessment time. the more structured nature of online learning obliged students to more consciously present their material for discussion and review at each class. seeing workbook content more regularly, and presented in a considered way, helped with providing frequent and directed feedback on student work. the digital format was also an advantage in terms of collating everything in one place. students maintained a variety of ways of working, including working hands-on with physical materials, but ultimately everything was collected together and documented in a single format that could be accessed at any time.

fieldnote 4: contextual research

a subset of workbook or support material is the documentation of verbal, visual, and written resources students use to locate and understand the field or context they are working in. this is often referred to as “contextual research” or “artist model research.” contextual research includes the gathering and analysis of material from a variety of sources (books, websites, artist talks, films, gallery visits, etc.), though increasingly, the internet is used for the majority of this kind of work. learning outcomes related to contextual research include being able to identify, locate, and record contextual information (basic research skills) and to engage critically with the information gathered (development of critical and analytical skills). students are generally well skilled in collecting and shifting digital material around using a range of tools and systems and in working with digital media in a collaborative way in online spaces. though students regularly use digital tools and technologies to source and explore contextual research material under usual studio learning circumstances, the shift to using only digital systems revealed the inherent strengths of this modality for these purposes. the technology easily supported nonlinear ways of organizing and reorganizing large amounts of information in ways that are not always possible in the analogue world. digital tools made it possible to include or link to a wide range of different content types in a single space. padlet, for example, offered a digital “wall” space to make notes, add urls, and import a range of different file types—text documents, images, moving images, audio, and video.
google drive was useful for storing multiple documents and sharing files to be worked on collectively. the dynamic nature of these media spaces afforded all kinds of updating, reorganizing, and editing. sharing functions allowed staff continuous access to student work. staff could log in at any time and add comments, suggest new resources, and direct students to specific content by directly adding files or hyperlinks. while this practice was already happening to an extent in our studio courses, the shift to online learning forced additional productive experimentation with processing contextual materials by both staff and students. the ability to work with contextual research material in nonlinear, alternative ways facilitates modes of thinking and working that are important in creative learning. often, the way one organizes material and has access to it allows for certain kinds of thinking to occur and connections to be made, and not others. “visual contextual research,” for example, refers to collecting visual materials to be actively used in the generation of ideas and concepts for artworks. playing with material in order to visualize different possibilities, discover unexpected connections, and engage in associative thinking is part of this process. being able to easily duplicate and reorder material, and to place different images beside/against each other in infinite combinations that accommodate chance, randomness, and intuition, supported the improvisatory modes of thinking and action used by artists (danvers, 2003).

conclusion

in 2020 the world changed, prompting a radical rethinking of the way we do things in education. experiments that are going on right now are likely to have an immediate impact on our pedagogies post-pandemic. the online studio reflects a complex and sometimes contradictory situation of benefits versus challenges. working “alone together” in the online studio misses the fullest experience of all that comes with in-person, community-reliant, hands-on studio learning. much of the highly student-centered approach of art and design education, based on guided learning through ongoing feedback in cycles of action and reflection, does not easily translate into a fully online learning experience (fleischmann, 2015). the fundamental materiality of practices that are based on the qualities of physical objects, surfaces, and spaces, and those that require specialist equipment (kilns, printing presses, darkrooms, high-end digital equipment, etc.), cannot be replicated in an online environment. the continuation of the on-campus experience in the context of practice-based studio teaching and learning is essential. at the same time, some features of online learning may offer potential enhancements to the traditional studio. limits to the on-site studio classroom are being exposed by alternative, digital forms of engagement. artist and educator constantina zavitsanos explained how she has always allowed students to attend studio art classes using zoom, pointing out the presence of several categories of existing inequity: when a student has a reason not to use the physical classroom to display their work, [because they are disabled or sick, or because the work they create is best presented outside of traditional classroom critique] it reveals the physical and conceptual limits the classroom imposes.
(dancewicz, 2020) the current emergency redesign and reinvention of our pedagogies is stimulating a deep reflection on the presumed defaults of studio education, and there is value to be gained from what is achievable online, beyond a pandemic. e-learning is part of the new dynamic that characterizes educational systems at the start of the 21st century (sangrà, vlachopoulos, & cabrera, 2012) and will impact all areas of education in time. bender and vredevoogd (2006) suggested studio courses could be enhanced with online technologies through blended learning models that involve face-to-face learning supplemented with asynchronous and/or synchronous communication via the internet. while not advocating technology as a substitute for the existing model, they suggested that “the use of digital media is a logical addition to the traditional design studio” (p. 114). augmenting the on-campus studio learning experience with online components as part of a blended model is likely to be a critical proposal for studio education as we enter the latest educational paradigm.

references

adams, t. e., holman jones, s., & ellis, c. (2015). autoethnography: understanding qualitative research. new york, ny: oxford university press.
ashton, p., & durling, d. (2000). doing the right thing—social processes in design learning. the design journal, 3(2), 3–13. https://doi.org/10.2752/146069200789390123
bender, d. m., & vredevoogd, j. d. (2006). using online education technologies to support studio instruction. educational technology & society, 9(4), 114–122.
bogdan, r. c., & biklen, s. k. (2007). qualitative research for education: an introduction to theories and methods (5th ed.). boston, ma: pearson a & b.
brodsky, a. (2008). fieldnotes. in l. m. given (ed.), the sage encyclopedia of qualitative research methods (pp. 342–343). thousand oaks, ca: sage publications. https://www.doi.org/10.4135/9781412963909.n172
burkholder, c., & thompson, j. (2020). fieldnotes in qualitative education and social science research. new york, ny: routledge. https://doi.org/10.4324/9780429275821
dancewicz, k. (2020, april). can you teach art online? art in america. retrieved from https://www.artnews.com/art-in-america/features/teaching-art-online-covid-19-professors-strategies-1202684147/
daniels, s. (2011). art studio. in j. agnew & d. livingstone (eds.), the sage handbook of geographical knowledge (pp. 137–148). london, england: sage publications. https://doi.org/10.4135/9781446201091.n11
danvers, j. (2003). towards a radical pedagogy: provisional notes on learning and teaching in art & design. international journal of art and design education, 22(1), 47–57. https://doi.org/10.1111/1468-5949.00338
fleischmann, k. (2015). the democratisation of design and design learning: how do we educate the next-generation designer. international journal of arts & sciences, 8(6), 101–108.
greene, j. (2020, april 6). how (not) to evaluate teaching during a pandemic. the chronicle of higher education. retrieved from https://www.chronicle.com
hodges, c., moore, s., lockee, b., trust, t., & bond, a. (2020, march 27). the difference between emergency remote teaching and online learning. the educause review. retrieved from https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning
lave, j., & wenger, e. (1991).
situated learning: legitimate peripheral participation. cambridge, england: cambridge university press. https://doi.org/10.1017/cbo9780511815355
lotz, n., jones, d., & holden, g. (2015). social engagement in online design pedagogies. in r. vandezande, e. bohemia, & i. digranes (eds.), proceedings of the 3rd international conference for design education researchers (pp. 1645–1668). aalto university, finland.
marshalsey, l. (2015). investigating the experiential impact of sensory affect in contemporary communication design studio education. international journal of art & design education, 34(3), 336–348. https://doi.org/10.1111/jade.12086
marshalsey, l., & sclater, m. (2018). critical perspectives of technology-enhanced learning in relation to specialist communication design studio education within the uk and australia. research in comparative and international education, 13(1), 92–116. https://doi.org/10.1177/1745499918761706
mcintyre, s. (2007). evaluating online assessment practice in art and design. unsw compendium of good practice in learning and teaching, 5, 1–32. retrieved from https://www.unsworks.unsw.edu.au/primoexplore/fulldisplay?vid=unsworks&docid=unsworks_2062&context=l
mehrabian, a. (2008). communication without words. in c. mortensen (ed.), communication theory (2nd ed., pp. 193–200). new brunswick, nj: transaction.
orr, s., & shreeve, a. (2018). art and design pedagogy in higher education: knowledge, values and ambiguity in the creative curriculum. london, england: routledge.
patton, m. (2015). qualitative research & evaluation methods: integrating theory and practice (4th ed.). los angeles, ca: sage.
roth, w. m. (2012). first-person methods: towards an empirical phenomenology of experience. rotterdam, the netherlands: sense publishers.
sangrà, a., vlachopoulos, d., & cabrera, n. (2012). building an inclusive definition of e-learning: an approach to the conceptual framework. international review of research in open and distributed learning, 13(2), 145–159. https://doi.org/10.19173/irrodl.v13i2.1161
schön, d. a. (2016). the reflective practitioner: how professionals think in action. retrieved from https://ebookcentral.proquest.com
smith, m. k. (2001). donald schön: learning, reflection and change. in the encyclopedia of pedagogy and informal education. retrieved from www.infed.org/thinkers/et-schon.htm
vlachopoulos, d. (presenter). (2020). re-imagining higher arts education online [webinar]. european league of institutes of the arts, amsterdam, the netherlands. retrieved from https://janeckert.ch/blog/?p=485
wragg, n. (2020).
wragg, n. (2020). online communication design education: the importance of the social environment. studies in higher education, 45(11), 2287–2297. https://doi.org/10.1080/03075079.2019.1605501

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 66–68.

anonymous online student surveys anywhere

vicky j. meretsky

keywords: assessment, cats, knowledge survey, opinions, student-centered teaching

framework

anonymous surveys can be a valuable tool to gather information from students regarding their perceptions of their own learning styles and progress, of an instructor's teaching style, assignments, and tests, and of other aspects of the learning environment. some course-management software systems provide a built-in capacity to administer an anonymous survey, but not all do, and not all instructors have access to course-management software. in addition, students may not always trust the anonymity of one module of a software system whose other modules are explicitly not anonymous. free online surveys are available through several providers, including (in early 2013) surveymonkey, kwiksurveys, and questionpro. instructors from anywhere in the world can access the services. surveys are easy to construct, and a survey-specific url makes them available to students for any desired period of time. results of multiple-choice questions can be summarized, and answers to essay questions can be collected within the software. survey results can be used to promote reflection by students and instructors, monitor student progress, and fine-tune teaching approaches.

making it work

because instructors have no means to compel students to take anonymous online surveys or to confirm that students have taken them, these surveys are best used in a support role, rather than as a required activity. surveys targeted to assess specific assignments, events, etc., can be time-limited, but surveys could also be used to provide a means of general, anonymous feedback throughout the semester. online sites that provide free surveys tend to permit a wide variety of question types, including single-answer multiple choice, multiple-answer multiple choice, essay questions, ranking, ratings, and matrixes. fixed answers, such as in multiple-choice questions, are easier to summarize, but essay questions permit more thoughtful responses. templates may be available, with standard questions for various uses, including university instructor evaluations. i find it very helpful to quickly distribute a short, targeted survey to sample student reactions to a new teaching approach or an activity based on a difficult subject. the kind of information i elicit in a targeted survey is different from the on-the-fly, in-class classroom assessment techniques (cats; angelo & cross, 1993), such as asking students to list the most difficult or least clear concept in a given class period. i try to keep targeted surveys short (5-6 questions). i begin with a multiple-choice question or two, such as a likert-scale (strongly disagree, disagree, neutral, agree, strongly agree, don't know) question, because students can answer those quickly, and generally do so. if they choose not to take the time to answer a later essay question, i at least have their answer to the summary question.
for an end-of-semester survey to supplement the required survey at my institution, i often use slightly longer (6-10 question) surveys that combine focused questions (was homework feedback sufficiently timely and detailed?) and completely open questions (please add any other comments you like). instructors who have not previously written survey questions may want to consult some basic reference material on survey design, but i find my information needs are usually fairly clear-cut, which simplifies question construction. anonymous surveys can also be used for pre-post learning assessments as one measure of learning outcomes. some sites (e.g., surveymonkey, questionpro) limit the number of questions or the number of survey respondents in their free services; others (e.g., kwiksurveys) do not. some providers also have commercial versions with increased support and services, as well as more flexible downloading options. advanced analyses of survey results require transferring the results to another platform such as a spreadsheet or statistical package, and free services vary in the ease with which large or complex response sets can be downloaded. for surveys that may be pilots for larger studies, designers might use the freeware version from a supplier that also offers commercial support; if the pilot study evolves into a larger project, the commercial support may be welcome. indiana university presently supports discounted prices on several levels of annual surveymonkey subscriptions for instructors on all its campuses, and other universities may also support such services.
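to make the analysis step concrete, here is a minimal python sketch of how a downloaded response set might be tallied outside the survey platform. it is an illustration only, not part of meretsky's workflow: the file name, column name, and csv layout are hypothetical, since each provider's export format differs.

```python
import csv
from collections import Counter

# the likert scale used in the article's example question
LIKERT = ["strongly disagree", "disagree", "neutral",
          "agree", "strongly agree", "don't know"]

def summarize(path, question_column):
    """tally responses to one likert question in a csv survey export."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = row.get(question_column, "").strip().lower()
            if answer:  # skip respondents who left the question blank
                counts[answer] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    for choice in LIKERT:
        n = counts[choice]
        print(f"{choice:>17}: {n:3d} ({100 * n / total:.0f}%)")
    return counts

# hypothetical usage; the file and column names depend on the provider's export
# summarize("survey_export.csv", "homework feedback was timely and detailed")
```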
future implications

metacognition (the practice of reflective learning) is encouraged both in students and in instructors (brookfield, 1995, 2006; schön, 1987). anonymous surveys provide us with the means to do both simultaneously: to learn about how our teaching is perceived, while asking students to reflect on their learning. i have rarely had all students in a class respond to either targeted or end-of-semester summary surveys, but i generally get answers from well over half my students (graduate and undergraduate, class sizes of 25-50) and from a range of levels of progress and satisfaction. students often provide thoughtful and well-reasoned critiques that give me an opportunity to consider aspects of the course through the lens of their experiences. if i receive conflicting responses on a question, i may take the issue back into the classroom to explore it further. giving students the opportunity to understand that they are not uniform in their responses can help to defuse frustration or stronger emotions. evidence of diversity in student responses reminds students that the instructor's goal must be to support all class members in their learning and that they, the students, are all part of each other's learning environments. clickers and other instant-feedback devices could also supply this kind of range-of-reaction information, but they are not in wide use, whereas online surveys are freely available wherever internet access is available. as we strive to become more intentional and transparent in our teaching (clearly enumerating desired learning outcomes and linking activities and assignments to those outcomes), quick, anonymous online surveys are a useful source of evidence to support teaching decisions. in contrast to quick classroom assessment techniques, anonymous online surveys are well suited to address aspects of a course beyond basic comprehension of content. they give students more time for thought, but still can provide instructors with student feedback in a timeframe of days: much closer to the same-day or same-week response of classroom assessment techniques than the after-the-semester response of institutional course evaluations. to give closure to students, instructors should, in turn, give students feedback on what they learned from surveys and how or whether they will act on the information (angelo & cross, 1993). closing the loop with students is easier to do with surveys that are taken during the course, when conversation with students is still straightforward, but instructors may be able to provide feedback on end-of-course survey results by email, if desired. online survey results are a good way to demonstrate reflective teaching and teaching that promotes reflective learning. results of these quick, shorter surveys can be used in teaching portfolios and can be a foundation or stepping-stone for scholarship of teaching and learning. anonymous online surveys are quick to create, easy to administer, and easy to archive. they produce useful results that can promote better, evidence-informed teaching and better learning.

references

angelo, t. a., & cross, k. p. (1993). classroom assessment techniques: a handbook for college teachers. san francisco, ca: jossey-bass.
brookfield, s. d. (1995). becoming a critically reflective teacher. san francisco, ca: jossey-bass.
brookfield, s. d. (2006). the skillful teacher. san francisco, ca: jossey-bass.
schön, d. a. (1987). educating the reflective practitioner. san francisco, ca: jossey-bass.

journal of teaching and learning with technology, vol. 10, special issue, pp. 50-57. doi: 10.14434/jotlt.v9i2.31542

covid-19 attitude correction: rather than crash in the crisis, the author corrected attitude and began to fly

scott wasmer
university of alaska, anchorage
sawasmer@alaska.edu

abstract: the author, an assistant professor in an aviation maintenance technology (amt) program, teaches future aviation maintenance technicians at the university of alaska anchorage (uaa). certified by the federal aviation administration (faa), the amt program is a pathway to becoming a licensed aviation maintenance technician and offers an amt associate of applied science (aas) degree as well as three certificates. the amt program's faa certification requires an faa-approved curriculum (subjects and learning objectives) as well as adherence to regulatory standards for teacher–student contact hours. the university's amt program consists of a combination of didactic and hands-on teaching/learning styles, including student performance of aviation maintenance tasks (e.g., aircraft inspections and engine overhaul). the 2019 coronavirus disease (covid-19) pandemic required uaa faculty to convert courses to a suitable online delivery format and change the curriculum of an entire semester of courses. the author's initial response: it would be impossible to accomplish the conversion and still maintain faa requirements. canceling the program until after the pandemic was discussed. this was not an option, as current students would lose faa-mandated credits and hours, and the amt program could be closed permanently because of state funding issues. so, the complicated conversion began, and online learning commenced midsemester.
as the semester progressed, the author began to embrace the online modality and champion an effort to complete conversion of the entire program. through this experience, the author realized the tremendous benefits of online teaching: a greatly improved learning and lifestyle experience for the students as well as economic benefits to a financially challenged institution. the online program creates a learning environment that more closely matches the students' future technology-driven careers and increases the knowledge and skills they will gain. pandemic gathering restrictions have limited the number of students allowed in labs and field activities. though this was initially a concern, students have benefited through increased student–teacher contact and learning opportunities during these activities.

keywords: change theory, educational philosophy, online education strategies

blissful ignorance

march 9–13, 2020 was spring break at the university of alaska, anchorage (uaa), and i spent the time in kodiak, alaska, doing a part-time side job. i was installing an upgrade to the radios of a de havilland dhc-2 beaver aircraft. i teach in the aviation maintenance technician (amt) program at this university. i am a certificated airframe and powerplant mechanic (known as an a&p) and do some jobs during school breaks. it helps me to stay current with the ever-changing technology, and i have found that students relate and connect well with professors who are current in the industry. kodiak is a small town on an island about 250 air miles from anchorage (equivalent to the distance between chicago and columbus, ohio). alaska itself is something of an island, separated from the contiguous states by distance, culture, and time. so now i was on an island of an island. i was not paying a lot of attention to things happening around the world. of course, i had heard of the coronavirus and knew it was wreaking havoc in italy. however, italy is a long way from kodiak, and the virus seemed hardly likely to influence my day-to-day life. i was very wrong. on wednesday, march 11, i began receiving emails from the aviation technology department director stating we would complete the semester teaching online. we were given an extra week before classes resumed to get organized and change our curriculum to an online format. my reaction was to immediately approach a local air carrier about employment. i was initially certain we would not be able to teach entirely online. i thought there was a very good chance our program would be cut, as online teaching of the subjects seemed impossible. our program is approved by the federal aviation administration (faa) under title 14, part 147 of the code of federal regulations. this approval allows graduates of our program to take the tests for certification as an a&p. without this faa approval, our students have no reason to participate in our program. part of the faa approval includes our curriculum meeting certain standards and contact hours between instructor and students. during this time, the university was conducting program review. i knew cuts were coming, including cuts to entire programs as a result of state budgetary concerns. i feared that if we had to close for a semester or more, the financial burden would simply be too large, and our program would get axed. to call me pessimistic about the future would be an understatement.
i focused on completing the work on the beaver and flew home thinking i was likely moving to kodiak and returning to full-time mechanic employment. i had mixed feelings about that. on the one hand, i very much enjoy the hands-on work as a mechanic. the air carrier in kodiak is a well-run and well-funded company. the job would be enjoyable, and small-town life in kodiak was appealing. the isolation of living on a semi-remote island during a pandemic seemed like a good idea, also. on the other hand, i love teaching. watching and helping students achieve their goals is an indescribably wonderful experience. i did not want to give that up.

return to the storm

i returned to work with the rest of my team in the amt program and began to look at the situation. most of our courses have a strong hands-on component. about one third are two co-occurring classes, with a theory and a lab class. at the time, i was teaching three of those courses, half theory and half lab. the lab portions are not experimental-type labs but application labs. one of the classes was aircraft fuel systems and aircraft fuel systems lab. in the theory portion, i teach theories of carburetion, for example, and in the co-occurring lab class, students overhaul actual aircraft carburetors using the identical process and tools they will use on the job as an a&p. our initial plan was to suspend the lab classes until later, when we could all meet face-to-face. we would finish the semester by teaching only the theory classes online. i was skeptical these theory classes could be taught effectively online. i had taken online courses before and had a very positive experience with online learning. i received my undergraduate degree online through a large national collegiate athletic association division i school. i did well in school, earning my degree while simultaneously owning/running an aircraft repair and modification business. while getting my degree online, i also taught some classes in the amt program at my current university as an adjunct professor. so, i was in favor of online learning for certain subjects and programs. aviation maintenance was not one of those programs. my initial suggestion was that we simply stop the entire program and pick it up in the fall 2020 semester, if we were still around. fortunately, cooler heads prevailed, and we began the process of seeking faa approval to teach our curriculum online. i began to analyze the courses by goals and requirements and to contemplate how to leverage technology. i had some experience developing technical training online as the maintenance training manager for a large regional airline in the midwest. i had also been employed in various maintenance management roles for air carriers here in alaska. i reflected on those experiences as i looked at the courses and thought about what exactly the airline industry expects of newly minted a&ps and how to deliver that. at this time, the uaa academic innovations department held some remarkable training on technology tools and resources, namely, the ubiquitous zoom. i began to see the task was possible. i thought back to my undergrad experience and what had been effective and what had not. that university, as well as uaa, uses blackboard as the instructor–student web interface. like every other web-based or software tool, it is not perfect and has limitations.
however, it is the tool i had, so i began to learn more and attempt to become proficient with it. i had not been using it extensively in my courses prior to this time. i mainly used it as a repository for assignments and technical data. fortunately, i had been using the grading function already and was proficient with that. as part of our faa approval, we maintain records of student grades. for each course, we are required to file them in a specific format. consequently, using the gradebook and grading features of blackboard is an extra layer of complexity in course administration. i had previously learned to use the gradebook so that students could always and easily see their grades and status in the class. this turned out to be a visionary move, and i was able to train colleagues in this skill. as i progressed in switching classes from in-person to online, i realized the paradigm shift is much greater than simply delivering course material into a webcam rather than in front of the classroom. i have long believed a great advantage to online learning is the flexible schedule. i have since found other advantages. i also realized the entire concept of thinking in terms of a static "class time" had to be thrown out. i learned course material had to be thought of as "chunks" of information, and effective online chunks ought to be delivered in smaller pieces (blackboard, n.d.). i strive for 10- to 15-min chunks. i base that more on my experience and gut feeling than hard, empirical data. however, i can't be far off; it appears to me many popular youtube and other social media videos are less than 10 min.

setting out in the storm

we recommenced the classes, and my first task was to teach the students how to learn online. i was using new tools in the middle of a semester, such as blackboard discussion boards, shareable content object reference model (scorm) modules of self-paced learning, and other resources. not only did i have to navigate this new world, i had to lead the students in these uncertain skies. for the students, simply checking blackboard online a few times per week was one of the first paradigm shifts they had to master, and a change i had to encourage and foster. with this change i saw a lot of fear among the students that this would not work and that they would not learn the subjects. i noticed some institutional fear among faculty and staff about whether this would work. as i stated above, i was among the most fearful, initially. add to this all our generalized fears about life and our futures while in the early stages of the global 2019 coronavirus disease (covid-19) pandemic. that is a lot of fear, and i believe fear kills. i see fear killing spirit, drive, and hope. people's reaction to fear varies. however, i observe the nearly universal reaction to fear is control. nothing scientific or quantifiable here, but think on a common experience we have all had or seen. imagine you are in an airplane flying at high altitude across the country. we often encounter some turbulence, even on a bright clear day. what is the reaction of people, perhaps yourself if you are afraid of turbulence, or even of flying itself? nearly everyone will grip their seat or seatmate tightly. think on this for a moment, and on the absurdity. first, we are in a perfectly safe and airworthy craft. we are enjoying incredible travel that is beyond the imaginations of our grandparents when they were children.
this travel is in an industry tightly directed and regulated, with a deeply entrenched culture of safety and performance. the fact is air, at altitude especially, has a lot of wind and movement. it is entirely reasonable that, at times, it should be as rough as an old washboard gravel road. now, if we are frightened as this "washboard air" bounces our airplane a bit, we don't instantly think through these logical, plausible, and positive facts. rather, we grip the seat as if we could somehow hold a 60-ton airliner traveling at nearly 550 mph and keep it from bouncing up and down or side to side in response to the "bumpy" air. to me, this gripping of the seats points to a very deep, perhaps primal desire or instinct for control in the face of fear. my own reaction when i first heard we were going to teach online is another example of this. i like to think of myself as being compassionate and altruistic, and in fact, i exhibit these characteristics often. yet, when i first heard of teaching online, i did not think of the logical and plausible facts. i did not immediately consider the likelihood of the faa having to become flexible and interpret their guidelines broadly. i did not consider the state's need for and interest in the success of the aviation technology department, or even that closing a college or program requires years to occur. i honestly admit that i did not even think much, at first, about the implications or the real health concerns for others, my family, and friends. my very first fear-driven thoughts and actions were to secure employment in the event my job ended. the actual antidote to fear, i believe, is to relinquish control. reflect with me again on the image of the 737 flying in turbulent air. when i experience turbulence, i now take different actions. i slightly loosen my lap belt, sink back into my seat, and take a few breaths. i relax and let go, and the fear subsides. i can literally feel my heartbeat slowing down and my breathing becoming more regular. in the absence of fear, i can begin to think logically about windy, bumpy air, airline safety, the mechanical strength of forged aluminum wing spars, and so on. i can only let go by faith. in the case of the turbulence, i take these actions to let go because i have faith they work. my faith is based on my experience. the first time i tried to relax, it was because somebody i trusted suggested i try it. i did not have a lot of faith in the idea, yet i trusted and had faith in my pilot friend and his experience. my faith in him and his advice allowed me to let go and enjoy a flight and the miracle of modern airline travel. since then, i can let go and have the fear dissipate in turbulence. in this pandemic, with the initial fears my students and coworkers were experiencing, i knew letting go would be the key to displacing the crippling fear. one of my daily goals, in addition to the simple lecturing on facts, was to try and instill some faith. in each of the classes, i decided to foster an environment where faith could grow while teaching the students how to use the discussion boards. the first assigned threads in each class asked them to share what positive things they could foresee with switching to online learning. each student was required to list ways they thought this switch to online learning could be positive. they were then to comment on at least two other students' posts.
reading and grading these posts was very enjoyable. i got some new ideas and saw more silver linings in these cloudy skies. i began to call them "covid silver linings." these began to change my attitude as well. i, myself, began to search for some positives in this environment. i had already known that the old teaching paradigm of a "sage on the stage" who pours out information is not the most effective teaching method, at least not in all cases. i had wanted to incorporate more technology-driven, interactive activities in the pre-pandemic didactic courses. as with so many other ideas and plans, i simply never found the time to learn and use these tools. now, i was going to be forced to learn these things. i had always hated grading exams. within my department, we traditionally use a lot of multiple-choice questions. we have a good reason for this, namely, that when the students graduate, their first step toward becoming certificated mechanics is to take a series of faa-administered exams. all are multiple choice. i do not "teach the test." however, i strongly believe i ought to be helping them prepare for this, their first objective after graduation. additionally, most of our subjects and topics are concrete facts, such as ohm's law, the strength of aluminum alloys, or how to properly service a turbine engine, to name a few. these topics and outcomes work well with multiple choice. when testing in blackboard, the multiple-choice questions are automatically graded. another covid silver lining. we have another odd paradigm in our testing: using closed-book exams. we stress to the students that they must always use the books and manuals on the job. we stress listing references in homework and in the labs. yet, we traditionally gave closed-book tests. i never liked that and believe open-book tests are a better idea. again, this was something i had never found the time to implement. now, i was going to have to make the time and learn how to effect this change. after all, with students taking tests online and at home, there is no simple, practical way to proctor closed-book exams. it is much simpler to make the exams open book. i began to learn something about open-book tests. i had always heard they should be timed. reflecting on my own online learning experience, i recalled they were all timed. i began to think on and look for research regarding effective time limits. i honestly did not find much; however, somewhere i got the idea of 2 min per question. i am not certain where this comes from, but it works if i design questions well. i gave my first exams, and the grades were down significantly. i did more research, found articles that made sense and lined up with my own undergrad experience in taking open-book exams, and shared these with the students (lundin, 2019; silverman, 2018). again, i was teaching them how to do online learning. i also realized i had to rewrite exams. that did not bother me a lot. the process i used is simple but time consuming. i had to rethink how i measured learning objectives. rather than ask the students to recall facts, such as "define ohm's law," i had to rewrite questions that measured their application and understanding of those facts. a little effort was needed, but the result was a much better exam and test of their learning, an exam that could be used in the traditional, face-to-face environment as well. another covid silver lining.
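the 2-min-per-question heuristic is simple arithmetic; the short sketch below (my illustration, not a uaa or blackboard tool) turns a question count into a timed-exam window.

```python
# illustrative only: the 2-min-per-question rule of thumb described above.
# most lms test timers take their limit in whole minutes, so we just multiply.
MINUTES_PER_QUESTION = 2

def open_book_time_limit(num_questions: int) -> int:
    """return the time limit in minutes for an open-book, timed exam."""
    return num_questions * MINUTES_PER_QUESTION

# a 30-question exam gets a 60-min window; a 50-question final gets 100 min
assert open_book_time_limit(30) == 60
assert open_book_time_limit(50) == 100
```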
riding out the turbulence

this was one of the big breakthroughs for me in moving from acceptance of the online teaching to championing it. i was forced to literally rewrite not only the exams but all the course material. the basic outline and flow remained, yet the delivery tools obviously changed. i had been teaching primarily with a very traditional model, and i was not especially content with it. each course taught had two class periods per week, and, as previously mentioned, many also had a co-occurring lab course. my old model had been to make an outline of the textbook that supported the learning objectives and then lecture from that. my lectures were typically a powerpoint presentation with some youtube videos for emphasis. there were usually homework assignments to support the learning. this method is okay for simply teaching facts, of course, but it is not effective for teaching and exploring critical thinking skills. in my opinion, critical thinking is a particularly important skill for a successful mechanic. a&ps work within a very rigid box of regulations and standards. yet, within this box we typically have wide latitude. take any nine mechanics, and no two of them are likely to skin the proverbial cat the same way. this aspect of the job is what so many mechanics find appealing. with my old way of teaching, i was effective at describing the box, and even some of my own personal "cat-skinning" techniques (so to speak). but i was not doing much to encourage the students to think for themselves. i believe developing critical thinking, coupled with resources, ought to be the advantage of, and reason for, enrolling in a university program such as ours to obtain the a&p certificate. the faa offers two paths for this certificate. one path is to apprentice under a certificated mechanic for 3,000 hr and then pass a test series. the other path is to attend and complete a course at a school approved under part 147 of the code of federal regulations and then pass the identical test series. among the part 147 schools, some are purely a vocational-technical type that can complete the faa-required subject matter and minimum 1,800 contact hours in a calendar year or less. our program requires five semesters with no summer sessions. that is a substantial investment of time and money for our students (2.5 years and substantial tuition costs), when they could obtain the same certificate in 1.5 years while getting paid as an apprentice. we have to offer a good reason for people to make that choice: an advantage. in my opinion, the significant advantage to students in our program is being taught to think critically and having opportunities to practice that skill. students get some opportunity to develop critical thinking skills during lab periods. i had wanted to do more of this development in the regular class periods. the traditional "sage on a stage" teaching model provides limited opportunities for critical thinking. if the "sage" has excellent in-class questioning technique, this provides a great opportunity; as we know, though, there are limitations to this, in addition to the problem of variable questioning skills. i was forced to rewrite tests and course material. i had to learn new skills, using technology for the purpose of generating interaction with students. these course rewrites could also be incorporated into the traditional, face-to-face classes if we return to that. in this milieu of change and paradigm shift, i began to see opportunities to develop critical thinking skills where online learning is more effective. for example, i use blackboard's discussion boards to ask a question about the application of a technical course concept. the students are required to answer these with a minimum number of words and also post responses to two classmates' answers. i design the forum such that each student must create their own post before being able to see other students' posts. this prevents simply copying others' ideas. i also set a maximum number of responses to each initial post, so that if a thread already has two replies, students cannot reply to it for a grade. surprisingly, many students make three to four replies weekly, though only two are required. using online tools such as this, student interaction and idea sharing are happening more than i have seen in the traditional classroom setting. another covid silver lining.
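to make the forum design concrete, here is a minimal sketch of the rules just described. it is an illustration, not blackboard's actual api or settings: the minimum word count is hypothetical (the article says only that one exists), and the function and variable names are invented for clarity.

```python
# illustrative model of the discussion-forum rules described above.
# MIN_WORDS is hypothetical; the article says only that a minimum exists.
MIN_WORDS = 100
REQUIRED_REPLIES = 2               # each student must reply to two classmates
MAX_GRADED_REPLIES_PER_THREAD = 2  # a "full" thread earns no further credit

def initial_post_unlocks_forum(post: str) -> bool:
    """a student sees classmates' posts only after a long-enough initial post."""
    return len(post.split()) >= MIN_WORDS

def reply_earns_credit(replies_already_in_thread: int) -> bool:
    """a reply is gradable only while the thread has open reply slots."""
    return replies_already_in_thread < MAX_GRADED_REPLIES_PER_THREAD

def week_complete(posted_initial: bool, graded_replies: int) -> bool:
    """weekly credit requires an initial post plus the required replies."""
    return posted_initial and graded_replies >= REQUIRED_REPLIES
```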
flying above the storm

the last area that changed my attitude about online learning for aviation mechanics is mostly economic. as i mentioned above, uaa, like many universities, is facing significant fiscal challenges, and the leadership is forced to make difficult decisions regarding budget cutting. if we examine only the business side, there are obviously two choices: a business in hard times must either cut expenses or raise net revenue if it is to survive. gross revenue is only helpful to the extent there are not also significant cost increases. i began to imagine the possibility of an entire paradigm shift in our program and the incredible opportunity for increased net revenue. our pre-pandemic model is captive to the old paradigm of college education, namely, classes forced into static periods and lab sessions. practical application and critical thinking development are also locked into these static periods. for example, the aviation fuels course i taught last spring (2020) met monday and wednesday afternoons and was scheduled for 1.5 hr of lecture followed by 1.5 hr of lab. when we began to study aircraft carburetors, the lab was to overhaul an aircraft carburetor. on the first day on this subject, students do not have the knowledge to even begin the overhaul. typically, i teach this topic with a few double class periods followed by a few double lab periods. this works, to a point. the actual task of the carburetor overhaul typically takes 6 to 8 hr, including researching documentation. with the old paradigm, a student might start this project on a wednesday afternoon and then must stop at an arbitrary point, no matter how disruptive to the learning process. the student then resumes the project 5 days later at an arbitrary time. the impediments to learning and the inefficiencies are self-evident. i began to imagine a new model where students spend time learning the theory of operation and principles of carburetion. they would learn with a variety of instructional methods, including zoom lectures, online group peer discussions, reading, videos, and other web-based interactive activities. then, after completing these assignments, they would have the opportunity to physically come into our building and be given 8 hr dedicated to exploring and overhauling carburetors. this paradigm could be extended to the entire amt program, where students enroll and are scheduled to attend the physical building only on certain dates.
on these dates, they would have 4- or 8-hr lab periods. with lengthy tasks such as turbine engine overhaul, they would be given lab periods on several consecutive days. this model has some obvious benefits to learning. however, i also saw significant financial opportunity for the university. we could double enrollment without having to double the faculty. the faa has strict standards regarding class size and instructor-to-student ratio. we typically have enrollments near that limit. under the traditional teaching model, if we are to expand and increase enrollment, we need to double the faculty. this has many problems, including finding qualified professors, as well as a significant investment before the increased revenue of double enrollment is realized. additionally, there is of course no guarantee of doubling enrollment. with a new model and vision, we could double enrollment without adding faculty, or at least without doubling the faculty. this requires a complete paradigm reversal and a significant investment of time to rework our program. returning to the business case, i believe that if any business were told of a system that would double gross revenue without increasing expenses, they would want to learn more. once it was explained that the system required a focused effort to learn new skills and a small investment in technology that could be used across their entire company, they would ask for help in seeing this new model and learning these skills. in the amt program, we can double enrollment without doubling faculty by simply reassigning duties. those who are technically inclined and motivated to learn these skills could teach and develop the online courses. those faculty who are technology-challenged typically have a lot of practical, hands-on experience. they are great at the lab periods, and they could focus on teaching them. i can easily teach two sessions of my online classes without doubling my labor hours per week. a professor focused on the lab classes could hold multiple lab periods in a week, perhaps repeating monday and tuesday labs on wednesday or thursday. reworking the schedule while maintaining faa mandates is not simple, but the payoff is immense. online aviation maintenance, rather than becoming the death knell of the university amt program, may be the savior of the program. yet another covid silver lining.
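the staffing arithmetic behind this argument can be made explicit. the sketch below is illustrative only: the article says the faa sets strict class-size and instructor-to-student limits but does not state the cap, so the 25-student figure is an assumption chosen for the example.

```python
# illustrative arithmetic for the staffing argument above. the 25-student cap
# per instructor is an assumed value for the example; the article states only
# that the faa imposes strict class-size and ratio limits.
MAX_STUDENTS_PER_INSTRUCTOR = 25

def lab_sessions_needed(enrollment: int) -> int:
    """number of lab sessions required to keep every session under the cap."""
    return -(-enrollment // MAX_STUDENTS_PER_INSTRUCTOR)  # ceiling division

# doubling enrollment from 25 to 50 doubles the lab sessions (1 -> 2), but one
# lab-focused professor repeating the session midweek can cover both, so the
# faculty count need not double.
assert lab_sessions_needed(25) == 1
assert lab_sessions_needed(50) == 2
```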
clouds have silver linings

at the time of this writing, the fall 2020 semester has begun. we are teaching a modified format with a hybrid of face-to-face time for labs and online classes for the didactic portions. it is not the entire paradigm shift i envisioned and hoped for, but it is a step. systems and groups take time to change course. sometimes, like on the naval aircraft carriers i served on in stormy weather, a gentle course correction is safer than a hard-over, immediate turn. a couple of weeks ago, i was feeling sick with several covid-19 symptoms, including fever. this was a saturday. i went for a test (negative, thankfully) and self-quarantined while waiting on results. i had an in-person lab period scheduled for later that week. suddenly, all these terms such as "contact tracing" and "contingency plans" were very real. thinking of how to continue teaching this fall if i were sick was a dark and cloudy vision. i had some doubts and fears about the implications. the fear came back, and i was at first tempted to try all sorts of control measures, such as announcing schedule changes and implementing a new course syllabus. then i was able to remember how, so far, this had all worked out well despite the storms. i remembered how resilient the students had been, and how helpful my coworkers were in these times. i was able to reflect on these silver linings and lean back and relax in faith that this would all work out fine. yes, i did come up with a few contingency ideas, but i kept them to myself and did not allow them to distract me from the immediate tasks and student needs. these are cloudy and fearful times for us all. president roosevelt in his first inaugural address said, "the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance" (roosevelt, 1933). are these times not like the conditions of 1933? uncertainty and doubt about the future are all about us, in the headlines and coffee shop conversations. this is the ideal breeding ground for fear. i have already stated my belief that faith, not control, is the proper response to fear. all these clouds have silver linings. i look for covid silver linings everywhere. they add to the faith, which is the antidote to the fear. i try to share them with friends, coworkers, even strangers. when it comes to teaching, there are many, and they make cloudy days beautiful.

references

blackboard. (n.d.). are your courses exemplary? retrieved march 17, 2020, from https://www.blackboard.com/resources/are-your-courses-exemplary
lundin, e. (2019, march 26). how to study for (and take!) open book exams. retrieved april 1, 2020, from https://collegeinfogeek.com/open-book-exam/
roosevelt, f. (1933). "only thing we have to fear is fear itself": fdr's first inaugural address. retrieved september 23, 2020, from http://historymatters.gmu.edu/d/5057/
silverman, r. (2018, october 15). exam preparation: strategies for open book exams. simon fraser university library. retrieved april 1, 2020, from https://www.lib.sfu.ca/about/branches-depts/slc/learning/exam-types/open-book-exams

journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 62–65.

strategies for engagement in online courses: engaging with the content, instructor, and other students

beth dietz-uhler and janet e. hurn

beth dietz-uhler, department of psychology, miami university, middletown, oh 45042; uhlerbd@miamioh.edu, (513) 727-3254.
janet e. hurn, coordinator of regional e-learning initiatives, miami university, middletown, oh 45042; hurnje@miamioh.edu, (513) 727-3341.

framework

in recent years, there has been an increasing focus on student engagement (e.g., pike & kuh, 2009; porter, 2009). student engagement occurs when "students make a psychological investment in learning. they try hard to learn what school offers. they take pride not simply in earning the formal indicators of success (grades), but in understanding the material and incorporating or internalizing it in their lives" (newmann, 1992, pp. 2-3). research (e.g., kinzie, 2010; prince, 2004) strongly suggests that when students are engaged, they tend to perform better. when students are actively engaged in the material, they tend to process it more deeply, which leads to successful retention of the material (e.g., craik & lockhart, 1972).
in this paper, we describe several ways in which online courses can be designed to promote student engagement. all of these techniques are consistent with quality matters rubric standards (quality matters, 2011), area number 5: learning interaction and engagement.

● 5.2 learning activities provide opportunities for interaction that support active learning.
● 5.3 the instructor's plan for classroom response time and feedback on assignments is clearly stated.
● 5.4 the requirements for student interaction are clearly articulated.

consistent with quality matters, we have used a number of strategies in our course designs to foster student engagement with the course content, with the instructor, and with other students (see table 1 for a summary of these strategies). below, we describe in more detail how these simple course design and implementation strategies can be used to promote student engagement.

making it work

student engagement with course content. to encourage students to engage with the course content, we employ several strategies. in most of our courses, students primarily receive content from a textbook and from videos and interactive activities. one strategy we use is to create short (no more than five-minute) audio introductions to each module. these introductions involve the instructor talking enthusiastically through four to five powerpoint slides and presenting a general overview of the module content. we use knovio (www.knovio.com), which is free and does not require any software for students to download. additionally, we require students to complete a number of engaging, online, interactive activities. these activities are generally in the form of a game, which most students find to be stimulating (e.g., davidson, 2011). many activities of this sort can readily be found online (e.g., merlot: www.merlot.org) or through textbook publishers (e.g., pearson's mystatlab: www.mystatlab.com).

table 1. summary of strategies for student engagement.
engagement with content: listen to the audio introductions; engage in the online interactive activities; complete mini projects; respond to critical thinking questions in the discussion forum.
engagement with the instructor: listen to audio introductions; watch short, how-to videos; read frequent feedback in email and in the learning management system; read "bookend" weekly emails; read and respond to individualized "how's it going?" emails; read and respond to the professor's email responses.
engagement with other students: respond to classmates' critical thinking answers in the discussion board; participate in "open discussion" in the learning management system; participate in exam review activities; participate in "ask the professor" discussion in the learning management system.

another strategy we use is to require students to complete a "mini project" for each module. the mini projects are designed to require students to apply the material from the text and the interactive activities, relate the material to their own lives, learn or make use of existing skills such as technology or creative abilities, and be fun.
one example of a mini project is writing a letter to your grandparents telling them what you will learn in this course, how it applies to your life and to their lives, and what questions you have about the material. when students apply course material to their own lives, they tend to remember the information better (e.g., roediger, gallo, & geraci, 2002). another example is for students to create a short video (we suggest they use screenr or screencast-o-matic) explaining the parts of the brain and the nervous system. other mini projects involve creating posters, public-service brochures, and letters to a newspaper editor.

student engagement with the instructor. we employ a number of different strategies to encourage interaction with the instructor. in addition to the audio introductions previously described, we also create short, "how-to" videos (using screenr or screencast-o-matic) to present "frequently asked questions" about the course, to show students how to access feedback in the collaborative learning environment (cle), or to show students how to use software to create a poster. like the audio introductions, it is important that students know that it is their instructor's voice they are hearing in the audio. additionally, for each module, students receive feedback from the instructor on their work. feedback is given in the course cle as well as via email. the instructor also sends "bookend" emails each week, which provide general feedback on the prior module and preview the next module. typically, the instructor will try to add a sentence or two that is not course-related, such as a comment about a sporting event or the weather. we also engage with students in an "ask the professor" discussion board in the course cle. the idea is for students to ask questions about the course, the material, or anything else. other students can then see the student's questions as well as the instructor's response. one of the most important strategies that we use is to send personalized "how's it going?" emails to students twice per semester. the goal of these emails is to let students know that we care about them, which we know is vitally important to student success (e.g., christophel, 1990; swan & richardson, 2003). we estimate that about 90% of students respond to these emails to let us know how the class is going for them and how they are doing in general. finally, we respond quickly to students' emails to us. we hear often in course evaluations that students appreciate our quick responses, as it lets them know that the instructor cares about them. all of these strategies are employed to achieve the goal of promoting student engagement.

student engagement with other students. there are three primary mechanisms we use to encourage student engagement with other students. first, students are required to post a response to two other students' critical thinking answers in the cle discussion board. students post these responses for all modules, so they are interacting every week with their classmates. second, there is an "open discussion" board in the cle, which students (and the instructor) can use to post comments or questions about anything. in general, if students do not initiate discussion, then the instructor will. topics might include queries about favorite movies or books, requests for comments on current events, or a simple query asking how everyone's weekend was spent. third, for each exam, students are required to complete some type of review and post it to the discussion board. the review might take the form of generating questions about the material, creating a concept map, or writing a few paragraphs about how the material across three modules is connected. the "interaction" takes place with the requirement that other students read what their classmates have posted (and yes, students are told that the cle records, for the instructor, who reads what post).
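a simple way to monitor the first mechanism is to tally each student's replies per module. the sketch below is an illustration of such a check, not a feature of the cle, and the names and counts in the example are hypothetical.

```python
# illustrative check (not cle functionality): flag students who have not yet
# met the two-replies-per-module requirement described above.
REQUIRED_PEER_RESPONSES = 2

def below_requirement(replies_by_student: dict) -> list:
    """return the students whose reply count is under the requirement."""
    return sorted(name for name, count in replies_by_student.items()
                  if count < REQUIRED_PEER_RESPONSES)

# hypothetical tally for one module
module_tally = {"student a": 2, "student b": 1, "student c": 3}
print(below_requirement(module_tally))  # ['student b']
```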
future implications

we have been employing these engagement strategies in our courses for many years, as they are consistent with how we design our courses with quality matters in mind. how do we know if our students are engaged? research (e.g., johnson, 2012) suggests that students are engaged when they exhibit the following behaviors:

● paying attention
● taking notes
● listening
● asking questions
● responding to questions
● reacting
● reading critically
● writing to learn, creating, planning, problem solving, discussing, debating, and asking questions
● performing/presenting, inquiring, exploring, explaining, evaluating, and experimenting
● interacting with other students, gesturing and moving

anecdotal evidence suggests that our students are exhibiting many of these behaviors, leading us to believe that they are engaged with the material, the instructor, and other students. for example, students are frequently interacting with other students in the online discussion board, they seem to take pride in the mini projects for each module, and they typically exceed minimum word counts on projects and critical thinking questions. they also regularly engage via email with the instructor and report that they are enjoying the class and learning.

references

christophel, d. m. (1990). the relationships among teacher immediacy behaviors, student motivation, and learning. communication education, 39, 323-340.
craik, f. i. m., & lockhart, r. s. (1972). levels of processing: a framework for memory research. journal of verbal learning and verbal behavior, 11, 671–684.
davidson, c. n. (2011). now you see it: how the brain science of attention will transform the way we live, work, and learn. new york: viking.
johnson, b. (2012). how do we know when students are engaged? edutopia. retrieved may 21, 2012, from http://www.edutopia.org/blog/student-engagement-definition-ben-johnson
kinzie, j. (2010). student engagement and learning: experiences that matter. in j. christensen hughes & j. mighty (eds.), taking stock: research on teaching and learning in higher education (pp. 139-153). kingston, canada: school of policy studies, queen's university at kingston.
newmann, f. (1992). student engagement and achievement in american secondary schools. new york, ny: teachers college press.
quality matters. (2011). retrieved march 2, 2013, from http://www.qmprogram.org/lit-review-2011-2013-rubricpdf/download/qm%20lit%20review%20for%202011-2013%20rubric.pdf
pike, g. r., & kuh, g. d. (2009). a typology of student engagement for american colleges and universities. research in higher education, 46(2), 185-209.
prince, m. (2004). does active learning work? a review of the research. journal of engineering education, 93(3), 223-231.
porter, s. (2009). institutional structures and student engagement. research in higher education, 47(5), 521-558.
roediger, h. l., iii, gallo, d. a., & geraci, l. (2002). processing approaches to cognition: the impetus from the levels-of-processing framework. memory, 10, 319–332.
swan, k., & richardson, j. c. (2003). examining social presence in online courses in relation to students' perceived learning and satisfaction. journal of asynchronous learning networks, 7, 68-82.

journal of teaching and learning with technology, vol. 1, no. 1, june 2012, pp. 24–34.

the impact of technology on student perceptions of instructor comments

kathleen j. hanna and david yearwood

kathleen j. hanna, department of language and literature, dickinson state university, 291 campus drive, dickinson, nd 58601.
david yearwood, department of technology, university of north dakota, 10 cornell st., stop 7118, grand forks, nd 58202-7118.

abstract: the lack of writing skill among college graduates is often blamed on poor teaching or, alternatively, on failure on the part of schools and instructors to teach the basic grammar and punctuation skills that employers remember learning in their own school years. while it may be true that teaching techniques and course content have changed over the years, a far greater cause of student inability to write clearly may be students' negative perceptions of instructor comments. if this is indeed the case, as borne out in some earlier studies by bardine, then how might students who grew up in a digital era view electronic comments? the prevalence of technological tools to make electronic notations increases readability, but what impact might instructors' use of technology in making comments have on tone, completeness, and length of comments when viewed through the lens of the student writer?

keywords: teaching, writing, technology, teacher comments, grading

i. introduction.

a cursory search for information about faculty grading practices reveals that there is no dearth of research about instructor comments. indeed, qualitative research into this subject often produces recommendations such as making positive comments, not making so many comments that students are overwhelmed (monroe, 2002), and making sure comments are as clear as possible (fife & o'neil, 2001). other research has focused on length, tone, and type of comments (bardine, 1999), placement of comments and use of hedges (ferris, 1997; fife & o'neil, 2001), and the relative ease of online as opposed to hand-written commenting (monroe, 2002; monroe, 2003). information gleaned from these works clearly suggests that instructor comments are important tools in teaching students to write. however, advice on grading papers and making comments is used only to change a narrow aspect of the comments themselves, often without addressing the overall impact of the comments upon students. the result is that comments continue to have the same impact they have had for many years, and students' negative perceptions continue to be a problem (fife & o'neil, 2001; wiltse, 2002). what appears to be certain is that the effective utilization of instructor comments, including the use of technology to deliver those comments, could potentially change writing in the classroom and affect student writing (bardine, 1999; bardine, bardine, & deegan, 2000).
more recently, faculty, particularly those who teach online, have begun to use technological tools to make comments about students' writing, but how these comments are perceived, and the effect that the use of technology is likely to have on student perceptions of the comments made, is just one issue that warrants investigation. further, there is some concern as to what might be the long-term impact of comments made about student writing using technological tools.

a. justification for research.

the purpose of this study was to explore the relationship between the use of technology to provide comments to students and students' perceptions of these comments. studies of this nature are necessary and important in view of the current emphasis on writing across the curriculum. while it may be the responsibility of composition instructors to teach basic writing skills, instructors in all disciplines who make comments on papers will likely have an impact on student perceptions, and an awareness of that impact among teachers could be beneficial to students in every field of study. this article examines the following questions:

1. in what way or ways does placement of faculty comments (i.e., in the paper's margins, at the end of the paper, close to where there are structural or other issues associated with sections of the students' work, or on a separate page, as determined by the necessities of the use of various technologies in delivering comments) affect how the comments themselves are interpreted and perceived by students?
2. how, and to what degree, are student perceptions of faculty comments affected by the appearance of the comments, especially as determined by the use of technological tools to deliver those comments?
3. what relationships, if any, exist between the completeness of comment marks provided via computer technology, such as symbols, abbreviations (i.e., frag., tr., sp.), single words, phrases, complete sentences, and explanatory paragraphs, and student perceptions of teacher criticism?

the possibility of a relationship between the use of technology as a comment delivery system and students' perceptions of the comments received from instructors was explored in this study. an examination of student reports about the tones of comments they received is one way to explore student perceptions of those comments. the comment tones explored in this research included resigned, encouraging, positive, negative, impartial, and hostile tones.

b. theoretical framework.

an instructor's primary goal in making comments on student papers is to teach student writers to do something differently in the next draft or the next paper (wiltse, 2002). however, despite this noble goal, there do not appear to be clear and concise conclusions about how students might interpret comments made about their writing (sommers, 1982).
most of the research into instructors' comments to students seems to focus primarily on written commentary style and is based on the assumption that the problems of ineffective response stem from the way those comments are written, insofar as poor wording, vagueness, or insufficient information may apply (bardine, 1999; bardine, bardine, & deegan, 2000; fife & o'neill, 2001). however, given the availability of tools such as electronic markup and track changes, it is increasingly likely that teacher feedback will be delivered in an electronic format. while this has not been addressed in the literature, it does raise questions about the potential impact of the use of technology on students' responses to instructor comments.

placement of comments (at the end, in the margins, or near an issue to be addressed), the appearance of handwritten comments (color and legibility), and the use of typed comments (email or list-serve) (monroe, 2003) may also have an impact on how these comments are perceived by students. bardine (1999) found that end comments tended to be longer than margin comments, with 87% of the end comments being rated as average or long. this may be in part because instructors have more space to write comments at the end of the paper. would comments delivered by technology-based methods be perceived differently, though, because of their tendency to be placed at the end of the paper?

an often-overlooked aspect of instructor comments is their tone, which students often interpret far differently than the instructor intended. tone can range from positive and encouraging to negative, hostile, or resigned. for example, a comment with a positive tone would be, "good work," while an encouraging tone might be perceived in a comment that points toward future accomplishment or recognizes improvement, such as, "good start, keep working." in contrast, a comment with a resigned tone might imply a sense of futility, while one with a negative tone would be more critical, though less hopeless, in nature. for example, a comment with a negative tone might say something like, "sloppy, careless work." a hostile tone, on the other hand, is more aggressive and even personally critical, and comments perceived as hostile may sound almost like accusations, such as, "you really do not belong in this program." the important issue is not necessarily what the instructor intended (though some may indeed intend to make negative comments) but rather how the recipient perceives the tone of the comment.

finally, comments can be evaluated for completeness, which, though similar to ferris's (1997) category of length, refers not only to the actual length of comments but to how complete and effective students perceive those comments to be. the readers in lunsford and straub's (2006) study made a point of providing full comments, generally in complete sentences. in contrast, the use of symbols, abbreviations, and one-word responses can leave students uncertain about what they are being asked to do, while lengthy comments may be overwhelming. the question to consider is how the use of technology affects students' perceptions of those comments, and whether that effect is positive or negative. it could be important to examine the impact that the placement of comments has on student perceptions and anxieties, as well as how technology influences the placement of those comments.
does the typescript appearance of technology-delivered comments have any relationship to the way in which students perceive the comments? are comments delivered through the use of various technologies generally more or less complete than those delivered in other ways? these questions could be important in determining how, and to what degree, technology should be used in responding to student writing.

ii. methodology.

a. survey instrument.

the student survey was developed after examining literature from various researchers on the topic, as well as comments about common student responses that seemed to warrant investigation (bardine, 1999; bardine, bardine, & deegan, 2000; ferris, 2001; fife & o'neill, 2001; monroe, 2002; popovich & masse, 2005; wiltse, 2002). a pilot study of the instrument was conducted with a selected sample consisting of instructors from the university's language and literature department and students from a freshman composition class. instructor comments were broken into four sections: placement, appearance, tone, and completeness. placement referred to whether comments were written in the margins, close to problems associated with student writing, at the end of the paper, or on a separate sheet of paper. questions pertaining to appearance requested information about the color of the writing implement used as well as instructors' penmanship styles, including case, darkness, underlining, legibility, and the use of typed or electronic transmission. to evaluate student perceptions of the tone of comments they had received, students were asked, using a likert-type scale, how often they had received comments with tones that were, respectively, positive, encouraging, negative, impartial, hostile, or resigned. to enhance clarity, each of the questions regarding tone included a brief example, such as, "good start, keep working" as an example of encouraging tone. finally, questions about completeness asked how often students received comments in the form of symbols, abbreviations, single words, phrases, sentences, and complete paragraphs.

b. demographic information.

the population for this study consisted of college seniors at dickinson state university from the departments of business, nursing, and education, though many of the education students carried a second major in their teaching subject areas, such as history, music, or math. the majority, n = 64 (81%), were traditional students, ranging from 20 to 25 years of age. an additional 11 students (13.9%) were 26 to 30 years old, and four students (5.1%) were over 30 years of age. male students made up 27.8% (n = 22) of the students responding to the survey, while 72.2% (n = 57) were female. the majority (89.8%) of these graduating seniors were full-time students (n = 71), completing a minimum of 12 credit hours in the semester during which they were surveyed. an additional 10.2% (n = 8) were part-time students.

iii. results.

an examination of student reports of the tones of comments they received is one way to explore student perceptions of those comments. comment tones explored in this research included resigned, encouraging, positive, negative, impartial, and hostile tones, as well as comments that sounded like orders, instructions, suggestions, and questions, respectively.

a. population sample.

table 1 provides a summary of the descriptive statistics analyzed with respect to study participants.
these data include the participants' age, gender, cumulative grade point average, native country, and native language.

table 1. descriptive statistics of study participants.

category           group      frequency   percent
age                20-22          33        41.8
                   23-25          31        39.2
                   26-30          11        13.9
                   over 30         4         5.1
gender             male           22        27.8
                   female         57        72.2
gpa                1.0-1.9         1         1.3
                   2.0-2.9         9        11.4
                   3.0-3.9        63        79.7
                   >4.0            6         7.6
native country     us/can.        68        86.1
                   other          11        13.9
native language    english        69        87.3
                   other          10        12.7

research question #1: in what way or ways does the placement of faculty comments (in the paper's margins, at the end of the paper, close to structural errors or other issues associated with sections of students' work, or on a separate page, as determined by the necessities of the various technologies used to deliver comments) affect how the comments themselves are interpreted and perceived by students? comments in any of the locations studied could be delivered by technology, though some locations are more feasible than others.

in examining the data related to this question, several significant findings were discovered with regard to the relationship between the placement of the comments and the tone the students perceived in those comments. for example, a statistically significant correlation was found between comments placed at the end of the paper and encouraging tone (r = .38, p < .01). a similar correlation (r = .29, p < .05) was found between comments placed on a separate page and encouraging tone. this information is shown in table 3. not every specific element of instructor comments studied could be related to the use of technology; however, the findings with regard to comment placement are of particular interest because further statistical analysis showed a strong correlation (r = .33, p < .01) between the use of comments that were typed or electronically transmitted and the placement of comments on a separate page. this information is shown in table 2.

table 2. correlation between comment placement and use of typed or computer-generated comments.

                                       typed
end of paper     pearson correlation   .107
                 sig. (2-tailed)       .346
separate paper   pearson correlation   .331**
                 sig. (2-tailed)       .003

**. correlation is significant at the 0.01 level (2-tailed).

if computer-generated comments are placed at the end of the paper, those comments could then be shown to have a positive relationship with comments having an encouraging tone. no statistically significant correlations were found between comment placement and any other comment tones, or between typed and computer-generated comments and any other comment placement. no significant relationships were found between any aspects of demographic information, i.e., age, gender, grade point average, native country, or native language, and student perceptions of comments in various places.

research question #2: how, and to what degree, are student perceptions of faculty comments affected by the appearance of the comments, especially as determined by the use of technological tools to deliver those comments? once again, interesting findings were uncovered with respect to the relationship between the appearance of comments and the tone students reported. this is of particular interest because of the close tie between comment appearance and the use of various programs or techniques designed for commenting on student papers using computer technology.
this aspect of instructor comments is directly related to the use of technology in responding to student writing, since comments delivered using computer technology are typed, and students were specifically asked how often they received instructor comments that were typed. in this case, typed or electronically submitted comments showed a statistically significant relationship (r = .35, p < .01) with negative comment tone; this was the strongest relationship seen in this area of exploration. illegible comments, on the other hand, showed a statistically significant relationship (r = .26, p < .05) with hostile comment tone. since computer-generated comments are generally not illegible, this is an interesting, if somewhat contradictory, finding. no other aspects of comment appearance showed significant relationships with comment tone, or with typed or computer-generated comments. no significant relationships were found between any aspect of the demographic analysis and the perception of comments with different appearances. these results are shown in table 3.

research question #3: what relationships, if any, exist between the completeness of comment marks provided via computer technology, such as symbols, abbreviations (e.g., frag., tr., sp.), single words, phrases, complete sentences, and explanatory paragraphs, and student perceptions of teacher criticism? this question was not as closely tied to the issue of technology use as the previous question, but it still provided interesting results. both one-word comments (r = .23, p < .05) and paragraph-long comments (r = .28, p < .05) showed statistically significant correlations with hostile comment tone. in addition, abbreviations showed a statistically significant negative relationship (r = -.23, p < .05) with positive tone. although this research showed no significant correlations between the use of typed or computer-generated comments and the completeness of those comments, the correlations between completeness and tone are important to keep in mind, since comments of any level of completeness could be delivered via computer technology. these correlations are shown in table 3. no significant relationships were found between the various demographic analyses and the perception of the tone of comments of varying levels of completeness.

the examination of all of the correlations between the various aspects of instructor comments and the tone students reported perceiving in comments, as well as between those aspects of instructor comments and the use of typed or computer-generated comments, indicates that some degree of correlation does in fact exist between specific aspects of instructor comments and the use of technology to deliver instructor comments, as well as between those specific aspects and the tone perceived in the comments. those correlations, however, vary and are limited to the specific aspects identified. the implications of the findings will be explored in greater detail later.

table 3. correlations between various aspects of teacher comments and perceived comment tone.

                                      encouraging   negative   hostile   positive
end of paper    pearson correlation      .38**        .04        .03       .15
                sig. (2-tailed)          .00          .75        .78       .20
separate paper  pearson correlation      .29*         .11        .13       .05
                sig. (2-tailed)          .01          .33        .26       .64
typed           pearson correlation      .02          .35**      .07       .14
                sig. (2-tailed)          .88          .00        .53       .23
illegible       pearson correlation      .17          .21        .26*      .03
                sig. (2-tailed)          .13          .06        .02       .81
abbreviation    pearson correlation     -.11          .17       -.07      -.23*
                sig. (2-tailed)          .32          .14        .57       .05
one word        pearson correlation     -.01          .20        .23*     -.14
                sig. (2-tailed)          .94          .08        .04       .23
paragraphs      pearson correlation      .19          .04        .28*      .22
                sig. (2-tailed)          .09          .70        .01       .06

**. correlation is significant at the 0.01 level (2-tailed).
*. correlation is significant at the 0.05 level (2-tailed).
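the correlational analyses summarized in tables 2 and 3 are straightforward to reproduce in principle. the sketch below is a minimal illustration, not the study's actual analysis or data: it applies scipy's pearsonr to invented likert-style responses (1 = never through 5 = very often), with hypothetical variable names, to show how an r value and two-tailed p value of the kind reported above are obtained.

# minimal sketch of the kind of correlation analysis reported in tables 2 and 3.
# the responses below are invented likert-style ratings (1 = never ... 5 = very often),
# not the study's data; pearsonr returns the correlation and a two-tailed p value.
from scipy.stats import pearsonr

# how often each hypothetical student reported receiving typed comments
typed = [1, 2, 5, 4, 3, 5, 2, 1, 4, 5]
# how often the same students perceived a negative tone in their comments
negative_tone = [2, 2, 4, 5, 3, 4, 1, 2, 4, 5]

r, p = pearsonr(typed, negative_tone)
print(f"pearson r = {r:.2f}, p = {p:.3f} (2-tailed)")
# a result such as r = .35 with p < .01 would be read, as in table 3, as a
# statistically significant association between typed delivery and perceived
# negative tone.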
iv. discussion.

although research about writing and about instructor comments, as separate issues, is plentiful, little attention has been given to the relationship between the use of technology in delivering instructor comments and its impact on student perceptions. the results of this study provided some significant findings in this regard. however, before any changes to commenting practices can be addressed, it may be necessary to examine the nature of the relationship between the technology used in instructor comments and students' perceptions of those comments.

in this study, comments made by faculty on students' papers appeared to be perceived as having or conveying certain tones, e.g., positive or hostile, none of which might be intended, but which nonetheless must be considered in the evaluation of students' responses. examples of generally positive tones might include statements like, "well done," or alternatively, an error being pointed out in a positive way: "your punctuation is generally very good, but this comma can be deleted." an encouraging tone could be similarly demonstrated, where an instructor might point out an error but then encourage the student by saying, "this is a good start. keep working." comments that were perceived as negative or hostile are also worth noting. a negative comment might be one that indicates a negative perception on the part of the instructor, like, "this is immature and undeveloped." comments with a hostile tone, on the other hand, might include phrases such as, "you really do not belong in this program."

the first area of instructor comments researched was that of comment placement. placements considered included in the margins of students' papers, near structural or other issues that warranted the comments, at the end of papers, and on separate pieces of paper. the impact of comment placement is particularly interesting because of the correlation (r = .33, p < .01) found between typed or electronically transmitted comments and comments placed on a separate page. keeping in mind that this study examined the impact of the use of technology in delivering comments, the finding that comments placed on a separate page are frequently typed provides a link between the use of technology and the perceived tone of the comments. why do students perceive comments placed at the end of the paper as having a positive tone and comments placed on a separate piece of paper as having an encouraging tone, as discovered in this study?
perhaps this distancing of comments from a particular section of the paper that needs further work or attention is viewed as less threatening, which may cause those comments to be perceived by students as less judgmental or attacking and thus more encouraging of their work. in addition, although the distance between the student's writing and the comment may in itself be a factor, it is also possible that teachers unintentionally write a different type of comment at the end of the paper because they are conscious of addressing the quality of the paper as a whole. regardless of the reason for students' perceptions, the lesson may be that the placement of comments, combined with an awareness of the need for a positive tone, can help increase the beneficial aspects of teacher comments overall. these findings support the conclusions of elbow (1989), who suggested writing comments separately, in letter form, in order to have those comments be perceived in a less threatening manner by students. the results of this study are therefore encouraging for those who provide computer-generated comments on a separate piece of paper.

instructor penmanship styles, including the use of typed or electronically transmitted comments, as well as underlined, uppercase, or lowercase lettering, were also investigated. only typed or electronically transmitted comments were found to be strongly related to negative comment tone (r = .35, p < .01). this raises questions about online classes, where nearly all communication between instructor and student is typed or electronically transmitted. interestingly, in this study, illegible comments showed a correlation (r = .26, p < .05) with comments having a hostile tone and appeared to be generally perceived as having a hostile rather than positive tone. in fact, the only other aspects of comment appearance examined were those such as color, darkness, and handwriting versus hand-printing, none of which showed significant correlations with tone, and none of which would be influenced by the use of computer technology to deliver the comments.

another issue that was not addressed by this study was the impact of technology-delivered comments made using computer writing implements, such as a pen mouse for handwritten comments. those comments might, depending on the instructor, be either more or less legible than hand-written comments due to factors related to screen rendering. do students react to the varying range of legibility in such cases, or are these comments considered separately based on the delivery method? this is a topic that may require further research.

in order for instructors to successfully convey a positive or encouraging tone, there are several steps that might be taken. since both typed and illegible comments seem to be negatively perceived, the use of carefully handwritten comments that are legible to students might be helpful. an alternative would be to focus more intensely on wording, in order to overcome the negative impact of either typed or illegible comments. among the aspects of comment appearance that could be connected to the use of technology is readability, which might be worth exploring, because illegible comments may simply be difficult for students to read, leading to frustration, confusion, and a final impression of hostility.
there are a number of possible explanations, aside from innate penmanship styles, for the illegibility of instructor comments. these could include a combination of grading fatigue and physical fatigue of the hand muscles, as well as haste, lack of time, overwork, insufficient attention to detail, or general indifference. the link with technology arises because the solution for many instructors may be typing their comments. however, from a student's perspective, those typed comments may seem negative, though not hostile. the primary suggestion for instructors that can be gleaned from this study of penmanship styles is that comments need to be legible; but if they are typed, even at the end of the paper or on a separate page, care must be taken with the wording and intended tone to be sure that the impact of the typed appearance does not overwhelm any positive tone attached to the placement of the comments.

the third aspect of instructor comments explored was comment completeness. interpreting the findings of this research project with regard to the use of technology to deliver instructor comments was more difficult and complex than interpreting the findings related to comment appearance or placement, because comments of any level of completeness could be provided either by hand or via technology. however, the use of comments written as paragraphs is perhaps most easily tied to the use of computer technology in delivering comments, and responses provided in paragraph form were related to student perceptions of hostile comment tone. at a time when instructors are urged to provide longer, more detailed comments by such noted experts as elbow (1989), bardine (1999), ferris (1997), and lunsford and straub (2006), the findings in this study raise questions about whether such lengthy comments are actually beneficial to students; these findings suggest that they are not. the fact that longer comments, such as paragraphs, might more often be provided through the use of computer technology, because it is physically easier for many people to type a paragraph than to write one, is a critical element in this examination of the impact of the use of technology in responding to student writing. still, both abbreviations and one-word comments could also be provided by technological methods, using one of the several computer programs available for this purpose, and those showed relationships with much more positive comment tones. however, it is important to make sure students understand the abbreviations. since students perceived comments presented as symbols, single words, and paragraphs negatively, the use of technology could further add to their negative response. comments provided in typed or computer-generated form also showed a correlation (r = .35, p < .01) with negative comment tone, and it is possible that the combination is viewed in an even more negative light. instructors who use technological comment delivery systems might do well to carefully monitor the wording and tone of the comments they make on student papers, especially when using abbreviations such as "frag.," "sp.," or "tr.," when using single words like "awkward," "vague," or even simply "good," or when providing full paragraphs.

v. conclusion.
regardless of the root cause of students' sometimes negative perceptions of instructor comments, if instructors can begin to use commenting techniques that are neutral, if not positive, they may be able to improve student perceptions, at the very least. in fact, minimal use of those aspects of instructor comments that showed a connection with negative student perceptions, including the use of technology to provide comments on a separate page, might work to actually decrease negative student perceptions. for many years, instructors at all levels have discussed ways to respond to student writing, looking for the most helpful and effective ways to do so. responding to student writing using one of the numerous computer programs designed for the task has been discussed, and much more research remains to be conducted. however, without careful attention to the impact of the various aspects of written comments on student writing apprehension, this coordinated effort cannot reach its full potential in helping students become less apprehensive about writing.

references

bardine, b. (1999, april/may). students' perceptions of written teacher comments: what do they say about how we respond to them? high school journal, 82(4), 239.
bardine, b. a., bardine, m. s., & deegan, e. f. (2000). beyond the red pen: clarifying our role in the response process. the english journal, 90(1), 94-101.
elbow, p., & belanoff, p. (1989). sharing and responding. new york: random.
ferris, d. (1997). the influence of teacher commentary on student revision. tesol quarterly, 31, 315-339.
fife, j. m., & o'neill, p. (2001, december). moving beyond written comment: narrowing the gap between response practice and research. college composition and communication, 53, 300-321.
lunsford, r. f., & straub, r. (2006). twelve readers reading: a survey of contemporary teachers' commenting strategies. in r. straub (ed.), key works on teacher response (pp. 157-189). portsmouth, nh: boynton/cook publishers.
monroe, b. (2002, september). feedback: where it's at is where it's at. the english journal, 92(1), 102-104.
monroe, b. (2003, january). how e-mail can give you back your life. the english journal, 92(3), 116-118.
popovich, m. n., & masse, m. h. (2005, summer). individual assessment of media writing student attitudes: recasting the mass communication writing apprehension measure. journalism and mass communication quarterly, 82, 339-355.
sommers, n. (1982, may). responding to student writing. college composition and communication, 33(2), 148-156.
wiltse, e. m. (2002, summer). correlates of college students' use of instructors' comments. journalism and mass communication educator, 57(2), 126-138.
wiltse, e. m. (2006, summer). using writing to predict students' choices of majors. journalism and mass communication educator, 61(2), 179-194.

journal of teaching and learning with technology, vol. 10, special issue, pp. 80-87. doi: 10.14434/jotlt.v9i2.31409

residence to online: collaboration during the pandemic

jacqueline l. cahill, air university, jacqueline.cahill@au.af.edu
kristopher j. kripchak, air university
gaylon l. mcalpine, air university

abstract: when 2019 coronavirus disease (covid-19) arrived with a vengeance, face-to-face colleges scrambled to brainstorm and problem-solve how best to deliver their curricula in a physically safe manner to complete the semester.
at air university, the intellectual and leadership development center of the air force, the eschool of graduate professional military education (eschool) is the online graduate college, which offers squadron officer school (sos), air command and staff college (acsc), air war college (awc), and the online master's program (olmp). sos, acsc, and awc all have residence colleges as well. at the fully in-residence graduate college, air command and staff college, adult learners, who are airmen, geographically relocate to attend the college. instruction there has always been fully face-to-face, so the college did not have an online curriculum, nor were its professors trained to teach online effectively. to best meet the needs of in-residence acsc students, the eschool was asked to help. this is when brainstorming sessions started on how to pivot instruction during the pandemic, followed by the sharing of resources, expertise, and faculty training. as a result, acsc in-residence students received the second half of their semester courseware online in a form that followed significantly more best practices than if a collaboration of the online and residence colleges had not occurred. perhaps there was a silver lining in the pandemic that may bring about additional educational options in the future.

keywords: pandemic teaching, professional military education, online learning, online teaching, collaboration, teamwork, residence to online, adult learners

introduction

in march 2020, air command and staff college (acsc), the united states air force's intermediate developmental education (ide) graduate college, was facing the real possibility that air university's in-residence schools would suspend classes due to 2019 coronavirus disease (covid-19). this is the same concern that overcame most (if not all) in-residence civilian universities. air university, a major component of the air force's air education and training command and lead agent for air force education, is headquartered at maxwell air force base in montgomery, alabama. air university was beginning to implement base-wide health protection measures and was signaling to its co-located residence schools to plan for the possible shutdown of their academic operations. however, with only one core course remaining in the academic year for these students, and with over 500 learners having already spent seven months away from their operational jobs and duty stations, finishing the 2020 academic year on time was necessary. the situation was much the same at civilian universities, where students were already three-fourths of the way through their academic year. as faculty conversed about various options, a recommendation was made to leverage the distance learning faculty expertise and distance learning acsc curriculum from the eschool of graduate professional military education (eschool) to finish the semester from a distance.

the eschool and air force officer professional military education

the eschool is the only non-resident arm in the dynamic system that is air force officer career education. this would be similar to a civilian graduate school having one college focused on the online learning modality.
as a military institution with an academic mission, the officer professional military education (opme) institutions of squadron officer school (sos), air command and staff college (acsc), and air war college (awc) are the keystones of maxwell air force base's 'academic circle.' selected officers from across the services (and eligible federal government civilian equivalents and international officers) attend in residence to receive their primary (sos), intermediate (acsc), and senior-level (awc) opme. students whose lifestyles do not support relocating to maxwell air force base for the required time, or those who were not selected to attend in residence, earn their opme via distance learning by enrolling in the respective eschool program (sos, acsc, awc, or the master's degree). the eschool teaches approximately 13,000 globally dispersed learners per year. the programs are designed to meet opme requirements as established by the air force and the chairman of the joint chiefs of staff and to foster lifelong learning habits to support the profession of arms. acsc, both in-residence and distance learning, is joint professional military education (jpme) phase i accredited, and air university as a whole (similar to civilian universities) is regionally accredited through the southern association of colleges and schools commission on colleges (sacscoc). the eschool serves predominantly captains through colonels and federal government civilian equivalents.

the online master's program (olmp), one of four eschool programs, is what we focused on in this situation. it is designed to produce more effective air force majors and lieutenant colonels serving in operational-level command or staff positions. it covers topics such as contemporary air and space force operations, national security, leadership, and joint warfare. the program consists of asynchronous, instructor-facilitated courses that are each eight weeks in length and take approximately 18 months to complete. this is a common setup for an online master's degree at civilian universities, where courses are eight weeks long and students focus on taking two courses at a time. upon successful completion of the program, learners are awarded a master of military operational art and science (mmos) degree. the olmp is offered with an option of four concentrations, although some are options only if the learner is a prior graduate of two other air force education programs. the four concentrations are: (a) the joint warfare concentration (awards both the mmos degree and jpme phase i credit), (b) the leadership concentration (awards the mmos degree and is for captains only), (c) the operational warfare concentration (awards the mmos degree and is available only to air force weapons instructor course graduates), and (d) the nuclear weapons concentration (awards the mmos degree and is available only to air force nuclear weapons effects, policy, and proliferation certificate program graduates). this is a similar concept to civilian universities, where graduate degrees are offered with an emphasis or a minor.

air command and staff college

the acsc resident curriculum is a 10-month graduate-level program taught through small group seminars and engaging lectures. the curriculum covers topics that include the profession of arms, war theory, leadership and ethics, joint warfighting, airpower, and the international security environment.
additionally, learners have the opportunity to conduct research and participate in elective courses that explore different topics relevant to the nation's defense. learners who successfully complete the acsc resident program are awarded jpme phase i credit and an mmos degree.

description

with in-residence acsc's decision to seek assistance from the eschool, the respective leadership of both colleges met within a day to discuss the scope of the challenge and the most effective and efficient way to proceed. this would be similar, at a civilian university, to the deans and leadership teams of two colleges, one focused on online learning, meeting to discuss the same concept. gaining the eschool's agreement to assist was immediate, with the first discussions and the brainstorming sessions that followed being fairly straightforward and productive given the already established close relationships among some of the individuals of both colleges. such collaboration at the start was straightforward, as both colleges reported to the acsc commander (i.e., the dean's supervisor at a civilian college), and both of the intermediate developmental education programs share similarities, since the programs must meet the same opme requirements. this would be similar, at a civilian university, to having degrees or courses with similar program outcomes, one taught online and another taught face-to-face. in addition, some eschool professors teach courses for in-residence acsc, so they are familiar with the in-residence curriculum and were accustomed to working collaboratively with the faculty. seeing this in action supported the importance of soft skills. emotional intelligence, which includes teamwork, is incredibly important for instructors and adult learners to develop or improve (majeski, stover, valais, & ronch, 2017). these pre-established relationships proved critical when addressing the challenge before them and bridged the connections for those who had not yet met.

the team

as with any team tackling a problem, the members, with their different skillsets, attitudes, backgrounds, experiences, and motivations, had a strong impact on team dynamics and overall effectiveness. in this situation, a positive team dynamic was established almost immediately given the trust and sense of accountability many members already had for each other due to established working relationships and the mutual respect each had for the other's organization given their shared missions. in addition, two to three eschool professors volunteer every year to teach this last in-residence course for acsc. as part of their preparations, the eschool faculty members participate in, and sometimes lead, resident faculty development sessions. this experience built mutual trust and knowledge in the content and delivery of similarly themed courses in both institutions. at a civilian university, there are various methods to cross-collaborate across colleges or modalities, but someone usually needs to initiate it, as it is rarely required. the shared experience and comradery made for a solution-focused team that trusted the expertise each brought to bear on the problem. furthermore, both acsc's and the eschool's deans included their respective personnel with the best knowledge, skills, and experience in the initial discussions on determining the scope of the effort.
this inclusive approach created an open environment for everyone's input from the beginning of brainstorming, and it resulted in inherent buy-in for the selected solution, as those who would execute it were the ones responsible for its development. from the eschool, these included course directors (i.e., subject matter experts) who were responsible for eschool courses that addressed similar outcomes and covered similar materials as the course remaining for acsc resident faculty to deliver, a curriculum designer, learning management system (lms) administrators, and leadership. from acsc, this included the department chair, the course director (subject matter expert), the deputy course director for the last resident course, learning management system administrators, and leadership.

the learning challenge

the eschool and resident acsc faculty and staff met several times over the course of three days to brainstorm potential courses of action for online delivery of the last resident course. the first possibility considered was using air university's microsoft teams account to deliver the resident course as it was designed: synchronously, but online instead of in person. this is an option that many civilian universities incorporated. however, accounts were not fully provisioned at that time, so using it was not viable. next, the team considered the synchronous delivery of the resident curriculum using video conferencing tools such as zoom. the idea was still to replicate some sense of the in-person experience that both learners and instructors were comfortable with overall. however, with so many learners quarantined at home with their families (with childcare being a key concern given the closing of daycares due to the pandemic), synchronous options quickly fell out of favor (although early on during execution, some faculty did add optional synchronous sessions via zoom to facilitate group discussion). in the end, these early brainstorming sessions resulted in all parties agreeing that they could not simply move the last resident course online. not only did the available technology not support such delivery, but the impact on the learning experience due to the stressful life situations thrust upon their learners was a serious concern. therefore, it was determined that leveraging the asynchronous design inherent in the eschool's olmp program was the best course of action. online education increases the opportunity for more frequent interactions among students and with the professor, albeit less intense interactions than face-to-face learning (holley, 2017). given covid-19 risks, this approach still allowed interactions to occur between student and instructor and among students, in line with best practices. in making this decision, the question that needed to be answered before any further progress could take place was which olmp course or courses would best serve as a replacement for the resident course, which was six credit hours delivered over 10 weeks. two olmp courses providing six hours of credit each, and together covering similar concepts and objectives as the resident course, were immediately considered. the first option explored included the possibility of having the learners take both of these olmp courses, but on an accelerated timeline, to meet all six credit hours of material covered in the resident course.
however, after discussing the merits of such a plan, concern grew again over how effective an accelerated approach would be, given that learners would be taking the courses while hastily adjusting to life in the 'new normal' of being at home (often with family members who were also attending school and working from home) and learning online instead of what they were accustomed to, which was face-to-face learning. research, reflection, and evaluation support the finding that first-time online students need scaffolding (ainsa, 2017). with concern about the effectiveness of the learning and the wellbeing of the learners, this option was rejected. furthermore, there was a concern for the resident faculty: they were not trained to facilitate online learning, nor were they familiar with the structure or curriculum of the olmp courses. as such, the idea of having them teach an accelerated curriculum starting immediately and without any prior experience was clearly problematic.

our solution

as a result of these brainstorming sessions, a second option coalesced that addressed these concerns: (1) have learners complete one of the olmp courses as designed, with a few minor assignment modifications to more closely replicate the in-residence experience to which they were accustomed; (2) address key concepts required by the resident course that were not covered in the olmp course by having learners conduct a self-study on select course material from the resident course and write short essay responses to prompts that would otherwise have driven in-person discussion; and (3) have learners develop, and be evaluated on, small group case study presentations from the resident course, with individual contributions weighted more heavily than in the residence class.

the first action was coordinating the move of a copy of the olmp course from the eschool's lms sub-account to the acsc resident sub-account. these are details that lms administrators manage at a civilian university as well. on successful transfer, the eschool's course designer and lms administrators collaborated with the acsc lms administrators to configure and ensure the quality of the transferred course, which included making it a template course that would be used to create each of the 36 course seminars (a sketch of this step appears below). as acsc predominantly uses the lms only for course content sharing, message/announcement boards, and the gradebook, this collaboration was critical so that no time needed to be spent training the resident course lms administrators on features not used in resident courses. after all setup actions were complete, the resident course director and deputy director were able to share the course and discuss its structure and curriculum with their course development team. this led to a number of discussions with the eschool's course designer and its course director on what was possible in regard to quickly and easily adjusting the course without impacting the curriculum so much that it would ruin the design. in accordance with the previously published resident class schedule, learners would spend the first nine days of the course doing the self-study reading assignments and submitting their daily written responses to lesson prompts (designed by the resident course team) via email to their instructor.
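the template-copy step referenced above is routine lms administration, but the article does not name the lms or any api, so the following is only a hedged sketch. assuming a canvas-style rest api (the 'sub-account' and 'template course' vocabulary is consistent with canvas, though that is an assumption), and with a hypothetical domain, token, account id, and course id, bulk-creating the 36 seminar sections from one template might look roughly like this:

# hedged sketch: creating 36 seminar sections from one template course.
# assumes a canvas-style rest api; the base url, token, account id, and
# template course id are hypothetical placeholders, not values from the article.
import requests

BASE = "https://lms.example.edu/api/v1"            # hypothetical lms domain
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical api token
ACCOUNT_ID = 101        # resident college's lms sub-account (hypothetical)
TEMPLATE_COURSE = 555   # the configured template course (hypothetical)

for seminar in range(1, 37):  # 36 course seminars, as in the article
    # create an empty course shell in the resident sub-account
    shell = requests.post(
        f"{BASE}/accounts/{ACCOUNT_ID}/courses",
        headers=HEADERS,
        json={"course": {"name": f"replacement course - seminar {seminar:02d}"}},
    ).json()
    # copy the template's content into the new shell
    requests.post(
        f"{BASE}/courses/{shell['id']}/content_migrations",
        headers=HEADERS,
        json={
            "migration_type": "course_copy_importer",
            "settings": {"source_course_id": TEMPLATE_COURSE},
        },
    )

whatever the actual platform, the design point is the one the article makes: configure and quality-check a single template, then stamp out the seminar sections from it rather than building each one by hand.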
other changes to the olmp course included removing the requirement to submit assignments through a student-work similarity checking tool and adjusting assignment due dates to accommodate the inclusion of the resident course case study assignments, as both course calendars overlapped for certain assignments. in addition, due-date adjustments were incorporated to give faculty inexperienced with facilitating online learning some welcome breathing room so that they could adapt their teaching practices to the new environment. the end result was a curriculum that met the intended outcomes of the resident course, was approximately the same length, and could be delivered 100% asynchronously online.

faculty development and support

to prepare the resident faculty for teaching the olmp course, eschool personnel developed a multipronged training and support approach. this included conducting synchronous faculty development webinars prior to execution; distributing weekly guidance, best practices, and recommendations during execution; and providing just-in-time advice during execution, as requested. as they graded and provided daily feedback on the learners' self-study submissions, the first nine days of the course also served as the learning space during which the resident faculty, who had previously prepared to teach the resident course, would spend time reading and reviewing the lesson materials for the olmp course they were now going to teach.

unfortunately, there was not enough time for the resident faculty to take the eschool's seven-week instructor orientation and certification course (eioc), which would have prepared them to be facilitators of online learning and familiarized them with eschool courseware. this is a similar practice at many civilian universities that offer online courses or degrees: new online instructors must take an online faculty development course to learn best practices in teaching online and to practice the requirements of that particular university as both a student and an instructor. in place of the eioc, and given the restrictions on gathering in groups due to the pandemic, the olmp course director and a curriculum designer conducted three two-hour webinars via adobe connect. these webinars, attended by all 36 resident faculty, the resident course director, and the deputy course director, provided familiarization with, and lesson-by-lesson 'how to' training on, the olmp course they would teach. in addition, materials were extracted from the eioc course and made available to the resident faculty via a module in the course that only they could access. this resource included guidance and best practices on facilitating online discussions, establishing and maintaining presence, making course announcements, providing effective feedback, and grading with a rubric.

for management and communication, the resident faculty were divided into three teams of 10 to 12 personnel. the eschool had the three faculty who were also assigned to teach the resident course spread across those teams in order to more easily answer any resident faculty questions or requests for assistance that might arise as they taught the course. additionally, the eschool provided each team with two (six total) of its most experienced contract instructors, who could provide additional faculty support and advice on the details of olmp course delivery.
moreover, the olmp course director conducted weekly 'just-in-time' faculty development for the current and following week's lessons, provided copies of the announcements he used in his own course seminar, and made himself available for any direct questions on a broader basis. one form of beneficial professional development for new online instructors is to have an experienced instructor directly support the new instructor (holland, sherman, & harris, 2018).

evaluation

one might expect that hastily shifting from a planned resident graduate course to an online graduate course in a quarantine environment during the early months of a global pandemic would negatively impact faculty and learner surveys of their experience. however, even with these challenges, both the faculty and learners generally gave positive feedback on the curriculum, and learners praised faculty members' efforts. both also offered thoughtful and constructive comments for consideration in future courses. in addition, many expressed great appreciation for the asynchronous approach, as it allowed them maximum flexibility in supporting their individual and family efforts to adjust to the 'new normal.' consistent predictors of academic success in online courses are high self-efficacy and positive self-regulatory behaviors (bradley, browne, & kelley, 2017). the most significant takeaways from this experience that are applicable to a military or civilian university setting include the following:

● previously established collaboration built by the eschool faculty teaching the resident course facilitated a high degree of trust between the resident and eschool team members.
● communicating clear expectations to learners and faculty early and often eased tensions and calmed concerns.
● it is helpful to have the online course team share best practices and offer prewritten course announcements to lessen the learning curve for faculty inexperienced in online delivery.
● continuing to support the faculty for the duration of the event and holding frequent faculty development sessions were worth the investment.
● to ensure inclusion of a potentially diverse set of students, be prepared to support non-native speakers who now find themselves in an environment that relies heavily on written communication.
● give resident faculty the opportunity to see what they will experience, guided by a seasoned online instructor and/or a course subject matter expert/developer and designer.
● establish and maintain resident and online college relationships, from the senior leadership level to the faculty level.
● include instructional designers and education technologists early in the discussion. it is important to know beforehand what is technologically possible, what resources are already available, and what could be quickly built as you are brainstorming possible solutions, not after decisions are finalized.

conclusion

the cliché 'cooperate and graduate' is frequently heard in professional military education institutions. although there are different ways to interpret that phrase, in our experience, cooperation between and among students, faculty, and support staff is what makes the positive impact. this was especially evident in the 'emergency' change of plans required to address the sudden quarantine of in-residence academics at air university.
while the situation created an environment where many faculty and staff had to leave their comfort zones to perform duties and tasks they normally would not do, no one involved ever stated that what was thrust upon him or her was 'not their job.' in addition, the positive impact of the pre-existing relationships that the in-residence and online colleges had with one another, from the senior leadership level down, cannot be overstated. these relationships, both personal and professional, drove highly productive and efficient brainstorming sessions that made the pivoting of instruction during the pandemic and the sharing of resources and expertise seem completely natural. furthermore, constant open communication, agile thinking, and a willingness to adapt as the situation unfolded allowed the team not only to resolve and implement solutions but also to make just-in-time adjustments to execution when they were required. as a result, acsc in-residence students were able to complete their master's degrees online in a manner that followed significantly more best practices than if collaboration between the online and residence colleges had not occurred. most importantly, the students recognized how important their learning experience was to the faculty, which is a testament to the professionalism and determination of all of those who were involved.

epilogue

the unforeseen canceling of resident instruction in spring 2020, due to covid-19, necessitated collaborative efforts between the resident college, air command and staff college (acsc), and the online college, the eschool of graduate professional military education (eschool), at air university. given the requirements and the swift execution of our efforts, we collectively traveled united along uncharted paths during our collaboration. at that time, it was unknown whether our shared efforts would benefit the greater air university going forward. other resident colleges, possibly encouraged and informed by our example, shed trepidations about moving some of their curriculum online and, despite an enduring pandemic, are continuing to support their students from a distance. in fact, shortly after the acsc effort was underway, air university's six-week resident squadron officer school (sos) pursued asynchronous, remote delivery of its resident program for summer 2020. just as with acsc, the eschool assisted sos in developing a plan and preparing its instructors to facilitate eschool courses. not only did this afford the opportunity to apply lessons learned from the acsc effort while still fresh in our minds, but this time, sos resident instructors took the eschool's instructor orientation and certification course (eioc) as part of their preparations, which provided additional lessons learned. it is our sincerest hope that the covid-19 experience not only raises a flag signaling how we must be proactive in preparing for disruptive events in the future, but also illuminates the art of the possible when high-functioning teams come together with a common purpose: to do what is best for their students.

acknowledgements

the authors would like to thank dr. christopher weimar and lt col travis eastbourne at the air command and staff college, as well as dr. jay varuolo and mr. mark burge at the eschool, who were instrumental in making this effort a success.

references
ainsa, t. (2017). sos: observation, intervention, and scaffolding towards successful online students. education, 138(1), 1. https://www.ingentaconnect.com/content/prin/ed/2017/00000138/00000001/art00001
bradley, r. l., browne, b. l., & kelley, h. m. (2017). examining the influence of self-efficacy and self-regulation in online learning. college student journal, 51(4), 518. https://eric.ed.gov/?id=ej1162424
holland, t., sherman, s. b., & harris, s. (2018). paired teaching: a professional development model for adopting evidence-based practices. college teaching, 66(3), 148. https://doi.org/10.1080/87567555.2018.1463505
holley, r. p. (2017). thoughts on online teaching with a focus on management. journal of library administration, 57(3), 367. https://doi.org/10.1080/01930826.2017.1288966
majeski, r. a., stover, m., valais, t., & ronch, j. (2017). fostering emotional intelligence in online higher education courses. adult learning, 28(4), 135. https://doi.org/10.1177/1045159517726873

journal of teaching and learning with technology, vol. 1, no. 1, june 2012, pp. 1-23.

electronic feedback or handwritten feedback: what do undergraduate students prefer and why?

ni chang1, a. bruce watson2, michelle a. bakerson3, emily e. williams4, frank x. mcgoron5, and bruce spitzer6

abstract: giving feedback on students' assignments is by no means new to faculty. yet, when it comes to handwritten feedback delivered in person and typed feedback delivered electronically, faculty may not know which undergraduate students prefer or the reasons behind their preferences. the present study explored which form of feedback, i.e., electronic or handwritten, undergraduate students preferred and the rationale behind their preferences. two hundred fifty respondents completed an online survey, which consisted of three closed-ended questions and two open-ended questions. nonparametric tests were used to analyze the quantitative data. qualitative responses were read and analyzed by four researchers, and six themes were identified. the qualitative data were then rechecked against the six themes, independently at first and then collectively; discrepancies were discussed until complete consensus was reached. the study found that nearly 70% of the participants preferred e-feedback for its accessibility, timeliness, and legibility. yet, with respect to the quality of feedback, the majority of handwritten supporters chose handwritten feedback because they perceived this type of feedback as more personal. the article discusses the marked discrepancies between the two groups and ends with educational implications and suggestions for future research.

keywords: feedback, electronic feedback, handwritten feedback, teaching and learning, instructors, students

i. introduction.

feedback is important to student learning (case, 2007; ferguson, 2011; krause & stark, 2010) and a basis for supporting and regulating the learning process (ifenthaler, 2010), regardless of who students are and where they are from, and regardless of what form instructors choose for providing feedback on students' assignments, be it electronic or handwritten. quality feedback should work as a guiding light, promoting student learning (chang, 2011).
author affiliations: ni chang, department of elementary education, indiana university south bend, 1700 mishawaka ave., south bend, in 46634, nchang@iusb.edu; a. bruce watson, department of professional educational services, indiana university south bend, watsonbr@iusb.edu; michelle a. bakerson, department of secondary education and foundations of education, indiana university south bend, mbakerso@iusb.edu; emily e. williams, department of professional educational services, indiana university south bend, williaee@iusb.edu; frank x. mcgoron, department of elementary education, indiana university south bend, fmcgoron@iusb.edu; bruce spitzer, department of secondary education and foundations of education, indiana university south bend, baspitze@iusb.edu.

krause and stark sampled 2,137 university students and found that individual learning with feedback had significant effects on student learning. increasingly, students are demanding feedback from their instructors (siew, 2003). yet students' perceptions of different forms of feedback are sometimes inconsistent and contradictory (krause & stark, 2010). the main objective of this study, therefore, was to examine which form of feedback undergraduate students preferred, handwritten or electronic, and to understand the underlying reasons for these preferences.

ii. theoretical framework.

a. indifference to feedback.

some instructors spend time providing feedback directly on hardcopies of students' assignments (handwritten feedback), while others use a keyboard and send feedback electronically to students (electronic feedback). the national union of students (nus) survey (2008) reported that 85% of respondents did receive written comments. however, winter and dye (2004) found that, despite the time and work instructors exerted to offer students feedback, some students did not even collect it (wojtas, 1998, in higgins, hartley, & skelton, 2001). sinclair and cleland (2007), in a survey study with undergraduate medical students, similarly found that fewer than half of the students bothered to collect feedback when given a choice. other students simply glanced at their grades before slipping their assignments into backpacks (wojtas, 1998, in higgins, hartley, & skelton, 2001). wojtas (1998) further noted, "some students threw away the feedback if they disliked the grade, while others seemed concerned only with the final result and did not collect their marked work" (in higgins, hartley, & skelton, 2001, p. 270). still others explained that they did not appreciate feedback returned to them late (winter & dye, 2004).

b. discontent with feedback.

discontent among students with the quality of instructors' feedback was commonly noted in the nus survey (2008) and by the quality assurance agency for higher education (2007). after surveying 465 graduate students and 101 undergraduate students at a major australian university, ferguson (2011) substantiated that feedback failed to play its expected role. price, handley, millar, and o'donovan (2010) had a similar observation. students felt feedback given on assignments was often vague and ambiguous, making it hard to follow.
additionally, students complained that feedback was overly negative and not useful to them, which may be one reason students were less likely to act on feedback to improve their subsequent work. students seemed to think that instructors were not willing to spend time writing helpful feedback and did not seem to care about student learning (price et al., 2010). in all, 90% of students at fourteen australian universities described the feedback they were getting as insufficient (scott, 2006).

c. expected feedback.

to improve their learning, students want useful, high-quality feedback. with the promise of such feedback, students would be happy to wait, even if it took a little longer (ferguson, 2011). research indicates that students attach greater importance to the quality and detail of feedback than to its timing, even though timeliness is continually described as an important component of effective feedback in any form (bai & smith, 2010; bridge & appleyard, 2008; denton, madden, roberts, & rowe, 2008; price et al., 2010; scott, 2006). with the growing demand for online course delivery, more instructors are offering electronic feedback. the timeliness of electronic feedback has been found helpful to students' learning (dickinson, 1992; seliem & ahmed, 2009). electronic feedback also encourages students to be responsible for their own assignments, facilitates collaboration, and increases student participation (seliem & ahmed, 2009). it also allows an instructor to review, clarify (chang, 2011), and "tone down criticism" in feedback (dickinson, 1992, p. 6). feedback is one of the imperative factors affecting students' perceptions of course quality (yang & durrington, 2010). yet some students distrust the receipt system when feedback is delivered electronically (bridge & appleyard, 2008). studies have reported some students' antipathy toward electronic feedback (ferguson, 2011; scott, 2006). one disadvantage of e-submission is a lack of social interaction; it lacks a personal touch. since learning remains a profoundly social experience (scott, 2006), students expressed their hunger for more opportunities to have a dialogue with instructors (price et al., 2010). some research has found that handwritten feedback is personal (morgan & toledo, 2006). others (denton et al., 2008; ferguson, 2011; price et al., 2010) have reported that handwritten feedback is difficult for students to read due to illegible writing. students may not perceive handwritten feedback as part of a process that would help them improve their performance (dickinson, 1992). interactive face-to-face communication, it is felt, would help clear up students' concerns and offer reassurance. nonetheless, nus (2008) found that only 25% of respondents set up individual meetings with instructors, because setting up face-to-face meetings "was dependent on a good relationship with the tutor; such good relationships where they felt comfortable to go and ask for verbal feedback" (nus, 2008, p. 31). this may indicate not that students intentionally avoided individual meetings, but that they did not feel they had good enough relationships with their instructors.
one overlooked aspect in defining feedback is its feed-forward component (price, 2010): the opportunity for students to use the information to affect future work. feed-forward is cyclic and ongoing in the process of longitudinal development (denton et al., 2008), stemming from dialogues between instructors and students (price et al., 2010). without this feed-forward opportunity, students may inappropriately view each assignment as a discrete final project and regard feedback as simply justification for a given grade. if feedback is considered a finished product, merely correcting errors on assignments, or if it is not delivered in time for student action, it is ineffective and more than likely ignored (dickinson, 1992; gibbs & simpson, 2004; price et al., 2010). evaluative feedback becomes useful and meaningful when instructor and student share an understanding of its purpose (case, 2007; price et al., 2010; seliem & ahmed, 2009). when give-and-take opportunities exist throughout this ongoing, cyclical process, instructors can offer additional explanations or elaborations on feedback (hattie & timperley, 2007; price et al., 2010). this practice can clarify the information instructors have disseminated to students about their work and thereby help improve learning outcomes (denton et al., 2008). in an assessment continuum between student and teacher, feedback and instruction are intertwined (hattie & timperley, 2007) as part of an ongoing dialogue between the stakeholders, a dialogue increasingly desired by students (price et al., 2010). hence, feedback is most effective when it is understandable to the extent that learners are able and willing to use it and when instructors focus on "how to improve" subsequent learning (ferguson, 2011, p. 56, emphasis added). the assessment process should not be a "bolt-on addition at the end" of the curriculum but "an integral part of the educational process" (national curriculum tgat report, 1987, p. 6). both feedback and feed-forward should be an ongoing part of the educational process in a forward-looking, relational process, allowing students to use the information to improve subsequent assignments (dickinson, 1992; gibbs & simpson, 2004; price et al., 2010).

iii. methods.

a. participants.

this study invited 664 undergraduate students from the school of education at a midwestern university to take part in an investigation of students' preferences for either handwritten or electronic feedback and their rationale for those preferences. two hundred seventy-nine students responded, a return rate of 42%. of the 279 respondents, 29 did not complete all of the survey questions. as these surveys were incomplete, they were discarded from the sample, leaving a total sample of 250 and a response rate of 38%. of the 250 participants, 80% were female and 17% were male; seven students (3%) did not report their gender. except for two who did not report their age, the largest group, 147 participants (59%), were between 18 and 24 years of age, with the remainder ranging from 25 to over 45. except for 19 students (8%) who failed to report their gpa, most participants (65%) indicated that their gpa was 3.01-4.00. two thirds of all respondents (66%) described their major as elementary education, while 28% self-identified as secondary education majors and 5.6% as special education majors (see table 1 and table 2).
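for readers who wish to verify the participation rates just reported, the arithmetic can be reproduced in a few lines of python; the counts are those given in the paragraph above.

# reproduce the reported participation rates (counts from the text above)
invited = 664      # undergraduates invited to participate
responded = 279    # students who returned the survey
complete = 250     # respondents who answered every question

print(f"return rate: {responded / invited:.0%}")    # -> 42%
print(f"response rate: {complete / invited:.0%}")   # -> 38%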
table 1. gender and age.

variable       n     %
gender
  female       200   80
  male         43    17
  missing      7     3
age
  18-24        147   59
  25-34        61    24
  35-44        27    11
  45 & over    13    5
  missing      2     1

note. all percentages add up to 100%.

table 2. class standing, gpa, and major.

variable        n     %
class standing
  freshman      47    19
  sophomore     58    23
  junior        58    23
  senior        82    33
  missing       5     2
gpa
  3.01-4.00     164   65
  2.01-3.00     62    25
  2.00 & below  5     2
  missing       19    8
major
  elementary    165   66
  secondary     70    28
  special ed.   14    5.6
  missing       1     0.4

note. all percentages add up to 100%.

b. research design.

to best understand the research problem, a mixed-methods approach was used, obtaining different but complementary data on student perceptions of handwritten and electronic feedback. administered as a single questionnaire, it combined the strengths of quantitative methods (large sample size, trends, generalization) with the complementary strengths of qualitative methods.

c. instrument.

the online survey application limesurvey was used to collect data. the survey questions were developed by the four researchers and reviewed by a faculty member with expertise in instructional technology. in light of his suggestions, the questions were revised and refined until consensus was reached. the survey instrument consisted of three closed-ended questions: 1) which kind of feedback do school of education undergraduate students prefer, handwritten or electronic? 2) to what extent do school of education undergraduate students prefer either handwritten or electronic feedback? and 3) how useful was your instructor's feedback? in addition to demographic questions covering gender, age, class standing, gpa (grade point average), and major, there were two open-ended questions: 1) "i prefer handwritten feedback because . . ." (answered by handwritten supporters) or "i prefer electronic feedback because . . ." (answered by e-feedback supporters), and 2) "do you have any other comments to make about assessment feedback that may help faculty better facilitate your learning?" (asked of both groups). in the survey, handwritten feedback was defined as "feedback that is written by hand on students' assignments and physically delivered to students." electronic feedback was defined as "feedback that is typed and shared electronically with students via emails, forums, facebook, etc."

d. procedure.

two weeks after the spring 2012 semester started, all undergraduates admitted into the teacher preparation program were invited via email to participate in the study. potential participants were directed to the online site, where they were first presented with a consent letter informing them of the purpose of the study, ensuring confidentiality, and making clear that participation was voluntary. respondents who agreed to participate continued on to complete the questionnaire; they could stop answering the questions at any point. all potential participants received a first follow-up letter electronically three weeks after the initial invitation was sent.
a second follow-up letter was emailed to all potential participants three weeks later.

e. data analysis.

to answer the first research question, whether the school of education undergraduate students preferred electronic or handwritten feedback, nonparametric tests were utilized. spss 19 was used to answer part of the second research question of why one option was preferred over the other: a crosstabs procedure using the chi-square test of independence was applied to the nominal variables. a chi-square test of independence measures whether the observed frequency counts across two categorical variables differ from the counts that would be expected if the variables were independent (bakerson, 2009; mertler & vanatta, 2005; rosenberg, 2007; stevenson, 2007). this issue was addressed through the use of pearson's chi-square procedure (bakerson, 2009; mertler & vanatta, 2005; rosenberg, 2007). the rest of the second research question was answered through analysis of the qualitative responses, which consisted of coding the survey responses and aggregating the codes to identify themes (charmaz, 2000; creswell, 2002). four researchers read and analyzed the respondents' justifications of their preferences for handwritten or electronic feedback, as well as their responses to the last survey question: "do you have any other comments to make about assessment feedback that may help faculty better facilitate your learning?" six themes were identified: accessibility (a), timeliness (t), legibility (l), quality of feedback (q), personal (p), and miscellaneous (m) (see table 3 and table 4). in light of the themes, the researchers went back to check the codes and then discussed coding discrepancies over two meetings. the inter-rater reliability was 0.82 for the electronic feedback preference, 0.84 for the handwritten feedback preference, and 0.72 for the last question. the qualitative responses under each theme were then tallied to answer the second question: why the respondents preferred one form of feedback over the other, and what they valued most in terms of the six themes.
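the inter-rater reliability figures above (0.82, 0.84, 0.72) are reported without naming the statistic used; the sketch below is a minimal illustration only, assuming simple percent agreement between two coders over the six theme codes, with invented codes standing in for the study's unpublished raw data.

# illustrative sketch only: percent agreement between two coders, each assigning
# one of the six themes (a, t, l, q, p, m) to each open-ended response.
# the coded responses below are invented; the study's raw codes are not published.
coder_1 = ["a", "t", "q", "q", "p", "l", "a", "m", "t", "q"]
coder_2 = ["a", "t", "q", "p", "p", "l", "a", "q", "t", "q"]

agreements = sum(c1 == c2 for c1, c2 in zip(coder_1, coder_2))
print(f"percent agreement: {agreements / len(coder_1):.2f}")   # -> 0.80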
table 3. coding with themes and examples for accessibility and timeliness.

theme: accessibility (a)
codes: able to get information easily; convenience; able to ask questions; secure
example quotes:
• "i spend the majority of my time on the computer."
• "i am able to access the information needed without having the hard-printed paper(s). [i] can access information anytime i have [a] wireless connection through phone/laptop/or computer."
• "i check my email several times a day so that is what is convenient for me. also, getting electronic feedback means that i will always be able to go back to it without losing it, whereas a handwritten feedback you can lose or misplace."
• "i can ask the professors in class what they mean if i have questions about it."

theme: timeliness (t)
codes: quick return
example quotes:
• "i also appreciate that electronic feedback is a faster way to receive constructive feedback."

note. accessibility (a), timeliness (t).

table 4. coding with themes and examples for legibility, quality, personal, and miscellaneous.

theme: legibility (l)
codes: readability; understanding
example quotes:
• "[y]ou don't have to wonder what a comment says due to poor penmanship."
• "sometimes it is harder to read hand written feedback."

theme: quality of feedback (q)
codes: constructiveness; usefulness; helpfulness; understanding the content; revise and improve; summary vs. in-text comments (location); more detail is better; canned responses; physical touch
example quotes:
• "i like handwritten feedback on tests because they can point out exactly where i messed up and explain it right on the test."
• "i can see what my answers were and see what was wrong, why it was wrong and what the instructor thought. i also like to be able to touch the actually feedback because for some reason i feel like i understand it better when i can touch it."

theme: personal (p)
codes: close rapport between student/professor; feeling obligated to read; appreciation; caring about students
example quotes:
• "when i receive handwritten feedback i feel that my professor entered into a dialogue that required reflection, interpretation, and evaluation on my performance as a student. by providing me with handwritten feedback, i feel that the professor took the time to personalize their thoughts on my performance as a student and pre-service teacher."
• "handwritten feedback is something i usually feel more obligated to read as it is all on my returned assignment."

theme: miscellaneous (m)
codes: wish; use of oncourse, gradebooks; use of word review features; save paper
example quotes:
• "[t]he feedback has to be precise not just 'good work.'"
• "[i]t saves paper."
• "i believe in going paperless to many extents, but when it comes to engaging with comments or feedback, having a marked up paper with comments and input is the most helpful."

note. legibility (l), quality of feedback (q), personal (p), and miscellaneous (m).

iv. results and discussion.

a. preference.

the majority of participating soe undergraduate students (68%) preferred electronic feedback (e-feedback) to handwritten feedback (32%). the primary reason given by e-feedback supporters was accessibility, which accounted for 38% of their comments (see figure 1). in the following sections, the quantitative results are discussed alongside the six identified themes: accessibility, timeliness, legibility, quality, personal, and miscellaneous.

figure 1. qualitative responses by electronic and handwritten feedback supporters by six themes. [bar chart omitted; values recoverable from the chart labels: electronic supporters: accessibility 38%, legibility 16%, timeliness 30%, personal 1%, quality 10%, miscellaneous 5%; handwritten supporters: accessibility 25%, legibility 3%, timeliness 0%, personal 32%, quality 40%, miscellaneous 0%.]
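the percentages in figure 1 are, for each supporter group, the share of coded comments falling under each theme; a minimal sketch of that tally (again with invented codes, not the study's data) might look as follows.

from collections import Counter

# illustrative sketch only: tallying coded comments into per-theme percentages
# of the kind plotted in figure 1. the codes below are invented stand-ins.
e_feedback_codes = ["a", "a", "t", "l", "a", "t", "q", "m", "a", "t"]

counts = Counter(e_feedback_codes)
total = sum(counts.values())
for theme, n in counts.most_common():
    print(f"{theme}: {n / total:.0%}")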
b. accessibility.

the respondents most commonly noted that they could receive feedback effortlessly and found it convenient when professors provided electronic feedback. given that internet access is nearly omnipresent, it is also easy for students to check feedback on their laptops, smartphones, ipads, and other mobile devices: "i prefer electronic feedback because you get to check your emails." "i am able to see the feedback right away through my phone, and anywhere else i have [i]nternet access." ". . . i am generally always available to get to my laptop. i'm on my laptop so much that it just makes it easier for me." chang (2011) confirmed that an instructor's responses could conveniently be received electronically, entirely independent of location and time. those who supported electronic feedback also felt that e-feedback could be easily organized: misplacing papers would be unlikely, as would carrying papers around. one student commented, "i prefer the electronic feedback because it is easier to keep a record of and less likely to become misplaced." in this sense, respondents also noted that they felt secure. in contrast, 25% of the comments made by respondents supporting handwritten feedback concerned accessibility (see figure 1). these respondents reasoned that handwritten feedback was independent of the internet, which was convenient for their learning: "i like to read the handouts in my own time anywhere i want without having to get on a computer and see it." "currently, [m]y life is very busy, the feedback written on my papers is sufficient." "i am able to take it home with me and really look at it. i can also make extra notes on the handwritten feedback that i get." these comments are consistent with chang (2011), who found that those who did not own computers and/or did not have easy access to the internet did not support e-feedback.

c. timeliness.

timeliness was the second reason given by those who favored e-feedback (30% of comments; see figure 1). students explained, "it can get back to the student quicker especially if they are in a once a week class." "[i]t is usually a much faster turn-around; the feedback comes back much quicker." "i . . . appreciate that electronic feedback is a faster way to receive constructive feedback." some respondents associated timeliness with ownership of learning: "it is faster! i am more likely to respond!" "i can also respond quickly from any location." immediate feedback was helpful to students' learning, as the content just discussed in class was still fresh in their minds (chang, 2011; dickinson, 1992; ferguson, 2011; seliem & ahmed, 2009; winter & dye, 2004). this may be the very reason that students were likely to respond to e-feedback. electronic feedback encouraged students to be responsible for their own assignments and to participate actively (chang, 2011; dickinson, 1992; nicol & macfarlane-dick, 2006; seliem & ahmed, 2009). in comparison, those who preferred handwritten feedback did not make any comments on timeliness (see figure 1). with handwritten feedback, timing is one of the major sources of student dissatisfaction (ferguson, 2011; winter & dye, 2004). for the most part, instructors can return students' assignments with handwritten feedback only when classes meet on campus. if feedback is returned to students late, after they have already moved on to the next assignments or tasks, it becomes useless to their learning. students explained, "a lot of time i get this feedback before the next class and before i have started the next homework. i have another class where the teacher does it all by hand and it takes forever to get the feedback and the next homework is due before the feedback gets back to me." "i think that instructors should allow time to provide feedback on all assignments before an exam or written assignment is given over that material. i have taken exams without feedback from prior assignments that covered material that was on the exam. this seems that instructors are simply going through the motions of handing out assignments then testing on the material.
how am i supposed to know what i need to study, if i do not know what i misunderstood on the assignment portion?" these comments imply that feedback is, after all, essential to student learning when students are able to benefit from it (chang, 2011; dickinson, 1992; ferguson, 2011; seliem & ahmed, 2009; winter & dye, 2004).

d. legibility.

legibility (16% of comments) was the third reason given by those who supported e-feedback (see figure 1). the respondents explained that typed messages allowed them to read without much difficulty; they did not have to guess what comments were intended to say. at the least, students did not have to make a special visit to professors just to decipher what was written, as some commented: "[d]on't have to track down a professor to help read what [he] wrote." "[w]hen their responses are typed[,] i can clearly read . . . their input. . ." this is supported by prior research, which found that handwritten feedback was difficult for students to read due to illegible writing (denton et al., 2008; ferguson, 2011; price et al., 2010). in other words, if students are able to read comments, they can "hopefully use their (professors') input." this signifies that students care about their learning and want feedback to better their work (ferguson, 2011). yet when it came to the quality of e-feedback, surprisingly, only 10% of the comments made by e-feedback supporters addressed this topic (see figure 1).

e. quality.

this section reports and discusses the data with respect to the quality of feedback. to help the reader follow the results and discussion with ease, there are two subsections, one focusing on the views of e-feedback supporters and the other on the views of handwritten supporters.

perceptions of electronic feedback supporters. ten percent of the comments made by e-feedback supporters were largely about how feedback helped them learn. these respondents recognized that instructors were able to explain their thoughts completely and that the feedback was specific and detailed, as some wrote: "i . . . feel that electronic feedback gives instructors a chance to fully explain their thoughts and consideration." "i find comments are more thorough." a student also acknowledged that instructors took time reading students' submitted work: "professors take more time to respond to what i wrote, the comments written about my work seem to be more thought out and i can read them with an understanding of where the professor is coming from . . ." a clear desire to improve performance can also be observed in the respondents' comments: "electronic feedback gives a student a chance to read, then review the written feedback later. this is important because student[s] can improve and learn from feedback." chang's (2011) study confirmed that students appreciated the time instructors spent providing detailed feedback on their assignments; the feedback was helpful and useful to their learning. some respondents underscored the role technology plays in providing quality feedback, as typing is easy and can lead to more detailed feedback. students said:
". . . i find electronic feedback is more specific and detailed (perhaps because typing is faster?)" "i feel electronic feedback tends to be more detailed because typing is faster for most than handwriting." "it also is more in depth because the professor is not trying to condense it into the margin of my work." from some students' viewpoints, feedback sent electronically seemed more plentiful: "[p]rofessors tend to give more comments when feedback is given electronically." in addition, technology enables instructors to place feedback near the relevant passages, so students can understand specifically what was done well and what needs improvement. a respondent wrote, "on a paper, professors can provide feedback in certain spots in microsoft word, indicating exactly where they agree or think could use some work." this finding echoes chang's (2011) report that students wanted feedback that was specific and that let them know what needed their attention. moreover, students felt that using technology to offer feedback could soften sharp criticism into something easier to accept; as one student said, "[e-feedback] is more like constructive criticism than just criticism." this is in line with the findings of chang (2011) and dickinson (1992) that composing feedback with technology allows an instructor to review, clarify, and tone down criticism. however, not all instructors seem to take advantage of technology, which seems a cause for concern. some respondents pointed out, "there is a feature in [m]icrosoft [w]ord where as a professor you can highlight words of phrases and sections and add specific feedback for that word or phrase. . ." "we live in a world full of technology and so many of us get online frequently throughout the day . . ."

inconsistent with chang's (2011) study are the priorities the present study's respondents assigned. the e-feedback supporters preferred e-feedback predominantly for accessibility (38% of comments) and timeliness (30%) (see figure 1), with quality of feedback a distant third, whereas the participants in chang's study valued the feedback primarily for its quality, placing accessibility a distant second and timeliness third. the low percentage of comments (10%) on the quality of feedback in the present study could indicate that, at the time the survey was administered, e-feedback was still new to most students, considering that nearly 60% of the respondents were between ages 18 and 24. although technology is by no means novel to this generation, receiving e-feedback from instructors might not have been familiar to them; they were much more conversant with handwritten feedback than with e-feedback.

perceptions of handwritten feedback supporters. the comments on the quality of feedback made by those preferring handwritten feedback (40%; see figure 1) were, proportionally, four times as numerous as those made by respondents with a preference for e-feedback (10%). the handwritten feedback supporters appeared to attach much greater importance to the quality of feedback than the e-feedback group, rating this category as a key ingredient for success.
like those who preferred e-feedback, the handwritten feedback supporters offered a similar justification in their qualitative responses: the feedback was placed in proximity to what needed to be worked on and what was done well. "i enjoy having handwritten feedback because usually handwritten feedback is placed on papers in the areas that need to be fixed." "i can . . . look at exactly where and what the feedback is about and can improve off of that, where as if it is electronic i can not necessarily see exactly what the feedback is talking about or how to improve." like e-feedback supporters, handwritten feedback supporters also pointed out that when professors wrote feedback by hand on their assignments, the feedback tended to be more detailed and specific than when given electronically. the respondents said, "i felt that my professors actually took the time to read and evaluate my performance and in doing so allowing each of us to get to know each other on a better level by being able to discuss the comments right then and there." "[i] feel like the instructor will say more with handwritten feedback rather than with electronic. with electronic they tend to be short with comments and few." yet one difference from the responses of the e-feedback group is that feedback written by hand was seen as more tailored to the individual's learning level: ". . . it is ni[c]e to see that your teacher is taking the time to look over the assignments that you spent your time on and individualizing your comments." feedback shaped by individual student assignments serves as a means of individualized instruction (chang & petersen, 2006). an additional difference is that professors allowed students to revise their work when the feedback was written on their assignments: "also with handwritten feedback, most professors will allow you to fix the paper and resubmit it." the findings of the present study mirror chang's (2011) study in that students felt making revisions to their assignments promoted their learning. yet the findings were incongruent with dickinson's (1992) notion that handwritten feedback does not help students improve their performance. the respondents' comments clearly indicated that they found handwritten feedback advantageous to their learning and that they would rather take extra time decoding professors' handwriting than receive assignments without feedback. what also differed from the view of e-feedback supporters was that handwritten feedback supporters could physically touch the feedback, which they perceived as having an effect on their learning: "i also like to be able to touch the actually feedback because for some reason i feel like i understand it better when i can touch it."

f. personal.

supporters of handwritten feedback tied the quality of feedback to its personal attributes (32% of comments; see figure 1). handwritten feedback seemed to allow a closer rapport with instructors than e-feedback. some students noted, "the feedback that is rece[i]ved from the instructor is more [personal] than the electronic issued feedback . . ." "[i]t makes the feedback feel more personal and shows an interest in all students, whereas electronic could be set up to give the same feedback to multiple people. . . it makes . . .
me feel as if my professor really knows who i am." these findings are supported by the reports of ferguson (2011) and scott (2006), both of whom found that some students still felt a strong dislike of e-feedback. from the perspective of the handwritten feedback supporters, asking professors questions in person was an avenue for establishing a relationship with them. in contrast, only 1% of e-feedback supporters' comments touched on the same topic (see figure 1), and those comments principally described e-feedback as impersonal: "it's more impersonal [than handwritten feedback]." ". . . sometimes electronic feedback feels generic and impersonal." ". . . when receiving all feedback from a computer, it becomes easy for the student to feel like a number." scott (2006) raised a similar concern, identifying that e-communication lacked social interaction and personal touch. one explanation for handwritten supporters ranking quality of feedback first and personal second is that most of the respondents (59%) belong to the millennial generation, or generation y, born between 1980 and 1999, who may be extremely comfortable with technology and have no real memory of life without computers, cell phones, and digital music (rockler-gladen, 2006, in chang, 2011). for them, typing is natural and ordinary, and the participants likely answered the survey questions based on their past experiences. from their perspective, an instructor willing to sit down and write on students' submitted assignments shows that the instructor read the work carefully and gave it thought. this points to something else highly valued by the handwritten supporters: the time instructors spent reading their assignments and writing feedback. that is, time spent writing by hand represented a level of care that instructors had for them, as noted by one student: "it . . . shows that the professor actually cares about the student's work and doesn't just gloss over it . . ." the care shown by professors who wrote feedback by hand also seemed encouraging; students felt a sense of obligation to read the feedback: "handwritten feedback is something i usually feel more obligated to read as it is all on my returned assignments."

g. longing for feedback.

the last survey question, "do you have any other comments to make about assessment feedback that may help faculty better facilitate your learning?" invited all respondents to respond, whether they supported handwritten feedback or e-feedback. the findings revealed that 57% of the responses were about the quality of feedback (see figure 2). it is evident that the respondents generally were interested in receiving feedback in order to improve their learning. some students commented, "i don't have a preference on electronic or handwritten, i just prefer to receive feedback." "professors don't tend to give a lot of feedback so whatever we get is helpful." "i love timely feedback that is specific instead of just a general grade. i really want to know what i did great on and what i need to improve on and the reasons behind them." ". . . i like to see the red ink on my page...there is always room for improvement." ". . .
when it comes to engaging with comments or feedback, having a marked up paper with comments and input is the most helpful." this is consistent with chang's (2011) findings that students expect to receive feedback that is useful, helpful, constructive, specific, detailed, in-depth, and thorough. the findings, however, differ from those of winter and dye (2004), who found students indifferent to feedback, with no intention of picking up graded assignments bearing instructors' comments. also discrepant with the present study's findings is the observation by wojtas (1998, in higgins, hartley, & skelton, 2001) that students only glanced at their grades and did not read the feedback. "feedback in any form is greatly appreciated. . . [.] we do so many assignments in the school of education and receive relatively small amounts of feedback from certain teachers. not all of the teachers are lacking in the feedback department, but when being asked about which kind of feedback i prefer all i can think of is how much i would just like feedback regardless of the chosen delivery method." students' strong desire for feedback also led them to offer suggestions: "i would appreciate all instructors familiarizing themselves with oncourse, using it, and entering grades and communication in a timely and consistent manner." (oncourse is a course management system, similar to blackboard, developed by indiana university along with a few other major universities.)

figure 2. qualitative responses to the final open-ended question in light of the six themes. [bar chart omitted; values recoverable from the chart labels: accessibility 14%, legibility 3%, timeliness 16%, personal 10%, quality of feedback 57%, miscellaneous 0%.]

all of these data illuminate that there is extensive work to be done, which concurs in a sense with ferguson's (2011) and price et al.'s (2010) assertion that feedback has not yet fully played its expected role in facilitating student learning. feedback needs to be unambiguous and detailed enough for students to understand with ease. instructors also need to write feedback in a way that learners are willing to act on and that shows instructors care about student learning. given that 57% of the comments concerned the quality of feedback while timeliness was a distant second (16%), these findings confirm ferguson's (2011) report that if students expect to receive quality feedback, waiting a bit longer does not cause a huge issue. even though there is a 41-percentage-point difference between quality of feedback (57%) and timeliness (16%), the fact that these two categories rank first and second indicates that students not only expect quality feedback but also want it in a timely fashion in order to benefit their learning (bai & smith, 2010; bridge & appleyard, 2008; chang, 2011; denton et al., 2008; price et al., 2010; scott, 2006). the practice of quickly delivering quality feedback with computer technology, coupled with communication and dialogue between instructors and students, has been termed feed-forward (duncan, 2007; murtagh & baker, 2009; price et al., 2010). that is, feedback should not be seen simply as justification for a given grade without an opportunity for students to use the information to better future work. the findings echo hattie and timperley's (2007) report that feedback operates in an assessment continuum between instructors and students where feedback and instruction are intertwined.
price et al. (2010) likewise described feedback as a component of an ongoing dialogue between the stakeholders. it becomes most effective when learners are able and willing to use it and when instructors provide information on "how to improve" subsequent learning (ferguson, 2011).

h. miscellaneous.

with respect to the miscellaneous theme, there was a difference between handwritten and electronic feedback supporters. handwritten feedback supporters made no comments at all under this theme, whereas e-feedback supporters did (5%) (see figure 1). students gave three reasons for supporting e-feedback: saving trees, having less paper to deal with, and the potential of e-feedback. some respondents noted, "[it] saves trees and money." "i … prefer to use as little paper as possible for environmental reasons." some found it easier to receive e-feedback because students would have "less paper to deal with." chang's (2011) study supported these findings. some respondents might not have had direct experience with e-feedback but imagined that it could offer more to student learning: "i feel that electronic feedback has the potential to be more thoughtful as well."

i. degree of preferences.

although the majority of students were interested in e-feedback, more of the respondents who preferred handwritten feedback (88%) favored their chosen form to a moderate or large extent than did those with a preference for e-feedback (81%) (see figure 3).

figure 3. the degree of preferences for feedback. [bar chart omitted; handwritten 88%, electronic 81%.]

j. usefulness to learning.

the same pattern is observed when it comes to the usefulness of feedback to learning. eleven percentage points more of the handwritten supporters (99%) than of the e-feedback supporters (88%) felt the feedback was somewhat to very useful to their learning (see figure 4).

figure 4. the degree of usefulness of feedback. [bar chart omitted; handwritten 99%, electronic 88%.]

k. gender, age, class standing, gpa, and major.

table 5 reports the frequency analysis of gender, age, class standing, gpa, and major corresponding to handwritten and electronic feedback. roughly twice as many female respondents, and twice as many male respondents, preferred electronic feedback as preferred handwritten feedback. the same is true for majors. except for juniors, roughly twice as many seniors, freshmen, and sophomores preferred electronic feedback as preferred handwritten feedback.

table 5. handwritten or electronic feedback data.

                      handwritten     electronic      total
variable              n     %         n     %         n     %
gender
  female              62    31.0      138   69.0      200   100
  male                14    32.6      29    67.4      43    100
age
  18-24               59    40.1      88    59.9      147   100
  25-34               13    21.3      48    78.7      61    100
  35-44               2     7.4       25    92.6      27    100
  45-54               3     23.1      10    76.9      13    100
class standing
  freshman            13    27.7      34    72.3      47    100
  sophomore           19    32.8      39    67.2      58    100
  junior              24    41.4      34    58.6      58    100
  senior              23    28.0      59    72.0      82    100
gpa
  3.01-4.00           49    29.9      115   70.1      164   100
  2.01-3.00           28    45.2      34    54.8      62    100
  2.00 & below        0     0.0       5     100.0     5     100
major
  elementary          51    30.9      114   69.1      165   100
  secondary           24    34.3      46    65.7      70    100
  special education   4     28.6      10    71.4      14    100

note. percentages refer to the partitioned group (row) n.
a crosstabs procedure using the chi-square test of independence revealed no statistically significant differences between the observed and expected frequencies for most of the variables of interest. the results failed to reveal a statistically significant difference by gender, χ2(1, 243) = 0.040, p = 0.842, between handwritten and electronic feedback. the procedure also failed to reveal a statistically significant difference by class standing, χ2(3, 245) = 3.335, p = 0.343, or among majors, χ2(6, 249) = 3.876, p = 0.693. this means that regardless of gender, class standing, or major, there was no difference in preference between handwritten and electronic feedback.

yet the chi-square test of independence did indicate a statistically significant difference among age groups, χ2(3, 248) = 15.807, p = 0.001. in the 35-44 age group, 93% preferred electronic feedback, while only 60% of the 18-24 age group did (see table 6).

table 6. age and feedback preferences.

                               handwritten   electronic   total
18-24     count                59            88           147
          expected count       45.6          101.4        147.0
          % within age         40.1%         59.9%        100.0%
          % within feedback    76.6%         51.5%        59.3%
          % of total           23.8%         35.5%        59.3%
25-34     count                13            48           61
          expected count       18.9          42.1         61.0
          % within age         21.3%         78.7%        100.0%
          % within feedback    16.9%         28.1%        24.6%
          % of total           5.2%          19.4%        24.6%
35-44     count                2             25           27
          expected count       8.4           18.6         27.0
          % within age         7.4%          92.6%        100.0%
          % within feedback    2.6%          14.6%        10.9%
          % of total           0.8%          10.1%        10.9%
45-54     count                3             10           13
          expected count       4.0           9.0          13.0
          % within age         23.1%         76.9%        100.0%
          % within feedback    3.9%          5.8%         5.2%
          % of total           1.2%          4.0%         5.2%
total     count                77            171          248
          expected count       77.0          171.0        248.0
          % within age         31.0%         69.0%        100.0%
          % within feedback    100.0%        100.0%       100.0%
          % of total           31.0%         69.0%        100.0%
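the chi-square statistics reported here and for the gpa analysis below can be checked directly from the observed counts in tables 6 and 7; a minimal python sketch using scipy (not part of the original study, which used spss 19) reproduces the reported values.

from scipy.stats import chi2_contingency

# observed counts (handwritten, electronic) taken from table 6 (age) and table 7 (gpa)
age = [[59, 88],    # 18-24
       [13, 48],    # 25-34
       [2, 25],     # 35-44
       [3, 10]]     # 45-54

gpa = [[49, 115],   # 3.01-4.00
       [28, 34],    # 2.01-3.00
       [0, 5]]      # 2.00 & below

for name, table in [("age", age), ("gpa", gpa)]:
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"{name}: chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
# -> age: chi2(3) = 15.807, p = 0.001
# -> gpa: chi2(2) = 7.284, p = 0.026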
a chi-square test of independence also revealed a statistically significant difference among gpa groups, χ2(2, 231) = 7.284, p = 0.026. in the 2.00-or-lower gpa group, 100% preferred electronic feedback, while in the 2.01-3.00 group only 54.8% preferred electronic feedback (see table 7).

table 7. gpa and feedback preferences.

                                handwritten   electronic   total
3.01-4.00     count             49            115          164
              expected count    54.7          109.3        164.0
              % within gpa      29.9%         70.1%        100.0%
              % within feedback 63.6%         74.7%        71.0%
              % of total        21.2%         49.8%        71.0%
2.01-3.00     count             28            34           62
              expected count    20.7          41.3         62.0
              % within gpa      45.2%         54.8%        100.0%
              % within feedback 36.4%         22.1%        26.8%
              % of total        12.1%         14.7%        26.8%
2.00 & below  count             0             5            5
              expected count    1.7           3.3          5.0
              % within gpa      0.0%          100.0%       100.0%
              % within feedback 0.0%          3.2%         2.2%
              % of total        0.0%          2.2%         2.2%
total         count             77            154          231
              expected count    77.0          154.0        231.0
              % within gpa      33.3%         66.7%        100.0%
              % within feedback 100.0%        100.0%       100.0%
              % of total        33.3%         66.7%        100.0%

perhaps younger students still need considerable encouragement and appropriate assistance from professors to increase their awareness of the importance of feedback in their learning and of how to act on it. with respect to the relationship between feedback preference and gpa, one explanation may be that students in the mid-range feel satisfied with their middling grades and therefore cease to make extra effort to achieve better ones. these findings are inconsistent with those of chang (2011), who did not find any statistically significant differences between preference for e-feedback and age or gpa.

l. limitations.

this study focused only on soe undergraduate participants' perceptions of e-feedback and handwritten feedback. the data from this survey were the respondents' subjective reports, which rest largely on the respondents' mood, feelings, degree of care and attentiveness in reading the questions and writing answers, and the surroundings in which the responses were composed. the responses also depended on the varying levels of experience the respondents had had with e-feedback and handwritten feedback. in addition, the responses might have been affected by how the respondents understood a given definition, such as that of e-feedback. in the survey, e-feedback was defined as feedback that is typed and delivered electronically to students via emails, forums, etc. based on the responses received, this definition did not seem to suffice, as it allowed various interpretations or misunderstandings: some understood e-grades to be e-feedback; others took it to mean general feedback received via email; still others might have thought that e-feedback meant canned responses preset by professors or automatically generated by computers after an exam or quiz. some interpretations held that e-feedback was identical feedback sent to multiple students in a class using some application, e.g., turnitin grademark®; others might have defined it as detailed, individualized feedback, tailored to each student's assignments. furthermore, owing to these distinct variations, even though no responses indicated that students had never received any feedback from faculty, the national union of students (nus) survey (2008) finding that 85% of respondents did receive written comments could not be addressed; perhaps such students excluded themselves from the survey altogether. nonetheless, the study provides preliminary insights into which form of feedback undergraduate students preferred and why.
it thus opens a path of continuing investigation into how feedback may best be provided to facilitate students' learning.

m. educational implications.

even though nearly 70% of the soe undergraduate participants claimed to prefer e-feedback, the comments this group made on the quality of feedback were nowhere near as numerous as those made by handwritten feedback supporters. in terms of degree of preference, fewer e-feedback supporters than handwritten supporters felt that the feedback was somewhat to very useful. moreover, both groups made a strikingly large number of responses about the quality of feedback when answering the last survey question: "do you have any other comments to make about assessment feedback that may help faculty better facilitate your learning?" many responses expressed a longing for feedback: "i prefer feedback in general which is greatly lacking in some classes." in light of this, it would be wise for instructors to take action to offer feedback that is useful and beneficial to student learning. in addition, instructors need to strengthen their ability to provide feedback on students' assignments with computer technology, as technology is now omnipresent. with computer technology, instructors are able to place comments where students can best determine what needs revising and how their work can be improved. typing also allows for more words and clearer messages. students want specific, detailed feedback rather than a few brief notes on their assignments: "i think that feedback needs to be more specific and to the point. not just a 'good job' or a check mark. i want to know what i did [well] and what i did wrong. i also think that the more detail the professor can give the better." "i feel that electronic feedback has the potential to be more thoughtful as well." typing should eliminate illegible writing, thereby reducing unnecessary frustration. before writing feedback, instructors should read students' work carefully so that the feedback is tailored to each student's learning level. instructors also need to give feedback plenty of thought and to find out, by trial and error, how to provide constructive, thorough, specific, clear, unambiguous, and friendly feedback, so that students are encouraged to read it and act on it to improve their performance. with computer technology, instructors may also consider writing a general summary at the end of a paper or exam in addition to specific feedback. in providing feedback on students' assignments, instructors also need to bear in mind that they ought to make every effort to steer clear of e-feedback that has the potential to be misconstrued
by students, as one student commented: "i think that miscommunications can often happen with electronic feedback that can cause rifts in the teacher/student communication." even professors who intend to keep writing feedback by hand need to keep in mind the importance of consistently offering quality feedback, as one student pointed out: "however, handwritten feedback does not always equal quality in terms of being helpful and constructive." by and large, students, whether e-feedback or handwritten feedback supporters, yearn for useful and helpful feedback. yet this study demonstrates that providing quality feedback is not yet a widespread practice, indicating a need for effective faculty training in facilitating students' learning with quality feedback and feed-forward. to affect student learning, instructors should pay particular attention to students in the 18-24 age category and to those whose gpa falls between 2.01 and 3.00; particular attention to "double dip" students, those who are both young and have an average gpa, should prove especially beneficial.

n. suggestions for future research.

future research may replicate and expand the present study, examining the preferences of undergraduate and graduate students alike. since feedback being personal surfaced as one of the principal reasons behind students' preferences, research questions could include: "how could instructors compose e-feedback that is personal and appreciative?" students expressed frustration and disappointment when feedback was too unclear or brief to help their future learning, and the findings seem to indicate that more feedback is better; one student remarked, "professors tend to give more comments when feedback is given electronically." future research could delve deeper into how much feedback is enough for students to feel a benefit. information overload can easily discourage students from enhancing their learning, as one student pointed out: "ridiculously little font sizes are almost as annoying as bad handwriting and information saturation leads to the type of visual clutter that frustrates me as i look for the spec[i]fic area i need." on the other hand, students wanted specific and detailed feedback: "on the feedback please be specific and tell us how we should have answered." "[m]ore detail makes things much more clear." research could also focus on an explicit definition of e-feedback and on how to feed forward so that students genuinely gain knowledge and skills.

o. conclusion.

the vast majority of soe undergraduate participants preferred feedback sent to them electronically because this form of feedback was said to be easy to access, considering that many students have cell phones, laptop computers, and other mobile devices. feedback sent electronically reaches students faster than handwritten feedback returned during face-to-face meetings, and typed feedback is more readable than most handwritten feedback. although the two groups provided far from equal numbers of comments on the quality of feedback, both clearly indicated that undergraduate students in general not only welcome but want feedback that is detailed, tailored, specific, in-depth, and thorough. timeliness was an additional reason undergraduates supported e-feedback.
even though the two groups held polarized views on feedback being personal, a close rapport with instructors was what most students would appreciate. the students also urged instructors to familiarize themselves with technology in order to efficiently provide them with helpful feedback. when working with students who are ages 18-24 and whose gpa is between 2.01 and 3.00, instructors should make the effort to encourage these students to use feedback to advance their learning.
journal of teaching and learning with technology, vol. 1, no. 1, june 2012, pp. 61-62.

storyboarding with powerpoint to bring cases, case problems, and course content to life

michael morrone1

1 senior lecturer, kelley school of business, indiana university, mmorrone@indiana.edu

keywords: powerpoint, engagement, case studies, storyboard

framework

the case method is widely used in business, law, and other disciplines as a way of contextualizing course content. most commonly, cases are delivered as paper descriptions of problems that arise in the field being studied. the case method leads to student engagement as students use course content to understand and to propose solutions to real-world problems. technological developments, however, empower teachers to easily move away from paper presentation of cases and to bring cases to life with multimedia elements.

making it work

in order to integrate course readings and a business case for my business communication class, i use a storyboard approach in powerpoint. the case discussed here includes five acts (modules) and centers on potential problems a jeweler faces because unscrupulous diamond vendors still find ways to sell conflict diamonds to jewelers. the first slide of each act includes a link to a discussion of learning objectives presented in print and audio. as the students read the act, they discover other embedded links in the story. pictures (all pictures are royalty free) of corporate offices, jewelry stores, etc. were used to create context and setting. characters (again, royalty-free pictures were used) involved in the case converse with each other. links to course content appear in conversation and thought bubbles, computer screens, and work files pictured in the act. for example, in the first act the executives at a regional jewelry chain begin to deal with fallout from a 60 minutes episode featuring the arrest of one of the jeweler's diamond vendors. the students see the executives in a conference room discussing the company's public image crisis and a potential ethical lapse. the students click on the computer screen in the conference room, and it shows a video that discusses crisis communication.
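building these clickable storyboards by hand is straightforward in powerpoint's editor, but the linking step can also be scripted. the sketch below is illustrative only, not part of the workflow described above: it assumes the python-pptx library is available (any pptx-generating tool would do), and the file names and url are hypothetical. it adds one storyboard slide with a scene-setting picture and a "thought bubble" text box whose text links out to course content.

    # illustrative sketch: build a storyboard slide with an embedded course-content link
    # assumes: pip install python-pptx, and an office.png picture on disk (hypothetical)
    from pptx import Presentation
    from pptx.util import Inches

    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 is blank in the default template

    # scene-setting picture, e.g., a royalty-free corporate office photo
    slide.shapes.add_picture("office.png", Inches(0.5), Inches(0.5), height=Inches(5))

    # a "thought bubble" rendered as a text box; the run's hyperlink points at course content
    bubble = slide.shapes.add_textbox(Inches(6), Inches(1), Inches(3), Inches(1))
    run = bubble.text_frame.paragraphs[0].add_run()
    run.text = "what should we say to the press?"
    run.hyperlink.address = "https://example.edu/crisis-communication-video"  # hypothetical url

    prs.save("act1.pptx")

students opening act1.pptx in slideshow mode can click the bubble text to reach the linked material, mirroring the embedded links described above.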
as the story develops, the students become a part of the story as their assignments represent the company's attempts to deal with the crisis.

implications

in class, students take a readiness assurance quiz regarding course content. on follow-up class days we apply and generalize the course content to other business contexts. culminating assignments for each act and for the class as a whole relate to the storyboarded case. this approach allowed me to create one storyline and easily use the case in class. students showed enthusiasm for the case by learning character names and discussing, sometimes with surprise, the ways business messages have to change depending on purpose, audience, and channel. in general, the engagement with the lifelike story helped students remain engaged in the course and course content, while connecting course content to a variety of business situations. this approach to case/course delivery can easily be replicated for other cases and classes in other disciplines.

journal of teaching and learning with technology, vol. 1, no. 1, june 2012, p. 63.

reducing 'death by powerpoint'

michael abernethy1

1 senior lecturer, communication studies, indiana university southeast, mabernet@ius.edu

keywords: powerpoint, best practices, student engagement

framework

powerpoint use in the classroom has increased dramatically in the last ten years, although not always successfully. when powerpoint presentations take precedence over lecture material, students lose interest and feel that they are being read to, not taught. numerous studies show that overuse of powerpoint actually decreases student-teacher interaction in the classroom, as instructors focus on the presentation and not the class, while students are afraid to interrupt the "flow" of the powerpoint with questions or comments.

making it work

to help increase student interaction, use only brief bullet points in your powerpoint, as opposed to putting all the information on your slides, so that you have to explain the material to the students. more importantly, after each main point or every 3 to 4 slides, include a blank slide. this serves as a "discussion" slide, which gives students the opportunity to ask questions or engage in interaction and forces the instructor to turn away from the powerpoint to face the class and get feedback.

audience: any class in which powerpoint is used.
tools: powerpoint presentations.
implementation: immediate. requires no additional work beyond adding extra slides to powerpoint presentations.

future implications

outcomes/assessment: the outcome is increased student engagement and student-teacher interaction. assessment may be achieved by comparing test/quiz results before and after changes to the use of powerpoint.

hybrid/online contexts: when powerpoint presentations are posted online for students but won't be discussed in person, replace the "discussion" slide with a "questions" slide. this would include questions over the material just covered. make it clear that if students struggle to answer any of the questions, they can contact the instructor for further clarification.

journal of teaching and learning with technology, vol. 1, no. 2, december 2012, pp. 43-47.

a case for the use of pedagogical agents in online learning environments

noah l. schroeder & olusola o. adesope
keywords: pedagogical agent, cost-effectiveness, multimedia, learning

framework

progressive multimedia learning tools have been extensively researched over the past twenty years. two of these tools are intelligent tutoring systems (graesser et al., 2004; ma, adesope, & nesbit, 2011; vanlehn, 2011) and pedagogical agents (mayer & dapra, 2012; moreno, mayer, spires, & lester, 2001). in this paper we discuss pedagogical agents, which are visible characters in multimedia learning environments designed to facilitate learning (moreno, 2005; schroeder, adesope, & barouch gilbert, 2012). some researchers have expressed reservations that pedagogical agents may not be cost-effective (choi & clark, 2006; clark & choi, 2005; 2007). however, while it previously may have taken a considerable amount of time and resources to design and implement a pedagogical agent within a learning environment, recent advances in technology make pedagogical agent-based systems more accessible and affordable to educators. pedagogical agent research is typically grounded in social agency theory. social agency theory is based on previous research indicating that people treat computers as fellow humans (reeves & nass, 1996), and posits that "social cues in a multimedia message can prime the social conversation schema in learners" (mayer, sobko, & mautone, 2003, p. 419). thus, mayer et al. (2003) suggest that learners may perceive computer interaction as a social exchange of information. in sum, it is hypothesized that if the learner perceives the computer interaction as social communication, it may cause increased performance on transfer tests due to the student engaging in the "sense-making process" (mayer et al., 2003, p. 420). this process describes active learning, which is delineated into three stages: selecting information, organizing it, and integrating it with prior knowledge (mayer et al., 2003; mayer, 2005). alternatively, mayer et al. (2003) posit that a lack of social cues in a multimedia message will not cause a social response in the learner and will thus foster rote learning, or memorization. as such, it is the process of deeper understanding (atkinson, mayer, & merrill, 2005) that pedagogical agent researchers hope to foster to promote meaningful learning (mayer et al., 2003) in pedagogical agent-based learning environments.

are pedagogical agents useful in multimedia environments?

research suggests that pedagogical agents have the ability to play many roles in the multimedia learning environment, such as demonstrating, scaffolding, coaching, modeling, and testing (clarebout, elen, johnson, & shaw, 2002). however, throughout the research, pedagogical agents often take the role of an instructor or a coach (clarebout et al., 2002). recent research has started to investigate the use of peer agents (e.g., holmes, 2007); however, this area is underrepresented compared to studies that utilized the agent as an instructor. pedagogical agents are not necessarily artificially intelligent, although in the past researchers have paired them with intelligent tutoring systems (e.g., moreno, mayer, spires, & lester, 2001). to some this may seem a major limitation. however, an alternative viewpoint suggests that constructing artificially intelligent agents generally requires computing and programming knowledge that many educators may lack.
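to make "non-intelligent" concrete: a scripted agent needs no ai at all, only a predetermined dialogue and a delivery channel. the sketch below is our illustration, not a tool from the literature reviewed here; it assumes the pyttsx3 text-to-speech package is installed and simply narrates a fixed script, which is the kind of minimal agent behavior an educator could pair with an on-screen character image.

    # illustrative sketch of a scripted (non-intelligent) agent's narration
    # assumes: pip install pyttsx3 (offline text-to-speech)
    import pyttsx3

    script = [
        "welcome back. today we review supply and demand.",
        "pause here and predict what happens when the price rises.",
        "ready? let's check your prediction against the model.",
    ]

    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # slightly slower than default, for clarity
    for line in script:
        engine.say(line)             # queue each scripted utterance
    engine.runAndWait()              # speak the queued script aloud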
thus, the ability to incorporate a non-intelligent agent into a multimedia learning environment with relative ease may increase the effectiveness of the environment for minimal cost. cost-effectiveness should be an important consideration for educational researchers, as it is well known that budget cuts continue to affect many higher education programs (potter, 2003).

empirical results

clarebout et al.'s (2002) seminal review concluded that "pedagogical agents do have possibilities for supporting learners when working with complex tasks…the potential of these pedagogical agents offer opportunities that should be grasped" (p. 281). these claims were reiterated by kim and ryu's (2003) meta-analysis, which indicated that pedagogical agents' presence in multimedia learning environments increased both learners' retention (d=.30, p<.05) and transfer (d=.64, p<.05) scores. mayer's (2005b) review revealed a median effect size of d=.22 for studies in which an agent was present. similarly, moreno's (2005) review investigated pedagogical agent research in relation to mayer's (2005) cognitive theory of multimedia learning. moreno found support for the redundancy principle, in that learners were able to learn more when the learning material did not provide redundant text and narration. additionally, moreno found support for the modality principle, in that learners were able to perform better on post-tests if the pedagogical agent provided narration as the modality of communication rather than text. moreover, moreno's review did not find support for the deleterious effects of the split-attention principle (ayres & sweller, 2005). in other words, while learners split their attention between the agent and the learning material, it did not produce negative learning effects. finally, and perhaps most importantly, moreno found that pedagogical agents can foster the active learning process. recently, heidig and clarebout (2011) reviewed pedagogical agent research; however, their results were not promising. they summarize that "the majority of studies (9 out of 15) yielded no difference on learning" (heidig & clarebout, 2011, p. 51). however, schroeder, adesope, and barouch gilbert's (2012) recent meta-analysis indicates that pedagogical agents produce a small, positive effect on learning.

making it work

as mentioned, researchers have suggested that pedagogical agents may not be cost-effective (choi & clark, 2006; clark & choi, 2005; 2007). in the past, pedagogical agent learning environments needed to be created either from scratch or through the use of complex computer programs. recently, inexpensive and easy-to-operate software options have become available to educators who want to include an agent in their instruction. for example, xtranormal (2012) can be used to create presentations that include pedagogical agents (see figure 1). xtranormal allows the user to create videos using animated characters in virtual environments. the characters range from cartoon characters and stick figures to fully anthropomorphized humanoids dressed in business attire. the program is very simple to operate: you choose whether you want one or two agents, select the setting in which they will appear, select the characters you would like to use, choose background sounds, and type in the text that the text-to-speech engine will render as narration. alternatively, one could record
human voices and upload the recording to provide the narration. the program also allows the user to customize the agents' gestures and movements to make them more realistic.

figure 1. a screenshot showing the user interface of xtranormal. from xtranormal (2012).

future implications

it is plausible that creating a short presentation in xtranormal (2012) may take slightly longer than a comparable slideshow or other multimedia presentation. however, the novelty of the presentation may facilitate student learning and motivation. while pedagogical agents are not the panacea of multimedia learning, in certain situations where something different is needed to grasp students' attention, the use of pedagogical agents may be beneficial.

references

atkinson, r.k., mayer, r.e., & merrill, m.m. (2005). fostering social agency in multimedia learning: examining the impact of an animated agent's voice. contemporary educational psychology, 30, 117-139.
ayres, p., & sweller, j. (2005). the split-attention principle in multimedia learning. in r. mayer (ed.), the cambridge handbook of multimedia learning. new york, ny: cambridge university press.
choi, s., & clark, r.e. (2006). cognitive and affective benefits of an animated pedagogical agent for learning english as a second language. journal of educational computing research, 34(4), 441-466.
clarebout, g., elen, j., johnson, w.l., & shaw, e. (2002). animated pedagogical agents: an opportunity to be grasped? journal of educational multimedia and hypermedia, 11(3), 267-286.
clark, r.e., & choi, s. (2005). five design principles for experiments on the effects of animated pedagogical agents. journal of educational computing research, 32(3), 209-225.
clark, r.e., & choi, s. (2007). the questionable benefits of pedagogical agents: response to veletsianos. journal of educational computing research, 36(4), 379-381.
graesser, a.c., lu, s., jackson, g.t., mitchell, h.h., ventura, m., olney, a., & louwerse, m.m. (2004). autotutor: a tutor with dialogue in natural language. behavioral research methods, instruments and computers, 36, 180-193.
heidig, s., & clarebout, g. (2011). do pedagogical agents make a difference to student motivation and learning? educational research review, 6, 27-54.
holmes, j. (2007). designing agents to support learning by explaining. computers & education, 48, 523-547.
kim, m., & ryu, j. (2003). meta-analysis of the effectiveness of pedagogical agent. in d. lassner & c. mcnaught (eds.), proceedings of world conference on educational multimedia, hypermedia and telecommunications 2003 (pp. 479-486). chesapeake, va: aace.
ma, w., adesope, o.o., & nesbit, j.c. (2011). intelligent tutoring systems: a meta-analysis. american educational research association meeting, new orleans, la.
mayer, r.e. (2005). cognitive theory of multimedia learning. in r.e. mayer (ed.), the cambridge handbook of multimedia learning (pp. 19-30). new york, ny: cambridge university press.
mayer, r.e. (2005b). principles of multimedia learning based on social cues: personalization, voice, and image principles. in r. mayer (ed.), the cambridge handbook of multimedia learning (pp. 201-212). new york, ny: cambridge university press.
mayer, r.e., & dapra, s.c. (2012). an embodiment effect in computer-based learning with animated pedagogical agents. journal of experimental psychology: applied, 3, 239-252.
mayer, r.e., sobko, k., & mautone, p. (2003). social cues in multimedia learning: role of speaker's voice. journal of educational psychology, 95(2), 419-425.
moreno, r. (2005). multimedia learning with animated pedagogical agents. in r. mayer (ed.), the cambridge handbook of multimedia learning (pp. 507-523). new york, ny: cambridge university press.
moreno, r., mayer, r.e., spires, h.a., & lester, j.c. (2001). the case for social agency in computer-based teaching: do students learn more deeply when they interact with animated pedagogical agents? cognition & instruction, 19(2), 177-213.
potter, w. (2003, august 8). state lawmakers again cut higher-education spending. the chronicle of higher education, p. a22.
reeves, b., & nass, c. (1996). the media equation: how people treat computers, television, and new media like real people and places. stanford, ca: csli publications.
schroeder, n.l., adesope, o.o., & barouch gilbert, r. (2012). a meta-analysis of the effects of pedagogical agents on learning. paper presented at the american educational research association annual meeting, vancouver, british columbia.
vanlehn, k. (2011). the relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. educational psychologist, 46(4), 197-221.
xtranormal. (2012). xtranormal [computer software]. http://www.xtranormal.com

journal of teaching and learning with technology, vol. 1, no. 2, december 2012, pp. 54-56.

using short video tutorials

scott jones1

1 scotjones@iuk.edu

keywords: tutorials, video

framework

teachers have argued for the use of web 2.0 screencast software, such as techsmith's jing, to provide feedback on student course assignments (lee, 2012). a screencast is a digital recording of computer screen output, also known as a video screen capture, often containing audio narration (davis, 2012). a teacher might review a student's assignment on a computer and use screencast software to record video of the teacher's evaluation of the student's assignment, along with the teacher's comments. however, screencast software has other uses within a course, including video tutorials. silva (2012) notes that few studies of the use of screen video capture software to create tutorials have been conducted, and of those, the overall student impressions of such tutorials have been favorable. the goal of this paper is to describe how teachers can use web 2.0 screencast software to provide students with short video tutorials. as urtel and fernandez (2012) note, audio podcasts work best when short, and the same is likely true of video tutorials. although numerous screencast software applications exist, the focus of this paper will be on jing (http://www.techsmith.com/jing.html), as at the time of this writing, it is free and relatively simple to use. jing is available for windows and macintosh operating systems. the discussion of this application should be construed as one example of many, rather than as a specific product endorsement. the jing application captures the actions on a screen and stores them as a video file. additionally, if the reviewer's computer is equipped with a microphone, audio may also be captured.
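as described next, free-version captures end up in flash format, which many modern players no longer handle, so instructors may want to convert a capture before sharing it. the sketch below is our illustration rather than a feature of jing itself: it assumes the ffmpeg command-line tool is installed, and the capture file name is hypothetical. it re-encodes a capture to mp4 for upload to an lms or a video-sharing site.

    # illustrative sketch: convert a screencast capture to mp4 for sharing
    # assumes ffmpeg is installed and on the system path
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-i", "capture.flv",   # hypothetical screencast capture file
            "-c:v", "libx264",     # re-encode video as h.264
            "-c:a", "aac",         # re-encode audio as aac
            "tutorial.mp4",        # mp4 plays on most devices and upload sites
        ],
        check=True,                # raise if ffmpeg reports an error
    )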
jing is limited to video captures of up to five minutes, which can be converted to the flash format for viewing on windows and macintosh systems. application upgrades from the free service allow longer videos and varied formats, including mpeg-4 for viewing on apple mobile devices or uploading to a video-sharing website, such as youtube. captured videos can be shared: via a course management system, such as blackboard or oncourse; by converting the video to a format compatible with a video-sharing service such as youtube, then uploading the video and sharing a link; or by uploading the video to a website affiliated with jing (www.screencast.com) that provides limited, free access for uploading and sharing videos. instructors can use video tutorials in several ways. they can be used to create quick lectures or demonstrations for a class as part of a planned lesson, which can supplement instruction in a traditional classroom, or they can provide the student with a tutor-type resource by allowing students to replay material. further, video tutorials are useful as online content for hybrid (blended traditional face-to-face and online) or even fully online courses. the software also provides an excellent tool for answering student questions electronically in an easy-to-understand manner. when a student emails an instructor with a question, the instructor can generate a brief video explanation/response tailored to the student's concern. while a valuable tool for many courses, the strength of this tool is best recognized in hybrid and fully online courses, where visual content might be more beneficial for responding to student questions.

making it work

as an instructor, this author uses the screencast software in web design courses. regardless of whether the courses are taught in a traditional or hybrid format, these courses require demonstrations of how to perform various actions with software or with code, and the screencast software easily captures video and audio of the demonstrations and makes them available for replay. should tutorials require more than the five-minute limit imposed by the free version of the software, they can simply be broken down into a series of shorter segments. after examining other similar products, it is the opinion of this author that the screencast software is also superior for creating high-quality, precise, static captures of parts of a screen. this feature is valuable for creating print documentation to supplement recorded videos. this author also uses the screencast software to generate impromptu videos in response to student requests. if a student needed specific guidance on how to use adobe photoshop to slice an image into a webpage, this author would create a brief video demonstrating how to perform the activity. similarly, if a student emails a question concerning issues with software, or problems with html or css, a brief video response tailored to the solution would be generated and sent to the student. the use of the screencast software allows the instructor a degree of flexibility similar to being present in the classroom with the student.
the benefit for the student is the receipt of tailored information that she can review and replay until she gains the necessary degree of understanding, without requiring her to be in the classroom. lastly, since many general student questions are repetitious in nature, the screencast software allows the instructor to develop a frequently asked questions (faq) resource of videos that students can examine for solutions.

future implications

as digital media becomes easier to use, instructors across the spectrum of education will continue to find new ways to integrate it into teaching. networked communication tools will become further integrated into our students' lives as bandwidth, processing power, and mobility improve, allowing web 2.0 tools such as screencasts to become increasingly important means for interacting with students. lastly, as institutions migrate to different platforms and formats for courses, tools such as screencast software can provide students with more of a sense of the presence of the instructor. this topic was based primarily on pre-existing sources of information and could be enhanced by future researchers who might conduct larger scale studies of instructors and students.

references

lee, k. (2012). technology-mediated feedback. in robin k. morgan & kimberly t. olivares (eds.), quick hits for teaching with technology (pp. 80-81). bloomington, indiana: indiana university press.
"screencast." ziff davis. pc magazine: encyclopedia. http://www.pcmag.com/encyclopedia_term/0,2542,t=screencast&i=60127,00.asp retrieved 31 july 2012.
silva, m.l. (2012). camtasia in the classroom: student attitudes and preferences for video commentary or microsoft word comments during the revision process. computers and composition, 29(1), 1-22.
urtel, m., & fernandez, e. (2012). to podcast or not to podcast. in robin k. morgan & kimberly t. olivares (eds.), quick hits for teaching with technology (pp. 37-38). bloomington, indiana: indiana university press.

journal of teaching and learning with technology, vol. 1, no. 1, june 2012, pp. 59-60.

record your way to shorter grading

dede wohlfarth1 and nathanael mitchell2

1 professor and director of child, adolescent, and family emphasis area, spalding university, dwohlfarth@spalding.edu
2 assistant professor and director of health psychology emphasis area, spalding university, nmitchell01@spalding.edu

keywords: assessment, skills-focused teaching, grading

framework

one of the most effective strategies we have found for teaching novice clinicians new, specific skills is through observation of student role play and timely formative assessment of student practice. many subjects require students to demonstrate competence in concrete behavioral skills, including nursing, teaching, physical and occupational therapy, psychology, and social work. when direct observation of such student skill development is not a viable option, an inexpensive video camera can be a valuable tool: students create video role plays and post them on the internet for instructor review. students can post their videos on youtube and make the link to the video accessible only to the professor or, if desired, to students in the class providing peer review. because the video is now on the web and available for review, the student no longer needs to turn in the video on expensive media (e.g., flash drive) or inexpensive media (dvd).
furthermore, instructors can use their own camera to record video formative feedback while watching student videos, allowing copious amounts of useful feedback to be created in about half the time it would take to write the same feedback.

making it work

this teaching strategy could be highly effective for any clinical discipline in which specific clinical skills need to be evaluated and mastered. we have used this technique in clinical psychology courses and in teaching courses; colleagues have utilized this technique in occupational therapy with great success. with changing technology, there are many inexpensive cameras that could be used to record videos. while this could be seen as a financial hardship for some students, we help manage this cost by: 1) explaining the need for a video recording device before students enter our program; 2) using the device across several skills-building courses; and 3) reminding students they can also use the device for fun, such as recording themselves doing superhuman tricks. additionally, many students opt to use their smartphones as recording devices and have found that the most sophisticated of these phones allow them to record and post digital videos. and we have had no difficulty convincing students that they might want to purchase top-of-the-line cell phones with remarkable technological advances! the majority of students who own video cameras have found them to be very user friendly. individuals with just a modicum of technological savvy (the authors of this tip fall into this category; one of us just barely so) will be able to record, save, upload, and share videos. the advantage of video-recording student feedback when grading is that, in addition to reducing feedback time for professors, students can understand the nuances and context of our comments when the comments are "live" compared with in writing. the major disadvantage, ironically, is also an advantage. if you grade at home, as we do, students may see a glimpse of you outside the "ivory tower" as rambunctious children scream for you or pets run into the video frame. students say they love this feedback because it makes their professors seem more human.

future implications

students consistently provide feedback that creating video role plays improves their learning, especially for specific behavioral skills that are foundational for success in their chosen field and difficult to learn via traditional pen-and-paper assessment measures. on course evaluations, students have also noted that receiving timely, specific, constructive feedback on their developing skills is the single most helpful aspect of the course. additionally, rubrics are helpful and can be used in conjunction with the video feedback to provide written feedback on specific microskills (e.g., good eye contact: present or absent; open body language: yes or no). finally, having students post their videos online instead of turning in several forms of media has decreased instructor stress about being responsible for possibly expensive student property (e.g., flash drives).
while the video camera is an easy tool for creating and sharing videos, it is not required for the creation of student video role play or instructor video feedback. if a student turns in the video on a media source that can be modified or has the capacity for additional video files to be added (e.g., flash drive, dvd-rw), the instructor can provide video feedback while observing the student video and then save the video feedback file to the student's media.

journal of teaching and learning with technology, vol. 6, no. 1, january 2017, pp. 76-80. doi: 10.14434/jotlt.v6n1.22367

designing and managing engaging discussions in online courses

micah pollak1

1 school of business and economics, indiana university northwest, 3400 broadway, gary, in 46408, mpollak@iun.edu

abstract: as 100% online courses become more popular, the need for engaging student interactions through online discussions becomes more important. unlike a traditional face-to-face course, where student-instructor and student-student interaction often occur naturally in the classroom, in an online class interaction needs to be consistently and deliberately promoted. as velez-solic (2015, p. 40) states, "interaction is the heart of an online course." extensive interaction is also the core of the social constructivism approach to education (vygotsky, 1980). the use of weekly discussions surrounding a set of "discussion questions" is an increasingly popular way to achieve both student engagement and interaction. to enhance this interaction, i share two sets of tips for creating engaging discussions and effectively managing them in online courses. the first deals with developing discussion questions, and the second focuses on the role of the instructor in moderating and evaluating student posts.

keywords: teaching, online, discussions

designing effective and engaging discussion questions is not easy. dillon (1983) argues that to conceive of an educative question requires thought, to formulate it requires labor, and to pose it requires tact. to meet the challenges of creating engaging discussion questions, i designed the a.v.i.d. approach to question design, which stands for (a)ctive, (v)aried, (i)nteresting, and open-ende(d). this approach assists with creating discussion questions that are likely to promote discussion and to be more engaging for students. the four parts of this acronym are:

(a)ctive

one of the common pitfalls when designing discussion questions is to choose those that lead to passive responses. questions that begin with "what do you think about..." or "apply what we've learned…" encourage students to treat discussion questions as a regurgitation of opinion. in an online course, students already spend much of their time sitting in front of their computer, and discussions provide an opportunity to make the environment more dynamic. whenever possible, i include at least some active elements in discussion questions, elements that may require students to reflect and return to the discussion later or physically get up from the computer. these active elements are similar to the "authentic activities" of herrington, oliver and reeves (2002). they argue that activities which have real-world relevance and require students to investigate and define
some tasks of the activity on their own are consistent with the constructivist educational philosophy and can be beneficial to the learner. for example, in one discussion on the supply & demand model, i have students pick a good (such as "coffee") and then poll five friends on the "most they would be willing to pay" for the good. i then have them visit several stores and record the price they find for the good. when they return to the computer, they construct a miniature model of supply and demand for the good (aided by an excel sheet i provide). they then explain to their classmates what the equilibrium price and quantity would be and who would buy the good based on their data. discussions that blend the material with activity away from the computer and allow some individual choice promote experiential learning and make discussions more engaging.

(v)aried

if students have multiple discussion questions per week, it is important to make sure the types of questions are varied. for example, if one question is primarily active (like the example above), this should be balanced with a question that is more reflective. this variation serves both to prevent discussions from being too repetitive and to give students some flexibility in the type of questions they focus on first. akin and neal (2007) argue that variation in the types of discussion questions accommodates different student learning styles and backgrounds as well as allowing more time for discussion and reflection.

(i)nteresting

while it may seem like an obvious feature, a discussion question should be interesting. to make discussion questions interesting, i focus on topics that are relevant, personal, and even controversial for students. i make questions topical and draw from recent politics, news, and popular culture. for example, topics like minimum wage laws, unemployment rates affecting college graduates, student loan debt, gun control, and the national debt are topics with which most students have at least some familiarity, and many may already have strong opinions. while there is always a chance of creating discussions that are "too heated," a "too heated" discussion is typically preferable to a boring one. rossman (1999) finds that students prefer discussions that relate course material to their life or work situations. when students can directly relate to the topic of discussion, they are likely to participate more enthusiastically as well as return and continue participating longer, creating a richer discussion.

open-ende(d)

finally, and perhaps most importantly, engaging discussion questions should be open-ended. while discussions should relate to and reinforce course material, they typically should not have a single "correct" answer (velez-solic, 2015). nothing ends a discussion more quickly than a majority of students providing the same response or reaching the same conclusion. this can be especially challenging for more analytical subjects like economics or mathematics, where much of the course is focused on how to correctly solve a problem or apply a model. for example, a question like "if the minimum wage is raised, what does the supply and demand model predict will happen?" does not promote discussion. however, rephrasing it as "do you personally think the minimum wage should be raised or not?
use the predictions of the supply and demand model to support your view" allows students to reach a variety of conclusions. a good discussion question is fundamentally different from an essay question. students often enter a course from a wide variety of backgrounds, and choosing discussion questions that take advantage of differing views and provide an opportunity to relate these views back to course material makes discussions more engaging. besides the principles of the a.v.i.d. approach, some learning management system (lms) tips i have found useful in promoting engaging discussions in 100% online courses include:

• enabling a "users must post before seeing replies" feature. this prevents students from worrying that their response will be too similar to another student's. rossman (1999) finds that students may experience guilt when not posting because other students have already posted a similar perspective. i would rather have repeated similar posts than students struggling to find something "new" to post.
• disabling "allow students to edit and delete their own posts." this encourages students to be more careful with their wording and proofreading before posting. it also prevents students from posting and then deleting and reposting. i include the following sentences in the syllabus to explain this policy: "think of your posts as similar to saying something in a classroom. if you say something out loud you cannot delete or edit it after the fact (as much as we sometimes might want to!). if you wish to explain or expand on a post, then post a reply to your original post."
• using a separate discussion for each discussion question. trying to moderate and organize responses to different discussion questions in the same discussion rapidly becomes too difficult.
• using threaded discussions. threading allows better organization of replies, making it clearer when students are replying directly to each other.

managing discussions and the role of the instructor

effectively moderating and grading discussion posts in an online course can be an intimidating and sometimes overwhelming job, especially in larger enrollment sections. in my typical class of around 50 students, i will often have 200-300 posts per week to view and evaluate, many of which are made at unusual hours of the day. to help stay organized and avoid being overwhelmed, i separate my activity in the discussions into what i believe should be the three roles of an instructor in group discussions.

moderation

the first role of an instructor in discussions is to moderate student posts. this means making sure that the discussion is progressing and on-topic. as a moderator, it is not my role to respond to every student, but i will read and review every post. rovai (2007) argues that in facilitating discussions, an instructor should avoid becoming the center or focus of a discussion and instead allow student-to-student interaction to develop. i will respond to students whose posts have gone without another response for a substantial amount of time. when i do respond to students, i try to do it in a way that encourages further discussion, usually ending each response with a follow-up question or a new perspective for them to consider. asking follow-up probing questions and providing encouragement promotes continued discussion (rovai, 2007).
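the "respond when a post has gone unanswered for a while" heuristic is easy to automate, at least as a triage list. the sketch below is a hypothetical illustration, not a feature of any particular lms: it assumes post data has already been exported or fetched (e.g., via an lms api) into simple records, and it flags top-level posts that have had no reply for 24 hours.

    # illustrative sketch: flag discussion posts with no reply after 24 hours
    # assumes posts were already fetched from the lms into dicts like these
    from datetime import datetime, timedelta

    posts = [
        {"id": 1, "author": "student a", "posted": datetime(2017, 1, 9, 20, 15), "reply_to": None},
        {"id": 2, "author": "student b", "posted": datetime(2017, 1, 10, 7, 40), "reply_to": 1},
        {"id": 3, "author": "student c", "posted": datetime(2017, 1, 10, 23, 5), "reply_to": None},
    ]

    def unanswered(posts, now, max_wait=timedelta(hours=24)):
        """return top-level posts older than max_wait that no post replies to."""
        replied_ids = {p["reply_to"] for p in posts if p["reply_to"] is not None}
        return [
            p for p in posts
            if p["reply_to"] is None           # top-level post
            and p["id"] not in replied_ids     # nothing replies to it yet
            and now - p["posted"] > max_wait   # waiting longer than the threshold
        ]

    for post in unanswered(posts, now=datetime(2017, 1, 12, 8, 0)):
        print(f"needs a reply: post {post['id']} by {post['author']}")

a list like this lets the instructor spend moderation time where student-to-student interaction has stalled, rather than rereading every thread.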
i also use moderation as an opportunity to make sure students remain on-topic and address the discussion question.

instruction

one of the challenges in online discussions is that it is very easy for misunderstandings to be perpetuated if they are not caught early. when a student uses a term incorrectly or applies an idea in a flawed manner, it can quickly create confusion or spread misinformation. my role in this case is to regularly review student posts to make sure they are not posting incorrect information. if they are applying a concept incorrectly, i step in and provide clarification for them as well as for the other students. i generally combine the roles of moderation and instruction when reviewing student posts. one learning management system tool that has been helpful in this regard is the option to "manually mark posts as read." enabling this option allows me to mark as "read" posts i have viewed that need no further attention, while leaving "unread" the posts i plan to return to as well as those i have yet to read. in addition to clarifying concepts, a second part of this role is to expand on course material in new ways. to this end, i generally respond to student posts when an opportunity presents itself to provide an alternative view, which often motivates further discussion, or when a student has posted a particularly interesting or unique view.

evaluation

finally, my third role in discussions is the evaluation and grading of discussion posts. in my experience, if i am successful in keeping up with my other two roles (moderation and instruction), this can be fairly easy and is best done after the discussion deadline. by the discussion deadline, through the roles of moderation and instruction, i have often already completed many aspects of evaluation, such as making sure posts are on-topic and contribute to the discussion. this generally leaves only technical evaluation of things like the length, frequency, and distribution of posts, which are much easier to evaluate independently of content. following the two sets of tips outlined in this article will help instructors design engaging discussion questions and effectively manage them in an online environment. the a.v.i.d. approach helps instructors write discussion questions that are more appealing to students and encourage greater and more active participation. defining the role of the instructor as the three separate tasks of moderation, instruction, and evaluation helps the instructor manage discussions without being overwhelmed. while designing and moderating discussion questions can be challenging, as online courses become increasingly popular with students as well as a larger part of many programs, promoting interaction and engagement through online discussions becomes crucial for effective instruction.

references

akin, l., & neal, d. (2007). crest+ model: writing effective online discussion questions. journal of online learning and teaching, 3(2), 191-202.
dillon, j. t. (1983). teaching and the art of questioning. bloomington, in: phi delta kappa educational.
herrington, j., oliver, r., & reeves, t. c. (2002). patterns of engagement in authentic online learning environments. ascilite 2002 conference proceedings (pp. 279-286). auckland, new zealand.
rossman, m. h. (1999). successful online teaching using an asynchronous learner discussion forum.
journal of asynchronous learning networks, 3(2), 91-97.
rovai, a. p. (2007). facilitating online discussions effectively. the internet and higher education, 10(1), 77-88.
velez-solic, a. (2015). teaching online without losing your mind: a comprehensive overview. charleston, sc: avs academic services.
vygotsky, l. s. (1980). mind in society: the development of higher psychological processes. harvard university press.

journal of teaching and learning with technology, vol. 11, special issue, pp. 37-50. doi: 10.14434/jotlt.v11i1.34595

microsoft teams supports authentic assessment of learning

nancy evans
indiana university bloomington
nanevans@indiana.edu

abstract: in computer technology, statistics, and business courses that i have taught over the past decade, students have worked in groups/teams. i assumed students communicated with teammates within the learning management system (lms). in 2017, i was surprised to discover students were using groupme. to help me understand the attraction, i signed up for a groupme account. i was underwhelmed with its purpose but understood the ease of the application. i was also learning more about slack, thinking it was a much better tool for communication and collaboration, and realizing that communicating within the lms was not ideal, even awkward, and not real world. however, slack was not readily available for student use, so when our institution introduced microsoft teams in 2020, i began exploring microsoft teams for teamwork/projects because it is a real-world tool that students will likely use or encounter after college. i have been using microsoft teams consistently since fall 2020. using microsoft teams moves students from conceptual to applicable knowledge related to teamwork, communication, and even leadership skills. microsoft teams can supplement any content/course project. requiring students to use microsoft teams as it would be used in the workplace allows teams to choose when they meet to collaborate, manage their "channel," and ultimately create their project. all the work leading up to the final deliverable is archived in one space accessible by the instructor (manager), as it would be in the real world. the assessment takes on a level of authenticity that is missing with a traditional lms. in this reflective essay, i show how microsoft teams supports authentic assessment while engaging students in a real-world technology that adds to their post-undergraduate toolkit and future success.

keywords: microsoft teams, authentic assessment, student teams

teaching with technology has always been a part of my teaching life. creating assignments and activities that aid students in applying what they are learning so they see the practicality of the subject matter has been a foundation of my teaching. this application of concepts/theories and development of practical skills for undergraduate students is an important aspect of authentic assessment and of my teaching. thus, integrating technologies to facilitate such authentic assessment has been a natural and continuous, albeit somewhat unintentional, path in my development as a teacher. in 2001, i began teaching computer information technology courses at a large university where my interest in and focus on teaching practical skills were indeed expected.
the program i taught in served undergraduate students seeking hands-on computer technology skills. our courses were application-based more than theory-based, and as often as possible, it was critical to provide assessments that were authentic to the information technology field. one example is using microsoft excel to solve a problem versus a multiple-choice test about microsoft excel. my current teaching role is in a different school at the same institution, yet the focus on and expectation of teaching practical skills is foundational to the undergraduate business program i serve. i still integrate technologies, specifically microsoft teams, to help students authentically prepare for using such technology in their "real world" jobs/careers. i love theory: the concept of theory, reading about and learning new theories, teaching theory. i have found that most 100- to 300-level undergraduate students do not. to be an effective teacher, even when theory has been part of the content i teach, i have had to find a way to reach the students. the answer has been to make the theory applicable, relevant, and practical to their lives (gagne, 1985) while allowing space for students to learn from each other. this social constructivist approach to learning (vygotsky, 1978) allows the content i am teaching to be accessible to students in a way they can relate to and make sense of now and later in life. when i taught 100-level programming and 200- and 300-level statistics to computer information technology students, one way to help students relate to and make sense of programming and statistics was by using student groups/teams in class sessions to practice and solve problems after individually working on homework. having students work in teams to solve problems is, to me, real-world and authentic. in the remainder of this essay, i will connect this teamwork component to technologies that support authentic environments and assessment.

from authentic process to authentic assessment

before i share my experience using microsoft teams as a technology that supports authentic assessment, i want to provide background information on how my teaching path has led me to the present. when teaching statistics/programming circa 2010-2017, i simulated how we authentically tackle problems in the workplace through cooperative and collaborative learning (oakley et al., 2004, p. 10) and active learning (ma et al., 2021) in various teaching modalities (in-person, hybrid, and online) using the following three-step process: 1) individual prep work to think through a given problem and attempt a solution; 2) meetings to share individual ideas with the team (informed brainstorming), where ultimately the team makes a final decision on the best solution; and 3) creation of deliverables and presentation to an audience. i still use this three-step process in a course with content that is drastically different from statistics/programming, yet there is a similar authentic problem-solving approach. my current course is a required career/professional development undergraduate business course that includes virtual teamwork, emotional intelligence, communication, and leadership skills content. i emphasize the difference in courses because this three-step process is the foundation of using technologies that support authentic assessment, regardless of course content.
further, using technologies that support authentic assessment can be accomplished in online, hybrid, and in-person course modalities. in the past i have used zoom as the platform for student team meetings. zoom is an authentic meeting platform, but it does not contribute to authentic assessment in the same way as microsoft teams. using microsoft teams instead of zoom allows me to bridge an authentic problem-solving approach with authentic assessment. since fall 2020, i have been using microsoft teams as an authentic, real-life technological platform for authentic process and, most recently, to authentically assess students' team performance and deliverables. with microsoft teams, students choose how to meet weekly objectives, which simulates how they would collaborate in a real job. from the work that student teams complete in microsoft teams, i provide formative feedback and summative assessment throughout the semester, as a manager would check in on project work. learning outcomes that connect to using microsoft teams for process and assessment are related to communication, leadership, and teamwork (i.e., working effectively with others). in table 1, i provide the student learning outcomes and related assignments, including where required team meetings are relevant to the outcome. any course that has a final team project, or any team component, could benefit from using microsoft teams as a major component/platform for the course.

table 1. program competency, student learning outcomes, and related assignments.

program competency 4: communication and leadership (communicate effectively in a wide variety of business settings employing multiple media of communication).
• slo 4.1: deliver clear, concise, and audience-centered team presentations. assignment: team capstone project.
• slo 4.2: write clear, concise, and audience-centered business documents. assignment: team capstone project.

program competency 6: working effectively with others (collaborate effectively and respectfully with teammates who look, think, or believe differently from you to build trust and community).
• slo 6.1: participate actively in team meetings and collaborate effectively in face-to-face or virtual interactions. assessed through required team meetings with teammate evaluations.
• slo 6.2: create a cohesive and integrated team deliverable. assignments: team meeting deliverables; team capstone project.
• slo 6.3: assess individual or team collaboration with respect to both productivity and interpersonal relationships. assessed through required team meetings with teammate evaluations.

the way we meet the outcomes in the course i currently teach is through teamwork. my use of teamwork in a course replicates a flipped classroom model, which is also connected to cooperative/collaborative and active learning because students do individual prep work prior to attending class. students need to have interacted and/or struggled with material prior to problem solving together because "trying to solve a problem before being taught the solution leads to better learning, even when errors are made in the attempt" (brown et al., 2014, p. 4; oakley et al., 2004, p. 15). the teamwork i expect students to do occurs during required meeting time,1 so most coordination, communication, and collaboration is done synchronously. generally, the only out-of-class coordination would be to schedule the required team meetings.
i had no idea that what seemed a simple coordination task for students would place me on the path to using microsoft teams in my courses.

my internal struggle with students' use of groupme

the first semester that i required student teams to communicate or coordinate outside of class was in an online statistics course in the spring 2017 semester. i assumed students would use the lms feature to communicate and coordinate because that capability is built into the system where they discover who is on their team. other possibilities i considered were that students would email each other or form a group text chat. however, in fall 2017 i heard students were using an app called groupme (figure 1), a group chat system that students seemed deeply committed to using. since groupme is a group chat system, i wondered why students did not simply use group texts on their mobile phones. in spring 2018, i created a groupme account to explore and better understand the attraction to this platform over other collaborative tools/mechanisms that already existed for students. i also asked students why they used it. students explained that they could share files, whereas in a phone group text chat they could not. also, a student who did not have a smartphone could use groupme on a computer.

figure 1. groupme functionality. a look at what groupme provides.

i was underwhelmed with what the app provided students, and for everyone in the group to have to create a groupme account for such a limited purpose seemed unnecessary. i was also perplexed by the impracticality of groupme from a teaching and learning perspective. i would happily have embraced the technology if it had aided in course design, structure, learning, or assessment. i did not encourage its use, but i did not discourage it. i thought it made more sense to use the lms collaborative space and communicate where teams were created. however, i also recognized the convenience of using a chat system to communicate, and the clunkiness of communicating through the lms; hence my lack of discouragement of using groupme. i still struggled with the inefficiency and limitations of students' use of groupme. i could have required students to communicate within the lms. however, force goes against my dissertation findings around meaningfulness, which also inform my teaching philosophy related to "choice" (evans, 2012). further, forcing lms communication is counter to solving problems in a real-world, authentic way. lms communication did not replicate authentic communication any more than groupme did, and there was also nothing i could assess from lms communication. for the moment, i resolved to let go of my obsession with figuring out why they used groupme and accept their choice of how to coordinate and collaborate.

1 this meeting time refers to the online course i currently teach. if teaching in person, class time could be when students meet to solve problems together.

learning outcome focus and finding a collaborative, project management tool

as time passed, i continued to have the desire to steer students toward something more efficient and more powerfully collaborative because "working effectively with others" is a major learning outcome for the course. in the academic year 2019-2020, i pondered the possibility of using slack to aid with project management and help move students away from the "divide and conquer" approach (heflin & meganck, 2017, p. 50).
a divide and conquer approach, while appropriate for some tasks, does not integrate the inclusive three-step process, described in the previous section, that brings each teammate's ideas to solving a problem. again, this three-step process simulates problem solving in the business world that i am preparing my students to enter. using a technology tool that i can integrate into the course to develop transferable skills is critical to my continuous course development. integration into the course in an applicable, practical, relevant way goes back to my teaching roots as the way to reach students. i searched and studied the various tools that integrate into our existing lms. i continuously returned to the concept of slack. i had attended a teaching and learning conference session in fall 2017 in which an instructor used slack, and i studied my notes. i signed up for a slack webinar to learn more. i was convinced that slack was the tool i was looking for. slack seemed a more promising, more real-world business application that would ultimately build teamwork and communication skills in a project management system that students could apply long term. unfortunately, slack was not available for students or faculty at my institution. within a couple of weeks of being determined to find a way to use slack, i learned that our university microsoft agreement was going to include microsoft teams. i quickly discovered microsoft teams was a one-stop shop (figure 2) for collaboration using channels, files, and posts. i no longer needed slack. i began imagining how i could use microsoft teams in my course.

figure 2. communication and collaboration hub. post and add files.

teams abound: interrogating current practice

using microsoft teams as the space for students to help each other solve problems and do their collaborative work interrogates the current practice of how student teams are used for course projects. often, student teamwork is assigned with the assumption that students know how to collaborate effectively and efficiently, or that they will figure it out. i contend that while many students have previous extracurricular team experience from high school, they are not naturally prepared to work effectively, efficiently, and collaboratively with others on an academic, let alone a workplace, project. team contracts, peer review, and other structures are sometimes provided (oakley et al., 2004) and may help team processes and function. however, in a non-cohort academic semester setting, too often the team contracts, structure, and peer reviews function merely as academic assignments rather than as artifacts or documentation that create an effective team culture. further, even if all team members theoretically know how a team develops and functions according to tuckman's stages of team development (tuckman, 1965), there is no guarantee that teams will operate functionally in post-covid virtual teamwork. oakley et al. (2004), for example, provide outstanding advice on moving student groups to effective teams. their approach is a best practice, pre-covid. microsoft teams interrogates the current best practice by suggesting that we are still missing the mark in our use of student teams to prepare students for the real world unless we are using a technology that is a real-world platform for teamwork.
also, when relying on a technology for team function and process, we need to emphasize psychological safety (edmondson, 2012) and the importance of emotional intelligence (goleman, 2020) in virtual work, both of which further interrogate current team process/practice in the post-covid work world. effective teamwork requires a healthy culture of teamwork. both "healthy" and "culture" are key words that current practice does not recognize or foster. in typical higher education undergraduate academic team situations, there is not a culture to enter because the student teams are new each semester and there is typically not a system in which students are taught a consistent way of operating on teams. while many students have had previous extracurricular team experience in which an established team culture existed, too often, frankly most often, there is not an academic "integrated" team experience where expected norms, behaviors, mentors, and mechanisms to handle conflict are provided consistently. a systematic, standardized, and consistent teaching of teams speaks to the "healthy" aspect of team culture. without this approach, student teams tend to be ineffective at collaborating, rely on dividing and conquering, and often are dysfunctional in efficiency, psychological safety, and conflict resolution.2 when students enter the workforce, they will be entering an existing culture and will be expected to conform to the team norms. our students will be well served if we can better prepare them for this authentic experience. without a culture that invites, encourages, and requires individual accountability within the team structure, students will continue to approach teamwork as follows, which is nothing like real-world expectations and is impractical and impossible to authentically assess:

• "we have to meet because our professor said we do, and we are supposed to fill out this team contract thing."
• "we can divide the work up in this first meeting and put everything in a google doc and figure things out later." (i.e., the divide and conquer approach)
• two days before the project due date (perhaps a week, perhaps the day before), one or two team members finish the project.

creating a healthy culture is most likely in the hands of the professor. for me, requiring students to go through the motions of meeting six times per semester via zoom and submitting a collaborative deliverable in the lms did not fit the concept of a healthy culture of teamwork. nor were students working effectively with others, a student learning outcome and an authentic outcome that prepares students for the workforce. student meetings seemed forced within an academic structure of the lms using zoom and kaltura; they did not feel natural. students reasonably approached the goals of the course with a "checklist" mentality: "the professor said we have to do x, so we will do x." microsoft teams is a tool that moves the student experience closer to a healthy culture of teamwork by situating students in a real-world teamwork environment. it is likely that after graduation, most students may "pretty much live in microsoft teams" and could easily "spend six hours per day on microsoft teams, including meetings" (lansmann et al., 2019, p. 3).
microsoft teams usage statistics also support the expectation that most students will use microsoft teams upon graduation. for example, monthly active users of microsoft teams surpassed 270 million in january 2022, frontline worker usage has increased, and 90 percent of fortune 500 companies use teams phone (endicott, 2022). also, microsoft quantifies the value of collaboration with microsoft teams, making a bottom-line case for companies to use the platform (wright, 2019). furthermore, microsoft teams not only helps students practice and gain skills in an authentic work environment in a low-stakes manner (no real-world jobs are on the line); martin and tapp (2019) also argue that "teaching and learning with the app is located within the social constructivism paradigm of educational theory" (p. 58). in addition, with a little coaching related to the importance of psychological safety on teams, which can be done in any course using student teams, the "healthy" aspect is more fully integrated. any course using student teams can also use microsoft teams.

2 there are assessments to aid with emotional intelligence/psychological safety and conflict resolution on teams. blueeq™ (georgia center for assessment, 2019) and the tki® (kilmann & thomas, 1977) are two such assessments that could be integrated into teaching students how to function on teams.

authentic formative and summative assessment

the reason i use student teams/groups in the courses i teach is to simulate real-world problem solving, particularly in a business context. the courses i have taught and currently teach are built on content that prepares students for future work in a business environment after undergraduate study. students taking these courses expect to use what they learn in their internships and in jobs immediately upon graduation with their bachelor's degree. even outside of work, humans do not typically solve problems in a vacuum or in isolation because problems in adulthood tend to involve other people. there are, of course, exceptions, but to prepare students for employment and adult life, providing opportunities to solve problems with others through student teams helps students practice skills that they can transfer to other settings. the reason i use microsoft teams is that it is a powerful, collaborative project management platform that brings team members together virtually to solve a given problem or problems. i have become firmer in my commitment to integrating microsoft teams into my current and future course structure given the impact of covid on remote/virtual work. in-person meetings are less common and, in some instances, nonexistent in the workplace, and many companies use microsoft teams as their platform. providing students practice in this space before they launch into their careers is a practical way for me to give them a boost. and finally, i not only simulate authentic problem solving but also assess students authentically with the use of a technological tool. using microsoft teams is more efficient for (1) instructor-student interaction and engagement (chickering & gamson, 1987; kuh, 1995; kuh et al., 2006), (2) simulating management in the work world, (3) assessing and grading, and (4) "supporting collaborative knowledge building" (buchal & songsore, 2019).
everything i need to see of a team's work is in a microsoft teams private channel, and how they use their channel is their choice. choice contributes to authenticity and relevancy. finally, just as a manager would provide feedback to a team that is providing documentation and deliverables in a microsoft teams space, i do the same as the professor by providing formative and summative feedback.

use of private channels and deliverables

at the start of the semester, i form teams that work together weekly throughout the semester. i create a private channel for each team in microsoft teams. only the team members and i can see their work. the grade for a required meeting depends on each team member's attendance, the recording, and the deliverables posted in the private channel. the recording is a way for me to check attendance and provide formative feedback to the team, and sometimes to individual team members, related to communication, preparation for leading a meeting, and meeting process fundamentals. the weekly deliverables are building blocks, scaffolded steps and assignments, for the final project. these weekly deliverables are complete/incomplete checkpoints, as we would expect in the workplace. if the deliverable is not in the channel, there is a "low-stakes" point deduction. deducting points emphasizes the importance of paying attention to what is required and of taking notes during a meeting. teams that lose points (approximately twenty percent of teams at the first and second meetings) do so only that one time. checking student teams' work in microsoft teams is more authentic than checking lms assignments because it replicates what a business manager would do. of course, a manager would not deduct points, but we have a layer of academics that we must transpose onto the authentic space and assessment.

authenticity of requiring recordings

one may wonder how authentic it is to have teams record a meeting. post-covid, recording meetings is commonplace. if a teammate must miss a meeting, the recording can be viewed. this situation relates to authentic process. i use it for authentic formative assessment too. i want teams to collaborate and have a dedicated leader who takes the team through an agenda so they can stay on task, operate efficiently, and deliver what is expected. the only way to find out whether that is happening is for me to observe the meetings or participate in them, which is not reasonable in real time; the recordings make it possible. if the team does not have a deliverable for the week, or i hear from teammates that they are confused about what to do for the week, it is likely because the leader did not lead well with an agenda and the notes from meeting with me. when i observe this situation, i ask the team leader to meet with me, and i provide formative feedback. this is not unlike a manager having to meet with an employee when the manager has "heard" that the employee is not doing what is expected. a manager might sit in on a team meeting in that case, which is like viewing a recorded meeting. the manager would then meet with the employee to provide some formative feedback. recordings can be made in other ways, but using microsoft teams combines process and assessment because i also use teams chat (with video) to meet with students. microsoft teams is the one-stop shop, much like in the workplace. absences from team meetings are easy to check from a recording in a private channel (figure 3).

figure 3. checking attendance from recording.
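to make the weekly checkpoint concrete, the complete/incomplete logic described above can be expressed in a few lines. the following is a minimal sketch in python; the roster, names, and point values are hypothetical placeholders, not my actual gradebook.

    # a minimal sketch of the weekly complete/incomplete checkpoint.
    # roster, names, and point values are hypothetical placeholders.
    ROSTER = {"team 1": {"ana", "ben", "chen", "dee"}}

    def meeting_grade(team, attendees, recording_posted, deliverable_posted,
                      full_credit=10, deduction=2):
        """score one required meeting and report absentees noted from the recording."""
        absentees = ROSTER[team] - set(attendees)
        points = full_credit
        if not (recording_posted and deliverable_posted):
            points -= deduction  # the "low-stakes" deduction described above
        return points, sorted(absentees)

    # example: one member absent, recording and deliverable both posted
    print(meeting_grade("team 1", ["ana", "ben", "chen"], True, True))
    # -> (10, ['dee'])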
since the learning outcomes are related to working effectively with others, attendance at all team meetings is expected. of course, as in real life, students sometimes must miss a meeting. when meetings are missed, i coach students on how to make up for the miss. coaching is a form of formative assessment. absences early in the semester are handled differently from ones later in the semester in terms of how i coach, and typically a student who misses a team meeting misses only that one. when students miss later in the semester, they understand that they need to contribute their prep work prior to the meeting. asking teammates what they missed, which is a typical response from an absent student, is not acceptable collaboration.

what is being assessed and evidence of impact

in addition to the weekly deliverables being assessed, the final project is delivered both in the private channel, as part of the team's semester archive of work, and in public channels that all students can visit. the purpose of the public channels is for students to visit channels of interest to further their own learning. the final project is a culmination of research that students have done individually, an informational interview with a professional or a coaching appointment with a career coach, and the application of emotional intelligence content that they have worked with individually and as a team throughout the semester. the final project is created using adobe express, which produces a shareable link. the link is provided in the microsoft teams channels for me to grade and for classmates to learn from. everything that students do in microsoft teams simulates authentic process. my grading of the final project process simulates authentic assessment. there are several aspects of the final deliverable that i check and assign points toward the required meeting. students must post their project link in their private channel and the appropriate public channel. they must record their virtual meeting, and they post two documents to their private channel's files space: a rubric checklist and an after-action review. as their "manager," i am looking for these items. if i do not see them, i do not take off points at this stage; rather, i reach out to the team leader and ask about the missing item. i know they have done the work, and while i ideally expect them to submit everything where i expect it, i understand that real life is messy and not concrete, and in real life we do not have a rubric to follow. the action i describe is what i would do in real life. for example, if a real-life deliverable were missing an aspect at the deadline, the team would not get a zero for that portion; they would have a chance to still deliver, and i would not set a deadline without a buffer for mistakes to be fixed. the process i take students through using microsoft teams also intentionally builds change agility, the need to check email, and ambiguity into the course. these are skills that recruiters say are areas of improvement for students, and aspects that will set them apart from other programs/schools. when building a course with intentional and purposeful ambiguity and adaptability to help students practice real life, i too need to allow for those aspects in that authentic manner. ultimately, teamwork tasks are better, and learning is more authentic.
one student wrote the following related to teamwork and learning:

though we are technically an asynchronous class, you've provided clear instructions for each team meeting and have given a lot of suggestions regarding team working. without your instructions, i would never have leaded an entire team meeting or talked with ucs staff about their perspectives of the career. we've spent weeks in class, building team relationships, doing research about industry facts, as well as conducting evaluations to know more about our own emotional intelligence level. i updated my linkedin profile based on the evaluation results, which allowed me to attract 11 profile views and 17 search appearances. provided with these facts, i'm confident that i can find an internship for next summer.

this feedback is significant because the student points out teamwork and individual tasks that she never would have done on her own; yet the activities she describes are learning activities that students could be expected to do outside of class: meet with a career coach (ucs staff), research an industry of career interest, and lead an entire team meeting. these are real-life, authentic activities. while these individual activities could be completed without microsoft teams, they were integrated with team activities and the final team project, and the emotional intelligence work the student describes came to life in team dynamics. another example of the impact of using microsoft teams with required, recorded meetings in a jumbo (320+ students), asynchronous, online class is that students get to know teammates well and form strong bonds, which presented me with a dilemma for the final required meeting. a team met in person for the finalizing piece of their project. initially, my reaction was, "oh no, they have to meet in microsoft teams." after that momentary reaction, i realized that their meeting in person was a positive, authentic sign of "adjourning" (tuckman, 1965). for example, it serves as a celebration and a time to figure out where to go from here. the team took a picture working together and submitted that rather than the recording. i decided that i needed to be ok with their decision because when the team's work concludes and it is time for the team to dissolve, there can be feelings of mourning that they need to resolve. meeting in person can facilitate that process. in real life, as a manager, if a team that typically worked virtually decided to meet in person for the polishing stage of a project, i certainly would accept that work.

how using microsoft teams in an academic setting works

first, there are three assumptions i have made in deciding to use microsoft teams for authentic teaming and assessment: (1) students will be working in a team environment likely involving virtual/remote work in their first jobs out of college, (2) their work environment will be a healthy culture, and (3) workforce teams are using microsoft teams or a similar project management technology. i begin this section with an introduction to the look of microsoft teams. i believe it is important to see the platform before discussing the function of microsoft teams as a technology for authentic assessment. our institution has a microsoft teams for education license (there are other industry-specific versions as well) and launched a microsoft teams classes pilot program that i signed up for in fall 2022.
this pilot integration automatically adds all class members to the microsoft teams site and continuously synchronizes with the lms course roster when students drop or add the course. prior to this year, i had to create a team manually. regardless of which way the course is created, the power behind microsoft teams related to authentic assessment is using a private channel (figure 4) for each student team in the course. private channels have a lock next to their name, allowing only those members and the team owner/professor to access the channel. within the channel, teams meet virtually and record the meeting for their archival purposes (figure 5), in case a team member cannot attend. recording also facilitates grading/assessment, as previously discussed. students choose how to collaborate within their channel, creating folders and meaningful file names, to progressively work on their final project throughout the semester. all their work is visible and archived in their channel, which allows for more effective collaboration and storage (figure 6). having all work in one place, the working channel, shows the progression of their work throughout the semester and facilitates assessment of their teamwork and recorded team meetings/deliverables.

figure 4. private channels.
figure 5. archive of recorded meetings. for reference and for anyone who missed a meeting.
figure 6. collaboration and storage. create and archive all documents.

to illustrate the function of using microsoft teams, i will use a final project for the course to show how microsoft teams supports authentic assessment while engaging students in a real-world technology to add to their toolkit upon graduation. assessing students' teamwork and individual contributions to the work takes on a level of authenticity that is missing with a traditional lms. first, i want to note that communicating with individual students via chat in microsoft teams is the most authentic communication i have experienced in over twenty years of teaching at the college level. it replicates how colleagues in the workplace communicate for quick questions or to initiate a longer call/conversation. chats and calls can both occur within microsoft teams, again reinforcing the one-stop-shop versatility of microsoft teams. chat messages/calls also provide a more efficient means of communication with students that simulates the real-time conversation we would experience in the workplace. of course, we cannot always immediately answer a student question, but the ease of access and use of microsoft teams as a desktop, tablet, or phone app is unequivocally the most convenient and authentic way of interacting with students (figure 7).

figure 7. real-time instant messaging.

i have found students appreciate the just-in-time aspect of communicating via microsoft teams. using chat improves faculty-student interaction because it is more intimate than lms messaging or organizational email, faculty appear (and likely are) more available, and such a feature taps into students' communication expectations and informal learning with mobile devices (gikas & grant, 2013, p. 19). chatting, as described above, is fundamental to using microsoft teams for authentic assessment. there is a "filter by name" feature where you can search for students and see your chat history, which may include file sharing and a record of calls/meetings. this record of student interaction can aid in assessment.
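for instructors without the roster-syncing pilot, the per-team private channels can also be created programmatically rather than by hand. the following is a minimal sketch, not my actual setup, assuming access to the microsoft graph "create channel" endpoint with an oauth token that has channel-creation permission; the team id, channel name, and user ids are hypothetical placeholders.

    # a minimal sketch: create one private channel per student team via microsoft graph.
    # all ids and the token are hypothetical placeholders.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "..."  # oauth access token, obtained out of band

    def create_private_channel(team_id, name, owner_id, member_ids):
        """create a private channel with the professor as owner and students as members."""
        def member(user_id, roles):
            return {
                "@odata.type": "#microsoft.graph.aadUserConversationMember",
                "user@odata.bind": f"{GRAPH}/users('{user_id}')",
                "roles": roles,
            }
        body = {
            "displayName": name,
            "membershipType": "private",
            "members": [member(owner_id, ["owner"])] + [member(u, []) for u in member_ids],
        }
        resp = requests.post(f"{GRAPH}/teams/{team_id}/channels",
                             headers={"Authorization": f"Bearer {TOKEN}"}, json=body)
        resp.raise_for_status()
        return resp.json()

    # example (all ids are placeholders):
    # create_private_channel("TEAM-ID", "team 1", "PROFESSOR-ID", ["STUDENT-1", "STUDENT-2"])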
now, i turn toward students' use of the microsoft teams space via private channels as they work throughout the semester building toward the final project. foremost in this technology supporting authentic assessment is the nature of the project being built in stages, essentially from the first day of class, like the first day in a new job. students need adequate acclimation to the microsoft teams space and to the notion that any work done individually or collaboratively toward the team project should be stored in the team's private channel. just like in the workplace, everything students create will be in the microsoft teams space, including meeting agendas, notes/minutes, drafts, individual brainstorming used in team meetings, resources from research, meeting recordings, mini-deliverables, and the final deliverable. real-world work teams that use microsoft teams are not using box, dropbox, evernote, and/or google to collaborate. they are using microsoft teams and may add "apps" such as miro or decisions, for example, to help their workflow, but whatever they use is created within or added to their microsoft teams space so that all team members have access to read, edit, and truly collaborate. foundational to using microsoft teams as a technology supporting authentic assessment is requiring student teams to meet regularly and record their meetings. this meeting time is scheduled collaborative time to work toward the final project deliverable. the recordings aid in formative and summative assessment and serve members when a team member misses a meeting, as occurs in real life. if students are not required to meet as a team, they are likely to divide and conquer and/or not even meet. they may think they are collaborating and functioning as a team by throwing their work into a google doc, for example, but they are not learning to work effectively with others. since the structure of my course is based on a healthy team culture and functional teaming, i use the first two weeks to acclimate students to their team members, the microsoft teams process, and the final deliverable structure with its scaffolded stages. before the semester, i create private channels and assign members manually to their team channel. in week 1, they are ready to get to know each other. this course is an online course, so students meet virtually, with the exception of week 2, when they ideally meet in person to introduce themselves and take a group picture to post in their private channel for me to see and give credit for completion. often, they have a member who cannot meet in person, so they connect with that person via video in their microsoft teams private channel; the team picture includes the virtual team member. the point of the exercise is to create time and space for introductions, an application of the forming stage of team development (tuckman, 1965). students are experiencing team development theory to create a healthy team culture rather than just hearing, reading, and learning about the theory. after a couple of weeks of using microsoft teams and getting used to the environment, teams are ready to begin required meetings within their private channel. storming is a stage of team development in which leaders emerge and there can be conflict and power struggles.
it is a normal stage, as is conflict on a team, and it is often the stage where student teams break down and become dysfunctional. some team members dominate, some stop sharing their thoughts and go along with the dominance, and some may check out entirely, leaving the team with a member who never contributes. to help teams through any storming issues, the first meeting focuses on psychological safety: team members discuss their best and worst team experiences. a common denominator in bad experiences is team members feeling shut down by a dominant member. having the team share experiences and nail down how to be better teammates allows an informal team contract to form. that informal contract is also aided by the structure in which each team member leads one meeting during the semester, so all have a chance to practice setting up and facilitating an effective virtual meeting. students develop leadership skills by ensuring all voices are heard and by making all accountable for contributing through prep work done before the meeting. these skills and tasks are easily observed and assessed by me, their manager/professor. the first virtual team meeting recorded in microsoft teams, while not typically contentious, is a storming stage because students do not know each other well and are feeling things out: their new teammates, their deliverable expectations, microsoft teams, what type of feedback i will give, and so on. the second meeting is still a storming-stage meeting, mostly because they are unfamiliar with microsoft teams, still do not know each other well, and do not always feel they know what they are doing toward the final project. ambiguity and choice can be freeing but often feel confining to students, which contributes to the storm, particularly if a team member expresses distrust of the process or tools, or dislike for the course. these experiences are authentic to the workplace as well. activities related to psychological safety at this meeting stage are helpful in moving them through storming. by the time the third meeting occurs, around week 5, norming is occurring, and students have a better handle on the course content, the expectations of the final deliverable, and the stages of assessment. keep in mind that each week, in my meeting with that week's team rep/leader, i am coaching them around expectations for facilitating meetings. i provide meeting structure for them so that they can focus on the people part of the meeting: facilitating conversation, checking in with teammates to make sure all who have something to say have said it, making sure all have shared their individual thoughts, and ensuring all have prepped before the meeting. performing happens in the fourth and fifth meetings, and the project deliverable comes together in the sixth meeting, which is sometimes an adjourning-stage meeting as well. some teams choose to meet in person because they have gotten along so well that meeting in person is a rewarding, finalizing event.

conclusion: microsoft teams to solve lms challenges related to authenticity

in this essay, i shared my journey from observing student team collaboration through groupme to teaching student team collaboration through microsoft teams.
i discovered that microsoft teams provides higher education with an opportunity for relevant workplace preparation of students in an authentic environment with authentic tasks, processes, and assessment deliverables. my three years of experimentation with microsoft teams in my teaching led me to question common student teamwork practices still being used in a post-covid virtual world. furthermore, i demonstrated how to use microsoft teams to move students from conceptual knowledge to applicable skills related to working effectively with others. instructors wanting to authentically prepare their students for teamwork in the work world will be drawn to using microsoft teams. finally, students will benefit from using microsoft teams because it is likely the interface they will use in their internships and jobs post-graduation. recruiters expect digital literacy, change agility, and adaptability skills, and using microsoft teams helps students meet those employer expectations. the most meaningful evidence of the impact of using microsoft teams comes from students sharing their recruiting experiences. students articulate in employment interviews how important it is to learn how to collaborate and "be in charge of" organizing a meeting and leading their team through a process of organizing, communicating, and providing space for teammates to share and voice opinions toward the common deliverable goals. success in employment interviews is the true authentic form of summative assessment.

references

brown, p. c., roediger, h. l., iii, & mcdaniel, m. a. (2014). make it stick: the science of successful learning. belknap press.
buchal, r., & songsore, e. (2019, june 9-12). using microsoft teams to support collaborative knowledge building in the context of sustainability assessment [paper presentation]. canadian engineering education association conference (ceea-aceg19), university of ottawa, on, canada.
chickering, a. w., & gamson, z. f. (1987). seven principles for good practice in undergraduate education. aahe bulletin, 3-7.
edmondson, a. c. (2012). teaming: how organizations learn, innovate, and compete in the knowledge economy. jossey-bass.
endicott, s. (2022, january 26). microsoft teams now has more than 270 million monthly active users. windows central. https://www.windowscentral.com/microsoft-teams-now-has-more-270-million-monthly-active-users
evans, n. (2012). students' perceptions of meaningfulness in first year experience courses: a case study [doctoral dissertation, ball state university]. proquest llc.
gagne, r. m. (1985). the conditions of learning and theory of instruction. wadsworth publishing.
georgia center for assessment. (2019, august 31). validity and reliability study for the blueeq assessment. college of education, university of georgia.
gikas, j., & grant, m. m. (2013). mobile computing devices in higher education: student perspectives on learning with cellphones, smartphones & social media. internet and higher education, 19, 18-26. https://doi.org/10.1016/j.iheduc.2013.06.002
goleman, d. (2020). emotional intelligence: why it can matter more than iq. bloomsbury publishing. (original work published 1995)
heflin, k., & meganck, s. (2017). from divide and conquer to dynamic teamwork: a new approach to teaching public relations campaigns. journal of public relations education, 3(1), 50-58.
https://aejmc.us/jpre/2017/05/24/from-divide-and-conquer-to-dynamic-teamwork-a-new-approach-to-teaching-public-relations-campaigns/
kilmann, r. h., & thomas, k. w. (1977). developing a forced-choice measure of conflict-handling behavior: the "mode" instrument. educational and psychological measurement, 37, 309-325. https://doi.org/10.1177/001316447703700204
kuh, g. d. (1995). the other curriculum: out-of-class experiences associated with student learning and personal development. the journal of higher education, 66(2), 123-155. https://doi.org/10.2307/2943909
kuh, g. d., kinzie, j., buckley, j. a., bridges, b. k., & hayek, j. c. (2006, november 1-3). what matters to student success: a review of the literature [commissioned report for the national symposium on postsecondary student success: spearheading a dialog on student success]. washington, d.c.
lansmann, s., rigby, m., & schallenmuller, s. (2019). teams everywhere – investigating the impact of microsoft teams on knowledge worker [unpublished manuscript, research in progress]. university of münster.
ma, x., azemi, a., & beuchler, d. (2021, october 13-16). integrating microsoft teams to promote active learning in online lecture and lab courses [paper presentation]. ieee frontiers in education conference (fie), lincoln, ne, united states. https://doi.org/10.1109/fie49875.2021.9637398
martin, l., & tapp, d. (2019). teaching with teams: an introduction to teaching an undergraduate law module using microsoft teams. innovative practice in higher education, 3(3), 58-66.
oakley, b., brent, r., felder, r. m., & elhajj, i. (2004). turning student groups into effective teams. journal of student centered learning, 2(1), 9-34.
tuckman, b. w. (1965). developmental sequence in small groups. psychological bulletin, 63(6), 384-399. https://doi.org/10.1037/h0022100
vygotsky, l. s. (1978). mind in society: the development of higher psychological processes. cambridge, ma: harvard university press.
wright, l. (2019, april 23). quantifying the value of collaboration with microsoft teams. microsoft 365 blog. https://www.microsoft.com/en-us/microsoft-365/blog/2019/04/23/quantifying-value-collaboration-microsoft-teams/

journal of teaching and learning with technology, vol. 2, no. 2, december 2013, pp. 43-59.

using iannotate to enhance feedback on written work

kristi upson-saia1 and suzanne scott2

abstract: this paper discusses an iannotate feedback model used by the authors to comment on written work in first-year writing courses. we show that the use of iannotate, like other emergent technologies, mitigated a number of issues that regularly undermine high-quality feedback (such as the time it takes for instructors to write detailed comments and the challenge for students to read illegible handwriting or to keep track of hard copies of their papers). yet, we contend that our feedback model goes beyond these practical benefits and, more importantly, enhances student learning.
specifically, we argue that it aligns instructor and student standards, elucidates for students the different types of comments instructors make (and clarifies that they ought to prioritize some comments over others), helps students and instructors identify recurrences and patterns of comments (thus also helping students and instructors diagnose general writing strengths and weaknesses), and conditions students to engage with feedback not only as a justification of their grade, but as a launching point from which they can develop as thinkers and writers. the success of this feedback model is partly attributable to the features of iannotate and partly attributable to the classroom complements we designed as part of the feedback model.

keywords: feedback; assessment; e-assessment; technology; technopedagogy; e-learning tools; iannotate; visual learning; writing instruction

i. introduction.

if you ask instructors what the most dreaded or onerous part of teaching is, "grading papers" is the response that nearly always tops the list. instructors complain that providing extensive feedback takes time, time that is in short supply for those who are teaching a full load, who have an active research agenda, and who are expected to perform service to the institution. when they find out that their feedback has gone unread by students,3 many instructors become embittered and exchange careful, detailed remarks for simpler notes or just grades (wojtas, 1998; higgins, hartley, & skelton, 2001). in this paper, we propose a feedback model that attempts to alleviate some of the issues of grading commonly registered by instructors. specifically, we aimed to create a feedback model that students understand to be a valuable component of their learning process and that instructors perceive to be worth the time and effort they expend. after a brief overview of pedagogical scholarship on feedback (including the recent introduction of emergent technologies to enhance feedback), we describe our use of iannotate in four writing courses at occidental college from 2011-2012, and we explain how our use of the application addresses persistent complaints from instructors and students, as well as how it aligns with the best practices detailed in scholarship on feedback.

1 associate professor, religious studies, and director for teaching excellence, occidental college, upsonsaia@oxy.edu
2 assistant professor, film and media studies, department of english, arizona state university, suzannelynscott@gmail.com
3 duncan (2007) argues that students tend to read instructors' comments only if the grade they receive is misaligned with the grade they expect to have earned, while wojtas (1998) found that students do not read the comments "if they disliked the grade."

ii. pedagogical scholarship on feedback.

there is no shortage of scholarly literature on feedback. some scholarship focuses on how instructors can most effectively structure their feedback, while other scholarship focuses on how to motivate students to engage feedback in a meaningful way. with regard to the former, consensus has emerged around the characteristics of high-quality feedback:

1) it is seamlessly aligned with the articulated goals and standards of the assignment (nicol & macfarlane-dick, 2006; duncan, 2007; hounsell et al., 2008; sadler, 2010).
2) it focuses on the most important learning objectives, leaving aside lower-order concerns (black & wiliam, 1998; mcneill, gosper, & xu, 2012).

3) it is returned in a timely manner while the material is still fresh in students' minds (cowan, 2003; hepplestone et al., 2011).

4) there is a required mechanism through which students reflect on and respond to the feedback, increasing the likelihood that students will incorporate suggestions in future assignments (carless, 2006; hepplestone et al., 2011; carless et al., 2011).

while there is broad agreement on the features of high-quality feedback, scholars acknowledge that this sort of feedback is exceedingly time- and labor-intensive for instructors. moreover, scholars contend that there are barriers to students' understanding or apprehension of even high-quality feedback. first, instructors and students hold different perceptions about the purpose and function of feedback. while students understand comments to be merely a justification of the grade they earned, instructors also understand their feedback to be another opportunity in which to (re)teach the material or to offer advice on how students can develop their skills as logicians or writers. the misalignment of feedback's function, dubbed "feedback" versus "feedforward," leads to students' misuse or lack of use of high-quality feedback (bjorkman, 1972; mutch, 2003; rust, o'donovan, & price, 2005; nesbit & burton, 2006; weaver, 2006; lizzio & wilson, 2008; poulos & mahony, 2008; irons, 2008; burke, 2009; draper, 2009; walker, 2009; sadler, 2010; price et al., 2010). second, it is a challenge for students to interpret instructors' comments because we offer different types of comments. for instance, we write critiques of students' ideas or writing skills alongside conversational responses to their ideas and suggestions for further reading or research (mutch, 2003). we expect students to engage differently with different types of comments, yet we rarely make these expectations explicit, nor do we train students in how to properly engage each type of comment. students, thus, tend to treat all comments the same: as criticisms of their work that justify the grade they were awarded. additionally, faculty include a range of comments that they would hierarchize: lower-order comments (e.g., grammatical problems, flawed prose) versus higher-order comments (e.g., problems with argumentation, reasoning, and marshaling evidence). yet, again, we rarely explain to students how to rank the importance of different comments, and thus, to our disappointment, students tend to focus their revision work on less important, but more easily fixable, issues, ignoring the bigger problems (mutch, 2003; weaver, 2006). third, instructors struggle to find the balance between providing highly individualized comments, careful instructions for revision, and advice for future development (i.e., enough feedback so that students have a clear understanding of what is going wrong), while also avoiding so much feedback that students are left overwhelmed and paralyzed, not knowing where to start addressing the whirlwind of comments (monroe, 2002; higgins, hartley, & skelton, 2002; nicol & macfarlane-dick, 2006; miller, linn, & gronlund, 2012).
within the past several years, a new set of scholarship on feedback using emergent technologies has taken steps toward addressing some of the obstacles to high-quality feedback.4 some studies have argued that new technologies offer a more efficient workflow that reduces the amount of time and effort expended by instructors. as heinrich et al. (2009) put it:

...e-tools can make a real impact on efficiency: providing documents, easily accessible to all involved, anytime and anyplace; accepting assignment submissions, managing deadlines, recording submission details, dealing with safe and secure storage; returning commented-on assignments and marks to students; storing and if necessary exporting class lists of marks. using e-tools for these tasks frees up time that can be used for focusing on quality feedback.

heinrich et al. (2009) agree that instructors have found learning management systems (lmss) to be efficient ways to manage the submission of student work, since the lms automatically records late work and ensures that student work remains secure. other studies propose that instructors compile a bank of commonly used comments that they can simply cut, paste, and tailor to each individual paper, saving much of the time it would ordinarily take to write the same comments again and again. this time-saving measure enabled them to provide more feedback to students in large courses and to spend their time tailoring their stock remarks to individual students' work (brown, bull, & race, 1999; heinrich, 2007; irons, 2008; heinrich et al., 2009). instructors are also able to return work in a timelier manner (not needing to wait until class to hand-deliver hard copies of their feedback in person); this timeliness increases the probability that students will read and value our comments (denton, 2001; cowan, 2003; hepplestone et al., 2011). electronic feedback is also more legible to students, which means students are no longer required to ask us during office hours to decipher illegible comments; e-comments decrease the chance that they would simply ignore remarks they could not read on their own (denton, 2001; denton et al., 2008). heinrich et al. (2009) report that using technologies, such as the track changes feature in microsoft word, makes it possible to embed links to additional readings or resources into the comments, directing students' further engagement with the material. these sorts of comments shift the culture of feedback from a means to simply justify the grade to a dialogic engagement between student and instructor that is presumed to continue beyond the individual paper or assignment (e.g., irons, 2008; carless, 2006; price et al., 2010; carless et al., 2011). finally, electronically submitted and commented-upon work facilitates better assessment of student progress over time. when working with hard copies, an instructor would have to make copies of handwritten comments and create an easily navigable file system for those hard copies. electronic papers with embedded electronic comments can be stored and catalogued more easily so that instructors can track the progress of students' work over the course of the semester (heinrich et al., 2009).5

4 the new technologies discussed in this literature include general software programs (e.g., microsoft word track changes, google docs), learning management systems (e.g., moodle, blackboard), and specialized applications or assessment tools (markin, turnitin, grademark, re:mark, marktool, adobe, and iannotate).
5 moreover, instructors interested in assessing their own assessment practices have at their disposal an easily navigable set of papers with their comments (heinrich et al., 2009).

this paper adds to the scholarly conversation about the pedagogical benefits of emergent technologies. we show that the use of iannotate, like other emergent technologies, enhances the efficiency of instructors' workflow, reduces the time it takes to return papers, and provides students with more legible feedback. yet, we contend that our feedback model goes beyond these practical benefits and, more importantly, enhances student learning. specifically, we argue that our feedback model aligns instructor and student standards, elucidates for students the types of comments we make (and helps them prioritize some comments over others), helps students and instructors identify recurrences and patterns of comments (thus also helping students and instructors diagnose general writing strengths and weaknesses), and conditions students to engage with feedback not only as a justification of their grade, but as a launching point from which they can develop as thinkers and writers. in what follows, we show that the success of this feedback model is partly attributable to the features of iannotate and partly attributable to the classroom complements we designed around the feedback model.

iii. approach.

beginning in fall 2011, occidental college's center for digital learning + research and the center for teaching excellence co-sponsored several cohorts of faculty learning communities that explored the pedagogical uses of the ipad. at that time, the authors of this paper began to use iannotate to grade student writing in four first-year writing-intensive seminars.6 although iannotate has received much praise in blog posts, such as profhacker on the chronicle of higher education website, for being an easy, portable, paperless way to annotate and share documents, not enough attention, we contend, has been paid to the pedagogical benefits of the application.7 after a brief description of the features of iannotate and how we used the tool in our writing courses, we will discuss in detail how our feedback model enhanced student learning.

6 while our focus has been exclusively on using iannotate to write comments on student papers, the application has been widely adopted in academic ipad pilot programs at stanford university, massachusetts institute of technology, and the university of michigan, among others. the application's uses in these academic contexts range from annotating course readings, to taking notes on class powerpoint presentations, to sharing documents and working collaboratively.
7 see, for example, jones (2010) and sample (2011).
8 at the time of publication, iannotate retailed for $9.99.
9 iannotate has many default stamps, comments such as "excellent" and "good job," etc., as well as symbols such as check marks, smiley faces, and exclamation points. we found these comments to be far too vague to be useful and thus we quickly created our own stamps.

iannotate pdf is a productivity application from branchfire that is available on the ipad and android tablets.8 iannotate includes a palette of tools to annotate a document, including the ability to highlight, underline or strike through, type or write notes (in the margins or in collapsible balloons), and bookmark. for commonly used annotations, iannotate enables users to create stamps (text or symbols) that can be imprinted on the document with a single click from the toolbar.9 we found the stamps feature to be exceedingly useful when commenting on student writing. since we tend to evaluate papers on the same criteria (the criteria laid out in our grading rubrics), we tend to write the same sorts of comments on every paper we grade. the stamps feature of iannotate, therefore, enabled us to save an inordinate amount of time writing marginal comments.
we simply created stamps for our commonly used comments, such as: "you need to make your reasoning more explicit," "good, careful reasoning," "nice use of evidence," "you need to interpret/analyze your evidence," "clarify the point of this paragraph," "nice guidepost," "awkward prose," "citation needed," etc.10 after the initial time and labor it took to set up the stamp system, the process of inserting a stock comment with a single click made the feedback process much faster. in this way, the stamps mimic the "comment bank" suggested by irons (2008) and heinrich et al. (2009). yet, because the colors of the stamps are adjustable, we sorted our individual comments into the categories we used on our rubric—argument, structure, use of evidence, writing style/prose, and mechanics—and then assigned a different color to each category of comments. for instance, comments related to argumentation were colored blue, comments related to evidence were colored green, comments related to organization and structure were colored orange, and so on (see figure 1). in this way, students could easily see how our comments mapped onto the rubric and onto broader areas of thinking and writing.

figure 1. example of student paper with instructor comments.

further, we used other features of iannotate to demarcate different types of comments. as noted above, we used our custom stamps to mark strengths and weaknesses in terms of argument and writing. still more, we used the checkmark stamp (✓) to acknowledge a good point and collapsible balloons (that included lengthier comments) to converse with students' ideas and arguments (see figure 2).

figure 2. example of student paper with additional iannotate markup.

in addition to these types of annotation in the margins of the paper, we included comments at the end of the paper. here we interpreted the comments above. we pointed students back to the check marks that indicated where they succeeded, and we elaborated on why these were particularly successful moments. we directed them to remarks made in collapsible balloons and tied together our engagement with their ideas in one synthetic remark. finally, we identified the strengths and weaknesses of their argument and writing by drawing their attention to visual patterns of comments: recurring stamps or recurring colors.

6 while our focus has been exclusively on using iannotate to write comments on student papers, the application has been widely adopted in academic ipad pilot programs at stanford university, massachusetts institute of technology, and the university of michigan, among others. the application's uses in these academic contexts range from annotating course readings, to taking notes on class powerpoint presentations, to sharing documents and working collaboratively.
7 see, for example, jones (2010) and sample (2011).
8 at the time of publication, iannotate retailed for $9.99.
9 iannotate has many default stamps, comments such as "excellent" and "good job," etc., as well as symbols such as check marks, smiley faces, and exclamation points. we found these comments to be far too vague to be useful and thus we quickly created our own stamps.
10 for a catalog of our custom stamps, see appendix a.
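to make the taxonomy concrete, here is a minimal sketch of how such a color-coded stamp bank might be represented in code. the categories, colors, and stamp texts are drawn from the rubric and appendix a described in this paper, but the data structure and the summarize helper are our illustration, not a feature of iannotate.

```python
# hypothetical representation of a color-coded stamp bank; the categories,
# colors, and comments come from the paper's rubric and appendix a.
STAMP_BANK = {
    "argumentation": {
        "color": "blue",
        "stamps": ["good, careful reasoning",
                   "you need to make your reasoning more explicit",
                   "unpack this claim"],
    },
    "evidence": {
        "color": "green",
        "stamps": ["nice use of evidence",
                   "you need to interpret/analyze your evidence"],
    },
    "structure": {
        "color": "orange",
        "stamps": ["weak transition",
                   "clarify the point of this paragraph",
                   "nice guidepost"],
    },
}

def summarize(applied_stamps):
    """count applied stamps by rubric category, so that recurring colors
    (i.e., recurring writing issues) stand out in the summative comment."""
    counts = {}
    for category, info in STAMP_BANK.items():
        n = sum(applied_stamps.count(s) for s in info["stamps"])
        if n:
            counts[category] = (info["color"], n)
    return counts

print(summarize(["weak transition", "weak transition", "unpack this claim"]))
# -> {'argumentation': ('blue', 1), 'structure': ('orange', 2)}
```

a tally like this is roughly the information the recurring colors on the page gave students at a glance.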
for instance, if their paper was littered with positive blue comments and negative orange comments, we could praise them for their exceptional argumentation and encourage them to work on their organization. in this way, our summative remarks at the end of the paper provided students a key to deciphering the marginalia above.

iv. teaching method.

although our summative remarks gave students a road map to decipher our comments on their papers, we found that we also needed to spend time in class talking about our feedback method. that is, this feedback model needed to be carefully integrated into a writing course in such a way that students: 1) understood the goals of the method and how those goals aligned with the standards and objectives of the assignment/course; 2) understood how to interpret and use our feedback; and 3) were required to reflect on and incorporate feedback into subsequent writing assignments. in this section, we will discuss these classroom complements to our written feedback.

a. aligning student and instructor expectations.

as scholars have long demonstrated, for students to benefit from feedback, they must understand the standards on which they are being evaluated, the feedback must make clear how their performance measures up to those standards, and the feedback must offer suggestions on how they can move steadily closer to achieving those standards on subsequent work (sadler, 1989; nicol & macfarlane-dick, 2006). to align our students' expectations with our own, in class we introduced the color-coded rubric (see figure 3) on which their work would be assessed and showed them a sample paper marked up using iannotate. we explained how sets of comments lined up with categories of the rubric, and we urged students to pay attention to the colors of our comments in order to discern broader writing strengths and weaknesses. we also used this in-class orientation to delineate between higher- and lower-order comments, again corresponding to the color-coded rubric (e.g., explaining that blue comments on argumentation are more significant than red comments on mechanics). we provided this orientation on the first day of class and again when we distributed and discussed the first assignment. moreover, on our course sites, we posted the rubric and a description of and key for our iannotate feedback model so students could reference these materials on their own time as well. once students had completed their first draft, we devoted several class periods to writing workshops pertaining to aspects of the rubric (e.g., one day each on use of evidence, structure and organization, thesis, prose, introductions, and conclusions). in each workshop, we circulated a handout that used language from our rubric that would show up later in our customized stamps. this synchronization between the writing instruction, rubric, and stamps created a consistent message to students about our standards, promoted transparency in how student work would be assessed, and conditioned students on how they should be evaluating their own work during the pre-writing and drafting phases and in peer review sessions.

b. navigating, interpreting, and using feedback.

as mentioned above, scholars have found a persistent misalignment between instructors' and students' perceptions of the purpose of feedback.
while students tend to read feedback only as a justification of their grade, instructors hope students learn—about course material or about their skills as thinkers and writers—from their comments. moreover, although instructors offer a variety of types of feedback (e.g., criticisms of specific ideas, conversational engagement with ideas, comments on broader writing skills, etc.), and although instructors expect students to engage differently with each type of comment, students have a hard time distinguishing these different kinds and levels of feedback. in short, students need to be trained to understand how we expect them to read and use our feedback.

figure 3. color-coded rubric.

after returning their first paper, we devoted class time to reminding them how to navigate our feedback: we explained the difference between stamps, checkmarks, and discussions in the collapsible balloons. to this we added a discussion of what we viewed to be the purpose and function of feedback and of how they ought to engage with each type of comment. we clarified that feedback served to justify the grade they received but was meant to do more than that. we explained that they should use some feedback—our stamps that, in aggregate, pointed out broad areas of writing strengths and weaknesses—to inform their reflection on their broader writing and revision process and to modify their current process of drafting and revising. further, we explained that they should use other types of feedback—our comments in collapsible balloons that engaged their ideas—as pointers to areas in which their ideas needed to be corrected or developed. this might mean that students need to review course material that they did not understand sufficiently or that we are encouraging them to continue to pursue an interesting line of inquiry in a subsequent paper. our aim was to reorient students to the range of ways they might use our feedback, thus maximizing the value of our comments for them as well as for us. while any instructor could teach students to use even traditional feedback in this way, we found that the palette of annotation tools in iannotate—namely, the ability to color-code the stamps and to vary the look of our comments—made it easier for us to visually represent these different kinds and types of comments and to coach students on how to engage differently with each.

c. reflection.

although instructors are regularly disappointed that students do not make good use of their careful feedback, recently several scholars have observed that students are seldom, if ever, required to engage with instructor feedback. these scholars urge instructors to require some sort of assignment in which students read and reflect on feedback given to them (weaver, 2006; hepplestone et al., 2011; carless, 2011). following this advice, we required students to reflect on and respond to our feedback in two ways. at the end of the semester (directly before they began work on the final paper), students had to compose a short written reflection on all of their papers to date in the course.
they were to compose a self-assessment that identified broad areas in which they did well and poorly, noted areas in which they had improved over the course of the semester, and devised strategies for continuing to work on areas in which they were persistently weak. then we met with students in one-on-one conferences to discuss their ideas for their final paper, as well as to talk about their research, writing, and revision process as it related to the strengths and weaknesses identified in their reflection.

v. findings.

because this was an informal and limited pilot program, our findings are grounded in (1) the instructors' assessment of students' development over the course of the semester; (2) students' written self-assessments;11 (3) anecdotal student feedback in one-on-one conferences; (4) student feedback collected in course evaluations;12 and (5) for one of the four courses, pre-semester and post-semester surveys.13 we have divided our findings into three subsections. first, we enumerate how our feedback model enhanced student learning. second, we discuss some of the more practical benefits of this feedback model for students and instructors. and finally, we present issues that arose and offer suggestions on how they might be resolved in future iterations of this feedback practice.

a. enhancing student learning.

this feedback model had several immediate benefits to student learning. for clarity, we have broken down these learning benefits in a way that most clearly delineates them; we recognize, though, that these are artificial categories. in practice, we saw many intersections between these categories. first, we found that, in comparison with prior students in our first-year writing courses, these students had a better understanding of our standards and expectations. because the stamp system was aligned with our grading rubric and with the writing workshops—in terms of verbatim language and color-coded categories—our standards were repeatedly reinforced and linked to visual cues.14 students' self-assessments demonstrated that they had absorbed our standards, as a vast majority of them used our own categories and language to discuss their primary strengths and weaknesses and to make a concrete plan for improvement.

11 with sixteen students enrolled in each class, we had a total of sixty-four course self-assessments.
12 with sixteen students enrolled in each class, we had a total of sixty-four course evaluations.
13 after teaching three courses using our feedback model, we gave two surveys to gather both quantitative and qualitative data about students' attitudes toward feedback in general and toward our feedback model specifically. the pre-semester survey was designed to assess students' exposure to and preferences for hand-written or electronic comments, to assess how students use feedback (if at all) in subsequent writing assignments, and to help us design the in-class framing of our feedback model. the post-semester survey was designed to collect student responses to our feedback model to help us identify issues and refine the model in subsequent courses. both surveys were optional, with thirteen of sixteen enrolled students completing both, and were composed of a mixture of multiple choice, ranked/scaled options, and open-ended elaborations or justifications of their responses.
for instance, one student remarked that the many green stamps that read "you need to interpret/analyze your evidence," "this evidence isn't relevant to the point you're making," and "insufficient evidence: add more/greater range to substantiate your point" visually clarified that the student had trouble marshaling evidence. the student wrote in her self-assessment, "i need to work on interpreting evidence to create a better dialogue between sources and my own ideas. when writing i will often pull in quotes i find last minute without really thinking about how well they substantiate the point i'm trying to make. in preparation for my term paper, i plan on writing out a detailed outline and mapping out each specific point/quote from sources that i want to use to make sure they're relevant and explicitly linked to my argument." another student noted that it was abundantly clear that he was not guiding his readers through the stages of his argument since "every paragraph in every paper has a 'you need a better transition here' stamp next to it!" although we did not instruct students to use our categories during peer-reviewing sessions, we regularly overheard them offering feedback to their peers that mimicked the categories and language of our rubrics.15 one student even began bringing her own set of colored pens to peer-review sessions to replicate the colored taxonomy of the rubric when writing comments on her classmates' papers. second, we found that our feedback model taught students (especially first-year students who were unfamiliar with college-level writing and feedback) how to read and rank their instructors' comments. students reported that they were able to understand that we offered different types of comments, each with distinct purposes, because the types were visually distinct in the margins of their papers.16 moreover, students understood the relative importance of our comments. because dissecting voluminous and uniform marginal comments is challenging for students, the color-coding visually distinguished higher-order concerns from lower-order ones. when students made reference to writing style or mechanical issues in their self-assessments, their language clearly conveyed an understanding that these were lower-order concerns. for example, one student wrote in her self-assessment: "as for silly spelling and grammar mistakes, this has always been a weakness because i do not put enough emphasis on the editing process. i think it will help me if i print out my essay, read it aloud a few times, and really go through it with a fine comb to avoid these silly mistakes." by contrast, they also clearly understood argument and structure to be most salient; one student wrote in her self-assessment:

14 somewhat unexpectedly, the effort to keep the rubrics, writing workshops, and stamps cleanly aligned forced us to be more focused and consistent.
15 it was also apparent that, in peer-reviewing sessions, students offered more pointed feedback. in past classes we both struggled to get students to be more hard-hitting and direct with their peers. we had chalked this up to their hesitance to criticize their classmates, but we have come to realize that some of their hesitation stemmed from the fact that they simply did not understand sufficiently the standards of assessment and thus were unable to marshal those standards in their evaluation of their peers' work.
"before this class i spent a lot of time editing the spelling and grammar. i now know i need to spend that time on more important things like my argument." this new ability to navigate comments has extended beyond our initial pilot program, with students reporting to us that, even after moving into courses that employ more traditional feedback, they are more easily able to parse and prioritize comments. third, students and the instructors were better able to identify patterns of writing strengths and weaknesses. in the past, when writing hand-written comments on papers, we did not flag every instance of a particular writing flaw. for instance, if a paper had weak transitions throughout, we would simply note the first instance and alert the student that this was a problem throughout (with a note that read something like "here and throughout" or "this is a pervasive problem"). with iannotate, however, the ease of the stamp feature allowed us to mark every instance. the repetition of stamps within a student's paper—and still more the repetition across multiple assignments—alerted students to look beyond any given instance or any given assignment to see more clearly the larger issues with their writing. one student, for example, remarked that he had never really paid attention to his transition sentences, or fully understood the impact they had on how the reader understood his (otherwise compelling and thoughtful) argument, until he saw a barrage of orange "weak transition" stamps appearing all over his work. here, the student not only identified a primary weakness but also gained a greater understanding of how one structural element impacted the strength of his paper overall. as instructors, we noted that students who routinely received the same stamped comments on their first few assignments seemed to resolve these issues more quickly than students in the past. taking the aforementioned student as an example, by the time he submitted his final paper outline, he was including rough transition sentences that he planned to refine in subsequent drafts. the ability to see patterns of writing strengths and weaknesses was helpful not only for the students but for the instructors as well. while consulting with students on an upcoming paper, we could glance quickly at the color-coded comments to be reminded of the areas in which students excelled and those in which they needed work. we found that this ability to track students' strengths, weaknesses, and progress very quickly saved an inordinate amount of time,17 and it made our conferences with students much more specific and productive. fourth, our feedback model allowed us to visualize the relationship between categories on our rubric, and thus between elements of writing. for example, a paragraph that was marked up with multiple comments in blue and green clearly expressed to students the connection between their presentation and analysis of evidence and the strength of their argument.

16 although our papers had the same number of marginal notes as prior papers, the systematization of the notes—and our explanation of the system—made it easier for students to navigate or, put differently, made it so that students were not overwhelmed (a common problem that plagues overly commented-upon papers; monroe, 2002; higgins, hartley, & skelton, 2002; nicol & macfarlane-dick, 2006; miller, linn, & gronlund, 2012).
by seeing these two writing elements represented in tandem, students perceived how they were integrated and interdependent.18 for example, in her self-assessment one student connected one of her strengths as a writer (identifying strong and appropriate evidence) with one of her weaknesses (analyzing and leveraging that evidence to substantiate her argument): "although i am able to choose evidence properly, i am weak at times at fully analyzing the evidence at the highest level of detail. at times i will make broad claims and fail to fully unpack these claims by analyzing more carefully my evidence, which is necessary to make a more thorough and persuasive argument in my paper."19 fifth, students began to understand and value feedback as more than merely justification of the grade. because we placed an emphasis (early and often) on how to most effectively read and rank comments with an eye toward refining their arguments and writing, our feedback model functioned to reshape students' attitudes toward feedback. several students who admitted to rarely revisiting, much less revising, their written work in prior courses reported that our feedback model helped them view comments not as punitive remarks to be consumed once and then forgotten, but as a multi-layered conversation about their ideas and about their development as critical thinkers and writers. other students, some of whose prior instructors had used microsoft word's track changes feature to comment on their work, remarked that they began to see comments as more than edits to be "resolved" without further reflection on broader writing issues that transcended the particular assignment.20

b. practical benefits.

the students responded positively to our feedback system not only because they learned about themselves as writers and were able to progress more quickly as writers, but also for more pragmatic reasons. they considered improved legibility and increased accessibility to be useful. many students admitted that, in the past, they simply did not read comments when the handwriting was illegible and that they regularly misplaced hard copies of graded papers. because iannotate obviated issues of legibility and made "losing" a paper an impossibility (even if the email containing the annotated pdf was deleted, another copy of their paper with full comments was just an email away), students had no legitimate excuse not to read their instructors' comments. in fact, even those students who claimed that our feedback model had not fundamentally changed the way they engaged with different types or kinds of comments noted that having all of their papers digitally accessible made them more likely to revisit their written work.

17 although most of the discussion about iannotate's benefits in terms of efficiency has centered on the time saved during grading, we found the time saved reviewing prior papers to be far weightier.
18 we found it easiest to discuss these sorts of interconnections with students one-on-one. some students, especially those with less preparation in writing, were focused on working on one or two writing issues and simply not ready to think about these more sophisticated interrelationships between elements of writing.
19 again, this is evidence of a student adopting the language used in the rubric and stamps: "unpack this claim."
20 on microsoft word creating the impression of "teacher as editor," see michael j. faris' blog post, "using iannotate to grade": http://blogs.tlt.psu.edu/projects/ipad/2010/10/using-iannotate-to-grade.html
further, iannotate streamlines instructors' grading workflow to maximize efficiency; the practical benefits are five-fold. first is the portability and extended battery life of the tablet (the device on which most instructors use iannotate). second is the easy, paperless submission and return of student work, using iannotate's built-in ability to sync with dropbox or its built-in email function. third, toolbars can be customized to include the instructor's most frequently used tools and stamps and easily adapted to any course and/or paper topic. fourth, integrating other free apps further facilitates the process (e.g., we used dragon dictation to dictate and transcribe the summative comments at the end of the paper; some devices, like the ipad 3, now offer direct dictation into iannotate). finally, as noted above, quick accessibility to color-coded stamps makes it faster and easier to track students' writing problems and progress.

c. issues and troubleshooting.

despite our overall satisfaction with our feedback model, we encountered three significant issues. first, some students had trouble remembering which colors corresponded to which category of the rubric when they did not have the rubric directly in front of them. when surveyed at the end of the semester, students suggested that we include a stamp at the top of every paper that could function as a key to the taxonomy. iannotate would also allow instructors to easily insert the full, color-coded rubric at the end of each paper. second, we encountered some technical difficulties. students noted that sometimes the colors were lost when they printed their annotated papers using campus printers whose default was black-and-white. instructors should stress that students need to read comments electronically or print them in color. another small group of students mentioned that, depending on the program they used to open the annotated pdf (e.g., adobe reader or preview, ibooks, iannotate, docstogo), some colors were more legible than others. before implementing this feedback model, instructors should investigate which colors are most legible in the programs available at their institution, and they should advise students to use those programs to read their annotations. finally, some students reported a lack of "personal touch" associated with the use of e-assessment tools. in our pre-semester survey, the vast majority of the respondents indicated that the majority of their written work in high school had been graded by hand (85%). on the survey several students remarked that, while they did not find any fundamental difference between handwritten and electronic comments in terms of content, they generally perceived handwritten comments to be more "personal" and claimed to "connect" with them more despite issues of illegibility.21 as more than one of these students acknowledged, however, their preference likely also stemmed from the fact that they were simply accustomed to handwritten comments.
yet this perception is not insignificant, as chang et al. (2012) discovered that students' perceptions of personable feedback are interconnected with their perceptions of quality feedback; in other words, students associate the care taken to hand-write comments with caring professors who offer higher-quality feedback, and they therefore take that feedback more seriously.22 one way instructors might temper these concerns is to create handwritten comments (rather than text stamps) in iannotate by using a stylus, though this might result in issues of illegibility, especially given complaints about the lack of precision of styli, and would obviate the practical benefit of saving the instructor time. alternatively, instructors might choose to use a new feature of the latest version of iannotate: audio comments. instructors can pepper the paper with audio comments of up to 60 seconds each. in addition to mitigating concerns about "impersonal" feedback, audio files might also create a more expressly dialogic form of feedback (and could stand in for the collapsible balloons as we used them).23

vi. conclusions.

we found it interesting that the students who responded most positively to our feedback model were the strongest and weakest writers in terms of the elements of writing emphasized on our rubric. on the one hand, students who entered the course with a strong grasp of writing fundamentals reported that this feedback model helped them pinpoint very nuanced aspects of their writing (within broader categories) that needed improvement. on the other hand, our weakest students, who frequently self-identified as visual learners, found the feedback model especially well suited to their learning style, enabling them to visualize their writing strengths and weaknesses. specifically, the color-coding enabled them to compartmentalize writing issues and to approach revisions more systematically, tackling one category at a time. so, in the end, we were surprised, yet pleased, to find that our feedback model addressed existing educational and learning inequities. in addition to speaking to students with differing educational backgrounds and learning styles, we believe that this feedback model could be productively applied across courses, disciplines, and institutions with minimal adaptation. in our small liberal arts college environment, where class sizes are relatively small and a premium is placed on professor-student interaction, iannotate functioned to enrich these interactions by focusing and concentrating our engagements around our learning objectives. the e-assessment tool kept students' and instructors' attention firmly trained on a limited set of writing elements and on students' development as thinkers and writers.

21 this finding corroborates student preferences for a "human aspect" to feedback found in budge (2011) and students' aversion to e-assessment because it is impersonal, as reported in ferguson (2011), scott (2006), and morgan and toledo (2006).
22 this study finds that students prefer e-assessment for its accessibility, legibility, and timeliness, while they value handwritten feedback as higher-quality because of its personability.
23 on using iannotate's audio feature to make grading more personal, see doug ward's post on profhacker: http://chronicle.com/blogs/profhacker/grading-with-voice-on-an-ipad/40907
when considering how this system might be applied to different courses or institutional contexts, particularly those with much larger enrollments or those in which student work is graded by rotating instructors or teaching assistants, the benefits of this feedback model become even more apparent. in particular, for the former, this model would enable instructors to offer more detailed feedback than would ordinarily be possible given the size of their classes. for the latter, this model would create coherent, unified standards that could be used by various graders, providing more consistency for students and thus improving the chance that students—now with a clearer sense of what is going wrong—could develop as writers.

appendix a: list of customized stamps

argumentation
interesting idea
develop this idea further
good, careful reasoning
you need to make your reasoning more explicit
you need to make explicit each stage/layer of logic in this argument
imprecise reasoning
unpack this claim
strong thesis, complex argument
refine your thesis
your intro is lacking a thesis

evidence
nice use of evidence
you need to interpret/analyze your evidence
you need to introduce your evidence
this evidence isn't relevant to the point you are making
insufficient evidence: add more/greater range to substantiate your point
support this claim with evidence

structure
strong transition
weak transition
clarify the point of this paragraph
clarify how this paragraph contributes to your overall argument
nice guidepost

style
awkward prose
well-written/nicely-put
vary your word choice
vary your sentence structure
unpack this sentence—too long, too many ideas
this language is vague, specify
does this word convey precisely what you mean?
consider your audience

mechanics
incomplete/improper citation
citation needed
proofread your paper
sp.

references

bjorkman, m. (1972). feedforward and feedback as determiners of knowledge and policy: notes on a neglected issue. scandinavian journal of psychology, 13, 152-158.
black, p., & wiliam, d. (1998). assessment and classroom learning. assessment in education, 5(1), 7-74.
brown, s., bull, j., & race, p. (eds.). (1999). computer assisted assessment in higher education. london: routledge.
burke, d. (2009). strategies for using feedback students bring to higher education. assessment & evaluation in higher education, 34(1), 41-50.
buzzetto-more, n. a., & alade, a. j. (2006). best practices in e-assessment. journal of information technology education, 5, 251-269.
carless, d. (2006). differing perceptions in the feedback process. studies in higher education, 31(2), 219-233.
carless, d., salter, d., yang, m., & lam, j. (2011). developing sustainable feedback practices. studies in higher education, 36(4), 395-407.
chang, ni, watson, a. b., bakerson, m. a., williams, e. e., mcgroon, f. a., & spitzer, b. (2012). electronic feedback or handwritten feedback: what do undergraduate students prefer and why? journal of teaching and learning with technology, 1(1), 1-23.
cowan, j. (2003). assessment for learning—giving timely and effective feedback. exchange, 4, 21-22.
denton, p. (2001). generating coursework feedback for large groups of students using ms excel and ms word. university chemistry education, 5, 1-8.
denton, p., madden, j., roberts, m., & rowe, p. (2008). students' responses to traditional and computer-assisted formative feedback: a comparative case study. british journal of educational technology, 39(3), 486-500.
draper, s. (2009). what are learners actually regulating when giving feedback? british journal of educational technology, 40(2), 306-315.
duncan, n. (2007). feed-forward: improving students' use of tutors' comments. assessment & evaluation in higher education, 32, 271-283.
faris, m. j. (2010). using iannotate to grade [blog post]. retrieved from http://blogs.tlt.psu.edu/projects/ipad/2010/10/using-iannotate-to-grade.html
heinrich, e. (2007). e-learning support for essay-type assessments. in n. a. buzzetto-more (ed.), principles of effective online teaching. santa rosa: informing science press.
heinrich, e., milne, j., ramsay, a., & morrison, d. (2009). recommendations for the use of e-tools for improvements around assignment marking quality. assessment & evaluation in higher education, 34(4), 469-479.
hepplestone, s., holden, g., irwin, b., parkin, h., & thorpe, l. (2011). using technology to encourage student engagement with feedback: a literature review. research in learning technology, 19(2), 117-127.
higgins, r., hartley, p., & skelton, a. (2001). getting the message across: the problem of communicating assessment feedback. teaching in higher education, 6(2), 269-274.
higgins, r., hartley, p., & skelton, a. (2002). the conscientious consumer: reconsidering the role of assessment feedback in student learning. studies in higher education, 27(1), 53-64.
hounsell, d., mccune, v., hounsell, j., & litjens, j. (2008). the quality of guidance and feedback to students. higher education research and development, 27, 55-67.
irons, a. (2008). enhancing learning through formative assessment and feedback. abingdon: routledge.
jones, j. b. (2010, june 4). mark up pdfs on your ipad: iannotate pdf [blog post]. retrieved from http://chronicle.com/blogs/profhacker/mark-up-pdfs-on-your-ipad-iannotate-pdf/24500
lizzio, a., & wilson, k. (2008). feedback on assessment: students' perceptions of quality and effectiveness. assessment & evaluation in higher education, 33(3), 263-275.
mcneill, m., gosper, m., & xu, j. (2012). assessment choices to target higher order learning outcomes: the power of academic empowerment. research in learning technology, 20, 283-296.
miller, m., linn, r., & gronlund, n. (2012). measurement and assessment in teaching (11th ed.). columbus: pearson.
monroe, b. (2002). feedback: where it's at is where it's at. the english journal, 92(1), 102-104.
mutch, a. (2003). exploring the practice of feedback to students. active learning in higher education, 4(1), 24-38.
nesbit, p., & burton, s. (2006). student justice perceptions following assignment feedback. assessment & evaluation in higher education, 31(6), 655-670.
nicol, d., & macfarlane-dick, d. (2006). formative assessment and self-regulated learning: a model and seven principles of good feedback practice. studies in higher education, 31(2), 199-218.
nicol, d. (2009). assessment for learner self-regulation: enhancing achievement in the first year using learning technologies. assessment & evaluation in higher education, 34(3), 335-352.
poulos, a., & mahony, m. j. (2008). effectiveness of feedback: the students' perspective. assessment & evaluation in higher education, 33(2), 143-154.
price, m., handley, k., millar, j., & o'donovan, b. (2010). feedback: all that effort, but what is the effect? assessment & evaluation in higher education, 35(3), 277-289.
rust, c., o'donovan, b., & price, m. (2005). a social constructivist assessment process model: how the research literature shows us this could be best practice. assessment & evaluation in higher education, 30(3), 231-240.
sadler, d. (2010). beyond feedback: developing student capability in complex appraisal. assessment & evaluation in higher education, 35(5), 535-550.
sample, m. (2011, november 8). making the most of iannotate on the ipad [blog post]. retrieved from http://chronicle.com/blogs/profhacker/making-the-most-of-iannotate-on-theipad/37091
walker, m. (2009). an investigation into written comments on assignments: do students find them usable? assessment & evaluation in higher education, 34(1), 67-78.
ward, d. (2012, june 19). grading with voice on an ipad [blog post]. retrieved from http://chronicle.com/blogs/profhacker/grading-with-voice-on-an-ipad/40907
weaver, m. r. (2006). do students value feedback? student perceptions of tutors' written responses. assessment & evaluation in higher education, 31(3), 379-394.
wojtas, o. (1998). feedback? no, just give us the answers. times higher education supplement, september 25.

journal of teaching and learning with technology, vol. 1, no. 2, december 2012, pp. 13-25.

student perceptions of classroom engagement and learning using ipads

timothy t. diemer1, eugenia fernandez2, and jefferson w. streepey3

abstract: many colleges and universities have launched ipad initiatives in an effort to enhance student learning. despite their rapid adoption, the extent to which ipads increase student engagement and learning is not well understood. this paper reports on a multidisciplinary assessment of student perceptions of engagement and learning using ipads. student reactions following single and multiple classroom activities using ipads were measured via a survey asking them to rate their learning and engagement on a 5-point likert scale. responses to the questions were grouped into thematic categories of perceived learning and perceived engagement. students who reported a high level of engagement while using ipads reported a high level of learning as well. no effects due to age, gender, or language were found. students who characterized themselves as comfortable with modes of e-learning reported significantly greater levels of perceived learning and engagement. those who reported being comfortable were more likely to use ipads for learning and professional development in the future. furthermore, a number of students who initially described themselves as somewhat uncomfortable with e-learning technology also reported interest in continuing to use ipads.

keywords: ipads, e-learning technology, learning and engagement, student perceptions

i. introduction.

within two days of their initial launch in april 2010, ipads were sold out or scarce at apple stores worldwide. before 60 days had passed, apple had sold 2 million ipads (kane, 2010). the wall street journal (sherr, 2011) reported in mid-august 2011 that apple had sold 28.7 million ipads since the april 2010 launch. since then several colleges and universities, including stanford, notre dame, and pepperdine universities and oberlin and reed colleges (fischman & keller, 2011; rice, 2011; wieder, 2011), have launched ipad initiatives in an effort to enhance student learning.
despite the rapid adoption of ipads for educational and professional purposes, the extent to which this technology enhances student engagement and learning in the classroom is not well understood. however, when other instructional technology has been thoughtfully deployed in the classroom, studies (chen, lambert, & guidry, 2010; nelson laird & kuh, 2005) have found positive correlations between the use of educational technology and student engagement, notably in the form of active and collaborative learning and student-faculty interaction.

1 organizational leadership & supervision, indiana university purdue university indianapolis, 799 w. michigan st., indianapolis, in 46202, tdiemer@iupui.edu
2 computer & information technology, indiana university purdue university indianapolis, 799 w. michigan st., indianapolis, in 46202, efernand@iupui.edu
3 kinesiology, indiana university purdue university indianapolis, 901 w. new york st., indianapolis, in 46202, jwstreep@iupui.edu

assessments of student perceptions of learning and engagement have traditionally been used for gauging the success of new instructional technology (alavi, 1994). such assessments are especially practical when the breadth of the impact of novel technology spans multiple disciplines and no single tool can be used to directly measure learning outcomes. while it is generally believed that students would prefer classroom sessions that utilize ipads (wieder, 2011), no studies to date have explored factors that may contribute to student perceptions of learning or engagement. the iupui center for teaching and learning, along with its university information technology services, convened a faculty learning community to explore the benefits and problems associated with the introduction of ipads into the classroom. this learning community, composed of faculty from multiple disciplines, was given access to 40 ipads to deploy in their classrooms in single or multiple sessions over the length of a 16-week semester. we expected that ipad activities would promote active and collaborative learning, a defining component of student engagement (kuh, 2005) associated with positive learning outcomes (harper & quaye, 2009; kinzie, 2010; prince, 2004).

ii. background.

prince (2004) defined active learning as activities introduced into classrooms and collaborative learning as students working together on an assigned task. pike, kuh, and mccormick (2008) described "active and collaborative learning" as activity that requires students "to work with other students to solve problems and master difficult material" (p. 7). the ipad features numerous physical characteristics (such as a large screen, motion sensors, and portability) and an expansive selection of inexpensive software that instructors can use to accommodate active and collaborative learning in the classroom. for example, by using the ipad's motion sensors students can push, pull, and lift their ipads to gain a better understanding of the physics of movement; or, by using collaborative software, students can make concept maps that appear on multiple ipad screens so that each collaborator can contribute to the design of the map. the present study examines student response to the use of ipads as the catalyst for active and collaborative learning.
prince (2004) summarized research on student engagement and described near consensus that student engagement is associated with positive learning outcomes. prince further cited several meta-studies to show that collaborative-learning activities, compared to individual assignments, improved academic performance. kinzie (2010) also explained that student engagement, as defined and measured by the national survey of student engagement, is associated with a wide array of desired outcomes. kinzie further described the link between student engagement and academic success: "a substantial body of research indicates that once students start college or university a key factor as to whether they will survive and thrive is the extent to which they take part in educationally purposeful activities…quite simply, to ensure that all students graduate and make the most of their undergraduate education, universities must first ensure the learning environment provides rich and educationally meaningful opportunities and then focus squarely on increasing student engagement" (p. 140). carini, kuh, and klein (2006) described general agreement that student engagement is associated with improved learning. harper and quaye (2009) suggested a connection between student engagement and academic success, explaining that students who are actively engaged in educationally purposeful activities inside and outside the classroom show higher retention rates and higher graduation rates. aston (as cited in axelson & flick, 2011) further suggested a direct connection between the amount of engagement and the amount of learning. kuh (2005) described the benefits of collaborative learning: "... when students collaborate with others in solving problems or mastering difficult material, they acquire valuable skills that prepare them to deal with the messy, unscripted problems that they will encounter daily during and after college" (p. 193). the purpose of this study is to explore student experiences with ipads to determine their perceptions of learning and engagement and to describe factors that may shape student attitudes toward the use of the ipad in the classroom. for this study, a multidisciplinary assessment of student perceptions was conducted following single and multiple activities using ipads. specifically, the authors examined how factors such as age, gender, ownership, and overall acceptance of instructional technology, among others, impacted student perceptions of learning and their engagement in active and collaborative learning during ipad-centered activities.

iii. methodology.

a. subjects.

iupui is an urban institution with an annual enrollment of approximately 30,000 undergraduate, graduate, and professional students seeking degrees from indiana university and purdue university programs. in total, 209 undergraduate students from several degree programs participated in the study by enrolling in a course for which ipads had been selected for deployment (see table 1). course selection was determined by the center for teaching and learning and university information technology services from proposals written by the course instructors detailing how ipads could help achieve course outcomes. all data collection and analysis procedures were performed in accordance with the university institutional review board.

b. ipad activities.
prior to an ipad activity, class instructors requested specific apps to be installed on the ipads. these ipads were picked up by the instructor and brought to the classroom. at the beginning of each activity, each student was issued an ipad to use for the class period. if required, the students were given instruction for connecting the ipad to the internet and setting up email. the class was then given an activity that was intended to promote engagement through active and collaborative learning. activities included the use of collaborative concept mapping, brainstorming, graphing apps using the built-in accelerometer, ear training apps, and mobile access to library resources. using the ipads, the students were free to move about the room and/or pass the ipads around to view each other's work. following the activity, the students submitted their work to the instructor through email or a file-sharing application such as dropbox. the ipads were then collected by the instructor and returned to the administrator, who reset the ipads to remove all student work and login information and prepared the ipads for use in the next class. over the course of the semester, students used the ipads from 1 to 7 times, depending on the class in which they were enrolled (see table 1).

c. assessment.

at the end of the semester or, in the case of the library class, at the end of a single session, the students were given a survey asking them to rate their perceptions of learning and engagement through ten questions using a 5-point likert scale with possible responses ranging from strongly agree to strongly disagree (see table 2).

table 1. courses & ipad activities used in the study.
department | course(s) | ipad activities | activities per course
tourism, convention, and event management | global tourism seminar; mechanics of meeting planning | evaluation of tourism applications; view virtual venue tours, select meeting sites, design meeting rooms, plan menus, and create staffing grids | 3
organizational leadership and supervision | leadership for a global workforce | creating and accessing open source learning modules | 1
music | musicianship 2; musicianship 4 | train musicians to measure intervals and hear the differences between two notes sounding together or in part | 3
communication studies | introduction to communication theory | demonstrate connections between communication theory and real-life scenarios with mapping applications; exploration of news apps and websites | 7
english | communication skills for international teaching assistants; english for academic purposes | help international students improve english competency through active learning | 2 and 4, respectively
physical education | biomechanics | measure human movement using the ipads' native accelerometers and video analysis apps | 7
library | computer methods for journalism | improve academic honesty by teaching when and how to cite another's work | 1

in addition, all students were asked to answer questions about their age and gender as well as questions about their level of comfort with technology (pre-comfort), their future use of mobile devices (post-use), their attitude toward e-learning (e-learning), and their current ownership of mobile technology (ownership, see table 3).
table 2. survey questions provided to the students.
questions about students' perceptions of learning:
the ipad activity helped me apply course content to solve problems.
the ipad activity helped me learn the course content.
the ipad activity helped me connect ideas in new ways.
the ipad activity helped me participate in the course activity in ways that enhanced my learning.
the ipad activity helped me develop confidence in the subject area.
the ipad activity helped me develop skills that apply to my academic career and/or professional life.
questions about students' perceptions of engagement:
the ipad activities motivated me to learn the course material more than class activities that did not use the ipad.
i participated more in class during the ipad activities than during activities that did not use the ipad.
my attention to the task(s) was greater using the ipad.
it was easier to work in a group using the ipad than in other group activities.

d. analysis.

survey responses were manually scored (strongly agree = 5, agree = 4, neutral = 3, disagree = 2, strongly disagree = 1) and entered into an excel spreadsheet. responses to the questions were then grouped into the thematic categories of perceived learning and perceived engagement (see table 2) and were averaged to create perceived learning and perceived engagement variables. any case with a missing value for any question was not included in the average calculation. a pearson correlation coefficient was then calculated for the relationship between participants' reported levels of engagement and reported levels of learning using ipads. two of the courses included in the study were for students for whom english is not a first language. for analysis purposes, we created two groups: one with responses from these two courses and another with all other courses. this was done to allow comparisons between exclusively non-native english speakers and primarily native english speakers. a 2 x 2 x 2 (age x gender x language) between-subjects factorial anova was used to compare perceived learning and perceived engagement among the three factors. to test whether using ipads in the classroom affected students' likelihood of using ipads in the future for e-learning or professional development, a chi-square test of independence was conducted comparing pre-comfort level to post-use likelihood. to meet the minimum expected cell count requirement, the pre-comfort 'not at all comfortable' and 'not very comfortable' responses were combined into 'not comfortable'. on the post-use variable, the responses for 'not likely', 'somewhat likely', and 'unsure' were combined into 'not or low likely'.
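because the composite variables and the recoded categories drive the analyses that follow, here is a brief sketch of those two steps, assuming a pandas dataframe of raw survey responses. the column names (learn_1 through learn_6, engage_1 through engage_4, pre_comfort, post_use) are hypothetical, and pandas is our choice of tool, not the software the authors report using (excel).

```python
# a sketch of the scoring and recoding described above, under assumed column names.
import pandas as pd

LIKERT = {"strongly agree": 5, "agree": 4, "neutral": 3,
          "disagree": 2, "strongly disagree": 1}
LEARN = [f"learn_{i}" for i in range(1, 7)]    # six perceived-learning items
ENGAGE = [f"engage_{i}" for i in range(1, 5)]  # four perceived-engagement items

def score(df: pd.DataFrame) -> pd.DataFrame:
    scored = df[LEARN + ENGAGE].replace(LIKERT)
    out = pd.DataFrame(index=df.index)
    # per the paper, any case with a missing answer is excluded from that
    # composite's average (skipna=False yields nan for incomplete cases).
    out["perceived_learning"] = scored[LEARN].mean(axis=1, skipna=False)
    out["perceived_engagement"] = scored[ENGAGE].mean(axis=1, skipna=False)
    return out

def recode(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # collapse sparse categories to satisfy the chi-square test's
    # minimum expected cell counts, as described above.
    out["pre_comfort"] = out["pre_comfort"].replace(
        {"not at all comfortable": "not comfortable",
         "not very comfortable": "not comfortable"})
    out["post_use"] = out["post_use"].replace(
        {"not likely": "not or low likely",
         "somewhat likely": "not or low likely",
         "unsure": "not or low likely"})
    return out
```

with the composites in hand, the pearson and spearman correlations reported in the results can be computed on the complete cases (e.g., with scipy.stats.pearsonr and scipy.stats.spearmanr).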
to test the relationship between students' e-learning preference and their perceived learning and perceived engagement, spearman rank correlations were used. for this test, subjects with missing or "no preference" responses were dropped from the analysis, leaving only subjects whose preference for e-learning technology ranged from "little or no use" to "moderate amount" to "extensive use." a one-way anova with bonferroni post-hoc t-tests was used to examine whether those who had "no preference" for e-learning technology differed from the other groups. to test whether the frequency of ipad usage affected student reporting of learning and engagement, one-way anovas were computed comparing perceived learning and perceived engagement to the number of ipad activities used.

table 3. survey of student attitudes toward mobile technology and e-learning.
question | possible responses
before using ipads in this class, what was your comfort level using handheld mobile computing devices? (pre-comfort) | not at all comfortable; not very comfortable; fairly comfortable; very comfortable
after using ipads in this class, how likely are you to use a handheld mobile computing device for e-learning or professional development? (post-use) | not likely; somewhat likely; unsure; likely; extremely likely
considering face-to-face classes that use e-learning technology [such as handheld devices, online research guides, oncourse, or other course management systems] in the classroom, which of the following best fits your preference? (e-learning) | classes that make little or no use of e-learning technology; classes that use a moderate amount of e-learning technology; classes that make extensive use of e-learning technology; no preference
do you own a handheld mobile computing device that is capable of accessing the internet (whether or not you use that capability)? examples include iphone, blackberry, other internet-capable cell phone, ipod touch, pda, ipad, kindle, etc. (ownership) | no, and i don't plan to purchase one in the next 12 months; no, and i plan to purchase one in the next 12 months; yes; don't know

iv. results.

surveys were collected from 209 students in nine undergraduate courses. table 4 shows the distribution by course. of the 209 students, 91 were female (43.5%) and 107 male (51.2%), with 11 (5.3%) declining to answer. the vast majority (82.8%) of the students were aged 19-28, with 26 (12.4%) aged 29-44 and 10 (4.8%) declining to answer. most students (73.7%) owned a mobile device with internet access; 9.6% planned to purchase one within 12 months; 9.1% did not own one and had no plans to purchase one; and 7.7% either did not know or did not answer.

table 4. number of students by course.
course | frequency | percent
intro to communication theory | 36 | 17.2
english for academic purposes | 55 | 26.3
communication skills for international teaching assistants | 18 | 8.6
biomechanics | 32 | 15.3
computer methods of journalism | 23 | 11.0
musicianship 2 | 9 | 4.3
musicianship 4 | 11 | 5.3
leadership for a global workforce | 10 | 4.8
global tourism seminar: mechanics of meeting planning | 15 | 7.2
total | 209 | 100.0

a large number of students (83.7%) reported high comfort levels with using handheld mobile computing devices prior to using ipads in the classroom. a large percentage (85.1%) of students also reported a preference for moderate or extensive use of e-learning technology in the classroom. tables 5 and 6 provide further details.

table 5. student comfort levels with handheld devices.
response | frequency | percent | cumulative percent
very comfortable | 103 | 49.3 | 49.3
fairly comfortable | 72 | 34.4 | 83.7
not very comfortable | 25 | 12.0 | 95.7
not at all comfortable | 5 | 2.4 | 98.1
missing | 4 | 1.9 | 100.0
total | 209 | 100.0 |
response | frequency | percent | cumulative percent
extensive use | 63 | 30.1 | 30.1
moderate amount | 115 | 55.0 | 85.1
little or no use | 7 | 3.3 | 88.4
no preference | 18 | 8.6 | 97.0
missing | 6 | 2.9 | 100.0
total | 209 | 100.0 |

students, on average, reported high levels of perceived learning and moderate levels of perceived engagement (see table 7).

table 7. descriptive statistics for perceived learning and perceived engagement.
variable | n | min | max | mean | std. error | std. deviation
perceived learning | 192 | 1.67 | 5.00 | 4.13 | .049 | .683
perceived engagement | 206 | 1.00 | 5.00 | 3.65 | .063 | .904

a moderate positive correlation was found between reported levels of engagement and reported levels of learning using ipads (r(192) = .684, p < .001; figure 2). students who reported a high level of engagement while using ipads reported a high level of learning as well.

figure 2. relationship between perceived learning and perceived engagement.

a 2 (age range) x 2 (gender) x 2 (language) between-subjects factorial anova was used to compare perceived learning and perceived engagement among the three factors. no main effects or interaction effects were significant (p > .05); none of age, gender, or use of english as a foreign language had a significant effect on self-reported learning or engagement.

a chi-square test of independence found that post-use likelihood was dependent on pre-comfort level (χ2(4) = 12.50, p < .05; table 8). note that approximately two thirds of the students who reported being not comfortable before using ipads nevertheless reported post-use likelihood levels of likely or extremely likely.

table 8. cross tab of pre-comfort and post-use levels.
pre-comfort level | not or low likely | likely | extremely likely | total
not comfortable | 9 | 13 | 8 | 30
fairly comfortable | 22 | 31 | 19 | 72
very comfortable | 14 | 40 | 48 | 102
total | 45 | 84 | 75 | 204

spearman rank correlations found a positive relationship between students' e-learning preference and both their perceived learning (ρ(170) = 0.30, p < 0.0001) and perceived engagement (ρ(180) = 0.32, p < 0.0001). students who preferred extensive use of e-learning technology also reported more perceived learning and engagement. although one-way anovas that included the "no preference" group found significant main effects of e-learning preference on perceived learning (f(3,182) = 6.87, p = 0.0002) and perceived engagement (f(3,195) = 6.21, p = 0.0005), the bonferroni post hoc tests revealed no significant differences between the "no preference" group and the groups that expressed a preference for e-learning.

one-way anovas comparing perceived learning and perceived engagement to the number of ipad activities found significant differences for perceived learning (f(4,187) = 2.85, p < .05). tukey's hsd was used to determine the nature of the differences: students who used ipads 7 times reported higher levels of learning (m = 4.26, sd = .563) than those who used ipads just once (m = 3.86, sd = .776).
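as a quick check, the chi-square statistic reported above can be reproduced directly from the table 8 counts; a minimal sketch in python, assuming scipy is installed:

```python
# recompute the chi-square test of independence from the table 8 counts;
# rows: pre-comfort (not, fairly, very comfortable),
# columns: post-use (not or low likely, likely, extremely likely)
from scipy.stats import chi2_contingency

observed = [[ 9, 13,  8],
            [22, 31, 19],
            [14, 40, 48]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # chi2(4) = 12.50, p ~= .014
```

the computed value matches the reported χ2(4) = 12.50, with p approximately .014, consistent with p < .05.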
v. discussion. as the apple ipad becomes increasingly common on college campuses (fischman & keller, 2011; rice, 2011; wieder, 2011), research exploring its impact on instruction and learning is only beginning to be established. writing about ipads for the chronicle of higher education, rice (2011) reported preliminary findings from several universities:

"the most noticeable difference was how students in the ipad classes moved around the classroom more and seemed to be more engaged in the material... ipads increase engagement and collaboration, acting as a facilitator for more easily sharing information." (para. 3-4)

wieder (2011) pointed to early analyses showing that ipads promote active learning, collaboration, and student engagement. wieder quoted a pepperdine university administrator who reported that students using ipads for group assignments in a math class were more in sync than were students in a section not using ipads: "the ipad-equipped students worked at the same pace as one another and shared their screens to help one another solve tough problems." (p. a22)

the present study provides a measure of student perceptions of learning and engagement and describes factors that may affect those perceptions. the study involved ipad-centered activities, conducted among multiple academic disciplines, during single or multiple classroom sessions, and a subsequent assessment of student perceptions of learning and engagement. age, gender, and language did not affect students' perceptions of learning or their engagement in the form of active and collaborative learning. however, students who characterized themselves as comfortable with modes of e-learning reported significantly greater levels of perceived learning and engagement. those who reported being comfortable with mobile technology prior to the ipad activities were also more likely to use ipads for learning and professional development in the future. furthermore, a number of students who initially described themselves as somewhat uncomfortable with e-learning technology also reported interest in continuing to use ipads in coming semesters.

parker, bianchi, and cheah (2008) explained that a link between use of instructional technology and increased student engagement is strongly supported in the literature. noting a lack of evidence that the increased student engagement resulted in higher grades or higher exam scores, the authors reasoned that the clearest benefit of instructional technology may be its ability to promote collaboration. as noted earlier, kuh (2005) is among those who asserted that collaborative learning helps students to develop valuable skills that have long-term benefit. mobile devices such as the ipad hold the potential to promote student engagement in the form of active and collaborative learning. positive learning outcomes are likely to accompany use of ipads within university classrooms if the device effectively increases the level of student engagement. though the classroom use of the ipad in the present study varied across disciplines and by instructor, students reported not only a perception of increased engagement (active and collaborative learning) but also a positive effect on their learning. however, evidence of increased learning through exams or course grades is beyond the scope of the present study.

age, gender, and the use of english as a first language had little influence on students' perceptions of learning and engagement. this comes as no surprise. research does not support a stereotype that older students are more resistant to instructional technology or that they are relative novices in computer use compared with what prensky (2001) called digital natives.
data from the pew internet research project (jones & fox, 2009) show no dramatic difference in internet use between users in their 20s and older generations. rizzuto and mohammed (as cited in githens, 2007) found that older employees in an industrial setting were in fact more willing to adapt to instructional technology for training programs than were younger employees. like age, gender also had no impact on perceived outcomes. research in this area has primarily focused on studying gender in online courses, with mixed results. yukselturk and bulut (2009) reported no gender difference in learning in an online computer programming course. on the other hand, in chyung's (2007) study of graduate students in an instructional technology course, female students scored significantly higher on the final exam than did male students. in a study involving 12 online graduate education courses, rovai and baker (2005) found women reported learning more than their male peers. parker, bianchi, and cheah (2008) showed that female students were more favorable toward instructional technology than were male students. results were mixed in the one study we found that did look at mobile learning (wang, wu, & wang, 2009): no gender difference was found for performance expectancy (finding mobile learning useful), but the effect of social influence on the intention to use mobile learning (post-use) was significant for men and insignificant for women. obviously, more work is needed in this area.

research on resistance to e-learning provides some insight into how university students might receive the ipad as another component of e-learning technology (annansingh & bright, 2010; thompson & lynch, 2003). students in the present study who were comfortable with e-learning and mobile technologies reported more learning and a greater likelihood to use ipads as instructional technology in the future. research has shown that students who, in contrast, perceived themselves as inadequate or who reported low self-efficacy were generally reluctant to embrace technology in the classroom (annansingh & bright, 2010; thompson & lynch, 2003). the current study showed that it was possible, however, to overcome this resistance through repeated exposure to the ipad: students in the present study reported higher levels of learning when given ipad activities multiple times over the semester. tallent-runnels et al. (2006) explained that a student's perception of self-efficacy when faced with new instructional technology is a function of previous experience. the greater a student's experience with instructional technology, the more likely he or she is to accept new applications. though the ipad is billed as an easy-to-use technology, students with poor attitudes toward e-learning and instructional technology would likely benefit from multiple exposures to improve their self-efficacy and heighten their perceptions of learning and engagement.

the present study is an initial attempt to describe factors influencing the positive impact of ipad activities on perceptions of student learning and engagement. though we believe that the ipad is generally effective in promoting active and collaborative learning, we did not assess the learning styles of our students prior to this analysis.
in future studies, learning styles should be measured, and students should be asked directed questions about whether the ipad satisfied their ability to learn using different sensory modalities (visual, aural, kinesthetic). furthermore, while measures of student perceptions are generally indicative of student success, we did not directly measure discipline-specific student learning. future quantification of objective, discipline-specific student learning outcomes could further justify the use of the ipad in the classroom. by design, the study was not narrowly focused on repetition of the same activity in multiple sections of the same academic course. instead, the study ranged widely among academic disciplines, and each instructor used different ipad software. a controlled study with a single repeating ipad activity across several sections of the same course would provide a different perspective on the effect of the ipad on engagement and learning.

acknowledgments

the center for teaching and learning at iupui and university information technology services at indiana university provided funding and support for this study.

references

alavi, m. (1994). computer-mediated collaborative learning: an empirical evaluation. mis quarterly, 18, 159-174.

annansingh, f., & bright, a. (2010). exploring barriers to effective e-learning: case study of dnpa. interactive technology and smart education, 7, 55-65. doi:10.1108/17415651011031653

axelson, r.d., & flick, a. (2011). defining student engagement. change: the magazine of higher learning, 43, 38-43.

carini, r.m., kuh, g.d., & klein, s.p. (2006). student engagement and student learning: testing the linkages. research in higher education, 47(1), 1-32. doi:10.1007/s11162-005-8150-9

chen, p.s.d., lambert, a.d., & guidry, k.r. (2010). engaging online learners: the impact of web-based technology on college student engagement. computers & education, 54(4), 1222-1232.

chyung, s.y. (2007). age and gender differences in online behavior, self-efficacy and academic performance. quarterly review of distance education, 8(3), 213-222.

fischman, j., & keller, j. (2011). college tech goes mobile. the chronicle of higher education, 58, 50.

githens, r.p. (2007). older adults and e-learning: opportunities and barriers. quarterly review of distance education, 8(4), 329-338.

harper, s.r., & quaye, s.j. (2009). student engagement in higher education: theoretical perspectives and practical approaches for diverse populations. new york: routledge.

jones, s., & fox, s. (2009). generations online in 2009. pew internet research project. retrieved from http://www.pewinternet.org/reports/2009/generations-online-in-2009.aspx

kane, y. (2010, june 1). apple's ipad sales pass two-million mark. wall street journal eastern edition, p. b7.

kinzie, j. (2010). student engagement and learning: experiences that matter. in j. christensen hughes & j. mighty (eds.), taking stock: research on teaching and learning in higher education (pp. 139-153). kingston, canada: school of policy studies, queen's university at kingston.

kuh, g.d. (2005). student success in college: creating conditions that matter. san francisco: jossey-bass.

nelson laird, t.f., & kuh, g.d. (2005). student experiences with information technology and their relationship to other aspects of student engagement. research in higher education, 46(2), 211-233.
parker, r.e., bianchi, a., & cheah, t. (2008). perceptions of instructional technology: factors of influence and anticipated consequences. educational technology & society, 11(2), 274-293.

pike, g.r., kuh, g.d., & mccormick, a.c. (2008, november). learning community participation and educational outcomes: direct, indirect, and contingent relationships. paper presented at the annual meeting of the association for the study of higher education, jacksonville, fl.

prensky, m. (2001). digital natives, digital immigrants. marcprensky.com. retrieved from http://www.marcprensky.com/writing/prensky%20-%20digital%20natives,%20digital%20immigrants%20-%20part1.pdf

prince, m. (2004). does active learning work? a review of the research. journal of engineering education, 93(3), 223-231.

rice, a. (2011, october 18). colleges take varied approaches to ipad experiments, with mixed results. the chronicle of higher education. retrieved from http://chronicle.com/blogs/wiredcampus/colleges-take-varied-approaches-to-ipad-experiments-with-mixed-results/33749

rovai, a.p., & baker, j.d. (2005). gender differences in online learning: sense of community, perceived learning, and interpersonal interactions. quarterly review of distance education, 6, 31-44.

sherr, i. (2011, august 12). tablet war is an apple rout. wall street journal eastern edition, pp. b1-b2.

tallent-runnels, m.k., thomas, j.a., lan, w.y., cooper, s., ahern, t.c., shaw, s.m., & liu, x. (2006). teaching courses online: a review of the research. review of educational research, 76, 93-135.

thompson, l.f., & lynch, b.j. (2003). web-based instruction: who is inclined to resist it and why? journal of educational computing research, 29(3), 375-385.

wang, y.s., wu, m.c., & wang, h.y. (2009). investigating the determinants and age and gender differences in the acceptance of mobile learning. british journal of educational technology, 40(1), 92-118. doi:10.1111/j.1467-8535.2007.00809.x

wieder, b. (2011). ipads could hinder teaching, professors say. chronicle of higher education, 57(28), a22-a23. retrieved from http://chronicle.com/article/ipads-for-college-classrooms/126681/

yukselturk, e., & bulut, s. (2009). gender differences in self-regulated online learning environment. educational technology & society, 12(3), 12-22.

journal of teaching and learning with technology, vol. 1, no. 2, december 2012, pp. 48-50.

using quality matters™ (qm) to improve all courses

diane l. finley, professor, department of psychology, prince george's community college, finleydl@pgcc.edu

framework

quality matters is a program of quality assurance for online and hybrid education. the program has received national recognition for its process, which includes peer review, faculty-centeredness, and a focus on continuous improvement in online teaching and learning. quality matters is a subscription program whose current subscribers include community and technical colleges and universities in the united states, other countries, k-12, and other academic institutions. it is a systematic process for ensuring quality in the design of online and blended/hybrid courses, and its rubric standards align with accreditation standards. using quality matters also has implications for improving student outcomes and retention.
i became involved with qm at its inception in 2003, since i worked at one of the original institutions involved with its development under a fund for the improvement of post-secondary education (fipse) grant. i eventually became a certified peer reviewer and a certified master reviewer, and now i help to train master reviewers. while not entirely sold on the process at first, i witnessed the improvements in my online courses once i applied the rubric to them. students had fewer procedural questions, navigation was smoother, and i was able to focus more on interacting with the students. i became a believer in the rubric and the process.

making it work

before discussing how i specifically use qm in my courses, let me give a bit of background on qm and some specifics about the process. qm was a collaboration of 14 community colleges and 5 four-year institutions in maryland, along with nine external partners. the goal of the fipse project was to develop criteria (in a rubric) for quality assurance of online learning and to create training for online faculty. the rubric focused on course design, not delivery, and was not intended to resolve all quality issues in online classes. after the grant expired, qm became an independent subscriber-based organization under marylandonline. subscribers include educational institutions of all levels as well as publishers of online courses. qm also offers online training for instructors and has, to date, trained over 16,000 faculty and instructional design staff.

the qm process is research-based and involves faculty-centered peer review of online and hybrid (blended) courses. the rubric, now in its third iteration (since qm became a nonprofit organization), focuses on course design and is a diagnostic instrument that faculty can use for continuous improvement of their courses. the expectation is that all courses can eventually meet qm expectations. meeting qm expectations involves meeting the 21 essential standards and receiving at least 85% of the possible points from the rubric (a sketch of this decision rule follows). if a course does not initially meet expectations, the faculty member is encouraged to use the feedback from the review to improve the course, which is then re-reviewed. the rubric focuses on eight areas: overview, objectives, assessment, materials, learner interaction, technology, learner support, and accessibility.
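as a minimal illustration of the decision rule just described, the following python sketch encodes the two conditions for meeting qm expectations; the point totals in the example are made up, since the possible points vary by rubric edition.

```python
# hypothetical illustration of the qm "meets expectations" rule described above:
# every essential standard must be met, and the course must earn at least 85%
# of the possible rubric points.
def meets_qm_expectations(essential_met, points_earned, points_possible):
    return all(essential_met) and points_earned >= 0.85 * points_possible

# example with made-up numbers: all 21 essential standards met,
# 87 of 99 possible points (87.9%) -> meets expectations
print(meets_qm_expectations([True] * 21, 87, 99))  # True
```

only the 85% threshold and the essential-standards condition are taken from the text; everything else here is illustrative.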
why worry about course design? why use quality matters?

since the department of education changed the rules for federal financial aid in 2005 with the higher education reconciliation act of 2005 (hera), the number of institutions offering online courses has increased dramatically. the sloan consortium reported a 10% growth in distance learning enrollments in 2011. the instructional technology council, which examined e-learning at community colleges, reported an 8.2% increase in online enrollments from fall 2010 to fall 2011.

i now use the qm rubric in all my course designs, even for courses that have not been officially reviewed. as our institution has increased its emphasis on assessment, i find that using the rubric forces me to see how my course and chapter objectives align with my assessments and activities. everything in the course has a purpose, and that purpose is made clear and transparent to students. students who read all of the objectives and explanations understand why they are doing particular activities or taking certain quizzes. applying the rubric has made me really examine my choice of activities and assessments; they are much more purposeful now. even weekly discussion boards link to specific objectives, and i give students a course map that clearly shows this linkage. the research shows that better student outcomes result when a course design relates to the course objectives (swan, matthews, bogle, boles, & day, 2011). it was a "duh!" moment when i looked at these rubric standards and the research. students are also more satisfied when all the course components are clearly integrated (ke & xie, 2009).

the rubric has also helped me to make my courses more accessible to all students. i used to use all sorts of font styles and colors, not realizing how difficult those can be for some students. now my courses are simpler in design but easier to read. i recently had a visually impaired student who was able to use a screen reader in the course with no problems.

the third area in which i have found the rubric most helpful is the course overview and introduction (qm standard 1). to meet the specific review standards in this area, i created "start here" areas for students with detailed directions on how to get started. i include information on my expectations and institutional policies relevant to online learners. i also include links to institutional tutorials on using our lms. no longer do i assume students can just find these items. i have also streamlined my navigation so there are fewer buttons; students have to click fewer times to find course components.

it does take a good deal of time, before the course begins, to create designs that meet qm expectations. however, i found that once i completed one course and it met expectations, other courses took less time. there were many items that could be reused with slight edits, such as the "start here" sections. i also found that by using the rubric for the design, i was better able to focus on content.

some faculty raise concerns about qm creating packaged courses with no room for individual teaching styles. i have reviewed over 90 courses from all types of institutions and have not found anything that would resemble a "packaged" course. there are many design elements that can meet qm expectations. qm does not tell any instructor how to teach a class. i have reviewed multiple classes on the same topic and have yet to find two that are just alike, even at the same institution. as mentioned above, by using the qm rubric to guide course design, the faculty member is free to focus on content and on devising creative ways of presenting that content to students.

future implications

as the body of research literature on online courses continues to grow, the qm rubric will continue to be revised to take into account new developments and new information on student success. future iterations of the rubric will enable me to keep my courses up to date with the literature on student success. my institution requires that all online courses meet qm expectations. by using the rubric, the department is better able to ensure that courses with multiple sections are comparable. not every instructor uses exactly the same activities, but each instructor has to show how those activities align with our common course objectives.
students are learning the same things; they are just learning them in different ways. using the rubric, especially the standards related to alignment of objectives and assessments/activities, has made it easier to extract data for our department review and course assessment process. we are able to demonstrate precisely how each objective is being achieved.

i think the next big use for the rubric is to assess face-to-face classes. the rubric's focus on accessibility, alignment, and transparency to students is relevant to synchronous, in-person classes as well. the rubric really is a guide for good teaching. in my department, we have already taken some standards and asked all faculty to use them in their syllabi and teaching.

how can you use qm in your own course? many institutions and state consortia belong to qm. if they subscribe, you have access to the full rubric and can request a course review from the institutional representative at your school. if your institution does not subscribe, you can ask the e-learning or distance learning office to become a part of quality matters. if that is not an option, you can still look at the rubric at http://www.qmprogram.org/rubric and use it to help improve your own course. you can incorporate many of qm's principles even without an official review. you can also take qm courses at non-subscriber prices and learn to improve your course by applying some of the rubric to its design.

in closing, i would recommend quality matters as a way to improve online (and hybrid as well as face-to-face) classes by focusing on design issues, thereby freeing the instructor to focus on content and on interaction with students. ultimately, increased student success and satisfaction can result.

references

allen, i.e., & seaman, j. (2011, november). going the distance: online education in the united states, 2011. retrieved september 28, 2012, from http://www.onlinelearningsurvey.com/reports/goingthedistance.pdf

instructional technology council. (2012, march). 2011 distance education survey results: trends in elearning: tracking the impact of elearning at community colleges. retrieved september 29, 2012, from http://www.itcnetwork.org/attachments/article/87/itcannualsurveymarch2012.pdf

ke, f., & xie, k. (2009). toward deep learning in adult-oriented online courses: the impact of course design strategies. the internet and higher education, 12(3-4), 136-145. http://dx.doi.org/10.1016/j.iheduc.2009.08.001

quality matters. (2012). underlying principles of quality matters. retrieved september 27, 2012, from http://www.qmprogram.org

swan, k., matthews, d., bogle, l., boles, e., & day, s. (2011). linking online course design and implementation to learning outcomes: a design experiment. the internet and higher education. doi:10.1016/j.iheduc.2011.07.002

journal of teaching and learning with technology, vol. 11, special issue, pp. 3-17. doi: 10.14434/jotlt.v11i1.34594

teaching experiences of e-authentic assessment: lessons learned in higher education

audrey raynault, université laval
géraldine heilporn, université laval
alice mascarenhas, université de sherbrooke
constance denis, université de sherbrooke

abstract: the realities of the 21st century have led professors and lecturers to renew their learning assessment practices so that they are more adapted to and contextualized in the current professional world. despite advances in teaching and learning, assessment methods may still deviate from practice in authentic contexts.
although some instructors are already familiar with more authentic assessments, most are accustomed to using exams as standard practice to test students' achievement of course objectives and essays to prepare students for research or written argumentation. nevertheless, such typical assessments often lack authenticity and do not develop the full potential of students' 21st-century learning or literacy skills such as communication, creativity, or working with technologies. the past decade has seen the beginnings of a broader reflection on teaching, learning, and evaluating with technologies, including more authentic assessments. in this reflective essay we present how technologies make it possible to diversify assessment methods, resulting in enhanced authenticity and development of 21st-century learning and literacy skills. authentic assessment methods with technologies (e.g., recorded video presentations, explanatory interviews with descriptive assessment grids, pechakucha presentations, blog posts, social media, and e-portfolios) are illustrated with examples from several disciplines. we also explain how proposing a number of methods to students for the same assessment may help answer their various needs and preferences without increasing instructors' grading load. furthermore, we discuss how diversifying assessment methods with technologies often results in a transformation of assessment modalities. beyond assessments as an evaluation of knowledge and/or skills at a fixed time, authentic assessments with technologies may become continuous or iterative processes with multiple feedback occasions from instructors, thereby combining synchronous interactions and/or discussions with asynchronous reflections to improve students' involvement and active learning.

keywords: e-assessment, educational technologies, authentic assessment, higher education, pedagogical alignment

approximately 2 years before the covid-19 pandemic, about 40% of instructors had used e-assessment (widely defined as the "use of a computer as part of any assessment-related activity"; jordan, 2013, p. 88) in their practices, and half of all students had been evaluated using e-assessment, according to an international survey that mainly took place in portugal, the united states, the united kingdom, canada, norway, and australia (rolim & isaias, 2019). typically, e-assessment has relied on multiple-choice questions (fluck, 2019; rolim & isaias, 2019) or other forms of e-exams including case studies, long essays, or computer coding activities (fluck, 2019), and automatic grading and immediate feedback for students have been cited as important benefits (e.g., m. brady et al., 2019; fluck, 2019; rolim & isaias, 2019; stödberg, 2012). however, such typical e-assessment relies on indirect proxy items (efficient and simplistic substitutes) from which instructors think valid inferences can be made about students' performance with respect to certain prioritized challenges. unfortunately, most typical e-assessments, and especially quizzes, are not authentic assessments.
indeed, according to wiggins (1990), authentic assessments should engage students in tasks similar to those in the workplace setting or in everyday life; they are led by the student or a group of students and allow students the freedom to create according to their interests; they lead to an outcome or product resulting from problem solving or cocreation; and they are characterized by the learning processes generated and the mobilization of skills and knowledge as well as by the development of unique responses. supported by digital technology, authentic evaluation must allow for latitude in the choice of tool while keeping traces of the process (wiggins, 1990).

in the digital age, teaching, learning, and assessment need to be rethought to align with real-world considerations, while contributing to the development of students' 21st-century skills, including communication and collaboration, creativity and innovation, and working with technologies (redecker et al., 2012). time-limited exams or exams without reference materials may no longer be relevant, especially in distance education. in the case of online training, siemens (2005) proposed the concept of connectivism, according to which learning takes place through connections between people, between platforms, and between types and levels of knowledge (chekour et al., 2015). in a world where changes are unpredictable, teaching, learning, and assessment need to be relevant and adapted to the large range of possibilities offered by technologies. as gulikers et al. (2004) indicated, authenticity is difficult to define but allows for reflection and demonstration of learning. the authors suggested a return to pedagogical alignment according to biggs (1996), that is, alignment of content, pedagogical methods, and assessments while taking into consideration the limits of available technologies in each context. e-assessments must therefore allow for critical, collaborative, and complex content applications, and/or learning in authentic situations with temporal or technological constraints such as those of the job market. according to st-onge et al. (2022), e-assessments are authentic when students have time to consult and reflect on any information sources they need for the assessment, as they would do in actual professional practice. performance is no longer prioritized; rather, a combination of process, progress, and production is paramount.

although some shift toward more authentic assessments had already begun prior to the pandemic, the forced transition to online teaching during the pandemic has been a catalyst for deeper reflection on pedagogical and assessment practices. furthermore, enhancing the authenticity of assessments also reduces the risks of cheating and plagiarism, while better preparing students for professional practice (sotiriadou et al., 2020). in a digital age, all teachers need to understand that e-assessments go far beyond quizzes with multiple-choice questions for assessing low-level cognitive skills. wikis, blogs, simulations, and scenarios are only a few examples of e-assessment opportunities in which higher level cognitive skills can be assessed (appiah, 2018). in the next section, we present five authentic e-assessment methods that we have used with our students, approaches that support the development of 21st-century learning and literacy skills. we also highlight benefits and challenges so that teachers can reflect on implementing these in their own courses.
diversifying assessment methods with technology

this section illustrates several authentic assessment methods that help students develop 21st-century skills. these are (1) collaborative exams, (2) recorded video presentations and/or podcasts, (3) pechakucha presentations, (4) blog posts and social media, and (5) e-portfolios.

collaborative exams

several north american and european universities have been implementing collaborative exams in nursing, science, health or psychosocial science, and engineering programs (bezerra, 2018). collaborative exams can be authentic assessments and may be useful in eliciting higher levels of abstraction and deeper understanding of content than other types of assessment that promote lower level cognitive skills, use of rote learning strategies, and knowledge retention (gilley & clarkston, 2014; mahoney & harris-reeves, 2019). thus, authentic collaborative exams avoid multiple-choice questions assessing low-level cognitive objectives, as is sometimes the case in two-stage exams, and instead meet the criteria outlined above. in two-stage exams (kapitanoff, 2009; leight et al., 2012; stearns, 1996; wieman et al., 2014, as cited in cozma, 2021, p. 3), the exam begins with a solo attempt and continues with a collaborative test consisting of the same or similar questions as in the individual stage. when authentic, the second test transforms the exam situation into a learning situation that enhances students' understanding of the exam content through discussions with their peers. generally, collaborative exams engage the active use of high-level cognitive processes (krathwohl, 2002), such as cocreation, analysis, or complex problem solving (dahlström, 2012; mahoney & harris-reeves, 2019). they also reduce assessment anxiety for students (beilock, 2008; lusk & conklin, 2003; zimbardo et al., 2003), increase the performance and academic results of both struggling and high-achieving learners (woody et al., 2008), and improve students' perception of the course and their motivation to study for such exams (knierim et al., 2015). however, there are conflicting results regarding knowledge retention after completing collaborative exams. some studies pointed to an improvement in retention (cortright et al., 2003), whereas others were unable to uncover any difference between completing the exam alone and working collaboratively (leight et al., 2012; sandahl, 2010). no studies that reported negative impacts were identified. several factors may account for these discrepancies: the characteristics of the intended student population, the content and type of course, the complexity of the concepts covered, the format and conditions of the exam, and the research methodology used. other studies looked at fully collaborative exams, another type of collaborative assessment (cozma, 2021; muir & tracy, 1999; zimbardo et al., 2003). as cozma described (our translation):

in this case, the collaboration is not intended to provide feedback following a traditional examination, but to radically transform the design of the examination itself.
this type of examination moves away from the idea that test results are able to account for the merit and knowledge of the students and seeks to develop a particular stance, involving the sharing of knowledge, the negotiation of ideas, the justification of beliefs, and a relationship of mutual aid rather than competition. (cozma, 2021, p. 3)

in fully collaborative exams, the first, individual stage requires students to turn in an individual exam paper at the end of the allotted time. the collaborative stage then begins, with a reduction in the number of questions to allow time for discussion. it concludes with students handing in either an individual or a group paper. the success of the collaborative stage may depend on the group composition, especially if there are dominant students (zipp, 2017). zipp concluded that collaboration benefits only weaker students, but studies by gilley and clarkston (2014) and leight et al. (2012) found higher scores on the collaborative exams than those achieved by each team member individually. in addition, dahlström (2012) found that weaker students increased their ability to produce high-level cognitive responses (biggs, 1996), whereas stronger students' scores remained similar. however, the strongest students performed better with new questions asked on the collaborative exams than with questions repeated in the two stages (individual and collaborative). this suggests that pooling knowledge provides better overall content understanding, as demonstrated in studies by bezerra (2018) and mahoney and harris-reeves (2019).

in a software engineering course at a montreal engineering university, a mix of both types of collaborative exams (two-stage and fully collaborative) was conducted online, in a teaching and learning platform, in three stages. this experience highlights the positive relationship between digital technology and the mobilization of collaboration during a learning assessment situation, as the studies we had previously identified on collaborative exams did not take place in a digital context. to complete these collaborative exams, the students were first instructed to carry out a preparatory stage (stage 1), which involved creating teams of four or five students to prepare for the exam independently, in synchronous and asynchronous modes, in light of the teacher's learning objective grid, 1 month before the exam. on the day of the collaborative exam, they first took an individual exam in the form of open questions on the moodle platform (stage 2). in the following hour, the students gathered in teams to carry out the collaborative phase (stage 3) according to a prescribed schedule, in synchronous mode, by videoconference on their respective channels on the ms teams platform. results show that stages 1 and 3 were complementary. the teams developed a high level of collaborative performance, which resulted in a high level of general performance and good results at the collaborative stage. the students also testified to having improved the quality and performance of their collaboration (communication, synchronization, and coordination; chiocchio et al., 2012; raynault et al., 2020) between the preparatory phase (1 month before the collaborative exam) and the collaborative stage.
the students mentioned on many occasions that they did not need to talk to each other to move forward and take risks during the collaborative stage, and that they trusted each other thanks to the group cohesion developed during the preparatory stage. finally, according to the teammates, digital literacy enabled them to carry out all stages of the collaborative exam system while developing collective knowledge and an understanding of the dimensions of collaboration, learning how to form an expert team of engineers in an authentic context.

recorded video presentations and/or podcasts

recorded video presentations represent a simple way to implement authentic e-assessment in higher education, while developing students' communication, collaboration, creativity, digital literacy, and working-with-technology skills. students are provided with one or several themes to explore and, when deemed necessary, a starting list of pedagogical resources (e.g., professional or scientific publications, links to web content), along with a detailed list of instructions on the goal and expected content of the recorded video presentation. of course, the more realistic the proposed problem or context, the more authentic the recorded video presentation is for the student (e.g., a presentation on how an online course unfolds and expectations in an educational technology course, or a presentation of operations management to the ceo in a business course). then, instead of the teacher presenting the content to them, students explore and present the content on their own. they are actively engaged in the process of constructing knowledge, which involves searching for and critically analyzing information, synthesizing content, and creating ways to present it to their peers and the instructor. although very short videos (under 4-5 minutes) may be produced in individual assessments, students and instructors will usually benefit from working collaboratively on longer videos. collaborative assessments reduce the workload for both students and the instructor, in addition to fostering the development of collaboration skills that are now essential in the professional world.

a challenge in such e-assessments is to ensure that students watch the videos produced by other teams, since they often cover complementary themes and content. to this end, instructors may plan peer feedback between teams or a subsequent learning activity necessitating that students watch several videos and then answer related questions. where possible, implementing peer feedback accompanied by a detailed evaluation grid encourages students to watch the videos with a critical and objective eye (e.g., providing audio or video feedback to their peers using the instructor's assessment grid, along with improvement suggestions for future work). another challenge concerns students' technological skills, or lack thereof, for producing video content (belt & lowenthal, 2021). therefore, instructors must be aware that some students might need technological support along the way, or at least tips on how to produce a video of satisfactory quality. (readers interested in implementing recorded video presentations in their courses are encouraged to consult he and huang, 2020, or belt and lowenthal, 2021, for a more general synthesis about video use in teaching, learning, and assessment.)
podcasts provide a variation on recorded presentations that can be even more authentic, particularly in communication or language courses. first, students could be invited to explore and discuss existing podcasts in teams, thus combining sociocultural information, oral comprehension, and discussion. the instructor can draw attention to a number of potential interests in a language course, among them different accents for the same language, speed of speech, vocabulary used (informal or formal), slang words, and local cultural traits such as commemorative festivals, cuisine, or politics. the creation of podcasts also allows students to develop communication, collaboration, and organization skills. whether individually or in teams, this initiative requires a personal commitment from the students (catterall & davis, 2013). podcast creation projects go through several stages: choice of topic (preferably chosen by students themselves), research on the topic, construction of a terminology grid about the subject, and planning of the podcast (number of episodes, and the subject and duration of each episode), during which students must follow instructions and constraints associated with the activity. in distance language courses, students practice oral expression by interacting with each other about the podcast creation (catterall & davis, 2013), deepen a specific vocabulary according to the chosen topic, and reinforce previously acquired knowledge. finally, podcasts published on the same online platform may lead to subsequent peer-review and/or discussion activities.

however, one must not forget the importance of considering ethical issues (capelle, 2018) related to the creation of videos or podcasts in an educational context. whether the podcast is created on a voluntary basis or as a mandatory course activity, students should be aware of netiquette so that no one ever feels uncomfortable or threatened during participation. the students must not fear that their productions will be used maliciously by others (capelle, 2018). from a professional perspective, students can also discuss the benefits of publishing their videos and podcasts to build and manage their digital identity and enrich their e-portfolio (capelle, 2018; ollier-malaterre, 2018). therefore, potential public distribution of videos or podcasts should be decided by the students. otherwise, it is preferable to use a closed and secure platform, accessible only to members of the same class, and to discuss with students the netiquette to follow (bates, 2015).

pechakucha presentations

the pechakucha format represents another interesting alternative to traditional video or audio presentations. this storytelling format uses a maximum of 20 slides of 20 seconds each, for a total of 6 minutes and 40 seconds (lison, 2020). the traditional oral presentation is thus transformed to engage the learner in an authentic task; in the labor market, there are very few occasions when a person has more than 5 to 10 minutes to present a point of view. hence, the format increases awareness of time restrictions and of the value of content covered during this period. students are also encouraged to prioritize graphics and limit text (university of british columbia, 2020), allowing for the evaluation of the students' ability to synthesize and understand a given subject while working on professional development.
furthermore, the visual appearance of each slide is important, given the total number of slides. each second needs to be used wisely to ensure the objective is achieved within a limited time frame. moreover, the image and the audio must be properly aligned. creativity can be maximized using technology to ensure that the format is respected. a pechakucha presentation also requires individual or collaborative planning of content and images, and students need to master the content to synthesize and popularize it effectively. opportunities for plagiarism are also minimal, given the need to reduce writing or diagramming to extract only the essential information. finally, pechakucha presentations can be delivered in class or online (synchronously or not). this could be a productive activity to facilitate online discussion, where students can critique and debate their positions (university of british columbia, 2020). a synthesis activity could be a pechakucha, especially at the university graduate or postgraduate level. this was the case for one of us, who proposed it as a postgraduate activity at the université de sherbrooke (quebec, canada). students were asked to formulate their advice to a peer regarding research supervision. rather than summarizing the entire course content, they were asked to distill, in less than 7 minutes, the main conclusions they drew from it, that is, from readings, discussions, personal reflections, and forum exchanges (lison, 2020). students appreciated that the presentation was authentic and useful. some students addressed the pechakucha to a colleague, new or otherwise, and were encouraged to share it in their department. in continuation of the program, some of them used the pechakucha as a "business card" to recruit a potential supervisor. among the disadvantages, some students mentioned that using the technology in this highly restricted context provoked anxiety. in some cases, the resulting production was not a reflection of the learner's full potential but rather a result of their anxiety in using a new form of e-assessment. to overcome this problem in the current session, we have published several tutorials and have encouraged the students to share their work with each other and ask for feedback before submitting it. some of them used their peers' comments to improve their pechakucha, and certain students also asked for advice on how to record their presentations.

blog posts and social media

like the authentic and familiar genre of podcasts, social media and blogs provide an interesting environment within which students can perform authentic assessments. indeed, social media and blogs provide a space for real-world interactions, with students sharing information and reacting with "likes" and comments. the interactions themselves are the basis of the learning experience, for both lifelong learning and informal learning. for a long time, the use of social networks in an academic context has been feared, discouraged, and even prohibited in some institutions. such fear is not unfounded, because the ethical issues related to the use of social networks in education are important. privacy, data sharing, digital identity, intellectual property, and copyright are just a few of the many ethical concerns to consider (anderson, 2019). however, there is no denying the importance of social media in the lives of 21st-century students. the concept of connectivism today is naturally applied in everyday life.
the creation of networks between people is a fact, as is the sharing of knowledge. the academic use of social media thus naturally echoes what is already present in the lives of a large majority of students and instructors. according to anderson (2019), social media in education offer, among other things, the chance and support for collaborative learning, for strengthening motivation, and for integrating formal and informal learning. thus, as part of an educational activity, students may be called upon to share their discoveries through twitter using a pre-established hashtag and to interact with each other. it is also common to create a facebook group in which students can discuss and share content. however, because of ethical issues related to the protection of private life, it may be more prudent to use a closed, secure platform that would allow similar tasks to be performed, such as through a google or teams account.

another specific example of the use of social media in authentic e-assessments consists of asking students to develop and publish blog posts on a digital platform. the creation of a blog involves several steps, whether it be public or open only to students of the same group. first, it is important to understand what a blog is and to identify the tone to use in writing a post that is accessible and appealing to the target audience. this understanding may also contribute to students' sense of the authenticity of the assessment, since they will be writing and developing digital content for a larger audience than the instructor alone (waycott et al., 2013). then comes the topic of the blog, preferably chosen by the students. blog posts can be used several times in a course as e-assessments for the students, with the goal being to present new ideas, to reflect on assigned readings or other pedagogical material, and to synthesize and present important content. the third step involves the form of presentation. creating a blog gives a great deal of freedom to students, who can use their preferred way to express themselves, whether through texts, diagrams, drawings, or concept maps (duplàa & talaat, 2011). however, the instructor may need to provide technological support to students who are less familiar with technologies or with building digital content, especially when students use various digital formats such as images, videos, graphics, or other embedded content (alruwais et al., 2018; spector et al., 2016). if used with undergraduate students or in very competitive university programs, another challenge may consist of mitigating the risks of plagiarism as well as some students' sense of vulnerability (waycott et al., 2013). as for the digital platform, our experience has shown that the easiest way is to create only one blog for the whole class and add students as authors. this reduces the organizational workload for the instructor, who can also initiate students in the use of categories and hashtags to manage all posts. furthermore, it is important that the blog be built on an easy-to-use digital platform (e.g., google sites, wordpress). when choosing the platform, it is important to ensure that blog posts are easily accessible to other students in the course (whether publicly visible or on a university-restricted platform) to foster a sense of connectedness and collaboration between students.
instructors will then be able to ask or encourage students to visit other students' posts and comment on them (saying what they liked, suggesting improvements, etc.), thereby promoting the development of collaboration skills (e.g., m. brady et al., 2019). in the world of social media, interactions between peers of course make the experience more engaging and beneficial for all. with students commenting on their peers' posts, students can check whether their message is clearly presented and improve it when necessary.

e-portfolios

over a whole semester (or even over several semesters, when several instructors collaborate in this regard), e-portfolios can be used for interesting and authentic assessments. in these, students are asked to reflect on their own learning path in a course, to collect and present digital traces of important course content, to provide evidence of what they have learned in their own productions, and so on. to be authentic for students' professional development, e-portfolios need to be presented in such a way that they can be provided along with a curriculum vitae to a potential employer (e.g., for future teachers). even without this contingency, e-portfolios are authentic to the students in the sense that they develop them for a large audience, minimally including their peers and the instructor, similarly to real-world blogs or websites they are familiar with.

in our own experience with e-portfolios in a graduate-level course in educational technology, we asked students to use them recurrently throughout the semester so that they could reflect on their learning. in addition to using e-portfolios for collecting reflections about readings and digital tools they explored, students had to build and present two e-assessments of the course in separate sections within the e-portfolio. the first consisted of a critical reflection about integrating technologies in higher education, with several leading subquestions, and the second focused on how they would improve a teaching and learning activity sequence, from the problem faced to the planning calendar for preparing the new sequence. in contrast with written essays, they had to think about how to present information on a digital support. we found that students made significant progress from the first e-assessment to the second in terms of critical analysis, synthesis, and digital presentation skills. they truly reflected on finding ways to synthesize important information and to present it in a clear and analytical manner. however, we recognize that although these e-assessments within e-portfolios were very relevant in an educational technology course, technological barriers or time constraints may represent a challenge in other disciplines. to overcome this challenge, we suggest asking students to work on digital productions on several occasions during a course (e.g., digital and/or interactive presentations or posters, infographics, blog posts), inviting them to get off the beaten track of readings and writings, while at the same time helping them develop creativity and digital communication and practice working-with-technology skills on real-world problems. like blogs, e-portfolios should also be built on an easy-to-use digital platform, even one designed for this purpose (e.g., bulb).
students should be encouraged to visit each other’s e-portfolios to make suggestions and, as an additional strategy, to improve their own. in addition to fostering the development of 21st-century skills, an important benefit of e-portfolios is that they make students’ learning paths more transparent, thus offering a way of continuously monitoring their progress throughout a semester, provided students regularly contribute to their own portfolios. however, challenges for instructors using e-portfolio assessments concern the workload of visiting all students’ e-portfolios, providing relevant and timely feedback, and finally grading them (m. brady et al., 2019; spector et al., 2016). as is the case for blogs, technological support for students may also be required. since e-portfolios may have important benefits for students but also involve significant drawbacks for instructors, we suggest that instructors carefully consider their implementation in a course depending on the size of the group, the technological support that students would need, and the expected benefits from developing learning and 21st-century skills. where advantageous, such as in teacher education, educational technology, design, or arts programs, the use of e-portfolios has to be well planned and thoughtfully integrated into course teaching, learning, and assessment activities so that students actually get involved in such coursework throughout a semester.

considerations for applying authentic e-assessments

as illustrated in the previous section, technologies enable instructors to diversify their assessment methods and to involve students in authentic e-assessments while developing essential professional skills. authentic assessments with technologies help bridge the gap between teaching and learning and professional practice (sotiriadou et al., 2020). as assessment opportunities continue to increase with advances in digital technology, instructors will have to reflect on the techno-pedagogical alignment between course objectives and teaching, learning, and assessment activities. as st-onge et al. (2022) mentioned, this seems to have been a preoccupation of instructors while transforming their courses during the covid-19 pandemic, and it should stay at the center of any transformation of assessment methods in the future.

lessons learned

the importance of giving students the tools to go as far as possible in the development of their full potential while learning in a fair, equitable, and transparent manner was recognized by the scientific committee of the international summit on ict in education in 2019 (https://edusummit2019.fse.ulaval.ca) and on numerous other occasions. tier 1 equity issues, those related to access to digital technologies, are decreasing, but tier 2 equity issues, those related to classroom uses of digital technologies and resources, are increasing, including underuse or overuse of all things digital, challenges arising from shallow or deep individual or collaborative learning, and uses for play and for learning (resta et al., 2018). higher-performing students easily adjust to new assessment methods; however, as black and wiliam (2018) put it, “any approach to the improvement of classroom practice that is focused on assessment must deal with all aspects of assessment in an integrated way” (p. 552).
considering our real-life e-assessment experiences, figure 1 presents a synthesis of our lessons learned.

figure 1. synthesis of lessons learned about authentic e-assessment and digital equity in a higher education setting. [figure content: offering choices (answering students’ diversified needs and preferences; clarifying e-assessment instructions and expectations); interrelating learning and assessment activities (pedagogical alignment); implementing continuous evaluation processes with multiple feedback.]

offering choices in assessments with technology

several assessment methods can be proposed to students as part of a single general assessment so that they can choose the one they prefer. for instance, an instructor could offer students the possibility of recording a video presentation or a podcast or preparing a blog post for a given assessment. similarly, an instructor using e-portfolios could be very flexible in terms of the formats students use to present their contents. hence, instructors can better meet students’ varied needs and preferences by offering them choices or allowing them to personalize their work within boundaries imposed by the instructions and expectations related to a given assessment. this idea includes two important points concerning (1) students’ needs and preferences and (2) e-assessment instructions and expectations, which we detail in the following subsections.

answering students’ diversified needs and preferences.

providing students with a certain degree of choice in an assessment fosters student engagement and participation (rose et al., 2018). these choices could be as simple as selecting the assessment topic from a predefined list or as broad as allowing several e-assessment methods (heilporn et al., 2021). the choice of an assessment topic supports students’ interests and motivation, thereby providing multiple means of engagement according to the universal design for learning (udl) framework (meyer et al., 2014). moreover, the decision to allow several delivery formats for the same assessment is tantamount to providing multiple means of action and expression, as suggested by udl. by enabling students to select their own e-assessment method, those preferring to express themselves through oral communication could record a podcast, whereas others might choose to write a blog post, all reflections of their diverse needs and preferences. whatever the level of flexibility an instructor is ready to offer, the mere fact of showing flexibility makes the assessment more authentic for students, because they can find ways to connect the assessment with their interests or their personal/professional life, and it promotes their engagement in the assessment activities. technologies offer a vast range of opportunities for students to express themselves and demonstrate their knowledge and skills with accessible and easy-to-use applications; therefore, it really is up to instructors to imagine different ways of assessing their course objectives with technologies and to allow students some flexibility and control over e-assessments.

clarifying e-assessment instructions and expectations.
introducing authenticity and flexibility in e-assessments can be stressful for instructors, especially the first time they do so, since they do not know what results to expect from the students; that is, they lose some control over the final assessment production, to the benefit of the students. st-onge et al. (2022) found that instructors considered potential increases in their workload when reflecting on changes in course assessments. furthermore, instructors were concerned about how they would ensure equity between students and/or provide formative feedback. from our experience, the biggest challenge for instructors transforming their course assessments into more authentic e-assessment methods consists in letting students have more control over the final assessment production and in trusting their own ability to guide and support students along the assessment production process.

first, clear instructions should be presented and explained to the students to avoid potential confusion and misunderstandings. these instructions determine the boundaries of the assessment: explanations regarding the expected content, guiding steps and/or questions, suggested or possible delivery formats, and so on. second, instructors will benefit from accompanying the instructions with a descriptive assessment grid detailing how the evaluation criteria will be applied (a sketch of such a grid appears at the end of this subsection). by communicating their expectations to students as transparently as possible from the outset, instructors provide students with important information that helps them self-regulate throughout the assessment production process and deliver high-quality work. the detailed descriptive assessment grid should be broad in scope so that it can be applied to any delivery format chosen by the students, which does not rule out including evaluation criteria regarding the visual and/or audio quality of the presentation. this will also ensure that instructors’ grading workload remains stable, by focusing the descriptive assessment grid on course and assessment objectives rather than on the specific topic or e-assessment method chosen by the students.

finally, because instructions and evaluation grids are broad enough in scope to provide students with some flexibility and choice in the e-assessments, certain students may ask questions to better understand what the assessment consists of and what is expected of them. this often happens when students experience flexibility and choice in assessments for the first time, especially if the instructor has allowed them to select their preferred e-assessment method. in that situation, instructors may help students by discussing the assessment goals, expectations, and boundaries with them, asking them what they would like to present and how they would do it, sometimes providing examples of what could be done while encouraging them to be creative. such clarifications regarding e-assessment goals and expectations are part of the formative feedback that supports students striving to present authentic work while developing 21st-century skills such as creativity, working with technologies, communication, and collaboration. this also marks the beginning of a dialogue between students and the instructor about a specific assessment, which we address in the next section.
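as an illustration of such a format-agnostic descriptive assessment grid, the sketch below encodes weighted criteria and computes a final grade from per-criterion scores. the criterion names, weights, and descriptors are hypothetical examples, not a grid we have actually used; the point is that the criteria target course and assessment objectives, so the same grid applies whether a student submits a podcast, a video, or a blog post.

```python
# sketch of a format-agnostic assessment grid: criteria address course and
# assessment objectives rather than one delivery format. names and weights
# below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float    # fraction of the final grade
    descriptor: str  # what full marks look like, in plain language

GRID = [
    Criterion("content accuracy", 0.40, "course concepts are applied correctly"),
    Criterion("critical analysis", 0.30, "claims are argued and supported"),
    Criterion("communication quality", 0.20, "message is clear for its audience"),
    Criterion("production quality", 0.10, "visual and/or audio presentation is polished"),
]

def final_grade(scores):
    """weighted final grade from per-criterion scores on a 0-100 scale."""
    assert abs(sum(c.weight for c in GRID) - 1.0) < 1e-9  # weights must sum to 1
    return sum(c.weight * scores[c.name] for c in GRID)

print(final_grade({"content accuracy": 85, "critical analysis": 78,
                   "communication quality": 90, "production quality": 70}))
# -> 82.4
```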
interrelating learning and e-assessment activities

all the assessment methods described above share one essential element: they strongly interrelate learning and e-assessment activities. instead of a fixed-schedule evaluation of students’ learning, assessment activities are designed and integrated within learning activities over an extended period, as recommended by several authors (e.g., black & wiliam, 2018; redecker et al., 2012; romeu fontanillas et al., 2016). they are assessment for and as learning (black & wiliam, 2018), in which instructors can provide feedback to help students progress, and students themselves can reflect on their learning and self-regulate to enhance their competencies. authentic e-assessment methods such as those described above then become continuous assessment processes, during which instructors provide formative feedback and students iteratively improve their work (romeu fontanillas et al., 2016), a process we describe next.

implement continuous evaluation processes with instructor feedback at multiple time points

some of the e-assessment methods described above, such as e-portfolios and blog posts, foster communication between students and instructors during the learning and assessment process, as recommended by redecker et al. (2012). in the case of e-assessments such as video presentations, infographics, and podcasts, which are often produced by student teams, we recommend that each team have an online collaborative discussion channel that is also accessible to the instructor; this facilitates collaboration between team members and makes the assessment production process more transparent for the instructor. as lafuente martínez et al. (2015) put it, “the more the instructor knows about the student’s learning process, the better he or she will be able to support it” (p. 11). by establishing a line of communication during the e-assessment process rather than considering only the final production, instructors and students enter into an interactive and ongoing discussion about the assessment, often referred to as a dialogic approach to feedback (lafuente martínez et al., 2015). whereas students inform each other (when in teams) and the instructor of their current learning and where they are in the assessment process, the instructor provides formative feedback that allows students to adjust their work to the assessment goals and expectations. also, having students explicitly write about where they are in the assessment process promotes the development of self-regulation in learning, a form of self-assessment and feedback (nicol & macfarlane-dick, 2006). lafuente martínez et al. (2015) also found that having students work in teams for e-assessments increased the transparency of the assessment production process, thereby enhancing opportunities for instructors to provide ongoing feedback to improve the final assessment production. furthermore, they advised that when an e-assessment is implemented in a blended or face-to-face learning environment, instructors should make use of the full potential of online collaborative discussion channels to provide relevant and meaningful feedback to the teams of students instead of relying on face-to-face feedback alone (which the students could interpret as a lack of support).
therefore, instructors should be aware that monitoring the student assessment process and providing meaningful feedback requires more time than simply evaluating a final production. since this could be a challenge with large groups of students, instructors will benefit from planning how they wish to monitor their students’ assessment processes and from clearly communicating to students what kind and level of feedback they can expect, to prevent any disappointment or misunderstanding.

conclusion

in this essay, we have presented five e-assessment methods that we have used in our classrooms and that focus on authenticity and 21st-century-skill development. the assessment tasks approximate those that students will face in their future careers, but they also promote student learning and mastery of higher-order skills. the lessons learned from authentic e-assessments in our practice underscore the importance of considering the values of social justice, equity, and equal opportunity to succeed. this requires providing students with opportunities and choices for digital activities, topics, and/or tools that meet their diverse needs and preferences for engagement and motivation to learn. ongoing use of digital tools, as well as technology support when needed, must be available throughout the semester. in addition, authentic e-assessments must incorporate clear instructions and meet planned and preannounced objectives, as well as be aligned with preparatory activities completed with digital tools throughout the semester (aligning technology, pedagogy, and context). finally, digital technologies facilitate opportunities for exchange and interaction among students and between teachers and students; authentic e-assessments must allow for multiple and varied opportunities for synchronous and asynchronous feedback (between students and from the instructor) so that teachers and students can monitor their learning progress.

references

alruwais, n., wills, g., & wald, m. (2018). advantages and challenges of using e-assessment. international journal of information and education technology, 8(1), 34–37. https://doi.org/10.18178/ijiet.2018.8.1.1008
anderson, t. (2019). challenges and opportunities for use of social media in higher education. journal of learning for development, 6(1), 6–19.
appiah, d. m. (2018). e-assessment in higher education: a review. international journal of business management and economic research, 9(6), 1454–1460.
beilock, s. l. (2008). math performance in stressful situations. current directions in psychological science, 17(5), 339–343. https://doi.org/10.1111/j.1467-8721.2008.00602
belt, e. s., & lowenthal, p. r. (2021). video use in online and blended courses: a qualitative synthesis. distance education, 42(3), 410–440. https://doi.org/10.1080/01587919.2021.1954882
bezerra, j. d. m. (2018, october 21–23). collaborative testing strategies in a computing course [oral presentation]. international association for development of the information society conference, budapest, hungary.
biggs, j. (1996). enhancing teaching through constructive alignment. higher education, 32(3), 347–364.
black, p., & wiliam, d. (2018). classroom assessment and pedagogy. assessment in education: principles, policy & practice, 25(6), 551–575. https://doi.org/10.1080/0969594x.2018.1441807
brady, m., devitt, a., & kiersey, r. a. (2019). academic staff perspectives on technology for assessment (tfa) in higher education: a systematic literature review. british journal of educational technology, 50(6), 3080–3098. https://doi.org/10.1111/bjet.12742
capelle, c. (2018). bilan d'expérimentation sur l'éducation au numérique. ims laboratory, university of bordeaux. https://hal.archives-ouvertes.fr/hal-01897409
catterall, j., & davis, j. (2013). supporting new students from vocational education and training: finding a reusable solution to address recurring learning difficulties in e-learning. australasian journal of educational technology, 29(5), 640–650.
chekour, m., laafou, m., & janati-idrissi, r. (2015). l'évolution des théories de l'apprentissage à l'ère du numérique. revue de l'epi (enseignement public et informatique), 1–8.
chiocchio, f., grenier, s., o'neill, t. a., savaria, k., & willms, j. d. (2012). the effects of collaboration on performance: a multilevel validation in project teams. international journal of project organisation and management, 4(1), 1–37. https://doi.org/10.1504/ijpom.2012.045362
cortright, r. n., collins, h. l., rodenbaugh, d. w., & dicarlo, s. e. (2003). student retention of course content is improved by collaborative-group testing. advances in physiology education, 27(3), 102–108. https://doi.org/10.1152/advan.00041.2002
cozma, a.-m. (2021). l'examen collaboratif : étude de cas en contexte universitaire finlandais. revue internationale de pédagogie de l'enseignement supérieur, 37(2). https://doi.org/10.4000/ripes.3116
dahlström, ö. (2012). learning during a collaborative final exam. educational research and evaluation, 18(4), 321–332.
duplàa, e., & talaat, n. (2011). connectivisme et formation en ligne. distances et savoirs, 9(4), 541–564.
fluck, a. e. (2019). an international review of eexam technologies and impact. computers & education, 132, 1–15. https://doi.org/10.1016/j.compedu.2018.12.008
gilley, b. h., & clarkston, b. (2014). collaborative testing: evidence of undergraduate students. research and teaching, 43(3), 83–91.
gulikers, j., bastiaens, t., & kirschner, p. (2004). a five-dimensional framework for authentic assessment. educational technology research and development, 52(3), 67–86. https://doi.org/10.1007/bf02504676
he, j., & huang, x. (2020). using student-created videos as an assessment strategy in online team environments: a case study. journal of educational multimedia and hypermedia, 29(1), 35–53.
heilporn, g., lakhal, s., & bélisle, m. (2021). an examination of teachers' strategies to foster student engagement in blended learning in higher education. international journal of educational technology in higher education, 18(1), 1–25. https://doi.org/10.1186/s41239-021-00260-3
jordan, s. e. (2013). e-assessment: past, present and future. new directions in the teaching of physical sciences, 9, 87–106.
kapitanoff, s. (2009). collaborative testing: cognitive and interpersonal processes related to enhanced test performance. active learning in higher education, 10(1), 56–70.
knierim, k., turner, h., & davis, r. k. (2015). two-stage exams improve student learning in an introductory geology course: logistics, attendance, and grades. journal of geoscience education, 63(2), 157–164.
krathwohl, d. r. (2002). a revision of bloom's taxonomy: an overview. theory into practice, 41, 212–218.
lafuente martínez, m., álvarez valdivia, i. m., & remesal ortiz, a. (2015). making learning more visible through e-assessment: implications for feedback. journal of computing in higher education, 27(1), 10–27. https://doi.org/10.1007/s12528-015-9091-8
leight, h., saunders, c., calkins, r., & withers, m. (2012). collaborative testing improves performance but not content retention in a large-enrollment introductory biology class. cbe-life sciences education, 11, 392–401. https://doi.org/10.1187/cbe.12-04-0048
lison, c. (2020). la présentation orale en contexte de formation à distance : évaluer un pecha kucha. évaluer. journal international de recherche en éducation et formation, numéro hors-série, 1, 173–180.
lusk, m., & conklin, l. (2003). collaborative testing to promote learning. journal of nursing education, 42(3), 121–124. https://doi.org/10.3928/0148-4834-20030301-07
mahoney, j. w., & harris-reeves, b. (2019). the effects of collaborative testing on higher order thinking: do the bright get brighter? active learning in higher education, 20(1), 25–37.
meyer, a., rose, d. h., & gordon, d. (2014). universal design for learning: theory and practice. cast professional publishing. http://udltheorypractice.cast.org/
muir, s. p., & tracy, d. m. (1999). collaborative essay testing: just try it! college teaching, 46(1), 33–35. https://doi.org/10.1080/87567559909596077
nicol, d. j., & macfarlane-dick, d. (2006). formative assessment and self-regulated learning: a model and seven principles of good feedback practice. studies in higher education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
ollier-malaterre, a. (2018). la compétence numérique de gestion des frontières sur les réseaux sociaux numériques : un capital culturel technologique à la bourdieu. lien social et politiques, (81), 121–137. https://doi.org/10.7202/1056307ar
raynault, a., lebel, p., brault, i., vanier, m. c., & flora, l. (2021). how interprofessional teams of students mobilized collaborative practice competencies and the patient partnership approach in a hybrid ipe course. journal of interprofessional care, 35(4), 574–585.
redecker, c., punie, y., & ferrari, a. (2012). e-assessment for 21st century learning and skills. in a. ravenscroft, s. lindstaedt, c. d. kloos, & d. hernández-leo (eds.), 21st century learning for 21st century skills (lecture notes in computer science, vol. 7563, pp. 292–305). springer. https://doi.org/10.1007/978-3-642-33263-0_23
resta, p., laferrière, t., mclaughlin, r., & kouraogo, a. (2018). issues and challenges related to digital equity: an overview. in j. voogt, g. knezek, r. christensen, & k.-w. lai (eds.), second handbook of information technology in primary and secondary education. springer international handbooks of education. https://doi.org/10.1007/978-3-319-71054-9_67
rolim, c., & isaias, p. (2019). examining the use of e-assessment in higher education: teachers and students' viewpoints. british journal of educational technology, 50(4), 1785–1800. https://doi.org/10.1111/bjet.12669
romeu fontanillas, t., romero carbonell, m., & guitert catasús, m. (2016). e-assessment process: giving a voice to online learners. international journal of educational technology in higher education, 13(1), 1–20. https://doi.org/10.1186/s41239-016-0019-9
rose, d. h., robinson, k. h., hall, t. e., coyne, p., jackson, r. m., stahl, w. m., & wilcauskas, s. l. (2018). accurate and informative for all: universal design for learning (udl) and the future of assessment. in s. n. elliott, r. j. kettler, p. a. beddow, & a. kurz (eds.), handbook of accessible instruction and testing practices (pp. 167–180). springer international publishing. https://doi.org/10.1007/978-3-319-71126-3_11
sandahl, s. s. (2010). collaborative testing as a learning strategy in nursing education. nursing education perspectives, 31(3), 142–147.
siemens, g. (2005). connectivism: learning as network-creation. astd learning news, 10(1), 1–28.
sotiriadou, p., logan, d., daly, a., & guest, r. (2020). the role of authentic assessment to preserve academic integrity and promote skill development and employability. studies in higher education, 45(11), 2132–2148. https://doi.org/10.1080/03075079.2019.1582015
spector, j. m., ifenthaler, d., sampson, d., yang, l., mukama, e., warusavitarana, a., dona, k. l., eichhorn, k., fluck, a., huang, r., bridges, s., lu, j., ren, y., gui, x., deneen, c. c., diego, j. s., & gibson, d. c. (2016). technology enhanced formative assessment for 21st century learning. journal of educational technology & society, 19(3), 58–71. http://www.jstor.org/stable/jeductechsoci.19.3.58
stearns, s. a. (1996). collaborative exams as learning tools. college teaching, 44(3), 111–112.
stödberg, u. (2012). a research review of e-assessment. assessment & evaluation in higher education, 37, 591–604.
st-onge, c., ouellet, k., lakhal, s., dubé, t., & marceau, m. (2022). covid-19 as the tipping point for integrating e-assessment in higher education practices. british journal of educational technology, 53(2), 349–366. https://doi.org/10.1111/bjet.13169
university of british columbia. (2020, august 17). interview with dr. roberta borgen (neault): learning in a pandemic. https://ets.educ.ubc.ca/learning-in-a-pandemic-roberta-borgen/
waycott, j., sheard, j., thompson, c., & clerehan, r. (2013). making students' work visible on the social web: a blessing or a curse? computers & education, 68, 86–95. https://doi.org/10.1016/j.compedu.2013.04.026
wiggins, g. (1990). the case for authentic assessment. practical assessment, research, and evaluation, 2(2).
woody, w. d., woody, l. k., & bromley, s. (2008). anticipated group versus individual examinations: a classroom comparison. teaching of psychology, 35(1), 13–17. https://doi.org/10.1080/00986280701818540
zimbardo, p. g., butler, l. d., & wolfe, v. a. (2003). cooperative college examinations: more gain, less pain when students share information and grades. journal of experimental education, 71(2), 101–125. https://doi.org/10.1080/00220970309602059
zipp, j. f. (2017). learning by exams: the impact of two-stage cooperative tests. teaching sociology, 35(1), 62–76.

journal of teaching and learning with technology, vol. 2, no. 2, december 2013, pp. 60–78.
fostering collaboration and learning in asynchronous online environments

karen m. gibson
college of education and human services, university of wisconsin oshkosh, 800 algoma blvd, oshkosh, wi 54901
gibsonk@uwosh.edu

abstract: this case study, based on social constructivist learning theory, analyzed the quality of interaction and learning taking place during asynchronous discussions in a graduate-level course by focusing on the types of instructional strategies employed to foster discussion. qualitative and quantitative procedures were used to analyze knowledge construction processes based on previously conducted research that provided a set of indicators for replication in coding and comparison of results. the role of facilitator was closely monitored in relation to the quality of responses in regard to knowledge construction in order to determine the types of instructional strategies best suited to draw students into online discussions that are constructivist, collaborative approaches to building knowledge.

keywords: constructivist learning, collaborative learning, online learning, computer-mediated instruction

i. introduction.

online and blended learning has grown significantly in recent years. spurred by the increased interest among faculty in designing effective learning experiences, this rapid growth requires a focus on the types of instructional strategies that will best serve as effective tools to draw students into online discussions that are constructivist, collaborative approaches to creating meaning. while significant research has been conducted on the quantitative nature of online discussion participation (henri, 1992; harasim, 1993; hillman, 1999), far more research should focus on what happens to learning within this environment (schrire, 2006).

the asynchronous online discussion environment offers unique opportunities for students and instructors. since participation is not required at a specified time, or during a structured must-be-present window, students have the luxury of time to write and even rewrite their responses. without specified time constraints, students can take time to review posts, reflect on the direction they wish to move the discussion, and, at their discretion, end or begin new discussion strands (de wever, schellens, valcke, & van keer, 2006; pena-shaff & nicholls, 2004). this careful deliberation and articulation of ideas has the potential to improve students’ writing and thinking skills. most importantly, it makes an online discussion a collaborative, reflective activity (pena-shaff & nicholls, 2004) that is characterized as dialogic in nature (schrire, 2006). such interactions hold interpersonal significance and highlight the importance of learner interaction in view of knowledge construction. “the need to articulate one’s own argument in this type of text-based environment encourages students to engage in analytical and reflective action. this process helps students construct purposeful arguments and transmit them to an audience” (pena-shaff, martin, & gay, 2001, p. 65). careful analysis of such interactions can help determine whether or not they facilitate critical thinking and encourage the process of knowledge construction.
within a traditional face-to-face classroom, instructors often spend time preparing students for and guiding students through appropriate, effective discussions, incorporating a variety of strategies based on the pace of the class, the reactions of students to the content and one another, and a general reading of body language and other non-verbal cues. such discussion techniques in the face-to-face classroom are not readily transferable to the online environment. “without having face-to-face interaction, the absence of nonverbal cues and contextual information, it is a formidable task to elicit participants’ sense of social presence in a learning community with only text based asynchronous discussion board communication tools” (an, shin, & lim, 2009, p. 751). new pedagogical approaches must be developed and honed. a variety of instructional strategies can be applied to encourage student learning online, especially those activities designed for online and blended learning in an asynchronous environment.

graduate students bring another dimension to this environment, as they enter with a level of confidence gained from more life experiences and an eagerness to share that learned-in-the-trenches knowledge. they often bring an eagerness to serve in a more active role in the online environment, allowing the instructor to become a less dominant presence. assigning roles is a common means of generating meaningful discussion and knowledge construction (de wever, van keer, schellens, & valcke, 2010; baran & correia, 2009). according to baran and correia (2009), it is helpful to allow for student-led facilitation strategies to overcome the challenges of instructor-dominated facilitation. an instructor as the center of discussion has the potential to create what rourke and anderson (2002) describe as an “authoritarian presence” (p. 4). for many instructors, it is necessary to experience a paradigm shift wherein student-dominated discussions replace instructor-dominated facilitations. according to harasim (1990), the key differences between online and face-to-face discussions are time and place dependence and the richness and structure of communication. discussion techniques frequently used by instructors in the face-to-face setting have to be modified in order to facilitate discussion in the electronic forum. finding the most effective pedagogical approach for the online environment can be a challenging task.

the classroom setting allows for the careful development of a community of learners. this same sense of community can and should be developed in the online environment. interaction among students in a discussion forum helps them apply and integrate new knowledge in the course of engaging in group interaction (wang, 2010). as students construct meaning through interaction with others, they are participating in a community of learning (rourke, anderson, garrison, & archer, 2001). according to palloff and pratt (2007), “the learning community is the vehicle through which learning occurs online. it is the relationships and interactions among people through which knowledge is generated” (p. 15). the importance of dialogue is founded on principles of social constructivist theory. social constructivists consider individual learning as socially mediated, incorporating such principles as active learning, self-reflection, authentic learning, and collaborative learning. learning is collaborative in nature; group settings can further foster learning (schrire, 2004).
asynchronous online environments can provide students with opportunities such as self-reflection, elaboration, and in-depth analysis of course content, allowing for purposeful construction of knowledge (pena-shaff & nicholls, 2004). rourke and anderson (2002) assert the importance of online discussion as an essential activity for co-constructing knowledge, since “explaining, elaborating, and defending one’s position to others forces learners to integrate and elaborate knowledge in ways that facilitate higher-order learning” (p. 3). this level of interaction helps produce learning. dennen and wieland (2007) indicate, “learners must interact in some particular ways, engaging with each other and course material at deep (as opposed to surface) levels, which lead toward negotiation and internalization of knowledge rather than just rote memorization of knowledge” (p. 283).

according to andresen (2009), it is important for instructors to make asynchronous discussions successful. in order for this to occur, two important components must be carefully considered: the role of the instructor and how to achieve deeper/higher learning. the work of de wever, van keer, schellens, and valcke (2009) indicates that a significant positive impact of assigning roles to students can be achieved, particularly if the role assignments occur early in the instructional period. facilitation becomes a shared responsibility among instructors and students. according to baran and correia (2009), the majority of research focuses on instructor facilitation strategies, and only a limited number of researchers have examined the use of facilitation strategies in peer-facilitation contexts. the online environment presents itself as a critical tool for constructing, representing, and mediating discussions between students. facilitating learners to elaborate their knowledge in peer discussions and acquire multiple perspectives on a topic can be achieved through the assigning of roles. roles assigned to students have the potential to increase knowledge construction through social negotiation outside the confines of the brick-and-mortar classroom. simply placing students in groups does not automatically bring about collaborative learning or effective interaction. a purposeful instructional design building on collaborative learning environments must embed a certain amount of structure, such as setting clear goals and defining the tasks (de wever et al., 2009). in the case of this study, the purposeful instructional design included specific facilitation requirements for each of three discussions under investigation.

the purpose of this study was to determine the impact of various facilitation strategies on constructing knowledge and increasing collaboration in the asynchronous online discussion environment. an analysis of the interactions within online discussions designed as part of a hybrid delivery of instruction was completed in order to characterize successful student-led facilitation strategies in asynchronous discussions.

a. framework.

the guiding framework for this work is learning as social construction of meaning. according to social constructivist theory, when students are presented with learning environments that encourage active participation, interaction, and dialogue, these become opportunities to create meaning from new experiences (jonassen, davison, collins, campbell, & bannan haag, 1995).
constructivist theory suggests that learning is more effective when students are given the opportunity to discuss ideas, experiences, and perceptions with their peers.

based on the constructivist framework of learning, educational environments should provide activities and opportunities for students to articulate and reflect on the content under study, to negotiate meaning with the self (reflective activity) and with others, and to apply the knowledge learned in real life situations. in this manner, learning becomes an active process in which individuals create meaning by analyzing, discussing and experiencing new situations and applying new concepts. (pena-shaff & nicholls, 2004, p. 245)

rourke and anderson (2002) conclude that, from a social constructivist perspective, online discussions create opportunities for students to construct meanings together and integrate new knowledge into their prior experiences. the asynchronous online discussion environment provides the context and tools for students to engage in meaningful learning experiences. “theoretical models of collaborative learning consider the discourse in a computer conference as both reflecting and shaping the cognitive processes” (schrire, 2006, pp. 52–53). schrire (2006) goes on to note that the cognitive processes are of a social nature in that they arise out of, and contribute to, the interactions among the participants.

b. choosing a methodological approach.

early research on online learning focused on quantifiable variables; however, the early 1990s brought an increased emphasis on the quality of learning and learning interaction (henri, 1992; hillman, 1999; pena-shaff, martin, & gay, 2001). creating a study that moved beyond the quantifiable variables was important to this researcher in developing strategies appropriate and effective in the online environment. qualitative research, from a philosophical perspective, is based on a view that there are “multiple realities” (schrire, 2006, p. 52). mason (1992) recommends the use of content analysis in studies on computer conferencing. additionally, merriam (2001) asserts that performing a content analysis within the case study framework allows a study to move from mere description to meaningful interpretation. content analysis is not only compatible with the case study approach (schrire, 2006); it also provides the basis for interpretation in context (cronbach, 1975). this study, different from a yes-or-no question-and-answer approach frequently associated with quantitative research, develops around what merriam (2001) describes as a focus on what happens in a given context, how the events take place, and why they occur. a case study approach incorporating both quantitative (participation levels, percentages of indicators covered) and qualitative (content analysis of discussion posts) design proved the most effective approach for this study. the application of three different treatments in the form of facilitation approaches provided an opportunity for comparison between discussions. finally, using the knowledge construction category system previously developed by pena-shaff and nicholls (2004) allowed for a comparison to their study regarding the creation of knowledge in the online setting.

ii. methodology.

a. context.

this study took place in the context of a graduate-level course at a comprehensive university in the midwestern united states.
the master of science degree program, housed in the university’s college of education and human services, includes a 3-credit required course focused on the theoretical background of educational systems in the united states. the degree program was designed for students seeking increased formal and informal leadership skills in pre-kindergarten to 12th grade (pk-12) settings, higher education institutions, non-profit organizations, or any other systems focused on education and leadership. the hybrid nature of the course incorporates both face-to-face and online components, with students meeting on campus every other week and in the online environment during the opposite weeks. also known as a blended course, this approach combines face-to-face instruction with computer-mediated instruction as an alternative to the traditional delivery model. such blending has been found to contribute to both achievement and student satisfaction (roblyer & wiencke, 2004) and has become an increasingly popular delivery model in higher education (an & frick, 2006; ng & cheung, 2007). this study focused on the analysis of knowledge construction in online class discussions.

b. participants.

the course under study during the spring 2012 semester included 17 women and 7 men (n = 24). all 24 successfully completed the course. the researcher served as the instructor of the course. all 24 students were pk-12 teachers, counselors, or library media specialists seeking a master of science degree. eleven of the participants were also seeking pk-12 administrative licensure.

c. discussion assignments.

throughout the semester, there were six online discussion sessions. the first discussion assignment focused on introductory statements from participants. this was meant to provide some instruction on using the desire2learn (d2l) discussion features and comfort in navigating this particular platform. d2l is the university-adopted platform serving multiple functions, one of which is its online learning environment. the final two discussions focused on group project progress. the study, therefore, focused on the discussion assignments in weeks 4, 6, and 8 of the 14-week semester. each of these discussion assignments was different in regard to the type of facilitation required. the week 4 discussion treatment was a loosely structured (non-facilitated) approach. the week 6 discussion required each student to facilitate a specific topic within the broader discussion. the week 8 discussion treatment was a single volunteer serving as facilitator for the overall discussion. each discussion assignment was open for a 10-day window. during face-to-face instruction time, information was provided to students regarding quality posts. handouts to further clarify were also provided (see appendix a). students were placed in groups of four for each discussion, and group membership changed with each discussion (a sketch of this rotation follows at the end of this subsection). the instructor monitored the online discussions, providing comments and feedback during face-to-face classroom time but not directly participating in the online group discussions.

the purpose of the study was to analyze the quality of interaction and learning taking place during asynchronous discussions by focusing on the types of instructional strategies employed to foster knowledge building in a collaborative online environment. using three different discussion techniques allowed for comparison of the three in terms of levels of participation and depth of knowledge construction.
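as a logistical aside, the rotating-group setup described above (groups of four, with membership reshuffled for each discussion) is simple to automate. the sketch below is a minimal illustration with placeholder student names; it is not the actual procedure used in the study, only one way such rotation could be generated.

```python
# sketch: place 24 students into groups of 4, reshuffled for each of the
# three study discussions so that group membership changes every time.
import random

students = [f"student_{i:02d}" for i in range(1, 25)]  # n = 24, placeholder names

def make_groups(roster, size=4, seed=None):
    """shuffle the roster and split it into consecutive groups of `size`."""
    rng = random.Random(seed)  # seeding makes each week's grouping reproducible
    shuffled = roster[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

for week in (4, 6, 8):  # the three discussion assignments under study
    groups = make_groups(students, size=4, seed=week)
    print(f"week {week}: {len(groups)} groups of {len(groups[0])}")
```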
d. data collection and analysis.

both quantitative and qualitative approaches were employed to describe and analyze levels of participation, interaction, and meaning construction. the quantitative data included the total number of messages posted for each treatment, the percentage of overall messages posted per treatment, and the percentage of knowledge construction posts per the work of pena-shaff and nicholls (2004). the administrative functions of d2l were used to note frequency of participation and threads of interaction; however, since paragraphs were the unit of measure for this study, those data were of far less importance than the content of the messages. the qualitative data of this study consisted of the content analysis of the three discussion assignments.

content analysis was conducted on the transcripts of the discussions each week under study. rourke and anderson (2004) suggest that instead of developing new coding schemes, researchers should use schemes that have been developed and used in previous research, fostering replicability and the validity of the instrument (stacey & gerbic, 2003; hannafin & kim, 2003). this study, therefore, utilized the coding schema developed by pena-shaff, martin, and gay (2001) and further modified by pena-shaff and nicholls (2004). using the existing category system, or set of indicators, allowed for coding and categorizing of discussions and the opportunity to compare results to the patterns identified in the work of pena-shaff and nicholls (2004). as the previous study had already revealed the types of posts that could be identified as knowledge building, the current study used those findings to better identify the strategies that foster knowledge building. the codes and descriptions of this model can be viewed in table 1.

the discussion transcripts from the three selected discussion assignments were coded by the instructor/researcher. an initial coding was completed for each week under study. as a follow-up at the end of the data-collecting weeks, a second coding of the messages was conducted to check for ambiguity in the coding. paragraphs were chosen as the unit of analysis. each discussion contribution reflects a level of social knowledge construction; these levels were determined by applying pena-shaff and nicholls’ knowledge construction category system and indicators. each message (paragraph) received one code. when a message comprised multiple levels of knowledge construction, the most prominent was assigned. for example, when a student provided clarification of a previous statement but went on to provide interpretation of the discussion topic, the more prominent or more elaborated-upon indicator was assigned.
table 1. knowledge construction category system and indicators (pena-shaff & nicholls, 2004, computers & education, 42(3), 243–265).

question. description: gathering unknown information, inquiring, starting a discussion, or reflecting on the problems raised. indicators: information-seeking questions; discussion questions; reflective questions.

reply. description: responding to other participants’ questions or statements. indicators: direct responses to information-seeking questions; elaborated responses that include information sharing, clarification and elaboration, and interpretation.

clarification. description: identifying and elaborating on ideas and thoughts. indicators: stating or identifying ideas, assumptions, and facts; linking facts, ideas, and notions; identifying or reformulating problems; explaining ideas presented by using examples, describing personal experiences, decomposing ideas, identifying or formulating criteria for judging possible answers or to justify one’s own statements (making lists of reasons for or against a position), arguing one’s own statements, defining terms, establishing comparisons, presenting similarities and differences, listing advantages or disadvantages, using analogies, and identifying causes and consequences.

interpretation. description: using inductive and deductive analysis based on facts and premises posed, making predictions, and building hypotheses; includes reflection and analysis when originating from the clarification point. indicators: reaching conclusions; making generalizations; predicting; building hypotheses; summarizing; proposing solutions.

conflict. description: debating other participants’ points of view, showing disagreement with information in previous messages, and, taken to an extreme, friction among participants. indicators: presenting alternative/opposite positions (debating); disagreements; friction.

assertion. description: maintaining and defending ideas questioned by other participants by providing explanations and arguments that defend previously stated statements. indicators: re-statement of assumptions and ideas; defending one’s own arguments by further elaboration on the original ideas.

consensus building. description: trying to attain a common understanding of the issues in debate. indicators: clarifying misunderstandings; negotiating; reaching consensus or agreement.

judgment. description: making decisions, appreciations, evaluations, and criticisms of ideas, facts, and solutions discussed, as well as evaluating text orientation and authors’ positions. indicators: judging the relevance of solutions; making value judgments; topic evaluation; evaluating text orientation and authors’ position about the subject being discussed.

reflection. description: acknowledging learning something new, judging the importance of discussion topics in relation to one’s learning. indicators: self-appraisal of learning; acknowledging learning something new; acknowledging the importance of the subject being discussed in one’s learning.

support. description: establishing rapport, sharing feelings, agreeing with other people’s ideas either directly or indirectly, and providing feedback on other participants’ comments. indicators: acknowledging other participants’ contributions and ideas; empathy (sharing of feelings with other participants’ comments, e.g., “i felt the same way…”); feedback.

other. description: includes mixed messages difficult to categorize and social statements. indicators: messages not identified as belonging to a specific category; social comments not related to the discussions (greetings, jokes, etc.); emotional responses.

table 2. total numbers and percentages by treatment.
treatment #1: 212 paragraphs posted; 31.5% of all paragraphs posted; 64% pkcc paragraphs.
treatment #2: 340 paragraphs posted; 50% of all paragraphs posted; 69.1% pkcc paragraphs.
treatment #3: 124 paragraphs posted; 18% of all paragraphs posted; 87% pkcc paragraphs.
total paragraphs posted: 676.
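once each paragraph carries exactly one code, the table 2 tallies reduce to counting. the sketch below shows one way they could be reproduced; the coded lists are toy stand-ins for the study’s transcripts, and pkcc here anticipates the six primary knowledge construction categories defined in the results section.

```python
# sketch: tally one-code-per-paragraph data by treatment and report each
# treatment's share of all posts and its share of pkcc-coded paragraphs.
# the sample codes below are illustrative, not the study's data.
from collections import Counter

PKCC = {"clarification", "interpretation", "conflict",
        "assertion", "judgment", "reflection"}

coded = {  # treatment -> one code per coded paragraph (toy example)
    "treatment 1": ["clarification", "question", "reply", "judgment"],
    "treatment 2": ["clarification", "question", "support"],
    "treatment 3": ["clarification", "interpretation"],
}

total = sum(len(codes) for codes in coded.values())
for treatment, codes in coded.items():
    counts = Counter(codes)
    pkcc_share = 100 * sum(counts[c] for c in PKCC) / len(codes)
    print(f"{treatment}: {len(codes)} paragraphs "
          f"({100 * len(codes) / total:.1f}% of all posts), "
          f"{pkcc_share:.1f}% pkcc")
```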
iii. results.

as was the case in the pena-shaff and nicholls (2004) study, content analysis was used to identify the most common patterns of discourse. the category system previously developed and applied in that study was applied to the current study. according to pena-shaff and nicholls (2004), “statements of clarification, interpretation, conflict, assertion, judgment and reflection appear to be most directly related to the process of knowledge construction” (p. 252). for discussion purposes, this researcher has labeled these six indicators as primary knowledge construction categories (pkcc).

treatment one, a loosely structured (non-facilitated) approach, included a total of 212 paragraphs posted, or 31.5% of the overall 676 posted during the study weeks; 64% of these were coded as pkcc posts (see figure 1).

figure 1. percentage of knowledge construction in a loosely structured open discussion (treatment #1): clarification, 39.2%; question, 12.7%; reply, 12.7%; judgment, 11.3%; support, 10%; interpretation, 8.5%; assertion, 1.8%.

treatment two, where each student was required to take responsibility for facilitating a specific topic within the overall discussion, consisted of 340 paragraphs posted, or 50% of the total posts under study. this treatment generated 69.1% pkcc posts (see figure 2).

figure 2. percentage of knowledge construction with facilitation required by each participant (treatment #2): clarification, 49.4%; question, 17.1%; judgment, 9.7%; reply, 7.9%; interpretation, 6.5%; support, 4.1%; conflict, 2%.

the final treatment, where an individual in each discussion group volunteered to serve as facilitator for the length of the discussion, generated a total of 124 paragraph posts. this small number, only 18% of the total study posts, also generated the highest level of pkcc posts, with 87% falling into the categories identified by pena-shaff and nicholls (2004) as knowledge construction categories (see figure 3).

figure 3. percentage of knowledge construction with a volunteer facilitator (treatment #3): clarification, 59.7%; interpretation, 24.2%; question, 3.2%; reflection, 2.4%; other, 2.4%; reply, 1.6%; judgment, 0.8%.

iv. discussion.

this small-scale case study provided a great deal of information regarding the role of facilitation in a graduate-level hybrid-delivery course. according to andresen (2009), “the primary difficulty in making any assessment of an asynchronous discussion forum is the huge volume of data that are available to be assessed…” (p. 252). despite the small number of participants in this study, there was a “huge volume” of data, with a total of 676 paragraphs to be coded. had the volume been low, this researcher would have felt uncomfortable making generalizations in regard to the facilitation types as knowledge construction contributors.

hew, cheung, and ng (2009) conducted a study to determine what motivates students to contribute to student-facilitated discussions. their findings indicated that 66% of the study participants agreed or strongly agreed that familiarity with the discussion facilitator motivated them to contribute more frequently to message postings. the findings by hew, cheung, and ng clearly address the impact of the hybrid nature of a course versus a fully online version. the face-to-face sessions provide opportunity for community building that carries over into the electronic environment.
activities conducted in face-to-face settings to build community likely contributed to the strong online presence found in this study. the significant volume of data may be directly attributable to the learning community previously built.

another study relevant to the findings of the current study was conducted by baran and correia (2009), who, similar to the current study, used three separate facilitation treatments, conducting each as a separate mini-case. each case represented a different facilitation experience in their search to discover whether peer-facilitation strategies could be used to overcome the challenges of instructor-led facilitation, enhance the sense of a learning community, and encourage students’ participation. what they found was that regardless of the type of peer-discussion facilitation, whether highly structured, inspirationally facilitated, or practice-oriented, peer facilitation can help generate innovative ideas, motivate students to participate actively, and provide an atmosphere for involvement and commitment. also relevant was that it did not matter if the group sizes or memberships changed; all three treatments promoted meaningful dialogue, produced high levels of participation, and included quality conversation. the current study found the same to be true.

as noted earlier, previously developed indicators served the critical role of providing categories of knowledge construction. content analysis was used to identify the most common patterns of discourse, just as was done in the pena-shaff and nicholls (2004) study, and the category system indicators were applied to the data. in their work, pena-shaff and nicholls determined six categories as indicators of knowledge construction. in the current study, those six categories were evaluated in each of the three treatments to determine levels of knowledge construction. a volunteer facilitator during a group discussion generated the highest level of pkcc posts, providing insights into discussion strategies that support learning in the online environment.

in addition to the six indicators labeled as pkcc (clarification, interpretation, conflict, assertion, judgment, and reflection), pena-shaff and nicholls identified secondary levels of indicators in relation to knowledge construction. questions, according to pena-shaff and nicholls (2004), also indicate that students are trying to make sense of and understand topics being discussed. while quality reflective questions can certainly serve this purpose, this study found questions to be generally overused as simple discussion generation. during treatment two, facilitation was apparently defined by students as generating questions in order to start and/or continue an online discussion. this is not necessarily counterproductive, except that several questions were raised without any follow-up by other discussion participants. in fact, 17 questions were raised during the second discussion period (treatment two) without any response.

of the categories identified, clarification statements formed 48% of the total (676) paragraphs. this means that students spent a great deal of time explaining and elaborating upon their ideas.
pena-shaff and nicholls (2004) had similar results. they noted, "although in many cases clarification statements began as messages either questioning or responding to previous messages, they tended to become reflective monologues in which students focused more on explaining their own ideas, perspective and beliefs than on addressing specific points in others' contributions" (p. 257). the following represents an example of this type of message:

that's a good question! i think i would have a 'senior' teacher who has bought into a school-wide system give a little presentation to the teacher who has not yet bought in. obviously, we want to make the teacher understand why we are implementing a system and to be able to see the benefits. i think having another co-worker explain the situation may make the teacher more receptive. also, i think the administrator should make unscheduled 'visits' to all classrooms. this is not to look for any problems or issues, but rather to keep current with curriculum and classroom tendencies in all grade levels.

interpretation statements (including inferences, conclusions, discussion summaries, generalizations, hypothesis building, and suggested solutions to stated problems) represented just 10% of the statements overall; however, this category also showed the greatest amount of change between the first two discussion treatments and the third: treatment one, 8.5%; treatment two, 6.5%; treatment three, 24.2%. this indicates that in the first two treatments, students did not provide a summary of ideas presented in a discussion thread. in the third discussion, with a single facilitator, this increased significantly.

conflict was almost non-existent in the discussions. despite this researcher spending time in class assuring students that a healthy discussion can include disagreements, and facilitating such disagreements in the face-to-face setting, students were loath to disagree in the online format, with a mere 1.4% of statements labeled as such. conflict has the potential to enhance discussion through quality debate. this, however, was absent from the three discussions analyzed. equally low in number (1.1%) were statements of assertion. this seems to indicate that very few students replied to messages that challenged ideas they had presented in previous messages.

it appears from this analysis that the treatment applied to each discussion influenced participation levels as well as knowledge construction. based on the categories established by pena-shaff and nicholls (2004), when discussion was left as an open forum without facilitation, less knowledge construction occurred. participation levels were, of course, much higher when all students were required to facilitate some portion of the discussion (treatment two); however, the pkcc stayed very close in percentage to when no facilitation occurred (treatment one). the greatest level of pkcc occurred when a student served as facilitator of the discussion, as was done in treatment three. the participation level declined for this treatment (only 18% of all paragraphs posted throughout the entire study period), but the overall quality of the discussion in terms of knowledge construction was far greater, with 87% of the paragraphs posted falling into the pkcc categories. gilbert and dabbagh (2005) offer one possible explanation for this significant difference between the treatments.
in a study examining the impact of highly structured versus less structured discussions, gilbert and dabbagh found that participation levels were higher when specific facilitator guidelines were provided. this was certainly the case in the current study, as guidelines were carefully spelled out for treatments one and two, those with the highest participation levels, but were far less structured in the third treatment, where participation dropped. a sample of discussion postings and their codes can be found in appendix b.

some limitations of the study must also be noted. benefits certainly exist in using a previously developed coding scheme; clearly, this allows for comparison and replication. pena-shaff and nicholls (2004) provide a variety of samples and examples to further clarify and define indicators. however, there is still the limitation of one researcher closely using the work of another without being able to fully guarantee reliability, as exact interpretation of terms, indicators, and samples is impossible. also, the student sample is very small, so it is difficult to make broad generalizations based on the results. finally, the researcher knows the students quite well, even serving as their program advisor as they complete their graduate degree. this increases the possibility of bias, as it is difficult to completely extricate the role of researcher from the role of course instructor.

v. conclusions.

this small-scale case study appears to support what other researchers have reported. online discussion forums have great potential to encourage critical thinking and the process of knowledge construction. the use of specific strategies to better encourage that potential is critical. finding ways to build community in the classroom setting that can be carried to the online setting is important. additionally, appropriate tasks, clear guidelines, and defined facilitator expectations also increase the likelihood of success. according to andresen (2009), "knowledge construction only occurs because of careful planning: clear, well-defined, well-crafted questions and discussion topics. without such planning and subsequent guidance, only lower levels of cognitive engagement will occur" (p. 252). the assignment of facilitation roles in this study showed an increase in the types of interactions believed to lead to increased knowledge construction. more specifically, having one participant volunteer in the role of facilitator led to the greatest increase in knowledge construction. keeping groups small also appears to optimize participation; group sizes of four were used in the current study. "interaction among course participants helps them apply and integrate newly gained knowledge in the course of engaging in group activity" (wang, 2010, p. 832).

unlike the findings in many online discussion studies, low participation was not a factor. students participated far beyond the minimum requirements and expectations. using activities in the classroom to encourage collaboration seemed to carry over into the online environment. had students not been given such opportunity to become a learning community face-to-face, it is likely they would not have been so willing to participate at the same level in the online setting.
as pena-shaff and nicholls (2004) found, courses that include online discussion as a supplement to regular class meetings need to carefully integrate this activity into the overall course design "so students see it as integral to the class and not as a disassociated activity" (p. 263). an appropriate follow-up study might examine the specific activities that are most effective in creating this environment.

similar to other studies, students clearly did not go back to follow conversation threads as frequently as this researcher would desire. it is quite evident from the low number of conflict, consensus-building, and assertion statements that students usually did not return to a discussion thread after posting a question, clarification, or interpretation. this researcher believes more work needs to follow on methods of motivating or challenging students to a greater extent, so that discussions become less reflective monologue and more dialogical interaction. courses that meet in a face-to-face structure with an online component allow for in-class work on clarifying these expectations.

social constructivist ideas about the most productive characteristics of learning environments can be supported through an online discussion opportunity where students reflect on others' ideas as well as their own. this is particularly true when students are required to share ideas in writing. according to the results of this study, the most effective instructional strategy of the three employed for a constructivist, collaborative approach is using a student volunteer as discussion facilitator. future research might focus on other strategies not included in this study which might prove even more effective.

appendixes.

appendix a. guidelines and rubric for online discussion requirements (educational leadership courses).

in this class, online discussions will be graded assignments. the purpose of the online portion of the course is to frame and promote collaborative learning. active and regular participation is not only an important part of your responsibilities to the class but also important for you in learning new course content and in developing your thoughts and positions on various topics. there are three very important rules for using online discussion boards:

1. please remember that the culture of mutual respect that is part of our face-to-face time extends into the virtual classroom environment.
2. participation is required.
3. participation alone is not enough. your posts require a thoughtful and meaningful approach. quality does count!

the total of your participation in a single discussion topic (noted as a weekly assignment) will be graded on a 10-point scale. please follow this protocol for posting and responding to online discussions:

a. you are expected to participate on multiple days. as this is an asynchronous discussion format, not everyone will be ready to post on the same day. check your discussion board on at least three different days to get the full effect of your group's discussion.
b. you should follow the specific posting requirements noted for each week. make sure you meet the minimum requirements for the week.
c. there is a rather fine line between a post that is too short and one that is too long. whether you agree or disagree with someone else's post, explain why with supporting evidence and concepts from the readings or a related experience.
include a reference, link, or citation when appropriate.
d. be organized in your thoughts and ideas.
e. incorporate correlations with the assigned readings or topics.
f. stay on topic.
g. provide evidence of critical, graduate-level thinking and thoughtfulness in your responses or interactions. avoid summarizing.
h. contribute to the learning community by being creative in your approaches to topics, being relevant in the presented viewpoints, and attempting to motivate the discussion.
i. be aware of grammar and sentence mechanics.
j. use proper etiquette. being respectful is critical.

a discussion (9-10 points). a-level postings:
§ are made in a timely fashion, giving others an opportunity to respond.
§ are thoughtful and analyze the content or question asked.
§ make connections to the course content and/or other experiences.
§ extend discussions already taking place or pose new possibilities or opinions not previously voiced.
§ are from participants aware of the needs of the community, motivate group discussion, and present a creative approach to the topic.
§ follow the conventions of quality writing.
§ meet the minimum posting requirement.

b discussion (8-9 points). b-level postings:
§ are made in a timely fashion, giving others an opportunity to respond.
§ are thoughtful and analyze the content or question asked.
§ make connections to the course content and/or other experiences, but connections are unclear, not firmly established, or not obvious.
§ contain novel ideas, connections, and/or real-world application but lack depth, detail, and/or explanation.
§ are from participants who interact freely and occasionally attempt to motivate discussion.
§ have few errors in writing conventions.
§ meet the minimum posting requirement.

c discussion (7 points). c-level postings:
§ are usually, but not always, made in a timely fashion.
§ are generally accurate, but the information delivered is limited.
§ make vague or incomplete connections between class content and postings by other students.
§ summarize what other students have posted and contain few novel ideas.
§ show marginal effort to become involved with the group.
§ have numerous errors in writing conventions.
§ do not meet the minimum posting requirement.

d discussion (6 points). d-level postings:
§ are not made in a timely fashion, if at all.
§ are superficial, lacking in analysis or critique.
§ contribute few novel ideas, connections, or applications.
§ may veer off topic.
§ show little effort to participate in the learning community as it develops.
§ do not follow the standard conventions of written english.

f discussion (0 points).
§ participant was rude or abusive to other course participants. in this case, the number and quality of other posts is irrelevant. or
§ participant failed to meet the basic criteria for the "d discussion."

appendix b. coded excerpts from discussions.

initial post – structure discussion

after doing a little reading i related really well to one part and wonder if you did as well. in the section about structural dilemmas the very last sub-category of "irresponsible vs. unresponsive" created a vision for me of the "go-to" parent or teacher. what i mean is, are you going to tell mom or dad first you failed a test, which one will be cooler with it and which one will blow up at you take away your phone and ground you?
i feel this can happen in a school setting easily if one teacher is laid back on homework and allow students a few days to get stuff in verse a teacher who allows no days. or a bigger one that i think we see more often is with discipline and behavior issues. what do some teachers allow and others do not. for example a teacher who writes a student up for every little thing is losing the "power" of the referral where as a teacher who uses the referral as a last chance still hangs onto that power and uses it when necessary? what are your thoughts and have you had similar situations? [question]

replies to initial post

this is a very good point that you brought up, and i think it to relates back to that lateral coordination. not only do we need to have more cross-grade level meetings but i think we also need to have meetings on things such as these. i know that every teacher is going to have a different view on what students can get away with, as well as what administration is going to take as a serious offense, or taken more lightly. i think that the pbis at the middle school is run very well, and i know that we are just really getting into the "good years" of it. i think that we can start opening up some discussions on issues such as these so that administration and teachers, are all on the same page as to what should be a referral and what should be let go. how many chances does each student get? these are good conversations to start having in teams. [reply]

i could not agree more with the feeling of isolation. throughout the course of the day, i sometimes do not see teachers from my grade level. information has to pass through all people involved. m__ your point about structure and lack of opportunity here at the middle school is exactly how i feel. cross-grade level meetings would be beneficial if used properly. if nothing more, you interact with peers and build personal relationships. like you stated stephanie, it's important to find the common ground between the two. [clarification]

when i look at the structural assumptions, number four really jumps out at me. i think that it is important to be rational about things that go on in an organization, and i feel that many administrators are quick to forget about rationality, and want to get their agenda met and accomplished. adding pressure and forgetting about rationality only stresses individuals out, and doesn't accomplish much. going off of this, i will talk about how i feel my building is run. i feel that in the building that i am in there is a later coordination. there are many different groups that are working on different things, and then we come together as a staff to report on them. we have school-wide improvement committee, literacy committee, etc. these groups meet on their own and have their own agendas, and report back to the staff as a whole to keep everyone informed and up-to-date. my question on this is whether or not administration (not just in my school, but anywhere with a similar approach) actually looks at it as a lateral approach or if they have it in the back of their minds that ultimately they are making the last decision? and, how as an administrator do you come to certain conclusions without taking it to a vertical approach if groups aren't getting the outcomes that you would like them to?
[clarification]

when i think about the structural dilemmas, the first one that really got my attention was the excessive autonomy versus excessive interdependence. when i look at the building i am in, i feel very much isolated from other teachers. i feel that having crossgrade level meetings would help this feeling a lot. the big question that i come too is when are you being too isolated? and when are you coming together too much? as an administrator i think it is important to find the common ground between the two and is something that should have thought put into it in an organization. [clarification]

glad you feel valued. that's a big key to keeping younger staff like you here. i think the administration is making a much better effort to give positive feedback and thanks to staff. however, i feel like it's usually when staff brings some kind of positive pr to the district that the administration seems to notice. i think so much of the day to day things some of us do that go above and beyond go unnoticed. curious what others thoughts are on that... [reflection]

initial post – human resource discussion

in my short career thus far as a teacher/coach at mhs, i have little evidence to argue that people are not the greatest asset in my workplace. opinions about this might be different amongst other staff members, but for me personally, i have been made to feel that i am an important asset in our building. [clarification]

replies to initial post

our physical education department at the middle school consists of me and two other professionals. in the last year and a half, the three of us have overhauled our entire curriculum hoping to create a quality program for our students. many of the changes to our program would not have been possible without the support and trust of our administrators. additionally, our physical education department recently had an article published in teaching today, the statewide educational newsletter for wisconsin. in the article we discussed methods we use to incorporate literacy into our physical education classes. shortly after the article was published, we received an email from our administrators thanking us for our hard work and commitment to our student's education. by sending a simple email saying thanks, our administrators provided confirmation that what we do is meaningful and we truly are assets in our workplace. "when individuals find satisfaction and meaning in work, the organization profits from effective use of their talent and energy. but when satisfaction and meaning are lacking, individuals withdraw, resist, or rebel" (bolman & deal, 2008, p. 164). [clarification]

i definitely agree with your opinion that promotion within education is less practical in comparison to being promoted in a business environment. in education we know that there are great teachers, some with wonderful leadership qualities, which never pursue administration. i believe many because of the time and money needed for an additional degree/certification. in result, promotion within education isn't always the result of quality performance, but instead, the result of who is able to afford it vs. who is not. [clarification]

initial post – organizations as cultures

i feel as though m___ high school tries to produce a positive school culture. we have annual traditions such as homecoming (spirit week), winterfest, spring fling, and graduation.
students and staff also receive purple t-shirts at the beginning in the first week of school and wear them with pride throughout the year. however, i often feel as though it's the same teachers who make an effort to participate in all of these activities. i can't recall the number of times i've heard a veteran teacher say something along the lines of "it's the new teacher's turn to do this..." i try to attend various athletic and extra-curricular events or judge for student senate competitions. at the winterfest assembly this year i judged with one other teacher because no one else could/would help. during this year's corporate challenge we forfeited an entire evening of events due to lack of participation. it's frustrating to feel as though the same people take on the majority of all the tasks. [clarification]

replies to initial post

the truth is, we all have obligations. i know that while i may be considered a "younger" teacher, i have two children, a husband who works long hours and often travels, work a second job, advise three clubs, plan a trip to france every other year, took several graduate courses this year and am pursuing an additional master's degree, all in addition to my four (next year five) preps while most teachers have two or three. yet i still manage to find time to make it to a few athletic events, extra-curricular activities, or academic nights (awards, graduation, etc.). so i wouldn't say i have more time, but maybe more energy. hopefully that lasts! [clarification]

i respect that many teachers, new and veteran, have obligations, commitments, or other priorities. however, younger teachers look to veteran teachers to lead by example. that's typically why they are chosen as mentors. the bottom line is there's always a reason not to do something. as a group of individuals pursuing a degree in administration is it not our goal to lead by example? would we not look to our future employees to do the same? is an administrator given a choice not to attend extra-curriculars because of a variety of other obligations? food for thought. [question]

[full transcript sets are available from the author upon request]

references

an, h., shin, s., & lim, k. (2009). the effects of different instructor facilitation approaches on students' interactions during asynchronous online discussions. computers & education, 53, 749-760.

an, y.j., & frick, t. (2006). student perceptions of asynchronous computer-mediated communication in face-to-face courses. journal of computer-mediated communication, 11(2), article 5.

andresen, m.a. (2009). asynchronous discussion forums: success factors, outcomes, assessments, and limitations. educational technology & society, 12(1), 249-257.

baran, e., & correia, a-p. (2009). student-led facilitation strategies in online discussions. distance education, 30(3), 339-361.

cronbach, l.j. (1975). beyond the two disciplines of scientific psychology. american psychologist, 30, 116-127.

dennen, v.p., & wieland, k. (2007). from interaction to intersubjectivity: facilitating online group discourse processes. distance education, 28(3), 281-297.

de wever, b., van keer, h., schellens, t., & valcke, m. (2010). roles as structuring tool in online discussion groups: the differential impact of different roles on social knowledge construction. computers in human behavior, 26, 516-523.
de wever, b., van keer, h., schellens, t., & valcke, m. (2009). structuring asynchronous discussion groups: the impact of role support and self-assessment on students' levels of knowledge construction through social negotiation. journal of computer assisted learning, 25, 177-188.

de wever, b., schellens, t., valcke, m., & van keer, h. (2006). content analysis schemes to analyze transcripts of online asynchronous discussion groups: a review. computers & education, 46, 6-28.

gilbert, p.k., & dabbagh, n. (2005). how to structure online discussions for meaningful discourse: a case study. british journal of educational technology, 36(1), 5-18.

hannafin, m.j., & kim, m.c. (2003). in search of a future: a critical analysis of research on web-based teaching and learning. instructional science, 31(4), 347-351.

harasim, l. (1993). collaborating in cyberspace: using computer conferences as a group learning environment. interactive learning environments, 3(2), 119-130.

harasim, l.m. (1990). online education: an environment for collaboration and intellectual amplification. in l.m. harasim (ed.), online education: perspectives on a new environment (pp. 39-64). new york: praeger.

henri, f. (1992). computer conferencing and content analysis. in a.r. kaye (ed.), collaborative learning through computer conferencing (pp. 117-136). berlin: springer.

hew, k.f., cheung, w.s., & ng, c.s.l. (2009). student contribution in asynchronous discussion: a review of the research and empirical exploration. instructional science, 38, 571-606.

hillman, d.c.a. (1999). a new method for analyzing patterns of interaction. the american journal of distance education, 13(2), 37-47.

jonassen, d., davidson, m., collins, m., campbell, j., & bannan haag, b. (1995). constructivism and computer-mediated communication in distance education. the american journal of distance education, 9(2), 7-26.

mason, r. (1992). evaluation methodologies for computer conferencing applications. in a.r. kaye (ed.), collaborative learning through computer conferencing (pp. 105-116). berlin: springer.

merriam, s.b. (2001). qualitative research and case study applications in education (rev. ed.). san francisco: jossey-bass.

ng, c.s.l., & cheung, w.s. (2007). comparing face to face, tutor led discussion, and online discussion in the classroom. australasian journal of educational technology, 23(4), 455-469.

palloff, r., & pratt, k. (2007). building online learning communities: effective strategies for the virtual classroom. san francisco: wiley.

pena-shaff, j., martin, w., & gay, g. (2001). an epistemological framework for analyzing student interactions in computer-mediated communication environments. journal of interactive learning research, 12, 41-68.

pena-shaff, j., & nicholls, c. (2004). analyzing student interactions and meaning construction in computer bulletin board discussions. computers & education, 42, 243-265.

roblyer, m.d., & wiencke, w.r. (2004). exploring the interaction equation: validating a rubric to assess and encourage interaction in distance courses. journal of asynchronous learning networks, 8(4).

rourke, l., & anderson, t. (2002). using peer teams to lead online discussions. journal of interactive media in education, 1(1), 1-21.

rourke, l., & anderson, t. (2004). validity in quantitative content analysis. educational technology research and development, 52(1), 5-18.
rourke, l., anderson, t., garrison, d.r., & archer, w. (2001). assessing social presence in asynchronous text-based computer conferencing. journal of distance education, 14(2), 50-71.

schellens, t., & valcke, m. (2004). fostering knowledge construction in university students through asynchronous discussion groups. computers & education, 46, 349-370.

schrire, s. (2006). knowledge building in asynchronous discussion groups: going beyond quantitative analysis. computers & education, 46, 49-70.

stacey, e., & gerbic, p. (2003). investigating the impact of computer conferencing: content analysis as a manageable research tool. in interact, integrate, impact: proceedings of the 20th annual conference of the australasian society for computers in learning in tertiary education, adelaide, 7-10 december 2003. retrieved december 12, 2012, from http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/495.pdf

wang, m. (2010). online collaboration and offline interaction between students using asynchronous tools in blended learning. australasian journal of educational technology, 26(6), 830-846.

journal of teaching and learning with technology, vol. 1, no. 2, december 2012, pp. 59-61.

book review

blended learning: across the disciplines, across the academy

norman vaughan (professor, department of education and schooling, faculty of teaching and learning, mount royal university, calgary, alberta, canada)

citation: francine s. glazer, editor. (2012). blended learning: across the disciplines, across the academy. sterling, virginia: stylus. 138 pages. isbn: 978-1-57922-324-3 (pbk)

publisher description: this is a practical introduction to blended learning, presenting examples of implementation across a broad spectrum of disciplines. for faculty unfamiliar with this mode of teaching, it illustrates how to address the core challenge of blended learning (to link the activities in each medium so that they reinforce each other to create a single, unified course) and offers models they can adapt. francine glazer and the contributors to this book describe how they integrate a wide range of pedagogical approaches in their blended courses, use groups to build learning communities, and make the online environment attractive to students. they illustrate under what circumstances particular tasks and activities work best online or face-to-face, and when to incorporate synchronous and asynchronous interactions. they introduce the concept of layering the content of courses to appropriately sequence material for beginning and experienced learners, and to ensure that students see both the online and the face-to-face components as being equal in value and devote equal effort to both modalities. the underlying theme of this book is encouraging students to develop the skills to continue learning throughout their lives. by allowing students to take more time and reflect on the course content, blended learning can promote more student engagement and, consequently, deeper learning. it appeals to today's digital natives who are accustomed to using technology to find and share information, communicate, and collaborate, and also enables non-traditional students to juggle their commitments more efficiently and successfully.

blended learning: across the disciplines, across the academy is an edited book by francine glazer. the book describes five blended learning case studies. the case studies are from a variety of disciplines and institutions in american higher education. the authors of each case study have taken a self-study approach to explore their blended learning courses (bullough & pinnegar, 2001).
the introductory chapter of this book does an excellent job of setting the stage for the five case studies by clearly defining blended learning as "courses [that] employ active learning strategies through the use of a variety of pedagogical approaches (p.3) . . . when done well, blended learning combines the best attributes of face-to-face and online courses" (p.7). glazer also indicates that "one size does not fit all" when it comes to course redesign and that the "challenge of blended learning is to link, or blend, what happens in each medium so that face-to-face and online activities reinforce each other to create a single, unified course" (p.1), thus avoiding what twigg (2003) refers to as the "course and a half" syndrome.

each case study describes the author's personal course redesign journey for blended learning. these chapters include rich personal narratives, course descriptions, lessons learned, and a description of the educational framework that was used to guide the course redesign process. barkley (2006) stresses the importance of communicating these conceptual frameworks to our students so that they can become the "architects of their own learning" (p.1).

two of the authors have used the revised version of bloom's taxonomy of educational objectives (krathwohl, 2002) to determine how to sequence the online and face-to-face learning tasks. for example, carl behnke in his culinary arts course indicates that "most of the online resources are geared toward basic remember and understanding dimensions, reserving the lecture for higher-order tasks of analyzing and evaluating" (p.17). this is similar to the approach that tracey gau uses in her world literature course: "lower-level objectives can be addressed and achieved online so that valuable class time is not spent merely summarizing" (p.91). both authors emphasize "effective integration and leveraging the best of both techniques" (p.17). this approach to course redesign has been referred to as the 'flipped approach', where students complete individual web-based learning activities outside of class time and then work on collaborative problem-solving activities in class (baker, 2000).

francine glazer has combined a team-based learning and case-study approach to begin the implementation of a blended design for her principles of genetics course. team-based learning is a highly structured form of cooperative learning where students are grouped into permanent teams for the semester and work on sophisticated problems and applications (michaelsen, knight, & fink, 2004), whereas the case-study approach helps students deal with abstract material by providing a story line to make the material more accessible (styer, 2009).

a rapid formative assessment approach based on angelo and cross's (1993) classroom assessment techniques (cats) framework has been used by alan aycock to guide the blended learning redesign of his survey of world cultures (swc) course. aycock describes cats as "very short – typically one-page – assignments in which students respond to a question that reveals the extent of their learning or the tenor of their response to a particular module or course content . . .
they are always formative or progressive assessments that occur during the learning process and therefore evoke a quality of immediacy that promotes rapid feedback (the hallmark of blended learning) and multiple voices in the classroom" (p.72).

finally, robert hartwell and elizabeth barkley have used the concept of differentiation as the framework to anchor the blended redesign of their music of multicultural america (mma) course. this is a "systematic approach to planning curriculum and instruction" (tomlinson & strickland, 2005, p. 6) where "teachers individualize course elements such as content (the stuff we teach), process (the ways learners make meaning of content), and product (how learners demonstrate what they have come to know, understand, or do)" (p.115). for the mma course, students choose from a menu of online and face-to-face activities that best meets their personal, scheduling, and learning needs. some students do the entire course online or face-to-face, whereas about 60% combine both delivery methods.

overall, i thoroughly enjoyed reading blended learning: across the disciplines, across the academy, as i discovered that each chapter had a 'key take away' or 'lesson learned' that i could directly apply to my own blended learning courses. this book also provides some very valuable advice about how to manage the workload of a blended course and how to sustain the blend through the use of a community approach.

personally, i found there were several limitations to this book. first, all of the blended learning cases were written from the perspective of the teacher. with the exception of the hartwell and barkley case, the voice of the students was noticeably absent. for me this is somewhat problematic, as the goal of a blended approach to learning is to promote student engagement and success. second, how do we know if any of these course redesigns made a difference for the students? gau describes the evaluation approach that she used for her world literature course (e.g., pre- and post-course surveys and an increase in course success rates, the percentage of students receiving an a, b, or c in the course), but again, i found this lacking in the other cases.

despite these shortcomings, i would recommend blended learning: across the disciplines, across the academy to faculty members in higher education who are contemplating redesigning their courses for blended learning. the insights and lessons learned from each of the cases are very useful and can immediately be put into practice.

references

angelo, t., & cross, p. (1993). classroom assessment techniques (2nd ed.). san francisco: jossey-bass.

baker, w.j. (2000). the 'classroom flip': using web course management tools to become the guide by the side. selected papers from the 11th international conference on college teaching and learning, jacksonville, florida, april 12-15, 2000.

barkley, e. (2006). honoring student voices, offering students choices: empowering students as architects of their own learning. national teaching and learning forum, 15(3), 1-6.

bullough, r.v., & pinnegar, s. (2001). guidelines for quality in autobiographical forms of self-study research. educational researcher, 30(3), 13-21.

krathwohl, d.r. (2002). a revision of bloom's taxonomy: an overview. theory into practice, 41(4), 212-218.

michaelsen, l.k., knight, a., & fink, d. (eds.). (2004).
team-based learning: a transformative use of small groups in college teaching. sterling, va: stylus.

styer, s.c. (2009). constructing and using case studies in genetics to engage students in active learning. american biology teacher, 71(3), 142-143.

tomlinson, c., & strickland, c. (2005). differentiation in practice: a resource book for differentiating curriculum, grades 9-12. alexandria, va: association for supervision and curriculum development.

twigg, c. (2003). improving learning and reducing costs: new models for online learning. educause review, 38(5), 28-38.

journal of teaching and learning with technology, vol. 4, no. 2, december 2015, pp. 58-61. doi: 10.14434/jotlt.v4n2.13720

book review

minds online: teaching effectively with technology

britt watwood (associate director, center for advancing teaching and learning through research, northeastern university, b.watwood@neu.edu)

citation: miller, m. (2014). minds online: teaching effectively with technology. cambridge, ma: harvard university press.

publisher's description: from wired campuses to smart classrooms to massive open online courses (moocs), digital technology is now firmly embedded in higher education. but the dizzying pace of innovation, combined with a dearth of evidence on the effectiveness of new tools and programs, challenges educators to articulate how technology can best fit into the learning experience. minds online is a concise, nontechnical guide for academic leaders and instructors who seek to advance learning in this changing environment, through a sound scientific understanding of how the human brain assimilates knowledge. drawing on the latest findings from neuroscience and cognitive psychology, miller explores how attention, memory, and higher thought processes such as critical thinking and analytical reasoning can be enhanced through technology-aided approaches. she presents innovative ideas for how to use multimedia effectively, how to take advantage of learners' existing knowledge, and how to motivate students to do their best work and complete the course.

michelle d. miller's new book provides a readable review of research-based cognitive principles for improving learning through technology, focusing on attention, memory, and thinking. the goal of this book is to guide practitioners with practical advice in order to develop a "cognitively optimized, fully online course." miller is clear that technology alone does not promote learning. learning requires focused attention, effortful practice, and motivation, concepts that align with recent syntheses of learning science such as susan ambrose's (2010) how learning works.

the first chapter asks two rhetorical questions. is online learning here to stay? does learning online work? miller noted that just by asking these questions, we are holding technologically aided teaching to a higher standard than classroom teaching! she charts out principles for optimal college teaching excerpted from four "best practice" frameworks. these best practices do suggest that, with the conscious use of active learning processes, online learning does indeed work.

the book tackles some of the prevailing myths about the psychology of computing:
• use of the web "rewires" the brain
• students today are "digital natives"
• social networking destroys real-life social relationships

while there are grains of truth, she provides some interesting analysis of the realities behind these myths and what that might mean for teaching. her next three chapters explore attention, thinking, and memory. it is easy to derail attention, yet attention can easily be shifted. as miller noted: "the inattentional blindness effect illustrates a broader truth about human perception and attention, that looking and seeing are two different things and that we are remarkably prone to missing stimuli when our attention is directed elsewhere." while capacity cannot be expanded, it can be altered by practice. actions that become automatic free up the brain to process other information. attention is highly intertwined with visual processing, which is another facet of online course design that matters. the book explores change blindness, in which changes to the screen are not picked up readily. most people think they perceive more change than they really do. working memory is an area of significant variation among individuals. attention directs what goes into working memory, so again, understanding attention is important to creating a learning environment. miller suggested several strategies regarding attention and online learning:

• ask students to respond: chunk material into short segments and have students do something (answer a question, click on a hotspot, etc.).
• take advantage of automaticity: use auto-grading features of lmss to provide practice opportunities and feedback, with incentives for completion.
• assess cognitive load: positively impact cognitive load through design features; poor instructions or requiring new features without practice can negatively increase cognitive load.
• discourage divided attention: the web is full of distractions, but simply informing students that they should pay attention actually increases attention.

this focus on attention suggests that instructors should educate students about multitasking, make materials as seamless as possible, minimize extraneous attention drains, and keep them engaged through compelling activities. from attention, the book then focuses on memory. technology opens up new opportunities for learning that never existed in face-to-face classrooms. technology allows one to build activities that capitalize on multiple interrelated sensory cues (video, audio, image, text, query, etc.), deeper-level processing, metacognition, and opportunities to engage the emotions.

a key difference between experts and novices lies in how knowledge is organized. experts see patterns and how concepts are linked, including how they are linked to prior knowledge. miller explored research on testing effects and spacing effects. the book suggested five strategies for designing online learning experiences:

• include frequent tests and test-like activities
• structure for spaced study
• involve emotions (carefully)
• steer students into deeper learning
• base new knowledge on old knowledge

effective thinking is something that sets experts apart from novices. it is a skill that can be built with practice. cognitive scientists have broken thinking down into the discrete areas of formal reasoning, decision-making, and problem solving. formal reasoning is hard.
our brains tend to take shortcuts when faced with reasoning problems. a fascinating section in this book dealt with the research on creativity. students who are given explicit step-by-step instructions tend to produce less creative work products compared to those who were given less-structured directions. miller noted that experts solve problems better not because they are smarter but because they can draw on a richer base of stored and connected knowledge. she suggested that for online teachers, providing practice opportunities is important, but equally important is providing scaffolding in the form of knowledge organizations and conceptual interrelationships. this can help move students from the novice stage to a more expert-like stage of reasoning. in designing online learning opportunities, one should integrate metacognitive activities with learning activities. this suggests that we as online teachers put some "thinking" into the questions we use as prompts in our courses.

the book effectively debunks the time-honored learning styles of vak (visual, auditory, and kinesthetic), noting "vak may go down as one of the greatest psychological myths of all time." the cognitive research suggests that we all have all styles and that there really is not one that dominates. she noted that people tend not to know what their "true" style is and have poor skills at self-assessing. assuming one style can lead students to disengage if presented with an alternative style, negatively impacting learning. the take-away for online teaching is that pictures, audio, and video can enhance learning, but the multimedia needs to align with the learning, not overload or distract. thinking inclusively, one should augment any multimedia with alternative options.

there are acknowledged motivational challenges between online and on-campus teaching. "motivation," as miller noted, comes from the same latin root as the word "to move": mechanisms that put you in motion. the study of motivation is closely aligned with the study of emotion. the book explores the framework of self-determination theory, contrasting intrinsic and extrinsic motivations. this suggests that people are motivated by the need for three basic things: competence, relatedness, and autonomy. when students are cut off from any of these, motivation suffers.
motivation is a high stakes endeavor in online teaching, so miller suggested that during the first week, we steer the focus towards the "why" of a course and away from the "what" why study this topic, why this topic might change you as a student, why this topic is important to your future, rather than what is required, what you have to complete, or what the grading policies are. the "what's" are important and need to be covered, but they need to be covered after the "why's" have been covered, and better yet, after the students have engaged with the whys. the book ends with tips to actively manage motivation in the course design. she provided a series of key questions to guide this process. each question is linked with the cognitive principles behind the question as well as tools and techniques that address the question. this is a very readable and useful book. there are many aspects that could be implemented immediately into one's online teaching. it connects some dots between effective teaching practices and the learning science behind why these practices work. i highly recommend adding his book to your personal library! references ambrose, s.a.; bridges, m.w.; dipietro, m.; lovett, m.c.; and norman, m.k. (2010). how learning works: 7 research-based principles for smart teaching. san francisco: jossey-bass. miller, m. (2014). minds online: teaching effectively with technology. cambridge ma: harvard university press. journal of teaching and learning with technology, vol. 10, special issue, pp. 172-184. doi: 10.14434/jotlt.v9i2.31444 teaching in the time of covid-19: reconceptualizing faculty identities in a global pandemic lisa kurz indiana university bloomington kurz@indiana.edu eric t. metzler indiana university bloomington emetzler@indiana.edu katherine c. ryan indiana university bloomington kcryan@indiana.edu abstract: this essay reflects on the experiences of faculty members at a large public university as they responded to the demand for online learning caused by the 2019 coronavirus disease pandemic. it explores themes of course delivery, assessment methods, and faculty–student interactions and how these themes inform faculty identity. the authors suggest that the disruption to faculty identity created by the pandemic may be a fortuitous opportunity to examine deeply held beliefs about what it means to be a college professor. keywords: faculty identity, remote learning, synchronous online teaching. preface the authors of this piece bring a variety of experiences and perspectives to bear in this essay. lisa kurz, principal instructional consultant for non-tenure track development in indiana university’s center for innovative teaching and learning (citl), focuses her work on faculty whose role centers on teaching. in one-on-one consultations and workshops with hundreds of instructors of all ranks over more than 20 years, she has provided guidance and facilitated conversations on best practices in course design and classroom teaching. among her areas of specialization is providing support in both pedagogical best practices and career development for teaching faculty (those not on the tenure track). she is also citl’s assessment specialist and has worked extensively with individual faculty as well as departments and programs across the university, helping them create learning goals and devise methods to assess their students’ learning. 
when the 2019 coronavirus disease (covid-19) broke out, she and her colleagues provided support and guidance for hundreds of faculty suddenly asked to rethink their teaching and move to the online environment.

eric t. metzler is the instructional support and assessment specialist at the indiana university kelley school of business. in eric's more than 20 years in this role, he has consulted with instructors of all ranks, observed hundreds of business classes, and provided broad pedagogical support not only to business instructors, departments, and deans, but also to instructors from across the university. he has taught assessment methods to graduate students and consulted on assessment topics with business faculty in iraq, south africa, jamaica, and barbados. eric continues to teach an undergraduate honors-level seminar on consumerism, enabling him to put into practice new ideas about pedagogy that arise from his research and observations.

katherine ryan is the director of the business communication area of the kelley school's undergraduate program, overseeing approximately 40 faculty members. katherine has over 25 years of teaching experience and has taught a variety of undergraduate and graduate courses. she is an active participant and presenter at pedagogical conferences. during the transition to online teaching effected by covid-19, she met constantly with instructors to manage the difficult task of teaching business writing and presentations in the online environment.

indiana university is a public r1 (carnegie classification) research university located in the college town of bloomington, indiana, and the home of the kelley school of business, a top-ranked public business school. with some 32,621 undergraduate students in the fall 2019 term, the university offered 5,527 undergraduate course sections, 4,882 of which (88.3%) were taught as in-person classes. all other modalities combined accounted for only 11.7% of undergraduate course sections. we see a precipitous change in modality by comparing the 2019 figures with statistics from the fall 2020 term, where the university offered 5,746 undergraduate course sections with only 404 (7.0%) designated as in-person classes. for this term, all other modalities accounted for the remaining 93% of sections, the majority being hybrid, synchronous online, and asynchronous online courses.

in july 2020, the three authors hosted two reflective seminars at the kelley school of business. the seminars gave the participants time to consider specific questions related to the sudden transition to online instruction, situated specifically in the domains of expected learning, assessment, student engagement and care, and overall global considerations. it proved impractical to gather data directly from individual participants; nevertheless, the authors learned much from facilitating the small groups and plenary sessions where faculty responded and shared their impressions of the semester.

in the spring of 2020, when instruction at our university suddenly pivoted from traditional in-person classes to a variety of online modalities, many faculty were confronted with the necessity of making drastic changes to their normal teaching habits.
gone were traditional lectures delivered in conventional classrooms, where faculty could try to read student expressions to detect confusion or disengagement. gone, too, were traditional exams, with students in a classroom silently bent over their multiple-choice tests while the instructor or a teaching assistant watched. even the casual conversations between instructors and students about course content, students' lives, or current events, which had filled the minutes before class began, were gone. the lectures were replaced either by recorded versions uploaded to a learning management system to be viewed asynchronously, or in some courses, by synchronous class sessions held online on the video conferencing platform zoom. the in-person exams were replaced by online exams taken by students individually, with online proctoring services often replacing human observers. meanwhile, casual one-on-one conversations became public exchanges.

we have had the opportunity to listen to many faculty at our university as they adapted to and reflected on these changes, and we have identified several themes in their reflections. one common theme articulated by faculty centered on changes in how course content was delivered to students: from the familiar terrain of delivering lectures and facilitating face-to-face activities, to the terra incognita for most faculty of delivering content online. another theme revolved around how student learning was assessed. in particular, faculty who relied on objective exams were forced to either administer the exams online (and accept the concomitant academic integrity issues), or grudgingly accept alternative methods for determining what knowledge and skills their students had acquired. the third theme we identified involved the relationships and interactions between instructors and students. not only were the semi-private one-on-one conversations with students now public; faculty were also seeing students in very different contexts and circumstances from what they had previously known.

interestingly and unexpectedly, complaints and frustrations about technology were a minor theme compared to those outlined above. faculty told us about technological changes they made: adapting existing technologies to meet their teaching needs, learning new software, and assisting students in using new technology tools. but these were rarely mentioned as the most important challenges they faced in adapting to online teaching. most faculty seemed to feel that technology issues were straightforward, compared to the other challenges they faced.

as we reflected on the three themes faculty articulated, we noticed something interesting about them: together they constitute a substantial portion of how instructors might define their identity as faculty. until the spring of 2020, experienced faculty inhabited a stable teaching persona, which included knowing the course content cold, standing in front of students to present the content, creating activities and assignments to help students learn, assessing students' knowledge and skills, assigning grades as fair representations of students' mastery, and developing an understanding of who students are and how to interact with them. in the spring semester, almost all of that changed. knowing the content cold was still true, but everything else was suddenly problematized.
listening to many faculty from different sectors of the university, we came to realize that the greatest challenge in the transition to online teaching seems to have been something that most faculty did not consciously recognize: the pivot to online teaching forced them to change how they understood themselves as professors. in this sudden shift, instructors had to rethink not only what they do as instructors but also who they are as professors. the themes we identified from our interactions together constitute a reimagining of professorial identities. in fact, it may be that what made the transition to the online environment so disorienting and uncomfortable for many instructors was in part the unacknowledged impact of the transition on their mindset and identity as professors (mezirow, 1991; passmore, 2014). in this reflective piece, we explore these three themes arising from faculty reflections—content delivery, assessment of student learning, and the faculty–student relationship—through the lens of faculty identity (abu-alruz & khasawneh, 2013; van lankveld, schoonenboom, volman, croiset, & beishuizen, 2017). our concepts of faculty identity arose primarily from our combined decades of interaction with faculty in a variety of contexts. they also emerged as themes in our qualitative examination of faculty responses elicited in the reflection seminars we hosted in the summer of 2020. however, when we turned to the literature to examine work on faculty identity, we found confirmation of our notions in the work of abu-alruz and khasawneh (2013) and van lankveld et al. (2017), among others. these authors postulated that the teaching identity of university faculty includes subject-area competence, knowledge of pedagogy, and a commitment to teaching that includes an interest in students and concern for their well-being. these aspects of identity correspond well to the themes we identified in our research. in addition, we found support for our findings in the work of passmore (2014), who described a qualitative study of the teaching identity of nursing faculty and the impact on that identity caused by a move from face-to-face to online teaching. she used the theoretical framework of transformative learning (mezirow, 1991) to describe the impact of the pivot to online teaching as a "disorienting dilemma" that spurred changes in faculty identity, from content delivery expert to facilitator of students' learning. we examine how the pivot to remote teaching and learning caused by the covid-19 pandemic affected three key aspects of the "faculty-as-instructor" identity: the "sage on the stage" (content expert), the objective judge of students' learning, and the caring, approachable instructor grounding students' college experience. content delivery the delivery of content has traditionally been the cornerstone of the college professor's teaching role. after spending years mastering the content and epistemology of one's discipline and imagining oneself as the future font of knowledge for one's students, it is perhaps natural that university instructors launch into careers where they expect to profess their expertise in front of students, sharing hard-won knowledge with students by telling them things they do not know. for many instructors, however, the covid-19 crisis surfaced latent but long-present problems with "telling" pedagogies (i.e., content-heavy lecturing to passive students).
it also required abrupt changes in teaching style and modality. the combination of recognizing the shortfalls in the way one has always taught and being forced to teach in new and unfamiliar ways put many instructors in the uncomfortable position of questioning their persona as university instructors. many were left wondering, how do i see myself as a professional? how do others see me? although the structure and goals of the university lecture have changed over its 800-year history (friesen, 2011), the lecture format has nevertheless remained a telling pedagogy that has persisted in the academy into the 21st century. whether a means of preserving precious, scarce texts as in the middle ages or a means of promulgating detailed, in-depth knowledge and thinking from an expert as inaugurated by johann gottlieb fichte at the university of jena in the late 18th century (friesen, 2011), lectures sought to broadcast information, which, until the age of the internet, was scarce and difficult to find. with the arrival of the computer age, however, knowledge became both easily accessible and easy to find. information is plentiful in our time, not scarce, and this reality changes everything. students no longer need professors to supply information. a simple google search quickly yields whatever a person is looking for or needs to know. rather, students today need professors to help them discern what information is reliable, valuable, applicable, or useful. they need professors to show them how to use and apply the information. they need professors to help them think critically about the tremendous amount of information available at their fingertips. yet, in our experience, even until early 2020—before covid-19 forced the academy into isolation and learning online—many professors continued to teach as if information and knowledge were scarce. students endured such teaching, sitting politely in lecture halls, showing "civil attention" (gannon, 2018), thus enabling instructors to persist in believing that lecturing to passive students was a perfectly acceptable form of college pedagogy. it worked, did it not? the move to online instruction, however, dispelled the myth, as many instructors recognized that long, content-heavy lectures with little student activity or interaction led to extremely low attendance in synchronous online classes. students realized they could watch the recorded lecture at their leisure without missing a beat. why spend valuable time in synchronous sessions when there is no value added in their synchronicity? thus, painful as it may have been for some faculty to open a zoom session only to have 5% of their students attend, the sudden shift to the online environment forced faculty to come to terms with the reality that the "sage on the stage" (king, 1993) model of teaching could no longer be defended as effective teaching. this wake-up call may ultimately be a great leap forward for college pedagogy, but it comes with costs. it is painful and difficult to reconceptualize what one does, how one does it, and perhaps most of all, who one is in one's profession. for some faculty these changes have provoked discomfort, frustration, fear, and anger—emotions that have sometimes disrupted the faculty–student relationship. at the same time, the very student behavior that rankled faculty improved the student learning experience.
faculty told us that recorded lectures have enabled students to learn at their own pace, reviewing difficult patches, perhaps fast-forwarding through material they already know, and revisiting recordings as they prepare their homework assignments or study for exams. for students, learning in this manner is much more efficient than sitting through a synchronous lecture—whether in person or online. when students use recorded lectures in a way that suits their self-directed learning, we heard from faculty that students view the recordings more than once and ask more questions, suggesting deeper engagement with the course material. as we reflect on the upheaval to the college classroom wrought by covid-19, what stands out to us in particular are faculty statements about now having to enact pedagogical practices that instructional consultants and designers have promoted as best practice for many years, if not decades. college pedagogy classics such as walvoord and anderson's effective grading (1998) and wiggins and mctighe's understanding by design (2005) have taught us that good instruction begins with good course design. faculty must begin by determining what students should learn, then how they should be assessed, and finally what content will help students succeed on their assessments. this process produces courses that militate against the coverage model, where instructors plan daily lessons based on what they will "cover," effectively allowing students to persist in an immature dualist mindset1 instead of maturing into college-educated thinkers. although the "backward course design" model is certainly not new, many instructors told us that when they pivoted to the online environment, applying that structure (i.e., beginning with the end in mind) to each day's lesson was essential. similarly, walvoord and anderson's (1998) notion that students should gain "first exposure" at home before arriving to class, which later evolved into the "flipped classroom" of today's parlance (bergmann & sams, 2012), became a sine qua non of successful synchronous online classes in 2020. faculty told us of the importance of asking students to learn the basic content at home and holding them accountable for doing so (kurz, metzler, & rehrey, 2015) so that class time could be used for more productive and engaging learning activities such as processing or applying the content—perhaps in discussions, debates, small writing assignments, problem sets, or other activities. once again, none of these ideas is new, but the circumstances of covid-19 teaching foregrounded these well-known pedagogical principles and helped faculty see their importance in ways that no teaching conference, consultation, or teaching seminar ever could. although we know that backward course design and synchronous sessions featuring active learning are essential for fostering student engagement and producing optimal learning for all students, and for underrepresented students in particular,2 prior to the outbreak of covid-19 these instructional choices were not widely espoused by college instructors. in our view, the disconnect stemmed from the difficulty of enacting such pedagogies, both operationally and emotionally. delivering a polished lecture based on disciplinary content is clear-cut and straightforward; it confers a sense of expert control and confidence among instructors.
while the professor lectures, students must sit quietly, attending to the content being shared. planning interactive sessions in which students process information, practice skills, and question assumptions may, conversely, leave instructors feeling less in control, less sure of the classroom environment, and perhaps even less professional in a world where they are supposed to feel in charge. further, teaching strategies that promote processing or practice in class fundamentally change the role of the college professor from repository and purveyor of knowledge to facilitator of student learning, which for many instructors fails to square with the image of university professor honed during the years of preparation for the profession. for many college instructors, the profession is defined by the gravitas, importance, and social position conferred by commanding a presence in the lecture hall, delivering carefully prepared lectures about one's discipline, and seeing students sit silently, attending to the information proffered to the class. educators know—and have known for decades—that this model of teaching is not nearly as effective for learning as facilitating in-class processing and practice. yet leaving a telling pedagogy behind in exchange for an interactive model can also equate to shedding a professional role of eminence in order to inhabit a more humble, socially egalitarian position. 1 dualism is marked by the belief that knowledge is fixed, limited, and the domain of the expert (i.e., the professor), whose office it is to tell content to passive students, who must absorb said knowledge (perry, 1999). 2 freeman and theobald (2020) argued that sessions planned around active student interaction, as opposed to passively attending to lecture, disproportionately help underrepresented minorities and students from a background of poverty succeed in the classroom and persist in their degrees. for freeman and theobald, this best pedagogical practice is a much more effective antiracist step than issuing official statements or forming committees, which they consider the equivalent of politicians' "thoughts and prayers." hence, perhaps university professors have been slow to transition from instructor as source of knowledge to instructor as learning facilitator because of the high, emotionally charged cost of doing so. nevertheless, covid-19 indiscriminately demanded sudden and uncompromising changes in the education sector. time is precious and zoom sessions must be focused. sessions must be organized economically lest students summarily tune out or leave altogether. students must perceive that the sessions are helping them learn specific skills and knowledge on which they will be assessed. other instructors explained to us that pushing recorded lectures off into asynchronous time proved to be an efficient way to flip the classroom. now, instead of sitting passively while the professor rehashes information already available in the textbook, students must acquire content before class, enabling synchronous sessions to be interactive, experiential, and social. instead of merely receiving information, students can now hear various viewpoints, solve problems with peers, apply what they are learning, and practice the intellectual moves they need to perform well on the course's assessments and to grow as thinkers.
instructors also told us that the teaching circumstances of covid-19 have drawn attention to the necessity of teaching students how to learn more effectively. instructors came to realize that most 1st- and 2nd-year students do not yet know what it means to prepare for class; nor do they know how to think critically, ask probing questions, or entertain alternative perspectives. these skill deficits among our 1st- and 2nd-year students are nothing new. such skills are rarely part of high school curricula, and faculty have lamented for decades that, generally speaking, students arrive on campus with poor "student skills."3 what is new is the faculty's realization that for their class to work smoothly, they must actively teach students how to prepare for class, ask interesting questions, think critically, and operate in the flipped classroom dynamic. faculty have realized they have skin in the game; that is, if professors do not help students learn important student skills, the course will not succeed. one sees here, as elsewhere in this essay, that covid-19's gift is how it has helped college instructors see and understand issues that have long been present but latent, and perhaps unobtrusive because ignoring them came at no cost. for faculty and students, making the transition to the online environment has required considerable retooling, and for many it has occasioned a sense of grief and loss—both powerful emotions that human beings like to avoid. for students, it has meant self-pacing and learning on one's own, preparing for class in new ways, attending class in new ways, and perhaps seeing oneself with more agency as a student. for professors, the shift has meant thinking much more about how to structure engaging, interactive sessions where students do most of the work—practicing, articulating thoughts, writing, solving problems, making decisions, and more. for many, this change has meant reconceptualizing their professional role, perhaps trading an august, elevated self-concept for a more populist, accessible self-concept, which plays out in their actions: instead of lecturing on the content of their discipline, professors now run activities, direct discussions, facilitate group work, and ask questions. these changes are all for the good of teaching and learning, but that does not mean they come easily. assessment along with content delivery, the assessment of student learning has been a key component of a faculty member's self-concept. in addition to professing their content knowledge to students, faculty members see their role as being judicious evaluators of students' knowledge and skills, using carefully designed assessments requiring students to demonstrate key disciplinary knowledge and understandings. they administer their assessments in a way that guarantees academic integrity. the completed assessments allow faculty not only to evaluate their students' work, but also to critique it and offer constructive feedback to help students improve. the entire process conforms with faculty views of their role as instructors, awarding final grades that are reliable and equitable reflections of students' understanding of important disciplinary content. 3 in our assessment work, we hear again and again that so-called student skills are something our students need to improve in order to fully succeed in college.
with the pivot to online teaching in the spring of 2020, faculty were confronted with contexts that dramatically violated key assumptions about their assessment strategies. in particular, faculty who used objective assessment methods had to confront both the questionable validity and the inequity of their assessments. these faculty were left with troubling questions: have my assessment methods always been this unfair, this problematic? if i am unable to fairly assess my students' learning, what does it mean for me to say i am an instructor? in their roles as instructors, faculty understand the importance and the purposes of assessment: to sort students via grades and to offer feedback. giving grades is a way of sorting students into categories: those who are excellent, competent, marginally competent, or incompetent. this function of assessment is distasteful for many faculty but is seen as necessary because of the centrality of grades in the academy. the second purpose of assessment, giving students feedback about their performance, allows faculty to comment on the quality of students' knowledge and skills and offer suggestions for improvement. this role is typically more appealing (but more time consuming) for faculty than the grading role. but faculty have incorporated both the sorting and the feedback functions of grading into their professional identities as objective judges of students' learning. the basic methods used by faculty to achieve these purposes are subjective and objective assessments. subjective assessments ask students questions that have no one right answer, or many possible answers, and allow many possible ways of expressing an answer. they typically take the form of essays and other forms of writing, although in recent years many creative and authentic types of subjective assessment, such as complex projects or oral presentations, have also become popular. for subjective assessments, the feedback purpose of assessment is primary, as faculty can see students' thinking and give detailed feedback about its quality. the sorting purpose is less important, although its importance has been growing in the past few decades with the advent of rubrics to standardize judgments about the quality of students' work. objective assessments ask students questions for which there is a single response or a limited set of correct responses and require students to choose, or to generate, the correct answer(s). the multiple-choice exam is the prototypical objective test and illustrates the primacy of the sorting function of assessment. students receive a score on an objective test that indicates how well they learned the content (the sorting function) but typically receive little or no feedback on their performance. faculty using objective assessments have had to accept this trade-off (an uneasy acceptance in many cases). they may have rationalized their choice because faculty using objective assessments often teach large classes in disciplines that emphasize factual knowledge. in this context, exams that allow (seemingly) reliable sorting of students are worth the loss of an opportunity to provide meaningful feedback. in addition, many large universities need to offer large courses with objective assessments to meet student demand for seats in required courses that are a gateway to desired majors. an instructor's choice of assessment method is usually based on two factors: the discipline of the course and the size of the class.
in humanities and some social science disciplines, students might be asked to demonstrate their understanding of course content through writing or other subjective assessments. in the natural sciences and some professional schools, as noted earlier, assessment is often accomplished using objective exams, particularly in lower division (introductory) courses. and course size has always had an important effect on assessment, in that the larger the course enrollment, the more likely it is that the instructor will have to rely on objective assessment strategies. as was the case with content delivery, the pivot to remote teaching forced a wholesale change in the assessment of student learning—for some faculty, at least. faculty who used more subjective assessment strategies told us that they merely needed to make small changes in the timing of due dates or the specific mechanisms used to submit finished work (giving a presentation in a zoom meeting rather than face-to-face, for example). but they typically did not perceive a need to rethink their subjective assessment strategies entirely, because these strategies translated well to an online environment. faculty who used objective assessments, on the other hand, told us that they felt considerable pressure to rethink their assessment strategies. one primary reason they cited for this pressure was a concern about academic integrity when students took multiple-choice tests online. after the pivot to remote instruction, students took exams without proctoring, in environments instructors could not even see, let alone control. consequently, there was no guarantee that students' performance reflected what they actually knew rather than what they could find with a search engine or obtain from an online tutoring service. in the worst-case scenario, a student taking an objective exam could hire someone else to take the test in their place. even the use of online proctoring services had problems, as students learned simply to evade the surveillance. along with concerns about academic integrity, many faculty expressed to us concerns about the equity of the testing situation and students' access to the internet and reliable technology (gonzalez, calarco, & lynch, 2018). they realized that among the students in their classes might be those who needed to work to support their families; who needed to care for family members; who lacked a quiet place to study or take a test; who did not own a computer; or whose only reliable internet connection was in a public place. they realized that students in certain demographic groups were particularly affected by the pandemic, but that all students were under stress. video surveillance of students while they took exams could exacerbate that stress and add to the equity issues. we should point out that subjective and objective assessments, and their associated trade-offs, have been well known for decades (milton, pollio, & eison, 1986; walvoord & anderson, 1998). the inequities associated with objective testing have also been well known for a considerable time (montenegro & jankowski, 2017); they were simply foregrounded by the pandemic and the pivot to remote learning. the academic integrity and equity concerns required faculty using objective assessment strategies to confront some uncomfortable truths about their chosen assessment methods.
they could not continue believing their tests were fair to all their students, because of students' differing access to the space and technology needed for them to do their best. faculty could no longer ignore the academic integrity issues that threatened the accuracy of the sorting function that is central to objective testing. and faculty who chose to look carefully at the kinds of questions they asked on their objective exams told us that they often discovered they were testing their students' ability to memorize vocabulary and basic facts, which are easily found on the internet, rather than application, problem solving, and critical-thinking skills, which are not as easily found, and not easily learned. faculty using objective assessment methods were faced with few viable choices. some faculty teaching smaller classes told us that they opted to change to subjective assessments, but that was not an option for those teaching larger classes. the understandable frustration arising from this lack of choice may have come in part from the idea that the entire experience challenged their view of themselves as impartial judges of their students' abilities, capable of recognizing excellence and distinguishing it from mediocrity based on assessment results. it may have also caused some to question the entire assessment process—not only the specific methods but also foundational assumptions about the importance of sorting students, based on their (perhaps imaginary) ability to distinguish truly exceptional students from those less so. the fact, not lost on some faculty, that these problems predated the pivot to remote instruction spurred a sincere and uncomfortable reevaluation (for some) of their identity as instructors. if they were unable to see the fundamental flaws in their assessment methods even though they were experienced instructors, what other fundamental flaws in their teaching might they have missed? faculty–student relationships beyond content delivery and assessment strategies, the third theme we identified through our experiences and conversations with faculty reflected a particularly profound sense of loss—that of relationships and interactions between instructors and students. many of us who have long abandoned the "sage on the stage" model for the more collaborative "guide on the side" approach value highly the relational nature of this less hierarchical teaching method (king, 1993). this approach to teaching is fundamental to how we understand ourselves as professors; it defines our presence in the classroom. the transition to online teaching disrupted the connections that enable us to be what we conceive ourselves to be: the caring, approachable, accessible instructors grounding students' college experience. there is a reason professors arrive at their classrooms early, and it is not just to make sure the technology is working or the desks are in place. it is because those 10 or 15 minutes prior to class create an opportunity to interact with students in a less formal, more unstructured way. it is a time to make small talk about the interests and well-being of students. it is a time to bond over common experiences and discover a shared passion for sports, hobbies, or service. these conversations emerge as the first few students enter the room and begin to unpack their belongings.
a sticker on the front of a laptop, a book from another class, a wet umbrella—all of these artifacts create an opening for conversation and connection. even beginning a conversation merely to break an uncomfortable silence is often rewarded with an exchange of information that reveals something about the student that enables the professor to tailor a teaching message a little more effectively. it is the desire to do just that—respond, adjust, reach—that makes connecting with students such an important part of many professors' self-concept. these connections are not so easily made in the virtual world. in the online classroom, there is no wandering the room, quietly engaging a student who seems to need a supportive ear. there is no light-hearted debate about the basketball game the night before, as the quality of zoom audio makes voices stumble over one another until everyone awkwardly cedes the floor. on the two-dimensional screen, everyone is part of the conversation, even when they are not. there is no easy way for students or instructors to "catch each other for a minute" before or after class. our faculty came to realize not only that these opportunities to connect were diminished, but also how important they had been in crafting their teaching persona. this added to many professors' discomfort with the online teaching experience. it was not only instructors' curtailed relationship building with students that contributed to the challenge of pivoting to online instruction. broader classroom management issues also emerged for many faculty, affecting their ability to feel confident about their role. both those who rely more heavily on a top-down lecture approach and those who adopt a more democratic approach to teaching and learning sensed that the online platform eroded their control of the classroom environment. consider a few of the most common classroom expectations, from the explicitly stated, "no use of electronic devices unless required for a class activity," to the implicit presumption that students pay attention and refrain from perceptibly engaging in work other than that associated with the class. in a typical classroom, violation of these norms is often easy to spot and fairly easy to correct with a purposeful glance or by closing the physical gap between instructor and student, so as to tacitly nudge the student back to the task at hand. it also remains common for an instructor, when recognizing a side conversation in progress, to offer to clarify information or otherwise refocus the attention of the students involved. many faculty pride themselves on having mastered this skill of situational awareness in the classroom, often referred to as "withitness" in educational circles (emmer & stough, 2001; mcdaniel, jackson, gaudet, & shim, 2009) and considered a fundamental tenet of effective classroom management. the ability to be "with it" in a virtual classroom is significantly diminished. even in gallery view, and even with all students having their video cameras on (not a guaranteed condition), it is rarely possible for instructors to recognize when students are multitasking and otherwise disengaged, or only minimally engaged, in the learning taking place. for instructors who identify strongly with their ability to create and maintain an engaging and focused classroom culture, this was a particularly disappointing realization.
beyond the relationships and interactions among instructors and students, the transition to online learning also introduced a contextual layer that does not exist in traditional classrooms. that is, teaching and learning moved from neutral, generic physical spaces into students' personal spaces, which often revealed more about status, privilege, preferences, and private lives than students may have wanted to share. as faculty, we have always known that such socioeconomic and access disparities exist, but we have rarely been confronted with them to such a degree. the student zooming in from the corner of a cramped bedroom or a kitchen table with family members conversing in the background stood in stark contrast to the student zooming in from the side of their backyard swimming pool or generously appointed den. just as such inequalities were not lost on the instructor, neither were they lost on fellow students, who might now consider their classmates from a very different perspective—one that could significantly affect power relationships and how they worked together as peers. in a similar fashion, many instructors—despite the connections they desire and form with students as part of their professional identity—discovered that they wanted to maintain some degree of boundary between their own work and nonwork lives. the desire to maintain clear distinctions between these different spheres of life has been well documented in the boundary theory literature, with those favoring more segmentation experiencing significant stress when the lines between those areas of life become blurred (desrochers & sargent, 2004; piszczek & berg, 2014; rothbard, phillips, & dumas, 2005). instructors who found themselves apologizing for interrupting children or cats in front of the computer screen ranged from feeling as though this was a charming peek into their humanity to feeling agitated and concerned that their virtual environment would detract from their professional presence. in each of these cases, as in the student circumstances above, faculty–student connections were being made on the online platform, just not the type of connections that were particularly desired. as faculty in our sessions reflected on the relationships they had attempted to form and maintain during the spring 2020 semester, it was clear that many instructors struggled to comfortably situate themselves in the virtual environment. they were made vulnerable by the disruption to their professional identities and the genuine loss they felt as their connections with students were diminished or uncomfortably altered. reimagining these critical interpersonal relationships will define approaches to online learning moving forward. as with content mastery and assessment decisions, these relationships must also be thoughtfully considered with respect to how they contribute to both professional identity and effective teaching and learning. conclusion for most faculty, the swift and sudden pivot from in-person teaching—and all that it implies—to online instruction proved to be an unwelcome jolt accompanied by feelings of disappointment, frustration, and sometimes even anger or despair. missed were the frequent encounters with students, the energy of the classroom experience, and the well-established routines that shape the professional lives of university instructors. at the same time, the new teaching circumstances placed many faculty
members in the uncomfortable position of questioning their previously well-developed persona as college professors, as they were forced to see even the most ordinary parts of their profession in new ways. teaching content became an area that provoked uncertainty about one's self-concept; assessing student performance led to questions about fairness and validity; interactions with students online left faculty feeling loss acutely and confronted instructors with the socioeconomic inequalities among our students that have real effects on their learning. perhaps one important response to these many challenges is to acknowledge them so that we can control them rather than allowing them to control us. we, the authors of this essay, however, prefer to take a more sanguine approach. as noted throughout, many of the insights that faculty derived from struggling with challenges were actually confrontations with issues in higher education that have been around for a long time, sometimes decades. we see these uncomfortable realizations as blessings in disguise, an opportunity visited upon us to rethink some of the basics about college pedagogy, what it means to be a university professor, and how we can offer students an enhanced college experience. if we can but look in the right direction and open our minds to seeing things in new ways, we potentially stand on the cusp of a radical paradigm shift in higher education, a shift not only in course delivery, radically reenvisioned and student centered, but also in how we see ourselves as facilitators of deep, sustained learning that avails students of life's best opportunities and leads them to the hallmark of adulthood—self-authorship.4 epilogue six months have passed since we wrote this essay, and we have now experienced an academic semester (fall 2020) that was planned primarily for remote and hybrid modalities. in other words, unlike in the tumultuous spring 2020 semester, when faculty were forced to pivot suddenly to an online environment, instructors, anticipating a computer-mediated modality in advance, were able to structure their courses accordingly. nevertheless, this foreknowledge did not necessarily yield a sea change in teaching. for many faculty, challenges from the spring 2020 semester persisted into the fall: many students came to class with their videos off and disengaged from the session; electronic resources and consistent access to reliable internet connections were not available for all students; fair and academically honest assessment remained a vexing problem, especially in large classes; instructors continued to struggle to keep their students' attention during lectures. while faculty have begun to recognize these and related problems as the new normal, we have also recognized that by now the academic community is well aware of the main challenges of remote teaching and learning. and yet, not enough time has passed for a consensus to develop on best practices to address the challenges. until such a consensus appears, we have responded with recommendations of flexibility with attendance and other course policies, course design that promotes equity and inclusion, revision of assessment strategies, and intentional development of community in the classroom, among other suggestions.
in terms of faculty identity, or how faculty think about their profession, we believe these recommendations, along with the challenges they are meant to address, continue to push college teachers to see themselves in new, sometimes uncomfortable, ways. the jury is still out, but perhaps when we are past the threats of the pandemic we will see a renaissance in college teaching and a new definition of college professor. 4 self-authorship can be defined briefly as "the internal capacity to define one's beliefs, identity, and social relations." see baxter magolda, 2008, p. 269. references abu-alruz, j., & khasawneh, s. (2013). professional identity of faculty members at higher education institutions: a criterion for workplace success. research in post-compulsory education, 18(4), 431–442. https://doi.org/10.1080/13596748.2013.847235 baxter magolda, m. b. (1992). knowing and reasoning in college: gender-related patterns in students' intellectual development. san francisco, ca: jossey-bass. baxter magolda, m. b. (2008). three elements of self-authorship. journal of college student development, 49(4), 269–284. bergmann, j., & sams, a. (2012). flip your classroom: reach every student in every class every day. international society for technology in education. desrochers, s., & sargent, l. d. (2004). boundary/border theory and work-family integration. organization management journal, 1, 40–48. https://doi.org/10.1057/omj.2004.11 emmer, e. t., & stough, l. m. (2001). classroom management: a critical part of educational psychology, with implications for teacher education. educational psychologist, 36(2), 103–112. https://doi.org/10.1207/s15326985ep3602_5 freeman, s., & theobald, e. (2020, september 2). is lecturing racist? inside higher ed. https://www.insidehighered.com/views/2020/09/02/lecturing-disadvantages-underrepresented-minority-and-low-income-students-opinion friesen, n. (2011). the lecture as a transmedial pedagogical form: a historical analysis. educational researcher, 40(3), 95–102. https://doi.org/10.3102/0013189x11404603 gannon, k. (2018). iowa nice, "civil attention," and student engagement. des moines, ia: grand view university, center for excellence in teaching & learning. gonzalez, a. l., calarco, j. m., & lynch, t. (2018). technology problems and student achievement gaps: a validation and extension of the technology maintenance construct. communication research, 47(5). https://doi.org/10.1177/0093650218796366 king, a. (1993). from sage on the stage to guide on the side. college teaching, 41(1), 30–35. kurz, l., metzler, e., & rehrey, g. (2015). the key to success: holding students accountable for coming to class prepared [white paper]. indiana university. retrieved from https://docs.google.com/document/d/1scl8s2o2yuj9cditly4eof5gl71cdmfo8kdackk8pny/edit?usp=sharing mcdaniel, l., jackson, a., gaudet, l., & shim, a. (2009). can "withitness skills" be applied to teaching with laptops? american journal of business education, 2(4), 81–85. https://doi.org/10.19030/ajbe.v2i4.4062 mezirow, j. (1991). transformative dimensions of adult learning. san francisco, ca: jossey-bass. milton, o., pollio, h. r., & eison, j. a. (1986). making sense of college grades: why the grading system does not work and what can be done about it. san francisco, ca: jossey-bass. montenegro, e., & jankowski, n. a. (2017).
equity and assessment: moving towards culturally responsive assessment (occasional paper no. 29). urbana, il: university of illinois and indiana university, national institute for learning outcomes assessment. passmore, d. (2014). from "sage on the stage" to facilitator of learning: a transformative learning experience for new online nursing faculty. in c. n. stevenson and j. c. bauer (eds.), building online communities in higher education institutions: creating collaborative experience (pp. 237–257). hershey, pa: information science reference. perry, w. g. (1999). forms of ethical and intellectual development in the college years: a scheme. san francisco, ca: jossey-bass. piszczek, m. m., & berg, p. (2014). expanding the boundaries of boundary theory: regulative institutions and work-family role management. human relations, 67, 1491–1512. https://doi.org/10.1177/0018726714524241 rothbard, n. p., phillips, k. w., & dumas, t. l. (2005). managing multiple roles: work-family policies and individuals' desires for segmentation. organization science, 16, 243–258. https://doi.org/10.1287/orsc.1050.0124 van lankveld, t., schoonenboom, j., volman, m., croiset, g., & beishuizen, j. (2017). developing a teacher identity in the university context: a systematic review of the literature. higher education research and development, 36(2), 325–342. https://doi.org/10.1080/07294360.2016.1208154 walvoord, b. e., & anderson, v. j. (1998). effective grading: a tool for learning and assessment in college. wiley. wiggins, g. p., & mctighe, j. (2005). understanding by design. ascd. journal of teaching and learning with technology, vol. 2, no. 1, june 2013, pp. 15–30. classroom clickers offer more than repetition: converging evidence for the testing effect and confirmatory feedback in clicker-assisted learning amy m. shapiro1 and leamarie t. gordon2 abstract: the present study used a methodology that controlled subject and item effects in a live classroom to demonstrate the efficacy of classroom clicker use for factual knowledge acquisition, and to explore the cognition underlying clicker learning effects. specifically, we sought to rule out repetition as the underlying reason for clicker learning effects by capitalizing on a common cognitive phenomenon, the spacing effect. because the spacing effect is a robust phenomenon that occurs when repetition is used to enhance memory, we proposed that spacing lecture content and clicker questions would improve retention if repetition is the root of clicker-enhanced memory. in experiment 1 we found that the spacing effect did not occur with clicker use.
that is, students performed equally well on clicker-targeted exam questions regardless of whether the clicker questions were presented immediately after presentation of the information during lecture or after a delay of several days. experiment 2 provided a more direct test of repetition, comparing test performance after clicker use with performance after a second presentation of the relevant material. clicker questions promoted significantly higher performance on test questions than repetition of the targeted material. thus, the present experiments failed to support repetition as the mechanism driving clicker effects. further analyses support the testing effect and confirmatory feedback as the mechanisms through which clickers enhance student performance. the results indicate that clickers offer the possibility of real cognitive change in the classroom. keywords: clickers, feedback, clicker-assisted learning, knowledge acquisition personal response systems, commonly called clickers, have become common in thousands of classrooms nationally. they allow instructors to assess comprehension and memory for material by posing a question to the class (usually multiple-choice) that students answer with remote devices they bring to class. questions and answers take as little as a minute or two to present and collect, and voting results can be displayed instantly in a bar graph. understandably, educators and researchers have been interested in the technology's educational effectiveness. generally speaking, the majority of studies have shown that clickers are effective in boosting attendance and participation (beekes, 2006; poirier & feldman, 2007; shih, rogers, hart, phillis, & lavoie, 2008; stowell & nelson, 2007) and learning outcomes (kennedy & cutts, 2005; mayer et al., 2009; morling et al., 2008; ribbens, 2007; shapiro, 2009; shapiro & gordon, 2012). few studies have explored the cognitive mechanism through which clickers increase retention of lecture content, however. the focus of the present work was to better understand the cognition driving clicker effects. 1 university of massachusetts dartmouth, psychology department, 285 old westport road, north dartmouth, ma 02747-2300, ashapiro@umassd.edu 2 tufts university specifically, the experiments presented here were designed to rule out repetition as the basis of clicker effects in fact-based learning, thereby supporting the hypothesis that the testing effect and feedback drive clicker effects. there are several ways clickers may work to enhance memory for classroom material: (1) directing students' attention to material likely to be on exams, (2) repetition, and (3) the testing effect. the first possibility, that clickers "tip off" students about the instructor's judgment of important material, and therefore the content of exam questions, is a reasonable hypothesis. one might expect students to attend to those topics more in class and focus study effort on those topics. greater attention in class and increased studying would both enhance exam performance. if the attention-grabbing hypothesis is correct, it would mean clicker questions do not directly enhance memory or learning. it would mean only that they are an effective means of directing learners' attention to particular topics.
repetition effects, the second possible way clickers enhance memory for classroom material, make the justification for clicker use similarly debatable. repetition can be accomplished through online resources, readings, or other assignments outside of class, without using any class time and at no cost to students. if clicker effects are attributable instead to the last possibility, the testing effect, it would indicate that clickers have a unique benefit in the classroom. writing clicker questions and integrating them with class lectures does require a modest time investment. once that is completed, however, clicker questions require little class time to administer, correct, and enter into grade sheets. indeed, the entire sequence of presentation, response, grading, recording, and feedback happens within seconds. as such, if clicker effects are due to the testing effect rather than repetition or attention-grabbing, it would mean clickers offer the unique benefit of enhanced learning during class time with very little investment of time or money. shapiro and gordon (2012) were able to rule out attention-grabbing and found modest support for the testing effect during clicker use in a live classroom. in their study, a series of exam questions were targeted over the semester in two classes. half the items in one class were targeted with clicker questions when the information was taught in class. the other half of the questions were targeted by attention alerts. they assigned the same items to the opposite conditions in the other class. this counterbalanced the assignment of each question to the experimental and control conditions and created a situation in which each item served in both the clicker and attention conditions. students did not get clicker questions about the information assigned to the attention condition. instead, they were told that the information was very important and would be covered on the next test. the relevant information on the powerpoint slide was also highlighted in red and was animated to flash. at the end of the semester students were given a survey that asked what directed their decisions about what to study. in spite of the fact that they reported studying the information targeted by the alerts more than the information targeted by clicker questions, students performed as well or better on exam questions for which a clicker question had been offered. in short, even when attention was explicitly drawn to specific information in class and that information was studied more outside of class, answering a clicker question had an equal or greater effect on exam performance. that study did not rule out attention-grabbing as a contributing factor to clicker effects, but it did provide strong evidence that it is unlikely to be the sole source of clicker effects. the authors argued that the testing effect also underlies clicker effects. shapiro and gordon (2012) were not able to rule out the possibility of repetition effects as the mechanism underlying clicker effects, however. because they compared a clicker group to a no-clicker control group that was exposed to one presentation of the material, repetition is confounded with clicker use. indeed, the majority of studies that report clicker effects compare clicker use with no clicker use, with no control for repetition effects (e.g., mayer et al., 2009; morling et al., 2008; shapiro, 2009).
at present, then, it is unclear whether the testing effect or simple repetition effects drive clicker effects in the classroom. the present study was designed to address this question. we sought to determine whether the learning outcomes observed with clicker use are attributable to repetition. before explaining the methodology, we provide a brief review of the research that explains these phenomena. the testing effect and repetition learning karpicke, roediger, and others have documented that testing memory can enhance later recall or recognition better than an equivalent amount of additional study (butler, karpicke, & roediger, 2007; carrier & pashler, 1992; karpicke & roediger, 2007a, 2007b, 2008; roediger & karpicke, 2006a; szpunar, mcdermott, & roediger, 2008). in what has become the classic paradigm for investigating the testing effect, thompson, wenger, and bartling (1978) gave one group three study sessions followed by a delayed test (ssst). another group studied the same information once and was then tested three times (sttt), the final test serving as the dependent measure after a 48-hour delay. on the final test, the ssst group had forgotten 56% of the material, as opposed to just 13% forgotten by the sttt group. this basic effect has been demonstrated using free recall (jacoby, 1978; szpunar et al., 2008), short-answer (agarwal, karpicke, kang, roediger, & mcdermott, 2006), and multiple-choice (duchastel, 1981; nungester & duchastel, 1982) tests and has been demonstrated with memory for word lists (karpicke & roediger, 2007a; tulving, 1967), paired associates (allen, mahler, & estes, 1969), and text (nungester & duchastel, 1982; roediger & karpicke, 2006a). the cognition underlying the testing effect is not fully understood, but some hypotheses have emerged and are currently under investigation. one possibility is that repeated testing creates conditions in which information is over-learned, a position argued by thompson et al. (1978). over-learning is an unlikely explanation of clicker effects, as it is improbable that offering a single clicker question in class can lead to over-learning. a more likely possibility is that testing strengthens the pathways leading to a stored memory more than additional study does (bjork, 1975). since study can be very passive (e.g., re-reading text passages or lecture notes), the more active nature of generating responses or comparing multiple-choice alternatives could reasonably offer greater opportunity for such enhancement. in other words, individuals are engaging in an activity that requires greater concentration during testing than during some forms of study. indeed, bjork and bjork (1992) have argued that there is a positive relationship between the level of effort required during testing and the strength of memory. as such, the effect may be a form of depth of processing (craik & lockhart, 1972). alternatively, testing may generate new routes to the memory trace, thus multiplying possible access points to the material (mcdaniel & masson, 1985). when memories are formed, information about the context and activities relevant to the material is also stored. testing offers new perspectives and links to the information that may be sensitive to different memory cues than the connections formed during study. the latter possibility would take advantage of encoding specificity, as a pathway generated through testing is likely to be more easily accessed during later testing.
an excellent and more extensive review of the testing effect is provided by roediger and karpicke (2006b). although the mechanisms underlying the testing effect are not fully understood, numerous investigations have demonstrated that the effect seems to be enhanced by feedback (e.g., butler & roediger, 2007; hattie & timperley, 2007; kulhavy, 1977; pashler, cepeda, wixted, & rohrer, 2005; sassenrath & garverick, 1965; thorndike, 1913). feedback can be confirmatory or corrective, and there is evidence that both types enhance later test performance (butler, karpicke, & roediger, 2007; kluger & denisi, 1996; mcdaniel et al., 2007; vojdanoska et al., 2010). because clickers allow instructors to provide feedback with a simple button click within seconds of voting, feedback is widely used among clicker-adopting instructors. as a consequence, feedback is an important facet of clicker use to consider when questioning the reasons underlying clicker-mediated learning effects, particularly the testing effect. it is important to note that the testing effect has been demonstrated in many experiments that did not employ feedback (see kang, mcdermott, & roediger, 2007, experiment 1; marsh, agarwal, & roediger, 2009; roediger & karpicke, 2006a), so while there is the potential for the contribution of feedback effects during clicker-based learning, some other mechanism unique to testing appears to be working with or in addition to feedback. a study by kang et al. (2007) underscores this point. after reading journal articles, subjects took either short-answer or multiple-choice tests prior to a final memory test. subjects did better on the final test when they took preliminary multiple-choice tests. when feedback was offered on the preliminary tests (in experiment 2), however, students taking the short-answer tests did better on the final test. in sum, testing improved learning in kang et al.'s study, but the addition of feedback altered something about the mechanism involved. the results are highly suggestive of some sort of interaction between the memory processes relevant during testing and feedback. in spite of the fact that testing, especially with feedback, has been shown to enhance performance on tests more than study repetition, mere re-exposure to material can improve learning. the more times a student is exposed to a piece of information, the greater the likelihood he or she will retain it (e.g., ebbinghaus, 1913; raney, 2003; scarborough, cortese, & scarborough, 1977; tulving, 1967). as such, it is certainly possible that clicker questions may improve retention of classroom content merely by re-exposing students to the material. in other words, clicker effects may simply be repetition effects, and that is a potential criticism of any experiment that demonstrates clicker effects by comparing clicker use with a no-clicker control. thus, it is important to rule out repetition as the cause of clicker effects in order to strengthen the argument for classroom clickers as effective and worthwhile pedagogical tools. the present study shapiro and gordon (2012) concluded that the testing effect, not attention-grabbing, is responsible for enhanced learning with clickers in their experiment. because they compared clicker groups to non-clicker control groups, as do most published studies on the topic, clicker use was confounded with repetition in their investigation.
in the present two-experiment study we tested whether clicker effects are due, at least in part, to repetition. experiment 1 takes advantage of repetition learning in order to determine the role of repetition in clicker effects. specifically, if repetition is a significant source of clicker effects, clicker use should be subject to the spacing effect. the spacing effect (also called distributed learning) refers to the phenomenon in which rehearsal or re-exposure to material results in greater memory when a period of time is allowed to intervene between presentations (benjamin & tullis, 2010; cepeda, pashler, vul, wixted, & rohrer, 2006; glenberg, 1979; hintzman, 1974). if clicker questions are more effective when offered after a delay of several days, it will indicate the questions are likely serving as a method of repeating exposure to class material. if the spacing effect is not evident, it will indicate that repetition is unlikely to be a significant factor in clicker effects. in experiment 2, we compared a clicker group that received a single presentation of the material and a subsequent clicker question to a group that received a second presentation of the material in place of the clicker question. because shapiro and gordon (2012) have found evidence against attention-grabbing as the reason for clicker effects, failure to support repetition in the present study would provide converging evidence that clicker effects are most likely attributable to the testing effect. we also took advantage of the clicker data to perform a secondary analysis on clicker performance to learn something about the role of feedback in clicker effects.

experiment 1

the experiment was designed to determine whether the clicker learning effects demonstrated in prior studies are subject to the spacing effect, and thus attributable to repetition effects. we designed experiment 1 to compare exam question performance when clicker questions were asked immediately after in-class presentation of the material and when clicker questions were asked after a delay. if the spacing effect is in evidence, subjects should score higher on test items when clicker questions were offered 2-5 days after the material was taught in class, as compared with the same clicker questions offered the same day. finding a spacing effect would indicate that clicker effects may be attributed, at least in part, to repetition. if the spacing effect does not emerge in the data, it would indicate that either feedback or the testing effect leads to cognitive change that can't be attributed to simple rehearsal. for this reason, an analysis of clicker question performance was conducted to determine the role of feedback apart from repetition.

method

subjects

four hundred students enrolled in two sections of general psychology at the university of massachusetts participated in the study. students participated as part of their normal coursework, and earned participation points by correctly answering in-class questions. they ranged from freshmen to seniors and represented a range of disciplines offered at the institution. irb approval was sought prior to beginning the study and a waiver was granted.

materials and procedure

the class covered 11 topics in general psychology and was taught as a typical lecture course with demonstrations and multimedia integrated into many of the lectures.
powerpoint presentations were projected onto a movie theater-sized screen. in-class clicker questions were integrated into the presentations, with individual slides dedicated to single questions. the iclicker system was used to allow students to make their responses to clicker questions. students were required to purchase their clickers (for $20-40, depending on whether they were new or bundled with the required textbook). sixteen clicker question/test item pairs were used as stimuli in the present study. each clicker question was written to tap the same information as its targeted exam question. all clicker and exam questions were multiple-choice and were taken from shapiro and gordon (2012). the clicker question/test item pairs were spread throughout the semester, and across the four exams administered during the semester. performance on the exam questions was the dependent variable. all the targeted exam questions were included in the exams for both classes. the clicker question written for each targeted exam question was also given to each class. the timing of the clicker question presentation was manipulated as the within-subjects independent variable. when assigned to the "immediate" condition, clicker questions were given in class directly after the material was presented and any student questions were answered. when assigned to the "delayed" condition, the questions were given at the start of another class meeting, 2-5 days after the material was taught. half the items were included in each condition for one class, with the other half included in the opposite condition for the other class. as such, each of the 16 experimental items was included in both the immediate and delayed conditions, and each subject contributed data to both conditions. presentation of the relevant course material was the same in both conditions; the information was included on a powerpoint slide. identical "filler" clicker questions targeting material unrelated to the experimental items were offered to both classes, with the experimental items mixed randomly among them. between 1 and 5 clicker questions (filler and experimental) were asked in class each day. the instructor projected the clicker questions onto the screen after soliciting and answering any questions from the students. students were given 30-90 seconds to answer each question, and a bar chart showing the percentage of the class to respond with each option was projected to provide feedback after voting was closed.

exam and clicker question validation. because a simple no-clicker control condition would not allow discrimination between clicker and repetition effects (the purpose of this investigation), a no-clicker group was not included. for that reason, it was important to establish that the materials used in the present study do induce a basic learning effect. as mentioned, the sixteen clicker questions, and the corresponding exam questions for which they were written, were taken from shapiro and gordon (2012). the clicker question written for each exam question probed the same basic information as the test question, but was still unique. in their study, shapiro and gordon implemented a counterbalancing strategy wherein each of two classes was given clicker questions for half the targeted exam questions. for the other half of the questions, subjects were given no clicker question.
for half of those in the control condition (see experiment 1), no special treatment was given to the material in class. for the other half, however, students were told the material was important and would be on the test (see experiment 2), creating a very conservative test of clicker learning effects. the methodology controlled for both item and subject effects, as each exam question was used in the control and clicker conditions and each subject contributed data to both conditions. half the stimuli in the present experiment were taken from shapiro and gordon's experiment 1 and half from experiment 2. thus, in order to establish that the item subset chosen for the present study does produce the basic clicker learning effect, the analysis from that experiment was re-run including only the subset of items chosen for the present study. analyzed by subjects, a paired t-test revealed a significant effect of clickers on performance, t(234) = 5.62, p < .0001, d = .37. students scored a mean of 68.9% (sd = 18.7) correct on items when no clicker question was offered and 76.8% (sd = 18.1) correct when a question was offered, more than an 11% performance increase. the results were also significant when analyzed by items, t(15) = 4.29, p < .001, d = 1.08, with items answered correctly by 69.4% (sd = 12.1) of subjects when placed in the control condition and 76.0% (sd = 10.8) answering the same questions correctly when clicker questions were asked, an increase of almost 10%. again, this is a very conservative test of the stimuli because half the items in the control condition were identified to students as material that would be on the test. in spite of the warning, clicker questions still significantly boosted exam performance. other measures of the stimuli were taken to ensure stimulus validity. two independent content experts provided validation ratings of the stimuli. both are professors of psychology who routinely teach introductory psychology. they rated each clicker and exam question on a 7-point scale for the following dimensions: (1) overall quality of the question, (2) relevance of the information targeted by the clicker/exam item pairs to the content and goals of an introductory psychology course, and (3) the relationship between each clicker item and each exam question. the questions used in the experiment all received a minimum rating, and a minimum mean, of 5.0 from each rater on questions 1 and 2. the clicker/exam pairs met the same criteria on survey question 3. the relationship ratings between clicker questions and exam questions which were not intended as pairs were also analyzed. it was important that unpaired items were actually unrelated to ensure clicker questions were not enhancing memory for exam questions for which they were not written. all unrelated clicker/exam question pairs used in the present experiment scored a maximum rating of 2.0 among reviewers and had a mean rating of 1.5. the low ratings established the unlikelihood of "spillover" effects. that is, clicker questions were unlikely to affect performance on exam questions for which they were not intended.

results and discussion

students who withdrew early from the course, those with attendance lower than 60%, and those who missed more than one exam were excluded from the data analysis.
these students provided insufficient data for the within-subjects comparisons or were insufficiently exposed to the independent variable. the deletions yielded a total of 283 subjects in the analysis. moreover, individual exam question data were removed from the analysis for students who were absent from class the day the targeted content was presented. missing those critical classes meant missing the targeted content as well as their immediate clicker questions. also, effects of the delayed clicker questions would be difficult to interpret for those cases. a maximum of 16 exam questions per subject was possible, and these deletions resulted in a mean of 13.1 per subject. out of a maximum of 283 student scores for each question, the deletions resulted in a mean of 229.6. paired t-tests were performed to compare performance between the immediate and delayed conditions. the results did not reveal evidence of a spacing effect. when analyzed by subjects, there was no significant difference between performance on exam items when targeted by immediate (m = 67.5, sd = 24.1) or delayed (m = 70.0, sd = 21.8) clicker questions, t(282) = 1.73, p > .05. no significant difference between the immediate (m = 67.4, sd = 9.3) and delayed (m = 69.8, sd = 11.7) conditions was revealed in the item analysis, t(15) = 1.04, p > .05. the mean discrimination index for the exam questions was 50.4. since there was no spacing effect, the data argue against repetition as a significant mechanism underlying clicker effects. if repetition isn't driving the effect, what is? a clue to the relevant processes may be gleaned by examining clicker question performance in the immediate versus delayed conditions. it makes intuitive sense that students would perform better on immediate clicker questions, as the information needed to answer the questions correctly has just been presented in lecture. given that students performed equally well on later exam questions regardless of clicker question timing, however, better performance on immediate than on delayed clicker questions would suggest that corrective feedback was being used to improve test performance to some extent. paired t-tests comparing clicker question performance between the immediate and delayed conditions revealed just that. analyzed by subjects, students scored a significantly higher percent correct on immediate clicker questions (m = 94.7, sd = 9.1) than delayed (m = 83.2, sd = 15.8), t(281) = 10.99, p < .0001, d = .65. the same result was found when analyzed by items, with the same clicker questions answered correctly more often when asked in the immediate condition (m = 94.7, sd = 5.1) than in the delayed condition (m = 82.0, sd = 18.3), t(15) = 2.85, p < .01, d = .72. not only are the t-tests significant, but the effect sizes are quite robust. despite such clear differences between immediate and delayed clicker performance, exam performance was not affected by condition. as such, it stands to reason students were able to make some use of their performance feedback in the delayed condition to improve test performance. the clicker performance analysis provides only indirect evidence about the effect of corrective feedback, however. a more direct test is possible by comparing exam question performance when the clicker questions were answered correctly versus incorrectly.
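the effect sizes just reported are consistent with cohen's d computed from the paired t statistic in the usual way (an assumption about how d was derived here, but the arithmetic matches the reported values): for a within-subjects comparison with n paired scores,

\[ d_z = \frac{t}{\sqrt{n}}, \qquad \frac{10.99}{\sqrt{282}} \approx .65 \ \text{(by subjects)}, \qquad \frac{2.85}{\sqrt{16}} \approx .71 \ \text{(by items)} \]

the same identity reproduces the validation values reported earlier, since 5.62/\sqrt{235} \approx .37 and 4.29/\sqrt{16} \approx 1.07.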
if feedback is a primary factor in clicker effects, students should score equally well on exam questions regardless of clicker performance as long as they are given feedback, as they were in the present study. if there is a significant difference, it would mean the effect of corrective feedback is limited and unlikely to account for the entire effect. to run this test, all subjects and questions in the delayed clicker condition were combined to create groups based on clicker performance. because clicker performance was quite high in the immediate condition (95%), there were insufficient incorrect responses to compare with the correct responses, so the analysis was done only on the delayed clicker questions. moreover, since exam question performance was deleted when the critical content lecture was missed, there are no cases in the immediate clicker condition in which students attended the content lecture but missed the clicker questions. the delayed condition, however, provides an important comparison group: students who attended the critical content lecture but were not exposed to the delayed clicker question. the limitations of corrective feedback effects are seen when performance is compared on test items for which students correctly answered, incorrectly answered, or did not see the corresponding clicker questions. the mean of the 1422 exam questions included in the analysis, for which the corresponding clicker questions were correctly answered, was 72%. (with 16 items and 283 subjects, there were 2284 possible clicker responses in the immediate and in the delayed conditions; the number in the analysis is lower due to student absences.) for the 286 exam questions for which the corresponding clicker questions were incorrectly answered, the mean score was 63% correct. there were 191 unanswered, delayed clicker questions across subjects who did attend the critical content lecture (in other words, students who received the content in class but did not see the clicker question), and the mean score on the corresponding exam questions was 59%. although the effect size was quite small, the difference was significant, f(2, 1896) = 9.83, p < .0001, η² = .01. a scheffé post hoc analysis revealed that exam question performance was significantly higher in the case of correctly answered clicker questions than incorrectly answered clicker questions, p < .05 (two-tailed), and in the case of correct versus missed clicker questions, p < .05 (two-tailed). the difference between exam question performance based on incorrect versus missed clicker questions was not significant, p > .05 (two-tailed). if corrective feedback were a primary mechanism through which clicker effects worked, there should be little or no significant difference in exam performance based on clicker performance. more importantly, incorrectly answered clicker questions should yield better performance than getting no clicker question at all. after all, if students are using clicker questions primarily to gain corrective feedback on their performance, one would expect to see evidence of widespread self-correction on the exam questions.
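as a check on the omnibus test reported above, the eta-squared value follows from the f statistic and its degrees of freedom by the standard identity (assuming η² was computed this way):

\[ \eta^2 = \frac{df_{between} \cdot F}{df_{between} \cdot F + df_{within}} = \frac{2 \times 9.83}{2 \times 9.83 + 1896} \approx .01 \]

which agrees with the reported effect size and underscores how small the feedback-related differences were relative to the total variance.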
the significant performance advantage by students getting the answer correct, in addition to the comparable exam performance of students getting a clicker question wrong and those unexposed to it, suggests corrective feedback was not particularly useful for students getting clicker questions wrong. the large differences in sample sizes and the rather low effect size, however, warrant caution about the strength of this conclusion.

experiment 2

the purpose of experiment 2 was to provide converging evidence with experiment 1 that repetition is not the major source of clicker learning effects. the advantage of the methodology used in experiment 1 was that the presentation of immediate and delayed clicker questions seemed natural to students within the context of a live classroom. taking advantage of the spacing effect in this way, however, only provided indirect evidence of the role of repetition. experiment 2 addressed the question more directly by comparing exam question performance after the presentation of clicker questions or information repetition. moreover, since the main evidence refuting repetition effects in experiment 1 was a nonsignificant result, experiment 2 was also designed to provide positive evidence (i.e., a significant statistical result) in support of our hypothesis.

method

subjects

three hundred twenty students enrolled in two sections of general psychology at the university of massachusetts participated in the study. students participated as part of their normal coursework, and earned participation points by correctly answering in-class questions. they ranged from freshmen to seniors and represented all five colleges across campus. irb approval was sought prior to beginning the study and a waiver was granted.

materials and procedure

the same materials and procedure were used as in experiment 1, with one change. instead of half the exam questions being targeted with delayed clicker questions in each semester, half were targeted with a second, immediate presentation of the material. in the clicker and repetition conditions, the same slide was used to present the information for the first time. in the clicker condition a clicker question followed the slide. in the repetition condition a second powerpoint slide that presented the relevant information in a slightly different way from the first was presented in lieu of a clicker question. in this way, the effect of a second, novel presentation on exam question performance could be compared with the effect of a clicker question. a sample stimulus set from each condition is provided in appendix a. in both conditions, the targeted information was presented verbally along with an accompanying slide. (in the appendix a example, the targeted information was the role of the hypothalamus in hormone regulation.) in the repetition condition, the information was repeated with a new visual aid, while in the clicker condition students answered a question in lieu of seeing the second slide.

results and discussion

students who withdrew early from the course, those with attendance lower than 60%, and those who missed more than one exam were excluded from the data analysis. this yielded a total of 290 students in the analysis. paired t-tests were performed to compare performance between the clicker and repetition conditions.
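as an illustration of this by-subjects analysis, the following minimal sketch runs a paired t-test and computes cohen's d for paired scores. the data here are simulated with the condition means and standard deviations reported below, since the per-subject data are not part of the article:

import numpy as np
from scipy import stats

# simulated per-subject percent-correct scores on exam items whose content
# was followed by a clicker question versus a second presentation (values
# chosen to mirror the condition means and sds reported in experiment 2)
rng = np.random.default_rng(1)
n_subjects = 290
clicker = np.clip(rng.normal(61.2, 21.6, n_subjects), 0, 100)
repetition = np.clip(rng.normal(56.2, 20.6, n_subjects), 0, 100)

# paired (within-subjects) t-test across subjects
t_stat, p_value = stats.ttest_rel(clicker, repetition)

# cohen's d for paired scores: mean difference over sd of the differences
diffs = clicker - repetition
d_z = diffs.mean() / diffs.std(ddof=1)

print(f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {d_z:.2f}")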
the results indicated significantly better performance in the clicker condition (m = 61.2, sd = 21.6) than in the repetition condition (m = 56.2, sd = 20.6) when analyzed by subjects, t(289) = 3.417, p = .001, d = .20. the effect was also significant when analyzed by items, t(15) = 2.419, p = .029, d = .60, with students performing better on items when the relevant content was presented with a clicker question (m = 60.7, sd = 10.4) rather than with a second presentation (m = 55.2, sd = 12.0). the results of experiment 2 converge with those of experiment 1 to support the hypothesis that clicker questions do not enhance retention of classroom material merely because they act as a second presentation of information. the 5-point increase (from 56.2 to 61.2) in the subject analysis represents a performance increase of 8.9%, although the effect size is rather small. the 5.5-point increase in the item analysis represents a 10% increase and a moderate effect size. while these results can't rule out any role of repetition in clicker effects, they do provide compelling evidence that repetition is not the major source of the effect.

general discussion and conclusions

shapiro and gordon (2012) reported evidence that clicker effects are not attributable to drawing students' attention to certain material. that study was not able to rule out repetition effects as an underlying cause of clicker-enhanced learning, however. the present study addressed that possibility and demonstrated that repetition is unlikely to be a major contributor to the effect. in doing so, it provides converging evidence with shapiro and gordon that the testing effect is likely to underlie clicker-enhanced learning. in a secondary analysis of experiment 1, we tried to determine whether feedback has a role in clicker effects, since feedback is an important variable in the testing effect. the conclusions we were able to draw from those analyses are suggestive of some role of feedback, but do not paint a clear picture. the delayed clicker group performed worse on clicker questions than the immediate group but performed equivalently on exam questions, suggesting that corrective feedback helped. however, a comparison of exam question performance when students correctly versus incorrectly answered the clicker questions revealed students performed better on exam questions when they got clicker questions right. indeed, students answering the clicker question incorrectly performed only as well on the exam questions as students who were not exposed to the clicker question at all. these results suggest corrective feedback had a weak effect on exam performance. any conclusions drawn from the latter result, however, are tempered by the rather low effect size. on balance, then, the present results are suggestive of some role of corrective feedback in clicker-based learning. that conclusion is compatible with the large literature on the role of feedback in the testing effect. certainly, feedback should be an important area for future inquiry. regardless of the feedback question, the results do converge with shapiro and gordon (2012) to support the conclusion that the testing effect is the most likely mechanism underlying clicker effects.
the notion of testing itself causing cognitive change is supported by the extensive work of karpicke and colleagues (e.g., karpicke & roediger, 2007a, 2008) on the testing effect. as bjork (1975) suggests, the act of retrieving memories may strengthen the memory trace. moreover, it may create new routes to memories that are more easily invoked during exams, with the context common to testing situations acting as a retrieval cue. the present experiment was designed to test clicker use for enhancing fact-based learning alone. as such, the results do not support clicker use for problem-solving, application, or deep-level understanding of the material. within the context of fact-based learning, however, the present results are of practical importance for educators and students. as such, we can offer some concrete suggestions for effective use of clickers in the classroom. specifically, we suggest that important factual content be targeted with clicker questions. the questions should be written specifically to require memory retrieval of the targeted information. we also suggest the questions be worded clearly and in a way that maximizes students' ability to correctly answer the questions. after all, if the testing effect is at the heart of clicker-enhanced learning, the goal should be to encourage students to recall the correct information from memory, thereby activating the testing effect. finally, clickers seem to invoke cognitive change in the classroom that is unique. if clicker effects were attributable to repetition or attention-grabbing, their value might be dubious. after all, there are many avenues through which to provide repetition or enhance attention inside and outside the classroom. having demonstrated that clicker use produces cognitive change attributable to the testing effect (and quite possibly to feedback, as well), the present results support clickers as a unique and valuable pedagogical classroom tool. given the relatively low cost in terms of classroom time and equipment expense, the evidence in support of their educational benefit suggests they do offer real value to students and instructors.

references

agarwal, p. k., karpicke, j. d., kang, s. k., roediger, h. l., & mcdermott, k. b. (2008). examining the testing effect with open- and closed-book tests. applied cognitive psychology, 22, 861-876. doi:10.1002/acp.1391

allen, g. a., mahler, w. a., & estes, w. k. (1969). effects of recall tests on long-term retention of paired associates. journal of verbal learning & verbal behavior, 8(4), 463-470. doi:10.1016/s0022-5371(69)80090-3

beekes, w. (2006). the "millionaire" method for encouraging participation. active learning in higher education: the journal of the institute for learning and teaching, 7, 25-36. doi:10.1177/1469787406061143

benjamin, a., & tullis, j. (2010). what makes distributed practice effective? cognitive psychology, 61, 228-247. doi:10.1016/j.cogpsych.2010.05.004

bjork, r. a. (1975). retrieval as a memory modifier: an interpretation of negative recency and related phenomena. in r. l. solso (ed.), information processing and cognition: the loyola symposium (pp. 123-144). hillsdale, nj: erlbaum.
bjork, r. a., & bjork, e. l. (1992). a new theory of disuse and an old theory of stimulus fluctuation. in a. healy, s. kosslyn, & r. shiffrin (eds.), from learning processes to cognitive processes: essays in honor of william k. estes, volume 2 (pp. 35-67). hillsdale, nj: erlbaum.

butler, a. c., karpicke, j. d., & roediger, h. l. (2007). the effect of type and timing of feedback on learning from multiple-choice tests. journal of experimental psychology: applied, 13, 273-281. doi:10.1037/1076-898x.13.4.273

butler, a. c., & roediger, h. l. (2007). testing improves long-term retention in a simulated classroom setting. european journal of cognitive psychology, 19, 514-527.

carrier, m., & pashler, h. (1992). the influence of retrieval on retention. memory & cognition, 20, 633-642. doi:10.3758/bf03202713

cepeda, n. j., pashler, h., vul, e., wixted, j. t., & rohrer, d. (2006). distributed practice in verbal recall tasks: a review and quantitative synthesis. psychological bulletin, 132, 354-380. doi:10.1037/0033-2909.132.3.354

craik, f. i., & lockhart, r. s. (1972). levels of processing: a framework for memory research. journal of verbal learning & verbal behavior, 11, 671-684. doi:10.1016/s0022-5371(72)80001-x

duchastel, p. c. (1981). retention of prose following testing with different types of tests. contemporary educational psychology, 6, 217-226. doi:10.1016/0361-476x(81)90002-3

ebbinghaus, h. (1913). memory: a contribution to experimental psychology. (h. a. ruger & c. e. bussenius, trans.). new york: teachers college press.

glenberg, a. m. (1979). component-levels theory of the effects of spacing of repetitions on recall and recognition. memory & cognition, 7, 95-112.

hattie, j., & timperley, h. (2007). the power of feedback. review of educational research, 77, 81-112. doi:10.3102/003465430298487

hintzman, d. l. (1974). theoretical implications of the spacing effect. in r. l. solso (ed.), theories in cognitive psychology: the loyola symposium (pp. 77-97). potomac, md: erlbaum.

jacoby, l. l. (1978). on interpreting the effects of repetitions: solving a problem versus remembering a solution. journal of verbal learning and verbal behavior, 17, 649-667. doi:10.1016/s0022-5371(78)90393-6

kang, s. h. k., mcdermott, k. b., & roediger, h. l. (2007). test format and corrective feedback modify the effect of testing on long-term retention. european journal of cognitive psychology, 19, 528-558. doi:10.1080/09541440601056620

karpicke, j. d., & roediger, h. l. (2007a). repeated retrieval during learning is the key to long-term retention. journal of memory and language, 57, 151-162. doi:10.1016/j.jml.2006.09.004

karpicke, j. d., & roediger, h. l. (2007b). expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. journal of experimental psychology: learning, memory, and cognition, 33, 704-719. doi:10.1037/0278-7393.33.4.704

karpicke, j. d., & roediger, h. l. (2008). the critical importance of retrieval for learning. science, 319, 966-968. doi:10.1126/science.1152408

kennedy, g. e., & cutts, q. i. (2005). the association between students' use of an electronic voting system and their learning outcomes. journal of computer assisted learning, 21, 260-268. doi:10.1111/j.1365-2729.2005.00133.x

kluger, a., & denisi, a. (1996). the effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. psychological bulletin, 119, 254-284. doi:10.1037/0033-2909.119.2.254

kulhavy, r. w. (1977). feedback in written instruction. review of educational research, 47, 211-232. doi:10.2307/1170128
marsh, e. j., agarwal, p. k., & roediger, h. l. (2009). memorial consequences of answering sat ii questions. journal of experimental psychology: applied, 15, 1-11. doi:10.1037/a0014721

mayer, r. e., stull, a., deleeuw, k., almeroth, k., bimber, b., chun, d., bulger, m., campbell, j., knight, a., & zhang, h. (2009). clickers in college classrooms: fostering learning with questioning methods in large lecture classes. contemporary educational psychology, 34, 51-57. doi:10.1016/j.cedpsych.2008.04.002

mcdaniel, m. a., & masson, m. e. j. (1985). altering memory representations through retrieval. journal of experimental psychology: learning, memory, and cognition, 11, 371-385. doi:10.1037/0278-7393.11.2.371

mcdaniel, m. a., anderson, j. l., derbish, m. h., & morrisette, n. (2007). testing the testing effect in the classroom. european journal of cognitive psychology, 19, 494-513. doi:10.1080/09541440701326154

morling, b., mcauliffe, m., cohen, l., & dilorenzo, t. (2008). efficacy of personal response systems ("clickers") in large, introductory psychology classes. teaching of psychology, 35, 45-50. doi:10.1080/00986280701818516

nungester, r. j., & duchastel, p. c. (1982). testing versus review: effects on retention. journal of educational psychology, 74, 18-22. doi:10.1037/0022-0663.74.1.18

pashler, h., cepeda, n. j., wixted, j. t., & rohrer, d. (2005). when does feedback facilitate learning of words? journal of experimental psychology: learning, memory, and cognition, 31, 3-8. doi:10.1037/0278-7393.31.1.3

poirier, c. r., & feldman, r. s. (2007). promoting active learning using individual response technology in large introductory psychology classes. teaching of psychology, 34, 194-196. doi:10.1080/00986280701498665

raney, g. (2003). a context-dependent representation model for explaining text repetition effects. psychonomic bulletin & review, 10, 15-28. doi:10.3758/bf03196466

ribbens, e. (2007). why i like clicker personal response systems. journal of college science teaching, 37, 60-62.

roediger, h. l., & karpicke, j. d. (2006a). test-enhanced learning: taking memory tests improves long-term retention. psychological science, 17, 249-255. doi:10.1111/j.1467-9280.2006.01693.x

roediger, h. l., & karpicke, j. d. (2006b). the power of testing memory: basic research and implications for educational practice. perspectives on psychological science, 1, 181-210. doi:10.1111/j.1745-6916.2006.00012.x

sassenrath, j. m., & garverick, c. m. (1965). effects of differential feedback from examinations on retention and transfer. journal of educational psychology, 56, 259-263. doi:10.1037/h0022474

scarborough, d. l., cortese, c., & scarborough, h. s. (1977). frequency and repetition effects in lexical memory. journal of experimental psychology: human perception & performance, 3, 1-17. doi:10.1037/0096-1523.3.1.1

shapiro, a. m. (2009). an empirical study of personal response technology for improving attendance and learning in a large class. journal of the scholarship of teaching and learning, 9, 13-26.

shapiro, a. m., & gordon, l. t. (2012). a controlled study of clicker-assisted memory enhancement in college classrooms. applied cognitive psychology, 26, 635-643. doi:10.1002/acp.2843
shih, m., rogers, r., hart, d., phillis, r., & lavoie, n. (2008, april). community of practice: the use of personal response system technology in large lectures. paper presented at the university of massachusetts conference on information technology, boxborough, ma.

stowell, j., & nelson, j. (2007). benefits of electronic audience response systems on student participation, learning, and emotion. teaching of psychology, 34, 253-258. doi:10.1080/00986280701700391

szpunar, k. k., mcdermott, k. b., & roediger, h. l. (2008). testing during study insulates against the buildup of proactive interference. journal of experimental psychology: learning, memory, and cognition, 34, 1392-1399. doi:10.1037/a0013082

thompson, c. p., wenger, s. k., & bartlings, c. a. (1978). how recall facilitates subsequent recall: a reappraisal. journal of experimental psychology: human learning and memory, 4, 210-221. doi:10.1037/0278-7393.4.3.210

thorndike, e. l. (1913). educational psychology: vol. 1. the original nature of man. new york: columbia university.

tulving, e. (1967). the effects of presentation and recall of material in free-recall verbal learning. journal of verbal learning and verbal behavior, 6, 175-184. doi:10.1016/s0022-5371(67)80092-6

vojdanoska, m., cranney, j., & newell, b. (2010). the testing effect: the role of feedback and collaboration in a tertiary classroom setting. applied cognitive psychology, 24, 1183-1195. doi:10.1002/acp.1630

appendix

appendix a. sample stimulus set. sample item in the clicker and repetition conditions, reproduced in grayscale.

targeted exam question: which brain structure exerts considerable influence over the secretion of hormones throughout the body?
a. the hypothalamus
b. the amygdala
c. the hippocampus
d. the thalamus

clicker condition
first presentation (slide): hypothalamus. located deep in the brain; controls hormones and regulates a number of functions.
second presentation (iclicker question): which of the following is not a function of the hypothalamus? 1. hormone regulation 2. thirst 3. sleep 4. all of these are hypothalamus functions

repetition condition
first presentation (slide): hypothalamus. located deep in the brain; controls hormones and regulates a number of functions.
second presentation (slide): hypothalamus. temperature regulation; controls hormones (endocrine system); sexual activity; hunger; thirst; sleep.

journal of teaching and learning with technology, vol. 1, no. 1, june 2012, pp. 42-58.

factors that impact students' motivation in an online course: using the music model of academic motivation

brett d. jones1, joan monahan watson2, lee rakes3, and sehmuz akalin4

1 virginia tech, school of education (0313), blacksburg, va 24061
2 virginia tech, undergraduate academic affairs office, college of liberal arts and human sciences, 232b wallace hall (0426), 295 west campus drive, blacksburg, va 24061
3 virginia tech, school of education (0313), blacksburg, va 24061
4 virginia tech, school of education (0313), blacksburg, va 24061

abstract: the aim of this study was to examine the factors that motivate students in large online courses. specifically, the purposes were: (a) to document how highly men and women rated motivational beliefs in a large online course; (b) to determine why men and women rated their motivational beliefs the way in which they did; and (c) to provide recommendations for how to intentionally design online courses to motivate students. using a mixed methods design, we used a questionnaire to assess undergraduate students' perceptions of the components of the music model of academic motivation (i.e., empowerment, usefulness, success, interest, and caring) in an online course and their suggestions for changing the course. overall, men and women provided high ratings for their motivational beliefs in the course.
the suggestions students provided for changing the course were similar for both sexes and revealed a preference for instructional strategies that were consistent with the tenets of the music model of academic motivation, including: offering more and/or varied assessments, providing interactive activities, including videos and/or video lectures, and offering face-to-face meetings. other suggestions for improving the online course design are provided.

keywords: motivation, music model of academic motivation, online teaching, engagement, student perceptions

i. introduction.

although online courses are becoming more prevalent in higher education, the literature related to student motivation in online courses is only in its nascent stages (e.g., dixson, 2010). instructors and instructional designers of online courses must consider how engaging students in online course content might be similar to, yet possibly different from, face-to-face courses. in one study of a course that was taught face-to-face in one semester and then taught online in another semester, the researcher found that the students in the online section of the course provided higher ratings for several motivational beliefs than the students in the face-to-face section of the course (jones, 2010a). although this study documented differences in students' beliefs, it did not explore why students rated their motivational beliefs higher in the online section than in the face-to-face section of the course. the aim of the present study was to address this issue by examining why students in online courses might provide higher ratings for motivational beliefs than students in face-to-face courses. specifically, the purposes of the present study were: (a) to document how highly men and women rated motivational beliefs in a large online course, (b) to determine why men and women rated their motivational beliefs the way in which they did, and (c) to provide recommendations for how to intentionally design online courses to motivate students.

a. background.

motivation is a varied construct that can be examined through the lens of many theories and principles. to help instructors design courses that engage students in learning, jones (2009) developed the music model of academic motivation, which consists of five components that have been derived from research and theory as ones that are critical to student engagement in academic settings: empowerment, usefulness, success, interest, and caring. the name of the model, music, is an acronym based on the second letter of "empowerment" and the first letter of the other four motivational components. the music model has been used as a framework for instructors in designing instruction (jones, 2009; jones, 2010b) and for researchers in understanding the impact of instruction on students' motivation (jones, 2010a; jones, ruff, snyder, petrich, & koonce, 2012). interestingly, jones (2010a) documented that men and women's ratings differed for some of the music components in an online course.
the first component of the music model, empowerment, refers to the amount of perceived control that students have over their interactions with their learning environment. instructors can empower students by supporting their autonomy, such as by providing them with choices and the ability to make decisions. in online courses, empowerment has been shown to be a predictor of undergraduate students' effort, course ratings, and instructor ratings (jones, 2010a). the usefulness component of the music model involves the extent to which students believe that the coursework (e.g., assignments, activities, readings) is useful for their short- or long-term goals, as their motivation is affected by their perceptions of the relevance of what they are learning for the future (de volder & lens, 1982; kauffman & husman, 2004; tabachnick, miller, & relyea, 2008). one implication is that instructors need to ensure that students understand the connection between the coursework and their goals. students in an online course have been shown to access examples and exercises more frequently when they were provided with information about the usefulness of the material (sansone, fraughton, zachary, butner, & heiner, 2011). for the third music component, success, instructors need to ensure that students believe that they can succeed if they have the required knowledge and skills and put forth the appropriate effort. instructors can foster students' success beliefs in a variety of ways, including making the course expectations clear, challenging students at an appropriate level, and providing students with feedback regularly. for example, students' perceptions of their ability to succeed in using technology in online courses have been shown to be related to their motivation (kim & frick, 2011). the interest music component includes two theoretically distinct constructs: situational interest and individual interest (hidi & renninger, 2006). situational interest, which is akin to curiosity, refers to immediate, short-term enjoyment of instructional activities, whereas individual interest refers to internally activated personal values about a topic that are more enduring. instructors can create situational interest by designing instruction and coursework that incorporates novelty, social interaction, games, humor, surprising information, and/or that engenders emotions (bergin, 1999). instructors can develop students' individual interest in a topic by providing opportunities for them to become more knowledgeable about the topic and by helping them understand its value (hidi & renninger, 2006). studies of undergraduate and graduate students in online courses have documented that when instructors make the online course content more useful and relevant to students' interests, students' motivation increases (kim & frick, 2011). the underlying principle of the caring music component is that all humans have a need to establish and sustain caring interpersonal relationships (baumeister & leary, 1995; ryan & deci, 2000). the caring component can be divided into two components: academic caring and personal caring (johnson, johnson, & anderson, 1983). academic caring specifies that instructors need to demonstrate to students that they care about whether or not they successfully meet the course objectives.
personal caring involves the idea that students need to perceive that their instructor cares about their welfare. having an online presence in online courses, providing students with well-conceived immediate feedback, supporting students' critical and independent perspectives, offering invitations for personal discussions and interactions, and encouraging students to engage with one another in learning communities are all strategies for communicating a sense of caring in online courses that can lead to increased student motivation (baker, 2010; weiss, 2000).

b. research questions.

because jones (2010a) documented differences between men and women for some of the music model components, we designed the present study to examine not only why students have certain motivational beliefs in online courses, but also whether these beliefs vary by gender. we addressed the following two research questions in this study. 1. how highly do men and women rate each of the components of the music model? 2. what online course characteristics do men and women perceive as ones that could be changed to increase their perceptions of the music components?

ii. methodology.

a. design.

we implemented a partially mixed, concurrent design whereby the quantitative and qualitative components have approximately equal status (onwuegbuzie & collins, 2007). this study includes some of newman, ridenour, newman, and demarco's (2003) goals for conducting research, such as: understanding a complex phenomenon (i.e., how course characteristics affect student motivation), adding to the knowledge base in the areas of motivation and the scholarship of teaching and learning, and informing constituencies (e.g., educators, instructional designers) of the findings.

b. participants.

participants in this study included 609 of the 651 undergraduates (a 93.5% response rate) enrolled in a fully online "personal health" course at a large, public university in the united states. about half of the participants were women (n = 303; 49.8%) and about half were men (n = 306; 50.2%). the majority of students were white or caucasian (not hispanic; n = 466; 76.5%), whereas others self-reported their race/ethnicity as asian or pacific islander (n = 73; 12.0%), black or african american (n = 30; 4.9%), other (n = 21; 3.4%), hispanic (n = 17; 2.8%), or native american (n = 2; 0.3%). the reported academic level of the participants reflected students at their freshman (n = 33; 5.4%), sophomore (n = 109; 17.9%), junior (n = 187; 30.7%), and senior (n = 280; 46.0%) years.

c. course description.

the syllabus description of the personal health course stated, "this on-line course is designed to provide students with health information based on scientific principles that will enable him/her to make sound decisions regarding his/her health. the major emphasis is wellness and the importance of individual responsibility for health related matters through health promotion efforts." the course included material from thirteen chapters of a textbook covering topics such as wellness, mental health, substance abuse, alcohol, tobacco, cardiovascular health, cancer, communicable diseases, consumer health, nutrition, fitness, and human sexuality. students were assessed with four exams that were weighted equally toward students' final course grade.
the exams included true/false and multiple-choice questions and assessed content material from the textbook. to prepare for the exams, students read the textbook and studied questions provided by the instructor that were similar to the questions on the exams. students were also required to attend one workshop at the campus health center or to complete five online self-assessments. final grades were calculated based on the following percentages: the exams accounted for 84.5%, the workshop or online assessments accounted for 14.1%, and a questionnaire about the course accounted for 1.4% of students' final grade. the course was not a requirement for any of the students as part of their university coursework.

d. measures.

participants completed a questionnaire that contained items from previously validated instruments, as well as items written by the authors. the instruments that we used were the same as those presented in jones (2010a). students rated each item on a 7-point likert-type scale with descriptors at each point; one example item of each is presented here. the instruments measured seven constructs: five items measured empowerment (α = 0.93; "my instructor listens to how i would like to do things."), three items measured usefulness (α = 0.95; "in general, the material in this course is useful to me."), four items measured success (α = 0.93; "in this course, i feel that i am able to perform well."), three items measured situational interest (α = 0.90; "in general, how interested are you in learning the content material in this course?"), three items measured individual interest (α = 0.84; "learning the course content material is very valuable to me."), four items measured academic caring (α = 0.93; "i believe that my instructor cares about how much i learn."), and four items measured personal caring (α = 0.92; "i believe that my instructor really cares about me as a person."). we found the reliability estimates for the scales to be acceptable. as a measure of the perceived quality of the course, students were asked on a 7-point likert-type scale with descriptors at each point (1 = terrible; 7 = excellent): "my overall rating of the course is:" open-ended items were written by the authors to gain further insight into those aspects of the course that contributed to or detracted from the music components. the exact wording of the eight open-ended items is provided in the "results" section.

e. procedures.

participants were introduced to the questionnaire through the course syllabus, which was provided at the start of the semester. three weeks before the questionnaire became available, and again one week before, the course instructor reminded the participants via email that they needed to complete the questionnaire assignment. a link to the online questionnaire was made available to the participants during the ninth week of a 16-week semester via email notification and on the course website.

f. data analysis.

we used spss 12.0 to analyze students' responses to the likert-type and descriptive items on the questionnaire. to compare the differences between men and women on the music model components, we conducted t-tests and set the alpha level at 0.01 to address the problem of multiple comparisons.
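the scale reliabilities reported in the measures section (e.g., α = 0.93 for empowerment) are cronbach's alpha values, which can be computed from a respondents-by-items matrix of ratings. the following minimal sketch shows the computation; the simulated ratings are hypothetical stand-ins for the actual questionnaire data, which are not part of the article:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """cronbach's alpha for a respondents-by-items matrix of likert ratings."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical ratings: 609 respondents x 5 empowerment items on a 1-7 scale,
# built from a shared person-level component plus item-level noise
rng = np.random.default_rng(0)
person = rng.normal(5.4, 1.0, size=(609, 1))
ratings = np.clip(np.rint(person + rng.normal(0, 0.5, size=(609, 5))), 1, 7)
print(f"alpha = {cronbach_alpha(ratings):.2f}")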
for analysis of the open-ended items, we used a thematic whole-text analysis, which was informed by the analytic procedure developed by glaser and strauss (1967; also see strauss & corbin, 1998). an initial coding scheme for the item responses was developed after the authors read all of the responses, identified themes, and created coding categories within the themes. once codes were established for all open-ended items, the authors independently coded all 609 potential responses for each question. their responses were compared and the disagreements were noted. because it was possible for participants to provide a response that warranted more than one code, the inter-rater reliability was computed using the percentage of responses, not respondents. the inter-rater reliability ranged from 91% to 98% for the open-ended items.

iii. results.

a. research question 1: ratings for music model components.

the first research question asked: how highly do men and women rate each of the components of the music model? to address this question, we computed the mean scores and conducted t-tests to determine whether there were differences between females and males in their ratings. the means, standard deviations, and results of the t-tests are presented in table 1. both men and women rated all of the variables highly in that all of the mean values were greater than 5.0 on a 7-point likert-type scale. women provided significantly higher ratings than men for usefulness, success, situational interest, and individual interest. we found no statistical differences between men and women for empowerment, academic caring, or personal caring. men and women's overall rating of the course was similar (t = 1.86, df = 607, p = .06). the average course ratings were slightly above 6 on the 7-point scale (m = 6.11, sd = 0.97 for men; m = 6.26, sd = 1.02 for women), indicating that their overall rating of the course was between very good (a "6" on the scale) and excellent (a "7" on the scale).

table 1. means, standard deviations, and t-test results of students' ratings of the music model components by sex.

variable             | females m (sd) | males m (sd) | mean difference | t       | df    | d
empowerment          | 5.46 (1.04)    | 5.25 (1.19)  | 0.21            | 2.32    | 597.9 | 0.19
usefulness           | 6.02 (0.96)    | 5.81 (1.04)  | 0.21            | 2.63**  | 607.0 | 0.21
success              | 6.45 (0.64)    | 6.29 (0.76)  | 0.16            | 2.78**  | 591.2 | 0.23
situational interest | 5.93 (0.88)    | 5.67 (0.96)  | 0.26            | 3.47*** | 603.4 | 0.28
individual interest  | 6.20 (0.71)    | 5.96 (0.84)  | 0.23            | 3.70*** | 591.2 | 0.31
academic caring      | 6.15 (1.03)    | 6.00 (1.02)  | 0.15            | 1.85    | 607.0 | 0.15
personal caring      | 5.30 (1.54)    | 5.36 (1.46)  | -0.05           | -0.45   | 607.0 | 0.04

note: all items were rated on a 7-point likert-type scale. ** p ≤ 0.01; *** p ≤ 0.001. females n = 303; males n = 306.
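the effect sizes in table 1 are consistent with deriving cohen's d from the t statistic and degrees of freedom in the usual way for a two-group comparison (an assumption about the authors' computation, but the arithmetic reproduces every row):

\[ d \approx \frac{2t}{\sqrt{df}}: \quad \text{usefulness } \frac{2(2.63)}{\sqrt{607.0}} \approx .21, \qquad \text{situational interest } \frac{2(3.47)}{\sqrt{603.4}} \approx .28 \]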
b. research question 2: course characteristics related to music components.

our second research question asked: what online course characteristics do men and women perceive as ones that could be changed to increase their perceptions of the music components? participants were asked a series of open-ended questions for which they provided information about those aspects of the course that could be changed to enhance their motivation. responses to these questions are summarized in the following sections.

empowerment.

we asked participants the following question related to empowerment: "what could be changed in this course to make you feel you had more control over your learning?" we received 624 responses (310 from males and 314 from females); the results are presented in table 2. over half of the students reported that nothing could be changed to give them more control, and 16.3% of the responses indicated that they already had sufficient control. the other responses reflect more varied suggestions on how the course could be changed to give students more control over their learning, including eliminating exam deadlines, requiring more or varied assessment opportunities, offering face-to-face meetings with the professor, providing opportunities for interactive activities with other students in the class, finding ways to include videos and video lectures in the course, and incorporating more workshop opportunities (see table 2 for the complete list). to determine which aspects of the course gave students a sense of control, we asked them: "which aspects of this course give you control over this course?" we received 983 responses (458 from males and 525 from females), which are summarized in table 3. of the overall responses, 18.1% indicated that the availability of practice questions to prepare for the course exams gave them control over the course; 16.4% indicated that the ability to work at their own pace/teach themselves gave them control over the course; 14.6% of the overall responses indicated that "everything" about the course gave them control over the course; and 12.7% of the responses indicated that the choice to either read the textbook or answer the practice questions gave them control over the course. varied responses comprised 34.6% of the overall data for this question and indicated that the online format and its subsequent flexibility for testing and completing assigned work, the correspondence with the instructor, and the choice to attend workshops outside of class contributed to their sense of control in the course.

table 2. things that could be changed to give students more control over their learning.

response                                              | % male responses | % female responses | % overall responses
nothing                                               | 50.0             | 58.9               | 54.4
i have sufficient control                             | 17.4             | 15.1               | 16.3
irrelevant response that did not address the question | 9.4              | 6.6                | 8.0
no exam deadlines except one at the end of the course | 4.8              | 4.3                | 4.6
n/a                                                   | 2.6              | 4.3                | 3.4
require more or varied assessments                    | 3.5              | 2.6                | 3.1
allow for meetings with the professor                 | 3.3              | 0.3                | 1.8
make the course more interactive                      | 1.6              | 1.3                | 1.5
videos or lecture videos                              | 1.9              | 1.0                | 1.5
more workshops                                        | 1.6              | 1.0                | 1.3

note: inter-rater reliability = 94%; responses with less than 1.0% overall are not shown. male = 310 coded responses; female = 314 coded responses; overall = 624 coded responses.

table 3. aspects of the course that give students control over the course.
response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
availability of practice questions or tests | 17.5 | 20.2 | 18.9
ability to work at my own pace or teach myself | 17.5 | 15.4 | 16.5
everything | 15.7 | 13.4 | 14.6
choice to read text or answer practice questions | 12.4 | 13.4 | 12.9
online course or online tests | 9.6 | 6.7 | 8.2
where or when to take multiple choice exams | 7.4 | 7.4 | 7.4
correspondence with the instructor | 6.1 | 8.0 | 7.1
plenty of time to take tests or flexible deadlines | 3.9 | 5.9 | 4.9
attending the workshops | 2.8 | 3.1 | 3.0
irrelevant response that did not address the question | 3.3 | 1.3 | 2.3
being able to finish early or get ahead in class | 1.6 | 2.1 | 1.9
choice between tests or workshops | 1.1 | 1.5 | 1.3
note: inter-rater reliability = 91%; responses with less than 1.0% overall are not shown. ᵃ458 coded responses, ᵇ525 coded responses, ᶜ983 coded responses.

usefulness.

we asked students: "what could be changed in this course to make it more useful to you?" we received 627 responses (317 from males and 310 from females), which are summarized in table 4. over half of the responses indicated that there was nothing that could be changed to make the course more useful (52.3%); however, 39.2% of the responses indicated that there were methods and practices that could be changed. although the suggestions represented a variety of ideas (as shown in table 4), 5.6% of the overall responses indicated that providing more interactive, group activities throughout the term would make the course more useful; 4.8% indicated that requiring more workshops would do so; and 3.7% indicated that requiring more or varied assessments would do so.

success.

we asked students, "what could be changed in this course to help you feel you could be more successful in it?" and we received 620 responses (309 from males and 311 from females). the results are presented in table 5. over two-thirds of the students reported that nothing could be changed in the course to make them feel more successful in it. although varied and fewer in number, the remaining responses indicated that students believed they would be more successful if the course required more, varied types of assessments; if the course was more interactive; if videos and video lectures were included among the instructional materials for the course; if improvements were made to the textbook, course website, and study guides; if more workshops were made available; and if other resources were provided to help students better prepare for exams (see table 5 for the complete list).

table 4. things that could be changed in the course to make it more useful.
response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
nothing | 51.1 | 53.5 | 52.3
n/a or irrelevant response | 9.5 | 7.4 | 8.5
provide more interactive, group activities | 5.7 | 5.5 | 5.6
require more workshops | 4.4 | 5.2 | 4.8
require more or varied assessments | 3.8 | 3.5 | 3.7
use a different textbook | 3.5 | 1.3 | 2.4
provide a more specific content focus | 1.9 | 2.6 | 2.2
do not use a textbook | 1.6 | 2.6 | 2.1
provide online tutorials or lectures | 2.8 | 1.3 | 2.1
make it a traditional class that is not online | 2.2 | 1.9 | 2.1
give shorter, more frequent exams | 0.6 | 3.2 | 1.9
use videos to share information | 2.8 | 0.6 | 1.8
post presentation slides online | 0.9 | 2.3 | 1.6
focus more on current news or health issues | 1.6 | 1.3 | 1.4
provide fewer multiple choice questions | 1.6 | 1.0 | 1.3
make the content more relevant | 1.6 | 1.0 | 1.3
send less email | 0.9 | 1.6 | 1.3
use the course management system for everything | 1.6 | 0.6 | 1.1
offer more or varied practice questions | 0.6 | 1.3 | 1.0
reveal all practice questions at once | 0.3 | 1.6 | 1.0
note: inter-rater reliability = 94%; responses with less than 1.0% overall are not shown. ᵃ317 coded responses, ᵇ310 coded responses, ᶜ627 coded responses.

table 5. things that could be changed in the course to help students feel more successful.

response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
nothing | 68.0 | 69.1 | 68.6
irrelevant response that did not address the question | 5.2 | 3.6 | 4.4
n/a | 3.6 | 5.5 | 4.6
require more or varied assessments | 2.6 | 4.0 | 3.3
make the course more interactive | 2.3 | 3.6 | 3.0
videos or lecture videos | 2.8 | 2.3 | 2.6
improved textbook | 2.6 | 0.6 | 1.6
more practice questions after each chapter | 1.3 | 1.6 | 1.5
weekly online lectures | 2.3 | 0.6 | 1.5
improved study guides | 1.0 | 1.6 | 1.3
more workshops | 1.0 | 1.6 | 1.3
improve the website | 1.0 | 1.6 | 1.3
use other methods to help prep for exams | 1.0 | 1.3 | 1.2
note: inter-rater reliability = 97%; responses with less than 1.0% overall are not shown. ᵃ309 coded responses, ᵇ311 coded responses, ᶜ620 coded responses.

interest.

we asked students, "what could be changed in this course to make it more interesting and enjoyable?" and we received 643 responses (321 from males and 322 from females). forty percent of the responses indicated that nothing could be changed to make the course more interesting and enjoyable; however, nearly 52% of the responses suggested a variety of changes. the most predominant suggestions included showing videos or including images and making the class more interactive by including games and discussion forums. other responses indicated that requiring more workshops, incorporating more and varied assessments, and maintaining a more specific content focus would make the course more interesting and enjoyable, as would making improvements to the textbook and providing additional instructional materials beyond the textbook (see table 6 for the remainder of the responses).

table 6. things that could be changed in the course to make it more interesting and enjoyable.
response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
nothing | 40.1 | 39.8 | 40.0
show videos or images | 12.1 | 11.1 | 11.6
more interactive activities | 9.7 | 12.7 | 11.2
more workshops | 6.5 | 7.5 | 7.0
irrelevant response that did not address the question | 5.3 | 3.4 | 4.4
n/a | 3.4 | 4.0 | 3.7
require more or varied assessments | 3.4 | 3.4 | 3.4
more specific content focus | 2.8 | 3.1 | 3.0
use real-life examples, stories, or case studies | 1.2 | 4.0 | 2.6
opportunities for application or hands-on | 1.9 | 2.8 | 2.4
make content more relevant to students' lives | 2.5 | 2.2 | 2.4
textbook improvements | 3.4 | 1.2 | 2.3
provide additional materials beyond textbook | 2.2 | 1.2 | 1.7
more meetings or interactions with instructor | 2.2 | 1.2 | 1.7
video-taped lectures or presentation slides | 1.2 | 0.9 | 1.1
note: inter-rater reliability = 98%; responses with less than 1.0% overall are not shown. ᵃ321 coded responses, ᵇ322 coded responses, ᶜ643 coded responses.

caring.

because the caring component can be divided into academic and personal caring (jones, 2010a; jones & wilkins, 2012), we asked questions related to both of these caring subcomponents. related to academic caring, we asked students: "what could be changed in this course to make you feel that the instructor cares about whether you learn the course content and do well in the course?" we received 621 responses (319 from males and 302 from females), which are summarized in table 7. almost half of the students reported that there was nothing that could be done to increase academic caring. nearly 16% of the students reported that academic caring is difficult to convey in an online environment and that it is, therefore, not expected. additional responses suggested providing more interaction between the student and the instructor, providing opportunities to meet the instructor face-to-face, offering the course face-to-face instead of fully online, and asking students about themselves personally via email (see table 7 for the remainder of the responses).

table 7. things that could be changed in the course to increase academic caring.

response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
nothing or can't think of anything | 44.0 | 47.1 | 45.6
caring is difficult to convey online or isn't expected | 13.7 | 17.9 | 15.8
more interaction between the students and instructor | 6.0 | 6.3 | 6.2
opportunities to meet the instructor face-to-face | 6.3 | 4.0 | 5.2
n/a | 4.1 | 5.3 | 4.7
offer the class face-to-face instead of online | 6.3 | 1.7 | 4.0
irrelevant response that didn't answer the question | 2.2 | 2.6 | 2.4
ask students about themselves personally by email | 2.5 | 2.0 | 2.3
send email about current events in health | 2.8 | 1.7 | 2.3
more interaction among students | 2.2 | 2.0 | 2.1
don't know | 2.8 | 1.3 | 2.1
instructor should hold "live" office hours online | 1.9 | 1.3 | 1.6
meet with students to discuss their performance | 0.9 | 1.3 | 1.1
class is too large for the instructor to show caring | 0.9 | 1.3 | 1.1
video lectures online | 1.3 | 0.7 | 1.0
note: inter-rater reliability = 96%; responses with less than 1.0% overall are not shown. ᵃ319 coded responses, ᵇ302 coded responses, ᶜ621 coded responses.

to gather additional data related to academic caring, we asked students: "what does the instructor do to provide you with the impression that she cares about whether you learn the course content and do well in the course?" we received 667 responses (327 from males and 340 from females), which are summarized in table 8.
of the responses, 73.9% indicated that the instructor's continual communication with the class via email gave the impression that she cared about whether students learned the course content and did well in the course, and an overall 8.3% of the responses indicated that prompt, thorough responses to students' questions via email gave them the impression that the instructor cared about their academic success. among the remaining 14.3% of responses, students cited the accessibility of the instructor, the instructor's encouragement of student questions, her accommodations and flexibility to meet the needs of her students, and her personal, individualized responses to students' emails as things the instructor did to convey that she cared about whether the students learned the course content and did well in the course.

with respect to personal caring, we asked students: "what does the instructor do to provide you with the impression that she cares about you as a person?" we received 643 responses (326 from males and 317 from females), which are summarized in table 9. of the responses, 35.0% indicated that the instructor's frequent email reminders and notifications gave the impression that she cared about students personally. additionally, 13.5% of the responses indicated that prompt, personalized email responses gave students the impression that the instructor cared about them personally, and 6.1% indicated that the tone of the email (e.g., polite, friendly, encouraging) made students feel as if the professor cared for them personally. the instructor's approachability and willingness to help appeared in 7.3% of the responses. overall, 7.3% of the responses indicated that the professor did "nothing" to provide the students with the impression that she cared about them personally, whereas 6.6% of the responses asserted that personal caring was not possible in an online environment and 4.0% noted that personal caring was not possible because students had no personal interaction with the professor. the remaining responses are presented in table 9.

table 8. things that the instructor does to provide academic caring.

response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
continual communication via email to the class | 74.9 | 72.3 | 73.6
prompt, thorough responses to email inquiries | 7.5 | 9.1 | 8.3
irrelevant response that didn't answer the question | 5.2 | 2.6 | 3.9
nothing | 3.1 | 1.8 | 2.5
accessibility of instructor | 1.2 | 3.5 | 2.4
encourages students to ask questions | 1.5 | 3.2 | 2.4
accommodating and flexible to meet student needs | 1.8 | 1.5 | 1.7
personal, individualized responses to student email | 0.9 | 1.5 | 1.2
n/a | 1.8 | 0.3 | 1.1
clear, detailed course documents and materials | 0.3 | 1.8 | 1.1
provides practice exams | 0.9 | 1.2 | 1.1
note: inter-rater reliability = 97%; responses with less than 1.0% overall are not shown. ᵃ327 coded responses, ᵇ340 coded responses, ᶜ667 coded responses.

table 9. things that the instructor does to provide personal caring.
response | % male responsesᵃ | % female responsesᵇ | % overall responsesᶜ
frequent email reminders and notifications | 36.6 | 33.4 | 35.0
prompt, personalized email responses | 11.3 | 15.6 | 13.5
approachability or willingness to help | 6.1 | 8.5 | 7.3
nothing | 9.8 | 4.7 | 7.3
irrelevant response that did not address the question | 8.6 | 5.7 | 7.2
personal caring not possible in online environment | 5.9 | 7.3 | 6.6
tone of email was polite, friendly, or encouraging | 6.1 | 6.0 | 6.1
office hours and availability | 4.9 | 3.5 | 4.2
have had no personal interaction with instructor | 3.1 | 5.0 | 4.1
n/a | 3.7 | 2.8 | 3.3
patience or assistance with technology issues | 0.3 | 3.5 | 1.9
allowed students to force-add or enroll late in course | 1.8 | 0.3 | 1.1
flexibility of due dates | 0.6 | 1.6 | 1.1
note: inter-rater reliability = 97%; responses with less than 1.0% overall are not shown. ᵃ326 coded responses, ᵇ317 coded responses, ᶜ643 coded responses.

iv. discussion.

a. research question 1.

both men and women rated each of the components of the music model higher than 5.0 on a 7-point likert-type scale. these findings indicate that, overall, men and women were satisfied with this type of course. as further evidence, students' average overall course ratings were between very good and excellent. additional research is needed to determine why women provided statistically higher ratings than men for usefulness, success, situational interest, and individual interest; however, as jones (2010a) speculated, based on research in the field of interest (jones, howe, & rua, 2000; von bothmer & fridlund, 2005), women might value some aspects of the health content more than men (i.e., they might find it more useful and interesting). being interested in the course content and finding it useful might also lead women to feel more successful, which could result in higher ratings than men on all of these music components.

b. research question 2.

when students described what could be changed in the instruction to make it more consistent with each music model component, the suggestions provided by men and women appeared to be similar in quantity. therefore, we grouped men's and women's responses and discuss them together in this section.

student recommendations across music components.

students' responses across the music components included recommendations for the addition and/or change of specific course characteristics. each of these characteristics and its perceived benefits is discussed in detail in the following sections and is illustrated in figure 1.

figure 1. summary of the main course characteristics that could be changed to enhance the music model components.

students suggested that the instructor provide more and/or varied types of assessments to increase their perceptions of empowerment, usefulness, success, and situational interest. currently, the course is constructed such that 84.5% of the students' final grade is based on the results of exams that include true/false and multiple-choice questions. although these types of summative assessments might be appropriate for evaluating students' comprehension of specific curricular objectives, they do not allow for formative development of students' understanding of the content.
adopting a course design that includes more and/or varied types of assessments may improve students' perceptions of empowerment by providing them with more choices; improve students' perceptions of usefulness by creating formative assessments that "inform future learning experiences" (doolittle, 1999, p. 8); improve students' perceptions of success by providing other types of assessments (besides true/false and multiple-choice exams) at which some students believe they have a better chance of succeeding; and improve students' perceptions of situational interest by reducing the redundancy of assessment methods and introducing a sense of novelty.

students suggested that the instructor include more activities that involve student interaction within the course. this response was highest for situational interest, followed by academic caring, usefulness, success, and empowerment. thus, interactive activities were perceived as a means to improve perceptions of all of the music components. counter to isolated learning assignments, interactive activities require social negotiation and mediation, allowing for multiple perspectives and representations of content (doolittle, 1999). further contributing to an effective learning environment, interactive activities allow for formative assessment opportunities in which students are engaged in higher-order cognitive processes (including analysis, synthesis, elaboration, and evaluation) as they provide one another with ongoing feedback and validation (marra & jonassen, 2001). because of the significance of the role of interaction with respect to student motivation identified in this study, future researchers should examine exactly what students consider "interactive activities" and which of them might be most effective at increasing students' perceptions of the music components. we believe that interactive activities would increase students' perceptions of situational interest if they are novel, involve social interaction, include games or puzzles, or require physical movement (see bergin, 1999, for evidence and a discussion).

students suggested that the instructor include videos and/or provide video lectures. this suggestion was highest for situational interest but also appeared for empowerment, usefulness, success, and academic caring. videos could enhance situational interest by providing a medium that is novel relative to the text-heavy nature of the course; they may also be incorporated to illustrate the usefulness of the material in ways that are not as easily (or quickly) transmitted through text. further, videos (particularly appropriate motion pictures in which characters and situations are developed in emotionally evocative ways) serve to construct authentic, albeit vicarious, environments in which the course content may be accessed and contextually engaged. videos allow for a shared framework within a course and provide a common narrative from which students can derive relevance and authenticity, critical components of an effective learning environment (marra & jonassen, 2001).

because the course was offered completely online, students recommended meeting face-to-face with the instructor as a means to increase academic caring, interest, and empowerment. certainly, "in person" conversations better facilitate "personal" connections, incorporating cues such as eye contact, facial expressions, tone of voice, and immediate responses to dynamic questions, and these factors may increase the perception of caring.
interest may also be heightened in these face-to-face sessions through the enthusiasm of the teacher and her ability to provide immediate, personal examples of the content in light of student questions and experiences. finally, students may feel more empowered through face-to-face meetings, particularly if their ideas and knowledge are heard and validated. offering face-to-face opportunities in an online course also provides students with another choice through which to receive guidance about the course content.

student recommendations within each music component.

in this section, we highlight some of the other student recommendations that were more common in one of the music components and less common in the others (see figure 1). to feel more empowered, students suggested removing the exam deadlines, which would provide them with more choices as to when to complete the course work. this recommendation is simple for the instructor to implement; however, one problem with it is that students might not self-regulate their learning well in an online environment without regular cues and reminders. the danger of removing exam deadlines is procrastination: some students might wait until the end of the course to take all of the exams and, subsequently, perform poorly in the course. as jones (2010b) states, the empowerment and success components must be balanced carefully so that one does not hinder the other. in this case, too much empowerment in the form of no deadlines might hinder students' ability to be successful. a possible compromise would be to keep deadlines but allow students to complete the work and receive grades on it any time prior to the deadline. this way, students have a choice as to when to do the work, as long as it is completed before the instructor-set deadline. in fact, students reported that the ability to work at their own pace was one aspect of the course that provided them with control.

students' suggestions that appear consistent with the usefulness component of the music model focused on the content of the course. some students recommended using a different textbook or not using a textbook at all. such suggestions should be considered if the textbook content is not related to students' lives or to the real world in some manner. we acknowledge that not all learning objectives can be personally useful to all students, but to the extent possible, the instructional materials should be presented within a framework of the learners' experiences and prior knowledge. in this way, learners can find relevance in newly introduced material. other suggestions included focusing on more relevant and current health issues, which might be easier to do through web-based resources and real-world case studies, which could be kept more current than those provided in a paper textbook.

over two-thirds of the students reported that there was nothing about the course that could be changed to help them feel more successful. this finding was also evidenced in the quantitative data in that the success component was rated higher than any of the other music components by both the men and the women. these results indicate that the structure of the course is sufficient for most students to feel successful.
most of the recommendations for success were about factors related to the exams, which seems reasonable given the high importance of the exams for students' final course grade. the suggestions included providing more practice questions, offering other methods to help prepare for the exams, and providing the correct answers after the tests. these techniques would provide students with formative and constructive feedback about their increasing content knowledge, which could help them succeed on the exams. the suggestion to provide more exams would allow each exam to include less content, which is another method that could help students succeed.

students provided some specific examples of how the course could be made more interesting and enjoyable, such as providing more workshops; using real-life examples, stories, and/or case studies; providing opportunities for application of the content; improving the textbook; using materials beyond the textbook; and incorporating videos and/or presentation slides. most of these recommendations would vary the style of the course presentation, which is one way to improve situational interest (jones, 2009).

given that email was the only means of communication between the instructor and her students, many of the suggestions for the caring component related to the use of email. tables 8 and 9 show that students felt cared for (academically and personally) through the instructor's continual email communications and her prompt, polite, and personalized responses to students' email inquiries. these findings are consistent with those of a study by clayton, blumberg, and auld (2010), which found that students in online courses want "engaging learning environments that promote direct interaction with professor(s) and students, spontaneity, immediate feedback, and relationships with faculty and other students" (p. 362). possible ways for the instructor to be perceived as more caring include asking students about themselves by email and promoting interaction among the students (dixson, 2010).

v. conclusions.

although men and women differed in some of their quantitative ratings of the music components, there does not appear to be a need to design an online course differently for men and women, because the suggestions provided in the open-ended items for changing the course were similar for both sexes. students' responses to the open-ended items revealed a preference for instructional strategies that are consistent with the tenets of the music model of academic motivation, thus providing validity evidence for the use of the music model in online courses. it is notable that several of the strategies provided could increase students' perceptions in more than one component of the music model, such as providing varied types of assessments, including interactive activities, providing videos and/or video lectures, and meeting face-to-face with the instructor. it is our hope that instructors can use the recommendations provided in this study and that doing so will lead to greater student engagement in online courses.

references

baker, c. (2010). the impact of instructor immediacy and presence for online student affective learning, cognition, and motivation. the journal of educators online, 7(1), 1-30.
baumeister, r., & leary, m. (1995). the need to belong: desire for interpersonal attachments as a fundamental human motivation. psychological bulletin, 117, 497-529.
bergin, d. a. (1999). influences on classroom interest. educational psychologist, 34, 87-98.
clayton, k., blumberg, f., & auld, d. p. (2010). the relationship between motivation, learning strategies and choice of environment whether traditional or including an online component. british journal of educational technology, 41(3), 349-364. doi: 10.1111/j.1467-8535.2009.00993.x
de volder, m., & lens, w. (1982). academic achievement and future time perspective as a cognitive-motivational concept. journal of personality and social psychology, 42(3), 566-571.
dixson, m. d. (2010). creating effective student engagement in online courses: what do students find engaging? journal of the scholarship of teaching and learning, 10(2), 1-13.
doolittle, p. e. (1999). constructivism and online education. retrieved from http://www.trainingshare.com/resources/doo2.htm
glaser, b. g., & strauss, a. l. (1967). the discovery of grounded theory: strategies for qualitative research. chicago: aldine publishing company.
hidi, s., & renninger, k. a. (2006). the four-phase model of interest development. educational psychologist, 41(2), 111-127.
johnson, d. w., johnson, r., & anderson, a. (1983). social interdependence and classroom climate. journal of psychology, 114(1), 135-142.
jones, b. d. (2009). motivating students to engage in learning: the music model of academic motivation. international journal of teaching and learning in higher education, 21(3), 272-285.
jones, b. d. (2010a). an examination of motivation model components in face-to-face and online instruction. electronic journal of research in educational psychology, 8(3), 915-944.
jones, b. d. (2010b, october). strategies to implement a motivation model and increase student engagement. paper presented at the annual meeting of the international society for exploring teaching and learning, nashville, tn.
jones, b. d., ruff, c., snyder, j. d., petrich, b., & koonce, c. (2012). the effects of mind mapping activities on students' motivation. international journal for the scholarship of teaching and learning, 6(1), 1-21.
jones, b. d., & wilkins, j. l. m. (2012). testing the music model of academic motivation through confirmatory factor analysis. manuscript submitted for publication.
jones, m. g., howe, a., & rua, m. j. (2000). gender differences in students' experiences, interests, and attitudes toward science and scientists. science education, 84(2), 180-192.
kauffman, d. f., & husman, j. (2004). effects of time perspective on student motivation: introduction to a special issue. educational psychology review, 16(1), 1-7.
kim, k., & frick, t. w. (2011). changes in student motivation during online learning. journal of educational computing research, 44(1), 1-23.
marra, r. m., & jonassen, d. h. (2001). limitations of online courses for supporting constructive learning. quarterly review of distance education, 2(4), 303-317.
newman, i., ridenour, c., newman, c., & demarco, g. m. p., jr. (2003). a typology of research purposes and its relationship to mixed methods research. in a. tashakkori & c. teddlie (eds.), handbook of mixed methods in social & behavioral research (pp. 167-188). thousand oaks, ca: sage.
onwuegbuzie, a. j., & collins, k. m. t. (2007). a typology of mixed methods sampling designs in social science research. the qualitative report, 12(2), 281-316.
ryan, r. m., & deci, e. l. (2000). self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. american psychologist, 55(1), 68-78.
sansone, c., fraughton, t., zachary, j. l., butner, j., & heiner, c. (2011). self-regulation of motivation when learning online: the importance of who, why and how. educational technology research and development, 59(2), 199-212.
strauss, a., & corbin, j. (1998). basics of qualitative research: techniques and procedures for developing grounded theory. thousand oaks, ca: sage.
tabachnick, s. e., miller, r. b., & relyea, g. e. (2008). the relationships among students' future-oriented goals and subgoals, perceived task instrumentality, and task-oriented self-regulation strategies in an academic environment. journal of educational psychology, 100(3), 629-642.
von bothmer, m. i. k., & fridlund, b. (2005). gender differences in health habits and in motivation for a healthy lifestyle among swedish university students. nursing & health sciences, 7(2), 107-118.
weiss, r. e. (2000, winter). humanizing the online classroom. in r. e. weiss, d. s. knowlton, & b. w. speck (eds.), new directions for teaching and learning: no. 84. principles of effective teaching in the online classroom (pp. 47-51). san francisco: jossey-bass.

journal of teaching and learning with technology, vol. 4, no. 1, june 2015, pp. 40-60. doi: 10.14434/jotlt.v4n1.13002

revisiting use of real-time polling for learning transfer

sheri stover¹, dan noel², mindy mcnutt³, & sharon g. heilmann⁴

abstract: instructors in five different undergraduate courses designed their courses to include real-time polling to increase their students' levels of engagement and participation in an attempt to enhance students' learning transfer. bjork (1994) defined learning transfer as "the ability to use information after significant periods of disuse and the ability to use information to solve problems that arise in a context different (if only slightly) from the context in which the information was originally learned" (p. 187). this mixed methods research study examined the results of those efforts by surveying students' perceptions of whether the use of real-time polling had an effect on their understanding of the course content and their levels of participation and engagement in the classroom. instructors used poll everywhere to incorporate real-time polling in classes where 98% of students had suitable devices to respond to the polls. results from this survey indicate that the use of real-time polling helped students better understand the course material and also increased their levels of participation and engagement.

keywords: real-time polling, poll everywhere, learning transfer

faculty members in higher education have begun to implement clickers in their classrooms. clickers are also known as audience response systems and real-time polling systems. clickers are hand-held devices that students use to respond to questions displayed on a computer projector. a receiver device records students' responses and then displays the aggregated results for the entire class to see (campt & freeman, 2010).
most frequently, clickers are used to respond to multiple-choice questions, but some clickers allow students to type in short, open-ended responses. clickers are sold for about $30 to $40 by manufacturers such as turningpoint and iclicker (kelly, 2011). these costs can place an additional financial burden on students, many of whom have reported dissatisfaction with being required to purchase a clicker and then having to remember to bring it to class (patry, 2009). companies like poll everywhere now provide real-time polling in which students can use their cell phones to respond to polls. the advantage of using cell phones is that students can use a tool that most of them have readily available (dahlstrom, 2012). poll everywhere has an educational plan through which instructors can use the polling for free in classes with no more than 40 students. once the polls are created, the instructor displays the questions on the computer projector for all students to see. students can use their smartphones, feature phones, laptop computers, or tablets to respond to the real-time polls (poll everywhere, n.d.).

¹ department of leadership studies in education and organizations, wright state university, 3640 colonel glenn hwy., dayton, oh 45435, sheri.stover@wright.edu
² department of leadership studies in education and organizations, wright state university, 3640 colonel glenn hwy., dayton, oh 45435, dan.noel@wright.edu
³ department of leadership studies in education and organizations, wright state university, 3640 colonel glenn hwy., dayton, oh 45435, mindy.mcnutt@wright.edu
⁴ department of leadership studies in education and organizations, wright state university, 3640 colonel glenn hwy., dayton, oh 45435, sharon.heilmann@wright.edu

it is not the technology itself that enhances students' learning, however, but the ways in which the instructor utilizes the technology. real-time polling is important because it is a tool that instructors can use to implement teaching methodologies in their classrooms that enhance their students' learning transfer.

literature review

many college faculty members continue to teach the way they were taught, using didactic lecture with a mid-term and a final exam to assess students' learning (halpern & hakel, 2003). this results in students who can achieve satisfactory grades by memorizing the material to pass the test, but it does not result in a large number of students being able to transfer their learning to future situations (bransford, brown, & cocking, 2000). instructors need to incorporate teaching methodologies that help prepare students to be independent learners, capable of applying their learning in authentic situations beyond their college classes (halpern & hakel, 2003). this literature review outlines the current research on learning transfer and addresses whether the use of real-time polling can enhance the design of a class to improve learning transfer.

ratey (2002) defined learning as a change in the neural networks in the brain. he concluded that the brain has the ability to store information in its short-term recall; thus, students can memorize information and retrieve it for tests. however, if the information is not used again, it is purged from the brain (ratey, 2002).
bjork (1994) defined learning transfer as "the ability to use information after significant periods of disuse and the ability to use information to solve problems that arise in a context different (if only slightly) from the context in which the information was originally learned" (p. 187). the primary mandate of colleges and universities is to teach so that students are able to transfer their learning. in other words, transferring knowledge implies that students can accurately recall and use knowledge, skills, and attitudes learned in college at a later time in their careers (schwartz & bransford, 1998). since it is challenging to predict the types of situations in which students will be required to apply their knowledge, the aim of higher education should be to facilitate students' ability to transfer what they have learned so they can independently implement solutions (halpern & hakel, 2003).

most instructors assume that learning transfer happens once students have successfully completed the class, but this does not always happen (leimbach & maringka, 2009). wiggins (2012) found that students have challenges transferring the content learned in previous classes unless the classes are specifically designed for learning transfer. classes that are designed to enhance students' learning transfer need to ensure that students have high levels of engagement and participation. student engagement is defined as the "time, energy, and resources [that students] spend on activities designed to enhance learning" (exeter et al., 2010, p. 762). student participation is defined as a "student's willingness, need, desire, and compulsion to participate in, and be successful in, the learning process" (bomia et al., 1997, p. 3). classes that are designed to enhance learning transfer and include high levels of student engagement and participation share the following characteristics.

active participation.

the first characteristic of classes designed to enhance learning transfer is that students are active participants in the learning process. students cannot simply be passive learners who are merely exposed to information through didactic lecture and assessed at surface levels (bransford et al., 2000). it is critical that students gain deep understanding, which requires them to spend a substantial amount of time working with the academic content. when students are repeatedly required to generate responses to real-time polling questions with minimal cues, they strengthen their neural connections. halpern and hakel (2003) refer to this strategy as "the single most important variable in promoting long-term retention and transfer" (p. 38). requiring students to frequently retrieve information creates a "memory trace," and repeated practice strengthens the neural connections. incorporating frequent real-time polling during each class moves students from being passive learners to becoming active participants by continually requiring them to "practice at retrieval" (halpern & hakel, 2003, p. 38). classes in which students are passive learners and receive information from teachers who lecture result in student memorization and "cramming" in preparation for tests (organisation for economic co-operation and development, 2009). students may receive good grades because the brain's short-term recall can store information for 18 to 36 hours (bjork, 1994).
if students do not continue to practice using that information, any new cellular material is reabsorbed by the brain and the information is not retained (zadina, 2008). in a lecture-based classroom, the instructor is the one who is firing his or her own neural network while the students are in a state of passivity (doyle, 2011). in a teacher-centered approach, instructors feel pressured to "cover" their course material, and they march through the textbook to ensure that every chapter of the book is covered. this learning is inert and does not result in high levels of transfer (bransford, franks, vye, & sherwood, 1989). wiggins and mctighe (2005) called this approach to teaching "teach, test, and hope for the best" (p. 3). in this approach, the implicit assumption is that learning transfer simply takes care of itself. rogers (1983) argued the need to change teacher-centered learning environments because "students become passive, apathetic, and bored" (p. 25). the incorporation of real-time polling can enhance students' levels of engagement and participation (patry, 2009) because it can help shift learning environments from teacher-centered to learner-centered by requiring students to use their polling device to respond to polling questions (mccabe, 2006).

deep understanding.

the second characteristic of classes that enhance learning transfer is that students need to move from simple memorization to deep understanding with abstract and contextual knowledge. students become engaged when given opportunities to experience abstraction, which is the process of allowing students to apply the content to other contexts (bransford et al., 2000). students also need to move beyond lower-level thinking skills such as remembering and understanding to the higher-order thinking skills of applying, analyzing, evaluating, and creating (krathwohl, 2002; renkl, atkinson, maier, & stanley, 2002).

mazur (1997), a physics and applied physics professor at harvard university, began using real-time polling to ensure his students had deeper levels of understanding. mazur continues to use real-time polling to deepen students' understanding by interspersing his lectures with conceptual questions that are designed to expose challenges in understanding the material. the questions he uses require students to apply higher-level skills to provide a response. mazur gives students a few minutes to deliberate; they must then commit to an answer using the polling device. this methodology allows instructors to quickly gauge students' understanding through the instructor response dashboard that summarizes the students' responses (miller, lasry, lukoff, schell, & mazur, 2014). when classes show high levels of misunderstanding, mazur asks students to spend a few minutes in groups of three or four to reach consensus on the correct answer. students then need to think through their arguments and discuss them with other students; this process allows them to deepen their level of understanding and also clarify any misunderstandings. since students are trying to convince each other of the correct answer, this type of teaching methodology is called peer instruction (mazur, 1997). following student discussions, instructors have students use the polling device to vote again.
instructors can then share the correct answer and respond to any lingering questions or provide clarification, if needed. the use of real-time polling in peer instruction is an excellent strategy to help enhance students' learning transfer because it requires students to be actively engaged. students need to apply their knowledge and then defend their answers, instead of simply sitting passively in class and taking notes (lambert, 2012).

frequent assessments.

the third characteristic of classes designed to maximize learning transfer is that students are actively involved with frequent assessments that are distributed throughout the class. students should not be assessed with one-time tests such as a single mid-term or final exam, but should be continually assessed using active, dynamic, and continual processes (bransford et al., 2000). incorporating polling into each and every class means students are continually assessed, which requires them to stay engaged and results in better long-term retention (pashler, rohrer, cepeda, & carpenter, 2007). polling can also allow students to review course content by assessing prior knowledge (abrahamson, 1999). once information is stored, it is important to continue to review it on a regular basis, thereby strengthening connections between neurons (willis, 2006).

instructors teaching educational psychology at the university of california found that students who used clickers during class as formative assessments, responding to frequent exam-like questions, scored significantly higher on exams than students enrolled in classes not using clickers. the researchers felt the clickers increased student learning because (a) students needed to pay closer attention to the course material to be able to correctly answer the exam-like questions, (b) students needed to organize and integrate the course material in their minds while formulating answers, and (c) students developed metacognitive skills for gauging their levels of understanding of the course material (mayer et al., 2009). instructors teaching a large-enrollment introductory psychology class embedded questions throughout the lecture as a formative assessment method to test students' level of understanding. the researchers found that students in classes using clickers had significantly higher scores (p < .05) than students enrolled in sections not using this teaching methodology (powell, straub, rodriguez, & vanhorn, 2011).

increase use of senses.

the fourth characteristic of classes designed to encourage higher learning transfer is that students are required to use more of their senses (seitz, kim, & shams, 2006). real-time polling can be used in class to help students utilize more of their senses while learning course content. for example, students use their visual memory when seeing the questions, problem sets, and possible answers displayed on powerpoint slides. students also use their auditory memory when hearing their instructor talk about the questions and later, if students are to discuss the answers with their peers. additionally, students use their tactile-kinesthetic memory when moving their bodies from a potentially bored, inattentive, passive listening position to a more alert one in which they prepare to use the polling device to choose an answer.
furthermore, students also activate feelings of excitement, generating eagerness when they are required to make a choice on the polling device. the more senses students use in practicing their learning, the more pathways become available for recall (seitz et al., 2006). implementing multisensory learning environments allows for more effective learning transfer over longer periods of time (medina, 2008). "learning will happen more effectively if the learner is as involved as possible, using as many of his [or her] faculties as possible, in the learning" (crosby, 1981, p. 10).

visible learning.

the fifth characteristic of classes that are designed for strong learning transfer is that classes include activities that require students to make their learning visible, clarify any misconceptions, and develop their metacognition (bransford et al., 2000). metacognition is a person's awareness of their own thinking and their ability to plan, monitor, evaluate, and repair cognitive learning (kirsh, 2005). incorporating real-time polling with appropriately crafted questions is an excellent strategy to help students strengthen their metacognition because it requires students to repeatedly and frequently apply knowledge to answer questions and to receive immediate feedback about their level of understanding of the topic (manke-brady, 2012). this is important because halpern and hakel (2003) found that students are poor judges of how well they understand complex topics and will develop misunderstandings if they do not have ways to accurately judge their levels of understanding. chabris and simons (2009) outlined why students develop misunderstandings by explaining that people have challenges with their perception, memory, attention, and reasoning. they went on to note that people frequently miss a lot of what happens around them but, due to inattentional blindness, have no idea what they are missing. developing lessons that help students identify their misconceptions allows them to learn content at deeper levels for longer retention.

increased participation and engagement.

the sixth characteristic of classes designed for effective learning transfer is that they require students to have high levels of participation and engagement in order to keep students' attention. penner (1984) found that student attention and concentration drop off dramatically after 10 to 15 minutes. research studies have shown that the human brain is not equipped to pay attention to auditory information for long periods of time, regardless of the students' grade level or ability (milton, pollio, & eison, 1986). bligh (2000) conducted research showing that when students spend prolonged periods of time on a repetitive task such as note taking, the lower centers of their brains (associated with mindless behavior) become activated. research in neuroscience has suggested that students need to practice and use information to see how it interconnects and how it can be used in other contexts to enhance learning transfer (dewinstanley & bjork, 2002). if students are going to achieve learning beyond lower-level information acquisition, they need to be actively engaged in the process of learning (pascarella & terenzini, 2005).
"if we want students to become more effective in meaningful learning and thinking, they need to spend more time in active, meaningful learning and thinking not just sitting and passively receiving information" (mckeachie, pintrich, lin, & smith, 1986, p. 77). incorporating real-time polling is a good way to break up long lectures (addison, wright, & milner, 2009) and helps ensure that students have high levels of engagement and participation, which will lead them to develop stronger neural connections to maximize learning transfer (doyle, 2011).

research questions

the instructors involved in this research study attempted to utilize real-time polling in their classes to maximize students' learning transfer by increasing students' levels of engagement and participation. the researchers attempted to answer the following questions:

1) does the use of real-time polling have an impact on students' perceived levels of participation and engagement?
2) does the use of real-time polling have an impact on students' perceived ability to understand the course material?

method

three instructors in five different classes used real-time polling in an attempt to increase students' levels of engagement and participation in order to enhance students' learning transfer.

demographics

the students in this research study were enrolled in classes at a mid-sized university in the midwestern united states. the survey was given to 97 participants in five different classes taught by three different instructors. two students did not have a device to use for the real-time polling and could not participate, so they completed only the demographics section of the survey. the students in the survey were majoring in organizational leadership, a program designed to prepare them to become managers and supervisors in the private, public, and nonprofit sectors. all students taking the survey were undergraduates, the majority being seniors (n = 56), followed by juniors (n = 39) and sophomores (n = 2). the ages of the participants ranged across 18-24 (n = 57), 25-30 (n = 21), 31-40 (n = 9), 41-50 (n = 9), and 50 and over (n = 1). the gender make-up of the participants was more male (n = 53) than female (n = 44). the racial mix of the participants was caucasian (n = 72), african american (n = 21), other (n = 2), asian (n = 1), and hispanic/latino (n = 1).

instrument

students were asked to complete a paper-and-pencil survey to measure their perceptions of using the poll everywhere real-time polling. the survey was administered by someone other than the classroom instructor to ensure students' privacy. the survey was administered during the last week of a semester class. students completed a 49-question survey that was developed by the researchers. the survey included questions about students' demographics, the type of device they used, their level of participation and engagement, their thoughts about learning transfer, and their thoughts about using real-time polling in the future. utilizing a likert scale, students responded to statements with (1) strongly agree (sa), (2) agree (a), (3) disagree (d), or (4) strongly disagree (sd). additionally, students were asked to provide comments about the impact of real-time polling on their level of understanding, satisfaction, participation, and engagement by responding to open-ended questions.
procedure

instructors in five different classes asked students to use poll everywhere as a real-time polling tool to respond to polling questions while teaching their classes. students used their personal devices (cell phone, laptop, or tablet) to respond to these real-time polls. quantitative data were gathered by asking students to complete a survey questionnaire on their perceptions of how the use of real-time polling affected their level of understanding and their level of participation and engagement. students were given statements such as "using real-time polling during class helps me to better understand the class material" and then selected whether they (1) strongly agree, (2) agree, (3) disagree, or (4) strongly disagree. students' likert-scale responses were entered into an excel spreadsheet and then imported into spss 21 for quantitative data analysis.

on the quantitative survey, there were six questions designed to measure student participation and nine questions designed to measure engagement. exploratory factor analysis (efa) with principal axis factoring and varimax rotation was used to identify the underlying relationships between the survey items (norris & lecavalier, 2009). results are displayed in table 1. principal axis factoring assumes all variables have been measured with some degree of error (kim & mueller, 1978). varimax (orthogonal) rotation attempts to minimize the number of variables that have high factor loadings, thus enhancing the interpretability of the factors. bartlett's test of sphericity (χ² = 651.93, p < .01) indicates that the correlation matrix is not an identity matrix; thus the items were sufficiently correlated for factor analysis.

table 1. rotated factor matrix.

item | loadings (factors 1-3)
q23 | -.545
q24 | .461, .736
q25 | .583, .465
q26 | .684
q27 | .707
q28 | .380
q31 | .686
q32 | .764
q33 | .766
q34 | .709
q35 | -.442
q36 | .722
q37 | .783
q38 | .760
q39 | .571, .490

the most conservative approach to interpreting the rotated factor matrix was employed; thus, any items that loaded across multiple factors were removed. the final variable, classroom engagement and participation, comprises items 26, 27, 31, 32, 33, 34, 36, 37, and 38. the cronbach's alpha for classroom engagement and participation is .92, which indicates an excellent level of internal consistency among these questions (george & mallery, 2011).

qualitative data were gathered from two open-ended questions that asked students how the use of real-time polling affected their comfort level speaking in class and their level of attention and engagement. the responses to the open-ended questions were imported into nvivo 10 research software for qualitative analysis and grouped into common themes.
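the analysis pipeline described above (bartlett's test, principal axis factoring with varimax rotation, and cronbach's alpha for the retained items) was run in spss; a rough equivalent can be sketched in python, assuming the third-party factor_analyzer package is available. everything below is illustrative: the data are simulated and none of it is the authors' workflow.

```python
# minimal sketch (not the authors' spss workflow): bartlett's test, efa with
# principal axis factoring and varimax rotation, and cronbach's alpha for the
# retained nine-item scale. assumes the third-party factor_analyzer package;
# `responses` holds hypothetical 1-4 likert data, not the study's raw data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity

item_ids = [23, 24, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39]
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 5, size=(95, len(item_ids))),
                         columns=[f"q{i}" for i in item_ids])

# a significant result rejects the hypothesis that the correlation matrix is
# an identity matrix, i.e., the items are correlated enough to factor-analyze
chi2, p = calculate_bartlett_sphericity(responses)
print(f"bartlett chi2 = {chi2:.2f}, p = {p:.4f}")

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["f1", "f2", "f3"]).round(3)
print(loadings.where(loadings.abs() >= 0.38))  # suppress small loadings

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

retained = responses[[f"q{i}" for i in [26, 27, 31, 32, 33, 34, 36, 37, 38]]]
print(f"cronbach's alpha = {cronbach_alpha(retained):.2f}")
```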
qualitative data were gathered from two open-ended questions asking students how the use of real-time polling affected their comfort level speaking in class and their level of attention and engagement. the responses were imported into nvivo 10 research software for qualitative analysis and grouped into common themes.

results

this section summarizes the results of the student survey measuring students’ perceptions of the incorporation of real-time polling in their classes.

research question #1

the first research question asked whether the use of real-time polling had an impact on students’ level of participation and engagement. in total, nine survey questions measured this impact; their cronbach’s alpha of .92 indicates an excellent level of internal consistency (george & mallery, 2011). these nine questions were combined into a total score summarizing students’ perceptions of how the use of real-time polling affected their level of participation and engagement (m = 1.50, sd = .45).

four of these questions asked students how the use of real-time polling affected their classroom communication (see table 2). in order of agreement, from most to least, they were: (1) i feel that using real-time polling during class enhances the quality of discussions (m = 1.46); (2) i like that my polling responses are anonymous (m = 1.48); (3) the use of real-time polling in class enhances controversial discussions (m = 1.51); and (4) using real-time polling in class makes me feel as if i have a voice to contribute during class discussions (m = 1.71).

table 2. real-time polling survey: participation & engagement
question | n | sa (1) | a (2) | d (3) | sd (4) | m | sd
1) q #32: i feel that using real-time polling during class enhances the quality of discussions. | 94 | 51 (54%) | 43 (46%) | 0 (0%) | 0 (0%) | 1.46 | .50
2) q #27: i like that my polling responses are anonymous. | 94 | 52 (55%) | 39 (42%) | 3 (3%) | 0 (0%) | 1.48 | .56
3) q #34: the use of real-time polling in class enhances controversial discussions. | 95 | 51 (54%) | 40 (42%) | 4 (4%) | 0 (0%) | 1.51 | .58
4) q #26: using real-time polling in class makes me feel as if i have a voice to contribute during class discussions. | 94 | 37 (39%) | 47 (50%) | 10 (11%) | 0 (0%) | 1.71 | .65

three questions asked whether students felt the use of real-time polling increased their levels of participation and engagement because of its impact on their enjoyment (see table 3). in order of agreement, from most to least, they were: (1) i like using a personal mobile device to engage in real-time polling during class (m = 1.36); (2) using mobile devices for real-time polling during class is fun (m = 1.40); and (3) i wish that other instructors would use real-time polling in their classes (m = 1.49).

table 3. poll everywhere survey: enjoyment and fun
question | n | sa (1) | a (2) | d (3) | sd (4) | m | sd
1) q #31: i like using a personal mobile device to engage in real-time polling during class. | 95 | 63 (66%) | 30 (32%) | 2 (2%) | 0 (0%) | 1.36 | .52
2) q #33: using mobile devices for real-time polling during class is fun. | 95 | 60 (63%) | 32 (34%) | 3 (3%) | 0 (0%) | 1.40 | .55
3) q #38: i wish that other instructors would use real-time polling in their classes. | 95 | 50 (53%) | 43 (45%) | 2 (2%) | 0 (0%) | 1.49 | .54

two questions measured whether students believed real-time polling kept them engaged and attentive (see table 4). in order of agreement, from most to least, they were: (1) i feel more connected to the class when participating with real-time polling (m = 1.65) and (2) i become attentive when my instructor directs us to respond using real-time polling (m = 1.67).

table 4. poll everywhere survey: engagement
question | n | sa (1) | a (2) | d (3) | sd (4) | m | sd
1) q #37: i feel more connected to the class when participating with real-time polling. | 94 | 43 (46%) | 42 (44%) | 8 (9%) | 1 (1%) | 1.65 | .68
2) q #36: i become attentive when my instructor directs us to respond using real-time polling. | 95 | 39 (41%) | 48 (51%) | 8 (8%) | 0 (0%) | 1.67 | .63
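the descriptive statistics reported in tables 2-4, and the nine-item composite score above, are straightforward to compute. the following sketch, using the same hypothetical csv and column names as the earlier example, shows one way to generate the n, per-option counts and percentages, mean, and standard deviation for each item.

# a minimal sketch of the per-item summaries in tables 2-4: n, count and
# percentage per response option (1 = sa ... 4 = sd), mean, and sd, plus
# the nine-item composite score. file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("polling_survey.csv")  # hypothetical file name

def summarize_item(item: pd.Series) -> dict:
    item = item.dropna().astype(int)
    counts = item.value_counts().reindex([1, 2, 3, 4], fill_value=0)
    return {
        "n": len(item),
        **{f"({k})": f"{c} ({c / len(item):.0%})" for k, c in counts.items()},
        "m": round(item.mean(), 2),
        "sd": round(item.std(ddof=1), 2),
    }

for q in ["q32", "q27", "q34", "q26"]:  # table 2 items, most to least agreed
    print(q, summarize_item(df[q]))

# composite participation-and-engagement score (mean of the nine retained
# items), reported above as m = 1.50, sd = .45
nine = df[["q26", "q27", "q31", "q32", "q33", "q34", "q36", "q37", "q38"]].dropna()
score = nine.mean(axis=1)
print(f"m = {score.mean():.2f}, sd = {score.std(ddof=1):.2f}")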
open-ended comments were grouped into categories in which students indicated that the use of real-time polling affected their participation and engagement. two categories were identified: an impact on students’ active participation, and an impact on their overall levels of participation and engagement. in the first category, students who were shy felt that real-time polling allowed them to be more fully active in class and gave them a voice (see table 5).

table 5. student comments about the effect of real-time polling on giving them a voice
1) sometimes i can feel uncomfortable speaking in class, this definitely provides an outlet for people to be heard, no matter what the comfort level and takes less time than hearing everyone’s opinion.
2) i get nervous speaking in front of people and with the polling i can still get my statement made without being shy.

some students felt they could be more active in the classroom using real-time polling because it allowed them to respond anonymously and express their opinions without being judged (see table 6).

table 6. student comments about the effect of real-time polling on being judged
1) i don’t speak up b/c [sic] i am often the one who knows the answers and don't want to be the “teacher's pet”.
2) [i don’t speak up because i] feel uncomfortable when i think people are judging my disability.
3) i love that it is anonymous, i don’t feel judged or anxious.

some students did not believe that real-time polling had any effect on their active participation because they felt comfortable speaking up in class; one commented, “i prefer getting credit for my ideas rather than anonymous responses.”

the second category concerned how real-time polling affected students’ levels of participation and engagement. the majority of comments indicated that the use of real-time polling had a positive impact on engagement. most students felt that real-time polling helped them stay focused and have fun (see table 7).

table 7. effect of real-time polling on participation and engagement due to focus and fun
1) just makes you pay attention.
2) it keeps things moving and energetic! it’s not just passing through lecture slides [and] offers something more.
3) it makes class fun.
4) feel more engaged.

students also felt the use of real-time polling increased their levels of participation and engagement because it allowed everyone in the class to feel included (see table 8).

table 8. effect of real-time polling on participation and engagement due to inclusion
1) they are a good way to interact with the class and keep everyone involved.
2) i think it allows everyone to feel like they can contribute to the class discussion.
3) i don't mind speaking up in class. i sometimes have exactly the same thing to say as someone else and that may be why i don't say much but i also enjoy using the real time polling because it gives the whole class a voice.
research question #2

the second research question asked whether the use of real-time polling had a perceived impact on students’ ability to understand the course material. ninety percent of students strongly agreed or agreed (see table 9) that the use of real-time polling helped them to better understand the material (m = 1.84, sd = .57).

table 9. poll everywhere survey: perceived student learning
question | n | sa (1) | a (2) | d (3) | sd (4) | m | sd
using real-time polling during class helps me to better understand the class material. | 94 | 24 (25%) | 61 (65%) | 9 (10%) | 0 (0%) | 1.84 | .57

a bivariate analysis was conducted to determine the empirical relationship between perceived student learning and classroom engagement and participation. the bivariate correlation between the two variables was significant (r = .55, p < .01, n = 91), providing evidence of a significant association between classroom engagement and participation and perceived student learning. to evaluate differences between individuals with high perceptions of learning transfer (strongly agree and agree) and low perceptions (disagree; there were no strongly disagree responses), an independent-samples t test was computed using classroom participation and engagement as the test variable and perceived student learning as the grouping variable. students with higher perceptions of learning transfer reported greater perceptions of overall engagement than students with lower perceptions of learning transfer (m = 1.48, sd = .42, n = 82 vs. m = 2.00, sd = .42, n = 9; md = .52, t(9.60) = -3.50, p < .001, equal variances not assumed; lower scores indicate higher levels of perceived overall engagement).
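the correlation and group comparison just reported can be sketched with scipy. the column names ("learning" for the single perceived-learning item, "engagement" for the nine-item composite) are hypothetical, and welch’s test (equal variances not assumed) is used to match the fractional degrees of freedom, t(9.60), reported above.

# a minimal sketch of the bivariate correlation and welch t-test, using the
# same hypothetical csv as the earlier examples.
import pandas as pd
from scipy import stats

df = pd.read_csv("polling_survey.csv").dropna(subset=["learning", "engagement"])

# pearson correlation between perceived learning and engagement/participation
r, p = stats.pearsonr(df["learning"], df["engagement"])
print(f"r = {r:.2f}, p = {p:.4f}, n = {len(df)}")

# split on perceived learning: high = strongly agree/agree (1-2), low = disagree (3)
high = df.loc[df["learning"] <= 2, "engagement"]
low = df.loc[df["learning"] == 3, "engagement"]

# welch's t-test (equal_var=False), consistent with the fractional df reported
t, p = stats.ttest_ind(high, low, equal_var=False)
print(f"mean diff = {low.mean() - high.mean():.2f}, t = {t:.2f}, p = {p:.4f}")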
open-ended comments were grouped into categories in which students indicated that the use of real-time polling affected their learning. three categories were identified: visible learning, frequent assessments, and deeper learning. students felt that the use of real-time polling allowed instructors to craft questions whose responses made learning visible; representative comments appear in table 10.

table 10. impact of real-time polling on students’ learning by making learning visible
1) totally engages you and you know what your level of understanding is.
2) i really enjoy this. i think it keeps students involved while in class. it reminds me of how when you're in grade school and you write your answer on your white board.
3) it lets you see how almost the whole class views a particular subject.
4) i felt that polling allows me to see how others feel.

a second category identified from the open comments was that instructors could use real-time polling to quickly and frequently assess students (see table 11).

table 11. impact of real-time polling on students’ learning with frequent assessments
1) sometimes i can feel uncomfortable speaking in class, this definitely provides an outlet for people to be heard, no matter what the comfort level and takes less time than hearing everyone’s opinion.
2) i felt it was a faster way to get my point and answer across.
3) i thought it was great, makes me feel accountable.

the third category identified from the open comments was that instructors could use real-time polling to encourage deeper learning. students reported that when the results of real-time polls were displayed, they sparked good class discussion (see table 12).

table 12. impact of real-time polling on students’ perceived learning with deeper learning
1) i fell [sic] helps start good class discussion.
2) i like seeing the results pop up on the screen and the discussions afterword [sic].
3) i think real time polling helps me to speak in class because it can [help me to] feel comfortable about what i say if i see that others would agree.

discussion

bransford et al. (2000) found that instructors must design classes in which students have high levels of participation and engagement in order to enhance learning transfer. classes designed to enhance learning transfer require students to participate actively, develop deep levels of understanding, complete frequent assessments, use more of their senses, make their learning visible, and maintain high levels of participation and engagement. this section discusses whether the incorporation of real-time polling affected these characteristics.

active participation.
halpern and hakel (2003) found that learning transfer is enhanced when students practice retrieving content by participating actively instead of passively listening to lectures. getting students to participate actively in class can challenge many instructors, and incorporating real-time polling gives instructors a tool for doing so. some students admitted that they were shy and that real-time polling helped give them a voice (89%). most students expressed appreciation for the anonymous nature of the polling (97%). some of the open-ended comments revealed that students who felt comfortable speaking in class found that real-time polling made no difference for them.

deeper understanding.
the incorporation of real-time polling followed by class discussion has been shown to deepen students’ understanding of course concepts (mazur, 1997). the students in this study overwhelmingly indicated that the use of real-time polling helped them to better understand the class material, with 90% agreeing with this statement. students unanimously agreed (100%) that the use of real-time polling enhanced the quality of discussion, and 96% indicated that it enhanced controversial discussions. students felt that real-time polling encouraged deeper understanding because the displayed poll results would spark a good discussion. some students felt more comfortable joining the discussion when the poll results showed that their opinions would be supported.

frequent assessments.
roediger and karpicke (2006) found that, when study preparation time is equal, students who used assessment tools in their study preparation significantly improved their long-term retention compared with students who used review methods for studying. incorporating assessments requires students to be actively involved in their learning: the brain retrieves information, and that effort deepens and strengthens neural connections (larsen, butler, & roediger, 2013). students in this study reported that the incorporation of real-time polling allowed instructors to quickly and frequently assess students with polling questions. incorporating real-time polling requires all students to engage actively with the assessment by responding to the polling questions, instead of only the few who might verbally answer a question during class. the regular incorporation of real-time polling made some students feel accountable because they were being frequently polled.

increased use of senses.
many classroom instructors can intuitively read their students’ body language to judge their level of alertness. the instructors noted that students’ physical demeanor would change when polling questions were included, moving from a relaxed posture to an attentive and engaged demeanor. students use more of their senses when asked to pick up their real-time polling device: they see the question, touch the responding device, and feel the excitement of preparing to select a response. seitz et al. (2006) found that the more senses students use while learning, the better, as more pathways become available for recall. in this study, 92% of students felt that the use of real-time polling required them to become more attentive.

visible learning.
traditional studying methods such as reading the textbook and highlighting the course material can give students a “fluency illusion”: because the material feels familiar, they believe they have mastered the content (carey, 2014). the students in these classes reported that the use of real-time polling allowed the instructor to make learning visible by displaying the aggregate results of each poll. this let students see how the other participants in class felt about the questions and how they stacked up against their peers. one student compared the use of real-time polling to the process of going to a whiteboard to make their learning visible.

increased participation and engagement.
the majority of university classes last from 50 to 90 minutes, much longer than the typical attention span of most college students. bunce, flens, and neiles (2010) found that classes designed to use real-time polling to enhance active learning produce fewer attention lapses because students’ attention is repeatedly re-engaged. the students in this research study indicated that the use of real-time polling helped them feel higher levels of engagement, with most of the students indicating that they wished other instructors would implement it (98%). students felt that using real-time polling was fun (97%) and made them feel more connected to the class (90%). the open-ended comments supported this, with almost all comments conveying that real-time polling helped to hold their attention.
considerations for implementing real-time polling

instructors face not only technical challenges in learning to set up real-time polling but also pedagogical challenges in ensuring that its implementation enhances learning. many universities offer technical training workshops that help instructors learn practical applications, but few offer workshops on pedagogical implementation. one suggestion is to implement real-time polling with a colleague and take turns conducting peer reviews of each other’s classes. another advantage of implementing real-time polling with a colleague is that instructors can practice together and brainstorm solutions to technical and pedagogical issues.

based on this study, the researchers have several suggestions for instructors who wish to incorporate real-time polling into their classrooms. the first consideration is to ensure that most students have a cell phone or other device with which to participate in the real-time polling. in this study, 98% of students had cell phones that allowed them to participate, and all of those students indicated that they would prefer using their cell phone for the real-time polling rather than purchasing a “clicker” device. because the majority of students had devices, instructors were pleased to be able to quickly query the class and get a response from most students. although two students indicated that they did not have a cell phone and could not participate, the researchers felt that far more students participated than would normally contribute to class discussion without the use of real-time polling.

instructors may also face technical challenges when implementing poll everywhere. the poll everywhere technology used here required students to use their own texting service to respond to real-time poll questions, so students’ phones needed service robust enough to text responses. because responses were texted, usability was sometimes an issue for students unfamiliar with the texting tool on their cell phone. additionally, students who used feature phones (non-smartphones) to text their answers were at a disadvantage because texting a response took longer and was a substantially more cumbersome process. another challenge for instructors to be aware of is that with poll everywhere real-time polling, students may be charged for each text sent to respond to a question. the costs incurred for texting would normally be far less than the roughly $40 cost of a clicker device, although the cost of required clicker devices may be covered by tuition assistance, whereas texting and data plan costs normally would not. several students provided open-ended comments indicating that they did not have issues with texting, for example, “i think everyone has unlimited texting.”

instructors should also be aware that the time of day may have an impact on students’ ability to participate using their own cell phones. for example, during evening classes students’
cell phone batteries may begin to lose power, and students may want to conserve battery power for their trip home. instructors should consider the classroom environment before implementing real-time polling that relies on students’ own cell phones. evening classes or lack of phone access may require instructors to use traditional “clicker” devices. if the environment supports students’ use of their own cell phones, however, polling with them saves students money and spares them the burden of bringing an additional “clicker” device to class.

significance of the study

the ultimate goal of higher education should be transfer of learning, so that students can take the knowledge they have learned and use it when employed, when the instructor is not there to help them. instead of teaching students to successfully complete midterms, instructors need to design their classes to prepare students to independently use knowledge in unpredictable real-world situations (halpern & hakel, 2003). real-time polling can be used to implement design characteristics that increase students’ levels of engagement and participation and thereby enhance learning transfer. companies like poll everywhere now offer real-time polling solutions in which students no longer need to purchase “clicker” devices but can use their existing cell phones to respond to real-time polls. allowing students to use their own devices to participate in the real-time polling not only saves them money but also eliminates the need to bring an additional device to class, since most students have their phones readily available.

study limitations

this study relied on the perceptions of students who responded to a survey. it also relied on three different instructors implementing real-time polling in their classes, so some variation in the instructors’ levels of expertise in conducting the polls was expected, which might have affected the results. the study asked for students’ perceptions of whether the use of real-time polling increased their understanding of the course content and did not measure actual assessment results. another limitation is that the researchers were also the instructors of these courses, and this dual role may have affected their objectivity in analyzing the student responses.

suggestions and recommendations for further research

almost all the students in this study had devices to use for real-time polling, with 98% of students owning cell phones. this allowed students to participate in real-time polling without spending extra money on a “clicker” device. one suggestion for further research is to compare students’ perceptions of engagement and participation in classes where students are required to purchase “clicker” devices with classes where students use their own cell phones. the data for this study were gathered from a survey given to students; a further recommendation is to conduct interviews with students to gather deeper, anecdotal understanding of how students perceived the impact of real-time polling on their understanding of course material, their participation, and their engagement. this study was conducted in one department at one university; another suggestion is to conduct the study in different academic departments and/or
different universities to see whether the results differ, especially if the instructor is not also conducting the study. a final recommendation for future studies is to expand the use of real-time polling by teaching students to use the technology themselves. ninety-three percent of students indicated on the survey that they felt using real-time polling could benefit their professional lives, and ninety-two percent indicated that it would be a marketable skill that could help differentiate them. in the classes used for this study, the instructors conducted the real-time polling and the students responded to the polling questions. a recommendation for further research is to have students facilitate real-time polling in their own presentations to see whether their perception of the value of using real-time polling increases.

conclusion

the purpose of ensuring learning transfer in higher education is that students can recall and apply the information learned in their classes later, when employed, in situations different from the classroom. “teaching for retention during a single academic term to prepare students for an assessment that will be given to them in the same context in which the learning occurs is very different from teaching for long-term retention and transfer” (halpern & hakel, 2003, p. 38). instructors can use real-time polling to design classes that enhance students’ learning transfer. the use of real-time polling allows students to “practice at retrieval” (mayer et al., 2009) by providing instructors with frequent opportunities to encourage students to apply their learning while responding to polling questions. furthermore, real-time polling enables students to develop their metacognition (halpern & hakel, 2003) because they can check their own understanding by comparing their responses to the correct answers. the incorporation of real-time polling also facilitates instructor adoption of more learner-centered strategies that allow students to assume more responsibility for their learning and places students in active learning environments instead of leaving them passive recipients of knowledge (doyle, 2011).

the instructors involved in this study redesigned their courses using real-time polling to enhance their students’ learning transfer. the survey results showed that students reported higher perceived levels of participation and engagement, and students’ responses indicated that the use of real-time polling overwhelmingly helped them to better understand the content. based on the statistically significant positive relationship between engagement and participation and perceived student learning (r = .55, p < .01, n = 91), it can be concluded that the use of real-time polling engages students, and this engagement was associated with greater perceived student learning.

references

abrahamson, a. l. (1999, may). teaching with classroom communication systems: what it involves and why it works. paper presented at the international workshop new trends in physics teaching, puebla, mexico. retrieved from http://www.bedu.com/publications/pueblafinal2.html
addison, s., wright, a., & milner, r. (2009). using clickers to improve student engagement and performance in an introductory biochemistry class. biochemistry and molecular biology education, 37(2), 84-91. doi: 10.1002/bmb.20264

bjork, r. a. (1994). memory and metamemory: considerations in the training of human beings. in j. metcalfe & a. shimamura (eds.), metacognition: knowing about knowing (pp. 185-205). cambridge, ma: mit press.

bligh, d. a. (2000). what’s the use of lectures? san francisco, ca: jossey-bass.

bomia, l., beluzo, l., demeester, d., elander, k., johnson, m., & sheldon, b. (1997). the impact of teaching strategies on intrinsic motivation. retrieved from eric database. (ed418925)

bransford, j. d., brown, a. l., & cocking, r. r. (2000). how people learn: brain, mind, experience, and school. washington, dc: national academy press.

bransford, j. d., franks, j. j., vye, n. j., & sherwood, r. d. (1989). new approaches to instruction: because wisdom can’t be told. in s. vosniadou & a. ortony (eds.), similarity and analogical reasoning (pp. 470-497). new york, ny: cambridge university press.

bunce, d. m., flens, e. a., & neiles, k. y. (2010). how long can students pay attention in class? a study of student attention decline using clickers. journal of chemical education, 87(12), 1438-1443.

campt, d., & freeman, m. (2010). the meetings revolution will be clickerized. corporate meetings & incentives, 29(4), 34-35.

carey, b. (2014). how we learn: the surprising truth about when, where, and why it happens. new york, ny: random house.

chabris, c., & simons, d. (2009). the invisible gorilla: how our intuitions deceive us. new york, ny: broadway paperbacks.

crosby, a. (1981). a critical look: the philosophical foundations of experiential education. journal of experiential education, 4(1), 9-15. doi: 10.1177/105382598100400103

dahlstrom, e. (2012). ecar study of undergraduate students and information technology. louisville, co: educause center for applied research. retrieved from http://www.educause.edu/ecar

dewinstanley, a., & bjork, r. a. (2002). successful lecturing: presenting information in ways that engage effective processing. in d. f. halpern & m. d. hakel (eds.), applying the science of learning to university teaching and beyond (pp. 19-32). new york, ny: wiley periodicals.

doyle, t. (2011). learner-centered teaching: putting the research on learning into practice. sterling, va: stylus.

exeter, d. j., ameratunga, s., ratima, m., morton, s., dickson, m., hsu, d., & jackson, r. (2010). student engagement in very large classes: the teachers’ perspective. studies in higher education, 35(7), 761-775.

george, d., & mallery, p. (2011). spss for windows step by step: a simple guide and reference (11th ed.). boston, ma: allyn & bacon.

halpern, d. f., & hakel, m. d. (2003). applying the science of learning to the university and beyond. change, 35(4), 36-41. doi: 10.1080/00091380309604109

kelly, k. (2011). san francisco state university student response systems (“clickers”): clicker standardization investigation project report. retrieved from http://angieportacio2.myefolio.com/uploads/csi_00_clickers_projectreport_v1c.doc

kim, j., & mueller, c. w. (1978). factor analysis: statistical methods and practical issues. newbury park, ca: sage publications.

kirsh, d. (2005). metacognition, distributed cognition and visual design. in p. gardenfors & p.
johansson (eds.), cognition, education, and communication technology (pp. 147-180). mahwah, nj: lawrence erlbaum associates.

krathwohl, d. r. (2002). a revision of bloom's taxonomy: an overview. theory into practice, 41(4), 212-218. doi: 10.1207/s15430421tip4104_2

lambert, c. (2012). twilight of the lecture. harvard magazine. retrieved from http://harvardmagazine.com/2012/03/twilight-of-the-lecture

larsen, d. p., butler, a. c., & roediger, h. l. (2013). comparative effects of test-enhanced learning and self-explanation on long-term retention. medical education, 47(7), 674-682. doi: 10.1111/medu.12141

leimbach, m., & maringka, j. (2009). learning transfer model: a research-driven approach to enhancing learning effectiveness. wilson learning. retrieved from http://www.wilsonlearning.com/images/uploads/pdfs/learning_transfer_approach.pdf

manke-brady, m. (2012). clickers and metacognition: how do electronic response devices (clickers) influence student metacognition? (doctoral dissertation). retrieved from usc digital library: http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/35109

mayer, r. e., stull, a., deleeuw, k., almeroth, k., bimber, b., chun, d., & zhang, h. (2009). clickers in college classrooms: fostering learning with questioning methods in large lecture classes. contemporary educational psychology, 34, 51-57. doi: 10.1016/j.cedpsych.2008.04.002

mazur, e. (1997). peer instruction: a user’s manual. upper saddle river, nj: prentice hall.

mccabe, m. (2006). live assessments by questioning in an interactive classroom. in d. a. banks (ed.), audience response systems in higher education: applications and cases. hershey, pa: information science publishing.

mckeachie, w. j., pintrich, p., lin, y., & smith, d. (1986). teaching and learning in the college classroom: a review of the research literature. ann arbor, mi: university of michigan, ncriptal.

medina, j. (2008). brain rules. seattle, wa: pear press.

miller, k., lasry, n., lukoff, b., schell, j., & mazur, e. (2014). conceptual question response times in peer instruction classrooms. physical review special topics - physics education research, 10(2), 020113. doi: 10.1103/physrevstper.10.020113

milton, o., pollio, h. r., & eison, j. a. (1986). cooperative learning for higher education faculty. phoenix, az: american council on education and the oryx press.

norris, m., & lecavalier, l. (2010). evaluating the use of exploratory factor analysis in developmental disability psychological research. journal of autism and developmental disorders, 40(1), 8-20. doi: 10.1007/s10803-009-0816-2

organisation for economic cooperation and development (oecd). (2009). creating effective teaching and learning environments: first results from talis. paris, france: oecd. retrieved from http://www.oecd.org/education/school/creatingeffectiveteachingandlearningenvironmentsfirstresultsfromtalis.htm#video

pascarella, e. t., & terenzini, p. t. (2005). how college affects students, volume 2: a third decade of research. san francisco, ca: jossey-bass.

pashler, h., rohrer, d., cepeda, n. j., & carpenter, s. k. (2007). enhancing learning and retarding forgetting: choices and consequences. psychonomic bulletin and review, 14(2), 187-193.

patry, m. (2009). clickers in large classes: from student perceptions towards an understanding of best practices.
international journal for the scholarship of teaching and learning, 3(2). retrieved from http://academics.georgiasouthern.edu/ijsotl/v3n2.html

penner, j. g. (1984). why many college teachers cannot lecture: how to avoid communication breakdown in the classroom. springfield, il: charles c. thomas.

poll everywhere. (n.d.). frequently asked questions. retrieved from http://www.polleverywhere.com/faq

powell, s., straub, c., rodriguez, j., & vanhorn, b. (2011). using clickers in large college psychology classes: academic achievement and perceptions. journal of the scholarship of teaching and learning, 11(4), 1-11.

ratey, j. (2002). a user’s guide to the brain: perception, attention, and the four theaters of the brain. new york, ny: vintage books.

renkl, a., atkinson, r. k., maier, u. h., & staley, r. (2002). from example study to problem solving: smooth transitions help learning. journal of experimental education, 70(4), 293-315.

roediger, h. l., & karpicke, j. d. (2006). test-enhanced learning: taking memory tests improves long-term retention. psychological science, 17(3), 249-255.

rogers, c. r. (1983). as a teacher, can i be myself? in freedom to learn for the ‘80s. columbus, oh: charles e. merrill publishing company.

schwartz, d. l., & bransford, j. d. (1998). a time for telling. cognition and instruction, 16(4), 475-522. doi: 10.1207/s1532690xci1604_4

seitz, a. r., kim, r., & shams, l. (2006). sound facilitates visual learning. current biology, 16(4), 1422-1427.

wiggins, g., & mctighe, j. (2005). understanding by design. alexandria, va: ascd.

wiggins, g. (2012). transfer as the point of education [web log post]. retrieved from http://grantwiggins.wordpress.com/2012/01/11/transfer-as-the-point-of-education/

willis, j. (2006). research-based strategies to ignite student learning. alexandria, va: ascd.

zadina, j. (2008). six weeks to a brain-compatible classroom. publisher: author.