Journal of Teaching and Learning with Technology, Vol. 2, No. 1, June 2013, pp. 1 - 14. 

Improving oral presentations: Inserting subtitles in videos for 
targeted feedback1 

 
Hanna Yang2 and Lauren F.V. Scharff3 

 
Abstract: Instructors are increasingly using videotaping in addition to written 
summarized feedback to develop oral presentation skills, but reviewing videotapes 
with students can be a time-consuming process.  Moreover, students may find that 
summarized feedback, which is displaced from the video itself, is vague and 
unhelpful.  This project investigated a new way for instructors to deliver targeted 
feedback within video recordings, and embedded the new approach within other 
best practices (e.g. rubrics, guided self-reflection). We compared two groups 
(N=31) across two presentations, with one group first receiving videotapes that 
included interjected feedback, much like subtitles, in their videos, while the other 
group first received raw videotapes and met face-to-face with their instructor to 
review their performance.  Although students perceived face-to-face feedback as significantly more useful, our results showed that interjected feedback was more helpful for developing students’ style skills, and there was no difference in improvement across presentations for content, organization, and response to audience.  Across both groups, students reported that the video feedback was highly beneficial because it provided them with a third-party perspective of their own performance.
Furthermore, interjected feedback provided instructors with a substantial time 
savings compared to the face-to-face meetings.  

Keywords: oral presentations, feedback, videotaping, best practices 
 
Providing meaningful feedback to students while balancing its timeliness against its quality is a familiar struggle for most educators.  This balance is particularly difficult to strike when helping students improve their oral communication skills because of the ephemeral nature of the presentation.  To address these
challenges, some educators have turned to technology, for example videotaping student 
presentations. One relatively common way that instructors use video feedback to promote 
student development is to schedule meetings with students to replay the videotapes and analyze 
the students’ performance together.  Unfortunately this can pose an unsustainable burden of time 
and coordination for both parties, especially the faculty member.  Further, technology alone does 
not provide a complete solution (Amirault & Visser, 2009); it should be embedded within a 
course design that aids and incentivizes the students to conduct meaningful self-analysis and 
promote the development of targeted skills. While the few published studies available regarding 
the use of videotaping oral presentations share positive views of the practice, none share data on 
the development of oral presentation skills, nor do they address how the use of videotaping fits 
within a course design that embeds other, known best practices. Thus, the purpose of this project 
1 Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of 
the U. S. Air Force, Department of Defense, or the U. S. Govt.  
2 Hanna Yang, Department of Law, U.S. Air Force Academy, 2354 Fairchild Drive, Suite 4K25, U.S. Air Force Academy, CO 
80840 hanna.yang@us.af.mil  
3 Lauren F.V. Scharff, Director, Scholarship of Teaching and Learning, U.S. Air Force Academy, 2354 Fairchild Drive, Suite 
4K25, U.S. Air Force Academy, CO 80840, lauren.scharff@usafa.edu  
was to find and assess a way to help instructors provide timely, meaningful, and sustainable 
feedback to students about their oral communication skills that was also likely to be used by 
students.         
 
Literature Review 
 
Feedback is a crucial aspect of the learning and development process because it helps target 
specific deficiencies and strengths, and provides formative guidance for development (for a nice 
overview of evidence, see Chang et al., 2012).  However, most instructors will readily admit that 
the process of grading and providing meaningful feedback is one of the least desirable aspects of 
their work. Further, although some students are increasingly demanding more feedback from 
their instructors (Chang et al., 2012), a large number of students also exhibit behaviors that 
indicate they do not value feedback (e.g. failing to collect feedback, or quickly glancing at their grades rather than taking time to read the feedback comments).  To further complicate the
“messages” received by faculty, some students indicate that they prefer quality feedback over timeliness, whereas others value timeliness over quality (Chang et al., 2012; Winter & Dye, 2004). It was within this mixed context that we approached our goal of oral
presentation skill development, using technology as a tool embedded within other best practices. 

Oral Presentations—Feedback Challenges and a New Approach. Oral presentations 
pose several challenges for instructors with respect to their ability to provide meaningful, 
formative feedback.  First, in contrast to written papers, oral presentations operate on a real-time 
basis, so without video capture, they leave no tangible artifact that students and instructors can 
review and assess. Second, students may perceive a lack of clarity, reliability, validity, and 
fairness in the criteria used for assessing oral presentation skills (e.g. Cooper, 2005; Price, 
Handley, Millar, & O’Donovan, 2010).  For example, oral presentation assessments often emphasize either content or command of the oral medium at the expense of the other, leading to an imbalanced assessment of oral presentation skills (Cooper, 2005).  The uneven focus is likely due to the fact that, without a videotape to allow multiple
viewings, it is difficult to pay detailed attention to both aspects (content and style) of the 
presentation. A third challenge is that the nature of oral presentations does not naturally lend 
itself to the type of accurate, targeted commenting that instructors often provide in specific parts 
or margins of papers (McKeachie & Svinicki, 2006), which provides students with subsequent 
opportunities for guided self-reflection.  Studies have shown that feedback needs to be specific to 
be effective (e.g. Gibbs & Simpson, 2004), but students often feel that instructor feedback is 
vague, difficult to follow, and not useful (Price et al., 2010).  With only summarized feedback 
provided separately from the oral presentation, it is easy to understand how the perception of 
vague and confusing feedback could be perpetuated in the context of oral presentation feedback. 
Finally, a fourth challenge is the issue of timeliness of the feedback. Studies have shown that if 
students do not receive timely feedback, they will be likely to disregard the feedback they 
eventually receive, based on the perception that such feedback is now irrelevant (e.g. Gibbs & 
Simpson, 2004; Winter & Dye, 2004). In our experience, providing feedback for an entire class takes several days or, in some cases, weeks.  These observations
align with those of Kovach (1996) who reported that efforts to capture oral presentations on 
video and provide instructor feedback require a formidable amount of time, administration, and 
cost.  

The above workload issues might suggest that the drawbacks of videotaping oral presentations outweigh the benefits. However, video capture has increasingly been used in many
disciplines to provide feedback for improving oral communication skills, for example in 
medicine (Savoldelli, Naik, Park, Joo, & Hamstra, 2006; Byrne, Sellen, Jones, Aitkenhead, 
Hussain, Gilder, Smith, & Ribes, 2002) and law (Kovach, 1996; Legal Research and Writing 
Listserv responses, 2011). However, as we considered our own incorporation of videotaping 
student oral presentations, we realized that even the above-published “successes” had 
shortcomings.  Simply providing students with videotapes does not provide students the targeted 
guidance and feedback they need to meaningfully reflect on their videos (Cooper, 2005). Further, 
although face-to-face feedback enables targeted commenting during the meeting between the 
instructor and the student, it does not provide a historical artifact of targeted comments for 
students to review on their own. What if we could give targeted feedback in a manner that also 
allows students to have a permanent record of their presentation, i.e. interjected video feedback?  

Our new approach, interjected video feedback, is textual instructor feedback that is 
manually inserted into a video at specific timeframes of a student’s performance, much like 
subtitles, thereby enabling a student to replay the video and see which specific moments in his or her presentation did or did not meet the assessment criteria, and in what manner.  This is akin to comments interjected in a
student’s written paper, which allows instructors to pinpoint specific writing issues at the precise 
points at which they occur, rather than in a global summary at the end of the student’s paper.  
Moreover, interjected video feedback can be replayed by students at their leisure, providing them 
with multiple opportunities to review and self-assess their oral presentation skills. 

We also acknowledge that technology, in and of itself, rarely provides a complete 
solution. Therefore, we incorporated as many best practices about feedback into this project as 
possible in order to place our use of interjected videotaped feedback in a context that both 
supported student learning and skill development, and maintained a manageable instructor 
workload.    

A Framework of Best Practices. The major challenges we hoped to address with our 
course design and new technique were those of clarity and reliability of assessment, of student 
use of feedback, and of time and workload. No one best practice addresses all of these 
challenges, so we incorporated multiple practices: the use of a developmentally-oriented rubric 
combined with summarized feedback, student assignments requiring review of their videotapes 
and response to guided self-reflection questions, and more than one oral presentation assignment 
so that skills could develop. In order to test the impact of the new, targeted, interjected feedback, 
we randomly assigned half the students to receive it for the first presentation, while the other half 
received it for the second presentation. 

Rubrics have been shown to be a helpful tool for providing timely, yet detailed feedback, 
as well as explicitly conveying the instructor’s expectations to students (Stevens & Levi, 2005; 
Andrade, 1997).  Our rubric was also “developmental” in tone, in order to emphasize the process 
of learning.  Whereas some rubrics evaluate students’ demonstration of assignment components, 
(e.g. “Style” or “Content”) using end-state terms, such as “Poor,” “Good,” or “Excellent,” our 
rubric evaluated students using terms denoting progression, namely by using the following 
terms:  “Not Acceptable,” “Beginning,” “Intermediate,” and “Advanced.” Further, along with the 
rubric performance-level indications, we included several sentences of summarized comments at 
the end of the rubric feedback form. Such summarized feedback provides more context, 
explanation, and in-depth insight about the student’s performance, and it can help students 
understand the connection between their performance and scores on a standardized rubric. 
Without the benefit of a rubric, summarized feedback may be perceived as unstructured, and 
therefore, unclear. 

Our self-guided student reflections also encouraged students to make links between the 
rubric dimensions, i.e. instructor expectations, and their performance. As noted above, many 
students do not deeply process feedback, and thus, they do not use that feedback to shape their 
future efforts. By building guided self-reflection assignments into the course, we “forced” 
students to review their performance (watch their own video), identify specific behaviors that 
linked to each rubric component, and generate steps to improve each component in subsequent 
presentations.  This guided reflection design follows from Nicol and Macfarlane-Dick’s (2006) 
conclusion that students can only learn from their self-reflection if their reflection is informed 
by, or measurable against, specific goals, criteria, or standards.  
 The third best practice we incorporated, multiple opportunities for development, reflects the long-standing understanding of the role of practice in skill acquisition (e.g. Newell & Rosenbloom, 1980), and further promotes student use of feedback. By requiring students to come up
with the self-reflected steps for improvement, we more explicitly framed the oral presentations as 
part of a developmental process, which cast the instructor’s feedback from the first
presentation as part of a feed-forward process.  Studies have shown that students will often 
dismiss feedback if they believe that the feedback only pertains to a discrete assessment (Gibbs 
& Simpson, 2004; Price et al., 2010).  Thus, this aspect of our design was incorporated to 
increase the value that students placed on the feedback, increasing the likelihood that they would 
use it to guide their development, not just because they were required to as part of the self-guided 
reflection assignment.  

Justification for Research. This project was designed to evaluate the impact of 
interjected video feedback on the development of students’ oral presentation skills and on 
student attitudes about the value of oral presentation feedback. We believed this new type of 
feedback could provide the specific, targeted guidance that would support student development 
equally well as face-to-face meetings during which the instructor and student review the video 
together, which has been the standard way for instructors to share targeted presentation feedback 
with students. Further, instructor load would be reduced somewhat; a pilot study indicated that it 
took about half as much time for the instructor to watch a video presentation and interject the 
comments as to meet face-to-face with a student and share the same points. 
 However, we acknowledge that there are qualitative differences between the interjected 
feedback, which is completely instructor determined, and the feedback that can occur during a 
face-to-face meeting, where students can direct some of the focus and also request elaboration or 
clarification.   This personal tailoring within the face-to-face feedback process might make it 
more likely that students and instructors reach a common understanding on the assessment goals. 
On the other hand, our pilot data also indicated that some students may feel uncomfortable 
meeting face-to-face with instructors about their performance, and prefer to watch themselves in 
the privacy of their own rooms. Therefore, this study was designed to compare the impact of 
interjected video feedback with face-to-face feedback, embedded within the best practices 
described above, on both student performance as well as student attitudes.    
 
Methods 
 
Participants  
 
Participants were 31 students from two sections of a core law course for sophomores at an 
institution in the Midwest.  Although students are placed into course sections randomly by the registrar’s office each semester, in this case the section receiving face-to-face feedback first had an average Academic Composite (Accomp) score of 3461.69, while the section receiving interjected feedback first averaged 3240.6 (the maximum possible score is 4400, and most of our admitted students score at least 2500).
 
Research Design 
 
This study incorporated a two-group design with counterbalancing across two oral presentation 
assignments.  One of the two sections was randomly selected to receive interjected video 
feedback following the first presentation, while the other section first received the raw video and engaged in a face-to-face meeting with the instructor to review the video (NInt=16, NF2F=15).
The opposite types of feedback were given to each section following the second presentation. 
Both groups for both presentations received summarized written feedback plus rubric scores (see 
details below), and completed the reflection assignment (see details below).   

Dependent variables included performance scores, reflection assignment responses, and 
subjective feedback collected with an end-of-course questionnaire (see details below). In order to 
control for possible experimenter bias, a blind grader (not the instructor, and someone who did 
not know which students had received interjected feedback or face-to-face meeting with the 
instructor after their first presentation) used a rubric to assess the videotaped performances of the 
two student groups (the instructor graded the presentations separately for input into the course 
grade).  
 
Materials 
  
Equipment and software. Currently, no software allows instructors to capture video and interject instructor feedback in real time, which would be ideal and would alleviate the stresses of time, administration, and cost.  Thus, we investigated several current
software applications that would allow instructors to insert comments post production (e.g. 
Camtasia, YouSeeU, Screen-cast-o-matic, Windows Live Moviemaker).  Additionally, we 
considered lecture capture systems that simultaneously capture a video and information written 
within a document shown on a screen, but then the comments are spatially displaced from the 
video.  Based on cost and ease of use, we chose Windows Live Moviemaker 2011. This software 
application is free and intuitive to use for the interjection of short tailored feedback in the form 
of subtitles at specific points within the videos.  Since we ran our study, a newer version of 
Moviemaker, Windows Moviemaker 2.6, was released; compared to the old version, it requires a few additional steps to interject comments.  A handheld
camera was used to videotape the oral presentations.   

Video scoring key. To streamline the interjected commenting process and to minimize 
students’ distraction level while they viewed their videos, the instructor created and used a video 
scoring key (see Table 1).  So, for example, instead of inserting lengthy phrases, paragraphs, or 
narrative, the instructor might type in “Tr-” to mark that a student transitioned
poorly from one subject to the next or “To+” to indicate that a student demonstrated a very 
appropriate tone while making his or her legal argument.  The video scoring key was based on 
the rubric that students were provided prior to their first and second oral advocacy exercises.   

 
Table 1. Video Scoring Key for Interjecting Comments in Students’ Presentation Videos. 
Key Skill being assessed 
K  Knowledge of subject matter 
S Support (law/facts) for your points 
Tr Transitions 
L Logic of sequence 
IP Information’s purpose 
W Word choice 
P Pace 
V Volume 
To Tone 
A Articulation (grammar, enunciation) 
I Inflection (of voice) 
EC Eye contact 
M Movements 
R Responsiveness to audience’s questions/answers
E Engagement level 

Note. The instructor used a “+” or “-” after interjecting a key letter to indicate whether the 
student’s skill was strong or needed improvement.   
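To illustrate the scheme, Table 1’s shorthand pairs a skill key with a “+” or “-” valence marker. The instructor typed these codes manually into the video captions; the lookup below is only a hypothetical sketch of how such a code could be decoded, not software used in the study.

```python
# Hypothetical sketch: decode an interjected shorthand code (e.g. "Tr-")
# into the skill from Table 1 plus its valence. Illustrative only.
SCORING_KEY = {
    "K": "Knowledge of subject matter",
    "S": "Support (law/facts) for your points",
    "Tr": "Transitions",
    "L": "Logic of sequence",
    "IP": "Information's purpose",
    "W": "Word choice",
    "P": "Pace",
    "V": "Volume",
    "To": "Tone",
    "A": "Articulation (grammar, enunciation)",
    "I": "Inflection (of voice)",
    "EC": "Eye contact",
    "M": "Movements",
    "R": "Responsiveness to audience's questions/answers",
    "E": "Engagement level",
}

def parse_code(code: str) -> tuple[str, str]:
    """Split a code such as 'Tr-' into (skill, valence)."""
    key, valence = code[:-1], code[-1]
    if valence not in "+-" or key not in SCORING_KEY:
        raise ValueError(f"unrecognized code: {code!r}")
    return SCORING_KEY[key], "strong" if valence == "+" else "needs improvement"
```

For example, `parse_code("To+")` recovers that the student demonstrated strong tone at that moment in the video.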
 

Rubric. A rubric was created to address the widespread student perception that oral 
presentations are graded too subjectively and to guide the blind grader’s scoring.  Each 
component of the rubric (Content, Organization, Style, and Responds to Audience) and each 
level of achievement (Not Acceptable, Beginning, Intermediate, and Advanced) was derived 
from our institution’s outcomes for oral communication skills.  The specific expectations for 
each level of achievement were tailored to both the oral advocacy focus of the course and the 
sophomore level of the students.  Each level of achievement had a small range of possible scores, 
with a maximum of 10 points per component. 

Summarized feedback. The summarized feedback included instructor’s comments as 
well as a compilation of comments from the in-class peer critiques. Written comments in the form of full
sentences were provided under headings that aligned with the rubric components:  Content, 
Organization, Style, and Responds to Audience.  

Guided self-reflection assignment. The guided self-reflection required students to view 
their videotaped performance (half of them having interjected comments) and list specific 
instances of both strong and weak performances under each component (Content, Organization, 
Style, and Responds to Audience).  They were required to explain why their performance would 
have merited a certain level of achievement (Not Acceptable, Beginning, Intermediate, or 
Advanced), using the language from the rubric.  Furthermore, students were required to describe 
specific steps they planned to take to improve in each component.  This assignment helped 
ensure that the students would closely review their videos, because anecdotal feedback from 
prior semesters indicated that many students avoided watching themselves because it made them 
uncomfortable.  By requiring students to incorporate the language from the rubric, we created a 
structured framework for students to self-reflect and increased the connection between the 
instructor’s expectations and the students’ understanding about the assessment’s goals. 

Student subjective feedback questionnaires. To ensure a more comprehensive 
understanding of the role of interjected feedback in developing students’ oral communication 
skills, we created an end-of-semester questionnaire that asked students for their perceptions about
the usefulness of viewing the videos, of the instructor’s written feedback (rubric scores and 
comments), of the interjected comments in the video, of the self-guided reflection, and of the 
rubric criteria.  Two additional questions asked about the clarity of the rubric criteria, and the 
number of times students reviewed their videos beyond what was required for the self-reflection.  
 
Procedure 
 
During the course of one semester, students in the course were required to deliver two oral 
arguments, each lasting 8 minutes.  During each presentation, students presented their evaluation 
and advocacy of a legal problem to fictional justices of the court (role-played by fellow 
classmates).  The handheld camera was placed on a tripod and positioned to capture the speaker 
at a podium (the speaker stayed at the podium for the entire presentation). Each observing 
student was given a copy of the peer review form, which they completed as the presentation 
occurred and then submitted to the instructor.  

Following the presentations, the instructor transferred the media files of the students’ 
presentations from the handheld camera to a PC, opened the media files on her
computer using Windows Moviemaker, and used the “Caption” function to insert comments 
using the shorthand letters from the video scoring key. It took the instructor about 10-15 minutes 
to interject comments into each student’s presentation. As with grading papers, interjecting comments into the weaker presentations took longer than into the stronger ones. The videos
and feedback were given to students within 4 to 8 workdays following the first presentation, and 
within 6 to 14 workdays following the second presentation. The feedback included the 
summarized written instructor comments and rubric evaluation. Upon receiving their videos and 
feedback, students then had up to a week to complete the guided reflection. 

One section of students received interjected feedback, while the other section of students 
received only a raw video of their performance and individually met with the instructor in face-
to-face meetings 1 to 4 workdays after receiving the videos.  Students were expected to bring 
their completed self-reflection to the face-to-face meeting. During these meetings, the instructor 
played and reviewed the videos with the students, stopping at specific points to discuss their 
performance.  Each of these meetings lasted about 20-30 minutes. The same procedure was 
followed for both presentations, except that the sections were reversed with respect to which 
section received interjected feedback and which received face-to-face feedback after the second 
oral presentation.  

 During the final lesson of the semester, students completed a paper version of the 
subjective feedback questionnaire in class.  No names or other identifying information were 
collected with the feedback, and it took approximately fifteen minutes for students to complete. 
 
Data analysis 
 
In order to test the impact of interjected feedback compared to face-to-face feedback, we 
compared the two groups with respect to their performance and subjective feedback.  For the 
performance comparisons we had a blind scorer use the rubric to assign a total score of up to 40 
points, based on his analysis of four components (Content, Organization, Style, and Responds to 
Audience, each scored up to 10 points).  For the subjective Likert-scale feedback, 1 point was 
assigned for “Not Useful,” “Disagree,” and “Not Likely,” 5 points were assigned for “Very 
Useful,” “Strongly Agree,” and “Very Likely,” and intermediate scores were given (2, 3, 4) for 
the progressively intermediate response options (e.g. minimally useful, somewhat useful, and 
useful, respectively). We categorized the open-ended responses based on common themes that 
appeared.   
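The Likert coding above can be sketched as a simple label-to-points mapping; the sketch below uses the “usefulness” scale’s labels (the exact wording of the intermediate options follows the description above, but the code itself is illustrative, not part of the study’s tooling).

```python
# Sketch of the Likert coding described above for the "usefulness" scale:
# 1 = Not Useful ... 5 = Very Useful, with 2-4 for intermediate options.
LIKERT_SCORES = {
    "Not Useful": 1,
    "Minimally Useful": 2,
    "Somewhat Useful": 3,
    "Useful": 4,
    "Very Useful": 5,
}

def mean_likert(responses):
    """Convert labeled responses to points and return the group average."""
    points = [LIKERT_SCORES[r] for r in responses]
    return sum(points) / len(points)
```

For instance, three responses of “Useful,” “Very Useful,” and “Somewhat Useful” average to 4.0 on the 5-point scale.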
 

Results 
 
Performance Data—Blindly Scored Video Presentations. For each rubric component as well 
as the total score, we performed a 2 (Group: interjected feedback first or face-to-face feedback 
first) x 2 (Presentation: first or second) mixed ANOVA, with group as the between-subjects variable and presentation as the within-subjects variable.  For all components and the total score, there were significant main effects of presentation (p < 0.01), and no main effects of group or interactions.

However, Accomp was higher for the group receiving face-to-face feedback first, t(24) = 
1.6, p = 0.06 (one-tailed), and it significantly correlated with the scores on the students’ second 
presentation, r(26) = 0.52, p < 0.01.  Therefore, we calculated difference scores based on the
students’ amount of improvement for each of the four component scores and the total score, and 
then for each we performed a single-factor, 2-level ANCOVA using Accomp as the covariate.  In 
all cases, the adjusted means led to increases in the difference score for the interjected feedback 
first group, i.e. they showed more improvement between presentations, and decreases in the 
difference score for the face-to-face feedback first group. For the component of style, the 
adjusted difference between the groups was nearly significant, F(1, 25) = 3.43, p = 0.08, with the
interjected feedback first group showing more improvement across the two presentations than the 
face-to-face feedback first group (mean improvement = 1.5 compared to 0.6, respectively).     

Student Questionnaires: Likert-Scale Responses. In most cases, the average Likert 
response scores indicated no difference between the two groups regarding the usefulness of the 
rubric criteria, the usefulness of viewing the videos on their own, the usefulness of the 
summarized feedback, or the usefulness of the self-reflection. In all these cases, there was 
generally good agreement that each aspect of the course feedback process was useful, with average scores ranging from 3.8 to 4.4 on the 5-point scale.

However, both groups indicated that they watched their videos more times after the first 
presentation (mean = 1.53) than the second presentation (mean = 1.28). A 2 (Group: interjected 
first or face-to-face first) x 2 (Presentation: first or second) mixed ANOVA on the number of times students watched their videos, beyond what was required to complete the reflection assignment and the meeting with the instructor, showed no group difference and no interaction, but a significant effect of presentation, F(1, 29) = 5.24, p = .03.

Further, there was a clear indication that, regardless of group, the students believed the 
face-to-face feedback (mean = 4.5) was more useful than the interjected feedback (mean = 3.8). 
Thus, we also performed a 2 (Group: interjected first or face-to-face first) x 2 (type of feedback: 
INT or F2F) mixed ANOVA for the reported usefulness of feedback.  Regardless of which type of feedback they received first, students rated face-to-face feedback as significantly more useful, F(1, 28) = 8.33, p < 0.01.  There was no main effect of group, nor a significant interaction.

Student Questionnaires: Open-Ended Responses. Students’ open-ended responses 
showed several clear trends that help us better understand the performance and Likert-scale data, 
and that hint at pros and cons for both the interjected and the face-to-face feedback. These 
comments did not show different trends based on group (whether students received interjected 
feedback first or face-to-face feedback first). 
 First, the vast majority of students noted the general value of having the videos to review.  For example, several remarked on how helpful it was to view themselves as if they were a member of the audience rather than the presenter: “Seeing yourself is
completely different sometimes than how actually you pictured yourself doing,” and “I was able 
to put a critique to an actual picture and see what everyone else saw.” Many students also made 
generic comments about how watching their videos helped them improve:  “I learn and improve 
better analyzing my own video on my own time,” “a lot of the times you don't notice the 
mistakes or habits you make so the video allowed me to break bad habits and improve,” and “I 
think [receiving a videotaped presentation] was the most useful feedback I have ever received on 
an oral presentation.”    
 As noted above, all students were required to watch their videos prior to answering the 
guided reflections.  Thus, for both presentations, all students watched their video in private first, 
and then half of them met with the instructor for face-to-face feedback.  As in our pilot study, 
many students in this study also found it uncomfortable to watch themselves, even as they 
noted how beneficial it was to have the video recordings.  Example comments 
include, “it's very difficult to watch yourself in the video when you're not presenting and it 
helped give insights that I otherwise would not have noticed,” “It was awkward to watch myself, 
but it did help accentuate idiosyncrasies during the presentation,” and “Allowed me to see 
firsthand what I was doing wrong.  But it was the most awkward thing ever.” 
 More explicitly related to the interjected feedback, many students appreciated the 
targeted nature of the interjected comments. Example responses include, “helps identify exactly 
where mistakes were made,” “showed specific instances to focus on,” “showed positive/negative 
things right as they were happening,” and “that was the most useful part. I saw that I did 
something well or poorly and I was immediately notified from the instructor's point of view.”  
Less positively, a small number of students indicated that the interjected comments were 
distracting, or that they struggled with the abbreviations used (Table 1). For example, the 
interjected comments were a “little confusing - had to go back and look up the symbol key a 
couple of times and it took away from watching the video.” 

With respect to the face-to-face feedback, students especially appreciated the depth of 
explanation when they met face-to-face with the instructor. One student stated, “I understood 
more when the feedback was face to face and more personal—I also learned more about the 
concepts,” and another student echoed this sentiment in the following comment:  “[Face to face] 
was the best feedback, even better than the written feedback because we were able to really 
dissect my argument and discuss the pros/cons and how to improve on other points that could 
have been made.” Others noted that the face-to-face feedback “Gave a chance to go deep into the 
reasoning behind deficiencies and find a way to fix them,” and “helped explain in detail what I 
could do better.”  




Discussion 
 
Our study was designed to investigate how interjecting comments into video recordings of 
student oral presentations would affect students' presentation skill development, relative to 
providing a video recording plus face-to-face feedback sessions with the instructor.  A motivation for 
this work was to create effective practices for students’ development while managing the load on 
the instructor. We carefully embedded the oral presentation feedback within several other best 
practices for student development (e.g. use of a rubric, guided reflection to link the feedback 
with the presentation objectives).  Overall, our data indicate significant positive effects of using 
video recordings, with respect to both the development of students’ presentation skills, and their 
self-reported attitudes.  Both groups improved between their first and second presentations. 
However, other than for the rubric component of Style, where the group receiving interjected 
feedback first showed a strong trend for greater improvement, there were no significant 
differences between groups.  
 The trend toward a difference in improvement for the Style component may be due to the 
fact that this component focuses on more overt behaviors (e.g. “enunciation, pace, volume, eye 
contact, body movements”) that can be targeted more precisely within the video recordings.  In 
contrast, the rubric components of Content and Organization tap into higher-level aspects of the 
presentations that aren’t easily targeted within a few frames.  Further, even when some aspect of 
organization or content was indicated using the interjected video comments, the nature of the 
comments, i.e. the use of short abbreviations such as “L” to indicate something about the logic of 
the sequence, meant that they were not deeply informative.  This example highlights the inherent 
tension present when balancing instructor load and quality feedback; although short 
abbreviations are a time-saving mechanism for instructors, they can lead to the commonly held 
student perception that instructor feedback is vague and difficult to apply (Price et al., 2010). 
 The students’ self-reported feedback offers further insight into the relative benefits of the 
interjected and face-to-face feedback.  Regardless of whether they received the interjected 
feedback first or second, students reported great value in having the videos to review, and they 
showed an appreciation of the targeted nature of the interjected comments.  Thus, even though 
providing students raw videotapes without anything more may not help them to reflect as 
effectively as possible (Cooper, 2005), the videotapes still serve as a tangible artifact that allows 
them to view themselves in the third person, and therefore helps them gain a new perspective on 
their performance.  Furthermore, students’ positive reception of the interjected comments aligns 
closely with Gibbs and Simpson’s (2004) assertion that feedback needs to be specific to be 
effective.
   Many students also explicitly noted the discomfort they felt when watching themselves, 
which suggests another benefit of the interjected comments: the feedback review process can be 
private rather than shared with the instructor. However, these same students also clearly 
indicated that they especially appreciated the face-to-face feedback because of the depth and 
personalized nature of that feedback.  In fact, for both groups, face-to-face feedback was rated as 
significantly more useful than the interjected feedback.  These preferences highlight, perhaps, an 
unstated assumption that the face-to-face meetings resulted in more “quality” feedback, whereas 
the interjected feedback was merely “timely” (Winter & Dye, 2004; Chang et al., 2012).  One 
reason why students may have felt that the face-to-face meetings resulted in more quality 
feedback is that they had the opportunity to direct the discussion and engage in a dialogue with 
the instructor, even if ultimately, they would have gained the same information through both 
interjected and summarized comments.   
 As we move forward in considering how best to use an instructor’s time and resources, 
we should examine the disconnect between students’ perceptions and performance.  After all, 
what we as instructors ultimately want is an improvement in student performance.  If the face-to-
face feedback really was so much more useful, why didn’t the group receiving face-to-face 
feedback on the first presentation show more improvement from the first to the second 
presentation than the group that first received interjected feedback, especially with respect to the 
areas of content and organization?  Is it really worth an instructor’s time to meet individually 
with each student and review the videotapes?  

One interpretation is that the Content and Organization components of performance are 
more cognitively challenging and require more practice to improve.  In contrast, the Style 
components may be more tangible and easier for students to develop in a shorter time period. 
Thus, even if the face-to-face feedback was more useful for students, the amount of improvement 
seen from one presentation to the next would not be significant. In future semesters, development 
of the Content and Organization components could be further enhanced by requiring more than 
two oral presentations in order to build in more opportunities for practice. Alternately, the 
addition of writing assignments that specifically link to the presentations would allow instructors 
to give more detailed, interjected written feedback on the content and organization in the papers 
without needing to meet face-to-face with the students. 

However, we don’t want to forget about the benefit of the interjected comments on the 
Style component development.  The style and real-time audience interaction aspects of oral 
presentations are what distinguish oral presentations from written papers, and are the skills we 
hope to develop in our students.  In the interest of not overloading instructors, perhaps the more 
overt nature of the style elements could be captured through a peer-review process.  The benefits 
of peer review (e.g. engagement, greater depth of processing for the reviewer and receiver of the 
review) are well documented for aspects of assignments to which students can bring some 
expertise (e.g. Lundstrum & Baker, 2009).  Throughout their lives, students have watched many 
others give presentations, and they should be able to identify stylistic aspects of presentations 
that were less effective, especially if given specific guidance on behaviors to note.  What most 
students are not practiced at is watching and analyzing their own performances, especially during 
more awkward moments where the human tendency is to look away. Thus, students could be 
assigned to review a small number of classmates’ video recordings and, using style guidelines, 
insert the interjected feedback. The students could then watch their own videos with interjected 
feedback in the privacy of their own room.  While instructors could still note stylistic aspects 
during face-to-face feedback, they would be able to focus the majority of their discussion on the 
higher-level aspects of content and organization.   In this way, instructors could maximize their 
time and efforts, as well as leverage peer critiquing to provide students a well-balanced 
assessment of oral presentation skills that does not unduly emphasize content over command of 
the oral medium or oral medium over content (Cooper, 2005). 
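One lightweight way such interjected comments could be implemented, whether by the instructor or by peer reviewers, is as a standard SubRip (.srt) subtitle track that most video players can overlay on the raw recording. The sketch below is only an illustration of this idea; the study does not specify the authors' actual tooling, and the abbreviation codes shown are hypothetical, not the study's Table 1 key.

```python
# Sketch: convert timestamped feedback notes into a SubRip (.srt) subtitle
# file that common video players (e.g. VLC) can overlay on a recording.
# The abbreviation codes below are illustrative, not the study's actual key.

def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as the SRT time code HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def feedback_to_srt(notes):
    """notes: list of (start_sec, duration_sec, comment) tuples.

    Returns the full .srt file contents: numbered blocks, each with a
    time range line and the comment text, separated by blank lines.
    """
    blocks = []
    for i, (start, dur, text) in enumerate(notes, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(start + dur)}\n{text}\n"
        )
    return "\n".join(blocks)

# Hypothetical feedback notes for one presentation:
notes = [
    (12.5, 4.0, "EC: good eye contact here"),
    (47.0, 5.0, "P: pace too fast -- slow down"),
    (95.25, 5.0, "L: logic jump -- link this point back to your thesis"),
]
print(feedback_to_srt(notes))
```

Saving this output next to the video (e.g. as `presentation1.srt`) lets the student review the targeted comments privately, at the exact moments they apply, without any video re-encoding.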

Important to note is that all students received their feedback as part of an intentional 
course design that incorporated best practices, such as multiple presentations to support a 
developmental focus  (Gibbs & Simpson, 2004; Price et al., 2010), the integrated use of the 
rubric  (Stevens & Levi, 2005; Andrade, 1997), and structured reflection activities that “forced” 
students to watch the video at least once and explicitly state steps they would take for 
improvement (Nicol & Macfarlane-Dick, 2006).  In other words, the use of video technology in 
and of itself is not a complete solution (Hooper & Rieber, 1995). An intentional course 
framework ensures more explicit overlap between the students’ and instructors’ understanding of 
the same goals (Nicol & Macfarlane-Dick, 2006). Without this framework of best practices, it’s 
likely that the positive impact of any feedback would be decreased.  In fact, the significant 
decrease in the number of video viewings following the second presentation compared to the first 
presentation suggests that students often only move beyond the required minimum when there is 
a follow-on assignment that could clearly benefit from use of the feedback (Gibbs & Simpson, 
2004; Price et al., 2010). All of our best practices helped ensure that our feedback process was 
not a one-way street from instructor to student, but rather, part of a process involving both 
traditional and non-traditional forms of feedback that required active engagement from the 
students as well as the instructor.   
 Also with respect to technology use, it’s important to acknowledge that the use of 
technology provides challenges (e.g. server space to store videos, purchase costs, time to learn to 
use applications) (Kovach, 1996), and that, despite rapid evolution, the technology resources are 
often not designed with instructors’ goals in mind.  In our study, the time it took the instructor to 
provide the interjected comments, post-production, using the abbreviations shown in Table 1 was 
about half the amount of time taken when meeting face-to-face.  Thus, we did achieve a 
substantial time savings. However, at 10-15 minutes per video, the total amount of time was still 
substantial. Thus, while we personally believe there is a benefit to recording student oral 
presentations and to interjecting comments to give feedback, especially for style elements, we 
cannot ignore some of the costs also associated with the approach. 

In sum, students crave feedback (Robert & Anthony, 2003), and our study indicates that 
video feedback can help support student development of oral presentation skills.  Our results also 
suggest that, depending upon the specific skills an instructor wants to develop, i.e. style versus 
content and organization, different types of feedback might be more effective. Further, our 
student feedback responses suggest that access to even just the raw video without comments or a 
face-to-face meeting could provide some benefit, especially with respect to general aspects of the 
presentation, because the videos provide students with the perspective of a member of the 
audience. Thus, an instructor might choose different feedback options for different oral 
presentations throughout the semester in order to balance developmental progress and load on 
the instructor.  Alternately, through the use of interjected comments by peers (for style elements) 
and face-to-face by instructors (for the higher-level content and organization elements), both 
types of components could be effectively developed without expecting an instructor to provide 
both types of feedback. Crucially, we should all remember that feedback needs to be implemented 
with best practices in mind, so that students have reason to, and take the time to, review and 
process the feedback.  Without student engagement in the feedback and development process, no 
development will occur.   
 

Acknowledgements 
 
This research was made possible by contributions from James “Jeremy” Marsh and John Hertel 
in the Department of Law, U.S. Air Force Academy. 
 
 
 
 




References 
 
Amirault, R. J., & Visser, Y. L. (2009). The university in periods of technological change: A 
historically grounded perspective. The Journal of Computing in Higher Education, 21(1). 
 
Andrade, H. (1997). Understanding rubrics.  Educational Leadership, 54, 14-17.    
http://www.jcu.edu/academic/planassess/pdf/Assessment%20Resources/Rubrics/Other%20Rubri
c%20Development%20Resources/rubric.pdf 
 
Bloom, B. (1956). Taxonomy of educational objectives: Handbooks 1 to 3: The cognitive, 
affective, and psychomotor domain. London: Longman.  
 
Byrne, A.J., Sellen, A.J., Jones, J.G., Aitkenhead, A.R., Hussain, S., Gilder, F., Smith, H.L., & 
Ribes, P. (2002). Effect of videotape feedback on anesthetists’ performance while managing 
simulated anesthetic crises:  A multicentre study, Anaesthesia, 57, 169-82.  
http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2044.2002.02361.x/pdf 
 
Chang, N., Watson, A.B., Bakerson, M.A., Williams, E. E., McGoron, F. X., & Spitzer, B. 
(2012).  Electronic feedback or handwritten feedback:  What do undergraduate students prefer 
and why?  Journal of Teaching and Learning with Technology, 1(1), 1-23.  
http://jotlt.indiana.edu/article/view/2043/1996 
 
Cooper, D. (2005). Assessing what we have taught: The challenges faced with the assessment of 
oral presentation skills, Proceedings HERDSA, University of Sydney, Australia.  
http://conference.herdsa.org.au/2005/pdf/refereed/paper_283.pdf 
 
Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ 
learning. Learning and Teaching in Higher Education, 1, 3-31. 
http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/issue1.pdf#page=5 
 
Hooper, S., & Rieber, L.P. (1995).  Teaching with technology.  Teaching:  Theory into practice, 
Needham Heights:  Allyn and Bacon.     
http://www.d11.org/LRS/PersonalizedLearning/Documents/Hooper+and+Reiber.pdf 
 
Kovach, K. (1996). Virtual reality testing:  The use of video for evaluation in legal education, 
Journal of Legal Education, 46(June), 233-251. 
 
Lundstrum, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer 
review to the reviewer’s own writing. Journal of Second Language Writing, 18, 30-43. 
 
McKeachie, W.J., & Svinicki, M. (2006). McKeachie’s teaching tips. Boston: Houghton Mifflin. 
 
Newell, A., & Rosenbloom, P. (1980). Mechanisms of skill acquisition and the law of practice. 
Computer Science Department Paper 2387. Retrieved 16 April 2013 from 
http://repository.cmu.edu/compsci/2387 
 




Nicol, D. J., & Macfarlane-Dick, D. (2006).  Formative assessment and self-regulated learning:  
A model and seven principles of good feedback practice.  Studies in Higher Education, 31(2), 
199-218. http://www.tandfonline.com/doi/pdf/10.1080/03075070600572090 
 
Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010).  Feedback:  All that effort, but what 
is the effect?  Assessment & Evaluation in Higher Education, 35(3), 277-289.   
 
Robert, P., & Anthony, H. (2003).  A study of the purposes and importance of assessment 
feedback.  University of Technology, Sydney.  
http://epress.lib.uts.edu.au/research/bitstream/handle/10453/6323/2003002119.pdf?sequence=1 
 
Savoldelli, G.L., Naik, V.N., Park, J., Joo, H.S., & Hamstra,  S.J. (2006). Value of debriefing 
during simulated crisis management, Anesthesiology, 105, 279-85. 
http://journals.lww.com/anesthesiology/Abstract/2006/08000/Value_of_Debriefing_during_Sim
ulated_Crisis.10.aspx 
 
Stevens, D. D., & Levi, A. J. (2005).  Introduction to rubrics:  An assessment tool to save 
grading time, convey effective feedback and promote student learning. Sterling, VA:  Stylus 
Publishing, LLC.  
 
Responses on Legal Research and Writing Listserv (LRWPROF-L@LISTSERV.IUPUI.EDU), 
December 2011    
 
Winter, C., & Dye, V.L. (2004).  An investigation into the reasons why students do not collect 
marked assignments and the accompanying feedback.  Learning and Teaching Projects 
2003/2004.  University of Wolverhampton. 
http://wlv.openrepository.com/wlv/bitstream/2436/3780/1/An%2520investigation%2520pgs%25
20133-141.pdf.  
 


