Journal of Teaching and Learning with Technology, Vol. 2, No. 2, December 2013, pp. 43 – 59. 

Using iAnnotate to enhance feedback on written work 
 

Kristi Upson-Saia1 and Suzanne Scott2 
 

Abstract: This paper discusses an iAnnotate feedback model used by the authors 
to comment on written work in first-year writing courses. We show that the use of 
iAnnotate, like other emergent technologies, mitigated a number of issues that 
regularly undermine high-quality feedback (such as the time it takes for 
instructors to write detailed comments and the challenge for students to read 
illegible handwriting or to keep track of hard copies of their papers).  Yet, we 
contend that our feedback model goes beyond these practical benefits and, more 
importantly, enhances student learning.  Specifically, we argue that it aligns 
instructor and student standards, elucidates for students the different types of 
comments instructors make (and clarifies that they ought to prioritize some 
comments over others), helps students and instructors identify recurrences and 
patterns of comments (thus also helping students and instructors diagnose general 
writing strengths and weaknesses), and conditions students to engage with 
feedback not only as a justification of their grade, but as a launching point from 
which they can develop as thinkers and writers.  The success of this feedback 
model is partly attributable to the features of iAnnotate and partly attributable to 
the classroom complements we designed as part of the feedback model. 
 
Keywords: feedback; assessment; e-assessment; technology; technopedagogy; e-
learning tools; iAnnotate; visual learning; writing instruction 

 
I. Introduction. 
 
If you ask instructors what the most dreaded or onerous part of teaching is, “grading papers” is 
the response that nearly always tops the list. Instructors complain that providing extensive 
feedback takes time, time that is in short supply for those who are teaching a full load, who have 
an active research agenda, and who are expected to perform service to the institution.  When they 
find out that their feedback has gone unread by students,3 many instructors become embittered 
and exchange careful, detailed remarks for simpler notes or just grades (Wojtas, 1998; Higgins, 
Hartley, & Skelton, 2001). 

In this paper, we propose a feedback model that attempts to alleviate some of the issues 
of grading commonly registered by instructors. Specifically, we aimed to create a feedback 
model that students understand to be a valuable component of their learning process and that 
instructors perceive to be worth the time and effort they expend. After a brief overview of 
pedagogical scholarship on feedback (including the recent introduction of emergent technologies 
to enhance feedback), we describe our use of iAnnotate in four writing courses at Occidental 
College from 2011-2012, we explain how our use of the application addresses persistent 

1 Associate Professor, Religious Studies and Director for Teaching Excellence, Occidental College, upsonsaia@oxy.edu 
2 Assistant Professor, Film and Media Studies, Department of English, Arizona State University, suzannelynscott@gmail.com 
3 Duncan (2007) argues that students tend to read instructors’ comments only if the grade they receive is misaligned with the 
grade they expect to have earned, while Wojtas (1998) found that students do not read the comments “if they disliked the grade.” 




complaints from instructors and students, as well as how our use of the application aligns with 
the best practices detailed in scholarship on feedback.   

 
II. Pedagogical scholarship on feedback. 
 
There is no shortage of scholarly literature on feedback. Some scholarship focuses on how 
instructors can most effectively structure their feedback, while other scholarship focuses on how 
to motivate students to engage feedback in a meaningful way. With regard to the former, 
consensus has emerged around the characteristics of high-quality feedback:  

1) It is seamlessly aligned with the articulated goals and standards of the assignment 
(Nicol & Macfarlane-Dick, 2006; Duncan, 2007; Hounsell et al., 2008; Sadler, 2010). 

2) It focuses on the most important learning objectives, leaving aside lower order concerns 
(Black & Wiliam, 1998; McNeill, Gosper, & Xu, 2012). 

3) It is returned in a timely manner while the material is still fresh in students’ minds (Cowan, 2003; Hepplestone et al., 2011).

4) There is a required mechanism through which students reflect on and respond to the 
feedback, increasing the likelihood that students will incorporate suggestions in future 
assignments (Carless, 2006; Hepplestone et al., 2011; Carless et al., 2011).   

While there is broad agreement on the features of high-quality feedback, scholars 
acknowledge that this sort of feedback is exceedingly time- and labor-intensive for instructors. 
Moreover, scholars contend that there are barriers to students’ understanding or apprehension of 
even high-quality feedback. First, instructors and students hold different perceptions about the 
purpose and function of feedback. While students understand comments to be merely a 
justification of the grade they earned, instructors also understand their feedback to be another 
opportunity in which to (re)teach the material or to offer advice on how students can develop 
their skills as logicians or writers. The misalignment of feedback’s function—dubbed “feedback” 
versus “feedforward”—leads to students’ misuse or lack of use of high-quality feedback 
(Bjorkman, 1972; Mutch, 2003; Rust, O’Donovan, & Price, 2005; Nesbit & Burton, 2006; 
Weaver, 2006; Lizzio & Wilson, 2008; Poulos & Mahony, 2008; Irons, 2008; Burke, 2009; 
Draper, 2009; Walker, 2009; Sadler, 2010; Price et al., 2010). 

Second, it is a challenge for students to interpret instructors’ comments because we offer 
different types of comments. For instance, we write critiques of students’ ideas or writing skills 
alongside conversational responses to their ideas alongside suggestions for further reading or 
research (Mutch, 2003). We expect students to engage differently with different types of 
comments, yet we rarely make these expectations explicit nor do we train students in how to 
properly engage different types of comments. Students, thus, tend to treat all comments the 
same: as criticisms of their work that justify the grade they were awarded. Additionally, faculty 
include a range of comments that they would hierarchize: lower-order comments (e.g., 
grammatical problems, flawed prose, etc.) versus higher-order comments (e.g., problems with 
argumentation, reasoning, and marshaling evidence). Yet, again, we rarely explain to students 
how to rank the importance of different comments and thus, to our disappointment, students tend 
to focus their revision work on less important—but more easily fixable—issues, ignoring the 
bigger problems (Mutch, 2003; Weaver, 2006).  

Third, instructors struggle to find the balance between providing highly individualized 
comments, careful instructions for revision, and advice for future development (i.e., enough 
feedback so that students have a clear understanding of what is going wrong), while also 




avoiding so much feedback that students are left overwhelmed and paralyzed, not knowing 
where to start addressing this whirlwind of comments (Monroe, 2002; Higgins, Hartley, & 
Skelton, 2002; Nicol & Macfarlane-Dick, 2006; Miller, Linn, & Gronlund, 2012).  

Within the past several years, a new set of scholarship on feedback using emergent 
technologies has taken steps toward addressing some of the obstacles to high-quality 
feedback.4 Some studies have argued that new technologies offer a more efficient workflow that 
reduces the amount of time and effort expended by instructors. As Heinrich et al. (2009) put it: 

...e-tools can make a real impact on efficiency: providing documents, easily 
accessible to all involved, anytime and anyplace; accepting assignment 
submissions, managing deadlines, recording submission details, dealing with 
safe and secure storage; returning commented-on assignments and marks to 
students; storing and if necessary exporting class lists of marks.  Using e-tools 
for these tasks frees up time that can be used for focusing on quality feedback. 

 
Heinrich et al. (2009) also note that instructors have found Learning Management Systems (LMS) to be an efficient way to manage the submission of student work, since an LMS automatically records late work and ensures that it remains secure.

Other studies propose that instructors compile a bank of commonly-used comments that 
they can simply cut, paste, and tailor to each individual paper, saving much of the time it would 
ordinarily take to write the same comments again and again. This time-saving measure enabled 
them to provide more feedback to students in large courses and to spend their time tailoring their 
stock remarks to individual students’ work (Brown, Bull, & Race, 1999; Heinrich, 2007; Irons, 
2008; Heinrich et al., 2009). 

Instructors are also able to return work in a timelier manner (no longer needing to wait until class to hand-deliver hard copies of their feedback); this timeliness increases the probability that students will read and value our comments (Denton, 2001; Cowan, 2003; Hepplestone et al., 2011). Electronic feedback is also more legible, so students no longer need to ask us during office hours to decipher illegible handwriting; e-comments decrease the chance that students will simply ignore remarks they cannot read on their own (Denton, 2001; Denton et al., 2008).

Heinrich et al. (2009) report that using technologies, such as the Track Changes feature in 
Microsoft Word, makes it possible to embed links to additional readings or resources into the 
comments, directing students’ further engagement with the material. These sorts of comments 
shift the culture of feedback from a means to simply justify the grade to a dialogic engagement 
between student and instructor that is presumed to continue beyond the individual paper or 
assignment (e.g., Irons, 2008; Carless, 2006; Price et al., 2010; Carless et al., 2011). 

Finally, electronically submitted and commented-upon work facilitates better assessment 
of student progress over time. When working with hard copies, an instructor would have to make 
copies of hand-written comments and create an easily navigable file system for those hard 
copies. Electronic papers with embedded electronic comments can be stored and catalogued 
more easily so that instructors can track the progress of students’ work over the course of the 
semester (Heinrich et al., 2009).5 
4 The new technologies discussed in this literature include general software programs (e.g., Microsoft Word Track Changes, Google Docs), Learning Management Systems (e.g., Moodle, Blackboard), and specialized applications or assessment tools (Markin, Turnitin, GradeMark, Re: Mark, MarkTool, Adobe, and iAnnotate).
5 Moreover, instructors interested in assessing their own assessment practices have at their disposal an easily navigable set of 
papers with their comments (Heinrich et al., 2009). 




This paper adds to the scholarly conversation about the pedagogical benefits of emergent 
technologies. We show that the use of iAnnotate, like other emergent technologies, enhances the 
efficiency of instructor’s workflow, reduces the time it takes to return papers, and provides 
students with more legible feedback. Yet, we contend that our feedback model goes beyond these 
practical benefits and, more importantly, enhances student learning.  Specifically, we argue that 
our feedback model aligns instructor and student standards, elucidates for students the types of 
comments we make (and helps them prioritize some comments over others), helps students and 
instructors identify recurrences and  patterns of comments (thus also helping students and 
instructors diagnose general writing strengths and weaknesses), and conditions students to 
engage with feedback not only as a justification of their grade, but as a launching point from 
which they can develop as thinkers and writers.  In what follows, we show that the success of 
this feedback model is partly attributable to the features of iAnnotate and partly attributable to 
the classroom complements we designed around the feedback model. 

 
III. Approach. 
 
Beginning in Fall 2011, Occidental College’s Center for Digital Learning + Research and the 
Center for Teaching Excellence co-sponsored several cohorts of Faculty Learning Communities 
that explored the pedagogical uses of the iPad. At that time, the authors of this paper began to 
use iAnnotate to grade student writing in four first-year writing-intensive seminars.6 Although 
iAnnotate has received much praise on blog posts, such as ProfHacker on the Chronicle of 
Higher Education website, for being an easy, portable, paperless way to annotate and share 
documents, not enough attention, we contend, has been paid to the pedagogical benefits of the 
application.7 After a brief description of the features of iAnnotate and how we used the tool in 
our writing courses, we will discuss in detail how our feedback model enhanced student learning. 

iAnnotate PDF is a productivity application from Branchfire that is available on the iPad 
and Android tablets.8 iAnnotate includes a palette of tools to annotate a document, including the 
ability to highlight, underline or strikethrough, type or write notes (in the margins or in 
collapsible balloons), and to bookmark. For commonly used annotation, iAnnotate enables users 
to create stamps (text or symbols) that can be imprinted on the document with a single click from 
the tool bar.9   

We found the stamps feature to be exceedingly useful when commenting on student 
writing. Since we tend to evaluate papers on the same criteria—the criteria laid out in our 
grading rubrics—we tend to write the same sorts of comments on every paper we grade. The 
stamps feature of iAnnotate, therefore, enabled us to save an inordinate amount of time writing 
marginal comments. We simply created stamps for our commonly used comments, such as: “you 
need to make your reasoning more explicit,” “good, careful reasoning,” “nice use of evidence,” 
“you need to interpret/analyze your evidence,” “clarify the point of this paragraph,” “nice 

6  While our focus has been exclusively on using iAnnotate to write comments on student papers, the application has been widely 
adopted in academic iPad pilot programs at Stanford University, Massachusetts Institute of Technology, and the University of 
Michigan, among others. The application’s uses in these academic contexts range from annotating course readings, to taking 
notes on class PowerPoint presentations, to sharing documents and working collaboratively. 
7  See, for example, Jones (2010) and Sample (2011). 
8  At the time of publication, iAnnotate retailed for $9.99. 
9  iAnnotate has many default stamps, comments such as “excellent” and “good job,” etc., as well as symbols such as check 
marks, smiley faces, and exclamation points.  We found these comments to be far too vague to be useful and thus we quickly 
created our own stamps. 




guidepost,” “awkward prose,” “citation needed,” etc.10 After the initial time and labor it took to set up the stamp system, the process of inserting a stock comment with a single click made the feedback process much faster.

In this way, the stamps mimic the “comment bank” suggested by Irons (2008) and 
Heinrich et al. (2009). Yet, because the color of the stamps are adjustable, we sorted our 
individual comments into the categories we used on our rubric—argument, structure, use of 
evidence, writing style/prose, and mechanics—and then assigned a different color to each 
category of comments. For instance, comments related to argumentation were colored blue, 
comments related to evidence were colored green, comments related to organization and 
structure were colored orange, and so on (See Figure 1). In this way, students could easily see 
how our comments mapped onto the rubric and onto broader areas of thinking and writing.  

 

 
Figure 1. Example of student paper with instructor comments. 

 
Further, we used other features of iAnnotate to demarcate different types of comments. As noted above, we used our custom stamps to mark strengths and weaknesses in terms of

10  For a catalog of our custom stamps, see Appendix A. 




argument and writing. Still more, we used the checkmark stamp (✓) to acknowledge a good
point and collapsible balloons (that included lengthier comments) to converse with students’ 
ideas and arguments (See Figure 2).  

 

 
Figure 2. Example of student paper with additional iAnnotate markup. 

 
In addition to these types of annotation in the margins of the paper, we included comments at the end of the paper. Here we interpreted the comments above. We pointed students
back to the check marks that indicated where they succeeded and we elaborated on why these 
were particularly successful moments. We directed them to remarks made in collapsible balloons 
and tied together our engagement with their ideas in one synthetic remark. Finally, we identified 
the strengths and weaknesses of their argument and writing by drawing their attention to visual 
patterns of comments: recurring stamps or recurring colors. For instance, if their paper was 
littered with positive blue comments and negative orange comments, we could praise them for 
their exceptional argumentation and encourage them to work on their organization. In this way, 




our summative remarks at the end of the paper provided students a key to deciphering the 
marginalia above. 
 
IV. Teaching Method. 
 
Although our summative remarks gave students a road map to decipher our comments on their 
papers, we found that we also needed to spend time in class talking about our feedback method. 
That is, this feedback model needed to be carefully integrated into a writing course in such a way 
that students: 1) understood the goals of the method and how those goals aligned to the standards 
and objectives of the assignment/course; 2) understood how to interpret and use our feedback; 
and 3) were required to reflect on and incorporate feedback into subsequent writing 
assignments. In this section, we will discuss these classroom complements to our written 
feedback. 
 
A. Aligning Student and Instructor Expectations. 
 
As scholars have long demonstrated, in order for students to benefit from feedback, they must 
understand the standards on which they are being evaluated, the feedback must make clear how 
their performance measures up to those standards, and the feedback must offer suggestions on 
how they can move steadily closer to achieving those standards on subsequent work (Sadler, 
1989; Nicol & MacFarlane-Dick, 2006). In order to align our students’ expectations with our 
own, in class we introduced the color-coded rubric (See Figure 3) on which their work would be 
assessed and showed them a sample paper marked up using iAnnotate. We explained how sets of 
comments lined up with categories of the rubric and we urged students to pay attention to the 
colors of our comments in order to discern broader writing strengths and weaknesses. We also 
used this in-class orientation to delineate between higher and lower order comments, again 
corresponding to the color-coded rubric (e.g., explaining that blue comments on argumentation 
are more significant than red comments on mechanics). We provided this orientation on both the 
first day of class and again when we distributed and discussed the first assignment. Moreover, on 
our course sites, we posted the rubric and a description of and key for our iAnnotate feedback 
model so students could reference these materials on their own time as well.  

Once students had completed their first draft, we devoted several class periods to writing 
workshops, pertaining to aspects of the rubric (e.g., one day each on use of evidence, structure 
and organization, thesis, prose, introductions, and conclusions). In each workshop, we circulated 
a handout that used language on our rubric and that would show up later in our customized 
stamps. This synchronization between the writing instruction, rubric, and stamps created a 
consistent message to students about our standards, promoted transparency in how student work 
would be assessed, and conditioned students on how they should be evaluating their own work 
during the pre-writing and drafting phases and in peer review sessions. 
 
B. Navigating, Interpreting, and Using Feedback. 
 
As mentioned above, scholars have found a persistent misalignment between instructors’ and 
students’ perceptions of the purpose of feedback. While students tend to read feedback only as a 
justification of their grade, instructors hope students learn—about course material or about their 
skills as thinkers and writers—from their comments. Moreover, although instructors offer a  




 
Figure 3. Color-coded Rubric. 
 
variety of types of feedback (e.g., criticisms of specific ideas, conversational engagement with 
ideas, comments on broader writing skills, etc.) and although instructors expect students to 
engage differently with each type of comment, students have a hard time distinguishing these 
different kinds and levels of feedback. In short, students need to be trained to understand how we 
expect them to read and use our feedback. 

After returning their first paper, we devoted class time to reminding students how to navigate
our feedback: we explained the difference between stamps, checkmarks, and discussions in the 
collapsible balloons.  To this we added a discussion of what we viewed to be the purpose and 
function of feedback and of how they ought to engage with each type of comment. We clarified 
that feedback was useful to justify the grade they received, but to do more than just that. We 
explained that they should use some feedback—our stamps that, in aggregate, pointed out broad 
areas of writing strengths and weaknesses—to inform their reflection on their broader writing 
and revision process and to modify their current process of drafting and revising.  Further, we 
explained that they should use other types of feedback—our comments in collapsible balloons 
that engaged their ideas—as an attempt to point out areas in which students’ ideas need to be 
corrected or developed. This might mean that students need to review course material that they 
did not understand sufficiently or that we are encouraging them to continue to pursue an 
interesting line of inquiry in a subsequent paper. Our aim was to reorient students to the range of 
ways they might use our feedback, thus maximizing the value of our comments for them as well 




as for us. While any instructor could teach students to use even traditional feedback in this way, 
we found that the palette of annotation tools in iAnnotate—namely, the ability to color-code the 
stamps and to vary the look of our comments—made it easier for us to visually represent these 
different kinds and types of comments and to coach students on how to engage differently with 
each.  
 
C. Reflection. 
 
Although instructors are regularly disappointed that students do not make good use of their 
careful feedback, recently several scholars have observed that students are seldom, if ever, 
required to engage with instructor feedback. These scholars urge instructors to require some sort 
of assignment in which students read and reflect on feedback given to them (Weaver, 2006; 
Hepplestone et al., 2011; Carless, 2011). Following this advice, we required students to reflect 
on and respond to our feedback in two ways.  At the end of the semester (directly before they 
began work on the final paper), students had to compose a short written reflection on all of their 
papers to date in the course. They were to compose a self-assessment that identified broad areas 
in which they did well and poorly, noted areas in which they had improved over the course of the 
semester, and devised strategies for continuing to work on areas in which they were persistently 
weak. Then we met with students in one-on-one conferences to discuss their ideas for their final 
paper, as well as to talk about their research, writing, and revision process as it related to the 
strengths and weaknesses identified in their reflection. 
 
V. Findings. 
 
Because this was an informal and limited pilot program, our findings are grounded in (1) the instructors’ assessment of students’ development over the course of the semester; (2) students’
written self-assessments;11 (3) anecdotal student feedback in one-on-one conferences; (4) student 
feedback collected in course evaluations;12 and (5) for one of the four courses, pre-semester and 
post-semester surveys.13  We have divided our findings into three subsections. First, we 
enumerate how our feedback model enhanced student learning. Second, we discuss some of the 
more practical benefits of this feedback model for students and instructors. And finally, we 
present issues that arose and offer suggestions on how they might be resolved in future iterations 
of this feedback practice. 
 
A. Enhancing Student Learning. 
 
This feedback model had several immediate benefits to student learning. For clarity, we have 
broken down these learning benefits in a way that most clearly delineates them; we recognize, 
though, that these are artificial categories.  In practice, we saw many intersections between these 
11  With sixteen students enrolled in each class, we had a total of sixty-four course self-assessments.
12  With sixteen students enrolled in each class, we had a total of sixty-four course evaluations. 
13 After teaching three courses using our feedback model, we gave two surveys to gather both quantitative and qualitative data about students’ attitudes toward feedback in general and toward our feedback model specifically. The pre-semester survey was designed to assess students’ exposure to and preferences for hand-written or electronic comments, to assess how students use feedback (if at all) in subsequent writing assignments, and to help us design the in-class framing of our feedback model. The post-semester survey was designed to collect student responses to our feedback model to help us identify issues and refine the model in subsequent courses. Both surveys were optional, with thirteen of sixteen enrolled students completing both, and were composed of a mixture of multiple choice, ranked/scaled options, and open-ended elaborations or justifications of their responses.




categories. First, we found that, in comparison with prior students in our first-year writing 
courses, these students had a better understanding of our standards and expectations. Because the 
stamp system was aligned with our grading rubric and with the writing workshops—in terms of 
verbatim language and color-coded categories—our standards were repeatedly reinforced and linked to visual cues.14  Students’ self-assessments demonstrated that they had absorbed our standards, as a vast majority of them used our own categories and language to discuss their primary strengths and weaknesses and to make a concrete plan for improvement. For instance, one
student remarked that the many green stamps that read “you need to interpret/analyze your 
evidence,” “this evidence isn’t relevant to the point you’re making,” and “insufficient evidence: 
add more/greater range to substantiate your point” visually clarified that the student had trouble 
marshaling evidence. The student wrote in her self-assessment, “I need to work on interpreting 
evidence to create a better dialogue between sources and my own ideas. When writing I will 
often pull in quotes I find last minute without really thinking about how well they substantiate 
the point I’m trying to make. In preparation for my term paper, I plan on writing out a detailed 
outline and mapping out each specific point/quote from sources that I want to use to make sure 
they’re relevant and explicitly linked to my argument.” Another student noted that it was 
abundantly clear that he was not guiding his readers through the stages of his argument since 
“every paragraph in every paper has a ‘you need a better transition here’ stamp next to it!” 
Although we did not instruct students to use our categories during peer-reviewing sessions, we 
regularly overheard them offering feedback to their peers that mimicked the categories and 
language of our rubrics.15 One student even began bringing her own set of colored pens to peer-
review sessions to replicate the colored taxonomy of the rubric when writing comments on her 
classmates’ papers. 

Second, we found that our feedback model taught students (especially first-year students 
who were unfamiliar with college-level writing and feedback) how to read and rank their 
instructors’ comments.  Students reported that they were able to understand that we offered 
different types of comments, each with distinct purposes, because they were visually distinct in 
the margins of their papers.16  Moreover, students understood the relative importance of our 
comments. Because dissecting voluminous and uniform marginal comments is challenging for students, color-coding visually distinguishes higher-order concerns from lower-order concerns.
When students made reference to writing style or mechanical issues in their self-assessments, 
their language clearly conveyed an understanding that these were lower-order concerns. For 
example, one student wrote in her self-assessment: “As for silly spelling and grammar mistakes, 
this has always been a weakness because I do not put enough emphasis on the editing process. I 
think it will help me if I print out my essay, read it aloud a few times, and really go through it 
with a fine comb to avoid these silly mistakes." Conversely, students clearly understood 
argument and structure to be most salient; one student wrote on her self-assessment: “Before this 
14 Somewhat unexpectedly, the work of keeping the rubrics, writing workshops, and stamps cleanly aligned forced us to 
be more focused and consistent. 
15  It was also apparent that, in peer-reviewing sessions, students offered more pointed feedback.  In past classes we both 
struggled to get students to be more hard-hitting and direct with their peers. We had chalked this up to their hesitance to criticize 
their classmates, but we have come to realize that some of their hesitation stemmed from the fact that they simply did not 
understand sufficiently the standards of assessment and thus were unable to marshal those standards in their evaluation of their 
peers’ work. 
16 Although our papers had the same number of marginal notes as prior papers, the systematization of the notes—and our 
explanation of the system—made it easier for students to navigate or, put differently, made it so that students were not 
overwhelmed (a common problem that plagues overly commented-upon papers; Monroe, 2002; Higgins, Hartley, & Skelton, 
2002; Nicol & Macfarlane-Dick, 2006; Miller, Linn, & Gronlund, 2012). 




class I spent a lot of time editing the spelling and grammar. I now know I need to spend that time 
on more important things like my argument.” This new ability to navigate comments has 
extended beyond our initial pilot program, with students reporting to us that, even after moving 
into courses that employ more traditional feedback, they are more easily able to parse and 
prioritize comments. 

Third, students and instructors alike were better able to identify patterns of writing 
strengths and weaknesses. In the past, when writing hand-written comments on papers, we did 
not flag every instance of a particular writing flaw. For instance, if a paper had weak transitions 
throughout, we would simply note the first instance and alert the student that this was a problem 
throughout (with a note that read something like “here and throughout” or “this is a pervasive 
problem”). With iAnnotate, however, the ease of the stamp feature allowed us to mark every 
instance. The repetition of stamps within a student’s paper—and still more the repetition across 
multiple assignments—alerted students to look beyond any given instance or beyond any given 
assignment to see more clearly the larger issues with their writing. One student, for example, 
remarked that he had never really paid attention to his transition sentences, or fully understood 
the impact they had on how the reader understood his (otherwise compelling and thoughtful) 
argument until he saw a barrage of orange “weak transition” stamps appearing all over his work. 
Here, the student not only identified a primary weakness, but also gained a greater understanding 
of how one structural element impacted the strength of his paper overall. As instructors, we 
noted that students who routinely received the same stamped comments on their first few 
assignments seemed to resolve these issues more quickly than students in the past. Taking the 
aforementioned student as an example, by the time he submitted his final paper outline, he was 
including rough transition sentences that he planned to refine in subsequent drafts.  

The ability to see patterns of writing strengths and weaknesses was helpful not only for the 
students, but for the instructors as well. While consulting with students on an upcoming paper, 
we could glance quickly at the color-coded comments to be reminded of the areas in which 
students excelled and those in which they needed work. The ability to track students' strengths, 
weaknesses, and progress so quickly saved us an inordinate amount of time17 and made 
our conferences with students much more specific and productive. 

Fourth, our feedback model allowed us to visualize the relationship between categories 
on our rubric, and thus between elements of writing. For example, a paragraph that was marked 
up with multiple comments in blue and green clearly expressed to students the connection 
between their presentation and analysis of evidence and the strength of their argument. By 
visually representing these two writing elements in tandem, students perceived how they were 
integrated and interdependent.18 For example, in her self-assessment one student connected one 
of her strengths as a writer (identifying strong and appropriate evidence) with one of her 
weaknesses (analyzing and leveraging that evidence to substantiate her argument): “Although I 
am able to choose evidence properly, I am weak at times at fully analyzing the evidence at the 
highest level of detail. At times I will make broad claims and fail to fully unpack these claims by 
analyzing more carefully my evidence, which is necessary to make a more thorough and 
persuasive argument in my paper.”19 
17 Although most of the discussion about iAnnotate's efficiency benefits has centered on the time saved during 
grading, we found the time saved reviewing prior papers to be far weightier. 
18  We found it easiest to discuss these sorts of interconnections with students one-on-one.  Some students, especially those with 
less preparation in writing, were focused on working on one or two writing issues and simply not ready to think about these more 
sophisticated interrelationships between elements of writing. 
19  Again, this is evidence of a student adopting the language used in the rubric and stamps: “unpack this claim.”   




Fifth, students began to understand and value feedback as more than mere justification 
of the grade. Because we placed an emphasis (early and often) on how to most effectively read 
and rank comments with an eye towards refining their arguments and writing, our feedback 
model functioned to reshape students’ attitude toward feedback. Several students who admitted 
to rarely revisiting, much less revising, their written work in prior courses reported that our 
feedback model helped them view comments not as punitive remarks to be consumed once and 
then forgotten, but as a multi-layered conversation about their ideas and about their development 
as critical thinkers and writers. Other students, some of whose prior instructors had used Microsoft 
Word's Track Changes feature to comment on their work, remarked that they began to see 
comments as more than edits to be “resolved” without further reflection on broader writing 
issues that transcended the particular assignment.20  
 
B. Practical Benefits. 
 
The students responded positively to our feedback system not only because they learned about 
themselves as writers and were able to more quickly progress as writers, but also for more 
pragmatic reasons. In particular, they valued the improved legibility and increased accessibility of their feedback. 
Many students admitted that, in the past, they simply did not read comments when the 
handwriting was illegible and that they regularly misplaced hard copies of graded papers. 
Because iAnnotate obviated issues of legibility and made “losing” a paper an impossibility (even 
if the email containing the annotated PDF was deleted, another copy of their paper with full 
comments was just an email away), students had no legitimate excuse not to read their 
instructors’ comments. In fact, even those students who claimed that our feedback model had not 
fundamentally changed the way they engaged with different types or kinds of comments noted 
that having all of their papers digitally accessible made them more likely to revisit their written 
work.   

Further, iAnnotate streamlines instructors' grading workflow to maximize efficiency; the 
practical benefits are five-fold. First is the portability and extended battery life of the tablet (the 
device on which most instructors use iAnnotate). Second is the easy, paperless submission and 
return of student work, using iAnnotate's built-in ability to sync with Dropbox or its built-in email 
function. Third, toolbars can be customized to include the instructor's most frequently used tools 
and stamps and easily adapted to any course and/or paper topic. Fourth, integrating other free 
apps further facilitates the process (e.g., we used Dragon Dictation to dictate and transcribe the 
summative comments at the end of the paper; some devices, like the iPad 3, now offer direct 
dictation into iAnnotate). Finally, as noted above, quick accessibility to color-coded stamps 
makes it faster and easier to track students’ writing problems and progress. 
 
C. Issues and Troubleshooting. 
 
Despite our overall satisfaction with our feedback model, we encountered three significant 
issues. First, some students had trouble remembering which colors corresponded to which 
category of the rubric when they did not have the rubric directly in front of them. When surveyed 
at the end of the semester, students suggested that we include a stamp at the top of every paper 

20 On Microsoft Word creating the impression of “teacher as editor,” see Michael J. Faris’ blog post, “Using iAnnotate to grade”: 
http://blogs.tlt.psu.edu/projects/ipad/2010/10/using-iannotate-to-grade.html   




that could function as a key to the taxonomy. iAnnotate would also allow instructors to easily 
insert the full, color-coded rubric at the end of each paper.  

Second, we encountered some technical difficulties. Students noted that sometimes the 
colors were lost when they printed their annotated papers using campus printers whose default 
was black-and-white. Instructors should stress that students need to either read comments 
electronically or print them in color. Another small group of students mentioned that, depending on the 
program they used to open the annotated PDF (e.g., Adobe Reader or Preview, iBooks, 
iAnnotate, DocsToGo), some colors were more legible than others. Before implementing this 
feedback model, instructors should investigate which colors display most legibly in the programs 
available at their institution and advise students to use those programs 
to read their annotations. 
 Finally, some students reported a lack of “personal touch” associated with the use of e-
assessment tools. In our pre-semester survey, the vast majority of respondents (85%) indicated 
that most of their written work in high school had been graded by hand. On the survey 
several students remarked that, while they did not find any fundamental difference between 
handwritten and electronic comments in terms of content, they generally perceived handwritten 
comments to be more "personal" and claimed to "connect" with them more despite issues of 
illegibility.21 As more than one of these students acknowledged, however, their preference likely 
also stemmed from the fact that they were simply accustomed to handwritten comments. Yet this 
perception is not insignificant: Chang et al. (2012) discovered that students' perceptions of 
personable feedback are interconnected with their perceptions of quality feedback; in other words, 
students associate the care taken to hand-write comments with higher-quality feedback from 
caring professors, and thus they take that 
feedback more seriously.22 One way instructors might temper these concerns is to create 
handwritten comments (rather than text stamps) in iAnnotate using a stylus, though this might 
reintroduce issues of illegibility, especially given complaints about the imprecision of styli, and 
would forfeit the instructor's time savings. Alternatively, instructors might 
also choose to use a new feature of the latest version of iAnnotate: audio comments. Instructors 
can pepper the paper with audio comments of up to 60 seconds each. In addition to mitigating 
concerns about “impersonal” feedback, audio files might also create a more expressly dialogic 
form of feedback (and could stand in for the collapsible balloons as we used them).23  
 
VI. Conclusions. 
 
We found it interesting that the students who responded most positively to our feedback model 
were the strongest and weakest writers in terms of the elements of writing emphasized on our 
rubric. On the one hand, students who entered the course with a strong grasp of writing 
fundamentals reported that this feedback model helped them pinpoint very nuanced aspects of 
their writing (within broader categories) that needed improvement.  On the other hand, our 
weakest students, who frequently self-identified as visual learners, found the feedback model 
especially well-suited to their learning style, enabling them to visualize their writing strengths 
21  This finding corroborates student preferences for a “human aspect” to feedback found in Budge (2011) and students’ aversion 
to e-assessment because it is impersonal, as reported in Ferguson (2011), Scott (2006), and Morgan and Toledo (2006).   
22 This study finds that students prefer e-assessment for its accessibility, legibility, and timeliness, while they value 
handwritten feedback as higher-quality because of its personability. 
23  On using iAnnotate’s audio feature to make grading more personal, see Doug Ward’s post on ProfHacker:  
http://chronicle.com/blogs/profhacker/grading-with-voice-on-an-ipad/40907 




and weaknesses.  Specifically, the color-coding enabled them to compartmentalize writing issues 
and to more systematically approach revisions, tackling one category at a time. So, in the end, we 
were surprised, yet pleased, to find that our feedback model addressed existing educational and 
learning inequities. 

In addition to speaking to students with differing educational backgrounds and learning 
styles, we believe that this feedback model could be productively applied across courses, 
disciplines, and institutions with minimal adaptation. In our small liberal arts college 
environment, where class sizes are relatively small and there is a premium placed on the 
professor-student interaction, iAnnotate helped enrich these interactions by focusing 
our engagements on our learning objectives. The e-assessment tool kept 
students’ and instructors’ attention firmly trained on a limited set of writing elements and on 
students’ development as thinkers and writers. When considering how this system might be 
applied to different courses or different institutional contexts, particularly those with much larger 
enrollments or those in which student work is graded by rotating instructors or teaching 
assistants, the benefits of this feedback model become even more apparent. In particular, for the 
former, this model would enable instructors to offer more detailed feedback than would be 
ordinarily possible given the size of their classes. For the latter, this model would create 
coherent, unified standards that could be used by various graders, providing more consistency for 
students, thus improving the chance that students—now with a clearer sense of what is going 
wrong—could develop as writers. 

 
Appendix A: List of customized stamps 

 
Argumentation 
interesting idea 

develop this idea further 
good, careful reasoning 
you need to make your reasoning more explicit 
you need to make explicit each stage/layer of logic in this argument 
imprecise reasoning 
unpack this claim 
strong thesis, complex argument  
refine your thesis 
your intro is lacking a thesis 

Evidence 
nice use of evidence 
you need to interpret/analyze your evidence 
you need to introduce your evidence 
this evidence isn’t relevant to the point you are making 
insufficient evidence: add more/greater range to substantiate your point 
support this claim with evidence 

Structure 
strong transition 
weak transition 
clarify the point of this paragraph 
clarify how this paragraph contributes to your overall argument 
nice guidepost  

Style 
awkward prose 
well-written/nicely-put 
vary your word choice 
vary your sentence structure 
unpack this sentence—too long, too many ideas 
this language is vague, specify 
does this word convey precisely what you mean? 
consider your audience 

Mechanics 
incomplete/improper citation 
citation needed 
proofread your paper 
sp. 

 
References 

 
Bjorkman, M. (1972). Feedforward and feedback as determiners of knowledge and policy: Notes 
on a neglected issue. Scandinavian Journal of Psychology, 13, 152-158. 
 
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 
5(1), 7-74. 
 
Brown, S., Bull, J., & Race, P. (Eds.). (1999). Computer Assisted Assessment in Higher 
Education. London: Routledge. 
 
Burke, D. (2009). Strategies for using feedback students bring to higher education. Assessment & 
Evaluation in Higher Education, 34(1), 41–50. 
 
Buzzetto-More, N. A., & Alade, A.J. (2006). Best practices in e-assessment. Journal of 
Information Technology Education, 5, 251-269. 
 
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 
31(2), 219–233. 
 
Carless, D., Salter, D.,  Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. 
Studies in Higher Education, 36(4), 395-407. 
 
Chang, N., Watson, A.B., Bakerson, M.A., Williams, E.E., McGroon, F.A., & Spitzer, B. (2012). 
Electronic feedback or handwritten feedback: What do undergraduate students prefer and why? 
Journal of Teaching and Learning with Technology, 1(1), 1-23. 
 
Cowan, J. (2003). Assessment for learning—giving timely and effective feedback. Exchange, 4, 
21-22. 
 
Denton, P. (2001). Generating coursework feedback for large groups of students using MS Excel 
and MS Word. University Chemistry Education, 5, 1-8. 
 
Denton, P., Madden, J., Roberts, M., & Rowe, P. (2008). Students’ responses to traditional and 
computer-assisted formative feedback: A comparative case study. British Journal of Educational 
Technology, 39(3), 486-500. 
 
Duncan, N. (2007). Feed-forward: Improving students’ use of tutors’ comments. Assessment & 
Evaluation in Higher Education, 32, 271-283. 
 
Draper, S. (2009). What are learners actually regulating when giving feedback? British Journal 
of Educational Technology, 40(2), 306–315. 
 
Faris, M. J. (2010). Using iAnnotate to grade [Blog post]. Retrieved from 
http://blogs.tlt.psu.edu/projects/ipad/2010/10/using-iannotate-to-grade.html 
 
Heinrich, E. (2007). E-learning support for essay-type assessments. In N.A. Buzzetto-More (Ed.), 
Principles of Effective Online Teaching. Santa Rosa: Informing Science Press. 
 
Heinrich, E., Milne, J., Ramsay, A., & Morrison, D. (2009). Recommendations for the use of e-
tools for improvements around assignment marking quality. Assessment & Evaluation in Higher 
Education, 34(4), 469–479. 
 
Hepplestone, S., Holden, G., Irwin, B., Parkin, H., & Thorpe, L. (2011). Using technology to 
encourage student engagement with feedback: A literature review.  Research in Learning 
Technology, 19(2), 117-127. 
 
Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: The problem of 
communicating assessment feedback. Teaching in Higher Education, 6(2), 269-274. 
 
Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the 
role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53-64. 
 
Hounsell D., McCune, V., Hounsell, J., & Litjens, J. (2008). The quality of guidance and 
feedback to students. Higher Education Research and Development, 27, 55-67. 
 
Irons, A. (2008). Enhancing learning through formative assessment and feedback. Abingdon: 
Routledge. 
 
Jones, J. B. (2010, June 4). Mark up PDFs on Your iPad: iAnnotate PDF [Blog post]. Retrieved 
from http://chronicle.com/blogs/profhacker/mark-up-pdfs-on-your-ipad-iannotate-pdf/24500  
 
Lizzio, A., & Wilson, K. (2008). Feedback on assessment: Students' perceptions of quality and 
effectiveness. Assessment & Evaluation in Higher Education, 33(3), 263-275. 
 
McNeill, M., Gosper, M., & Xu, J. (2012). Assessment choices to target higher order learning 
outcomes: The power of academic empowerment. Research in Learning Technology, 20, 283-296. 
 
Miller, M., Linn, R., & Gronlund, N. (2012). Measurement and Assessment in Teaching (11th 
edition). Columbus: Pearson. 
 
Monroe, B. (2002). Feedback: Where it’s at is where it’s at. The English Journal, 92(1), 102-
104. 
 
Mutch, A. (2003). Exploring the practice of feedback to students. Active Learning in Higher 
Education, 4(1), 24–38. 
 
Nesbit, P., & Burton, S. (2006). Student justice perceptions following assignment feedback. 
Assessment & Evaluation in Higher Education, 31(6), 655–670. 
 
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A 
model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-
218. 
 
Nicol, D. (2009). Assessment for learner self-regulation: Enhancing achievement in the first year 
using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335–352. 
 
Poulos, A., & Mahony, M.J. (2008). Effectiveness of feedback: The students’ perspective. 
Assessment & Evaluation in Higher Education, 33(2), 143-154. 
 
Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: All that effort, but what 
is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289. 
 
Rust, C., O’Donovan, B., & Price, M. (2005). A social constructivist assessment process model: 
How the research literature shows us this could be best practice. Assessment & Evaluation in 
Higher Education, 30(3), 231–240. 
 
Sadler, D. (2010). Beyond feedback: Developing student capability in complex appraisal. 
Assessment & Evaluation in Higher Education, 35(5), 535–550. 
 
Sample, M. (2011, November 8). Making the Most of iAnnotate on the iPad [Blog post]. 
Retrieved from http://chronicle.com/blogs/profhacker/making-the-most-of-iannotate-on-the-
ipad/37091  
 
Walker, M. (2009). An investigation into written comments on assignments: Do students find 
them usable? Assessment & Evaluation in Higher Education, 34(1), 67-78. 
 
Ward, D. (2012, June 19). Grading with voice on an iPad [Blog post]. Retrieved from 
http://chronicle.com/blogs/profhacker/grading-with-voice-on-an-ipad/40907  
 
Weaver, M.R. (2006). Do students value feedback? Student perceptions of tutors’ written 
responses. Assessment & Evaluation in Higher Education, 31(3), 379–394. 
 
Wojtas, O. (1998). Feedback? No, just give us the answers. Times Higher Education Supplement, 
September 25. 