Journal of Teaching and Learning with Technology, Vol. 2, No. 2, December 2013, pp. 21 - 42.	
  

Undergraduate students’ perceptions of electronic and handwritten 
feedback and related rationale 

 
Ni Chang1, Bruce Watson2, Michelle A. Bakerson3, and Frank X. McGoron4 

 
Abstract: Besides awarding grades, some instructors provide comments/feedback
on students' assignments. Because students' views on feedback help frame
effective and efficient teaching and learning, the topic merits close study. In
the 2013 academic year, all undergraduate students at a Midwestern university
were invited to complete a survey sharing which form of feedback they preferred,
handwritten or electronic (e-feedback), and the rationale behind their
preferences. Their rationales fell into five themes: accessibility, timeliness,
legibility, quality, and personal. The data were analyzed quantitatively and
qualitatively and show that the majority of respondents preferred e-feedback.
With respect to rationale, more respondents, and higher ratings overall, favored
e-feedback for timeliness, accessibility, and legibility. Although more
respondents overall favored e-feedback, ratings were higher for handwritten
feedback on the quality and personal themes. Age and class standing were
positively associated with students' desire for feedback in general and for
e-feedback; students' GPA, however, was negatively associated with both. The
article also addresses limitations, educational implications, and suggestions
for future research.
 
Keywords: feedback, electronic feedback, handwritten feedback, instructors, 
students 

	
  
I.  Introduction. 
 
Feedback is information that fosters deep learning (Denton, Madden, Roberts, & Rowe, 2008; 
Higgins, Hartley, & Skelton, 2002). It is a vital component of effective and efficient teaching and 
learning in higher education (Ackerman & Gross, 2010; Ball, 2009; Hounsell, 2003; Matthews, 
Janicki, He, & Patterson, 2012; Parkin, Hepplestone, Holden, Irwin, & Thorpe, 2012). Good 
teaching is represented by helpful comments on students’ assignments (Ramsden, 2003). With 
the rapid development of technologies, some instructors have shifted the way they provide 
feedback from a conventional handwritten approach to a technological format: specifically,
typing feedback and delivering it electronically. Students’ views on feedback help frame both 
effective and efficient instruction and learning in higher education (Denton et al., 2008; Higgins 
et al., 2002; Parkin et al., 2012).  It is important to know students’ perceptions of feedback, 
1 Department of Elementary Education, Indiana University South Bend, 1700 Mishawaka Ave. South Bend, IN 46634, 
nchang@iusb.edu 
2 Department of Professional Educational Services, Indiana University South Bend, 1700 Mishawaka Ave. South Bend, IN 
46634, watsonbr@iusb.edu 
3 Department of Secondary Education and Foundations of Education, Indiana University South Bend, 1700 Mishawaka Ave. 
South Bend, IN 46634, mbakerso@iusb.edu 
4 Department of Elementary Education, Indiana University South Bend, 1700 Mishawaka Ave. South Bend, IN 46634, 
fmcgoron@iusb.edu	
including handwritten and electronic feedback (e-feedback) (Ackerman & Gross, 2010; Carless, 
2006; Higgins et al., 2002). Therefore, a survey was conducted at a regional campus of a large
Midwestern university during the 2012-2013 academic year. The purposes of this survey study
were to explore undergraduate students' perceptions of two forms of feedback, e-feedback and
handwritten feedback, and the reasons behind their varied preferences. The research questions
underlying this study were "Which do undergraduate students prefer: handwritten feedback or
e-feedback?" and "What are the rationales behind their preferences?"
 
A. Theoretical Framework. 
 
Students desire to receive feedback, as it could help better their learning (Hyland, 2000). 
However, feedback needs to be easily accessible to students. Accessibility is a general 
expectation of students in the millennial generation (Morrissey, Coolican, & Wolfgang, 2011). A 
survey study by Di Costa (2010) found that accessibility was the component students most often
cited in defining useful feedback. Bridge and Appleyard (2008) and Sadler (2010) noted that
students appreciated the permanence and safety of feedback that could be accessed
electronically. In contrast, Chang et al. (2012) found that one reason given by handwritten
feedback supporters was that they could conveniently receive feedback directly from professors
in class; that is, they did not need to rely on computers to access it.

Besides accessibility of feedback, timeliness has been identified as an important element 
in benefiting student learning. The National Union of Students (NUS; 2008) survey found 
students were unhappy with the timing of their feedback. Although students want feedback that 
is constructive, they have a strong preference for feedback that is prompt (Scott, 2006) and 
timely (Ferguson, 2011). If feedback is received late, it becomes useless to students, as many 
students have already moved on (Denton et al., 2008). To receive feedback early, it seems 
electronically delivered feedback gets the majority of student support (Chang et al., 2012). When 
Bridge and Appleyard (2008) asked students to consider the issue of online feedback, 88% 
reported that they favored online feedback because they were able to receive it faster than in the 
more conventional format of hand delivery. Bai and Smith (2010) cited the automated nature of 
e-learning as contributing to the benefit of timely feedback.	
   

When feedback is typed rather than handwritten, it is readable. Denton et al.
(2008) reported that students considered legibility a feature that would significantly improve
the feedback they received. Legibility is therefore a significant element in supporting student
learning (Ferguson, 2011). Price, Handley, Millar, and O'Donovan (2010) reported that students'
general criticism of feedback was mainly due to illegible writing. Illegible feedback is
unclear, leaving students both disappointed and frustrated, a finding also supported by the
study conducted by Chang et al. (2012).

In aiding students to learn, feedback also needs to be constructive and helpful. Its
content needs to be understood by students. Feedback should also let students know what needs
their attention, and where, and whether their work is on the right track. Furthermore,
allowing students to engage in revisions according to received feedback is beneficial as well.
Together, these elements form the operational definition of quality. According to the National
Union of Students (2008), students are dissatisfied with the quality of feedback. Case (2007)
likewise identified poor and low-quality feedback as issues in the feedback students received.
When considering the quality of online instruction, Yang and Durrington (2010) found quality of
instructors’ feedback as the aspect mentioned most often in student course evaluations. When 
time and quality were considered as competing aspects of feedback, students were happy to wait 
a little longer for feedback if quality increased (Chang et al., 2012; Ferguson, 2011). 

Quality feedback also needs to contain language that is positive and relational, which
may help establish the relationship between instructors and students. When such feedback is
received, students may feel that their professors care about their learning, and they
appreciate the time and effort spent in providing feedback on their assignments. Students are
thus likely to read the feedback and, in turn, improve their performance. Together, these
elements form the operational definition of personal feedback. Krause and Stark (2010) found
that feedback is most useful to students when it is perceived to be personal. Students
responding to Ferguson's (2011) study
want feedback to be both positive and personal. When the tone of feedback is overly negative, 
students often feel that instructors do not care about their learning (Price et al., 2010). Without 
feedback that is personal, students may view assignments as mere products, leaving them feeling 
alienated and disengaged (Di Costa, 2010; Mann, 2001; Price et al., 2010). With respect to 
feedback that is personal, one interesting finding by Chang et al. (2012) was that respondents 
who supported handwritten feedback perceived that type of feedback as more personal than those 
who supported e-feedback. The handwritten supporters also recognized that handwritten 
feedback enabled them to have close rapport with their instructors.  

Accessibility, timeliness, legibility, quality, and personal, as mentioned above,
are the five themes identified by Chang et al. (2012) in a prior study during the 2011-2012
academic year. Two hundred and sixty students from the School of Education at the university
participated in that study, which explored which form of feedback the students preferred,
handwritten or electronic, and the rationale behind their preferences. E-feedback was defined
as all feedback delivered to students electronically. Chang et al. (2012) found that the
majority of the participants (68%) preferred e-feedback, while 32% preferred handwritten
feedback. Among the rationales for preferring e-feedback, 38% of respondents cited its easy
accessibility, 30% its timeliness, and 16% its legibility. Fewer e-feedback supporters
mentioned the quality (10%) and personal (1%) aspects. In contrast, many more handwritten
feedback supporters endorsed quality (40%) and personal (32%); fewer cited accessibility (25%)
and timeliness (3%), and none indicated legibility. The present study further explored the
same two questions: Which form of feedback did the students prefer, handwritten or electronic?
And what was the rationale behind that preference?
 
II. Methods. 
 
A. Participants. 
 
All undergraduate students at a Midwestern university were invited to participate in a survey
asking about handwritten and e-feedback and the related rationale. Of the approximately 7,200
students, 763 undergraduate students responded, a return rate of almost 11%. Respondents who
skipped questions are noted in the results. Female respondents (n = 549) outnumbered males
(n = 210), and nearly twice as many respondents overall preferred e-feedback (n = 475) as
handwritten feedback (n = 273). The predominant age range was 18-24 (n = 423). Class standing
was for the most part
evenly distributed. The predominant GPA range was 3.01-4.00 (n = 470) and the College of 
Liberal Arts (CLAS) had the most respondents (n = 301) (see Table 1). 
 
B. Instrument. 
 
An online survey hosted on Survey Monkey was used to collect data. The survey questions were
modified and revised from the previous study to obtain more valid information from students
across the entire campus. That is, based on the five themes derived from the previous study
(Chang et al., 2012): accessibility, timeliness, legibility, quality, and personal, the
present study expanded and extended each theme with a few corresponding items on a 7-point
Likert scale. For example, there were four factors under the theme of accessibility: (a)
allows me to get information easily, (b) allows me to receive and send information
conveniently, (c) allows me to ask questions easily, and (d) makes me feel secure to receive
feedback from the professor. The survey instrument consisted of thirteen closed-ended
questions, each with multiple factors, and four open-ended questions.
 
Table 1.  Demographics in terms of handwritten and e-feedback preference.

                   Handwritten      E-feedback       Blank           Total
Variables          n      %         n      %         n      %        n      %
Gender
  Male             74     35.24     135    63.98     1      0.47     210    100
  Female           199    36.18     340    61.93     10     1.82     549    100
  Total            273    36%       475    62%
Age
  18-24            180    42.55     239    56.50     4      0.95     423    100
  25-34            53     29.78     122    68.54     3      1.69     178    100
  35-44            26     26.26     71     71.72     2      2.02     99     100
  45-54            11     25.58     31     72.09     1      2.33     43     100
  55+              5      27.78     12     66.67     1      5.56     18     100
  Total            275    36%       475    62%
Class Standing
  Freshman         74     46.84     81     51.27     3      1.90     158    100
  Sophomore        74     43.27     95     56.21     2      1.18     171    100
  Junior           62     32.80     125    66.14     2      1.06     189    100
  Senior           65     27.20     170    71.13     4      1.67     239    100
GPA
  3.01-4.00        161    34.26     302    64.26     7      1.49     470    100
  2.01-3.00        78     36.62     134    62.91     1      0.47     213    100
  1.01-2.00        4      25.00     12     75.00     0      0        16     100
  0.00-1.00        1      100       0      0         0      0        1      100
  Unknown          31     56.36     23     41.82     1      1.82     55     100
School
  Arts             23     34.33     42     62.69     2      2.99     67     100
  Business         31     27.68     80     71.43     1      0.89     112    100
  Education        57     43.18     74     56.49     1      0.76     132    100
  CLAS             118    39.20     181    60.13     2      0.66     301    100
  Health           34     28.81     85     71.43     1      0.84     120    100
  Technology       12     44.44     14     51.85     1      3.70     27     100
Note.  Percents refer to the partitioned group (row n).  Some of the ns do not add up to 763
as some respondents skipped questions.

 
C. Procedure. 
 
After Institutional Review Board approval, the survey link was sent via an email invitation
to all undergraduate students in attendance at the university. On Survey Monkey,
the students were first prompted with a study information sheet, which informed them of the 
purpose of the study, ensured confidentiality and also made it clear that participation was 
voluntary. If potential respondents agreed to participate, they continued on to complete the 
survey. All potential participants received a first follow-up letter electronically two weeks after 
the initial invitation letter was sent out. A second follow-up letter was emailed to all potential 
participants two weeks later. The study was closed two weeks following the second follow-up 
letter. 
 
D. Data Analysis. 
 
To answer the research questions of whether the undergraduate students preferred e-feedback or
handwritten feedback, and why, nonparametric and parametric tests were run in SPSS 20. A
crosstabs procedure using the Chi-square test of independence was used to analyze the nominal
variables. A Chi-square test of independence measures the degree to which a sample of data
comes from a population with a specific distribution (Bakerson, 2009; Mertler & Vanatta, 2005;
Rosenberg, 2007; Stevenson, 2007): it tests whether the observed frequency counts of a
distribution of scores fit the theoretical distribution. This was addressed through the use of
Pearson's Chi-square procedure (Bakerson, 2009; Mertler & Vanatta, 2005; Rosenberg, 2007).
Independent t-tests were conducted to compare feedback preference for all factors under the
five themes: accessibility, timeliness, legibility, quality, and personal (Charmaz, 2000;
Creswell, 2002). Correlations of demographic variables with feedback preferences were run to
establish patterns in the variables (Creswell, 2002). In addition, all responses to open-ended
questions were analyzed with respect to the respondents' justifications of their preferences
for handwritten or e-feedback, providing a purposeful examination of detailed actual
experience (Creswell, 2002).
 
III. Results and Discussion. 
 
A. Preference for the form of feedback. 
 
With respect to the first research question, "Which do the participants prefer: handwritten
feedback or electronic feedback?" it was found that the majority of the participants (n = 476,
63.3%) preferred e-feedback (see Figure 1). The studies conducted by Chang et al. (2012),
Denton et al. (2008), and Parkin et al. (2012) also yielded similar findings in which more 
students preferred e-feedback than handwritten feedback.  

 
Figure 1.  Feedback preference. 
 
B. Degrees of preferences for both forms of feedback.  
 
In addition to the preference question, the respondents were also asked to rate their degree
of preference for e-feedback and handwritten feedback in general, and then for all factors
under the five main themes: accessibility, timeliness, legibility, quality, and personal.
Table 2 details the results of the question concerning the degree to which a respondent
preferred e-feedback or handwritten feedback. Whichever form the respondents preferred, they
also rated that form more strongly than the other.
 
Table 2.  T-tests comparing degree of preference for handwritten and e-feedback, based on
choice of feedback.

                             n      Mean    SD      t          df     p
Preference for Handwritten
  Handwritten                276    1.95    1.01    -24.596    745    0.00
  E-feedback                 471    4.46    1.51
Preference for E-feedback
  Handwritten                274    4.33    1.39    29.33      748    0.00
  E-feedback                 476    1.86    0.92

Note. Likert scale: 1 = very much prefer to 7 = not preferred at all; the lower the mean, the
stronger the preference.
 

C. The usefulness of two forms of feedback. 
 
The respondents were also asked to rate the degree of usefulness of each form of feedback (see 
Table 3). When the respondents preferred handwritten feedback, they also thought handwritten 
feedback was more useful than e-feedback. When the respondents chose e-feedback as their 
preferred form, they rated e-feedback as much more useful than handwritten feedback. 
 
Table 3.  T-tests comparing usefulness of feedback.

                             n      Mean    SD      t          df     p
Usefulness of Handwritten
  Handwritten                275    1.644   0.878   -16.147    748    0.000
  E-feedback                 475    3.324   1.591
Usefulness of E-feedback
  Handwritten                274    3.518   1.435   20.127     747    0.000
  E-feedback                 476    1.787   0.916

Note. Likert scale: 1 = very useful to 7 = not useful at all; the lower the mean, the stronger
the perceived usefulness.
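The t values in these tables can be checked against the reported summary statistics alone. The following sketch, in plain Python rather than SPSS, applies the pooled-variance independent-samples t formula to the "Usefulness of Handwritten" comparison in Table 3; it is an illustrative re-computation, and small discrepancies from the published value reflect rounding in the reported means and SDs:

```python
import math

def pooled_t(n1, m1, sd1, n2, m2, sd2):
    """Independent-samples t statistic (pooled variance) from summary stats."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# "Usefulness of Handwritten" row of Table 3:
# handwritten supporters (n = 275, M = 1.644, SD = 0.878)
# vs e-feedback supporters (n = 475, M = 3.324, SD = 1.591)
t, df = pooled_t(275, 1.644, 0.878, 475, 3.324, 1.591)
print(f"t = {t:.3f}, df = {df}")  # close to the reported t = -16.147, df = 748
```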
 
D. Accessibility. 
 
There were four factors under the theme of accessibility: (a) allows me to get information easily, 
(b) allows me to receive and send information conveniently, (c) allows me to ask questions easily 
and (d) makes me feel secure to receive feedback from the professor. Irrespective of the 
respondents’ preferred feedback form, there was a statistically significant difference in the 
perceptions of each of the factors under this theme between handwritten feedback supporters and 
e-feedback supporters. That is, when the respondents chose handwritten feedback as their
preferred form, they rated all factors under handwritten feedback more strongly than those
who preferred e-feedback did (see Table 5). When the respondents chose e-feedback as their
preferred form, they rated all factors under e-feedback more strongly than the same factors
under handwritten feedback (see Table 4). Overall, however, the respondents gave higher
ratings to e-feedback than to handwritten feedback regardless of preferred feedback form (see
Tables 4 & 5).
 
Table 4.  T-tests comparing accessibility factors for e-feedback.

                             n      Mean    SD      t          df     p
(a) Allows me to get information easily
  Handwritten Preference     270    2.722   1.595   13.858     736    0.000
  E-feedback Preference      468    1.511   0.773
(b) Allows me to receive and send information conveniently
  Handwritten Preference     269    2.100   1.307   9.668      733    0.000
  E-feedback Preference      466    1.380   0.703
(c) Allows me to ask questions easily
  Handwritten Preference     269    2.877   1.815   9.770      734    0.000
  E-feedback Preference      467    1.803   1.164
(d) Makes me feel secure to receive feedback from the professor
  Handwritten Preference     267    3.240   1.664   12.912     729    0.000
  E-feedback Preference      464    1.882   1.167

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the
stronger the agreement.
 
The justifications provided by the e-feedback supporters for (a) allows me to get
information easily include, "I'm always online, always even on my phone so it makes things
easier for me." "[N]o matter where you are, you usually have access to the internet therefore
you can get it anywhere at any time." Denton et al. (2008) and Parkin et al. (2012) reported
similar findings: technology enabled students to access their grades and feedback at a time
and place of their choosing. In commenting on (b) allows me to receive and send information
conveniently, some e-feedback supporters wrote, "Easily accessible as it only requires one or
two clicks of the mouse." "Very helpful because I can log on whenever it is convenient for my
schedule to check on things." Similar conclusions about the convenience of receiving and
sending information over the Internet were drawn by Chang (2011) and Chang et al. (2012).
Students recognized and appreciated the flexibility and convenience that technology could
provide in facilitating their learning (Denton et al., 2008; Parkin et al., 2012).

In contrast, handwritten feedback supporters had their own reasons to endorse (a) allows
me to get information easily and (b) allows me to receive and send information conveniently.
One respondent explained, "It does not require a computer to read." To some students, finding
and/or logging on to a computer required effort. A student noted, "If it's an email or
electronic, I have to take the time to log in to the computer, which at home is slow and in a
dark corner." This rationale is consistent with the studies conducted by Chang (2011) and
Chang et al. (2012): handwritten feedback was independent of the Internet, which made student
learning convenient. To avoid redundancy, the discussion of (c) allows me to ask questions
easily appears in the Personal section.

With respect to why e-feedback supporters endorsed (d) makes me feel secure to receive
feedback from the professor, some of the explanations were: "I don't have to worry about
losing it!" "It's nice that you can always go back to refer to it when it's saved online." Yet
the handwritten feedback supporters contended, "Does make me feel secure with having the
actual feedback in my hands." "This is also good for keeping me secure because I can always
keep and lock the feedback from it being deleted." Although Chang et al. (2012) identified and
supported this category, few other studies have examined it; future research is therefore
warranted to better facilitate student learning.
 

Table 5.  T-tests comparing accessibility factors for handwritten feedback.

                             n      Mean    SD      t          df     p
(a) Allows me to get information easily
  Handwritten Preference     274    2.449   1.465   -17.526    728    0.000
  E-feedback Preference      456    4.568   1.648
(b) Allows me to receive and send information conveniently
  Handwritten Preference     271    3.989   1.623   -10.838    518    0.000
  E-feedback Preference      454    5.286   1.447
(c) Allows me to ask questions easily
  Handwritten Preference     266    2.872   1.680   -12.335    579    0.000
  E-feedback Preference      454    4.504   1.770
(d) Makes me feel secure to receive feedback from the professor
  Handwritten Preference     268    1.720   1.206   -14.100    718    0.000
  E-feedback Preference      452    3.489   1.832

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the
stronger the agreement.


E. Timeliness. 
 
There is only one factor under the theme of timeliness: (e) [Feedback] allows me to receive
feedback fast. On this factor, there was a statistically significant difference between the
views of the handwritten feedback supporters and those of the e-feedback supporters: each
group rated timeliness more strongly for its own preferred form. Overall, however, the
respondents' ratings for e-feedback were stronger than for handwritten feedback regardless of
preferred feedback form (see Table 6).
 
Table 6.  T-tests comparing the timeliness theme for handwritten and e-feedback.

                             n      Mean    SD      t          df     p
Handwritten: (e) allows me to receive feedback fast
  Handwritten                266    3.624   1.581   -12.220    570    0.00
  E-feedback                 451    5.135   1.631
E-feedback: (e) allows me to receive feedback fast
  Handwritten                267    2.277   1.461   8.927      731    0.00
  E-feedback                 466    1.504   0.883

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the
stronger the agreement.
 
Regardless of the respondents' preferences for the two forms of feedback, it is apparent
that they rated e-feedback as timelier than handwritten feedback; the mean difference in views
on timeliness is notably large (see Table 6). Similar findings were reported by Chang et al.
(2012) and Dennen, Darabi, and Smith (2007). When feedback is delivered electronically,
students do not have to wait until the next class or another week, as one student wrote,
"…I don't have to wait a week to hear back on how well I did or what I need to improve on."
Another pointed out, "If I receive feedback that is very late, I usually disregard it because
it is irrelevant." These findings are consistent with Parkin et al. (2012), who found that if
students did not receive feedback in time for it to be meaningful for the task assessed, its
relevance could be reduced. Feedback needs to be timely to appropriately promote student
learning (Chang et al., 2012; Dennen et al., 2007; Di Costa, 2010; Ferguson, 2011; Parkin et
al., 2012; Rowe & Wood, 2008).

However, from the perspective of those who supported handwritten feedback, timeliness
did not seem to be a concern. One respondent reasoned that feedback regularly delivered in
class lets students predict when they will receive it: "With handwritten feedback, you know
when you can expect to receive it (i.e. in class or other scheduled meeting time)." Another
reason for not being concerned about timeliness was the view, held by many handwritten
feedback supporters and even some e-feedback supporters, that a delayed return of feedback
reflects instructors spending time reading students' work, as one student put it, "It takes
longer to get a handwritten feedback … because the Professor took the time and effort to read
it [your work]." Thus, feedback could be shaped by individual student assignments as a means
of individualized instruction (Chang et al., 2012). As such, the respondents perceived that
they were likely to receive detailed and constructive feedback, as some commented, "I am
willing to wait longer for and prefer to wait for detailed handwritten feedback as opposed to
electronic feedback." "If constructive feedback given, time isn't too
much a factor." "It's okay if they take a little longer because the quality is better." Chang
et al. (2012) and Ferguson (2011) similarly found that students would be willing to wait
longer for quality feedback.

Throughout the entire survey, neither those who preferred e-feedback nor those who
preferred handwritten feedback mentioned the size or type of assignment in relation to
timeliness. That is, none indicated to what extent timeliness depends on the assignment, even
though a short essay can obviously be evaluated more quickly than a longer paper. The present
data therefore offer no particular answer to this issue. Nonetheless, feedback that arrives
late does little to deepen or maximize student learning (Chang et al., 2012; Dennen et al.,
2007; Di Costa, 2010; Ferguson, 2011; Parkin et al., 2012; Rowe & Wood, 2008).

 
F. Legibility. 
 
There were two factors under the theme of legibility: (f) [Feedback] enables me to read the
feedback and (g) [Feedback] enables me to understand what the professor writes. There were
statistically significant differences between the perceptions of the handwritten feedback
supporters and those of the e-feedback supporters. Respondents who chose handwritten feedback
as their preferred form rated both legibility factors under handwritten feedback more strongly
than the e-feedback supporters did (see Table 7). The same holds true for the respondents who
chose e-feedback: they rated the two legibility factors under e-feedback more strongly than
under handwritten feedback (see Table 8).
 
Table 7.  T-tests comparing legibility factors for handwritten feedback.

                             n      Mean    SD      t          df     p
(f) Enables me to read the feedback
  Handwritten Preference     266    2.959   1.510   -11.912    716    0.000
  E-feedback Preference      452    4.522   1.800
(g) Enables me to understand what the professor writes
  Handwritten Preference     267    3.079   1.450   -12.404    717    0.000
  E-feedback Preference      452    4.601   1.675

Note. Likert scale: 1 = strongly agree to 7 = strongly disagree; the lower the mean, the
stronger the agreement.
 

Even though there are statistically significant differences within each factor, overall more
students preferred e-feedback on both of these factors and gave it higher ratings regardless
of their particular feedback preference (see Tables 7 and 8). Chang et al. (2012), Denton et
al. (2008), Ferguson (2011), and Price et al. (2010) reported similar findings. Common
justifications provided by the respondents include, "Since it is typed, it is legible [,]
[i]f their spelling and grammar is good at least." "… electronic feedback wins in this
category [legibility]." Denton et al. (2008) and Parkin et al. (2012) found that many students
were likely to read or use feedback if it was returned to them in a typed and legible format.
Chang (2011), Chang et al. (2012), and Ferguson (2011) also confirmed the finding that typed
feedback enabled students to read
feedback without difficulty. With respect to (g), [Feedback] enables me to understand what the 
professor writes, to some respondents, e-feedback, even if it is typed, does not make sense to 



Chang, N., Watson, B., Bakerson, M.A. and McGoron, F.X. 
	
  
 

Journal of Teaching and Learning with Technology, Vol. 2, No. 2, December 2013. 
jotlt.indiana.edu 

31 

students and is full of spelling errors, it is of little use, as a respondent expressed, “You will 
always be able to read typed [feedback], but that doesn't matter if [it] is not necessarily 
comprehensible and more subject to misspellings.” On the contrary, if feedback’s quality was 
good, the respondents were willing to take time to decipher it. A student put it this way: “If the 
quality of what is written is high enough, student time to making out the writing is worth it.” The 
linkage between legibility and quality appears to suggest that students care about their learning 
and hope to act on feedback to better their work (Chang et al., 2012; Ferguson, 2011). However, 
further research is needed for a deep look at this factor. 
  
Table 8.  T-tests comparing legibility factors for e-feedback. 
 n Mean SD t df p 
(f) enables me to read the feedback 

Handwritten Preference 267 1.846 1.316 6.707 728 0.000 
E-feedback Preference 463 1.324 0.788    

(g) enables me to understand what the professor writes  
Handwritten Preference 265 1.996 1.242 5.886 726 0.000 
E-feedback Preference 463 1.495 1.021    

Note. Likert scale 1 = strongly agree to 7 = strongly disagree, the lower the mean the stronger the preference. 
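The comparisons reported in Tables 7 and 8 are independent-samples t-tests computed from group summary statistics. As an illustrative sketch only (using the Table 7 values for factor (f); the pooled-variance form is assumed, since it matches the reported df of 716), such a comparison can be reproduced as:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics for factor (f) under handwritten feedback (Table 7):
# handwritten-preference group vs. e-feedback-preference group.
t_stat, p_value = ttest_ind_from_stats(
    mean1=2.959, std1=1.510, nobs1=266,  # handwritten preference
    mean2=4.522, std2=1.800, nobs2=452,  # e-feedback preference
    equal_var=True,                      # pooled variance, df = 266 + 452 - 2 = 716
)
print(round(t_stat, 3), p_value)  # t close to the -11.912 reported in Table 7
```

A lower mean indicates stronger agreement on the 7-point scale, so the negative t statistic reflects stronger ratings by the handwritten-preference group.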
 
G. Quality. 
 
There were seven factors under the theme of quality: [Feedback] (h) offers constructive criticism 
or comments, (i) is helpful, (j) allows me to understand the content of the professor’s comment, 
(k) allows for revisions and improvement, (l) provides detailed information that I would like to 
know in text, (m) provides detailed information that I would like to know at the end of the paper, 
and (n) allows me to feel and touch the feedback, which is conducive to my reading and 
understanding. There were statistically significant differences between the views of the 
handwritten feedback supporters and those of the e-feedback supporters on all the factors of 
quality. That is, when the respondents chose handwritten feedback as their preferred feedback 
form, they rated all factors more strongly than the e-feedback supporters did (see Table 9). The 
same holds true for those who chose e-feedback as their preferred feedback form: these 
respondents rated the factors of quality under e-feedback statistically more strongly than under 
handwritten feedback (see Table 10). However, overall, respondents rated factors (h) and (n) 
higher under handwritten feedback than under e-feedback (see Tables 9 & 10). 

Table 9.  T-tests comparing quality factors for handwritten feedback. 
 n Mean SD t df p 
(h) offers constructive criticism or comments 

Handwritten Preference 268 1.679 1.126 -9.792 718 0.000 
E-feedback Preference 452 1.799 1.659    

(i) is helpful 
Handwritten Preference 267 1.588 1.098 -10.137 717 0.000 
E-feedback Preference 452 2.741 1.656    

(j) allows me to understand the content of the professor's comment 
Handwritten Preference 267 1.970 1.214 -10.962 716 0.000 
E-feedback Preference 451 3.268 1.695    

(k) allows for revisions and improvement 
Handwritten Preference 265 1.951 1.228 -10.375 712 0.000 
E-feedback Preference 449 3.229 1.770    

(l) provides detailed information I would like to know in text 
Handwritten Preference 266 2.139 1.382 -9.426 711 0.000 
E-feedback Preference 447 3.333 1.770    

(m) provides detailed information I would like to know at the end of a paper 
Handwritten Preference 263 1.658 1.036 -10.914 708 0.000 
E-feedback Preference 447 2.904 1.672    

(n) allows me to feel and touch the feedback, which is conducive to my reading 
Handwritten Preference 265 1.676 1.258 -11.655 707 0.000 
E-feedback Preference 444 3.205 1.902    

Note. Likert scale 1 = strongly agree to 7 = strongly disagree, the lower the mean the stronger the preference. 
 

Handwritten feedback supporters perceived that the quality of handwritten feedback was 
consistently higher than that of e-feedback. A student said, “Handwritten feedback from my 
courses has been consistently higher quality and more thought out comments than any electronic 
feedback I have received.” Most handwritten feedback supporters also agreed that handwritten 
feedback was “more apt to explaining mistakes.” When feedback enabled students to see and 
understand their mistakes, it was likely that students perceived such feedback as high quality. 
Handwritten feedback was therefore helpful and comprehensible, and enabled students to know 
specifically where further improvement was needed. In addition, when instructors write feedback 
by hand, they can use pens of various colors for different purposes, as a respondent explained: 
“Some teachers use different colored ink which helps distinguish whether the written comment 
refers to a mistake or simply a constructive comment. An example would be red ink for errors 
like [grammar]. Blue ink could mean a [constructive] comment or constructive [criticism].” 
Chang et al. (2012) found that the handwritten feedback supporters appeared to attach much 
greater importance to detailed and specific feedback than to feedback that was typed and sent 
electronically. 

Table 10.  T-tests comparing quality factors for e-feedback. 
 n Mean SD t df p 
(h) offers constructive criticism or comments 

Handwritten Preference 263 2.970 1.604 8.656 725 0.000 
E-feedback Preference 464 2.070 1.180    

(i) is helpful 
Handwritten Preference 265 2.608 1.580 8.053 727 0.000 
E-feedback Preference 464 1.819 1.057    

(j) allows me to understand the content of the professor's comment 
Handwritten Preference 264 3.136 1.549 10.844 727 0.000 
E-feedback Preference 465 2.039 1.159    

(k) allows for revisions and improvement 
Handwritten Preference 263 2.875 1.492 8.024 721 0.000 
E-feedback Preference 460 2.078 1.148    

(l) provides detailed information I would like to know in text 
Handwritten Preference 261 3.111 1.561 8.787 719 0.000 
E-feedback Preference 460 2.174 1.259    

(m) provides detailed information I would like to know at the end of a paper 
Handwritten Preference 259 3.290 1.567 9.676 714 0.000 
E-feedback Preference 457 2.230 1.310    

(n) allows me to feel and touch the feedback, which is conducive to my reading 
Handwritten Preference 261 4.667 1.817 8.708 715 0.000 
E-feedback Preference 456 3.384 1.943    

Note. Likert scale 1 = strongly agree to 7 = strongly disagree, the lower the mean the stronger the preference. 
 
Many handwritten feedback supporters also showed their propensity toward handwritten 
feedback by rationalizing their disapproval of e-feedback. One respondent noted that e-feedback 
“[i]s usually based on a scale rather than the professor leaving actual comments.” 
Miscommunication is another reason many handwritten feedback supporters felt disinterested in 
e-feedback. A respondent wrote, “It is particularly hard to fully understand nuance via electronic 
communication. [Thus], miscommunication is so easy.” A lack of non-verbal cues could easily 
lead readers to misinterpret or misunderstand instructors’ intended comments or messages 
(Chang, 2011). In terms of caring for student learning, these respondents felt that e-feedback did 
not show the sincerity of professors: e-feedback was “[n]ot always the best advice because it 
seems like they just threw it together.” To these respondents, such reasons indirectly conveyed 
that e-feedback was not useful and did not allow students to improve their learning. 

E-feedback supporters offered a different rationale for preferring all the factors of quality. 
From their perspective, e-feedback was specific and offered useful explanations: “I've noticed 
that most of the electronic feedbacks are more in-depth in their explanations and reasons.” Parkin 
et al. (2012) echoed that the participants in their study felt that online feedback was thoughtful. 
Additional reasons given by e-feedback supporters include, “The clarity I receive from electronic 
feedback has been better than written. I suspect that is because thoughts can be edited and 
organized in such a way that handwritten examples do not allow.” Parkin et al. (2012) also 
reported that their respondents recognized that editing and revising feedback became fairly easy 
for tutors using electronic tools. Apparently, technology has made teaching more effective, as 
instructors are able to edit and reorganize feedback once it has been composed. In contrast, 
instructors who wrote feedback by hand did not seem able to revise it as frequently or 
conveniently. An e-feedback supporter commented, “Handwritten comments tend to be 
abbreviated more often and leaves you occasionally wondering if you missed something or if you 
correctly understand the abbreviations.” Decoding abbreviations and wondering whether the 
resulting work matched the instructor’s intended meaning were unsettling to the respondents and 
could generate a sense of uncertainty. Such feelings could plausibly be the reason some 
respondents supported e-feedback. However, these aspects were not found in the studies 
conducted by Chang (2011) and Chang et al. (2012). As such, an investigation could be 
warranted to further the understanding of how to facilitate student learning via assessment 
feedback. 

The qualitative data above suggest that the respondents generally desired specific, detailed, 
clear, thoughtful, and comprehensible feedback, as such feedback could offer information for 
improvement. In other words, the data showed that, irrespective of their particular feedback 
preferences, the respondents viewed handwritten feedback as able to provide constructive 
comments. This might explain why, in general, more respondents gave higher ratings to 
handwritten feedback than to e-feedback on (h) offers constructive criticism or comments. 

 
H. Personal. 
 
There were four factors under the category of personal: [Feedback] (o) allows me to establish 
rapport with my professor, (p) encourages me to read feedback, (q) shows that the professor 
cares about me, and (r) makes me appreciate my professor's time and attention. When the 
respondents chose handwritten feedback as their preferred feedback form, they rated all factors 
significantly more strongly than the e-feedback supporters did (see Table 11). The same holds 
true for those who chose e-feedback as their preferred feedback form: these respondents rated 
all factors under electronic feedback significantly more strongly than the same factors under 
handwritten feedback (see Table 12). However, overall, respondents rated factors (q) and (r) 
higher under handwritten feedback than under e-feedback (see Tables 11 & 12). One of the main 
reasons for supporting handwritten feedback may be that “[h]andwritten feedback …  always 
seems personal …” as a respondent stated. Commonly felt by the respondents is that e-feedback 
appears to distance instructors from students psychologically (Chang, 2011), as some students 
noted: “There seems to be a distance between you and the professor if all feedback is just 
electronic.” The respondents explained, “Electronic is usually more of a summary…” “… they 
… just copy and paste a generic statement.” Similarly, Chang et al. (2012) found that “… 
sometimes electronic feedback feels generic and impersonal” (p. 12). By contrast, if feedback is 
handwritten, it would be difficult for instructors to “duplicate” feedback, as a respondent pointed 
out: “I feel like an instructor is much less likely to copy and paste when the feedback is 
handwritten.” If feedback is copied and pasted onto a student’s assignment, the student would 
“[a]lmost feel as if I’m simply a part of a mass email that is sent out to a lot of students.” This 
implies that instructors care very little about student learning if e-feedback is delivered in this 
fashion. Handwritten feedback therefore seems a well-suited candidate for instructors to show 
care about student learning, as a respondent remarked, “I think that having a professor hand 
write their comments not only shows that you[‘re] not just another number but that they actually 
care about your improvements in their classes.” This might also explain why, overall, the 
respondents in the present study gave higher ratings to the factors of (q) shows that the professor 
cares about me and (r) makes me appreciate my professor's time and attention, irrespective of 
their preferred feedback form. In fact, the respondents’ view of the care rendered by instructors 
had already been expressed in the section on timeliness: handwritten feedback supporters were 
willing to wait a bit longer to receive handwritten feedback because they perceived that 
instructors took time to provide thoughtful and constructive feedback, which demonstrated that 
the instructors cared about their academic enhancement. 

 
Table 11.  T-tests comparing personal factors for handwritten feedback. 
 n Mean SD t df p 
(o) allows me to establish rapport with my professor 

Handwritten Preference 265 1.751 1.114 -9.940 710 0.000 
E-feedback Preference 447 2.953 1.772    

(p) encourages me to read the feedback 
Handwritten Preference 265 1.381 0.871 -10.945 710 0.000 
E-feedback Preference 447 2.651 1.765    

(q) shows that the professor cares about me 
Handwritten Preference 263 1.464 0.923 -9.164 707 0.000 
E-feedback Preference 446 2.498 1.686    

(r) makes me appreciate my professor's time and attention 
Handwritten Preference 264 1.337 0.778 -9.007 707 0.000 
E-feedback Preference 445 2.256 1.546    

 Note. Likert scale 1 = strongly agree to 7 = strongly disagree, the lower the mean the stronger the preference. 
 

In this sense, handwritten feedback tends to make students feel personally connected with 
instructors. “[H]andwritten feedback seems more human than electronic feedback,” commented 
a respondent. Chang et al. (2012) also reported that when all feedback was received 
electronically, it became easy for a student to feel like a number, and that when feedback was 
handwritten it would encourage students to ask instructors for clarifications of comments. This 
also addresses (c) allows me to ask questions easily in the section on Accessibility. When 
feedback was written by hand and delivered in class, asking instructors questions became quite 
easy. “Handwritten feedback makes it more welcoming to ask the professor questions about their 
feedback face-to-face and encourage building a student-instructor relationship with the 
instructor,” commented a respondent. Chang et al. (2012) echoed that it was convenient to 
approach instructors for explanations if feedback was delivered in class. Easy and immediate 
responses from instructors also represent gestures showing that instructors care about students’ 
improvement. 

Asking instructors questions face-to-face could promote a positive relationship between 
instructor and student, which seemed, in turn, to encourage students to read feedback. Otherwise, 
reading feedback is unlikely to happen, as a respondent shared: “[M]y professor does not get to 
know me this way …, if it can be all uniform and not unique to each student, the connection is 
not there so reading the "comments" is much less likely to happen.” It is apparent that students’ 
emotions, derived from the instructor-student relationship, play a very important role in student 
learning. “The personal relationship between a professor and myself is very important to me.” “I 
love to feel the connection between the professors,” remarked the respondents. Di Costa (2010) 
and Rowe and Wood (2008) also reported that students wanted instructors to consider their 
feelings; they wanted instructors to be empathetic and understanding. 
 
Table 12.  T-tests comparing personal factors for e-feedback. 
 n Mean SD t df p 
(o) allows me to establish rapport with my professor 

Handwritten Preference 262 4.053 1.780 9.777 718 0.000 
E-feedback Preference 458 2.769 1.647    

(p) encourages me to read the feedback 
Handwritten Preference 261 3.874 1.914 14.769 717 0.000 
E-feedback Preference 458 2.109 1.280    

(q) shows that the professor cares about me 
Handwritten Preference 260 3.862 1.804 10.461 714 0.000 
E-feedback Preference 456 2.540 1.516    

(r) makes me appreciate my professor's time and attention 
Handwritten Preference 261 3.671 1.860 11.240 715 0.000 
E-feedback Preference 456 2.318 1.342    

 Note. Likert scale 1 = strongly agree to 7 = strongly disagree, the lower the mean the stronger the preference. 
 

Some e-feedback supporters disagreed with their peers and believed that e-feedback had 
the capability to establish rapport with professors. They countered that e-feedback was “[m]ore 
one on one [than] the classroom,” and “… was speaking directly to me.” In the view of e-feedback 
supporters, e-feedback was “[m]ore personal.” These findings are consistent with Rowe and 
Wood (2008), who found that students requested feedback to be more personal, as it could 
motivate student learning and guide students in the right direction. 

 
I. Correlations among demographic factors. 
 
The second research question, “What are their related rationale?”, was also examined through 
correlations of demographic variables. Table 13 shows there were positive correlations between 
students’ age and feedback preference, meaning that the older the students were, the more they 
desired feedback. This finding is consistent with the findings of Chang (2011) and Chang et al. 
(2012). In addition, a positive correlation was also observed between class standing and 
feedback preference: the higher the class standing, the more the students desired feedback. This 
finding is incongruent with the reports by Siew (2003) and Chang et al. (2012). With regard to 
GPA, however, GPA and feedback preference were negatively correlated, meaning that those 
whose GPA was between 1.01 and 2.00 craved feedback more than those whose GPA ranged 
between 2.01 and 3.00. This finding is inconsistent with the reports by Chang (2011) and Chang 
et al. (2012) that the higher the respondents’ GPA, the more eagerly they wished to receive 
feedback. However, further research is needed, as there were far more respondents whose GPA 
ranged between 3.01 and 4.00 (62.4%) than those whose GPA fell between 2.01 and 3.00 
(28.1%) or between 1.01 and 2.00 (2.1%). 
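The associations in Table 13 are ordinary bivariate (Pearson) correlations between coded demographic variables and feedback preference. A minimal sketch with fabricated illustrative codes (not the study's data) shows the computation:

```python
import numpy as np

# Hypothetical ordinal codes for six respondents (illustrative only):
# class standing (1 = freshman ... 4 = senior) and a feedback-preference
# score where a higher value means a stronger desire for feedback.
class_standing = np.array([1, 1, 2, 3, 4, 4])
feedback_pref = np.array([2, 3, 3, 4, 5, 5])

# Pearson r is the off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(class_standing, feedback_pref)[0, 1]
print(round(r, 3))  # a positive r, analogous to the .165 reported in Table 13
```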

In terms of preference for a particular form of feedback, a crosstabs procedure using the 
Chi-square test of independence was conducted on the variables of interest. The results failed to 
reveal a statistically significant difference in terms of gender, χ2(2, 752) = 3.543, p = 0.170 

Table 13.  Feedback correlations among demographic variables. 

                     Gender    Age     Class     GPA     College  Feedback
                                       Standing                   Preference
Gender                1.000   -.088*    -.041    -.033    -.020     -.003
Age                            1.000    .272**   -.050     .008     .147**
Class Standing                          1.000    -.258**  -.044     .165**
GPA                                               1.000   -.005    -.072*
College                                                    1.000    -.004
Feedback Preference                                                 1.000
*. Correlation is significant at the 0.05 level (2-tailed). 
**. Correlation is significant at the 0.01 level (2-tailed). 
 
between handwritten and e-feedback. This means that, regardless of gender, there was no 
preference for handwritten over e-feedback or vice versa. However, the Chi-square test of 
independence indicated a statistically significant difference by age, χ2(5, 752) = 16.792, p = 
0.005: the older the students were, the more they preferred e-feedback. The Chi-square test of 
independence also indicated a statistically significant difference by class standing, χ2(3, 746) = 
21.020, p = 0.000. E-feedback was preferred by 72.3% of seniors and 66.8% of juniors, while for 
freshmen and sophomores the preference for e-feedback was about even. 

A crosstabs procedure using the Chi-square test of independence also revealed a 
statistically significant difference by GPA, χ2(4, 752) = 13.511, p = 0.009. In the 3.01–4.00 GPA 
group, 65.4% preferred e-feedback; in the 2.01–3.00 group, 63.4%; and in the 1.01–2.00 group, 
75.0%. 

There was a statistically significant difference among colleges as well, χ2(5, 751) = 
11.719, p = 0.039. The biggest preference difference was found in the College of Health 
Sciences, with 71.4% of these respondents preferring e-feedback. All other colleges preferred 
e-feedback as well, although the differences were much smaller. 
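The crosstab analyses above follow the standard Chi-square test of independence on observed preference counts. A hypothetical sketch (the counts below are invented for illustration, not the study's raw frequencies):

```python
from scipy.stats import chi2_contingency

# Hypothetical class-standing x preference counts (rows: freshman,
# sophomore, junior, senior; columns: handwritten, e-feedback).
observed = [
    [90, 92],
    [80, 85],
    [60, 121],
    [55, 144],
]
# chi2_contingency compares observed counts with the frequencies
# expected under independence of class standing and preference.
chi2, p, dof, expected = chi2_contingency(observed)
print(round(chi2, 2), dof)  # dof = (4 - 1) * (2 - 1) = 3
```

A small p-value, as in the study's class-standing result, indicates that preference is not distributed independently of the row variable.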

 
J. Educational implications. 
 
The findings offer useful insights into the respondents’ preferred feedback forms and the 
related rationale behind their preferences. As such, it is time for instructors and concerned 
administrators to start contemplating how to compose and deliver feedback, be it handwritten or 
e-feedback, in order to genuinely facilitate student learning. To be more specific, it is time to 
change the ways e-feedback is developed and delivered in order to bolster its quality and 
personal attributes, and to change the ways handwritten feedback is developed and delivered in 
order to better its timeliness, accessibility, and legibility. The need for change also implies that 
the form of feedback may not matter much if the feedback, be it handwritten or electronic, is 
useful and beneficial to student learning and/or addresses all five themes. Therefore, in 
providing feedback, instructors need to “engage with students, consider their responses and offer 
individualized challenges” (Rushoff, 2013). Perhaps basic training or professional development 
for instructors would enable them to establish a better understanding of what kind of e-feedback, 
for example, is needed by students. In addition, the delivery style impacts student learning, as a 
student pointed out: “The few times I have received feedback in these ways [electronically] 
(especially through annotations), I found it [e-feedback] immensely helpful. As such, I think this 
problem is more of one of education on the part of professors; if they are aware of this method of 
giving feedback and how to provide it in this way, then maybe they would be more likely to do 
so. Professor training would be very helpful.” Professional training converging on how to 
provide and deliver feedback, be it handwritten or e-feedback, is of great significance. 
 
K. Future research. 
 
This study demonstrated that both handwritten feedback and e-feedback supporters appeared to 
clearly hold their own positions. To facilitate student learning via assessment feedback, future 
research could usefully examine what specific content of handwritten feedback is desired by 
respondents, and when and how instructors should deliver this feedback to students. The same is 
necessary for the examination of e-feedback supporters’ views. Further research may also focus 
on whether “a hybrid approach” to providing and sending feedback to students is helpful from 
the students’ point of view, e.g., a Tablet PC or iAnnotate PDF on an iPad; these approaches 
would allow instructors to handwrite feedback and deliver it electronically. Future research may 
also address the following question: “Do students prefer feedback provided with the use of 
VoiceThread, the software that allows for recording feedback orally and delivering it 
electronically?” In addition, future research may look into whether feedback provided through 
various electronic means, such as email, websites, Oncourse, or phones, would result in different 
student perceptions or even a different impact on their learning. Interested researchers could also 
delve into the extent to which e-feedback or handwritten feedback could really improve teaching 
and learning. 
 
L. Limitations. 
 
The following limitations were identified. (1) Even though the survey instrument was modified 
and improved from the previous study, 2% of the respondents thought the survey was a bit too 
long; thus, some respondents might not have completed the survey in earnest or honestly 
conveyed their insights. (2) The survey was conducted at the beginning of the spring semester, 
so some students might not have had much experience receiving or reading feedback. (3) Some 
respondents’ perceptions might not fully reflect their views, given that they might not have 
comprehended certain survey questions and/or might have been distracted by their surroundings 
while taking the survey. (4) Lastly, since no clear definition of e-feedback was given, this might 
have borne on the respondents’ answers to some survey questions. Nonetheless, with the large 
number of respondents involved in this study, the findings could still make useful contributions 
to teaching and learning in higher education, generating a stimulating topic in the best interest of 
students. 
 
IV. Conclusion. 
 
Feedback preferences of undergraduate students at a Midwestern university were explored with 
regard to handwritten feedback and e-feedback and the rationale behind these preferences. It was 
found that about two thirds of the respondents preferred e-feedback. However, each group of 
supporters appeared to hold explicitly distinct reasons for their perceptions in terms of the five 
themes: accessibility, timeliness, legibility, quality, and personal. Despite their differing views, 
and irrespective of their distinctive preferences, ratings favoring handwritten feedback were 
stronger than those for e-feedback under some factors of quality and personal. Likewise, there 
were stronger ratings and more respondents, regardless of their distinctive preferences, 
supporting e-feedback for its timeliness, accessibility, and legibility. The justifications that 
backed up their expressed preferences could also explain why there were higher ratings for the 
usefulness of handwritten feedback than for that of e-feedback. In addition, the respondents’ 
perceptions of e-feedback were positively correlated with age and class standing and negatively 
correlated with GPA: those whose GPA was between 1.01 and 2.00 desired feedback more than 
those whose GPA fell between 2.01 and 3.00 or between 3.01 and 4.00. 

The findings indicate that the majority of students long for assistance from instructors to 
better their learning via assessment feedback. It is important for instructors to be mindful, when 
providing feedback on students’ assignments, of what, why, how, and when. Feedback has been 
recognized in the literature as having a significant effect on student learning (Case, 2007; 
Chang, 2011; Ferguson, 2011; Krause & Stark, 2010) and as fundamental in supporting and 
regulating the learning process (Ifenthaler, 2010). It is therefore time for all faculty concerned 
with effective student learning to understand more about the provision of feedback via the 
assessment process. Awarding a single grade is not welcomed by students and is not conducive 
to improving learning. Students do desire to receive feedback (Chang, 2011; Siew, 2003). 
However, the feedback should truly help advance their learning. 
 

References 
 

Ackerman, D. S., & Gross, B. L. (2010). Instructor feedback: How much do students really want? 
Journal of Marketing Education, 32(2), 172-181. doi: 10.1177/0273475309360159 
 
Bai, X., & Smith, M. B. (2010). Promoting hybrid learning through a sharable elearning 
approach. Journal of Asynchronous Learning Networks, 14(3), 13-24.  
 
Bakerson, M. (2009). Persistence and success: A study of cognitive, social, and institutional 
factors related to retention of Kalamazoo Promise recipients at Western Michigan University. 
ProQuest Dissertations & Theses Database: A&I. Western Michigan University, United States. 
 
Ball, E. (2009). A participatory action research study on handwritten annotation feedback and its 
impact on staff and student. Systemic Practice and Action Research, 22, 111-124.  
 
Bridge, P., & Appleyard, R. (2008). A comparison of electronic and paper-based assignment 
submission and feedback. British Journal of Educational Technology, 39(4), 644-650.  
 
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 
31, 219-233.  
 
Case, S. (2007). Reconfiguring and realigning the assessment feedback processes for an 
undergraduate criminology degree. Assessment & Evaluation in Higher Education, 32(3), 285-
299. 
 

Chang, N. (2011). Pre-service teachers’ views: How did e-feedback through assessment facilitate 
their learning? Journal of Scholarship of Teaching and Learning, 11(2), 16-33.  
 
Chang, N., Watson, B., Bakerson, M., Williams, E., McGoron, F. , & Spitzer, B. (2012). 
Electronic feedback or handwritten feedback: What do undergraduate students prefer and why? 
Journal of Scholarship of Teaching with Technology, 1(1), 1-23.  
 
Charmaz, K. (2000). Grounded theory: Objectivist and constructivist methods (2nd ed.). London: 
Sage. 
 
Creswell, J. W. (2002). Research design. London: Sage. 
 
Dennen, V. P., Darabi, A., & Smith, L. J. (2007). Instructor-learner interaction in online courses: 
The relative perceived importance of particular instructor actions on performance and 
satisfaction. Distance Education, 28(1), 65-79.  
 
Denton, P., Madden, J., Roberts, M., & Rowe, P. (2008). Students' response to traditional and 
computer-assisted formative feedback: A comparative case study. British Journal of Educational 
Technology, 39(3), 486-500. doi: 10.1111/j.1467-8535.2007.00745.x 
 
Di Costa, N. (2010). Feedback on Feedback: Student and academic perceptions, expectations 
and practices within an undergraduate Pharmacy course. Paper presented at the ATN 
Assessment Conference 2010 University of Technology Sydney.  
 
Ferguson, P. (2011). Student perceptions of quality feedback in teacher education. Assessment & 
Evaluation in Higher Education, 36(1), 51-62.  
 
Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the 
role of assessment feedback in student learning. Studies in Higher Education, 27, 53-64.  
 
Hounsell, D. (2003). Student feedback, learning, and development. Berkshire, UK: SRHE & 
Open University Press. 
 
Hyland, P. (2000). Learning from feedback on assessment. Manchester, UK: Manchester 
University Press. 
 
Ifenthaler, D. (2010). Bridging the gap between expert-novice differences: The model-based 
feedback approach. Journal of Research on Technology in Education, 43(2), 103-117.  
 
Krause, U., & Stark, R. (2010). Reflection in example- and problem-based learning: Effects of 
reflection prompts, feedback and cooperative learning. Evaluation & Research in Education, 
23(4), 255-272.  
 
Mann, S. (2001). Alternative perspectives on the student experience: Alienation and engagement. 
Studies in Higher Education 26(1), 7-20.  
 
Matthews, K., Janicki, T., He, L., & Patterson, L. (2012). Implementation of an automated 
grading system with an adaptive learning component to affect student feedback and response 
time. Journal of Information Systems Education, 23(1), 71-83.  
 
Mertler, C. A., & Vannatta, R. A. (2005). Advanced and multivariate statistical methods (3rd 
ed.). Glendale, CA: Pyrczak Publishing. 
 
Morrissey, G., Coolican, M., & Wolfgang, D. (2011). An intersection of interests: The millennial 
generation and an alternative world language teacher education program. Paper presented at the 
American Educational Research Association Annual Conference, New Orleans, LA.  
 
National Union of Students. (2008). Student experience report. Retrieved from 
http://aces.shu.ac.uk/employability/resources/NUSStudentExperienceReport.pdf 
 
Parkin, H., Hepplestone, S., Holden, G., Irwin, B., & Thorpe, L. (2012). A role for technology in 
enhancing students’ engagement with feedback. Assessment & Evaluation in Higher Education, 
37(8), 963-973.  
 
Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is 
the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289. doi: 
10.1080/02602930903541007 
 
Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: RoutledgeFalmer. 
 
Rosenberg, K. M. (2007). The excel statistics companion. Belmont, CA: Thomson Higher 
Education. 
 
Rowe, A. D., & Wood, L. N. (2008). Student perceptions and preferences for feedback. Asian 
Social Science, 4(3), 78-88.  
 
Rushkoff, D. (2013, January 15). Online courses need human element to educate. Retrieved from 
http://www.cnn.com/2013/01/15/opinion/rushkoff-moocs/index.html 
 
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. 
Assessment & Evaluation in Higher Education, 35(5), 535-550. doi: 
10.1080/02602930903541015 
 
Scott, G. (2006). Accessing the student voice: A Higher Education Innovation Program project. 
Canberra, Australia: Department of Education, Science and Training. 
 
Siew, P. F. (2003). Flexible on-line assessment and feedback for teaching linear algebra. 
International Journal of Mathematical Education in Science & Technology, 34(1), 43-52.  
 
Stevens, J. P. (2007). Applied multivariate statistics for the social sciences (5th ed.). New 
York, NY: Routledge. 
 
Yang, Y., & Durrington, V. (2010). Investigation of students' perceptions of online course 
quality. International Journal on E-Learning, 9(3), 341-361.  