

Journal of Education, 2021 

Issue 83, http://journals.ukzn.ac.za/index.php/joe                    doi: http://dx.doi.org/10.17159/2520-9868/i83a05 

 

 

Online ISSN 2520-9868  Print ISSN 0259-479X 

 

 

Connecting assessment and feedback: A customised and 

personalised experience for knowledge-building  

 

Pryah Mahabeer  

Curriculum Studies, School of Education, University of KwaZulu-Natal, Durban, South Africa  

mahabeerp3@ukzn.ac.za 

https://orcid.org/0000-0003-4576-690X 

 

Fathima Firoz Akoo  

Student in Curriculum Studies, School of Education, University of KwaZulu-Natal, Durban, South Africa  

Fatzakoo@gmail.com 

 

(Received: 30 September 2020; accepted: 12 May 2021) 

 

Abstract 

Formative assessment coupled with effectual feedback is instrumental in enhancing the student learning experience and contributing to knowledge-building. However, feedback does not always translate into the desired outcomes for the students receiving it, and this compromises educational experiences and goals. In this small-scale

empirical study, we worked with five postgraduate Honours students at a university in South Africa to explore 

their experiences of feedback on formative assessments in the learning space. We focused in a nuanced way on 

innovative opportunities and practices of feedback in the digital age. The data collected from the semi-structured 

interviews revealed that participants understood the value of quality formative assessment and feedback. Most 

participants reacted negatively to assessment grids and feedback received from lecturers. Some were 

unaccustomed to digital formative assessment and feedback as a developmental tool. They recommended a 

discipline-specific blended feedback approach that incorporates face-to-face feedback to make the digital 

feedback provided to them more meaningful. This would provide useful feedback that would create a 

customised and personalised learning experience for students in collaborative knowledge-building. 

 

Keywords: formative assessment (FA), feedback, knowledge-building, students’ experiences 

 

 

Introduction 

In South Africa, the specific context of this study, and globally, assessment and feedback are 

vital components if successful teaching and learning is to take place in higher education

(Mulliner & Tucker, 2017; Munro et al., 2018). When lecturers do not provide the necessary feedback, students fail to improve in future assessment tasks, and it




 

also compromises the core purpose of assessment and feedback which is to provide helpful 

advice aimed at improving student learning (Ajjawi & Boud, 2017). The way in which 

students understand, interpret, and implement feedback makes a difference to how they learn.  

Koen et al. (2012) have emphasised the distinct link between assessment and feedback. 

Valuable feedback helps students to engage deeply with knowledge, but many consider 

receiving feedback on their assessments to be one of their least fulfilling experiences at 

university (West & Turner, 2016). Many students have reported feeling disappointed with the 

feedback they receive, yet academics often feel that they provide practical and educational

feedback (Mulliner & Tucker, 2017). Ali et al. (2018) indicated that students are 

predominantly discontented with the lack of dialogue in the assessment and feedback process 

and the lack of personalised and customised examples of how they could improve their work. 

Therefore, they suggested that strategies be put in place to ensure that reflection occurs after 

students receive feedback to improve their engagement with it.  

Feedback is not limited to lecturers offering it face-to-face but may be digital (online) as well. 

The university in this study has acknowledged the advancements in technology and 

incorporates them into the teaching and learning context through online teaching, assessment, 

and feedback on a digital learning portal to keep in line with national and global trends in 

education. The use of digital resources, assessment, and feedback is expected in the teaching 

of postgraduate modules. Digital feedback is more trusted by students and motivates them to 

perform better because it focuses on the task and not on the individual (Lipnevich & Smith, 

2009). However, this leads to a loss of personal interaction and meaningful exchange 

between student and lecturer (Bailey & Garner, 2010; Lipnevich & Smith, 2009).  

This study could help determine the effectiveness of feedback exchanges taking place online 

instead of face-to-face and whether the blended feedback approach may indeed be a more 

useful way of providing effective feedback. In this study, we report on an exploration of 

postgraduate (Honours) students’ experiences in relation to lecturer feedback on formative 

assessments at a university in South Africa.  

Meanings of assessment and feedback 

“The usefulness and effectiveness of assessment depends on the quality of the feedback” 

(Glazer, 2014, p. 277). An essential aspect of this study draws on the meaning of assessment 

and feedback, on how formative assessment and feedback remain interconnected to 

knowledge-building, and on how this relationship contributes to students’ learning 

experience. For many individuals, the notion of assessment has many meanings but, generally 

speaking, assessment is a process that assists lecturers to understand students’ achievements 

and the level of their performance. This is done by using different activities and instruments 

and is meant to help lecturers report on their students’ accomplishments and, in turn, to help 

them develop their teaching (Black & Wiliam, 2004; Crooks, 2001).  

There are two main categories of assessment—formative and summative. Summative 

assessments are fixed at the end of a learning period (at the end of a semester or module) and 




they require students to display the knowledge and skills learnt and to show whether their long-term learning goals have been achieved, but they may offer limited or no feedback (Glazer, 2014). Formative assessments occur frequently throughout the learning process and aim to monitor

students’ progress and improve lecturer instructional strategies, thus improving teaching and 

learning and resulting in a high-quality educational experience for students (Black & Wiliam, 

2004; Glazer, 2014; Kinne et al., 2014). Formative assessment uses immediate feedback that 

benefits both the lecturer and the student, facilitates program improvement, and provides 

possible corrective measures at each stage of the teaching and learning process (Bennett, 

2011; Glazer, 2014). While summative assessment evaluates students’ learning to validate a 

grade (Glazer, 2014), formative assessment is rooted in the lecturer’s pedagogical knowledge 

and operates in a much broader educational context (Bennett, 2011; Byrd, 2013). In this 

study, we focused on formative assessment. 

Reacting to feedback  

Feedback is a crucial communication tool between students and lecturers since it regulates 

the teaching and learning process (Lipnevich & Smith, 2009) and it is an essential function of 

knowledge acquisition that improves performance and guides students towards their learning 

goals (Bailey & Garner, 2010; Dawson, 2017), along with directing them towards the 

attainment of future goals.  

Ideally, feedback is an ongoing student-lecturer interaction. Black and Wiliam (2004) have

asserted that the two critical role-players for realising practical assessment and feedback are 

the assessor and the assessed. Bailey and Garner (2010, p. 188) described feedback as an

interplay between and among lecturers’ “pedagogical goals, students’ learning needs, and 

institutional and governmental education policies.” In the assessment and feedback process, 

students should be mindful of how this assists them to achieve their learning goals and 

improve their performance; in doing so, they need to become more open to accepting and 

engaging with the feedback received from lecturers (Black & Wiliam, 2004; Higgins et al., 

2002). It is not a given that students will make sense of feedback or respond positively to it or 

use it to enhance their learning (Koen et al., 2012). Grades or numerical scores do not serve 

as efficient feedback mechanisms because they do not provide students with the tools to 

enhance learning. Descriptive feedback leads to the highest degree of performance 

(Lipnevich & Smith, 2009). These scholars have noted that comments made by lecturers 

allow students to focus on what is relevant and can work to stimulate their mental 

performance, while giving only grades can inhibit students’ cognitive processes and slow 

down their learning. 

Receiving assessment feedback is a sensitive and complex emotional process whereby 

students’ emotions influence how they receive, process, and action feedback (Bowker, 2018). 

The assessment and feedback process is an emotional one: students invest their time and energy in the assessment activities, and lecturers' feedback responses can have a negative effect on their learning and performance (Carless, 2006). For Bailey and Garner (2010), some

students do not understand the purpose or usefulness of feedback, and this leads to 

dissatisfaction with it. Studies across 14 universities in Australia found that over 90% of




 

students were dissatisfied with the feedback they received and experienced general 

inconsistencies in it (Scott & Morrison, 2006). It was of poor quality, there was not enough of 

it, and it followed no precise requirements. Carless (2006) mentioned that assessment 

dialogues between lecturers and students are key to demystifying and strengthening the

assessment and feedback process, and consequently reducing student discontent and poor 

performance.  

There is a considerable discrepancy in the ways in which students and lecturers perceive the 

feedback the latter give to the former (Bailey & Garner, 2010; Scott & Morrison, 2006). On 

the one hand, students might ignore the feedback because they do not understand it. If they 

perceive feedback to be ineffective, they might not do anything to benefit from it. On the 

other hand, some students spend little time reading assessment feedback and this raises 

questions about whether they understand the purpose and usefulness of it and about how they 

should use it to improve current and future assessments (Crisp, 2007). Despite the problems 

students face in interpreting or implementing feedback, some understand and acknowledge its 

potential value (Higgins et al., 2002). 

Feedback is helpful to students if it is made up of high-quality comments that are usable, timely, frequent, relevant, detailed, personalised to the individual student's work, and aimed at helping them avoid repeating mistakes (Dawson et al., 2019; Glazer, 2014). Generally, students prefer

individual verbal and written feedback and recognise it as being the most useful (Mulliner & 

Tucker, 2017).  

Types of feedback 

Pereira et al. (2016) have differentiated between feedback (what happened at the point of 

assessment) and feed forward (aimed at guiding students towards achieving their intended 

learning goals and at encouraging meaningful and quality learning). They add that feedback 

is internal or external and shapes and transforms students’ attitudes to their learning: internal 

feedback provides students with knowledge about the “quality of the cognitive process”, and 

external feedback may assist students to reflect on their learning (Pereira et al., 2016, p. 9). 

Their findings show feedback to be more effectual during the performance stage.  

Lecturers use the assessment rubric as a feedback tool to assess students’ work. It usually 

consists of evaluative criteria, an explanation of the quality necessary to meet the 

requirements, and a scoring strategy. Koen et al. (2012) have explained that while analytical 

rubrics can be valuable tools for providing feedback, they will be effective only if they assist 

students to clearly identify the expectations of the lecturer and the specific requirements of 

the task. Using rubrics improves the quality of teaching in providing clear feedback and 

identifying possible areas for improvement (Kinne et al., 2014). However, Kinne et al. (2014) 

have argued that if rubrics are too specific, students might use them only as a way of 

achieving a successful grade and so the degree to which a rubric improves teaching and 

learning is dependent on the quality of the rubric and the manner in which it is effectively 

used. 




In this study, we underline the relationship between assessment and feedback and the use of 

the assessment rubric grid as a feedback tool in the teaching and learning process in 

postgraduate modules. Our study may assist lecturers, students, researchers, and curriculum 

developers to refocus their energies towards understanding the interrelationship and 

significance of assessment and feedback in promoting knowledge-building.  

The relationship between assessment and feedback  

Feedback and assessment are critical to the learning experience and behaviour of students at 

university because of their impact on what students learn, how they learn, and how they 

perceive the learning environment (Adams & McNab, 2013, p. 36). Assessment can influence

students’ learning if they understand what is essential to evaluation (Pereira et al., 2016). 

New and diverse trends in assessment indicate that it should be based on the specific student, 

and on continual feedback that leads to the student’s self-regulation of learning. This 

motivates students to regulate and improve their work by using lecturer feedback to help 

achieve their learning goals (Pereira et al., 2016). 

While assessment is useful in promoting learning, motivating students, improving and 

facilitating reflection, and identifying errors, inadequate feedback can confuse and 

demotivate them (Higgins et al., 2002). Ideally, it should build confidence and encourage 

students to perform better. As mentioned earlier, if feedback is insufficient, students could 

ignore it (Ferguson, 2011). A review of assessment feedback, cited in Pereira et al. (2016), showed that feedback is not always used by students (Li & De Luca, 2014) and that, even if feedback is effectual, students may not always appreciate it (Blair & McGinty, 2013).

“The amount of detail of feedback; the usefulness of feedback; the extent to which students 

are only interested in grades; and the fairness of marking procedures” (Carless, 2006, p. 230) 

are key points to be considered in improving assessment and feedback practices. For students 

to benefit from feedback, it should connect to assessment tasks that encourage knowledge-

building (Koen et al., 2012). To be helpful there must be a clear link between and among the 

elements of the feedback itself, the assessment activities and guidelines, and the assessment 

frameworks and criteria (Ferguson, 2011). Students should be able to see the connection 

between what they have already achieved and their desired performance. In this way, 

assessment feedback provides the relevant information needed by students to maximise their 

educational efforts (Ferguson, 2011).  

The interplay between assessment, feedback, and 

knowledge-building  

Knowledge-building is not simply about knowing; it is about students gaining a deep

understanding as and when they participate in the continual creation of knowledge through 

the authentic learning processes of sharing, improving, and building on existing knowledge 

and ideas (Scardamalia, 2002; Scardamalia & Bereiter, 2006). Students feel free to “reveal 

ignorance, voice half-baked notions, give and receive criticism” (Scardamalia, 2002, p. 9). 




 

Knowledge-building is similar to a socio-constructivist approach to theory, pedagogy, and 

technology since it focuses on students’ shared responsibility for knowledge enhancement 

and collaborative learning development (Bereiter, 2002; Scardamalia & Bereiter, 2006, 2014; 

Yang et al., 2020; Zhu & Kim, 2017). Students' knowledge-building inquiry can be supported by Knowledge Forum (KF), a "computer-supported collaborative discourse environment" designed to improve shared ideas (Yang et al., 2020, p. 1247).

Scardamalia (2002) set out twelve closely interrelated principles that depict knowledge-building pedagogy and that explain how cognitive, social, and emotional constructs are interconnected (see Yang et al., 2020; Zhu & Kim, 2017). Notably, the emotional aspect of knowledge-building is rarely explored (Zhu & Kim, 2017). Three of these principles are abridged below (Yang et al., 2020). Epistemic agency, emphasised in knowledge-building, involves high-level agency: students take the initiative in negotiating their own and others' ideas and take ownership of goal setting, monitoring, and evaluation in a socially shared context. Collective responsibility means that community knowledge is valued more than individual performance in the contribution and advancement of ideas. Last, all ideas are improvable: in knowledge-building, learning is progressive, not a predetermined outcome (Yang et al., 2020).

In knowledge-building, assessment is the process of advancing knowledge, helping lecturers 

and students understand the knowledge acquired, and identifying any problems as the process 

of teaching and learning continues (Zhu & Kim, 2017). Assessment is an essential element of 

knowledge-building and embedded and transformative assessment supports students’ 

realisation and cultivation of their metacognitive skills (Scardamalia, 2002). In one of the 

few knowledge-building studies that examine both online and offline dialogue, Yang et al. (2020) advanced notions of progress monitoring, interrelated with the principle of concurrent, embedded, and transformative assessment.

While formative assessment focuses on bridging the gap between current and expected 

performance, reflective assessment in knowledge-building centres more on nurturing student 

agency and on ongoing learning improvement (Lei & Chan, 2018; Yang et al., 2020), as 

students take an active role in their thinking about assessment and feedback. Reflective 

assessment has been shown to uphold metacognition, and to expand understanding and ideas 

in a collective social context where students interact and develop social and collaborative 

capabilities (Yang et al., 2020; Zhu & Kim, 2017). Through reflective assessment and the 

consistent notion of formative assessment and feedback, students can use data continually to 

guide their collaborative reflection and collective growth towards developing knowledge-

building (Yang et al., 2020). 

Research methodology 

Social research from an interpretivist perspective aims to understand the meaning behind 

human behaviour and how people make sense of their world (Bertram & Christiansen, 2014). 

Interactions between the researcher and respondents within their natural setting gave rise to 




the detailed descriptions (Remler & Van Ryzin, 2014). Although small in scale, our qualitative approach, following Scott and Morrison (2006), allowed us to gather in-depth information from participants by probing beneath the surface and by uncovering the deeper meanings and understandings of feedback, within an educational and social context, that lay behind students' personal experiences of, opinions on, and attitudes to feedback. In this empirical study, a qualitative approach within the interpretivist paradigm

allowed for this small-scale, in-depth exploration of postgraduate Honours students’ lived 

experiences, opinions, and reasoning related to the lecturer feedback they received on 

formative assessments.  

Context of the study 

The study was located in the school of education at a tertiary institution in KwaZulu-Natal, 

South Africa. The participants were students of various ages and socio-economic 

backgrounds.  

Participants 

The sample size was limited to five participants from a single university in South Africa. The 

students purposively selected were BEd (Bachelor of Education) postgraduate Honours 

students in Curriculum Studies. At the time of the study, these students had completed two compulsory modules of the academic year and could therefore share their experiences of feedback on formative assessments. Convenience sampling (Padgett, 1998) allowed us easy access to the

participants. As with the findings of most qualitative case study research, this study is limited 

in generalisability (see Cohen et al., 2007). There is potential for an expanded version of this 

study to be conducted on a larger scale across other disciplines and in different universities in 

South Africa. 

Data collection  

Face-to-face semi-structured interviews allowed us to collect empirical data for this study. 

We asked open-ended and probing questions to gain insight into the participants’ 

experiences, perceptions, and understandings as they told their stories (see Bertram & 

Christiansen, 2014; Padgett, 1998; Remler & Van Ryzin, 2014). “Conducting a good 

interview is, in some ways, like participating in a good conversation: it involves listening 

intently and asking questions that focus on concrete examples and feelings rather than on 

abstract speculations" (Eisner, 1998, p. 183), which are unlikely to provide meaningful

information on the students’ perceptions and understandings (Remler & Van Ryzin, 2014). 

Our interview design included pre-determined questions that were given to the participants 

beforehand. The interviews were conducted by one of the researchers, a postgraduate 

Honours student in Curriculum Studies at the time of the study. The interviews lasted fifteen 

to thirty minutes, were audio-recorded, and were transcribed verbatim.  

We used thematic data analysis to draw out the crucial implicit and explicit ideas in the data, as advocated by Remler and Van Ryzin (2014). We focused on students' experiences and their




 

perceptions of feedback following formative assessments, to better understand the data and to draw meaningful conclusions, as suggested by Kashif et al. (2014). Following Lincoln and Guba (1985) and Creswell (2014), we undertook many readings of the transcribed data to

better understand it so as to draw out emerging themes. We categorised, compared, 

summarised, and interpreted the data to provide a thick description of the research findings 

(see Bertram & Christiansen, 2014).  

As Lincoln and Guba (1985) have noted, the trustworthiness of qualitative research 

guarantees that the findings are credible, dependable, confirmable, and transferable. Using 

different sources of data ensured credibility as suggested by Remler and Van Ryzin (2014). 

Member checking entailed providing the participants with the written transcripts and initial 

analysis of the data for verification (see Bertram & Christiansen, 2014; Remler & Van

Ryzin, 2014). This small-scale qualitative study did not aim for generalisability, but the 

findings could be compared to others obtained in similar contexts as Lincoln and Guba 

(1985) and Creswell (2014) have suggested. Ethical considerations were adhered to in 

conducting this study. Participants were aware of the nature of the research and their rights as 

participants, and confidentiality and anonymity were guaranteed through our use of pseudonyms.

A brief biography of the participants  

In line with the ethical principles of maintaining anonymity, the participants are referred to 

by the pseudonyms of Andrew, Letty, Noma, Pretty, and Yolanda.  

Andrew is a 29-year-old African man with four years of teaching experience. He is a second 

language English speaker, is currently not employed, and is studying full time. 

Letty is a 24-year-old African woman with no teaching experience. She is currently studying 

full-time and is a second language English speaker.  

Noma is a 47-year-old African woman with 13 years of teaching experience. She is a second 

language English speaker and has already completed her Honours degree in Social 

Development. She is currently employed as a Grade 4 teacher and is studying part-time.  

Pretty is a 22-year-old African woman who completed her BEd summa cum laude. She is a 

full-time student with no teaching experience and is a second language English speaker.  

Yolanda is a 23-year-old Indian woman who has two years of teaching experience. She is a 

first language English speaker, is currently employed, and is studying part-time.  

Discussion of findings 

The emerging themes discussed in this section are:

• Feedback as an “eye-opening” experience to improve knowledge-building 




• Students’ reflections on the use and understanding of the assessment grid as a tool for 

feedback  

• The contentious nature of receiving and interpreting feedback: language and 

technology as barriers to useful feedback 

• Students’ reactions to feedback received  

• Feedback as a useful educational tool to stimulate the student learning experience  

• Adopting a blended approach to feedback: face-to-face and digital (online) written 

feedback  

Participants’ responses are presented verbatim. We analysed the data thematically to answer 

the following research question: What are postgraduate Honours students’ experiences of 

receiving feedback on formative assessments?  

These students described a diversity of experiences and sentiments on the use of formative 

assessments and the feedback they received. Most of the participants found that the use of 

formative assessment is "a good thing." Andrew said, "The way they were helping us . . . I

saw it as a way of developing us.” The variety of assessment tasks was appreciated by 

students. Andrew found these “balanced because they were not only giving you one thing, 

and then after that you go to the exam.”  

Participants also appreciated formative assessments because, as Noma said, “You will be 

sitting down in your own time doing an assignment” instead of structured examinations. 

Students did, however, experience some difficulty in becoming accustomed to the use of 

digital formative assessments and feedback. Pretty indicated that “it was a bit tricky . . . but it 

was an experience of learning . . . this was a developmental project because our lecturer with 

the changes being made, you’ll get this mark.” One participant, Yolanda, preferred 

examinations to formative assessments but, based on the above responses, the developmental 

nature of formative assessments and feedback was seen to be beneficial for most of the

students. All participants, except Yolanda, found assessment to be a developmental, balanced, 

fair approach since it allowed for various learning styles and allowed the students to work at 

home.  

Studies like that of Watling and Ginsburg (2019) have suggested a shift towards feedback being more developmental, using a coaching approach in higher education based on a distinction drawn between the role of the lecturer as an assessor and as a coach. For

coaching to be meaningful, students must trust the lecturer and display their weaknesses in 

order to gain appropriate feedback to overcome their challenges. The coaching approach is 

practical if both students and lecturers understand the developmental nature and intent of 

feedback interactions and if they create an environment of mutual trust (Watling & Ginsburg, 2019).

Feedback as an “eye-opening” experience to improve knowledge-building 

Students experienced receiving both constructive and adverse feedback from lecturers about 

formative assessments. Focusing on constructive aspects of feedback, Andrew described 




 

receiving feedback as “eye-opening.” He stated, “It’s not like I understood everything but 

some of the questions or feedback I understood that this is what I need to improve, and I try 

to work hard.”  

For Pretty, “How you receive the feedback” was critical, “because it wasn’t ‘Write this! Do 

this!’ But it was just like, ‘It would sound better if it were this . . .’ So that person is 

acknowledging where you are.” According to her, constructive feedback was a combination 

of praise and suggestions for improvement that led to students reflecting on what was lacking 

in their responses and implementing the feedback received to improve their work. Students 

emphasised that lecturers must be conscious of how they provide feedback to students and 

must acknowledge the students' efforts so that they are more receptive to the

feedback. Similarly, studies by Higgins et al. (2002) and Ferguson (2011) concluded that 

lecturers should aim to provide feedback that is timely, personalised, clear, and constructive, with

a clear focus on applauding students’ successes and steering them towards future 

development. Importantly, for feedback to be helpful, the student must understand the 

language used by the lecturers. 

Students’ reflections on the use and understanding of the assessment grid as a 

tool for feedback 

To better understand students’ experiences of assessment and feedback, it was necessary to 

understand their experience and perceptions of formative assessments and the use of 

assessment grids as a feedback tool since these were the methods used by lecturers in the two 

modules referred to in this study. The participants responded to the use of assessment grids as 

a tool of feedback for formative assessments. These grids appeared to be generic for each 

type of assessment activity; the written assessments had the same assessment grid across both 

modules and the oral seminar presentations had a different one across the two modules. Two 

participants did not understand the assessment grid in terms of criteria and the calculation of 

the ranges. Noma expressed that she “didn’t understand it because it didn’t help [her] and 

asked, ‘What was the use of it?’” Letty said, “I didn’t understand it because you didn’t get a 

precise mark. It was a range.” Pretty indicated that she understood the assessment grid as a 

point of reference. She said, “Every time I was writing a paragraph, I would just go back in 

terms of, am I still on the right track, am I still within the theme, within the topic . . . you had 

a sense of what [percentage] you could get.” However, Andrew indicated that although “you 

can understand that this is what the assessment grid means . . . but now when it comes to 

writing, you find that you are still struggling.”  

While two students claimed to have understood the concept of the assessment grid, the other 

three did not understand how to interpret or apply it to pass their assessments. Feedback took 

the form of tracked changes on submissions and a letter and number on the assessment grid. 

The grid was, therefore, not helpful to most of the students. Perhaps it was not correctly used 

or adapted by the lecturer or explained to the students adequately, and therefore did not meet 

their needs, according to Letty, Andrew, and Noma. Dawson (2017) underlined that 

assessment rubrics become specific only through their design and through how they are adjusted to suit the needs of the lecturer and the students. The assessment grids used as feedback tools



Mahabeer & Akoo: Connecting assessment and feedback    97 

 

     
  

across the modules were generic, not adapted to the needs of each assessment, and therefore ineffective; the students did not understand their use. Standardised feedback tools such as structured feedback forms and assessment rubrics must therefore be transparent, fair, and consistent across departments. However, these tools may lead

to a loss of personal interaction and meaningful exchange between student and lecturer 

because if students do not understand and engage with the feedback received from their 

lecturer, it is of no benefit to them (Lipnevich & Smith, 2009).  

Only one participant, Pretty, had a positive experience with the assessment grid and found 

that “it was effective, it provided the percentage in . . . if you write like this, you will get this. 

So, you had a sense of what you could get and what you could not get.” However, most of the 

participants found the grid ineffective as a feedback tool and of no benefit. Andrew indicated 

that “actually it’s only guiding . . . if you are still struggling in writing, it may not be 

effective.” Noma suggested that the grid confused her when she said, “We had some 

questions, but we didn’t have a chance to ask the questions.” Letty could not understand her 

progress or how the grid worked. She said, “I felt that it wasn’t understandable for us 

[because] now the letter will represent something else, and the mark would be something . . . 

For me, it didn’t tie in.”  

The students’ sentiments expressed above are supported by Dawson’s (2017) point that while 

the rubric can be used as a form of feedback on assessment, it should be accompanied by 

feedback information and used as a stimulus for further discussion between lecturer and 

student. In this case, students said that they received no additional feedback information and 

thus found the assessment grid ineffective as a feedback tool since it did not provide them 

with any meaningful information. Moreover, students found the feedback “confusing,” according to Letty, since no further discussion took place to provide clarity. Likewise, Koen

et al. (2012) have argued that grades, numerical scores, or ranges do not serve as effective feedback tools because they do not provide students with any means to enhance learning or any detailed constructive feedback. Nevertheless, studies suggest that using rubrics benefits both the lecturer and the students since rubrics ensure objectivity and fairness in grading, and, when shared with students, assist them to understand the lecturer’s expectations and

help them revise and improve their work before submitting it for grading (Kinne et al., 2014). 

Rubrics also provide students with a point of reference when collaborating with lecturers 

thereby supporting and improving teaching and learning along with knowledge-building 

(Kinne et al., 2014). 

The contentious nature of receiving and interpreting feedback: Language and 

technology as barriers to useful feedback 

The participants indicated that feedback is not always correctly understood and could be 

misinterpreted by students or entirely ignored by them. Andrew explained, “I understood but 

not like everything because . . . they are asking you ‘so what?’ . . . sometimes you don’t know 

how you can even answer that kind of question.” Letty agreed when she said, “You get more 

confused.” She thought to herself, “You didn’t explain to me what you wanted. I’ve tried to 

do this according to my own understanding and the points that you are giving me in the 



98    Journal of Education, No. 83, 2021 

 

feedback they are very vague.” Studies suggest that students sometimes ignore feedback 

(Ferguson, 2011) because they are interested only in their mark, not in improving it. Mulliner and Tucker (2017) refuted the idea that students are concerned only about their grades; they

argued that students read, think about, and action feedback received, and lecturers should not 

think that they do not. 

Consistent with the findings of Dawson et al. (2019), students’ responses in this study 

revealed how they became confused when feedback was vague and comments did not

provide meaningful information about what needed to be corrected. Effective feedback 

positively influences students’ actions, involves a dialogue between lecturer and student, and 

is cyclical, so it must be meaningful and correctly understood before students can implement it (Ajjawi & Boud, 2017). Feedback that is not actionable by students, through the

lack of resources or the misunderstanding of it, is ineffective. Feedback must be timely, 

specific, actionable, and task-oriented, constructive in guiding students where they went 

wrong and instructive on how to improve, and it must come from a credible and trusted 

source (Ajjawi & Boud, 2017). Lecturers should avoid making direct or indirect comparisons 

to other students when giving feedback to a particular student (Shute, 2008).  

Many students at this particular university are second-language English speakers, and they

indicated that they encounter difficulties understanding the terminology or intended meaning 

of feedback. Letty said, “I’ve read these comments; I’ve tried to understand them, but I’m not 

getting anything out of them.” Students do not always address these misunderstandings and 

this causes a barrier to useful feedback, as can be seen from Noma’s statement that “in fact, I 

didn’t ask any help from the lecturer. I didn’t.” Some students preferred to seek help from 

other lecturers instead. As indicated by Andrew, “What I didn’t understand, I went to other 

lecturers to find out what could I do with this. So they helped me.”  

Letty revealed that she felt “confused” and added, “So you go to this lecturer, and they say 

‘No, read the comments that I’ve given you’ [but] I’ve read these comments. I’ve tried to 

understand them, but I’m not getting anything out of them, so that’s how I felt.” Letty reacted 

by seeking assistance from her peers. Receiving quality feedback from peers may be as 

advantageous as feedback from lecturers (Kinne et al., 2014). As Letty explained, “I couldn’t 

edit my work from the lecturer’s feedback but from relying on other people’s feedback 

explaining to me in the class.”  

Importantly, feedback should be clear and specific to avoid confusion and frustration among 

students (Shute, 2008). The language used by the lecturer should be understandable to the 

student and should focus on the assessment activity, not on the student; feedback should be expounded in manageable and comprehensible units to promote learning (Higgins et

al., 2002; Ferguson, 2011; Shute, 2008). Dawson et al. (2019) argued that feedback should be 

mediated by considering what students think and do about the feedback they receive on their 

work and how this relates to measurable improvements. Ultimately, feedback should consist 

of constructive criticism, and the lecturer must use productive and supportive language to 

ensure that the student feels motivated (Fong et al., 2018).  




In addition to language barriers, some students are not entirely computer literate and need to 

become accustomed to submitting assessments and accepting feedback digitally. “It was 

really hard because I’m not really a computer-driven person that much. Now, I had to learn to 

adjust to having feedback and then make sense of feedback on [my] own,” said Pretty. 

Consequently, she was concerned about “misinterpret[ing] what he meant or what she 

meant.” Sociocultural barriers contributed to some students experiencing technological challenges such as those Pretty outlined: they found it difficult to read, understand, and interpret the digitally received feedback, and so had trouble responding and taking corrective action.

Studies (see Yang et al., 2020) suggest that low-achieving and at-risk students are often from 

varied indigenous and low socioeconomic backgrounds and have scarcer cognitive, 

metacognitive, and social skills required for educational success. These students experience lower motivation and self-efficacy, and greater difficulty in developing collaborative and productive higher-order proficiencies, than do high-achieving students.

Understanding and using the context of the learning environment is critical to providing 

valuable feedback. Students’ socio-economic background affects their ability to understand 

and interpret feedback, and lecturers may be unaware of the contextual, emotional, and 

psychological influences that feedback has on students and assume that correcting their work 

without further elaboration is sufficient. Watling and Ginsburg (2019) observed that useful 

feedback depends on many factors, including building a climate of trust and a clear channel 

of communication and dialogue between student and lecturer, the social and cultural context, 

the learning environment, and the lecturer’s ability to observe students. The absence of these 

factors might lead students to perceive feedback as ineffective, resulting in their seeking 

assistance from another lecturer as did Andrew, their peers as did Letty, or completely 

ignoring the feedback. Lecturers may also perceive their feedback to be more valuable and 

practical than students perceive it to be since many lecturers “perceive feedback as corrective 

information transmission, and ignore the complexities of a relationship, context, materials, 

students and the feedback process” (Ajjawi & Boud, 2017, p. 4).  

Students’ reactions to feedback received 

Students reacted in different ways to the feedback they received on formative assessments. In this section, we discuss the students’ responses to constructive and

adverse feedback. 

Constructive feedback received  

Of the five participants, only Andrew and Pretty experienced and reacted to some form of 

constructive feedback. Andrew noted, “It was not easy, but I try my best to respond to the 

feedback, and the percentage was improved . . . I did face-to-face feedback . . . and I was able 

to ask some of the things I did not understand, and it was explained clearly.” Andrew’s use of 

feedback and the subsequent improvement in his mark is supported by Ferguson (2011) and 

Crisp (2007) who have explained that feedback can be beneficial only if the students take the 

time to read and act upon it.  




Pretty seemed to have experienced and reacted to feedback positively, although it was different for the rest of the participants. She felt her work was acknowledged and appreciated,

and the feedback guided her towards achieving her learning goals. “When someone says, ‘I 

get what you’re saying, but it would have been better if it was like this.’. . . So, I saw not only 

my mark improving but also my writing skills, my reading skills, and even my 

comprehension skills.” Similarly, Ferguson (2011) described students as preferring brief, 

concise comments that highlight the positives and negatives, identify weaknesses, and 

suggest improvements to guide future progress. 

Students prefer to receive personalised comments about their work that allow them to

improve it, correct their errors, and keep them motivated (Poulos & Mahony, 2008). 

Therefore, formative feedback must clarify the gap between the students’ performance and 

their goals and encourage improvement by highlighting the students’ strengths and explaining 

the improvements that need to be made. 

Adverse feedback received 

Students have become increasingly confused and frustrated because of the insufficiency and 

lateness of feedback received from lecturers (Higgins et al., 2002). These lecturers may not 

be identifying what is right and what needs improving and may grade work only as fair or 

poor without further explanation (Higgins et al., 2002). Compacted timetables in higher 

education and the clustering of assignments across modules at the end of the semester also 

lead to students being overwhelmed and getting less benefit from feedback (Higgins et al.,

2002). Some students in this study reported having experienced negative feedback from 

lecturers in some form. Andrew was discouraged but eventually acted on the feedback and 

resubmitted his work.  

He said,  

You start to see that you still have to work hard to move to higher percentages . . . I 

was discouraged but then after that when I see that also I have a deadline so then . . . I 

was able to respond according to the feedback.  

Students’ expectations of feedback did not match what they received. Letty said,

“When we got feedback, it wasn’t what we expected.” She explained, “I’ve done the work 

and felt that I understood it, and I did it to my best, but now I’m getting something that is 

completely different, very negative.” The negative feedback Letty received overshadowed the 

positive feedback. She added, “In fact, I don’t remember getting positive feedback. The mark 

would seem positive, but it wouldn’t relate to the comments.” This resulted in adverse 

reactions towards the module itself. “I ended up not liking the module, especially toward the 

end [because] now you had to fix a lot of your assignments with minimal feedback, and that 

was hard.” 

The gap between what students regard as helpful feedback and the feedback they receive 

leads to dissatisfaction and demotivation as Ferguson (2011) pointed out. Similarly, Watling 




and Ginsburg (2019) acknowledged students’ dissatisfaction in not benefiting from effective 

feedback. Negative feedback decreases the potential learning value since students lose 

interest in the feedback process. They become overwhelmed and benefit little from such 

feedback, as is evident in the experience of Andrew and Letty.  

Higgins et al. (2002) claimed that students understand and acknowledge the potential value of 

feedback despite the problems they face in interpreting or implementing it. Pretty accepted 

negative feedback objectively and reacted positively to it. She said, “There will be comments 

that will say or leave you astonished. . . It depends on your attitude as a person in terms of 

what kind of person are you.” Our findings revealed that negative feedback could be 

overcome with resilience as could be seen in the responses of Andrew and Pretty. Yolanda 

responded positively and implemented the feedback received but experienced a negative 

outcome. She explained, “I did take every single feedback into consideration, but . . . it didn’t 

improve my mark. I did have good comments but mostly negative.”  

Students in this study consistently experienced negative feelings concerning lecturer feedback 

on formative assessments. According to Andrew, they were “discouraged” at not having “the 

strength to correct everything that they [had been] given.” Letty felt “confused” about receiving

the assessment and feedback, and this confusion continued even after face-to-face feedback 

from the lecturer. She said, “They [the lecturers] didn’t explain to us what they expected from 

us. So, we did the work from our interpretation and understanding and maybe discussing with 

peers in the class.”  

Some students felt despondent after receiving feedback since they had received lower marks 

than they had expected after putting in an immense effort. As Letty put it, “I didn’t understand

why I was getting that mark, why I had worked very hard.” Pretty shared Letty’s feelings 

about feedback, and about being “overwhelmed” with feelings of “rejection” and 

“negativity.” Pretty said, “I feel upset, sad that I put so much work into this, and then I get a 

lower mark.”  

Students are emotionally affected by feedback when it elicits feelings of pride, joy, and 

success (Lipnevich & Smith, 2009) but feedback can also lead to shame, disappointment, and 

despair, as can be seen in the responses of these participants. Yolanda said, “It didn’t make 

me feel good; you start to doubt yourself.” Noma added, “I thought that maybe I’m wasting 

my time studying.” Letty said she was “frustrated with the feedback [and] didn’t understand 

[what] they were explaining” and what she “should do.” Letty lamented, “I wasn’t happy 

with my mark for the amount of work I put into the module.” Letty described feeling 

“stupid”, and Yolanda felt “belittled” by the attitudes displayed by lecturers towards them 

when they asked questions or sought clarity. It appears, according to Noma, that lecturers 

“obviously” assumed students have specific knowledge and skills, which they “didn’t 

[have].”  

Students voiced feelings of sadness, disappointment, self-doubt, and frustration and reported 

feeling “overwhelmed” (Pretty) and “demotivated” (Letty) over adverse feedback comments. 

In a similar vein, Yang et al. (2020) commented on how negative communication messages




make students feel powerless and disadvantaged over their learning capabilities and how this 

hinders growth in their self-efficacy. Noma, Yolanda, and Letty expressed that lecturers’ 

feedback did not improve their learning experience and performance or guide them towards 

achieving their learning goals. After receiving feedback, they felt that their learning in the module had been a fragmented experience. While steps can be taken to ensure a better

feedback experience, the experience itself is subjective. Ajjawi and Boud (2017) have 

elaborated on the fragmented learning experience and indicated that students who received 

impractical feedback are deprived of the opportunity to improve their future assessment tasks; 

the responses of Letty, Yolanda, and Noma bear this out. Although he “did not understand 

everything,” Andrew still reacted positively to feedback by implementing what he did 

understand and seeking clarity from other lecturers. His mark did improve after these efforts. 

Pretty displayed a high level of internal strength and resilience in dealing with negative 

feedback in particular. She chose to view the lecturer’s feedback as a developmental tool for 

improving herself academically, even though the comments made her sad and upset. Pretty 

described her enhanced learning as a sense of “reaching” beyond her capabilities or what she 

thought she could do. She saw improvements not just in her marks but in her writing, reading, 

and comprehension skills.  

These feelings described above are consistent with the findings of Watling and Ginsburg 

(2019) who pointed out that students can experience feedback as ineffectual and harmful if it 

is person-oriented rather than task-oriented and that it threatens their self-esteem and triggers 

negative emotions. Lipnevich and Smith (2009) argued that students are emotionally affected

by feedback and that this influences student results and their future reactions to feedback. Crisp (2007) also contended that positive feedback can be ineffective

for some students with high grades who may not find it necessary to work harder to improve 

their work once positive feedback is received, and that some weaker students may become 

demotivated and disillusioned with negative feedback and may not attempt to improve their 

work. Poulos and Mahony (2008) and Bailey and Garner (2010) suggested that a sense of 

reservation exists among academics about providing feedback to students and that this filters 

down to them and results in students not benefiting from feedback and having a fragmented 

learning experience. 

Feedback as a useful educational tool to stimulate the student learning 

experience  

The participants in this study suggested that the criteria and expectations should be clearly 

explained for each assessment activity to make feedback more effective. Noma said, “If they 

can explain the assessment grid before we write the exam. . . they can tell us what is 

expected.” Pretty mentioned that the involvement of students in the design of the “rubric 

process” would let them feel a sense of “ownership” of their learning and “ownership of the 

assessment.” Likewise, Kinne et al. (2014) concluded that involving students in the 

assessment process by co-constructing a draft of the rubric gives them a voice in creating the 

criteria with which their work will be evaluated.  




Andrew and Letty suggested that students benefit from feedback that can be applied across 

modules in the same discipline and that such feedback could be beneficial even in the future. 

Letty indicated that feedback could be used across modules and disciplines. She said, “I can 

learn for my other assignments. . . I should be able to carry it through in the future.” 

Feedback may not be helpful if students complete many modules simultaneously since they 

may receive inconsistent feedback from different lecturers and become more confused. If 

feedback across disciplines is not consistent, this can negatively affect student performance 

and leave them feeling disgruntled (Crisp, 2007; Scott & Morrison, 2006). 

In higher education, assessment is central to advancing quality in student learning and

development. Through assessment lecturers can determine the level of the student’s learning 

and ensure the attainment of their teaching goals (Byrd, 2013). Assessments provide lecturers 

and students with information that could assist them in modifying their learning strategies, eradicating misconceptions, and fixing mistakes, and motivate them to achieve their goals (Lipnevich

& Smith, 2009).  

Students’ capacity to understand and engage with assessment feedback is essential to how 

they will act to improve their learning (Carless, 2006). It is not surprising, then, that these 

students indicated that they were more receptive to positive feedback and motivated to act 

upon it when their work was “acknowledged” or “appreciated” before negative feedback was 

given, as Letty explained. She indicated that lecturers should “acknowledge” students’ 

efforts. At the same time, Andrew suggested that “there must be both positive feedback and 

negative feedback immediately” since “the purpose of feedback is not to discourage, but . . . 

is to motivate the student.” Shute (2008) also confirmed that helpful formative assessment 

feedback must clarify the gap between the students’ performance and their learning

goals, highlight their strengths, and explain suggested improvements clearly. Formative 

assessment and feedback encourage a profound learning experience: how students are 

assessed influences their learning, and the feedback from the formative assessment is seen as 

an instrument to improve their knowledge (Higgins et al., 2002). 

Students in this study expressed the desire for acknowledgment, dialogic engagement, and 

inspiring feedback from their lecturers. They valued constructive feedback directed towards 

their learning goals and indicated how the feedback should be actioned for knowledge-

building. In line with these views, Ajjawi and Boud (2017) proposed adopting the socio-

constructivist approach to feedback that sees it as dialogic between student and lecturer. 

Constructive feedback assists students to understand learning goals; develop their ability to 

monitor, regulate, and evaluate their learning; identify criteria; and reflect on their strengths

and weaknesses (Ajjawi & Boud, 2017; Koen et al., 2012). Moreover, lecturers and students 

should share their conceptions of useful feedback and what they can do collaboratively for 

more effective and meaningful interaction between students and lecturers to take place 

(Pereira et al., 2016).

 




Adopting a blended approach to feedback: Face-to-face and digital (online) 

written feedback  

The use of technology may provide students with relevant and meaningful feedback and an understanding of the process behind the mark they received (West & Turner, 2016). In

engaging with feedback, Turner and West (2013) revealed that students appreciate online

explanations and perceive them as simple to understand. Significantly, emotions play an 

indispensable role in the way in which students respond to feedback and how they will action 

assessment feedback (Pitt & Norton, 2017). With online assessment and feedback, emotion 

control is important since students may experience challenges. Therefore, understanding the 

psychological complexities surrounding students’ experience of online assessment and 

feedback, and how students are dealing with this experience, is critical (Bowker, 2018) and 

warrants further investigation. 

The students in this study had difficulty interpreting and implementing online feedback without consultation and face-to-face feedback from lecturers. Three of the five participants expressed a preference for a blended approach to feedback and said that they benefited from it. Andrew said, “When you speak to him or her face-to-face now, you also help that

student to calm down, to cease panicking and after that, you can help him [or her].” Yolanda 

suggested that face-to-face feedback could take the form of “focus group discussions,” and 

Noma thought face-to-face feedback was necessary because “if they can call me and tell me 

what is expected from me then maybe [I] can improve by doing this.” Letty, however, felt 

that face-to-face feedback was not always helpful for students since, in her experience, “the 

explanation he gave to me was the same explanation he gave in class . . . So, the feedback 

didn’t help, and the consultation didn’t help.” It is evident that the usefulness of feedback is 

also dependent on the individual lecturer and student relationship. Students may find it 

challenging to approach a lecturer because of a lack of confidence or a strained relationship 

with that person (Poulos & Mahony, 2008).  

It is clear from the findings that although some students find written comments sufficient, 

others prefer a combination of digital feedback and face-to-face interaction with the lecturer. 

With rising pressures and increasing student numbers at universities, and the call for 

implementing an online virtual space for teaching, such face-to-face interaction is not always possible (Poulos &

Mahony, 2008).  

Essential to knowledge-building is inspiring students to develop collaboration and 

metacognition skills to actualise what they are doing and what they should do next (Yang et 

al., 2020). Metacognition is socially developed by interacting with others (students and 

lecturers in this case) and seeing what others think, and it is premised on the principles of 

agency (critically questioning), community knowledge (working together cumulatively and 

interactively), and improvable ideas (including problem-solving) that set in motion the

transformation of students’ knowledge and proficiencies (Yang et al., 2020). It would be 

helpful for students to gain a deep and practical learning experience by engaging with and acting on the negative and/or positive feedback they have received. Of significance is recognising the reciprocal relationships between the emotional, cognitive, and social




dynamics of collaborative knowledge-building (Zhu & Kim, 2017). The student’s ability to 

engage with and action feedback (digital and traditional), consider their emotional reactions, 

and use feedback as an educational tool to achieve learning goals is of utmost importance in 

attaining improved knowledge-building.  

The successful blending of digital and face-to-face feedback demands transparency of 

purpose and interactive and collaborative support of students by lecturers if knowledge-

building is to happen. Considering important knowledge-building principles and the findings 

of the study, we make some suggestions.  

There is a need to design more integrative and informative assessment tools with authentic 

and reflective activities to develop agency and collective responsibility (Yang et al., 2020; 

Zhu & Kim, 2017). Lecturers could consider reflective assessment in knowledge-building 

that facilitates assessment and feedback to nurture student agency and improve their learning 

(Yang et al., 2020; Zhu & Kim, 2017). Students could assess their own and others’ learning

collaboratively, and they could take an active role in their rational thinking about feedback 

and interact and develop social and collaborative capabilities (Lei & Chan, 2018; Yang et al., 

2020). Furthermore, lecturers could provide opportunities for students to scaffold and 

improve metacognition by encouraging them to reflect, inquire, and use other diverse lenses 

to reflect on their work (Yang et al., 2020; Zhu & Kim, 2017). The lecturer who adopts the 

knowledge-building approach helps students become responsible for their learning,

and students participate directly in high-level intellectual work such as “goal-setting, 

planning and monitoring” (Zhu & Kim, 2017, p. 1). The lecturer could employ a dialogical 

approach to engage students continually through reflective and productive dialogues to 

discuss feedback on formative assessments to improve learning.  

With the knowledge-building principle of concurrent, embedded, and transformative 

assessment as put forward by Scardamalia (2002) and Yang et al. (2020), students can use 

timely feedback (concurrent assessment) to promote agency and improve their writing, which 

will enhance students’ self-efficacy and higher-level competencies and help them understand 

their learning goals. It is crucial to produce an ideal collaborative, productive, and reflective 

knowledge-building classroom culture by creating a community of students who focus on 

collective efforts to improve the effectiveness of feedback and promote knowledge-building 

(Yang et al., 2020). In doing so, lecturers can create an environment that offers meaningful 

and worthwhile feedback using various tools that would enable students to reflect, explain, 

and action it. In a South African higher education context, students can become “active

constructors of knowledge and managers of the process of improving their learning” (Pereira 

et al., 2016, p. 8). Lecturers should try to create alternative opportunities for more 

collaboration and dialogue about assessment and feedback (peer-peer and lecturer-student), 

so that any misconceptions can be clarified before students action the feedback (Mulliner & 

Tucker, 2017). It is important to nurture lecturer and student awareness of the value and the 

different methods of feedback, and for students to be engaged actively in the assessment and 

feedback process from the designing phase (Mulliner & Tucker, 2017). Technology can give 

lecturers agency in assessing students’ work (Yang et al., 2020). Accordingly, lecturers must 




develop and transform students’ learning experiences by providing practical assessment and 

feedback tools to overcome barriers to learning, such as technology and language. Lecturers 

should understand how to use feedback effectively as a valuable educational tool to facilitate 

face-to-face and online feedback to transform students from passive receivers of knowledge 

into active participants responsible for their enhanced learning (Higgins et al., 2002; Koen et 

al., 2012).  

Feedback is the most influential and powerful aspect of the assessment cycle in improving 

student learning and performance (Dawson et al., 2019; O’Donovan et al., 2016). Yet, the 

disjuncture between the purpose and theory of feedback and the practice of providing useful 

feedback to students has increased (O’Donovan et al., 2016). This disjuncture implies that

students in some disciplines receive vague and ambiguous feedback messages that might 

have repercussions beyond merely passing the module. Here, the importance of discipline-

specific literacy and feedback is accentuated. Students find benefit in feedback that can be 

“applied across modules,” as Letty suggested. It is essential for lecturers in the same

discipline to provide consistent and standardised feedback, which may be possible through 

communication between and among lecturers so that students are not confused when 

expectations of similar assessments are entirely different between two modules. It may be 

worthwhile to explore why there is a dissonance between feedback provided by lecturers and 

feedback expected by students. It becomes imperative for lecturers to be mindful of how students receive feedback, of the purpose and goals of assessment and feedback in improving students’ learning experience and knowledge-building, and of how to motivate students to action feedback and become active in the learning process.

Conclusion 

Postgraduate students’ experiences, reactions, and sentiments on receiving feedback on 

formative assessments varied, with some students valuing it as developmental. Students who were unable to comprehend or action meaningful feedback reacted adversely and ignored aspects of the feedback they did not understand. Some chose to seek assistance from other

lecturers and their peers. Most of the participants found the assessment grid ineffectual as a 

tool to provide practical and worthwhile feedback. The face-to-face feedback intended to elaborate on the digital feedback was similarly ineffective. Factors such as the preference for

feedback (traditional face-to-face or digital), the ability to use technology, understanding 

language, the contextual and socio-economic differences, the student-lecturer relationship, 

and the emotional, metacognitive, and collaborative capabilities of students and lecturers 

affect the contribution of valuable and practical feedback to knowledge-building. Therefore, students suggested

adopting a blended feedback approach as a more useful tool that customises and personalises 

feedback for students to action. They suggested that an amalgamation of digital and face-to-

face feedback exchanges could result in meaningful and effective feedback as part of the 

process of collaborative knowledge-building that is discipline-specific and that could be 

applied across modules.  

 




References 

Adams, J., & McNab, N. (2013). Understanding arts and humanities students’ experiences of 

assessment and feedback. Arts and Humanities in Higher Education, 12(1), 36–52. 

Ajjawi, R., & Boud, D. J. A. (2017). Researching feedback dialogue: An interactional 

analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252–265.

Ali, N., Ahmed, L., & Rose, S. (2018). Identifying predictors of students’ perception of and 

engagement with assessment feedback. Active Learning in Higher Education, 19(3), 

239–251. 

Bailey, R., & Garner, M. (2010). Is the feedback in higher education assessment worth the 

paper it is written on? Lecturers’ reflections on their practices. Teaching in Higher 

Education, 15(2), 187–198. 

Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: 

Principles, Policy & Practice, 18(1), 5–25. 

Bereiter, C. (2002). Education and mind in the knowledge age. Lawrence Erlbaum. 

Bertram, C., & Christiansen, I. (2014). Understanding research: An introduction to reading 

research (1st ed.). Van Schaik Publishers. 

Black, P., & Wiliam, D. (2004). The formative purpose: Assessment must first promote 

learning. Yearbook of the National Society for the Study of Education, 103(2), 20–30. 

Bowker, N. (2018). Loss management and agency: Undergraduate students’ online 

psychological processing of lower-than-expected assessment feedback. Waikato 

Journal of Education, 23(2), 25–41.  

Byrd, R. (2013). Introduction to assessment. http://www.hsc.wvu.edu/faculty 

development/assessment-materials/introduction-to-assessment/ 

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher 

Education, 31(2), 219–233. https://doi.org/10.1080/03075070600572132 

Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.).

Routledge. 

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods

approaches (4th ed.). SAGE.

Crisp, B. R. (2007). Is it worth the effort? How feedback influences students’ subsequent 

submission of assessable work. Assessment & Evaluation in Higher Education, 32(5), 

571–581. 




Crooks, T. (2001, September). The validity of formative assessments. Draft paper presented at 

BERA (British Educational Research Association) 27th Annual Conference, 

University of Leeds, Leeds, West Yorkshire. 

Dawson, P. J. A. (2017). Assessment rubrics: Towards clearer and more replicable design, 

research, and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360.

Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E.

(2019). What makes for effective feedback: Staff and student perspectives.

Assessment & Evaluation in Higher Education, 44(1), 25–36.

https://doi.org/10.1080/02602938.2018.1467877 

Eisner, E. (1998). The enlightened eye: Qualitative inquiry and the enhancement of 

educational practice. Prentice-Hall, Inc. 

Ferguson, P. (2011). Student perceptions of quality feedback in teacher education. 

Assessment & Evaluation in Higher Education, 36(1), 51–62. 

Fong, C. J., Williams, K. M., Williamson, Z. H., Lin, S., Kim, Y. W., & Schallert, D. L. (2018).

“Inside out”: Appraisals for achievement emotions from constructive, positive, and 

negative feedback on writing. Journal of Motivation & Emotion, 42(2), 236–257.  

Glazer, N. (2014). Formative plus summative assessment in large undergraduate courses: 

Why both? International Journal of Teaching and Learning in Higher Education, 

26(2), 276–286. 

Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering

the role of assessment feedback in student learning. Studies in Higher Education, 

27(1), 53–64. 

Kashif, M., ur Rehman, A., Mustafa, Z., & Basharat, S. (2014). Pakistani higher degree 

students’ views of feedback on assessment: Qualitative study. International Journal 

of Management Education, 12(2), 104–114. 

Kinne, L. J., Hasenbank, J. F., & Coffey, D. (2014). Are we there yet? Using rubrics to 

support progress toward proficiency and model formative assessment. AILACTE 

Journal, 11(1), 109–128. 

Koen, M., Bitzer, E. M., & Beets, P. A. D. (2012). Feedback or feed-forward? A case study 

in one higher education classroom. Journal of Social Sciences, 32(2), 231–242. 

Lei, C., & Chan, C. K. K. (2018). Developing meta-discourse through reflective assessment 

in knowledge building environments. Computers & Education, 126, 153–169. 

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.




Lipnevich, A. A., & Smith, J. K. (2009). “I really need feedback to learn:” Students’ 

perspectives on the effectiveness of the differential feedback messages. Educational 

Assessment, Evaluation and Accountability, 21(4), 347–367. 

Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: Perceptions of students 

and academics. Assessment & Evaluation in Higher Education, 42(2), 266–288. 

https://doi.org/10.1080/02602938.2015.1103365 

Munro, A. J., Cumming, K., Cleland, J., Denison, A. R., & Currie, G. P. (2018). Paper versus 

electronic feedback in high stakes assessment. Journal of the Royal College of 

Physicians of Edinburgh, 48(2), 148–152. https://doi.org/10.4997/JRCPE.2018.209 

O’Donovan, B., Rust, C., & Price, M. (2016). A scholarly approach to solving the feedback 

dilemma in practice. Assessment & Evaluation in Higher Education, 41(6), 938–949. 

https://doi.org/10.1080/02602938.2015.1052774 

Padgett, D. K. (1998). Does the glove really fit? Qualitative research and clinical social work 

practice. Social Work, 43(4), 373–381.

Pereira, D., Flores, M. A., Simão, A. M. V., & Barros, A. (2016). Effectiveness and relevance 

of feedback in Higher Education: A study of undergraduate students. Studies in 

Educational Evaluation, 49, 7–14.  

Pitt, E., & Norton, L. (2017). ‘Now that’s the feedback I want!’ Students’ reactions to

feedback on graded work and what they do with it. Assessment & Evaluation in 

Higher Education, 42(4), 499–516. 

Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. 

Assessment & Evaluation in Higher Education, 33(2), 143–154. 

Remler, D. K., & Van Ryzin, G. G. (2014). Research methods in practice: Strategies for 

description and causation. Sage Publications. 

Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of 

knowledge. Liberal Education in a Knowledge Society, 97, 67–98. 

Scardamalia, M., & Bereiter, C. (2006). Knowledge building: Theory, pedagogy, and 

technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences 

(pp. 97–115). Cambridge University Press. 

Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: 

Theory, pedagogy, and technology. In K. Sawyer (Ed.), Cambridge handbook of the 

learning sciences (2nd ed.) (pp. 397–417). Cambridge University Press. 

Scott, D., & Morrison, M. (2006). Key ideas in educational research. Continuum 

International Publishing Group. 




Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 

153–189. 

Turner, W., & West, J. (2013). Assessment for ‘digital first language’ speakers: Online video 

assessment and feedback in higher education. International Journal of Teaching and 

Learning in Higher Education, 25, 288–296. 

Watling, C. J., & Ginsburg, S. (2019). Assessment, feedback and the alchemy of learning.

Medical Education, 53(1), 76–85.

West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student 

perceptions, engagement and understanding using online video feedback. Innovations 

in Education and Teaching International, 53(4), 400–410. 

https://doi.org/10.1080/14703297.2014.1003954 

Yang, Y., van Aalst, J., & Chan, C. K. (2020). Dynamics of reflective assessment and 

knowledge building for academically low-achieving students. American Educational 

Research Journal, 57(3), 1241–1289. 

Zhu, G., & Kim, M. S. (2017). A review of assessment tools of knowledge building: Towards

the norm of embedded and transformative assessment. Paper presented at the

Knowledge Building Summer Institute, Philadelphia, PA.