Journal of Student Affairs in Africa | Volume 5(2) 2017, 33–53 | 2307-6267 | DOI: 10.24085/jsaa.v5i2.2701 | www.jsaa.ac.za

Research article

From Inky Pinky Ponky to Improving Student Understanding in Assessment: Exploring the Value of Supplemental Instruction in a Large First-Year Class

Mianda Erasmus*

* Mianda Erasmus is a Lecturer in the Department of Psychology at North West University, South Africa. Email: Mianda.Erasmus@nwu.ac.za

Abstract

Large classes are a reality in many tertiary programmes in the South African context and this involves several challenges. One of these is the assessment process, including the provision of meaningful feedback and implementing strategies to support struggling students. Due to large student numbers, multiple-choice questions (MCQs) are often used in tests, even though researchers have found possible negative consequences of using MCQs. Giving appropriate feedback has been identified as a strategy to remedy some of these negative consequences. This paper reports on action research in which an intervention strategy was implemented in a large first-year Psychology class where Supplemental Instructors (SIs) were used to give detailed feedback to students after assessments. The lecturer first modelled how to give feedback by discussing the MCQs in detail with the SIs and identifying possible errors in their reasoning and meta-cognitive processes. The SIs subsequently repeated this feedback process in their small-group sessions. After each assessment, students who performed poorly were advised to attend a certain number of SI sessions before the next test, and their attendance, even though voluntary, was monitored to determine the effectiveness of the intervention. Students’ performance in subsequent tests was compared and the results seem to indicate that attending SI sessions was mostly associated with improved test results. This strategy also appears to encourage attendance of SI sessions.
In addition, students’ responses in a feedback survey indicate an overall positive perception of this practice. These results can inform other lecturers teaching large classes and contribute to quality enhancement in assessment and better support for students.

Keywords: supplemental instruction; assessment; MCQs; feedback; modelling

Introduction

Tertiary education plays an important role in the development of South Africa (DHET, 2013). The South African Department of Higher Education and Training (DHET) aims to improve quality in universities, and the White Paper for Post-School Education and Training published in 2013 indicated the envisaged increase of enrolment numbers from 17.3% to 25% (DHET, 2013). However, at the same time, funding is reduced, leading to an increase in the number of large classes, possibly negatively influencing the quality of education (Hornsby, Osman & De Matos-Ala, 2013; Hornsby & Osman, 2014). What constitutes a large class depends on the discipline and the learning environment, but large classes are a reality in many tertiary programmes in the South African context and this involves several challenges, especially in terms of the quality of education (Hornsby, Osman & De Matos-Ala, 2013). One of the challenges is the assessment process, including the provision of meaningful feedback and implementing strategies to support struggling students (Mulryan-Kyne, 2010). Due to large student numbers, multiple-choice questions (MCQs) are often used in tests. Although researchers have found possible negative consequences of using MCQs, giving appropriate feedback has been identified as a strategy to remedy some of these negative consequences (Butler & Roediger, 2008).
Supplemental Instruction (SI) is a model focusing on high-risk courses, designed to support and assist students academically by using collaborative learning in peer-facilitated sessions (Arendale, 1994). A substantial body of research has examined the use of Supplemental Instruction to support students, both globally (Blanc, DeBuhr & Martin, 1983; Congos & Schoeps, 1998; Etter, Burmeister & Elder, 2001; Hensen & Shelley, 2003; Huang, Roche, Kennedy & Brocato, 2017; Kochenour, Jolley, Kaup & Patrick, 1997; Lindsay, Boaz, Carlsen-Landy & Marshall, 2017; Martin & Arendale, 1992; McCarthy, Smuts & Cosser, 1997; Ning & Downing, 2010; Summers, Acee & Ryser, 2015) and in South Africa (Harding, Engelbrecht & Verwey, 2011; Harding, 2012; Paideya & Sookrajh, 2010; Paideya & Sookrajh, 2014; Zerger, Clark‐Unite & Smith, 2006; Zulu, 2003), and these studies clearly show the value of SI on different levels and its effectiveness in terms of improved student performance. However, fewer studies have explored the specific role that SI can play in the assessment process, or more specifically, in the feedback after assessment, using a quantitative methodology. The value of this study therefore lies in this niche area. This paper reports on the first cycle of an action research project in which I implemented an intervention strategy in my large first-year Psychology class. I write this paper as the lecturer who identified a problem, but also as the researcher who subsequently sought a solution to this problem and assessed the effectiveness of the intervention. The feedback strategy involved Supplemental Instruction leaders (SIs) and the use of modelling.
Using SI principles such as the integration of skills and content, metacognition of learning, cooperative learning and modelling (Arendale, 1993, 1994), I modelled to the SIs how to give detailed feedback to students after assessments, and how to facilitate these sessions so as to help students identify the errors they made, understand the work better and prepare for the following assessment. The SIs subsequently repeated this process in their SI sessions. Students who performed poorly in tests were tracked to determine whether the intervention helped them to improve their marks. Their marks before and after the intervention were compared using a t-test. Students also shared their perceptions of SI and the intervention in an online survey. The main purpose of this article is to explore the value of SI in improving the assessment process in a large class. The outline of this article follows the process as the action research unfolded, namely: identification of the problem, planning to act, action, evaluation, reflection and finally improvement for the next cycle. Firstly, the specific context of this research will be described; then the challenge that was experienced in this teaching and learning environment will be explained, followed by a short literature review that helped to inform the intervention strategy. The next section will explain what the intervention strategy entailed and how it was implemented. This will lead to the research questions in terms of evaluating the intervention, the research that was conducted, the results and discussion, and a reflective section on the limitations and what will be considered for the second cycle.

Background

Context of the study

The context of this research is a first-year psychology class of about 600 students taught by one lecturer (me).
As a result of venue size restrictions, the students are divided into two groups. The first-semester module is ‘Introduction to Psychology’, which covers a broad span of topics, including many new concepts and theories which students often find quite overwhelming and challenging. In the second semester the module is ‘Social and Community Psychology’. Since these students are first years, the academic programme is structured in such a way as to assist them in the adaptation from high school. Many different assessment opportunities are provided to encourage students to study the material in small chunks. To check their understanding, there is an online MCQ quiz after every chapter. Students also write four class tests and a semester test, and complete a group assignment and some other activities before they write the exam. Due to the large numbers and limited resources, multiple-choice questions (MCQs) are used both in the continuous assessment, in the form of online quizzes, and in the more formal class and semester tests. Preparing high-quality MCQs, which are at the correct cognitive level and consist of a good question (stem) and plausible choices (distractors) (Tarrant, Ware & Mohammed, 2009), allows me to assess knowledge and understanding of the theories, as well as include application-type questions by using scenarios. This method makes it possible to give prompt feedback, with the marks available either immediately (in the case of online quizzes) or within a few hours after a test has been written. Each context has its own challenges and it is important to keep the student profiles in mind (Scott, 2015; Van Rooy & Coetzee-Van Rooy, 2015). Many of the students in this particular context are first-generation students and most of them do not have English as a mother tongue, but as a second or even third language.
They often come from poor backgrounds and dysfunctional secondary schools, making them underprepared for university and putting them at a disadvantage, especially as far as academic literacy skills in English are concerned (Cross & Carpentier, 2009; Krugel & Fourie, 2014; Mhlongo, 2014). Since a MCQ consists of a stem (the question or scenario/case study) and then at least four options (the possible answers) (Jennings, 2012), this type of test often involves a lot of reading, which can be challenging for some of these students (Bharuthram, 2012; Paxton, 2007, 2009). Especially with the use of scenarios in order to include application questions, a 50-question test can easily be between eight and ten pages long. It also requires careful reading in order to identify the correct response, and if English is not a first language, this might prove to be quite difficult (Butler & Van Dyk, 2004; Scott, 2015; Van Wyk, 2014). At our institution, modules with large classes are considered high-risk modules and therefore support is made available in the form of Supplemental Instruction (SI). The SI leaders are senior students who did well in the module and whom I select through an interview process. There are usually between six and eight SIs per semester. They attend my classes, meet with me weekly, and each one conducts two to three sessions (with a maximum of 25 students) per week. The SI sessions are voluntary and open to any student.

Challenge

As part of the feedback after a test, I used to make the test memo available for students on the learning management system (LMS). This allowed students to reflect on their test and identify the mistakes they made. Or rather, that was the aim of making the memo available.
However, in repeating some questions in subsequent tests, I realised that students tended to memorise the questions from the memo, without a deeper understanding of the content. When a question was repeated, the options would be placed in a different order, but there was a trend that students would repeat whatever happened to have been the correct response in the previous test (B, for example), instead of reading and understanding the question before choosing the appropriate answer. This had an influence on their performance and contributed to a lower pass rate. Research shows that more detailed, quality feedback can remedy this situation (Guo, Palmer-Brown, Lee & Cai, 2014; Iahad, Dafoulas, Kalaitzakis & Macaulay, 2004; Malau-Aduli & Zimitat, 2012). Due to the heavy workload, it is impossible to use class time to go through the test in order to give detailed feedback and explanations of how to approach the questions. As outlined in the context above, the limited resources do not allow for the possibility of using different types of assessment instead of MCQs. So the complex dilemma is: What can be done to improve the assessment process? How can quality feedback be provided to students in the current situation? How can students be assisted to develop test-taking skills and improve their reasoning patterns when it comes to answering a MCQ, but also to understand the content better? How can we replace the “inky, pinky, ponky” strategy when doing MCQs with a true understanding of A, B and C? How can the pass rate be improved without lowering the standard? A literature review was subsequently done to explore and determine possible interventions that could be developed.

Literature Review

Large classes

Quality education is a key element in developing countries and plays a vital role in economic growth (Hornsby, Osman & De Matos-Ala, 2013).
Having said this, with enrolment numbers increasing and resources limited, classes are becoming increasingly larger (Ehrenberg, Brewer, Gamoran & Willms, 2001). This is often associated with lower student performance (Hornsby & Osman, 2014). However, student learning is not necessarily determined by the class size, but rather by the skills and expertise of the lecturer, the use of appropriate teaching approaches, and the active participation of students (Mulryan-Kyne, 2010). It is therefore important that large classes are not given to the most junior lecturer with the least experience, but rather that senior, experienced academics take this responsibility and mentor junior staff in the process (Jawitz, 2013). Although large classes can pose a number of challenges, with innovative teaching methods it is possible to overcome these challenges, and literature on large-class pedagogy in higher education is increasing (Hornsby & Osman, 2014). Large classes are not necessarily “bad”, since their diversity and energy can be used to incorporate interactive class activities and offer a high-quality learning experience, as long as the strengths and limitations are well understood (Jawitz, 2013).

Assessment

Assessment can be particularly challenging in large classes, especially if resources are limited and no extra help with marking is available. Assessment can have a feed-out function, indicating performance, or it can have a feedback function, aimed at providing information that will assist in continuous learning (Knight, 2002). In addition, it is crucial that the assessment aligns directly with the module outcomes. Different assessment strategies should be used in order to cater for different student learning styles (Brady, 2005).
Assessment should allow students to receive feedback on their learning and also give guidance for further learning (Carless, Salter, Yang & Lam, 2011; Knight, 2002), and here MCQ assessments can be valuable.

Multiple-choice questions and feedback

There are numerous advantages to using MCQs: for example, they are more objective, more time-efficient in terms of writing and marking, and they offer the possibility to cover a wider range of the work (Higgins & Shelley, 2003). However, there are also several limitations and potential disadvantages linked to the use of MCQs. One of the biggest questions is whether MCQs allow for the assessment of higher-order cognitive skills or simply factual recall, especially since critical thinking is important in higher education (Brady, 2005; Jennings, 2012). MCQs are often seen as “easy” and as testing superficial, factual knowledge only (Palmer & Devitt, 2007). However, this depends greatly on how the question is asked and whether functional, plausible distractors are given (Tarrant et al., 2009). A MCQ can be structured in such a way as to assess the higher cognitive levels of comprehension or application, and can therefore be versatile if designed appropriately (Brady, 2005; Yonker, 2011). In an application question, for example, a case study can be used, requiring comprehension and application skills and much more than factual, surface knowledge. In their study, Leung, Mok and Wong (2008) found that some students placed more emphasis on understanding in preparation for a MCQ assessment and that scenario-based MCQs were perceived to help them in developing critical thinking skills. Another problem concerns the fact that students can potentially guess the right answer (Delaere & Everaert, 2011).
Students might joke that if in doubt with a MCQ, you can always resort to a rhyme like “inky, pinky, ponky” or “eeny, meeny, miney, moe” to help you make a choice. Although it is possible to guess, there are also ways in which guessing can be discouraged, like negative marking (Scharf & Baldwin, 2007). Brady (2005) postulates that poorly designed MCQs have many disadvantages and can cause under-performance or over-performance unrelated to the students’ ability. For example, if the distractors are not plausible, it is easier to eliminate them, even without much knowledge (Tarrant et al., 2009). On the other hand, if the distractors are not well written, they can confuse students, even if they know the theory. Since MCQs allow for assessing detail, obscure knowledge is sometimes asked instead of sticking to the module outcomes (Brady, 2005). Setting and designing efficient, objective and high-quality MCQs at the appropriate level is a skill, is time-consuming and requires commitment (Jennings, 2012). So although time is saved in the marking process, a lot of effort goes into compiling these assessments. Research has shown that effective, quality feedback is very important in enhancing students’ understanding of the questions (Lizzio & Wilson, 2008; Nicol, 2009). However, students should receive more than simply the correct answer. It is vital that they understand why they chose the wrong answer, and not only where they made the wrong choice. Students need to understand and be able to explain the reason behind their choice and where their reasoning went wrong. However, writing this type of feedback for every distractor of every question can be very time-consuming. Feedback is a pedagogical practice that supports learning, but quality feedback is often not readily available for undergraduate students (Taras, 2006).
Due to the nature and format of the MCQ, students are exposed to both correct and incorrect information, which could lead to confusion and negative effects. In their study, Butler and Roediger (2008) found that giving feedback after a multiple-choice test improved performance on subsequent tests, probably because it allows the student to correct previous mistakes. They focused specifically on MCQ assessments and explored the role of feedback in increasing the positive effects and decreasing the negative effects of MCQs. By comparing different groups receiving either no feedback, immediate feedback or delayed feedback, they concluded that giving students feedback after the test is vital and that it also allowed them to have more clarity on what they knew and what they did not know (Butler & Roediger, 2008). These findings are echoed by a more recent study by Guo et al. (2014) in which feedback on MCQ assessments was given online by analysing the students’ responses with the help of the snap-drift neural network approach. Tinto (2014) also recommends the use of technology and predictive analytics in the feedback process, which can help to reduce the workload.

Supplemental Instruction model

The SI model was founded in the early 1970s at the University of Missouri-Kansas City, where there was a very high dropout rate (Arendale, 1993). It was decided to move away from the traditional medical-model approach of supporting students who had been identified as having a problem or being at risk, and rather to implement a non-traditional approach where the focus was on difficult or high-risk modules and where assistance was available to everyone from the start of the module (Martin & Arendale, 1992).
Supporting this principle, research has also found that SI sessions are beneficial to all students, regardless of their performance, although they have more impact on struggling students (Wilson, Waggenspack, Steele & Gegenheimer, 2016). The purpose of the SI programme is to increase academic performance and retention by providing opportunities for students to be involved in collaborative learning in peer-facilitated sessions. Sessions are open to all students and attendance is voluntary (Arendale, 1994). Prospective SIs are expected to meet certain criteria before being considered as possible candidates. They are students who have completed the module before, preferably with the same lecturer, and who have performed well. The SIs act as “model” students by showing the students how successful students think about the module and process the module content. After they have been selected, they receive training in collaborative learning techniques which assist the students in knowing “how” to learn (transferable academic skills), as well as “what” to learn (content) (Arendale, 1994; McGuire, 2006). The theoretical framework in which the SI model is embedded includes a wide variety of important learning theories, including Piaget’s constructivism, Vygotsky’s Zone of Proximal Development, Tinto’s Model of Student Retention, Weinstein’s metacognition, collaborative learning (Dewey and Bruner), Keimig’s Hierarchy of Learning Improvement Programs and Dale’s Cone of Experience (Arendale, 1993). Social learning theory and the concept of modelling also play an important role, especially in the intervention discussed in this paper. It is of vital importance to train SIs well in the theories underpinning the SI model so that they can implement it successfully in the sessions (Jacobs, Hurley & Unite, 2008).
There have been many studies focusing on the effectiveness of SI (Coletti et al., 2014; Fayowski & MacMillan, 2008; Kilpatrick, Savage & Wilburn, 2013; Latino & Unite, 2012; Malm, Bryngfors & Mörner, 2012; Okun, Berlin, Hanrahan, Lewis & Johnson, 2015; Summers et al., 2015; Terrion & Daoust, 2011). In a systematic review of the relevant literature between 2001 and 2010, Dawson, Van der Meer, Skalicky and Cowley (2014) found that SI participation is correlated with improved performance as well as lower failure and withdrawal rates. These studies did not only look at effectiveness from an academic performance perspective, but also included overall graduation rates, the impact on the development of academic skills, and the effect on general well-being, social relationships and engagement. These results are also reflected in more recent studies (Malm, Bryngfors & Mörner, 2015; Paloyo, Rogan & Siminski, 2016; Ribera, BrckaLorenz & Ribera, 2012; Wilson & Rossig, 2014). SI improves students’ long-term retention of the module content (Price, Lumpkin, Seemann & Bell, 2012), helps them to be more engaged in their learning while gaining a deep understanding of the work (Paideya & Sookrajh, 2010, 2014), and also contributes to their sense of belonging (Summers et al., 2015). With the influence of technology, a recent study (Hizer, Schultz & Bray, 2017) explored the effectiveness of offering SI sessions online and found that it had effects similar to those of the face-to-face model.

Methodology

The intervention

The intervention that was implemented is a discipline-specific strategy that took place within the first-year psychology modules, with very close collaboration between the SIs and the lecturer of these modules.
As I have already indicated, research has emphasised the importance of effective, quality feedback in enhancing students’ understanding of questions in a MCQ assessment. Although feedback can be given in a written format, students might still not fully understand it or might not take the time to read it. The fact that the SI model is based on, among other things, modelling by senior students and the development of skills (not only a focus on content) prompted me to take this modelling a step further. The intervention is based on allowing the students to get quality feedback on the tests, in small groups, via the SIs. However, it was important to ensure that the SIs were empowered with the necessary skills to be able to give this feedback. Instead of making the test memos available to the students on the LMS, I made them available through the SIs. After every test, the SIs were required to attend a meeting with me, to which each one had to bring a memo for the test that they had worked out themselves. This ensured that they went through the test thoroughly and had an experience similar to the students’ in considering all the options in the process of deciding which one they considered the correct answer. During the meeting, I modelled the feedback process, illustrating how the feedback should be given to allow for better understanding and deeper learning. Based on what the SI leaders chose as answers, each question and distractor was discussed in detail, allowing me to identify possible errors in the SIs’ reasoning and understanding while illustrating how to address these errors. With the correct memo, the SIs subsequently took this discussion to the small-group sessions, where they repeated the feedback process with the students. The fact that this was the only way students got access to the memo aimed to encourage students to attend these sessions.
After each assessment, students who performed poorly were advised to attend a certain number of SI sessions before the next test. This number differed, depending on the available time before the next test. Attending SI sessions remained voluntary, but in order to determine the effectiveness of the intervention, students’ attendance was monitored. It is important to emphasise that the sessions were still open to everyone and that the attendees consisted of good, average and struggling students. In line with the SI design, this is not a remedial programme and the sessions are not focused on, or exclusively for, students who performed poorly. In addition, it is often the interaction between fellow students that promotes a conducive learning environment.

Assessing the intervention

In assessing the effectiveness of the intervention as well as the value of SI from the students’ point of view, the following questions guided the enquiry:

1. What are students’ perceptions of the value of SI, in particular in assessment?
2. What effect does the intervention have in improving students’ performance?

Action Research

In this study, action research was used as it allowed me to focus on a practical problem in the teaching and learning environment and enabled me to look for a practical solution in my specific context. Action research is cyclical in nature (Maree, 2007). The current paper reports on the first cycle of this research. As previously explained, certain aspects of my teaching practices needed attention, and action, in the form of an intervention, to improve practice. After identifying the challenge, a scan of the literature informed the planning and implementation of the intervention. Assessment of the intervention had to be done to determine whether practice was indeed improved (McNiff, 2013).
The final step was to reflect and amend or improve the practice for the second cycle (Laycock & Long, 2009). The reflection also allowed for my professional development as the lecturer (Kayaoglu, 2015; Ryan, 2013), for practices to change (Kemmis, 2009) and for enhancement of the scholarly approach to teaching and learning. Action research is often a multi-method approach, using a holistic perspective to solve the problem at hand (Maree, 2007). In this study, in addition to the reflection and literature review used to develop the intervention strategy, a survey was used to acquire students’ feedback on the strategy, and students’ marks were monitored to determine whether the strategy improved their academic performance.

Data collected

In the feedback survey, students were asked questions about SI in general (whether they attended, the value of SI sessions) and also more specifically about the intervention strategy (whether it encouraged them to attend SI sessions and whether it helped them to improve academically). A Likert scale was used for most of the questions in collecting quantitative data. The last question was an open-ended question where students could give feedback in their own words regarding the role SI played in their journey as first years. The students who underperformed in a test were tracked after that test and in subsequent tests. Pre-intervention and post-intervention test performance scores were used for students who were part of the intervention strategy, to determine whether their performance improved.

Population

The population in this study constituted 219 of the approximately 600 first-year psychology students at the Mafikeng campus of the North-West University. Participation in the study and being part of the intervention strategy was voluntary.
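As a rough illustration of the pre/post comparison described under "Data collected", the analysis amounts to a paired-samples t-test on the marks of tracked students. The sketch below is not the study's actual code or data; the marks are hypothetical, and the statistic is computed directly from its textbook definition rather than with a statistics package.

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post marks; returns (t, df)."""
    assert len(pre) == len(post) and len(pre) > 1
    diffs = [b - a for a, b in zip(pre, post)]   # per-student improvement
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)            # t = mean / standard error
    return t, n - 1

# Hypothetical test marks for six tracked students (not the study's data)
pre  = [38, 45, 40, 52, 47, 35]
post = [48, 50, 46, 58, 55, 44]
t, df = paired_t(pre, post)
print(f"t = {t:.2f}, df = {df}")
```

In practice a library routine such as SciPy's `scipy.stats.ttest_rel` would also report the p-value; the point here is only the shape of the comparison: one difference score per student, tested against a mean of zero.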
For ethical reasons, students completed the feedback survey anonymously and no names were used at any point. A total of 219 students completed the feedback survey electronically. The number of students who attended the SI sessions where the intervention strategy was put in place varied from test to test.

Results

In what follows, the results of the first stage of the action research will be given. These results were obtained from the feedback survey that was done electronically on eFundi (a Sakai LMS) at the end of the semester, as well as from the students’ performance, for which the t-test results will be given.

1. What are students’ perceptions of the value of SI, in particular in assessment?

With the aim of validating the responses received in the survey, the students who completed the online survey were asked whether they actually attended SI sessions and how often. Only 15% of the students who responded in this survey had never attended SI sessions. A total of 85% of the respondents did attend the sessions, even though some attended more often than others. It can therefore be concluded that the results from this survey reflect students’ perceptions accurately. In gauging the students’ perceptions of SI, they were asked to indicate to what extent they think they would make use of the SI services in the future. Their responses are shown in the chart that follows.

Figure 1: Future use of SI service (Likert responses to whether respondents would use SI again: agreement 87% in total, made up of 56% and 31% for the two agree categories; undecided 8%; the remaining 2% and 3% in the two disagree categories)

Their experience of SI as first years encouraged 87% of the respondents to indicate that they will continue to make use of this service.
In order to get a better idea of how the students were helped by attending SI sessions, they were given a list of possible areas and could select as many options as they thought applicable in terms of their personal experience. The following shows the percentage of respondents who selected each option.

Figure 2: The value of SI sessions as perceived by students (“The SI sessions helped me to …”: understand the work better, 80%; improve my marks, 79%; understand test questions better, 73%; get new ideas on how to study, 61%; improve my test writing skills, 54%; learn from other students, 54%; network with other students, 43%; work with other students, 36%)

This graph gives a clear picture of the variety of areas in which students feel they were assisted by attending SI sessions. In terms of the specific feedback strategy under investigation in this study, it is evident that the test feedback made a difference. Students indicated that the SI sessions helped them to improve their test writing skills (54%), their understanding of test questions (73%) and their overall understanding of the work (80%), which also resulted in better performance (79%). These results concur with previous research that found that quality feedback can have a positive influence (Butler & Roediger, 2008; Lizzio & Wilson, 2008). It also indicated that the use of SIs in providing feedback in the assessment process helped students move away from the random guessing associated with MCQs (inky, pinky or ponky?) to understanding the questions and the different possibilities (A, B and C) as they developed test-taking skills. The survey also included two separate questions that dealt with this particular feedback intervention. After every test, I posted a list of student numbers of the students in need, who were advised to attend SI sessions before the next test.
Students were asked to indicate whether this practice encouraged them to attend sessions and whether attendance helped them to improve their marks. The percentages of responses, from strongly disagree to strongly agree, were as follows:

• The list encouraged me to attend: 4 / 7 / 15 / 24 / 50
• Attendance improved my marks: 3 / 6 / 12 / 28 / 51

Figure 3: Students’ perceptions of feedback strategy

Being in a position of need after a test and receiving the directive and advice to attend sessions did encourage students and helped them to consequently improve their marks. The last question in the online survey asked students to give feedback on how the SI sessions helped them in their journey as first-year students. The themes that emerged from these responses support the results of the preceding questions, and also give some more insight and possible avenues to explore in future research. In terms of the specific intervention which is the focus of this article, the following themes were identified:
• Improvement in test-writing skills
• Better performance in tests
• Enhanced understanding of content and questions
• Increased confidence in approaching MCQ tests
To illustrate the perception that the SI assistance was valuable in assessment and in improving marks, here are a few quotes from students:
“My SI always made it easy and normal for us to participate in sessions without being ashamed.
My marks improved drastically, I went from 46% to 48% then from 48% to 64% and then I got a distinction on my last test 88%.”
“The SI helped me to improve from zero to hero.”
“SI sessions are very informative and guide you on test writing skills and what to actually look at when preparing for tests and exams.”
“The SI helped on how to tackle the multiple-choice questions, how to prepare for the test and also to be able to understand the questions on the test.”
“It helped me understand how to interpret questions and understand them to choose correct answers during my tests.”
“SI helped me to have better understanding about this module. At first I failed, and again I failed second test. After that I was advised and convinced to attend the SI. Since I started attending SI I was doing well with my tests and I started to love psychology.”
Thematic analysis of the students’ responses to the question ‘How did SI sessions help you in your journey as first year or doing first year psychology?’ yielded the following additional themes. Some quotes are given to illustrate these themes.
Themes:
• Possibility to use own language / mother tongue
• Opportunity to ask questions
• Cope better with workload and pace in class
• Feel cared for
• Encourage study outside the class / sessions
• Provide practical, relevant examples
• Improve self-esteem, believe in self
• Transferral of skills
• Correct reasoning
• Improve study methods

Illustrative quotes:
“It encouraged me to be positive and to believe in myself.”
“It helped to correct the mistakes and wrong interpretation of concepts.”
“She also helped me to apply her advices on other modules, so that I can perform well.”
“The SI session has helped me to relax and enjoy varsity life in a good manner, every one say varsity life is difficult and people fail and that no one cares whether you pass or not but that’s not true, people are caring here.”
“Allowing everyone to ask questions and in some moments we used our own language.”
“At first I didn’t hear the lecturer because I had problems with English. Some SI made it easy for me to understand and gave me the skills to apply in class for understanding.”

Figure 4: Value of SI: themes

2. What effect does the intervention have in improving students’ performance?

By using a dependent t-test with paired samples, the pre-intervention and post-intervention test performance scores were compared to determine whether students’ performance improved as a result of the intervention strategy. Since attendance was voluntary, some students attended whilst others did not. Comparing these two groups enabled me to link the difference to the intervention strategy implemented. Since non-random sampling was used and attendance was voluntary, statistical inferences about the population cannot be drawn. Therefore effect sizes, more specifically Cohen’s d, were calculated to indicate the practical significance of any differences found. According to Ellis and Steyn (2003), a small effect would be d=0.2, a medium effect d=0.5 and a large effect d=0.8.
This could also be indicated as practically non-significant, practically visible and practically significant, respectively.

Table 1: Results of t-test

SI session     Assessment             Mean    Standard deviation   Effect size
Attended       Early detection quiz   44.51   11.034               1.18
               Test 1                 57.53   10.451
Not attended   Early detection quiz   43.95   10.815               0.91
               Test 1                 53.74   12.488
Attended       Test 1                 41.16    6.36                1.08
               Test 2                 48.00   11.49
Not attended   Test 1                 40.19    5.98                0.42
               Test 2                 42.69   10.08
Attended       Test 2                 40.24    6.371               1.37
               Semester Test          48.94    8.771
Not attended   Test 2                 39.53    6.511               0.69
               Semester Test          44.03    8.848

Based on the effect sizes of 1.18, 1.08 and 1.37, the difference in the test scores of the students attending the SI sessions is practically significant, with performance improving in each following assessment (44.51 to 57.53; 41.16 to 48.00; and 40.24 to 48.94). The test scores of the students NOT attending the SI sessions improved much less, as indicated by the smaller effect sizes of 0.91, 0.42 and 0.69. Thus one can conclude that the intervention did have the desired effect.

Figure 5: Effect sizes indicating practical significance

This graph portrays the influence of the SI sessions, and in particular the intervention in the form of the feedback strategy that was offered during the sessions, by comparing the effect sizes of the attending and non-attending groups. There is a notable difference in terms of performance between the group that attended SI and the group that did not attend.
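To make the effect-size calculation concrete, the following is a minimal Python sketch of Cohen’s d for dependent (paired) samples, using one common formulation (mean of the paired differences divided by their standard deviation). The pre- and post-intervention scores are hypothetical, purely for illustration; they are not the study’s data. Only the interpretation thresholds (0.2 / 0.5 / 0.8) come from Ellis and Steyn (2003) as cited above.

```python
from statistics import mean, stdev

# Hypothetical pre- and post-intervention test scores for eight students
# (illustrative only; NOT the study's data).
pre = [44, 40, 38, 47, 42, 39, 45, 41]
post = [58, 43, 40, 61, 45, 44, 59, 42]

# Cohen's d for dependent (paired) samples, in one common formulation:
# the mean of the paired differences divided by their standard deviation.
diffs = [b - a for a, b in zip(pre, post)]
d = mean(diffs) / stdev(diffs)

# Interpretation thresholds from Ellis and Steyn (2003):
# 0.2 = small, 0.5 = medium, 0.8 = large (practically significant).
effect = "large" if d >= 0.8 else "medium" if d >= 0.5 else "small"
print(f"Cohen's d = {d:.2f} ({effect} effect)")
```

With these illustrative scores the script reports a large effect (d ≈ 1.19), in the same range as the attending groups in Table 1; since the study’s samples were non-random, such a value indicates practical rather than statistical significance.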
These results give some indication that this type of intervention can play a valuable role in assisting students in understanding the assessment process and improving their performance, and they concur with other research showing that SI can be effective in improving students’ performance (Kilpatrick et al., 2013; Malm, Bryngfors & Mörner, 2015; Paloyo, Rogan & Siminski, 2016; Summers et al., 2015).

Discussion

In a feedback survey, students were asked questions about SI in general, as well as about the specific intervention strategy. Responses in the feedback survey indicated an overall positive perception of this practice. Students were asked how often they attended SI sessions and also to indicate how the sessions helped them. The responses chosen by the highest percentage of students are linked to the feedback intervention, indicating that the strategy had a positive influence. The sessions are also believed to allow students to work and network with other students and to learn from them, in line with the purpose of the collaborative-learning SI model (Arendale, 1994). The fact that struggling students are specifically reminded of the availability of SI sessions and advised to attend also appears to encourage attendance of SI sessions. From the findings in the open-ended question, it is clear that the SI sessions played a big role in assisting the students in understanding assessment, which confirms findings in other studies (Malm et al., 2012; Ribera et al., 2012). In addition, interesting new themes emerged from this data that would allow for further exploration in the next cycle. Keeping the student profile in mind, language seems to play an important role, and the fact that some SIs are able to communicate in the students’ mother tongue might play a vital role in the success of this strategy.
Students’ performance in subsequent tests was compared and the results seem to indicate that attending SI sessions was mostly associated with improved test results. These results can therefore inform other lecturers teaching large classes and contribute to quality enhancement in assessment.

Reflection: Limitations of the Study

It is vital to be aware of any limitations in a study. In the action research process, it is also important to reflect on every action in a cycle and determine how practice can be improved and what else can be done. This has been an exciting learning process for me as the lecturer. There are several limitations, both in terms of the methodology and research and in terms of the intervention itself. One limitation of this study is that it was conducted on a small scale, within one class in one specific context. This means that one cannot generalise or assume that it would have similar results in a different context. However, as part of a teaching approach, these principles might be deemed valuable to lecturers in similar situations, experiencing similar problems. As far as the t-test results are concerned, this study only followed the students who were struggling and did not consider the impact of the intervention on the other students, whether average or good. This could be addressed in the second cycle. In terms of the intervention, it has to be mentioned that it is rather time-consuming and requires dedication. The time spent with the SIs after every test to model the feedback process is considerable. However, it is still much less time-consuming than giving the feedback in a large class or drafting detailed individualised written feedback on all the questions in every test. The added value of this process for both the SIs and the students should also be taken into account when considering this option.
The advantage of having done this with the first group of students is that SIs for the next year will already have experience of this process (having been in the sessions) and will have been exposed to different models (the different SIs whose sessions they attended) before they start modelling the behaviour in sessions to the next group of first years. This prior experience also makes my modelling easier and quicker, since they are already familiar with the process. Having experienced this effect, I do believe that it can be a sustainable process that can help students develop.

Second Cycle of the Study

The focus in this research was on the students in need. In subsequent cycles, the other students could also be included to see whether SI feedback helped to improve their test-taking skills and enhance their overall performance in the module. Another approach that could be considered is to start the feedback process by giving students detailed written feedback for the online quizzes while still continuing with the modelling through the SIs after the tests. In terms of assessing students’ as well as SIs’ experience of the process, more qualitative data will be collected in the next cycle. This could be done by having focus group interviews with some of the students, but also with the SIs, in order to determine what the SIs themselves gained from being involved in this process. Did they also develop skills that helped them in their own studies? Investigating the transferability of these skills to other modules, by asking students whether the intervention helped them in other modules as well, will also add to understanding the value of this practice. Exploring the development of meta-cognitive skills as well as other possible influences (like the role of language) will further extend our understanding of the role and value of this intervention.
In the second cycle, the results of the first cycle will be presented to the new group of first-year students as motivation for them to attend SI sessions, since Goldstein, Sauer and O’Donnell (2014) found that students’ perceptions of the value of SI sessions can influence their motivation and increase their attendance. Based on the work of Quinton and Smallbone (2010), and supported by the findings of Boud and Molloy (2013), I can also consider asking students to reflect on the feedback with the purpose of helping them to apply their learning in a feed-forward into the next assessment, developing self-regulation in the process.

Conclusion

In this paper, I discussed an approach to giving valuable feedback in the context of a large class by using Supplemental Instruction and modelling. The results of this study showed that the intervention seems to improve students’ performance, and that students had a positive perception of the process. SI can play a valuable role in the assessment process in a large class, especially in giving quality feedback on assessment that allows students to learn test-writing skills and develop their reasoning, but also to understand the content better. Instead of using “inky, pinky, ponky” strategies to answer MCQs, students were empowered to understand the different options given in A, B and C and make the appropriate choice. These results can inform other lecturers’ practice in teaching large classes, and contribute to quality enhancement in assessment and better support for students. Even though the study was done in a very specific context and within a psychology module, this strategy could also be used in other contexts and disciplines.
Acknowledgements This paper was developed with the support of funding from the DHET NCTDG Project: “The improvement of teaching and learning in South African universities through researching and evaluating TDG projects in the First Year Experience (FYE) initiatives, Tutorials, Mentoring and Writing Retreat.” A word of thanks goes to Elda Lyster, my mentor in this project, for the help and guidance in writing this article. References Arendale, D. (1993). Foundation and theoretical framework for Supplemental Instruction. Supplemental Instruction: Improving first-year student success in high-risk courses, 2, 19–26. Arendale, D. (1994). Understanding the supplemental instruction (SI) model. New Directions for Teaching and Learning, 60(4), 11–22. https://doi.org/10.1002/tl.37219946004 Blanc, R.A., DeBuhr, L.E. & Martin, D.C. (1983). Breaking the attrition cycle: The effects of supplemental instruction on undergraduate performance and attrition. The Journal of Higher Education, 54(1), 80–90. https://doi.org/10.1080/00221546.1983.11778153 Boud, D. & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2 012.691462 Brady, A.-M. (2005). Assessment of learning with multiple-choice questions. Nurse Education in Practice, 5(4), 238–242. https://doi.org/10.1016/j.nepr.2004.12.005 Butler, A.C. & Roediger, H.L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36(3), 604–616. https://doi.org/10.3758/ MC.36.3.604 Carless, D., Salter, D., Yang, M. & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407. https://doi.org/10.1080/03075071003642449 Coletti, K.B., Wisniewski, M.E.O., Shapiro, M.R., DiMilla, P.A., Reisberg, R. & Covert, M. (2014). 
Correlating freshman engineers’ performance in a general chemistry course to their use of supplemental instruction. Paper presented at the Proceedings of the American Society for Engineering Education 2014 Annual Conference and Exhibition.
Congos, D.H. & Schoeps, N. (1998). Inside supplemental instruction sessions: One model of what happens that improves grades and retention. Research and Teaching in Developmental Education, 47–61.
Cross, M. & Carpentier, C. (2009). ‘New students’ in South African higher education: institutional culture, student performance and the challenge of democratisation. Perspectives in Education, 27(1), 6–18.
Dawson, P., Van der Meer, J., Skalicky, J. & Cowley, K. (2014). On the effectiveness of supplemental instruction: A systematic review of supplemental instruction and peer-assisted study sessions literature between 2001 and 2010. Review of Educational Research, 84(4), 609–639. https://doi.org/10.3102/0034654314540007
Delaere, D. & Everaert, P. (2011). Performance on multiple choice and essay examinations. University of Ghent Press.
DHET (Department of Higher Education and Training). (2013). White Paper for Post-School Education and Training: Building an expanded, effective and integrated post-school system. Pretoria, South Africa: Department of Higher Education and Training.
Ehrenberg, R.G., Brewer, D.J., Gamoran, A. & Willms, J.D. (2001). Class size and student achievement. Psychological Science in the Public Interest, 2(1), 1–30. https://doi.org/10.1111/1529-1006.003
Ellis, S. & Steyn, H. (2003).
Practical significance (effect sizes) versus or in combination with statistical significance (p-values): research note. Management Dynamics: Journal of the Southern African Institute for Management Scientists, 12(4), 51–53.
Etter, E.R., Burmeister, S.L. & Elder, R.J. (2001). Improving student performance and retention via supplemental instruction. Journal of Accounting Education, 18(4), 355–368. https://doi.org/10.1016/S0748-5751(01)00006-9
Fayowski, V. & MacMillan, P. (2008). An evaluation of the supplemental instruction programme in a first year calculus course. International Journal of Mathematical Education in Science and Technology, 39(7), 843–855. https://doi.org/10.1080/00207390802054433
Goldstein, J., Sauer, P. & O’Donnell, J. (2014). Understanding factors leading to participation in supplemental instruction programs in introductory accounting courses. Accounting Education, 23(6), 507–526. https://doi.org/10.1080/09639284.2014.963132
Guo, R., Palmer-Brown, D., Lee, S.W. & Cai, F.F. (2014). Intelligent diagnostic feedback for online multiple-choice questions. Artificial Intelligence Review, 42(3), 369–383. https://doi.org/10.1007/s10462-013-9419-6
Hensen, K.A. & Shelley, M.C. (2003). The impact of supplemental instruction: Results from a large, public, midwestern university. Journal of College Student Development, 44(2), 250–259. https://doi.org/10.1353/csd.2003.0015
Higgins, K.A. & Shelley, M.C. (2003). Exploring the potential of multiple-choice questions in assessment. Learning & Teaching in Action, 2(1), 1–16.
Hizer, S.E., Schultz, P. & Bray, R. (2017). Supplemental Instruction Online: As Effective as the Traditional Face-to-Face Model? Journal of Science Education and Technology, 26(1), 100–115. https://doi.org/10.1007/s10956-016-9655-z
Hornsby, D.J. & Osman, R. (2014). Massification in higher education: large classes and student learning. Higher Education, 67(6), 711–719. https://doi.org/10.1007/s10734-014-9733-1
Huang, L., Roche, L.R., Kennedy, E.
& Brocato, M.B. (2017). Using an Integrated Persistence Model to Predict College Graduation. International Journal of Higher Education, 6(3), 40. https://doi.org/10.5430/ijhe.v6n3p40
Iahad, N., Dafoulas, G.A., Kalaitzakis, E. & Macaulay, L.A. (2004). Evaluation of online assessment: The role of feedback in learner-centered e-learning. Proceedings of the 37th Annual Hawaii International Conference on System Sciences. https://doi.org/10.1109/HICSS.2004.1265051
Jacobs, G., Hurley, M. & Unite, C. (2008). How learning theory creates a foundation for SI leader training. Journal of Peer Learning, 1(1), 6–12.
Jawitz, J. (2013). The challenge of teaching large classes in higher education in South Africa: a battle to be waged outside the classroom. In: D.J. Hornsby, J. De Matos-Ala & R. Osman (Eds.), Large-Class Pedagogy – Interdisciplinary perspectives for quality higher education (pp. 137–146). Stellenbosch: Sun Press. https://doi.org/10.18820/9780992180690/09
Jennings, D. (2012). The design of multiple choice questions for assessment. University College, Dublin, Ireland.
Kayaoglu, M.N. (2015). Teacher researchers in action research in a heavily centralized education system.
Educational Action Research, 23(2), 140–161. https://doi.org/10.1080/09650792.2014.997260
Kemmis, S. (2009). Action research as a practice-based practice. Educational Action Research, 17(3), 463–474. https://doi.org/10.1080/09650790903093284
Kilpatrick, B.G., Savage, K.S. & Wilburn, N.L. (2013). Supplemental instruction in intermediate accounting: An intervention strategy to improve student performance. In: D. Feldmann & T.J. Rupert (Eds.), Advances in Accounting Education: Teaching and Curriculum Innovations (pp. 153–169). United Kingdom: Emerald Group Publishing Limited.
Knight, P.T. (2002). Summative assessment in higher education: practices in disarray. Studies in Higher Education, 27(3), 275–286. https://doi.org/10.1080/03075070220000662
Kochenour, E., Jolley, D., Kaup, J. & Patrick, D. (1997). Supplemental instruction: An effective component of student affairs programming. Journal of College Student Development, 38(6), 577.
Krugel, R. & Fourie, E. (2014). Concerns for the language skills of South African learners and their teachers. International Journal of Education Science, 7(1), 219–228.
Latino, J.A. & Unite, C.M. (2012). Providing academic support through peer education. New Directions for Higher Education, 157, 31–43. https://doi.org/10.1002/he.20004
Laycock, D. & Long, M. (2009). Action Research? Anyone can! IBSC Global Action Research Project. Retrieved 2 December 2012 from http://drjj.uitm.edu.my/DRJJ/MATRIC2010/5.%20Anyone_can_Action_Research-DRJJ-02022010.pdf
Lindsay, K., Boaz, C., Carlsen-Landy, B. & Marshall, D. (2017). Predictors of Student Success in Supplemental Instruction Courses at a Medium Sized Women’s University. International Journal of Research in Education and Science, 3(1), 208–217.
Lizzio, A. & Wilson, K. (2008). Feedback on assessment: students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education, 33(3), 263–275. https://doi.org/10.1080/02602930701292548
Malau-Aduli, B.S. & Zimitat, C. (2012).
Peer review improves the quality of MCQ examinations. Assessment & Evaluation in Higher Education, 37(8), 919–931. https://doi.org/10.1080/02602938.2011.586991
Malm, J., Bryngfors, L. & Mörner, L.-L. (2012). Supplemental instruction for improving first year results in engineering studies. Studies in Higher Education, 37(6), 655–666. https://doi.org/10.1080/03075079.2010.535610
Malm, J., Bryngfors, L. & Mörner, L.-L. (2015). The potential of Supplemental Instruction in engineering education – helping new students to adjust to and succeed in University studies. European Journal of Engineering Education, 40(4), 347–365. https://doi.org/10.1080/03043797.2014.967179
Maree, K. (2007). First steps in research. Pretoria, South Africa: Van Schaik Publishers.
Martin, D.C. & Arendale, D.R. (1992). Supplemental Instruction: Improving First-Year Student Success in High-Risk Courses. The Freshman Year Experience: Monograph Series Number 7. Columbia, S.C.: South Carolina University.
McCarthy, A., Smuts, B. & Cosser, M. (1997). Assessing the effectiveness of supplemental instruction: A critique and a case study. Studies in Higher Education, 22(2), 221–231. https://doi.org/10.1080/03075079712331381054
McGuire, S.Y. (2006).
The impact of supplemental instruction on teaching students how to learn. New Directions for Teaching and Learning, 106, 3–10. https://doi.org/10.1002/tl.228
McNiff, J. (2013). Action research: Principles and practice. Abingdon, U.K.: Routledge.
Mhlongo, G.J. (2014). The impact of an academic literacy intervention on the academic literacy levels of first year students: The NWU (Vaal Triangle Campus) experience. Potchefstroom Campus, North West University, South Africa.
Mulryan-Kyne, C. (2010). Teaching large classes at college and university level: challenges and opportunities. Teaching in Higher Education, 15(2), 175–185. https://doi.org/10.1080/13562511003620001
Nicol, D. (2009). Assessment for learner self-regulation: enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335–352. https://doi.org/10.1080/02602930802255139
Ning, H.K. & Downing, K. (2010). The impact of supplemental instruction on learning competence and academic performance. Studies in Higher Education, 35(8), 921–939. https://doi.org/10.1080/03075070903390786
Okun, M.A., Berlin, A., Hanrahan, J., Lewis, J. & Johnson, K. (2015). Reducing the grade disparities between American Indians and Euro-American students in introduction to psychology through small-group, peer-mentored, supplemental instruction. Educational Psychology, 35(2), 176–191. https://doi.org/10.1080/01443410.2013.849324
Palmer, E.J. & Devitt, P.G. (2007). Assessment in higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? BMC Medical Education, 7(49), 1–7. https://doi.org/10.1186/1472-6920-7-49
Paloyo, A.R., Rogan, S. & Siminski, P. (2016). The effect of supplemental instruction on academic performance: An encouragement design experiment. Economics of Education Review, 55, 57–69. https://doi.org/10.1016/j.econedurev.2016.08.005
Price, J., Lumpkin, A.G., Seemann, E.A. & Bell, D.C. (2012).
Evaluating the impact of supplemental instruction on short- and long-term retention of course content. Journal of College Reading and Learning, 42(2), 8–26. https://doi.org/10.1080/10790195.2012.10850352
Quinton, S. & Smallbone, T. (2010). Feeding forward: using feedback to promote student reflection and learning – a teaching model. Innovations in Education and Teaching International, 47(1), 125–135. https://doi.org/10.1080/14703290903525911
Ribera, A., BrckaLorenz, A. & Ribera, T. (2012). Exploring the fringe benefits of Supplemental Instruction. Paper presented at the Association for Institutional Research Annual Forum, New Orleans, L.A.
Ryan, T.G. (2013). The scholarship of teaching and learning within action research: Promise and possibilities. I.E.: Inquiry in Education, 4(2), 3.
Scharf, E.M. & Baldwin, L.P. (2007). Assessing multiple choice question (MCQ) tests – A mathematical perspective. Active Learning in Higher Education, 8(1), 31–47. https://doi.org/10.1177/1469787407074009
Scott, L. (2015). English lingua franca in the South African tertiary classroom: recognising the value of diversity. Stellenbosch University, South Africa.
Summers, E.J., Acee, T.W. & Ryser, G.R. (2015).
Differential Benefits of Attending Supplemental Instruction for Introductory, Large-Section, University US History Courses. Journal of College Reading and Learning, 45(2), 147–163. https://doi.org/10.1080/10790195.2015.1030516
Taras, M. (2006). Do unto others or not: equity in feedback for undergraduates. Assessment & Evaluation in Higher Education, 31(3), 365–377. https://doi.org/10.1080/02602930500353038
Tarrant, M., Ware, J. & Mohammed, A.M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis. BMC Medical Education, 9(1), 40. https://doi.org/10.1186/1472-6920-9-40
Terrion, J.L. & Daoust, J.-L. (2011). Assessing the impact of supplemental instruction on the retention of undergraduate students after controlling for motivation. Journal of College Student Retention: Research, Theory & Practice, 13(3), 311–327. https://doi.org/10.2190/CS.13.3.c
Tinto, V. (2014). Tinto’s South Africa lectures. Journal of Student Affairs in Africa, 2(2), 5–28. https://doi.org/10.14426/jsaa.v2i2.66
Van Rooy, B. & Coetzee-Van Rooy, S. (2015). The language issue and academic performance at a South African University. Southern African Linguistics and Applied Language Studies, 33(1), 31–46. https://doi.org/10.2989/16073614.2015.1012691
Wilson, B. & Rossig, S. (2014). Does Supplemental Instruction for Principles of Economics improve outcomes for traditionally underrepresented minorities? International Review of Economics Education, 17, 98–108. https://doi.org/10.1016/j.iree.2014.08.005
Yonker, J.E. (2011). The relationship of deep and surface study approaches on factual and applied test-bank multiple-choice question performance. Assessment & Evaluation in Higher Education, 36(6), 673–686. https://doi.org/10.1080/02602938.2010.481041

How to cite: Erasmus, M. (2017). From Inky Pinky Ponky to Improving Student Understanding in Assessment: Exploring the Value of Supplemental Instruction in a Large First-Year Class.
Journal of Student Affairs in Africa, 5(2), 33–53. DOI: 10.24085/jsaa.v5i2.2701