11 December 2009, Vol. 1, No. 1 AJHPE

Article

Problem-based learning (PBL) is now an accepted component of medical school programmes in many parts of the world, such as the USA, Canada, the UK, the Middle East, Asia1-3 and Africa, including South Africa. In PBL, the small-group tutorial environment is believed not only to support the development of knowledge of the disciplines included in the course but also to foster results-orientated professional skills such as teamwork, clinical reasoning, communication and information literacy.4,5 In this system, tutors and experts advise students, enabling them to actively and independently develop learning skills for the processing, organisation, understanding, evaluation and application of scientific and clinical information to real-life situations. PBL is also believed to produce lifelong self-directed learners.6 The MB ChB programme at Walter Sisulu University (WSU) follows a curriculum designed on the 'SPICES' model, i.e. a student-centred, problem-based, integrated, community-orientated model with electives and a systematic organisation.

Student assessment methods in PBL are diverse and include: modified essay questions (MEQs), individualised process assessment (IPA), objective structured practical/clinical examination (OSPE/OSCE), tutorial continuous assessment (TUT), multiple choice questions (MCQs), one best answer questions (OBAs), extended matching questions (EMQs), and short/long essay questions (SEQ/LEQs). Studies have been conducted to analyse students' comparative performance in these different modes of assessment, identifying the pros and cons.7

The MB ChB III programme at WSU integrates the four broad disciplines of anatomical pathology, pharmacology, chemical pathology and microbiology. Students register for integrated modules arranged in four blocks; the assessment is also integrated, and marks are allocated to blocks, not to individual disciplines.
The main modes of assessment in this programme are MEQs, TUTs, IPAs and OSPEs.

Background. Problem-based learning (PBL) is now an accepted component of many medical school programmes worldwide. Our university also follows the PBL 'SPICES' model for MB ChB III. The assessment modalities used are the modified essay question (MEQ), objective structured practical examination (OSPE), individualised process assessment (IPA) and tutorial continuous assessment (TUT). This study compared the students' performance in the individual assessment components with the final mark to determine the correlation between these parameters.

Materials and methods. The study was retrospective, descriptive and analytical, based on the integrated marks of all the MB ChB III students at Walter Sisulu University (WSU) in 2007. Assessment marks were stratified according to blocks and the different types of assessment (MEQ, TUT, OSPE, IPA). Regression analysis was used to compute and scrutinise these vis-à-vis their correspondence with the final marks for each block.

Results. Three hundred and seventy-nine block assessment marks of 96 students from the 4 blocks of MB ChB III were analysed, and the correlation between the assessment components and the final mark was compared. Regression analysis showed good correlation between assessment modality and final mark for the MEQs (r=0.93, 0.93, 0.94, 0.96), followed by the OSPEs (r=0.71, 0.70, 0.76, 0.77) and the IPAs (r=0.62, 0.51, 0.68, 0.77). However, the correlation was not significant for the TUT.

Conclusion. There was good correlation between the students' performance in the majority of assessment modalities and the final mark in the different blocks of the MB ChB III examination. There may be a need to make tutorial assessment methods more objective, partly through additional tutor training.
Correlation between different PBL assessment components and the final mark for MB ChB III at a rural South African university

Mirta E Garcia-Jardon, MD, MSc, Associate Professor, Department of Anatomical Pathology
Ernesto V Blanco-Blanco, MD, MSc, Professor and Head, Department of Chemical Pathology
Vivek G Bhat, MB BS, MD, Senior Lecturer, Department of Medical Microbiology
Sandeep D Vasaikar, MD, Associate Professor, Department of Medical Microbiology
Enoch N Kwizera, MB ChB, MSc, PhD, Professor and Head, Department of Pharmacology
Andrez Stepien, MB ChB, PhD, Professor and Head, Department of Anatomical Pathology

Faculty of Health Sciences, Walter Sisulu University, Mthatha, Eastern Cape, South Africa
Corresponding author: Vivek Bhat (vivekbhat2005@yahoo.com)

Table I. Calculation of the final mark

Final mark                     Component        Weighting
Continuous assessment (60%)    MEQ 1 + MEQ 2    45%
                               TUT              15%
End-of-block exam (40%)        OSPE             10%
                               IPA              30%

Computation of the final mark for each block involves both the continuous assessment component and the end-of-block exam components. The weighting of the different assessment components in the calculation of the final mark is shown in Table I. The pass mark is 50%, and students scoring ≥75% pass with distinction.

The objective of this study was to determine the correlation between the different components of the continuous assessment and the final examination mark with regard to students' performance in each of the four blocks. This would provide insight into the formative and summative performance-related aspects of our PBL system.

Materials and methods
The study was retrospective, descriptive and analytical, based on the integrated marks of all the MB ChB III students at WSU in 2007. Continuous assessment and end-of-block components were determined according to the weighting shown in Table I, and summed to give the final mark.
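The weighting in Table I can be illustrated with a minimal sketch (the function and variable names are our own for illustration, not taken from the curriculum documents):

```python
# Weightings from Table I: continuous assessment (MEQ 45% + TUT 15% = 60%)
# and end-of-block exam (OSPE 10% + IPA 30% = 40%).
WEIGHTS = {"MEQ": 0.45, "TUT": 0.15, "OSPE": 0.10, "IPA": 0.30}

def final_mark(meq, tut, ospe, ipa):
    """Combine the four component marks (each out of 100) into the final block mark."""
    marks = {"MEQ": meq, "TUT": tut, "OSPE": ospe, "IPA": ipa}
    return sum(WEIGHTS[k] * marks[k] for k in WEIGHTS)

def outcome(mark):
    """The pass mark is 50%; students scoring >= 75% pass with distinction."""
    if mark >= 75:
        return "pass with distinction"
    return "pass" if mark >= 50 else "fail"
```

For example, hypothetical component marks of 80 (MEQ), 70 (TUT), 60 (OSPE) and 75 (IPA) give 0.45×80 + 0.15×70 + 0.10×60 + 0.30×75 = 75.0, a distinction.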
Continuous assessment comprises MEQs (scenario-based) and tutorials. The former are paper-and-pencil exams that test for content mastery across the blocks, covering the content of the four disciplines. Tutorials are small-group, case-based and student-centred learning sessions, conducted in two weekly sessions of 3 hours each. Mid-block formative assessment is done, and end-of-block summative assessment is weighted as reflected in Table I. The OSPE is a round of 'stations' measuring selected components of the block content. The IPA is an exercise that duplicates, for an individual student (with faculty examiners), the process carried out in tutorial groups.

Assessment marks were stratified according to blocks and the different types of assessment (MEQ, TUT, OSPE, IPA). Regression analysis was used to compute and scrutinise these vis-à-vis their correspondence with the final marks for each block, with the help of EPINFO 6 statistical software. The correlation coefficient (r) was used to assess the degree of dependence between each of the assessment components and the final mark.

Results
There were 96 students in the MB ChB III programme at WSU in 2007. A total of 379 block assessment marks with their respective 4 assessment types were compared.

Regression analysis showed good correlation between assessment modality and final mark for the MEQ (r=0.93, 0.93, 0.94, 0.96; p<0.001 for all values), followed by the OSPE (r=0.71, 0.70, 0.76, 0.77; p<0.001 for all values) and the IPA (r=0.62, 0.51, 0.68, 0.77; p<0.001 for all values). However, the correlation was not significant for the TUT. MEQ correlation with the final mark was the highest, followed by the OSPE, for blocks 1, 2, 3 and 4 respectively (Figs 1 - 5). Stratified analysis per block showed increasing positive correlation (for MEQ and OSPE) as the blocks progressed, the highest coefficient being that of the MEQ for block 4 (Fig. 5).
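The per-component correlation computed in the analysis above can be sketched as follows. This is a minimal illustration with made-up marks, not the study data, and the original analysis was carried out in EPINFO 6 rather than Python:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length mark series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical MEQ marks vs. final marks for one block (invented for illustration).
meq_marks = [55, 62, 70, 48, 81, 66]
final_marks = [58, 60, 72, 50, 79, 68]
r = pearson_r(meq_marks, final_marks)  # close to 1 for strongly related series
```

Stratifying by block then amounts to computing r once per block for each assessment component (MEQ, TUT, OSPE, IPA) against the final marks.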
Trends in the TUT and IPA block marks did not show a significant difference as the blocks progressed.

Fig. 1. Correlation between assessment components and final mark, block 1.
Fig. 2. Correlation between assessment components and final mark, block 2.
Fig. 3. Correlation between assessment components and final mark, block 3.
Fig. 4. Correlation between assessment components and final mark, block 4.

Discussion
The development of effective student assessment techniques in PBL is challenging because of its student-centred focus and emphasis on self-directed learning,8 which are in contrast to traditional learning systems. Effective assessment tools should be able to judge students' performance and progress through the course in a fair and objective manner. They must also ensure that students derive the maximum benefit from PBL and that the PBL process itself is being conducted effectively for the given environment.9 Some of the important principles of assessment are that students should be assessed in a context similar to that in which they learn, and that the assessment should be appropriate to the developmental level, the subject matter and the programme outcomes.9,10 At WSU an effort is made to approximate these principles by using the assessment methods described in this article.

The MEQs are a series of questions based on patient problems. They test the students' understanding and integration of concepts and their ability to relate these to patient problems, rather than testing mere factual recall. It is evident in the present study that students who performed well in the MEQs also tended to perform well in the end-of-course exams. This supports the idea that MEQs are a good way of assessing in-course performance.
In the TUTs, the tutor assesses the students' knowledge base, clinical reasoning and decision-making skills, self-directed learning, collaborative work, attitudes and professionalism.11 In this article we do not demonstrate a close correlation between the tutor assessments and the final course mark, although such a correlation has been demonstrated in previous work.12,13 One of the reasons could be the number of attributes or competencies that tutors are expected to assess at a time, as it may be difficult to assess many students objectively and simultaneously. It is also probable that some students do well when they have a set of learning issues to prepare from one case study for presentation and discussion, but then falter when confronted with the larger scope of entire systems in the end-of-block summative assessments. Additionally, there may be other variables that contribute to decreased reliability in tutorial assessment, including a lack of clarity regarding the true domains being assessed (i.e. skills related to the process of learning/self-directed learning versus acquisition of specific biomedical content), inadequate observation of relevant student performance, and a lack of support from teachers for the method itself or for the manner in which the assessment is implemented.5,14-16

The IPA 1 component consists of a long case with sequential disclosure of information. Students complete given 'tasks' and hand in their answers before the next part of the case/problem is given to them. This is followed by the IPA 2, a viva voce, which is a discussion-based, integrated examination. The OSPE is being increasingly used in many institutions for reasons such as objectivity and reliability.13,16 Like a practical MEQ, the OSPE has students rotate through a series of timed, 5-minute stations. At each station, they are given tasks that cover practical and clinical aspects of the four broad disciplines.
In the present study there was good correlation between these components and the final mark for the block, supporting the use of these assessment methods.

There was an increasing trend of correlation with the MEQs as the blocks progressed (shown by increasing correlation coefficients) compared with the other assessment modes, although the significance of this is unclear. It could represent an incidental finding, as it does not appear to convey any specific information regarding students' performance dynamics. There is a need for further evaluation of the different assessment tools in PBL, and for comparing and correlating them, to identify and implement objective and optimal assessment modalities in the dynamic PBL environment.

Conclusion
There was good correlation between the students' performance in the majority of assessment modalities and the final mark in the different blocks of the MB ChB III examination. This supports the use of this panel of examinations as a useful model for a PBL programme. There is a need to improve the quality of tutor assessments, which may be achieved by providing assessment training for PBL tutors.

Acknowledgements
The authors acknowledge the Walter Sisulu University Faculty of Health Sciences for providing the data used in the study.

Note. Permission to conduct the study and to publish the findings was requested and obtained from the Faculty of Health Sciences. There is no conflict of interest.

References
1. Kwan CY. What is problem-based learning (PBL)? It is magic, myth and mindset. CDTL Brief 2000; 3: 1-2.
2. Bligh J. Problem-based, small group learning. BMJ 1995; 311: 342-343.
3. Ravi Shankar P. Problem-based learning: The right direction for medical teaching? Med Princ Pract 2008; 17: 171-172.
4. Norman GR, Schmidt HG. The psychological basis of problem-based learning: a review of the evidence. Acad Med 1992; 67: 557-565.
5. Dalrymple KR, Wong S, Rosenblum A, Wuenschell C, Paine M, Shuler CF.
PBL Core Skills Faculty Development Workshop 3: Understanding PBL Process Assessment and Feedback via Scenario-Based Discussions, Observation, and Role-Play. J Dent Educ 2007; 71: 1561-1573.
6. Carrera LI, Tellez TE, D'Ottavio AE. Implementing a problem-based learning curriculum in an Argentinean medical school: Implications for developing countries. Acad Med 2003; 78: 798-801.
7. Norcini JJ, McKingley DW. Assessment methods in medical education. Teaching and Teacher Education 2007; 23: 239-250.
8. Harden RM, Sowden S, Dunn WR. Some educational strategies in curriculum development: the SPICES model. Med Educ 1984; 18: 284-297.
9. Waters R, McCracken M. Assessment and evaluation in problem-based learning. Georgia Institute of Technology. Available from www.succeed.ufl.edu/papers/fie97/fie97-010.pdf (accessed 21 April 2009).

Fig. 5. Correlation between assessment components and final mark.

10. Friedman BM. The role of assessment in expanding professional horizons. Med Teacher 2000; 22: 472-477.
11. Elizondo-Montemayor LL. Formative and summative assessment of the problem-based learning tutorial session using a criterion-referenced system. JIAMSE 2004; 14: 8-14.
12. Epstein RM. Assessment in medical education. N Engl J Med 2007; 356: 387-396.
13. Eva KW. Assessing tutorial-based assessment. Adv Health Sci Educ 2001; 6: 243-257.
14. Valle R, Petra L, Martinez-Gonzalez A, Rojas-Ramirez JA, Morales-Lopez S, Pina-Garza B. Assessment of student performance in problem-based learning tutorial sessions. Med Educ 1999; 33: 818-822.
15. Govaerts MJ, Van der Vleuten CP, Schuwirth LW, Muijtjens AM. Broadening perspectives on clinical performance assessment: rethinking the nature of in-training assessment. Adv Health Sci Educ Theory Pract 2007; 12: 239-260.
16. Natu MV, Singh T. Objective structured practical examination (OSPE) in pharmacology: students' point of view. Indian J Pharmacol 1994; 26: 188-189.