Biki Lepota
Biki.Lepota@umalusi.org.za
Umalusi (Council for Quality Assurance in General and Further Education and Training) & Department of Basic Education

Stephen Taylor
Umalusi (Council for Quality Assurance in General and Further Education and Training) & Department of Basic Education

Editorial

This special issue, framed by the theme Assessing the achievement of curriculum standards: An ongoing dialogue, is dedicated to peer-reviewed articles which were originally presented as papers at the 42nd International Association for Educational Assessment (IAEA) conference hosted by Umalusi, the Council for Quality Assurance in General and Further Education and Training, in Cape Town on 21-26 August 2016. The conference attracted more than 300 delegates from various educational institutions and organisations, representing over 40 countries spanning all the continents of the world. Approximately 120 papers, including four keynotes, were presented. The papers dealt with different aspects of the theme, focusing in particular on the alignment between curriculum standards, teaching and assessment on the one hand, and standardised testing and innovative ways of reporting on learner performance on the other.

The purpose of administering examinations is to use the marks obtained to make inferences about the skills and knowledge acquired by examinees during the teaching and learning process. What remains an issue of interest, however, is the lack of a common operational approach for linking assessment to curriculum standards. This is what informs the theme of this publication. In an attempt to draw a link between curriculum and assessment, Lolwana (2005: 69) put forward the argument that the curriculum articulates the standard to be attained in the examination. Stated differently, a good quality assessment draws on a good quality curriculum.

In the South African education system in particular, low or high education standards are interpreted in relation to pass rates. When the system produces high pass rates, suggestions surface that examinations might not have been set at a sufficiently high standard. More specifically, the most frequently asked questions are "Are standards dropping? Are the results real or have they been manipulated? How is our education system doing?" (Reddy, 2006: xii). Since these arguments revolve around pass rates rather than the content of examinations and whether what is examined is aligned with curriculum content, we find this line of argument questionable. The key question should be what the pass rates mean in terms of what learners know and can do. For example, Kanjee (2006: 80) argues against the use of marks or averages as predictors of education standards because "average pass rates provide a distorted picture pertaining to learner performance in key subject areas, such as mathematics, languages or science". Taylor and Taylor (2014) turn the debate around by arguing for the placement of teacher disciplinary knowledge, subject knowledge for teaching and classroom competence at the heart of the discourse. This is a different but important perspective in that it establishes a clear link between the quality of teaching, which can serve as an enabler of or a barrier to learning, and the achievement of curriculum standards.
The seven articles that appear in this special issue of Perspectives in Education collectively argue that if the gap between curriculum content and assessment standards is too great, this can have a negative effect on teaching and learning. The articles consider the matter of aligning assessment standards with curriculum standards from different perspectives.

The article by Prinsloo and Harvey discusses the utility of the Early Grade Reading Assessment (EGRA) instrument for determining improvements in learner language and literacy development in the lower levels of the schooling system. This is informed by a growing realisation of the importance of developing foundational literacy skills in the early grades of formal education. Their choice of the EGRA instrument was based on two considerations: the efficiency with which it can be administered and its adaptability to complex linguistic situations. The discussion of the instrument centres on two recent impact evaluations of teacher interventions in two provinces of South Africa. The first intervention, targeted at literacy development in English as a second language, was tested on two cohorts of grade 1, 4 and 7 learners in a selection of schools in Limpopo. The second intervention focused on literacy development in Setswana and was administered in the North West. With a reliability index of 0.90, the instrument proved highly reliable. The benefits of the evaluation tool, which include its usefulness, suitability, reliability, validity, reduction in learner anxiety levels and the assistance it offers teachers, are discussed at length. Based on the insights gained from the two interventions, the article makes specific recommendations on how the instrument can be amended and enhanced for future use.

Abrams, Varier and Jackson discuss assessment data as a lever for measuring the degree to which curriculum, instruction and assessment are accurately aligned. More specifically, the article reports on the results of a qualitative study conducted to establish how, and under what conditions, teachers in the United States of America (USA) use assessment data to strengthen subsequent teaching and learning. The study was based on multiple tape-recorded and transcribed interviews with 14 focus groups comprising 60 teachers from elementary and middle schools in the USA. Two key findings emerged from the study. Firstly, the study established a clear link between the culture of data use within a school and the actual utilisation of data by teachers; such a culture can be established through communities of practice. Secondly, it emerged that teachers' use of assessment data is influenced by the type of data, the sources thereof and the quality of evidence gathered.

The article by Kanjee and Moloi critiques the methods currently used in South Africa and other similar education systems to report on learner performance. It goes on to highlight the limitations of existing practices, key amongst which is the lack of descriptions of what learners know and are able to do. Consequently, they put forward an argument for considering the Angoff method in reporting learner performance, owing to its usefulness in providing information on what a learner knows and can do and its ability to show what learning gaps exist.
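In broad outline, and as an illustration only (the standard Angoff formulation, which may differ in detail from the variant applied by Kanjee and Moloi), each of J subject matter experts estimates, for every item i on an n-item assessment, the probability p_{ij} that a minimally competent learner would answer the item correctly. The cut-score is then the sum of the averaged item estimates,

C = \sum_{i=1}^{n} \frac{1}{J} \sum_{j=1}^{J} p_{ij},

and learners scoring at or above C are classified as having met the performance standard, with the item-level estimates helping to describe the knowledge and skills associated with each performance level.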
Drawing on data from the Annual National Assessments administered to grades 3 and 6 in English First Additional Language and mathematics, Kanjee and Moloi explore how subject matter experts can generate learner performance standards that provide information on what learning gaps exist and how to address them. One of the key findings is that the proposed Angoff method improves the reporting of large-scale assessment results by providing plausible cut-scores with a high degree of inter-rater reliability, which, in turn, can support teachers in addressing the specific learning needs of their learners.

Dennis Opposs' article shifts the dialogue slightly to focus on the challenges facing the validity and reliability of school-based assessment (SBA) in the context of the General Certificate of Secondary Education (GCSE) and A Levels in England. The article begins by providing a historical account of the reduction in the weighting of SBA in the determination of final grades in some subjects over the past 30 years. A key argument for the reduction in the use of SBA is the abundant evidence of learners submitting work that lacks originality and of learners receiving assistance from parents and teachers in completing their SBA tasks, all of which reduces the power of SBA marks to signal the true ability of learners. Concluding the article, Opposs considers how the decreased use of SBA has positively influenced the taught curriculum.

The final three articles add a statistical flavour to the dialogue. They illustrate how examination data could be used to align curriculum and assessment standards. In their article, Combrink, Scherman and Maree argue, in the context of high-stakes examinations, that it is essential that examinees be given standards-referenced feedback on the skills and knowledge they have gained. They maintain that this type of feedback enables learners to know how they can improve, while at the same time assisting teachers to fine-tune their teaching strategies in order to target curriculum standards that have not been achieved. They employ a Rasch analysis to determine the competency levels of learners in English, mathematics and natural sciences assessments administered to grade 8 to 11 learners. Based on the content and item difficulties, the authors are able to generate descriptions of the proficiency levels in the particular subject area. The analysis confirms that the Rasch Item Map method is a useful way of aligning assessments and curriculum standards, which, in turn, facilitates the identification of areas for improving teaching and learning in the subject areas concerned. Given that the current study is based on a small sample of 1113 learners, the article concludes by suggesting further studies with larger samples, as well as cross-validation studies.

Ojerinde, Popoola, Onyeneho and Egberongbe investigate the degree to which score tables obtained prior to and after the process of equating assessments are comparable. They used data from a subject called Use of English, which forms part of the Unified Tertiary Matriculation Examination (UTME) conducted by Nigeria's Joint Admissions and Matriculation Board (JAMB) to select qualifying students for entry into tertiary institutions. Specifically, data were drawn from the 2012 pre-test and the 2013 post-test. The rationale for equating is to allow the marks from different administrations to be given the same interpretation and use.
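To illustrate what equating achieves (shown here as a simple linear, mean-and-standard-deviation transformation for the sake of exposition, rather than the authors' actual procedure), a score x on one test form X can be placed on the scale of another form Y by

y = \frac{\sigma_Y}{\sigma_X}(x - \mu_X) + \mu_Y,

so that scores earned on different administrations can be read against a common scale.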
Considering the comparable results between the pre-equated and the post-equated models, they conclude by arguing for the continued use of the pre-equated model.

In their work, Moothedath, Chaporkar and Belur question the rationale for using marks obtained from norm-referenced feedback to reach conclusions about students' capabilities. Given the considerable administrative costs involved in fully implementing computer adaptive testing (CAT), they propose an evaluation method that mimics CAT's evaluation process. Using uncalibrated questions, they apply a 3-parameter logistic ogive model to simulate examinee ability relative to question difficulty, taking a guessing factor into account. The findings indicate that, compared to the conventional marks-based approach, their method produces better results.

It is our hope that much theoretical, methodological and pedagogical value will be gained from this scholarly work with regard to how best to achieve a greater degree of alignment between the curriculum, pedagogy and assessment.

Acknowledgements

Our sincere thanks are due to all the reviewers of articles for working hard under very tight deadlines to make the publication of this special issue possible. We would like to express our appreciation and thanks on behalf of the authors and readers of Perspectives in Education for your assistance.

References

Kanjee, A. 2006. Comparing and standardizing performance trends in the matric examinations using a matrix sampling design. In V. Reddy (Ed.). Marking matric: Colloquium proceedings. Pretoria: Human Sciences Research Council Press.

Lolwana, A. 2006. Comparing and standardizing performance trends in the matric examinations using a matrix sampling design. In V. Reddy (Ed.). Marking matric: Colloquium proceedings. Pretoria: Human Sciences Research Council Press.

Reddy, V. (Ed.). 2006. Marking matric: Colloquium proceedings. Pretoria: Human Sciences Research Council Press. pp. xii-xix.

Taylor, N. & Taylor, S. 2014. In N. Taylor, S. van der Berg & T. Mabogoane (Eds.). Creating effective schools. Cape Town: Pearson.