Stimulating Learning with Integrated Assessments in Construction Education

Imriyas Kamardeen (University of New South Wales, Australia)

Abstract

The quality of learning that students experience is heavily dependent on the effectiveness of course design. Assessments are a key component of course design, and students determine their level of involvement in a learning activity based on whether it is assessed or not. Assessments are therefore a powerful tool that lecturers can utilise to drive learning. However, designing effective assessments to stimulate learning is challenging in the presence of disciplinary and contextual dimensions. A case study approach is adopted to demonstrate how effective integrated assessment schemes may be developed and implemented for construction education. The scheme in the case study amalgamated case-based learning, online quizzes and adaptive eTests to provide a variety of assessments, aligned with lecture topics and contemporary real-world scenarios. It was found that the formative and summative tasks in the assessment scheme complemented each other, kept students constantly motivated and engaged in learning, and resulted in a good learning experience for them. The study provides evidence, valuable insights and practical tips for lecturers in similar degree programs on how they could modify the pedagogical styles in their courses for better learning experiences for students and improved teaching ratings for themselves.

Keywords: Pedagogy, Integrated assessment, Case-based learning, Online quiz, Adaptive eLearning

Introduction

Research shows that students regard assessments as more crucial than learning itself, and that they spend less than 10 percent of their time on non-assessed learning tasks. Students engage with subject matter based on whether a task will be evaluated (Elton & Johnston 2002; Lombardi 2008).
Kember and McNaught (2007) noted that assessments are critical to learning because students are largely driven and motivated by them. Hence, assessments need to be used as a means to achieve the ends: driving student learning and realising different learning outcomes. Brown (2004) reinforced that assessments are possibly the best tool that lecturers could utilise to drive student learning. Students can ignore a lecturer's teaching, but if they need a qualification they have to complete the assessments. Hence, it is worthwhile exploring how lecturers can achieve integration and constructive alignment to ensure that assessments help learning in their courses.

The way lecturers assess significantly influences how students learn. It is conventional practice in higher education to use in-class quizzes, homework, class tests, laboratory practicals and other similar tasks to evaluate what students have learnt. Usually some form of credit, accompanied by feedback, is given to students' work. The feedback can be short or detailed, depending on the task and class size; in large classes, elaborate feedback is often limited by resource constraints. Watkin (1998) criticised that while this form of summative assessment facilitates grading, it has failed to develop higher-order learning outcomes, as it leads students to value rote learning and mark accumulation over deep learning.

Over the past decades, alternative approaches to assessment have been practised by a number of scholars in different disciplines in a bid to utilise assessments as a toolkit to promote student learning (Tang & Biggs 1998; Carless 2002). Although there are multiple examples of alternative assessment approaches, they cannot be merely repeated in other courses. Brown (2004) claimed that context is of critical importance in assessment; an assessment approach that works well for one subject/discipline may not work equally well in another. Thus, lecturers who endeavour to offer authentic learning experiences to students must formulate tailored solutions that are appropriate and meaningful to their subject/discipline (Lombardi 2008).

To this end, this study aims to demonstrate how effective integrated assessment schemes can be developed to improve the quality of learning and teaching in construction education. The paper first explores the higher education literature to understand key considerations for designing learning-oriented assessment instruments. Second, it discusses how the knowledge acquired from the literature was operationalised in implementing an integrated assessment scheme in the context of a Construction Management degree program. Then students' views on the effectiveness of the assessment scheme for quality learning are expounded, followed by a conclusion.

Kamardeen, I. 2014, 'Stimulating Learning with Integrated Assessments in Construction Education', Australasian Journal of Construction Economics and Building, 14(3), 86-98.

Theoretical Background

Assessments in Higher Education

Dietel, Herman and Knuth (1991) defined assessment as "any method used to better understand the current knowledge that a student possesses". Boud (1990) articulated that assessments serve two key purposes: improving the quality of learning and measuring student performance. Additional uses of assessments include motivating students through the recognition of achievement and aiding administrative matters. Assessments are therefore a core element of instructional design.

The higher education literature categorises assessments into two types: summative (traditional) assessments and formative (alternative) assessments. The summative approach aims to judge a student's competency/proficiency at the completion of an instructional unit or part of it.
The most common summative assessment tools are a mid-unit quiz/test, an end-of-unit quiz/test/exam, a final essay/paper/drawing, a final presentation, a final project, etc. Formative assessments instead inform students of their learning progress towards the expected learning outcomes, and offer help and guidance to strengthen their areas of weakness and thereby improve their performance. Examples of formative assessments include problem-based learning, case studies, scenario-based learning, literature reviews, investigative project-based learning, portfolios, learning logs/journals, and simulations/role plays. Davies (2000) posited that formative assessments offer deep learning platforms as they promote active student engagement, encourage students to take risks, and let them learn by doing and by making mistakes. Additionally, formative assessments provide greater opportunities for lecturers to continually improve their teaching during an instructional period/unit: formative checks of students' understanding during the unit can lead a lecturer to change the instructional strategy if the initial method proves ineffective.

Law and Eckes (1995) argued that summative assessments are an easy tactic for evaluating student performance as they are more objective and consistent. However, several disadvantages of summative assessments have been reported in the literature. Bailey (1998) critiqued traditional summative assessments as inauthentic, decontextualised, homogeneous, single-occasion and speed-centred tests that provide no meaningful feedback to learners. Law and Eckes (1995) added that this form of assessment can only measure learners' ability to perform a task at a given time, and that the test score is not a good indicator of student progression. Simonson et al. (2000) stated that summative assessments only evaluate learners' lower-order cognition and thinking skills, such as memorisation and recall.
Summative assessments also force students to demonstrate their knowledge in a predefined manner (Brualdi 1998).

Formative assessments, in contrast, evaluate higher-order cognition and thinking abilities as well as problem-solving skills (Reeves 2000). In this form of assessment, students have adequate opportunities to demonstrate their ability, and the tactic is therefore more focused on student growth. For example, if a student fails to perform in a task at a given time, s/he will still have the opportunity to do so in a different scenario and at a different time (Dikli 2003). Simonson et al. (2000) discussed many other benefits of formative assessments. Because formative assessments are designed by simulating real-world scenarios, they provide students with opportunities to practise authentic tasks, apply their knowledge and skills in real-life settings, and work collaboratively. Moreover, lecturers are able to gain better insights into student learning. However, formative assessments demand more time and effort from both the lecturer and students (Law & Eckes 1995), and students may therefore not favour formative assessment tasks.

In summary, while summative assessments help lecturers with grading students and administrative tasks, they promote rote learning and lower-order cognition and thinking amongst students. Formative assessments, on the other hand, encourage deep learning with the aid of real-life contexts, and develop higher-order cognition and thinking within students. Students are highly driven and motivated by assessments (Kember & McNaught 2007).
Students will choose their learning style depending on the form of assessment: if an assessment rewards information retrieval, they will adopt rote learning and memorisation; if problem-solving ability is rewarded, they will focus on solving problems. Assessments should therefore be used as a means to an end: student learning. Moreover, in an educational setting grading students is as important as promoting learning. Hence, in order to obtain better outcomes for both students and the lecturer, an integrated assessment scheme should be practised, which balances using assessment to drive learning with using it to grade students.

Key Considerations for Designing Learning-oriented Assessments

It is hypothesised above that the adoption of an integrated assessment scheme would simultaneously be a catalyst for student learning and a gauge for evaluating student performance. Several assessment instruments suggested above can be combined to form an integrated assessment scheme. However, the selection of appropriate instruments is largely influenced by the context of the discipline and the course taught; what works well for one course/discipline may not work equally well in another. Lecturers who desire to offer authentic learning experiences to their students should formulate meaningful instruments, tailored to their courses. Although this is a challenging task, closely following the key principles discussed below will facilitate a smooth process.
Keppell and Carless (2006) conceptualised three components for a learning-oriented assessment design:

- Constructive alignment – learning outcomes, objectives, contents, assessment items and instructional design should be aligned; assessment tasks should be designed such that students' time and effort are spread evenly across the module study period rather than concentrated at the end; connections should be established between assessment tasks and real-life scenarios; and some degree of student choice should be allowed to stimulate motivation and engagement;
- Student involvement – tasks should encourage reflection, cooperation rather than competition, peer feedback, and self-evaluation. Gibbs and Simpson (2004) argued that for learning to be more effective, assessments should encourage learners to spend 'time on task' in and out of class; and
- Forward-looking feedback – feedback has to be less final and conclusive (Boud 1995), more interactive and forward-feeding (Carless 2002), and timely, with a possibility for remediation (Gibbs & Simpson 2004).

Seven further characteristics of assessments that help students to learn are postulated in the literature. These are:

- Fit-for-purpose – assessments need to be consistent with learning outcomes, i.e. designed to provide opportunities for students to acquire the desired learning outcomes by completing them. Different aims, such as motivating students, encouraging activity, providing guidance and feedback for remediation, and grading, will influence the selection of assessment technique.
Consider using a different, suitable method (e.g. portfolios, case studies, reflective commentaries, critical incident accounts, posters, reviews, role-plays or annotated bibliographies) to suit the purpose of the assessment rather than over-using unseen time-constrained exams, essays and reports (Brown & Knight 1994; Brown & Smith 1997);
- Practice-oriented and authentic – an effective assessment strategy closely replicates the practice of the discipline and seeks to measure how students apply what they have learnt to practice (Kember & McNaught 2007);
- Appropriately timed – for assessments to be instrumental to learning, they should be scheduled so as to allow students to act upon the feedback given, and to amend and remediate errors in their original work (Brown 2004);
- Inclusive – deploy a variety of assessment methods so that all students have equal opportunities to demonstrate their skills and maximise their potential, and the same students are not always disadvantaged by a single format of assessment. Maintaining variety is also crucial in aligning learning outcomes with assessments: as courses have multiple learning outcomes, it is difficult to devise a single assessment item, or type of assessment, that achieves all of them. In such cases, it is preferable to have an assessment package with different types of assessments (e.g. projects/case studies, exams, quizzes, games, participation, etc.)
(Brown 2004; Kember & McNaught 2007);
- Transparent – assessment criteria are clearly framed, communicated explicitly in simple language to students, and available from the onset (Thorpe 2000);
- Reliable – assessment criteria are clearly articulated to ensure inter-assessor reliability (i.e., different evaluators derive the same grade for similar work) and intra-assessor reliability (i.e., individual evaluators mark consistently) (Brown 2004); and
- Efficient – where possible, make assessments more efficient and valid by utilising technologies for creating more authentic tasks, enhancing the quality and timeliness of feedback, and enabling students to monitor their own learning (Brown, Rust & Gibbs 1994; Davies 2010).

The above literature synthesis provides an overarching guideline for developing integrated assessment schemes. However, its application is significantly influenced by the nature of the course/discipline, and careful instructional planning and design is therefore warranted. Consideration should also be given to aspects such as resources (staff, budget and space requirements), ethics in circumstances where research dominates assessment tasks, and other logistical limitations.

Designing and Implementing an Integrated Assessment Scheme: a Case Study

This section demonstrates how the principles discussed above were operationalised in designing and implementing a learning-oriented, integrated assessment scheme for a course in a Construction Management degree program. The case study provides valuable insights for lecturers in similar degree programs as to how they might modify their pedagogical styles for better learning and teaching outcomes.
The Context

The study was conducted in the context of a first year core course, Construction and Property Economics, in the Bachelor of Construction Management and Property degree program at the author's university. The aim of the course is defined as "enriching students with competencies that are essential for performing economic analyses of construction projects and construction markets", and the course covers three key areas to achieve it: economics of building designs, project appraisal, and the construction industry and the national economy. The expected learning outcomes of the course are articulated as "at the conclusion of the course students will:

- be able to contribute significantly to decisions on the selection of appropriate design and construction methods in view of their cost and value effectiveness;
- have detailed knowledge of the commonly used project feasibility appraisal techniques and be able to apply them to construction projects; and
- be able to provide constructive suggestions for construction business planning in light of national and global economic trends".

It is noteworthy that this course has a student enrolment of up to 150. The author has taught the course for the last six years using the traditional model of delivery, involving lectures, tutorials, a major assignment based on a literature review and a final class test. The author's experience of teaching the same course over the past years suggests some insights/reflections that are also crucial considerations. These are:

- Students are more grade-focused than learning-oriented.
They appear interested in learning tasks and active participation in lectures or tutorials only if these are directly related to an assessment or grade;
- Students favour a continual assessment system over a final exam system;
- Students prefer hands-on tasks over formal written submissions; and
- Flexibility, informality and simplicity are three features that students want in their studies.

The instructional strategy was modified in light of the assessment-driven learning/instructional paradigm and the insights above. The account below explains the newly adopted assessment and instructional approaches.

Aligned Assessments and Instructional Approaches

The theme of "contextualised teaching and assessment strategy" underpinned the design and implementation of the assessment and instructional approaches adopted in the course. Figure 1 portrays the integrated assessment scheme designed and implemented. Case-based assessments were positioned at the core of the scheme, with regular online quizzes and an end-of-course class test supplementing them. The various assessment instruments were tightly aligned with the instructional mode to facilitate the achievement of the expected learning outcomes.

The selection of the above assessment methods for the course was informed by the literature on instructional design in higher education. Angelo and Boehrer (2002) argued that the case-based assessment method is a powerful student-centred instructional tactic in that students learn by solving complex real-world problems. It allows the application of theoretical concepts and encourages students to see them from an action perspective, bridging the gap between theory and practice.
Daly (2002) added that the case-based approach develops several skills in students:

- Exploring a case entails researching and evaluating multiple data sources, which nurtures information literacy;
- Good time management and organisational skills are essential to work on a case;
- It enhances student aptitude in teamwork as well as written and oral communication; and
- The case-based learning approach is also effective for developing real-world professional skills such as managing a meeting, negotiating, giving presentations, etc.

Figure 1: Integrated assessment scheme

Likewise, Salas-Morera, Arauzo-Azofra and García-Hernández (2012) reported that the integration of online quizzes with other instructional activities in a teaching strategy helped to keep students up to date with the subject, strengthened their involvement in other activities, and had a very positive impact on academic outcomes. Equally, for end-of-unit tests, Law and Eckes (1995) claimed that they are objective, reliable and valid, and therefore an easy approach for measuring student performance for grading purposes.

Two case-based assessments were set to drive learning in the course, and students were required to work in groups of five. The first case-based assessment dealt with the economics of building designs: real building designs were given to student groups, who were required to make a class presentation of cost optimisation suggestions for the design provided. The assessment was introduced in week one, just after the lecture that dealt with economics of building designs, which enabled students to deeply analyse and apply the theory to real-world contexts.
During the analysis process, students were offered formative feedback to enable them to improve their work. When the student groups made final oral presentations of their findings, they were provided with both summative and formative feedback on their work. The entire process took place in the first two weeks of the session.

The second case-based assessment related to project appraisal, whereby the student groups were required to undertake a cost benefit analysis of a real-world infrastructure development project of their choice (for instance, roads, bridges, dams, railways, airports or tunnels). This assessment task was introduced just after the lecture that dealt with cost benefit analysis of public projects. Students were required to undertake a literature review to identify economic, social and environmental costs and benefits related to their chosen project type. For example, if a group chose a dam project for analysis, they were first required to undertake a literature review on the economic, social and environmental costs and benefits of dam projects. Then the group would apply their knowledge and understanding in analysing the selected case. Extensive analyses, including benefit cost ratio calculations and sensitivity testing, constituted the task. The student groups were required to submit a full report of their study within eight weeks. Weekly meetings were held with the student groups to regularly assess their progress and provide formative feedback for remediation. Students were enthusiastically involved in the assessment as they were studying projects of interest to them and from their localities.
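The quantitative core of this assessment, discounting benefit and cost streams and testing the benefit cost ratio's sensitivity to the discount rate, can be sketched in a few lines of Python. The figures below are hypothetical illustrations, not taken from any student project:

```python
def npv(flows, rate):
    """Net present value of year-indexed flows (year 0 = now)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(benefits, costs, rate):
    """Present value of benefits divided by present value of costs."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical dam project: a year-0 capital cost, then annual operating
# costs and annual economic/social benefits over a 5-year horizon.
costs    = [10_000_000, 500_000, 500_000, 500_000, 500_000, 500_000]
benefits = [0, 3_000_000, 3_500_000, 4_000_000, 4_000_000, 4_000_000]

# Sensitivity testing: recompute the BCR across a range of discount rates.
for rate in (0.04, 0.07, 0.10):
    print(f"rate={rate:.0%}  BCR={benefit_cost_ratio(benefits, costs, rate):.2f}")
```

A ratio above 1.0 indicates that discounted benefits exceed discounted costs; rerunning the calculation across a range of plausible discount rates, as in the loop above, is the simplest form of the sensitivity testing the student groups performed.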
For both case-based assessments, detailed marking criteria were made available at the onset, which enabled the students to self-assess their progress from time to time. Table 1 shows the assessment criteria used for the cost benefit analysis assessment.

Table 1: Assessment criteria for cost benefit analysis (each criterion rated on a 1-10 scale, from low to high, with space for free-text comments)

- Effectiveness of executive summary/abstract: provides a concise summary of the study, its findings, recommendations and implications.
- Soundness of introduction: soundness of study motivation, problem statement, the aim and the method.
- Adequacy of literature review: evidence of critical review of literature related to the case type.
- Depth of case study: the case is analysed from multiple perspectives – identification and quantification of benefits and dis-benefits of the case from the perspectives of multiple stakeholders; accurate appraisals; and satisfactory sensitivity analyses.
- Appropriateness of supporting evidence: the adequacy and accuracy of data that support arguments/analyses.
- Intellectual coherence and quality of conclusion: the conclusion is supported by evidence and arguments, and has original views and understandings (the students' own reflections on the topic studied, drawing from the investigation and findings).
- Structure and organisation of the report: clear and concise presentation – has all relevant sections and a clear structure, and is accurately referenced.

Two online quizzes were also used for the topic of project appraisal, in addition to the case-based assessment. The quizzes dealt with the calculation aspects of the topic. A third quiz was designed for topics related to the construction industry and the national economy.
At the start of the relevant lectures, students were informed that they would be required to complete an online quiz for the topic covered in the lecture on that day. This made students more attentive in lectures and more diligent in their tutorial exercises than in previous years, when online quizzes were not in place. The tutorials were avenues for applying the theories to solve problems, which essentially were simulated scenarios.

The end-of-session class test functioned as a summative evaluator/grader of the students. All the topics taught in the session were tested. In order to assist with test preparation, an adaptive eTest was set up on the Moodle eLearning platform. The adaptive eTest had model questions that were application-focused. Students were able to practise the questions an unlimited number of times, and automated rapid feedback was provided for incorrect answers, which helped students to reflect on their mistakes and remediate them. Shown below is a sample question and the corresponding feedback for the different answers; option d is the correct answer. When a student selects a wrong answer, the possible mistakes in his/her thought process are highlighted, guiding him/her towards the correct understanding. When the selection is correct, the student is positively encouraged to boost confidence. Essay-type model questions were also provided to encourage students to engage in discussions and brainstorming.

Question: Which of the following will decrease the present value of the mixed cash flows for years 1 through 5 of $1,000; $4,000; $9,000; $5,000; and $2,000 respectively, given a 10% discount rate?
a) Decrease the discount rate by 2%
b) Switch cash flows for years 1 and 5 so that year 1 is $2,000 and year 5 is $1,000
c) Switch cash flows for years 2 and 4 so that year 2 is $5,000 and year 4 is $4,000
d) Switch cash flows for years 2 and 5 so that year 2 is $2,000 and year 5 is $4,000

Feedback:
a) This actually increases the PV of each individual cash flow and therefore the PV of the entire set of cash flows.
b) This will increase the PV of year 1 and decrease the PV of year 5. The net effect is that the PV of the entire cash flow sequence will increase.
c) This will increase the PV of year 2 and decrease the PV of year 4. The net effect is that the PV of the entire cash flow sequence will increase.
d) Well done!

Measure of Efficacy of the Integrated Assessment Scheme for Improved Learning: Students' Perspectives

At the conclusion of the teaching session, a questionnaire survey was administered to ascertain how the integrated assessment scheme influenced students' approach to learning and their learning experience in the course. The questionnaire consisted of two sections. The first section evaluated the effectiveness of the integrated assessment scheme through four subsections: the overall integrated assessment scheme, the effectiveness of the case-based assessment approach, the effectiveness of the summative assessment approach, and the overall learning experience. Altogether there were twenty-six questions in these subsections, and participants' responses were collected on a 5-point Likert scale. The second section of the questionnaire collected descriptive comments on two questions: what were the best features of the assessment scheme used in the course, and which aspects of the assessment scheme need improvement, and how. All students in the class were invited to participate in the survey; however, only 64 out of 112 students responded, giving a response rate of 57%. The survey data was analysed to explore students' perspectives.
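As a brief aside before turning to the results: the discounted cash flow reasoning behind the sample eTest question shown earlier can be verified numerically. The sketch below recomputes the present value of the cash flow sequence under each option; only option d lowers it:

```python
def present_value(cash_flows, rate):
    """PV of end-of-year cash flows (year 1 onwards) at a constant rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

base_flows, base_rate = [1000, 4000, 9000, 5000, 2000], 0.10
pv_base = present_value(base_flows, base_rate)  # about $15,633.62

options = {
    "a": ([1000, 4000, 9000, 5000, 2000], 0.08),  # discount rate cut by 2%
    "b": ([2000, 4000, 9000, 5000, 1000], 0.10),  # years 1 and 5 swapped
    "c": ([1000, 5000, 9000, 4000, 2000], 0.10),  # years 2 and 4 swapped
    "d": ([1000, 2000, 9000, 5000, 4000], 0.10),  # years 2 and 5 swapped
}
for label, (flows, rate) in options.items():
    change = present_value(flows, rate) - pv_base
    print(f"option {label}: PV changes by {change:+.2f}")
```

Swapping a small early cash flow with a large late one (option d) pushes more money further into the future, where discounting bites harder, so the total PV falls; the other swaps do the opposite.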
Descriptive statistical measures of the quantitative responses to the questionnaire were computed, as shown in Tables 2-4. Qualitative comments made by the students were analysed separately.

Table 2 summarises the students' responses to the questions that assessed how well the assessment scheme satisfied the key characteristics of learning-oriented assessment: constructive alignment, practice-orientation, variety, appropriate distribution, and assisting progressive learning through feedback. The findings suggest that the integrated assessment scheme satisfied all of these well; the mean ratings of responses to all relevant questions are above 4.00 out of 5.00.

Table 2: Efficacy of the assessment scheme (mean rating out of 5.00, with standard deviation in parentheses)

A1 The assessment scheme deployed in the course was well aligned with the course contents and expected learning outcomes. 4.28 (0.55)
A2 The assessment scheme in the course was well distributed in terms of content coverage, time and efforts. 4.08 (0.72)
A3 The assessment scheme in the course maintained a variety to provide you with opportunities to demonstrate and maximise your potentials. 4.31 (0.73)
A4 The assessment scheme deployed in the course acted as a catalyst for your learning (motivated you to be actively involved in learning). 4.00 (0.80)
A5 The assessment scheme in the course better prepared you for a professional role that involves economic analyses of construction. 4.17 (0.72)

Table 3: Comparison of assessment methods (mean rating with standard deviation in parentheses; case-based assessments vs online quizzes & adaptive eTest)

B1 The assessment tasks used in the course were practice-oriented, and intellectually challenging and stimulating. Case-based: 4.08 (0.63); Quizzes/eTest: 4.03 (0.69)
B2 The assessment tasks used in the course engaged you in self-directed, deep learning in and out of class. Case-based: 3.91 (0.73); Quizzes/eTest: 3.95 (0.76)
B3 The assessment tasks offered opportunities for receiving timely and forward-looking feedback on your progress that can be acted upon to remediate your mistakes. Case-based: 4.06 (0.79); Quizzes/eTest: 4.06 (0.85)
B4 Completing the assessment tasks fostered your information literacy (ability to research and evaluate information from multiple sources for a given purpose). Case-based: 3.95 (0.72); Quizzes/eTest: 4.00 (0.84)
B5 Working on the assessment tasks in the course improved your organisational and time management skills. Case-based: 3.98 (0.83); Quizzes/eTest: 3.98 (0.79)
B6 Working on the assessment tasks in the course increased your proficiency with oral and written communications. Case-based: 3.75 (0.85); Quizzes/eTest: 3.81 (0.83)
B7 Completing the assessment tasks in the course developed your teamwork and collaboration skills. Case-based: 4.22 (0.73); Quizzes/eTest: 3.86 (0.83)
B8 Working on the assessment tasks in the course enhanced your analytical and critical thinking abilities required for problem solving. Case-based: 4.03 (0.78); Quizzes/eTest: 4.13 (0.77)
B9 The assessment tasks were effective for advancing your ability to engage in independent and reflective learning. Case-based: 3.92 (0.78); Quizzes/eTest: 4.05 (0.81)
B10 The assessment tasks promoted active engagement in learning activities. Case-based: 4.05 (0.74); Quizzes/eTest: 4.16 (0.86)

Table 3 compares the performance of the case-based approach and the summative approach (online quizzes and the adaptive eTest) in driving student learning.
The wording of the questions in Table 3 has been modified to make them common to both assessment types so that they can be presented and compared in a single table. In the actual survey, however, the questions referred to the specific assessment tasks and were presented separately for each type. The evidence suggests that the case-based tasks and the online quizzes performed equally well in:

- Intellectually challenging and stimulating students;
- Engaging students in self-directed, deep learning;
- Offering opportunities to remediate progress/mistakes based on feedback;
- Improving students' organisational and time management skills;
- Developing information literacy and communication proficiencies;
- Enhancing critical thinking and analytical capabilities for problem solving; and
- Promoting active engagement in learning activities.

Moreover, the case-based assessment tasks were better for developing teamwork and collaboration skills, whilst the online quizzes were better for independent and reflective learning. Overall, the two types complemented each other and, as a result, drove learning quite effectively, as evidenced by the high mean ratings for all the variables.

Table 4 confirms that the quality of learning the students experienced with the integrated assessment scheme was rated between "very good" and "excellent", as evidenced by the mean rating of 4.29 out of 5.00 on a continuum wherein 1 = poor quality, 2 = average quality, 3 = good quality, 4 = very good quality and 5 = excellent quality.

Table 4: Overall learning quality

Ref | Survey question | Mean rating | Standard deviation
L | Overall, how would you rate the quality of learning you experienced in the course, which was driven by the assessment scheme deployed? | 4.29 | 0.59

A thematic analysis was conducted on the qualitative responses of students to identify the aspects of the assessment scheme that resulted in a better learning experience.
The analysis identified five themes in the textual descriptions: online quizzes, group work, assessment variety, alignment of assessments, and relevance to industry. Online quizzes, including the adaptive eTest, were mentioned most often (43%) while alignment and relevance to industry were mentioned least (5%). Students stated several reasons for favouring the online quizzes in the assessment scheme. Some direct quotes giving those reasons are as follows:

"The quizzes based on the lectures provided me with more understanding of each topic and what I need to concentrate on more to understand better in the future."

"Really enjoyed the online quizzes, they were challenging and supplied me with immediate feedback, thus allowing me to work on my mistakes."

"The online quizzes relating to each tutorial exercise were very helpful in learning skills throughout the progression of the course. They provided a good diversity in the course assessment scheme."

"The content of the quiz was challenging and helped to promote consistent learning of all topics covered."

"Efficient online quizzes published after the lectures improved the understanding of the knowledge."

"In class exercises and adaptive e-test gave us chances to practise what we have learnt."

Similarly, students liked the case-based assessments because they featured group work, interactions and research. Some direct quotes from students on these are as follows:

"Interacting with others in a group environment enhanced the formulation of ideas."

"The course had more group work rather than individual work; therefore, it improved my communication skills dramatically."

"Encouraged further research and deeper thinking, and group tasks fostered further knowledge."
"Allowed engaged learning and participation in helpful and detailed activities."

The variety of assessment tasks, the close alignment of tasks with lectures and tutorials, and the relevance of assignments to the real world were other aspects that students liked in the assessment scheme, as quoted below:

"The variety of assignments."

"Variety of assessment work spread throughout the semester rather than a heavily weighted final exam."

"I believe the assessments in alignment with the topic enriched my ability to learn. They followed the course very well, easy to access."

"Relevant to real-world scenarios."

A similar thematic analysis identified two themes for potential improvements to the scheme: online quizzes and case-based tasks. Students suggested the following for future improvements.

For online quizzes:

"Not only the answer but the working out showing how the answer was derived should have been shown with the results of the online quizzes."

"Answers of all online quizzes should be published after everyone has finished them."

"More online quizzes."

For case-based tasks:

"More assessment tasks based around promoting communication skills; this is important as students enter the workforce."

"The case-based assessment needs to be guided through more regularly in each tutorial."

"The major assessment should be marked in parts, ie. draft worth 10%...., ….?%, …?%, and be handed in over a series of weeks."

Summary and Conclusion

Students regard assessment as the most central element of their university learning and decide the importance of a learning activity based on whether it is assessed or not. Hence, assessment is probably the most important element in course design that lecturers can utilise to drive student learning.
Assessments are of two types: summative and formative. While summative assessments help lecturers with grading, they promote rote learning and low-order thinking amongst students. Formative assessments, on the other hand, encourage deep learning and develop high-order thinking in students, but help minimally with grading. In an educational setting, grading students is as important as promoting student learning. This study has demonstrated that an integrated assessment scheme is essential for driving student learning. In developing such a scheme in a course, the following key principles should be adopted:

- Closely align lectures with assessment tasks in both content coverage and timing;
- Utilise online quizzes to reinforce the understanding of lectures;
- Relate assessments to real-world scenarios through approaches such as case-based learning;
- Offer ample opportunities for discussion and teamwork through assessments;
- Break the monotony in learning through a variety of assessment tasks;
- Provide rapid, detailed feedback to enable students to reflect on their mistakes and remediate them before it is too late; and
- Schedule regular progress reviews and marking for case-based or similar assessments, which involve significant effort and time.

Despite its valuable findings, the study has a limitation: it was conducted in the context of only a single course in a single offering. Thus, it is hard to generalise the findings to university learning as a whole. Further studies in other courses with similar assessment schemes may be conducted to ascertain similarities and differences in findings, on the basis of which general conclusions can be drawn. Nonetheless, the study offers an exemplary demonstration and useful insights for future efforts by other lecturers.

References

Angelo, T. & Boehrer, J. 2002, 'Case learning: How does it work?
Why is it effective?', Case Method Website: How to Teach with Cases, University of California, Santa Barbara. URL: http://www.soc.ucsb.edu/projects/casemethod/teaching.html [1 Oct 2013]

Bailey, K. M. 1998, Learning about Language Assessment: Dilemmas, Decisions, and Directions, Heinle & Heinle, US.

Boud, D. 1990, 'Assessment and the promotion of academic values', Studies in Higher Education, 15(1), 101-111.

Boud, D. 1995, 'Assessment and learning: contradictory or complementary', In: P. Knight (ed.), Assessment and Learning in Higher Education, Kogan Page, London, 35-48.

Brown, S. & Smith, B. 1997, Getting to Grips with Assessment, SEDA Publications, Birmingham.

Brown, S. & Knight, P. 1994, Assessing Learners in Higher Education, Kogan Page, London.

Brown, S. 2004, 'Assessment for learning', Learning and Teaching in Higher Education, 2004-05(1), 81-89.

Brown, S., Rust, C. & Gibbs, G. 1994, Strategies for Diversifying Assessment, Oxford Centre for Staff Development, Oxford.

Brualdi, A. 1996, 'Implementing performance assessment in the classroom', Practical Assessment, Research & Evaluation, 6(2). URL: http://ericae.net/pare/getvn.asp?v=6&n=2 [1 Oct 2013]

Carless, D. 2002, 'The 'mini-viva' as a tool to enhance assessment for learning', Assessment & Evaluation in Higher Education, 27(4), 353-363.

Daly, P. 2002, 'Methodology for using case studies in the business English language classroom', Internet TESL Journal, 8(11), 1-7. URL: http://Daly, 2002/Techniques/Daly-CaseStudies/ [2 Oct 2013]

Davies, A. 2000, Making Classroom Assessment Work, Connections Publishing, Merville, BC.

Davies, S. 2010, Effective Assessment in a Digital Age. URL: http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassass_eada.pdf [15 Oct 2013]

Dietel, R. J., Herman, J. L. & Knuth, R. A. 1991, What Does Research Say about Assessment? NCREL, Oak Brook. URL: http://www.ncrel.org/sdrs/areas/stw_esys/4assess.htm [1 Oct 2013]

Dikli, S. 2003, 'Assessment at a distance: traditional vs. alternative assessments', The Turkish Online Journal of Educational Technology, 2(3), 13-19.

Elton, L. & Johnston, B. 2002, Assessment in Universities. URL: http://eprints.soton.ac.uk/59244/1/59244.pdf [5 Oct 2013]

Gibbs, G. & Simpson, C. 2004, 'Conditions under which assessment supports students' learning', Learning & Teaching in Higher Education, 2004-05(1), 3-31. URL: www.glos.ac.uk/departments/clt/lathe/issue1/index.cfm [19 Oct 2013]

Kember, D. & McNaught, C. 2007, Enhancing University Teaching: Lessons from Research into Award-Winning Teachers, Routledge, Oxon.

Keppell, M. & Carless, D. 2006, 'Learning-oriented assessment: a technology-based case study', Assessment in Education, 13(2), 179-191.

Law, B. & Eckes, M. 1995, Assessment and ESL, Peguis Publishers, Manitoba.

Lombardi, M. M. 2008, Making the Grade: The Role of Assessment in Authentic Learning. URL: http://net.educause.edu/ir/library/pdf/eli3019.pdf [8 Oct 2013]

Reeves, T. C. 2000, 'Alternative assessment approaches for online learning environments in higher education', Educational Computing Research, 3(1), 101-111.

Salas-Morera, L., Arauzo-Azofra, A. & García-Hernández, L. 2012, 'Analysis of online quizzes as a teaching and assessment tool', Journal of Technology and Science Education, 2(1), 39-45.

Simonson, M., Smaldino, S., Albright, M. & Zvacek, S. 2000, Teaching and Learning at a Distance: Foundations of Distance Education, Prentice-Hall, Upper Saddle River, NJ.

Tang, C. & Biggs, J. 1998, 'Assessment by portfolio', In: D. Watkins, C. Tang, J. Biggs & R. Kuisma (eds), Assessment of University Students in Hong Kong: How and Why, Assessment Portfolio, Students' Grading, City University, Hong Kong.

Thorpe, M. 2000, 'Encouraging students to reflect as part of the assignment process: student responses and tutor feedback', Active Learning in Higher Education, 1(1), 79-92.

Watkins, D. 1998, 'Assessing university students in Hong Kong: how and why', In: D. Watkins, C. Tang, J. Biggs & R. Kuisma (eds), Assessment of University Students in Hong Kong: How and Why, Assessment Portfolio, Students' Grading, City University, Hong Kong.