Australasian Journal of Educational Technology, 2020, 36(4).

Investigating online tests practices of university staff using data from a learning management system: The case of a business school

Anna Stack
University of Technology Sydney, Australia

Bopelo Boitshwarelo, Alison Reedy, Trevor Billany, Hannah Reedy, Rajeev Sharma, Jyoti Vemuri
Charles Darwin University, Australia

While research on online tests in higher education is steadily growing, there is little evidence in the literature of the use of learning management systems (LMSs), such as Blackboard Learn, as rich sources of data on online test practices. This article reports on an investigation that used data from Blackboard Learn to gain insight into the purpose for and practice of online tests at the Charles Darwin University (CDU) Business School in Australia. The results indicate both formative and summative use of online tests, with a range of practices across the school. Focusing on curriculum and pedagogical practices, the article identifies indications of good practice as well as potential issues related to curriculum mapping, including possible misalignment between learning outcomes and online tests. It also affirms the versatility of using data from LMSs in the study of e-assessment in general and online tests in particular.

Implications for practice or policy:
• Reviewing data from the LMS about online tests can help academics and other stakeholders understand and improve practices.
• Establishing patterns of practice across multiple units in a school or faculty provides direction for further in-depth research into perceptions and perspectives of staff and students on online tests.

Keywords: assessment, feedback, learning management system, learning outcomes, online tests

Introduction

Research on the use of online tests in higher education (HE) is steadily growing. In one of the latest literature reviews in this area, Boitshwarelo, Reedy, and Billany (2017) identified about 50 relevant articles, published mostly after the year 2000, which included conceptual articles, literature reviews and empirical research. The empirical research articles located in that review were mostly focused on investigating perceptions of staff and students (Baleni, 2015; Donnelly, 2014), analysis of student performance (Angus & Watson, 2009; Kibble, 2007; Smith, 2007), and comparative analysis between engagement in formative online tests and summative performance (Smith, 2007; Yonker, 2011), using common research methods such as surveys, interviews and statistical analysis. However, no studies were identified that investigated staff practices on online tests across a pool of units or modules within a discipline or across a school or faculty. This is despite the fact that learning management systems (LMSs) such as Blackboard Learn, where online tests are often created and deployed, are a rich source of data that can be used to understand such practices. Data from an LMS can provide a variety of information from multiple units or subjects across an institution to reveal patterns of practice that can, in turn, inform learning design interventions. A key tenet of learning design is the need to ensure that there is alignment between curriculum intentions and pedagogical practices, including assessment approaches (Boitshwarelo & Vemuri, 2017).
While the empirical research methods mentioned earlier provide valuable insight in this regard, LMSs hold extensive information about the curriculum intentions and pedagogical practices of multiple units or subjects across an institution, which can be harnessed to provide insight into educational practices related to online tests. Taking Charles Darwin University (CDU) as a case in point, this article reports on an investigation that used data from Blackboard Learn to gain insight into the online test practices at the CDU Business School in Australia. This investigation was a precursor to a broader study, the findings of which will be reported separately. The rest of this article outlines the objectives and describes the context and methods of the study; it then discusses the findings and draws conclusions related to the investigation itself and, more broadly, the versatility of the research method.

Objective of the study

The objective of this study was to review data relating to the use of online tests in the CDU Business School, with a view to gaining insight into the nature of the curriculum and pedagogical practices and the extent of their alignment. More specifically, the study investigated the practices around online tests in terms of:

(1) Curriculum design intentions, in the form of the number of tests per unit, distribution across disciplines and study levels, the weightings, and the type of learning outcomes the tests are mapped to. Decisions about these are made at the curriculum design stage of the unit, before it is implemented.
(2) Pedagogical practices, such as the question types in the tests, the feedback practices and the test settings (i.e., mainly availability, number of attempts and randomisation settings within the LMS). These are choices that are made about how to implement what has already been intended in the curriculum.

Context and rationale

This study focused on the Business School at CDU. CDU is a dual-sector (HE and vocational education and training) and multi-campus university located across many regions of Australia, including Darwin Waterfront, Darwin Casuarina, Alice Springs, Sydney, and Melbourne. HE at CDU is predominantly delivered online, with more than 50% of students enrolled externally and 46% of students studying part time (CDU, 2017). Almost all HE units offered by CDU, whether for external or internal students, rely on online delivery using the LMS, Blackboard Learn. Learnline is CDU's online learning environment and consists of Blackboard Learn and a suite of other online learning tools. As most HE units delivered by CDU use Learnline sites for dissemination of content, communication, and assessment submission and feedback, the LMS provides a wealth of information in terms of the what, the where, the when, and the how of the resources and activities therein. One of the key tools in Learnline is the test tool, which enables staff to create and/or import, deploy and manage tests online. A key feature of this tool is that it allows for auto-marking of tests, depending on the type of questions used. Our definition of online tests in this article is thus limited to tests deployable on this tool, particularly those using questions where auto-marking is possible.
The use of online tests has grown significantly in HE, and CDU is a high user of online tests owing to its heavy reliance on online delivery of education to cater for both external and internal students. Analytics data obtained from CDU's Learnline at the beginning of 2017 indicated that well over 5000 online tests, including both deployed and archived online tests, were stored within the LMS, demonstrating the widespread use and potential impact of online tests on both students' learning experiences and assessment practices. Of these online tests, close to 45% were attributed to units in the CDU Business School.

The CDU Business School teaches units in the disciplines of accounting, economics, management, marketing and business law, as well as other generic units, at both undergraduate and postgraduate levels. The bachelor degrees run by the school are three-year programs and have three unit levels: 100, 200 and 300. Typically, 100 level units are the foundational units, normally taken in the first year of study; 200 level units are intermediate in nature and are predominantly studied in the second year; and 300 level units are the advanced units that form the final year of a bachelor's degree. In some instances, there may be crossovers; for example, some foundational units may be introduced at 200 level. Honours units are coded at 400 level, and the master's, postgraduate certificate and postgraduate diploma units are typically coded at 500 level.

Assessment practices at the school are varied, not just in terms of whether they are formative or summative in nature, but also in the methods of assessment being used, with some approaches being more common in some disciplines than others. For example, in the accounting discipline, short answer questions related to analysis of accounting problems or case studies are quite common. However, a common assessment method in the school that cuts across most disciplines and levels of study is online tests, that is, objective tests that are deployed, taken and marked online. From the literature, as noted by Boitshwarelo, Reedy, and Billany (2017), online tests are generally used to review objective content at the lower levels of Krathwohl's (2002) revision of Bloom's taxonomy, with a focus on their formative assessment role. The formative assessment role means that certain practices are adopted to ensure students engage effectively with the online tests. The evidently high usage of online tests in CDU Business School units therefore prompted an exploration of how, and for what purpose, online tests were being used in the school. With just about everything related to online tests happening within Learnline, it was natural to turn to it as a rich source of data. While the data from Learnline does not give a complete picture, it provides a relatively objective baseline which formed the basis of further investigation.

Methods

This study formed part of a broader project approved by the Charles Darwin University Human Research Ethics Committee (CDU-HREC) in 2017. The review of CDU Business School units in Learnline was intended to collect data from all the units across all the business disciplines offered in Semester 2 of 2016 and Semester 1 of 2017, which covered all units offered by the school at the time.
Although over 5000 tests had been saved within the test tool, representing both archived and deployed tests, this research was restricted to the deployed tests so as to better identify teaching practice and current use of online tests. As per the objective of the study, the aim was to examine the curriculum intentions and pedagogical practices evident from the Learnline environment. The data obtained had a two-pronged purpose. Firstly, data from documents or resources, primarily the unit information documents or unit overviews uploaded in the Learnline environment, gave information about the intention of online tests as it relates to curriculum mapping, accreditation information and the sequencing of intended unit activities. Secondly, by reviewing the use of the online test tool within Learnline, data was systematically obtained regarding test options and settings, providing information on actual pedagogical practices. It is noted that there can be overlap between what we term curriculum intentions and pedagogical practices. The following key data were identified under each objective:

(1) Extent of use and curriculum intentions
• Learning outcomes mapped to online tests and their cognitive levels
• The number of deployed online tests in each unit
• The number of graded versus ungraded online tests
• The weighting of graded online tests

(2) Pedagogical practices
• The types of question used (e.g., multiple choice)
• The frequency of tests (e.g., weekly)
• The number of questions per test
• The quality of instructions given to students
• Whether random blocks or question pools were implemented
• The number of attempts at a test allowed
• Question display, whether all at once or one at a time
• Location within the unit
• Feedback settings and type
• Test availability
• Time restrictions
• Question ordering
• Question source and whether publisher test banks were used.

The collection of this data provided insights into both assessment intentions and practices around online tests locally at CDU.

To review the Learnline data, all CDU Business School units offered in the two semesters were divided up amongst the research group members and reviewed. Regular meetings were held to ensure consistency of data entry. Data was captured only for tests that were deployed within the unit; each test was categorised as graded if it contributed to the overall grade for the unit, or ungraded if it was provided in an unweighted or practice capacity. Research group members reviewed the location of online tests within each Business School unit and recorded test settings for each available test, capturing data on the question types, the length of time the test was available to students, and whether there were time restrictions, whether backtracking through previous questions was allowed, and whether feedback on individual questions was prepared for release to students. Along with technical settings within the LMS, data was collected on the learning outcomes mapped to online test assessment items within accreditation documents. The learning outcomes mapped against online test assessment items were identified in each unit from the unit information documents (i.e., study guides), and an attempt was made to interpret them according to the levels of the cognitive domain in the revised Bloom's taxonomy (Krathwohl, 2002). While this was a subjective process, there was confidence that for well-written learning outcomes the interpretation would be relatively accurate.
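As an illustration only, the sketch below shows one way the key data items listed above could be captured as structured records and tallied, and how a learning outcome's leading verb might be matched to a cognitive level. The record fields, unit codes, verb lookup and function names are hypothetical and are not drawn from the study, which collated its data manually and relied on researcher judgement for the mapping to the revised Bloom's taxonomy.

```python
# Illustrative sketch only (not the authors' procedure): hypothetical
# record structure and tallies for LMS-harvested online test data.
from collections import Counter
from dataclasses import dataclass, field

# Assumed verb-to-level lookup, for illustration; in the study the outcomes
# were interpreted by the researchers, not by keyword matching.
BLOOM_VERBS = {
    "define": "remember", "list": "remember",
    "explain": "understand", "describe": "understand",
    "apply": "apply", "calculate": "apply",
    "analyse": "analyse", "compare": "analyse",
    "evaluate": "evaluate", "justify": "evaluate",
    "design": "create", "develop": "create",
}

def bloom_level(outcome: str) -> str:
    """Guess a cognitive level from the outcome's first word.

    Many outcomes begin with a stem such as "Students will be able to...",
    so this heuristic is deliberately crude and returns "unclassified"
    when the leading word is not a recognised verb.
    """
    first_word = outcome.lower().split()[0] if outcome.split() else ""
    return BLOOM_VERBS.get(first_word, "unclassified")

@dataclass
class TestRecord:
    """One deployed online test, as reviewed in a unit's Learnline site."""
    unit_code: str                      # hypothetical, e.g. "ACC101"
    discipline: str                     # accounting, economics, ...
    level: int                          # 100, 200, 300, 400 or 500
    graded: bool                        # contributes to the unit grade
    weighting: float                    # % of the unit grade (0 if ungraded)
    question_types: list[str] = field(default_factory=list)
    attempts_allowed: int = 1
    feedback: str = "score only"        # e.g. "score only", "score + comments"

def graded_tests_per_unit(records: list[TestRecord]) -> dict[str, float]:
    """Ratio of graded tests to reviewed units, per discipline (cf. Table 1).

    Units with no deployed tests do not appear in `records`, so a complete
    ratio would also need the full list of units offered in each discipline.
    """
    graded = Counter(r.discipline for r in records if r.graded)
    disciplines = {r.discipline for r in records}
    units = {d: len({r.unit_code for r in records if r.discipline == d})
             for d in disciplines}
    return {d: graded[d] / units[d] for d in disciplines}
```

In the study itself, the records were collated manually in a spreadsheet and aggregated by discipline and level, as described next; an automated verb lookup of this kind would only ever be a rough first pass, since many outcomes are worded in ways that require human interpretation.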
Other data recorded included the weighting of the tests for each unit. The raw data was then collated and entered into a spreadsheet categorised by unit levels and disciplines, as outlined earlier, against the key data. By categorising data by discipline and level, patterns of use could be identified across units. The collated data was converted into graphs and visualisations to compare and analyse the use of online tests across the Business School and its disciplines. The data was then analysed for patterns according to the identified key themes.

Results and discussion

In this section, the findings are presented and discussed as either curriculum intentions or pedagogical practices.

Curriculum intentions

Number of tests and distribution of tests across disciplines
The Learnline review of 78 units across two teaching periods identified 490 deployed online tests. Of these 490 online tests, 228 were graded, that is, contributed to the overall grade, while 262 were ungraded. The focus was mostly on the graded online tests, as these were prescribed in the curriculum.

Table 1
Distribution of tests across disciplines in the CDU Business School, 2016/2017

Discipline | No. of units | No. of tests | No. of graded tests | Graded tests per unit
Accounting | 20 | 248 | 84 | 4.2
Economics | 13 | 34 | 14 | 1.1
Management | 20 | 182 | 121 | 6.1
Marketing | 8 | 4 | 4 | 0.5
Business Law | 7 | 4 | 2 | 0.3
Other (Research/Placement/Honours) | 10 | 18 | 3 | 0.3
Business School (Total) | 78 | 490 | 228 | 2.92

Table 1 shows the distribution across the CDU Business School disciplines by overall number of tests, number of graded tests and number of graded tests per unit. Accounting and management units had the highest relative use of online tests, that is, well above the CDU Business School average ratio of about three graded tests per unit, whereas business law, marketing and "other" units were at the lower end of the spectrum, with an average of much less than one graded test per unit. It is not clear why there is such a distribution. For accounting, being a numerical and competency-based subject, it makes sense that it would have a relatively higher use of tests. On the other hand, management units were not expected to be the highest users of online tests due to the perceived subjective nature of some of their content, at least relative to accounting. The literature reviewed showed that online tests are used across the various disciplines of business, such as marketing (Douglas, Wilson, & Ennis, 2012), accounting (Ibbett & Wheldon, 2016) and economics (Buckles & Siegfried, 2006). However, it is not clear from the literature which disciplines use them most.

Distribution of tests according to study levels
Online tests are commonly used within foundational units, usually in first year, owing to their relative suitability for assessing the lower levels of the cognitive domain in Krathwohl's (2002) revised Bloom's taxonomy (Douglas et al., 2012; Simkin & Kuechler, 2005). They are typically used less as one proceeds through the study levels. Figure 1 shows the distribution of tests per unit at every level of each of the disciplines.

Figure 1. Mean graded tests by unit level across disciplines

The composite Business School bar on the chart shows that about 31% of units with online tests are at 100 level, 25% at 200 level, 18% at 300 level, 3% at 400 level and 24% at 500 level.
If the 500 level units, which are postgraduate units, are excluded, there is a pattern of fewer tests per unit the higher one goes up the study levels. This is consistent with expectations and likely reflects sound curriculum practice. However, in terms of individual disciplines, accounting, which has a relatively high tests-per-unit ratio, is the only discipline where this pattern is clearly evident. On the other hand, the economics and management disciplines have more online tests at 200 level than at any other level; they also have a significant percentage of units with online tests at 300 level. Interestingly, the law and marketing disciplines (which have fewer units) only had online tests at 300 level. Some of the units in the "other" disciplines are honours units, only offered at 400 and 500 levels, hence the higher percentage of online tests in 400 level units.

The above scenario of a high number of online tests in 200 and 300 level units, particularly in management and economics, while unexpected, may have a plausible explanation. First, there are usually not enough units at 100 level in the courses to cover all the introductory content, resulting in some foundational concepts being introduced at 200 level, supporting a higher number of online tests per unit even at levels higher than 100. Secondly, the nature of some of the disciplines could contribute to a more even distribution of online tests across levels; for example, economics content, particularly mathematical concepts, lends itself to online testing even at a higher level. However, it is not clear why the marketing and law disciplines have online tests exclusively at 300 level. Regarding the significantly high percentage of online tests per unit in 500 level units, which is unexpected, the explanation possibly lies in the nature of the coursework postgraduate programs. The school runs intensive postgraduate programs enabling students to pursue a qualification outside of their undergraduate qualifications, meaning they include a wide array of units, including those that cover foundational content.

The reasons above are only speculative and indicate the need for a more nuanced understanding of the curriculum mapping and sequencing decisions that resulted in this distribution; hence, there was a need to reflect on how the online tests are mapped to learning outcomes.

Mapping of online tests with learning outcomes
The results of mapping online tests to Krathwohl's (2002) revised Bloom's taxonomy levels are summarised in Figure 2, which shows the number of units against the cognitive levels of learning outcomes mapped to online tests.

Figure 2. Mapping of online tests to Krathwohl's (2002) revised Bloom's taxonomy of cognitive levels by unit level

The chart indicates that all the cognitive levels were represented quite significantly, with no obvious pattern. However, when the 500 level units (postgraduate) are excluded from the distribution, a pattern of fewer online tests at higher cognitive levels somewhat emerges. The remember and create cognitive levels do not quite comply with this pattern, with the former lower than would be expected and the latter higher.
The remember level being lower can possibly be explained by the fact that, since the units investigated are at university level, only a limited number of them have learning outcomes at this level. On the other hand, while the literature identifies the create cognitive level as the most difficult to assess through online tests (Selby, Blazey, & Quilter, 2008), this data shows that learning outcomes mapped to this cognitive level were being assessed, at least partly, through this method in a significant number of units. As far as 500 level units are concerned, there was no obvious pattern except that the apply and evaluate learning outcomes were highly represented. This could indicate that these were the dominant learning outcomes in the units, but whether online tests are a suitable assessment approach is questionable, especially with respect to the evaluate learning outcomes.

These results on the mapping of learning outcomes to online tests were revealing. The decrease in use from understand to evaluate was as would be expected. However, the fact that there was still a fair representation of units using online tests to assess at the level of analyse and evaluate raises questions. More glaringly, the significant presence of online tests mapped against create learning outcomes, in particular, raised doubts about whether these tests are successfully assessing these outcomes. Although this study did not include the evaluation of the online tests themselves, a cursory look at some of the tests did not indicate that they were assessing beyond the foundational levels of the revised Bloom's taxonomy (Krathwohl, 2002), especially given the quality and type of questions in them.

Weightings
The data showed that over 80% of units with graded online tests assigned them between 1% and 20% of the total weighting. The weighting was either split across weekly tests (approximately 10–12 tests per semester) or across one to a few tests per semester, with a higher percentage of units (52%) tending towards fewer than three tests per semester. This reflects the prevalence of mid-semester tests or the practice of one or two higher-stakes online tests, some of which serve as preparation for examination-like circumstances. The weighting indicates that while online tests are widely used in the school, there is generally not an over-reliance on them as a method of assessment.

Pedagogical practices

Question types
As expected (Nicol, 2007; Simkin & Kuechler, 2005), there was an overwhelming prevalence of multiple-choice questions (MCQs) within online tests. All online tests within Business School units included MCQs; 46% of online tests included true-false questions, 9% included fill-in-the-blank questions, and 2% each included multiple answer, formula, jumbled sentence, matching and ordering questions. The high reliance on MCQs, while not unexpected, does raise some interesting questions regarding how higher cognitive levels are tested.

Test settings
There was some variability, as well as crosscutting practices, in the settings of the online tests, with the following being of key interest due to their implications:
• Availability and number of attempts: The online test availability function was widely used to restrict access to online tests to a specific timeframe.
Availability varied from online tests being open to students for less than 24 hours, to a few days, to a week, while some were open for weeks or until the end of semester. This variability in practice points to the tensions that staff face in trying to minimise opportunities for students to cheat while also providing flexibility to students in when they can access the tests. The majority of online tests with date restrictions were those that were graded. Of the tests, 90% used randomised questions and restricted students to a single attempt at taking the test. As with the availability function, the use of the randomisation and single-attempt functions may be influenced by the need to maintain the integrity of the online test (Harmon, Lambrinos, & Buffolino, 2010).
• Presentation settings: The possibility of cheating is often deterred by utilising LMS control features, that is, the randomisation of questions and responses, a single question delivered on each screen, prevention of backtracking to previous questions and the setting of tight time frames in which to answer the questions (Harmon et al., 2010). These deterrents were broadly employed across the online tests reviewed in this project, with 68% of all online tests presenting test questions one at a time. As with availability and the number of attempts allowed, providing questions one at a time may be implemented to maintain the integrity of the assessment, as it allows logs to be maintained showing students' progress through the test in case of a student dispute.

Feedback practices
On review of the feedback responses and settings within Learnline, it was noted that while 93% of online tests in the CDU Business School provided feedback to students following submission, it was, by and large, only in the form of a score indicating the total number of correct answers. In contrast, both a score and qualitative feedback were provided to students in only 8% of online tests. The lack of feedback was evident in publisher test bank–sourced questions as well as lecturer-developed questions. In terms of the immediacy of feedback, the dominant practice was the provision of immediate feedback after submission in the form of a score per question, which is the default setting when creating an online test within Blackboard Learn. Thus, while there is some feedback provided to students, the practices are certainly nowhere near as comprehensive as some strategies recommended in the literature (Epstein, Lazarus, Calvano, & Matthews, 2002; Nicol, 2007; Voelkel, 2013).

Conclusions

The aim of this study was to investigate the practices around online tests at the CDU Business School, in terms of the cognitive level of knowledge being assessed by online tests as understood through curriculum mapping decisions, and the pedagogical practices. This was achieved by reviewing the use of online tests from the CDU Business School deployed in an LMS, Learnline. The following is a summary of the findings as discussed in the previous section, followed by general conclusions:

(1) There is significant use of online tests in the school (on average, about three graded tests per unit); however, there is not an over-reliance on them, as in most units they account for only up to 20% of the total unit mark.
(2) The presence of both graded and ungraded tests deployed to students in the CDU Business School indicates summative and formative use, respectively.
Most of the ungraded tests were used for practice or self-assessment; the more frequently administered graded tests seemed to serve both formative and summative purposes.
(3) Online tests were mapped to learning outcomes from across the cognitive levels of the revised Bloom's taxonomy (Krathwohl, 2002), with the lower cognitive levels of understand and apply being the most used in the undergraduate units, and apply and evaluate in the postgraduate units.
(4) The nature of use was such that online tests:
• were more commonly used in some business disciplines than others, with management and accounting being the highest users and marketing and business law subjects the lowest users;
• were used across all year levels in coursework programs, including postgraduate units.
(5) The question types predominantly used were MCQs.
(6) A significant number of tests were sourced from publisher test banks.
(7) Practices that promote the formative role of online tests, such as multiple attempts and qualitative feedback, were limited in use.
(8) The presentation settings for tests tended to be restrictive, suggesting a strong consideration for the integrity of the tests by minimising opportunities for students to collude or cheat.

While the frequency of tests and their distribution across the semester seemed to have been generally well thought out, the lack of practices that promote the formative assessment role, such as multiple attempts, flexible availability of tests and detailed feedback, possibly reduces their effectiveness. It is speculated that some of these restrictive practices are used to mitigate possible breaches of the integrity of tests, a prevalent challenge in unsupervised online tests.

As a general conclusion from these findings, it would seem that some of the curriculum and pedagogical practices at the CDU Business School are educationally sound as per the literature reviewed. A key positive aspect was the fact that online tests are used for both formative and summative purposes and that, when used for the latter, they are not given excessive weighting. However, the mapping of online tests, which are predominantly characterised by textbook-sourced MCQs, to learning outcomes across all the cognitive levels of the revised Bloom's taxonomy (Krathwohl, 2002) is questionable and warrants further investigation. In particular, the high number of units using online tests to assess create learning outcomes was of key concern. The other related concern is the prevalence of online tests in higher level units. More broadly, indications are that a strategic alignment between curriculum intentions and pedagogical practices or actions (Boitshwarelo & Vemuri, 2017) was lacking in some instances. That is, the curriculum intent may have been to test a deeper level of understanding, but the actual online test practices demanded only shallow engagement, suggesting the inappropriateness of either the assessment method or the question types used in the tests.

Implications and recommendations

This study has demonstrated ways and means of exploring uses of online tests in a university setting. Using the data from an LMS has allowed rich, valid and varied information related to online tests to be harnessed, perhaps in more effective ways than are possible through other means.
The data collected from the LMS and compared against curriculum documents has informed a broader study into the use of online tests and directed new avenues of data collection from staff and students based on the above findings. Specifically, staff and student surveys and interviews with selected staff were carried out subsequently, the findings of which will be reported in a separate article. In addition to identifying specific issues for further study, the immediate consequence of this investigation has been recognition of the need for better support of academic staff implementing online tests, particularly around deciding on test settings that are best suited to the purpose of the test. Additional professional development for CDU Business School staff has also taken the form of reporting back the findings from this study, which has consequently generated conversations that facilitate reflection on the various practices.

The potential for this approach is greater than has been presented here. Firstly, some of the data collected, such as the duration of tests, the consistency of instructions about tests, and where tests are deployed in the learning environment, though useful, were not analysed, as they were slightly beyond the scope of this article or needed to be explored further. However, these residual data demonstrate that investigations can be done on other aspects of online tests using the varied data available from the LMS. Secondly, more data, including correlational and analytics data, could potentially be collected for greater insights; however, our study did not go that far.

Curriculum design for online tests is usually temporally separated from actual implementation, and as such it can be difficult to determine alignment between what was intended and what is practised. This article submits that LMS data, used in the way described in this study, is an effective means of identifying issues around this alignment, including noticing patterns of practice. This is beneficial for reviewing practice in a holistic way at departmental or even institutional levels. An important point to make, though, is that this exercise is labour intensive and involved a number of research team members manually harvesting data from dozens of units, all of which had to be accessed one by one. Most of the data that was required is not automatically harvestable through the functionalities of analytics or other data harvesting tools, and perhaps there is a case for this to be explored with LMS providers, such as Blackboard, to extend the capabilities of their platforms in this regard.

Overall, the use of the LMS as a data source is recommended to the research and practitioner community in their investigations of the use of online tests at their own institutions. This is particularly important for two reasons: First, from a practice perspective, such data is important for identifying patterns of practice across multiple units in a school or institution, which could help with designing appropriate and crosscutting learning design interventions to address any identified issues. Secondly, from a research perspective, the patterns of practice established from this data can act as a precursor to more in-depth investigations. There is thus clear potential for this approach to advance the developmental research agenda, particularly design-based research.

References

Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set.
British Journal of Educational Technology, 40(2), 255–272. https://doi.org/10.1111/j.1467-8535.2008.00916.x

Baleni, Z. G. (2015). Online formative assessment in higher education: Its pros and cons. Electronic Journal of e-Learning, 13(4), 228–236. Retrieved from http://www.ejel.org/issue/download.html?idArticle=433

Boitshwarelo, B., Reedy, A. K., & Billany, T. (2017). Envisioning the use of online tests in assessing twenty-first century learning: A literature review. Research and Practice in Technology Enhanced Learning, 12(1), art. 16. https://doi.org/10.1186/s41039-017-0055-7

Boitshwarelo, B., & Vemuri, S. (2017). Conceptualising strategic alignment between curriculum and pedagogy through a learning design framework. International Journal for Academic Development, 22(4), 278–292. https://doi.org/10.1080/1360144X.2017.1367298

Buckles, S., & Siegfried, J. J. (2006). Using multiple-choice questions to evaluate in-depth learning of economics. The Journal of Economic Education, 37(1), 48–57. https://doi.org/10.3200/JECE.37.1.48-57

Charles Darwin University. (2017). Annual report: 2016 in review. Retrieved from http://www.cdu.edu.au/sites/default/files/mace/docs/annual-report-2016.pdf

Donnelly, C. (2014). The use of case based multiple choice questions for assessing large group teaching: Implications on student's learning. Irish Journal of Academic Practice, 3(1), 1–15. https://doi.org/10.21427/D7CX32

Douglas, M., Wilson, J., & Ennis, S. (2012). Multiple-choice question tests: A convenient, flexible and effective learning tool? A case study. Innovations in Education and Teaching International, 49(2), 111–121. https://doi.org/10.1080/14703297.2012.677596

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187–201. https://doi.org/10.1007/BF03395423

Harmon, O. R., Lambrinos, J., & Buffolino, J. (2010). Assessment design and cheating risk in online instruction. Online Journal of Distance Learning Administration, 13(3). Retrieved from https://www.learntechlib.org/p/52616/

Ibbett, N. L., & Wheldon, B. J. (2016). The incidence of clueing in multiple choice testbank questions in accounting: Some evidence from Australia. The e-Journal of Business Education & Scholarship of Teaching, 10(1), 20–35. Retrieved from ERIC database. (EJ1167417)

Kibble, J. (2007). Use of unsupervised online quizzes as formative assessment in a medical physiology course: Effects of incentives on student participation and performance. Advances in Physiology Education, 31(3), 253–260. https://doi.org/10.1152/advan.00027.2007

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

Nicol, D. (2007). E-assessment by design: Using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1), 53–64. https://doi.org/10.1080/03098770601167922

Selby, J., Blazey, P., & Quilter, M. (2008). The relevance of multiple choice assessment in large cohort business law units. Journal of the Australasian Law Teachers Association, 1(1), 203–212.
Retrieved from https://researchers.mq.edu.au/en/publications/the-relevance-of-multiple-choice-assessment-in-large-cohort-busin

Simkin, M. G., & Kuechler, W. L. (2005). Multiple-choice tests and student understanding: What is the connection? Decision Sciences Journal of Innovative Education, 3(1), 73–98. https://doi.org/10.1111/j.1540-4609.2005.00053.x

Smith, G. (2007). How does student performance on formative assessments relate to learning assessed by exams? Journal of College Science Teaching, 36(7), 28.

Voelkel, S. (2013). Combining the formative with the summative: The development of a two-stage online test to encourage engagement and provide personal feedback in large classes. Research in Learning Technology, 21. https://doi.org/10.3402/rlt.v21i0.19153

Yonker, J. E. (2011). The relationship of deep and surface study approaches on factual and applied test-bank multiple-choice question performance. Assessment & Evaluation in Higher Education, 36(6), 673–686. https://doi.org/10.1080/02602938.2010.481041

Corresponding author: Bopelo Boitshwarelo, bopelo@gmail.com

Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under a Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY-NC-ND 4.0.

Please cite as: Stack, A., Boitshwarelo, B., Reedy, A., Billany, T., Reedy, H., Sharma, R., & Vemuri, J. (2020). Investigating online tests practices of university staff using data from a learning management system: The case of a business school. Australasian Journal of Educational Technology, 36(4), 72–81. https://doi.org/10.14742/ajet.4975