Categorising E-learning

Amy Wilson, Massey University

Abstract

Categorising e-learning is almost as problematic as defining the term. In an attempt to quantify/qualify the level of e-learning use in the tertiary sector in New Zealand, the Ministry of Education (MoE) established a classification system for courses in the tertiary sector. The value of this tool was disputed, and a new system was proposed but later withdrawn. Following a period of sector discussion and consultation, the MoE has now abandoned the classification system. With institutions no longer required to report the number of courses using e-learning tools, the question arises as to whether there is a need to classify e-learning, and what purpose that might serve. The responses to interviews conducted as part of a doctoral study regarding the MoE classification system prompted the author to embark on an exploratory study to determine if there is still a need to classify e-learning. The purpose of this study is twofold: first, to consider the MoE’s system to gain a better understanding of how e-learning was classified; and second, to recommend a replacement for this system that might be more practical in terms of institutional analysis and planning. The study proposes four options for those institutions that recognise the importance of continued data collection to inform choices regarding professional development, resourcing, and strategic direction. The classification options are discussed in terms of clarity of classification and ease of data collection.

Keywords: professional learning and development; e-learning; personalised learning; virtual learning; online communities of practice; professional change; online communities

Introduction

In the last decade, the New Zealand government has implemented a number of strategies to boost the capability of e-learning use in the tertiary sector.
Advisory groups were created and two major funds established to address research and new project implementation in e-learning. The Ministry of Education (MoE) introduced a classification system to help tertiary institutions categorise e-learning use. After the system was introduced, concern was expressed that it did not provide adequate detail to identify the appropriate level of e-learning use. Added to this concern was the fact that e-learning had changed over the last few years. In response, the MoE proposed changes to the classification in 2010. These proposals were subsequently deferred for further discussion, and the classification system was dropped.

This outcome may place institutions in a quandary as to whether to continue to collect e-learning classification data for their courses. What purpose might it serve? Further, if institutions continue to collect the information, what format should it take and how could it best be analysed for planning and strategic direction? This exploratory study proposes to address these issues in order to continue debate regarding an e-learning classification system, particularly at institution level. The study also provides recommendations for improved processes to ensure that e-learning managers continue to have input into the data collected to measure e-learning use in their institutions.

Context/background

E-learning studies in New Zealand have examined e-learning at a national level. Many of these studies have been cross-sector, and they have recognised that there is a range of e-learning capability within all of the institutions across the sector. Following two major government incentives (Ministry of Education, 2007a; Ministry of Education, 2007b), many institutions in New Zealand have introduced more e-learning into their delivery choices.
Most have implemented a learning management system—computer software that enables students to access resources and activities through the internet or computer networks (Mitchell, Clayton, Gower, & Barr, 2005). Following the implementation of the system, teaching staff were encouraged to upload resources for their students. Some embraced it, while others were quite fearful of the changes that e-learning might bring (Mitchell et al., 2005).

Within this atmosphere, the MoE attempted to measure the use of e-learning by introducing a classification system in 2003 (Ministry of Education, 2008). The effectiveness of this system in capturing accurate data has been disputed. It was found to be somewhat problematic because there was a great deal of overlap between the levels in the classification. Based on feedback from e-learning professionals in the tertiary sector, the MoE proposed that the classification system be replaced by a model used by the Organisation for Economic Co-operation and Development (OECD) (Ministry of Education, 2010a). Although the perception was that the OECD model had some merit, concern was still expressed by e-learning managers and individuals (Ministry of Education, 2010b). Subsequent to this discussion, the MoE decided to drop the e-learning classification system and discontinue the data-collection process.

In any process of reform, it is important to consider national and institutional drivers, so that any new system produces data to support decision makers at both levels. Additionally, it is helpful to briefly define the terminology to be used in the report, particularly in the context of the literature.

Terminology: Distance, flexible and blended learning

Although e-learning as a concept has existed for a number of years, there are still many interpretations of the term. Therefore, it is important to define e-learning in order to place it in the context of the New Zealand tertiary sector and this study.
It is also important to get a sense of the historical perspective, in terms of how e-learning fits in with traditional methods of delivering education. A number of studies have defined and described e-learning, online learning, flexible learning, and distance learning (Nichols, 2008; e-Learning Advisory Group, 2002). There is a considerable amount of overlap between the terms, and many outside of the e-learning arena do not recognise the terms or the differences. Even those within the e-learning area may have different interpretations (Bates, 2008; Marshall, 2005; Mitchell et al., 2005; Seaman, 2003).

E-learning is most often defined as any technology-enabled learning. This includes students using software in a computer lab, on a CD-ROM, and over the internet. Initially, the terminology included distance learning, distributed learning, or correspondence-course delivery. In its broadest sense, distance learning can incorporate a number of different modes or types of learning. E-learning terminology now includes hybrid or blended courses, where face-to-face class time is reduced and replaced by online activities (Nichols, 2008). This type of learning describes delivery modes that fall anywhere along a continuum of classroom-based teaching to fully online learning (Bullen & Janes, 2006, as cited in Nichols, 2008). The most appropriate mix is determined by the learner profile and learning objectives for each course (Marshall, 2005).

Categories of e-learning use

In an effort to measure courses that used alternative types of delivery, the MoE scale or classification was introduced in 2003, with the same classification types in place from 2004 (Ministry of Education, 2008). The classification was called the “Internet Based Learning Indicator” and the description was: “The field is used to indicate whether teaching and learning in each course is currently available in part or as a whole via the Internet” (Ministry of Education, 2008, p. 93).
Additionally, the classification manual provides the following basis for capturing the data: “The field is used by the Ministry for tertiary sector reporting and policy purposes. For example, is internet-based learning helping to increase participation, comparison of outcomes for students learning online to those learning on campus” (Ministry of Education, 2008, p. 93). It was the responsibility of institutions to report the level of e-learning use to the MoE once a year as part of their single data return reporting (Ministry of Education, 2008). The categories of the classification are:

1. No Access is where no part of the paper or course is accessible online.
2. Web-Supported is where a paper or course provides students access to limited online materials and resources. Access is optional, as online participation is likely to be a minor component of study.
3. Web-Enhanced is where a paper or course expects students to access online materials and resources. Access is expected, as online participation is likely to make a major contribution to study.
4. Web-Based is where a paper or course requires students to access the accompanying online materials and resources. Access is required, as online participation is required. (Ministry of Education, 2008, p. 93)

When using any type of scale or rubric to describe e-learning, there is the danger of overlap, but in looking at the essential meaning of each of these levels, it is possible to gain some clarity as to what the levels constitute. The issue of which level of interaction should define e-learning is also important. Further analysis may be used to break these levels down in terms of administration, communication, and real engagement. Courses that sit within the first level of the MoE classification indicate that there is no access to digital technology, so there is little interaction.
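As a rough illustration (not part of the MoE manual), the four levels can be treated as an ordered scale assigned from the two questions the definitions turn on: whether any online materials exist at all, and whether access to them is optional, expected, or required. The function and parameter names below are hypothetical, introduced only for this sketch.

```python
from enum import Enum

class InternetIndicator(Enum):
    """The four levels of the MoE Internet Based Learning Indicator."""
    NO_ACCESS = 1      # no part of the course is accessible online
    WEB_SUPPORTED = 2  # access optional; online work a minor component
    WEB_ENHANCED = 3   # access expected; online work a major contribution
    WEB_BASED = 4      # access (and online participation) required

def classify_course(has_online_materials: bool, access: str) -> InternetIndicator:
    """Assign an indicator level to a course.

    `access` is one of 'optional', 'expected', or 'required': the wording
    the MoE definitions use to separate the middle two levels.
    """
    if not has_online_materials:
        return InternetIndicator.NO_ACCESS
    return {
        "optional": InternetIndicator.WEB_SUPPORTED,
        "expected": InternetIndicator.WEB_ENHANCED,
        "required": InternetIndicator.WEB_BASED,
    }[access]
```

Notably, the hard part the managers reported is exactly the judgement this sketch cannot automate: deciding whether access to a given resource is ‘optional’, ‘expected’, or ‘required’ in the first place.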
Courses within the web-supported category use the internet to provide administration tools to students, perhaps in the form of email announcements and links to course material such as a virtual file repository. Although this level provides access, it does not indicate that there is a great deal of engagement with the materials or other students through the learning management system. It can therefore be assumed that communication follows a ‘transmission’ as opposed to an ‘engagement’ approach (Hegarty et al., 2005; Taylor & Dunne, 2011). Providing resources on a learning management system does support good teaching in the sense that it contributes to an organised, coherent class environment and may allow lecturers time to get on with other, less administrative tasks in their teaching—for example, planning quality learning activities. However, it does not necessarily enhance or extend learning, in either a classroom-based or online environment.

The term web-supported may refer more to ‘encouraging’ students to use online resources such as quizzes or optional discussion forums. The MoE classification does not discuss classroom-based activities, so it may be assumed that class time is not reduced. It is of interest that in the proposed changes to the MoE classification, the MoE specifically refers to “classroom-based teaching” (Ministry of Education, 2010a), whereas the original MoE classification did not make that distinction. The final level of the MoE classification could refer to anything from blended to fully online learning, where access to the web (or course area) is mandatory.

What was the single data return code used for, and was it effective?

The MoE has stated that the collection of the data is for “sector reporting and policy making” (Ministry of Education, 2008, p. 93). However, there have been only a few sector-wide publications that have drawn extensively on single data return data.
The first was the evaluation of the impact of the e-learning collaborative development funds (Ham & Wenmoth, 2007), and the second was on e-learning provision in the sector (Guiney, 2011). Both reports did provide e-learning provision rates in the tertiary sector, with the Guiney report including student demographics, qualification level, and institutional information. However, it did not comment a great deal on the quality of the e-learning resources or environment. The Guiney (2011) report also acknowledged that it was difficult to “establish clear boundaries between the Web-Supported and Web-Enhanced categories” (p. 6).

This sentiment was echoed by 13 e-learning managers from institutes of technology and polytechnics (ITPs) who were interviewed as part of a doctoral study. The interviews included a question about the single data return internet field. Throughout the interview process it became clear that, because the e-learning managers were not the individuals who reported internet use to the MoE, it was often problematic for them to report the categories. Although the e-learning managers had received an email copy of the questions beforehand, they often had to contact those individuals who were responsible for the student administration system. The resulting data was often emailed to the researcher at a later date. Some data was quantitative—occasionally this would be the number of courses, but more often it was the percentage of courses in each category. Other e-learning managers provided a general overview of their perception of the number or percentage of courses in the different categories. In some cases, the difficulties highlighted by the e-learning managers related to issues of how to differentiate the levels.
For example, one e-learning manager indicated that the perception of the difference between web-supported and web-enhanced was difficult to gauge, because lecturers don’t contact the e-learning team as “they don’t see a significant change from what they used to be doing”. There is some ambiguity in terms of what is meant by the terminology. For example, one e-learning manager queried whether providing a resource online meant that the students are ‘expected’ or ‘required’ to access it online. Still other respondents indicated difficulty in determining what was meant by e-learning. For example, some institutions use video-conferencing links between campuses. The managers were uncertain whether or not video-conferencing was considered e-learning. Others wondered if class-based e-learning activities should be considered as part of the percentage of courses in the web-supported category. Other criticism arose when the MoE invited individuals to comment on proposed changes to the classification system (Ministry of Education, 2010b).

What are the benefits and what are the options?

In addition to criticism of the classification system, there were also some recommendations for how such a system might assist managers at an institution level. In their response to the government request, New Zealand member institutions of the Australasian Council on Open, Distance and E-Learning indicated that the categories may be used by institutions to inform students about the course requirements and digital expectations. The categories could also be used by institutions in designing a programme and providing a basis for instructional design work, professional development, and e-learning support that might be required for staff teaching on the programme (Australasian Council on Open, Distance and E-Learning, as cited in Ministry of Education, 2010b).
The use of frameworks to establish and maintain high-quality online courses is well documented (Gaytan, 2009; Park, 2011; Swan, Garrison, & Richardson, 2009). In a study of mobile learning, Park (2011) recommended using a framework of mobile learning activities to inform instructional designers to “design and implement mobile learning more effectively” (p. 95). The framework was based on a ‘mobility hierarchy’, which described the level of interaction in a similar way to the categories of the MoE classification. It also included a framework of technological affordances, including group and individual ways of working (p. 82).

So what might the options be for institutions that wish to continue recording data about their courses in establishing a framework that will inform their academic development and quality assurance? Four options that might be considered are listed below.

Option 1: Continue to use the internet-based learning indicator categories

One of the criticisms of the proposed changes to the internet field is that “the value of this indicator in an increasingly e-learning environment does not justify the work involved in loading the data” (Ministry of Education, 2010b, p. 12). Introducing a new system may simply compound the situation. Institutions that currently find value in the classification could continue to maintain data on their courses.

Option 2: Slight modifications to the classification system

Reinterpreting the system might solve some problems. A banded approach could be used, replacing the four current levels with the following categories:

Figure 1 A banded approach to the MoE system

How does this compare with the MoE classification? It is very similar, but may remove some of the ambiguity. The first level in the MoE system and the first band in this system are identical—there is no use of digital technologies.
The next band in this system indicates that students and teaching staff use technology to complete administrative tasks—sharing course descriptors, timetables, and assignment submission. Unlike the MoE system there is no reference to access, as there is an increasing expectation that students will need to use technology to complete these tasks. The third band relates more to the use of e-learning in teaching. Does using the technology enhance or reinforce learning? Learning design is inherent when using technology to enhance learning. How will the digital technology support the students in their learning? What tools might be used? If a course sits within the third band, we would expect teaching staff either to have the skills to use the technology, or to gain these skills through professional development. The final band is similar to the MoE system—it assumes access because the course is either completely online or relies heavily on the use of digital technologies. If the problem with the retired system is that it is difficult to distinguish between the middle two categories, the banded approach would differentiate the categories based on judgements of whether digital technology is being used as a tool or for the pedagogical process.

Would this system aid in planning for professional development and project management? One example to consider might be a lecturer who is taking over a course that provides for online assignment submission. In the short term, the lecturer only needs to learn particular technical skills in order to support their students. This would sit within the second band. A hands-on skills session may be the best approach (Wilson, in press). However, if the lecturer wants to provide meaningful feedback through digital technology, further professional development might be suggested. This type of feedback practice would fit within the third band. One example from the ITP sector shows how this system might work.
One institution has created a scale to categorise e-learning. It includes categories that are driven from an e-learning management perspective, where templates and project management structures are provided (Nelson-Marlborough Institute of Technology, n.d.). The three levels are: fully online course; blended course (described as some online, some face-to-face); and e-filing cabinet to supplement a face-to-face course. The type of course could be recorded in the same format as the original classification. Courses that do not have an area on the learning management system would be considered to fall within the ‘no access’ category.

Option 3: Taxonomy

Taxonomy could be used as a form of classification system for e-learning (Rudak & Sidor, 2010). Each category could be described in terms of the e-learning practice, and could include the types of technology, tools, or activities that are provided through digital technology. This would allow for greater granularity, but it may be too prescriptive to allow all institutions to report their data accurately.

Figure 2 Taxonomy of e-learning

Option 4: Rubrics to categorise e-learning

The last option for consideration is a rubric. Rubrics, peer evaluations, and other tools are often used to evaluate the quality of online course design (Roblyer & Wiencke, 2003). Additionally, these evaluation tools can be used to assist managers in making decisions for setting professional development criteria (Palloff & Pratt, 2011; Wood & Friedel, 2009). The example rubric shown in Figure 3 classifies e-learning use in terms of strands, and provides behaviours for three levels of e-learning.

Figure 3 E-learning classification rubric

Each of these options would need to be considered in light of institutional needs. Further consultation with e-learning professionals and government advisory groups would need to occur to establish priorities for a new classification system.

When and how could the data be collected?
Recording data for the single data return has typically been manual, with either the e-learning or academic records department entering data regarding the e-learning category of the courses. Online development projects, course audits, moderations, and reviews provide managers with opportunities to determine how e-learning is being used in courses (Stubbs, Martin, & Endlar, 2006).

Learner analytics might be a more effective way of capturing the data. Learner analytics is described as: “interpretation of a wide range of data produced by and gathered on behalf of students in order to assess academic progress, predict future performance, and spot potential issues” (Johnson, Smith, Willis, Levine, & Haywood, 2011, p. 42). It combines data sets, statistical analysis, and modelling (Campbell, DeBlois, & Oblinger, 2007). Many systems are used as early-intervention systems that alert teaching staff to at-risk students (Arnold, 2010; Campbell, DeBlois, & Oblinger, 2007). More recent studies have described the use of learner analytics in terms of online course quality (Oliver & Whelan, 2011) and student collaboration (Reich, Murnane, & Willett, 2012). Adaptation of the analytics model may result in using it to help managers assess online course quality and identify areas of need for staff development. With any analytical software there is a concern that the process is ‘measuring’ learner access and statistical information. However, it might be possible to determine the interactivity of a course by comparing the number of discussion forums, interactive web pages, and quizzes with the number of more static resources such as PowerPoint presentations and documents. From an economic perspective, it would be useful if groups of institutions could work together on designing these analytics or using already existing software.

Conclusions

There were two purposes to this study.
The first was to consider the MoE’s system to gain a better understanding of how e-learning was classified, and the second was to recommend a replacement system that might be more practical in terms of institutional and sector analysis and planning. In determining whether a new classification system should be implemented, perhaps the issue is not a question of measuring the web components or online components of a course. Instead, it may be a wider issue of how the technology might be used in all of the categories of courses. Defining a role for the type of technology use may mean that a more banded approach would be more effective. The study also discussed learner analytics as a method for effective data collection. The difficulty in this method will be determining valid measures of interactivity in order to categorise the courses.

As an exploratory foray into the debate of e-learning classification, this study probably raises as many questions as it provides answers. No doubt the next few years will witness robust discussion regarding not only how e-learning can be measured, but also whether it should be. One thing is certain—this discussion will not be limited to New Zealand.

References

Arnold, K. E. (2010). Signals: Applying academic analytics. EDUCAUSE Quarterly, 33(1).

Bates, A. W. (2008, September). E-learning and vocational education and training. Paper presented at the E-learning in Industry Symposium, Hamilton, New Zealand.

Campbell, J. P., DeBlois, P. B., & Oblinger, D. G. (2007, July/August). Academic analytics: A new tool for a new era. EDUCAUSE Review, 41–57.

e-Learning Advisory Group. (2002). Highways and pathways: The report of the e-Learning Advisory Group. Wellington, New Zealand: Author.

Gaytan, J. (2009). Analyzing online education through the lens of institutional theory and practice: The need for research-based and -validated frameworks for planning, designing, delivering, and assessing online instruction. The Delta Pi Epsilon Journal, LI(2), 62–75.

Guiney, P. (2011). E-learning provision and participation: Trends, patterns and highlights. Wellington, New Zealand: Ministry of Education.

Ham, V., & Wenmoth, D. (2007). Evaluation of the e-Learning Collaborative Development Fund. Wellington, New Zealand: Tertiary Education Commission.

Hegarty, B., Penman, M., Brown, C., Coburn, D., Gower, B., Kelly, O., . . . Suddaby, G. (2005). Approaches and implications of elearning adoption in relation to academic staff efficacy and working practice. Palmerston North, New Zealand: Universal College of Learning.

Johnson, L., Smith, R., Willis, H., Levine, A., & Haywood, K. (2011). The horizon report. Austin, TX: New Media Consortium.

Marshall, S. (2005). Determination of New Zealand tertiary institution e-learning capability: An application of an e-learning maturity model. Report on the e-learning maturity model evaluation of the New Zealand tertiary sector. Wellington, New Zealand: Victoria University of Wellington.

Ministry of Education. (2007a). (e)Learning Collaborative Development Fund (eCDF). Wellington, New Zealand: Author.

Ministry of Education. (2007b). Tertiary (e)Learning Research Fund (TeLRF). Wellington, New Zealand: Author.

Ministry of Education. (2008). 2009 single data return: A manual for tertiary education organisations and student management system developers: Specifications of the Ministry of Education and Tertiary Education Commission data requirements for the single data return for the 2009 academic year. Retrieved from http://cms.steo.govt.nz/NR/rdonlyres/8F4D8AE3-03B6-4FA1-B3F4-D7472FF35752/0/SDRManual2009v111.pdf

Ministry of Education. (2010a). Introduction and summary: Proposed changes to the 2011 single data return feedback process. Retrieved from http://cms.steo.govt.nz/NR/rdonlyres/BEB572AB-C1F8-4409-BEC1-246BE3B6150D/0/IntroandSummaryofProposedChanges2011SDRv1_2.pdf

Ministry of Education. (2010b). Response to sector feedback on the proposed changes to the 2011 SDR. Wellington, New Zealand: Author.

Mitchell, D., Clayton, J., Gower, B., & Barr, H. (2005). E-learning: An annotated bibliography. Hamilton, New Zealand: Waikato Institute of Technology.

Nelson-Marlborough Institute of Technology. (n.d.). NMIT – Course: Tutors’ guide to NMIT Online. Nelson, New Zealand: Author.

Nichols, M. (2008). E-primer series: E-learning in context. Wellington, New Zealand: Ako Aotearoa.

Palloff, R. M., & Pratt, K. (2011). The excellent online instructor: Strategies for professional development. San Francisco, CA: Jossey-Bass.

Park, Y. (2011). A pedagogical framework for mobile learning: Categorizing educational applications of mobile technologies into four types. The International Review of Research in Open and Distance Learning, 12(2), 78–102.

Reich, J., Murnane, R., & Willett, J. (2012). The state of wiki usage in U.S. K-12 schools: Leveraging Web 2.0 data warehouses to assess quality and equity in online learning environments. Educational Researcher, 41(1), 7–15. doi: 10.3102/0013189X11427083

Roblyer, M. D., & Wiencke, W. R. (2003). Design and use of a rubric to assess and encourage interactive qualities in online courses. The American Journal of Distance Education, 17(2), 77–98.

Rudak, L., & Sidor, D. (2010). Taxonomy of e-courses. In M. Iskander, V. Kapila, & M. A. Karim (Eds.), Technological developments in education and automation (pp. 275–280). New York, NY: Springer.

Seaman, J. (2003). The Sloan survey of online learning. Sloan-C View, 4(2), 5.

Stubbs, M., Martin, I., & Endlar, L. (2006). The structuration of blended learning: Putting holistic design principles into practice. British Journal of Educational Technology, 37(2), 163–175.

Swan, K., Garrison, D. R., & Richardson, J. C. (2009). A constructivist approach to online learning: The community of inquiry framework. In C. R. Payne (Ed.), Information technology and constructivism in higher education: Progressive learning frameworks. Hershey, PA: Information Science Reference. doi: 10.4018/978-1-60566-654-9

Taylor, C. A., & Dunne, M. (2011). Virtualization and new geographies of knowledge in higher education: Possibilities for the transformation of knowledge, pedagogic relations and learner identities. British Journal of Educational Technology, 32(4), 623–641.

Wilson, A. D. (in press). Effective professional development for e-learning: What do the managers think? British Journal of Educational Technology.

Wood, D., & Friedel, M. (2009). Peer review of online learning and teaching: Harnessing collective intelligence to address emerging challenges. Australasian Journal of Educational Technology, 25(1), 60–79.

Biographical notes

Amy Wilson
A.D.Wilson@massey.ac.nz

Dr Amy Wilson is Senior E-Tutor for the Graduate Diploma in Education (e-learning) at Massey University. Dr Wilson has spent the last 8 years developing both online and web-supported courses and has enjoyed the opportunity to work with a number of teaching staff. Her interests are in professional development and instructional design.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Wilson, A. D. (2012). Categorising e-learning. Journal of Open, Flexible and Distance Learning, 16(1), 156–165.