Australasian Journal of Educational Technology, 2016, 32(5).

Incorporating meaningful gamification in a blended learning research methods class: Examining student learning, engagement, and affective outcomes

Meng Tan and Khe Foon Hew
The University of Hong Kong

In this study, we investigated how the use of meaningful gamification affects student learning, engagement, and affective outcomes in a short, 3-day blended learning research methods class, using a combination of experimental and qualitative research methods. Twenty-two postgraduates were randomly split into two groups taught by the same instructor. The experimental group attended a course that incorporated the notion of meaningful gamification – that is, utilising the game mechanics of points, badges, and a leader board, as well as activities based on self-determination theory. The control group attended the same course and activities taught by the same instructor, but without the game mechanics. Data sources included students' pre- and post-test scores, group artefact scores, discussion forum posts, a student questionnaire survey, student interviews, and the teacher's self-reflections. Results suggest that students in the experimental group posted more messages in the discussion forums than the control group. Furthermore, the quality of the group artefacts produced by participants in the experimental group was overall higher than that of the control group. All students in the experimental group strongly agreed or agreed that they found the course motivating, whereas only about half the participants in the control group did so.

Introduction

Gamification may be defined as the application of game-like mechanics to non-game situations or contexts (Deterding, Dixon, Khaled, & Nacke, 2011). Its main purpose is to motivate users to perform certain activities. Gamification differs from game-based learning, as the latter refers to the use of games for teaching and learning. In this study, we did not engage in game-based learning because we did not develop any online games; rather, we used game mechanics to enhance student motivation to learn. Many current forms of gamification focus on using online game mechanics to create a compelling and engaging experience that can drive and change users' behaviours (Zichermann & Cunningham, 2011). Examples of game mechanics include points, badges, levels, challenges, virtual goods, and leader boards. These can be summarised as follows (Bunchball, 2010; Educause, 2011):

• Points refer to tokens that can be collected by users, which can be used as status indicators, to unlock access to certain content, or to spend on virtual goods or gifting.
• Badges or trophies refer to tokens that appear as icons or logos on a web page and signify accomplishment of a particular activity, such as completion of a project.
• Levels refer to status markers that signify a level of mastery of a particular task.
• Challenges refer to missions to be accomplished by a user. Challenges give users a purpose or goal to aim for.
• Virtual goods or gifts refer to non-physical, intangible objects for use in online communities or online games. Some virtual goods can be sold for real dollars.
• Leader boards refer to high-score tables that show an individual's performance compared with other users.
These game mechanics correspond to a variety of human desires, such as the need for reward, status, achievement, self-expression, competition, and altruism (see Table 1, adapted from Bunchball, 2010, p. 9). The black dots in Table 1 indicate the primary desire affected by a particular game mechanic, while the grey dots show the other possible desires it affects (Bunchball, 2010). For example, the use of a leader board most closely fulfills a person's desire for competition, while at the same time it may also cater to his or her need for status or achievement. Badges most closely meet an individual's desire for achievement, while also fulfilling a need or desire for reward or status.

Table 1
Interaction between basic human desires and game mechanics (adapted from Bunchball, 2010, p. 9)
[The table cross-tabulates six game mechanics (points, levels, challenges, virtual goods/gifts, leader boards, badges) against six human desires (reward, status, achievement, self-expression, competition, altruism). Black dots symbolise the primary human desire a game mechanic may affect; grey dots symbolise other possible desires a game mechanic could affect.]

There is growing interest in gamification in the field of higher education because it provides an alternative means for educators to engage students during the teaching and learning process (de Sousa Borges, Durelli, Reis, & Isotani, 2014). Gamification gives instructors online game-like tools to motivate and reward students to give their full selves to learning (Lee & Hammer, 2011). Such tools have the potential to motivate students, particularly those familiar with digital video games, to engage in learning activities (Lee & Hammer, 2011).

Current research on education-related gamification is still at a nascent stage. A majority of education-related gamification studies to date have been conducted in the higher education sector (see de Sousa Borges et al., 2014; Dicheva, Dichev, Agre, & Angelova, 2015). Most papers report the application of gamification in the subject domains of computer science and information technology (Dicheva et al., 2015). The most commonly used game mechanics were points, badges, and leader boards (Dicheva et al., 2015; Hamari, Koivisto, & Sarsa, 2014). Points, badges, and leader boards may be categorised as forms of extrinsic incentive that act as reinforcement (Skinner, 1957) to motivate an individual to act.

Overall, the findings of previous studies in higher education suggest that actual evidence regarding the impact of gamification on student learning is still fairly weak. This is because a majority of previous studies used student self-reported surveys to measure learning outcomes (e.g., Cheong, Cheong, & Filippou, 2013). A limitation of self-reported data is that participants usually have correct notions about socially desirable answers, and thus tend to provide answers that make them look good (Hancock & Flowers, 2001). We found five studies employing objective measures such as tests, grades, task completion rates, or quizzes to evaluate student learning outcomes (i.e., Coetzee, Fox, Hearst, & Hartmann, 2014; Dominguez et al., 2013; Hakulinen, Auvinen, & Korhonen, 2013; Ibanez, Di-Serio, & Delgado-Kloos, 2014; Li, Grossman, & Fitzmaurice, 2012). Of these five, only three employed a comparison-based design such as the experimental-control research method (i.e., Coetzee et al., 2014; Dominguez et al., 2013; Hakulinen et al., 2013).
Overall, prior research suggests that the learning impact of gamification on students is mixed. Some studies found that students who followed traditional exercises or courses tended to perform similarly in overall score to those who followed gamified exercises (e.g., Coetzee et al., 2014; Hakulinen et al., 2013). Dominguez et al. (2013) found that badges had a positive effect on students' practical assignments but a negative effect on written assignments. On the other hand, previous findings regarding the effects of gamification on student engagement and motivation were generally positive. Overall, the use of game mechanics such as badges, points, and leader boards has a significant positive impact on student engagement. For example, the quantity of students' contributions, such as message posts, was greater in gamified forums (Coetzee et al., 2014; Denny, 2013) than in non-gamified forums. Students also found that the game mechanics made the course activities more enjoyable and fun (Li et al., 2012).

Meaningful gamification

Gamification can be classified into two main categories: (a) extrinsic reward-based gamification and (b) meaningful gamification. Extrinsic reward-based gamification techniques mainly involve the application of points, badges, and leader boards. Extrinsic reward-based gamification can be an excellent motivator because it caters to people's desires for reward, achievement, and competition (Bunchball, 2010; see Table 1). However, not all students may find reward-based gamification satisfactory. Nicholson (2012) introduced the notion of meaningful gamification, which not only uses game mechanics to provide extrinsic incentives but also applies student-centred activities to make a course meaningful to participants. One way to achieve the latter is through the self-determination theory of motivation (Deci & Ryan, 2004). Self-determination theory assumes that all individuals, regardless of gender, age, or culture, possess three fundamental psychological needs that move them to act or not to act – the needs for autonomy, relatedness, and competence. When these needs are fulfilled, people find tasks meaningful and continue to participate in them, as opposed to people whose needs for autonomy, relatedness, and competence are not met. Figure 1 provides a bird's-eye view of extrinsic reward-based gamification versus meaningful gamification.

Figure 1. Extrinsic reward-based gamification versus meaningful gamification

Autonomy refers to the need for freedom or perceived choice over one's actions (Deci & Ryan, 2000). The psychological need for autonomy is assumed to provide a motivational basis for students' behavioural engagement in a course, such as completing an assignment (Skinner, Furrer, Marchand, & Kindermann, 2008). Feeling autonomous is also expected to have the motivating effect of producing higher levels of emotional engagement, such as enjoyment of the course (Skinner et al., 2008). Competence or mastery refers to a person's need to master his or her pursuits or learning. A sense of mastery of the topic being studied encourages a learner to participate further in the course activities, and fosters positive learner feelings about the course. Relatedness refers to the need for an individual to connect with other people (Deci & Ryan, 1991).
Several studies have demonstrated that a greater sense of relatedness is linked to increased levels of behavioural and emotional engagement (e.g., Furrer & Skinner, 2003).

Research questions

As noted earlier, there is a dearth of empirical evidence about the effectiveness of gamification in education contexts. This study aims to make a contribution in this respect by investigating the use of gamification in a higher education context via a combination of experimental and qualitative research methods. Some results of this study have been reported elsewhere (Hew, Huang, Chu, & Chiu, 2016). In this paper, we describe the possible interaction between basic human desires and game mechanics (Table 1), explain the meaningful gamification framework (Figure 1), compare the experimental and control group students' perceptions of the course (via survey data), and provide qualitative findings related to students' perceptions of gamification (via interview data) and the instructor's perceptions of gamification (via self-reflection data). We also discuss whether the use of gamification encourages surface or deep learning, and propose a possible future research design (design-based research) that enables a researcher or instructor to iteratively revise a gamified course over a longer period of time while advancing its theoretical underpinnings at the same time. This could potentially yield more generalisable practical design principles for using gamification, as opposed to a one-off experimental or mixed method approach.

In this study, the experimental group attended a course that incorporated the notion of meaningful gamification – that is, utilising the game mechanics of points, badges, and a leader board, as well as activities based on self-determination theory (autonomy, mastery, and relatedness; see Figure 1). The control group, on the other hand, attended the same course and activities taught by the same instructor but without access to the game mechanics of points, badges, and the leader board.

The following research questions guided the experimental research component:
(a) Do students in the experimental group perform better in terms of learning the content (post-test scores) than students in the control group?
(b) Do students in the experimental group produce higher quality artefacts than students in the control group?
(c) Do students in the experimental group participate more in the course forums than students in the control group?

Based on self-determination theory, as well as the use of game mechanics (points, badges, and leader board), we posit that:
H1: Students in the experimental group will perform better in learning the content (as learning performance demonstrates a sense of mastery), compared to the control condition.
H2: Students in the experimental group will produce better artefacts (as completion of the assignment provides an indication of autonomy), compared to the control condition.
H3: Students in the experimental group will post more forum messages (as more discussion posts demonstrate a stronger sense of relatedness), compared to the control condition.

The following questions guided the qualitative research component:
(a) How do students in the experimental group perceive the use of game mechanics?
(b) How does the instructor perceive the use of meaningful gamification?
Method

Participants

Participants (N = 22) were volunteers recruited in summer 2014 at a large Asian public university to enrol in a 3-day blended learning course entitled "Methods of research and enquiry: Designing good questionnaire". Of the 22 participants, 8 were male and 14 female. Participants were randomly assigned to an experimental group and a control (non-gamified) group, yielding 11 students in each group. Both groups were taught by the same instructor.

The learning materials were uploaded onto the Moodle platform. Moodle provided an all-in-one platform for hosting the learning materials and activities, quizzes, and questionnaire. It was also the university's learning management system, so all participants were already familiar with it. More specifically, our blended learning course may be described as a flipped classroom model incorporating both face-to-face and online components. Before class, the participants accessed the learning materials, such as video lectures, at home, so that in-class face-to-face time could be used for classroom discussion of the subject and for student-centred learning activities such as group work.

The assessment tasks included the post-test and a student group activity on designing questionnaires. Participants were given a choice of six different questionnaire topics to design (see the following section for more information). Participants worked in self-selected groups of 2 or 3 people to choose one topic and design a questionnaire for it. The assessment task was identical irrespective of which topic the students picked. After the class had ended, the participants examined other groups' artefacts (questionnaires) and provided suggestions for improvement. This was conducted online through the course forums. Revisions to the artefacts were made where necessary. Table 2 shows the implementation of the course.

Table 2
Implementation of the blended learning course (adapted from Hew et al., 2016, p. 225)
Day 1 (online mode): Access course materials at own time and pace
Day 2 (face-to-face mode): Pre-test + instructor-led discussion + student group activity on designing a questionnaire + post-test
Day 3 (online mode): Online discussion about the group activity + revision of the questionnaire based on comments and suggestions

Experimental group

Figure 2 illustrates the design of the course activities used in the experimental group. Specifically, the course activities aimed to fulfil an individual's three psychological needs: the needs for autonomy, relatedness, and competence.

Figure 2. Overview of the course activities used in the experimental group (Hew et al., 2016, p. 225)

To cater to a learner's need for autonomy, two strategies were employed. First, learners were provided with a recommended list of helpful resources which could deepen their understanding of questionnaire design. This reading list was made optional rather than mandatory, because individuals need to feel that they are acting of their own volition and participating in an activity voluntarily, instead of being forced into doing something. Second, participants were provided with six different questionnaire design topics from which to choose one to complete.
These topics were grouped into three modes – easy mode, medium mode, and hard mode:
• Easy mode – Topic 1 "Student evaluation of teaching and learning"
• Easy mode – Topic 2 "User experience: how customers feel about the services provided by XXX hotel"
• Medium mode – Topic 3 "Market research for a product: understanding how your target market will feel about a new product"
• Medium mode – Topic 4 "Health survey: understanding how your target audience feels about their health"
• Hard mode – Topic 5 "A survey on the psychological demands of the elderly"
• Hard mode – Topic 6 "Customer satisfaction: finding out what customers think about your company and how it compares to your competitors".

Groups that chose an easy mode topic would be awarded one point, while those that opted for the medium and hard modes would be given two and three points respectively. However, students would not get any points if they merely designed the questionnaires without care or thought. In other words, if the instructor deemed a created questionnaire to be poor in quality, the students would not be rewarded with any points. The purpose of this was to stimulate the students to choose a topic they could manage well, as well as to carefully apply the concepts they had learned in the course to design an appropriate questionnaire. All the points collected by the groups were accumulated and the results displayed in the course leader board in Moodle.

To cater to a learner's need for competence, active learning strategies were used. Active learning may be defined as instructional activities that involve students in doing things and thinking about the things they are doing (Bonwell & Eison, 1991). More specifically, Meyers and Jones (1993) identified elements of active learning as cognitive activities that allow students to clarify, question, consolidate, and appropriate new knowledge. In the present study, students (in groups) were required to design actual questionnaires and post them in a forum for peer discussion. Designing actual questionnaires enabled the participants to apply the concepts they had learned, while online peer discussion allowed students to question and comment on each other's work. Students' completed questionnaires were then examined by an expert skilled in questionnaire design, who assessed them and provided feedback. The instructor would also examine the students' completed activities posted in the forums, analyse their discussion posts, and provide constructive feedback.

To cater to a learner's need for relatedness, group work was used. The use of groups enabled participants to connect with one another. In addition, working in groups gave students the opportunity to exchange and discuss ideas. The interaction or discussion among students could generate extra activities (e.g., explanation, disagreement) as well as additional cognitive mechanisms (e.g., knowledge elicitation and sharing) which may not occur as frequently in traditional, individual learning (Dillenbourg, 1999).

Three game mechanics were used in the experimental group: badges, points, and a leader board. These catered to people's desires for reward, achievement, and competition (see Figure 1; Bandura, 1977; Skinner, 1957; Suls & Wheeler, 2012). Badges, points, and leader boards are also the most widely used game mechanics in gamified activities, so it is reasonable to investigate their impact on student engagement in the present study.
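To make the topic-difficulty point scheme described above concrete, the following is a minimal sketch of its rules. The function and the quality flag are purely illustrative assumptions; in the study itself the instructor judged quality, and no such code was part of the Moodle setup.

```python
# Illustrative sketch of the topic-difficulty point scheme (hypothetical code).

TOPIC_POINTS = {"easy": 1, "medium": 2, "hard": 3}

def group_points(mode: str, acceptable_quality: bool) -> int:
    """Award difficulty-based points, but nothing for a careless artefact."""
    if not acceptable_quality:
        return 0  # poor-quality questionnaires earn no points
    return TOPIC_POINTS[mode]

# A group completing a hard-mode topic to an acceptable standard earns 3 points;
# the same topic attempted without care or thought earns none.
assert group_points("hard", acceptable_quality=True) == 3
assert group_points("hard", acceptable_quality=False) == 0
```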
Participants who accessed the lecture slides before class were automatically rewarded with an "Early Bird" badge. Participants who posted six replies in the discussion forum were rewarded with a "Reply Warrior" badge. Participants would win a "Questionnaire Ace Team" badge if their group took first place on the course leader board. Table 3 summarises the three badges. Furthermore, students who collected two badges would win one point for their group. The points collected by each group were tabulated and displayed in the course leader board for all to see.

Table 3
Badges used in the experimental group (adapted from Hew et al., 2016, p. 226; badge images not reproduced)
• Early Bird – awarded to students who log in to Moodle and access the PPT slides before class.
• Reply Warrior – awarded to students who participate in the discussion forum and post 6 replies.
• Questionnaire Ace Team – awarded to students whose group wins first place on the leader board.

Control group

The control group had access to the same course contents and participated in the same course activities as the experimental group, but without any game mechanics. The control group was also implemented as a flipped classroom model taught by the same instructor (see Figure 2), but on different days from the experimental group. Before class, the participants accessed the learning materials at their own pace and time. The same pre- and post-tests were employed in the control group. Participants in the control group were also given the same choice of six questionnaire topics to design. Participants worked in self-selected groups of 2 or 3 people to choose one topic and design a questionnaire for that particular topic. After the class had ended, the participants examined other groups' artefacts (questionnaires) and provided suggestions for improvement via the course online discussion forum.

Data collection and analysis

Data sources included students' (a) pre- and post-test data, (b) assignment artefact scores, (c) online forum posts, (d) an online survey, (e) interviews, and (f) the teacher's self-reflections.

Prior to the commencement of the course, all participants in both the experimental (n = 11) and control (n = 11) groups completed a pre-test concerning their understanding of questionnaires and their design. Table 4 shows the specific questions used.

Table 4
List of questions used in the pre-test (adapted from Hew et al., 2016, p. 225)
(1) Name 3 types of questionnaire questions (3 marks)
(2) List 3 advantages of using questionnaires in conducting a survey (3 marks)
(3) List 3 disadvantages of using questionnaires in conducting a survey (3 marks)
(4) List 3 techniques of designing a good questionnaire (3 marks)
(5) Describe 3 ways to improve response rates in questionnaire design (3 marks)
(6) Name 3 sampling methods that can be used in choosing survey participants (3 marks)

The pre-test was implemented in Moodle as a quiz. Possible scores on the pre-test ranged from 0 to a full mark of 18. At the end of the course, both groups completed a post-test to examine how much the participants had learned in the course. The post-test questions were the same as those of the pre-test, but their order was changed. All participants in the experimental and control groups completed the post-test.
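As a concrete illustration of the six-item, 18-mark tests and the gain scores reported later, the following sketch shows how scoring could be tabulated. The data structures are hypothetical assumptions; the actual scoring was handled by the Moodle quiz tool.

```python
# Illustrative sketch of test scoring and gain computation (hypothetical code).

def quiz_score(marks_per_item: list[int]) -> int:
    """Sum six item marks (each 0-3) into a 0-18 total."""
    assert len(marks_per_item) == 6 and all(0 <= m <= 3 for m in marks_per_item)
    return sum(marks_per_item)

def knowledge_gain(pre: int, post: int) -> int:
    """Gain score = post-test score minus pre-test score."""
    return post - pre

# Example: a student scoring 5 on the pre-test and 13 on the post-test
# records a gain of 8 marks.
pre = quiz_score([1, 1, 1, 1, 1, 0])   # 5
post = quiz_score([3, 2, 2, 2, 2, 2])  # 13
print(knowledge_gain(pre, post))       # 8
```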
Differences in pre-test and post-test scores between the experimental and control groups were examined using Mann-Whitney U tests, as these tests are robust against small sample sizes and possibly skewed data.

The students' group assignment artefacts (i.e., the newly developed questionnaires) were graded by an expert skilled in questionnaire design to assess the quality of the group products. A total of eight group assignment artefacts were examined: four from the experimental class and four from the control class. The expert graded the artefacts in terms of their relevance to the overall purpose of each questionnaire, the wording used (e.g., avoiding bias, double-barrelled questions, and double negatives), and the response choices (e.g., avoiding an inconsistent number of options for a Likert scale, or overlapping choices). The possible marks awarded by the expert ranged from 0 to 10. Each completed questionnaire was given 10 marks at the start of the grading; if a mistake was found in the questionnaire (e.g., a double-barrelled question), one mark was deducted. Every mistake found therefore reduced the marks further.

The number of posts that students contributed to the course forum (i.e., forum posts) was used as a proxy for student engagement in this study. Forum posting is a standard metric used by other researchers to measure student engagement (Anderson, Huttenlocher, Kleinberg, & Leskovec, 2014; Coffrin, de Barba, Corrin, & Kennedy, 2014; Denny, 2013). A Mann-Whitney U test was performed to determine whether there was any statistical difference in the number of discussion posts between the experimental and control groups.

At the end of the course, all students in both groups also completed an online survey on their perceptions of the course. In addition, 2 days after the course ended, five students (volunteers) in the experimental group were interviewed individually over the telephone. Semi-structured telephone interviews were conducted because it was inconvenient for the students to return to campus after the course was over. A more purposeful sampling strategy was not used because we were unable to get the consent of specific participants to do the interviews; we therefore sampled interview participants by simply asking for volunteers. We acknowledge this as a limitation in the Discussion section, because convenience sampling can lead to the under-representation or over-representation of particular groups. No volunteers from the control group were available. Examples of questions used during the semi-structured interviews are shown in Table 5.

Table 5
Examples of questions used in the semi-structured interviews
(1) Tell me what game mechanics most engaged you in this course. Why?
(2) What do you like most about this course? Why?
(3) What do you dislike most about this course? Why?
(4) Is there anything else you want to tell me about this course?

According to Sturges and Hanrahan (2004), the quantity and quality of data collected by telephone interview are comparable to those collected in face-to-face interviews. Telephone interviews are also more feasible than Skype interviews (Weinmann, Thomas, Brilmayer, Heinrich, & Radon, 2012). Student interview data were analysed using an inductive qualitative analysis approach (Punch, 2005) to generate insights regarding what students liked or disliked about the course, as well as the use of game mechanics.
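For readers unfamiliar with the test, here is a minimal sketch of the kind of Mann-Whitney U comparison applied above to both the test scores and the forum-post counts. The two score lists are illustrative placeholders, not the study's actual data.

```python
# Illustrative Mann-Whitney U comparison (hypothetical data, not the study's).
from scipy.stats import mannwhitneyu

experimental = [5, 6, 4, 5, 7, 5, 3, 6, 5, 7, 5]  # e.g., per-student forum posts
control      = [1, 0, 2, 1, 1, 3, 1, 0, 2, 3, 1]

u_stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
# A p-value below .05 would indicate a statistically significant difference
# between the two groups, without assuming normally distributed data.
```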
Finally, the teacher was asked to reflect on the following open-ended questions at the conclusion of the course: (a) How much effort was required to produce the gamified course? (b) What was the most challenging part of the course design? (c) How did the teacher perceive the use of game mechanics? The teacher answered the questions using Microsoft Word.

Results

Pre- and post-test scores

Table 6 shows the test scores and knowledge gains in both the experimental and control groups. The comparison of the pre-test and post-test scores shows that both groups made gains in learning how to design a good questionnaire. To determine the significance of the differences between the groups, we performed a series of Mann-Whitney U tests, as these are non-parametric and thus robust against skewed data. No statistically significant difference was found in the pre-test scores of the two groups (Mann-Whitney U = 59.0, Z = -0.100, p = 0.949). Hence, the groups may be considered equal in terms of prior knowledge of questionnaires before the lessons were conducted. Overall, the experimental group showed a higher mean score in the post-test (M = 13.55) and greater knowledge gains (M = 8.45) than the control group (M post-test = 11.55, M gain = 6.18) (see Table 6), although the difference in post-test scores did not reach a significant level (Mann-Whitney U = 39.5, Z = -1.390, p = 0.171). Thus, hypothesis H1 was not confirmed.

Table 6
Summary of pre-test, post-test, and gain scores
Pre-test: experimental group (n = 11) M = 5.09 (SD = 2.66); control group (n = 11) M = 5.36 (SD = 2.11)
Post-test: experimental group M = 13.55 (SD = 2.66); control group M = 11.55 (SD = 3.30)
Gain: experimental group M = 8.45 (SD = 3.09); control group M = 6.18 (SD = 2.89)

Quality of participants' artefacts

As mentioned previously, the students' completed questionnaires were graded by an expert to assess the quality of the group artefacts. The possible marks awarded by the expert ranged from 0 to 10. Table 7 shows the marks received by the groups. The results indicate that the quality of the group artefacts produced by the participants in the experimental group (M = 7.50, median = 7.50) was generally higher than that of the control group (M = 5.75, median = 5.50). Thus, hypothesis H2 was confirmed.

Table 7
Topics attempted by groups and marks obtained (Hew et al., 2016, p. 226)
Experimental groups: Group 1 – hard mode, 9 marks; Group 2 – medium mode, 6 marks; Group 3 – hard mode, 5 marks; Group 4 – hard mode, 10 marks
Control groups: Group 1 – easy mode, 6 marks; Group 2 – easy mode, 9 marks; Group 3 – easy mode, 3 marks; Group 4 – hard mode, 5 marks

We also observed an interesting phenomenon regarding the students' choice of topics in the experimental and control groups. As noted earlier, students were given the choice of six topics. Topics 1 and 2 were easy mode and worth one point each, topics 3 and 4 were medium mode and worth two points each, and topics 5 and 6 were hard mode and worth three points each. We found that three teams in the experimental group chose hard mode topics, and one team opted to complete a medium mode topic for the questionnaire design activity; no team in the experimental group chose an easy topic. On the other hand, three teams in the control group chose easy mode topics and only one team opted to complete a hard mode topic.

Student engagement

In addition to comparing the test scores and the quality of the participants' artefacts between the two groups, we were also interested in the effects of using game elements such as badges on students' engagement with the course.
We used the number of posts that a student contributed to the online forum (i.e., forum posts) as a proxy for engagement. Table 8 summarises the number of messages posted in the course forum by students in the experimental and control groups, showing totals, means, standard deviations, and medians.

Table 8
Comparison of forum posts in the experimental and control groups
Experimental group (n = 11): total = 58, M = 5.27, SD = 1.348, median = 5.00
Control group (n = 11): total = 15, M = 1.36, SD = 1.206, median = 1.00

A Mann-Whitney U test found that students in the experimental condition posted significantly more messages in the online forums than the control group (Mann-Whitney U = 0, Z = -4.010, p < 0.001). This suggests that introducing a badge system can significantly increase forum participation. Consequently, hypothesis H3 was confirmed.

Student perceptions of the course and game mechanics

Figures 3 and 4 summarise the questionnaire results for the experimental and control groups respectively. All students (100%) in the experimental group (compared to 82% in the control group) agreed or strongly agreed that the course had equipped them with the knowledge to solve real-world problems related to conducting surveys. 81% of students in the experimental group, as opposed to only 45% in the control group, agreed or strongly agreed that they had many opportunities to exchange ideas with others. Interestingly, 91% of students in the experimental group agreed or strongly agreed that they wished to learn more about survey methods as a result of attending the course; in contrast, only 72% of those in the control group expressed an interest in learning more about survey methods. Overall, 100% of the students in the experimental group agreed or strongly agreed that they found the course motivating. On the other hand, only 54% of the participants in the control group found the course motivating.

Figure 3. Student perception towards the course (experimental group)

Figure 4. Student perception towards the course (control group)

Students in the experimental group were also asked during the interviews about their perceptions of the game mechanics used in the course. Essentially, the use of points, badges, and the leader board gave participants in the experimental group a target to shoot for. For example, a student explained:

I like the whole game elements in this lesson, so I downloaded and read the lecture slides when I read the introductive email. In the class activity, all of our group members unanimously agreed that choosing hard mode topic would be the best way to get more points. Also, all of us want to get the "reply warrior" badge so we keep replying to other students' post and find many mistakes in other people's questionnaire. We also revised our questionnaire to make it better. At last, we obtained the first place in leader board and I'm so excited when I knew it. (Student C; phone interview; 16 August 2014)

Another student remarked:

I was excited when I knew I could win one point for my group if I got two badges, so I downloaded the PPT slides long before class began. During class, I focused on what teacher was teaching so I have the confidence to choose the harder mode in class activity. If there was no point being rewarded, I didn't think I would download the lecture slides before class or listened so attentively to the teacher.
The badges and points changed my attitude and made me more motivated to learn the topics presented in this course. (Student A; phone interview; 16 August 2014)

Student A initially received 0 in his pre-test but subsequently attained a score of 14 in the post-test. He was also observed by the teacher to be highly engaged and active during the course. Although a majority of students were interested in gaining badges, two participants were not. These participants were usually the weaker students. For example, Student B said that he had no interest in gaining badges because he did not care about getting points or about the leader board. He scored below the class average in the post-test.

The participants in the experimental group were also asked in the questionnaire survey what other types of game elements they would like to see employed in future courses (see Table 9). Only two participants felt that the use of points, badges, and the leader board was sufficient. A majority of participants (n = 7) desired the use of role-play scenarios in future courses. Four participants reported that they would eventually wish to receive some tangible material reward after collecting the points or topping the leader board, while three participants would be satisfied with virtual gifts. Finally, two participants wished to have "levels of learning process". Levels of learning process refers to the educational application of the game element of levels, such as learning achievement. For example, students who have collected a certain number of points or completed a certain mission or activity would advance their learning progress bar. Once the progress reaches 100%, students would be able to unlock the next level of learning. The progress bar would motivate students to complete their learning as soon as possible to unlock the next, higher-level activity or content.

Table 9
Other types of game mechanics desired by students (students could give more than one answer)
• Levels – 2 participants
• Role play – 7 participants
• Virtual gifts – 3 participants
• Material rewards – 4 participants
• Badges, points, and leader board are enough – 2 participants

Instructor's perceptions and effort spent on using the game mechanics

In this study, the notion of meaningful gamification represented a new pedagogy of teaching and learning for the instructor. Perhaps the most challenging part was designing the course activities and content around the three basic components of self-determination theory – autonomy, competence, and relatedness. For example, the instructor had to think of the types of active learning activities that students could be motivated to complete, and had to decide how best to offer students the choice of the activities they wished to do (see Figure 2 for an overview of the course activities that were designed and used).

Designing the game mechanics (i.e., badges, points, and leader board) took the instructor 3 days. The instructor used an online badge design website to create attractive and specific badges. In the instructor's view, it was important to make both the badge name and picture interesting, as these would help motivate participants to collect them. It took a further 7 days for the instructor to work with a technician to set up the badge system in Moodle so that the learning management system could automatically award badges to participants when they met a specific badge's requirements.
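As an illustration of the automatic awarding that this Moodle setup achieved, here is a minimal sketch of the badge rules from Table 3, together with the two-badges-for-one-point rule. The record fields are hypothetical assumptions; Moodle's actual badge engine is configured through its administration interface rather than code like this.

```python
# Illustrative sketch of the automatic badge-awarding rules (hypothetical
# fields; Moodle's real badge engine is configured via its admin interface).

def award_badges(slides_before_class: bool, forum_replies: int, rank: int) -> set[str]:
    """Apply the three criteria from Table 3 to one student's activity record."""
    badges = set()
    if slides_before_class:
        badges.add("Early Bird")              # accessed the PPT slides before class
    if forum_replies >= 6:
        badges.add("Reply Warrior")           # posted 6 replies in the forum
    if rank == 1:
        badges.add("Questionnaire Ace Team")  # group topped the leader board
    return badges

def bonus_point(badges: set[str]) -> int:
    """Collecting two badges wins one point for the student's group."""
    return 1 if len(badges) >= 2 else 0

student_badges = award_badges(slides_before_class=True, forum_replies=7, rank=2)
print(sorted(student_badges), bonus_point(student_badges))
# ['Early Bird', 'Reply Warrior'] 1
```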
Despite the amount of work needed, the instructor found that the game mechanics stimulated students' interest in learning the topics and in participating in the course activities, compared with the students who did not have them.

Discussion

In this study, we investigated how the use of meaningful gamification affects student learning, engagement, and affective outcomes in a short, 3-day blended learning research methods class, using a combination of experimental and qualitative research methods. We found no significant difference in student post-test scores between the experimental and control groups. In this study, the post-test focused on the cognitive process of remembering – which might suggest a more surface approach to learning (Biggs, 1987). This indicates that participants, with or without the aid of game mechanics, learned factual information equally well, and suggests that the use of game mechanics may not encourage or increase the learning of factual knowledge.

On the other hand, we found the overall quality of the group artefacts produced by the participants in the experimental group (M = 7.50) to be higher than that of the control group (M = 5.75) (Table 7). This finding appears to support several other researchers' conclusion that game mechanics can help develop practical competencies more than factual knowledge (Dominguez et al., 2013; Ke, 2009). More interestingly, more participants in the experimental group opted to grapple with the more challenging group work activities than those in the control group. Specifically, three teams in the experimental group chose to do the hard mode questionnaire activities, with one team opting to complete a medium mode activity. In contrast, three teams in the control group chose to do easy mode activities and only one team opted to complete a hard mode activity (Table 7). We may tentatively conclude that the use of game mechanics has a positive effect in motivating students to engage with more difficult tasks in the course, as supported by the interview data (for example, see Student C's phone interview of 16 August 2014, reported in the Results section). The effort to engage with harder assignment tasks might suggest a deep approach to learning. However, at this point it might be too early to conclude whether the uptake of gamification encourages students to be surface or deep learners, owing to the small sample size and the short duration of the present study.

The results of this study clearly show that the use of game mechanics (points, badges, and leader board) generated a more positive student attitude towards the course. All students in the experimental group strongly agreed or agreed that they found the course motivating, whereas only about half the participants in the control group did. Game mechanics also produced greater student engagement in the discussion forum. Consequently, hypothesis H3 was confirmed: students in the experimental group posted significantly more forum messages than those in the control group. There are two plausible explanations for this phenomenon. First, the use of game mechanics gives participants explicit goals to aspire to (Kumar & Herger, 2013).
Previous research on goal setting suggests that when users are given a clear goal (e.g., individuals who participate in the discussion forum and post six replies will be awarded the "Reply Warrior" badge), their performance increases, as opposed to users who are not given an explicit goal (Jung, Schneider, & Valacich, 2010). Clear goals are one of the main elements of goal-setting theory (Locke, Shaw, Saari, & Latham, 1981). Having a clear goal increases a user's determination to reach it, thus increasing the amount of user action within a gamified learning activity (Salen & Zimmerman, 2004). Second, participants in the experimental group were always shown, in a visual display via the leader board, where their performance stood relative to other users. Participants in the control group, on the other hand, may have done their best but had no point of reference against which to judge their performance (Mekler, Bruhlmann, Opwis, & Tuch, 2013). Qualitative analysis of the participant interview data suggests that the leader board catered to the competitive nature of human beings, which prompted the participants to generate more discussion posts than their counterparts in the control group. As Bhattacharyya (2010, p. 572) wrote:

Competition is one of the most basic functions of nature. Competition occurs naturally between living organisms, which co-exist in the same environment. Those best able to compete within an environmental niche survive. Those least well adapted die out. Competition remains a powerful instinctual drive in human nature. Man for its survival competes with each other, sometime unable to win with others he competes with himself and even if necessary, he often competes with other groups.

Yet it is important to note that not all participants found it fun to compete with their classmates for more badges or for a higher rank on the leader board. In line with the findings of Heeter, Lee, Medler, and Magerko (2011), we conclude from our interview data that gamified activities may be more appealing to people who are super-achievers (individuals who desire both to develop task competence regardless of others and to compete with others) or performance-oriented (individuals who are interested in doing better than others). Gamified activities may not be appealing to non-achievers, that is, people who have little or no desire to master a task or compete with others.

Despite the apparent success of game mechanics such as points, badges, and the leader board in motivating users to engage with more difficult assignment activities and to contribute more discussion posts, only two participants felt that their use was sufficient. Most participants wished for the addition of role-play scenarios. Role-playing scenarios (e.g., story line, character history, roles) are a form of exploration-type activity which enables users to explore or find out things from different angles or perspectives (Bartle, 1996; Yee, 2006). Such exploration provides a sense of fun and curiosity. Previous research has suggested that non-achievers tend to enjoy exploration-type tasks (Heeter et al., 2011); thus, employing such activities in future courses may help make a gamified course more appealing to non-achievers. In addition, several participants reported that they would eventually wish to receive material rewards. This suggests that gamified activities that merely provide virtual incentives would not work in the long run.
This is in line with Zichermann's (2011) observation that certain people do not value points or badges highly over time, and eventually expect to redeem actual tangible objects after collecting points or badges or obtaining the top spot on the leader board. An instructor may, for example, allow students to convert their collected points into a certain percentage of their total course score.

Although the present study has provided a snapshot of the impact of meaningful gamification on student learning, engagement, and affective outcomes, the findings should be viewed with caution. One limitation of the present study was the short duration of the course. Another limitation was that we sampled interview participants by simply asking for volunteers; the use of convenience sampling can lead to the under-representation or over-representation of particular groups, and no volunteers from the control group were available. In addition, the small sample size of the present study limits the generalisation of the quantitative results.

Conclusion

Previous studies on gamification have primarily been restricted to learners in the USA. The present study extends the research to participants in an Asian country. Specifically, we investigated how the use of meaningful gamification affects student learning, engagement, and affective outcomes in a short, 3-day blended learning graduate research methods class, using a combination of experimental and qualitative research methods. We found that the use of game mechanics produced greater student engagement in the discussion forums, but had no significant impact on students' factual learning of the topic. Although we lack statistical evidence to support a significant increase in the student post-test scores, the quality of the group artefacts produced by the participants in the experimental group was overall higher than that of the control group. The use of game mechanics also had the positive effect of motivating students to engage with more challenging activities in the course. All students in the experimental group strongly agreed or agreed that they found the course motivating; only slightly more than half the participants in the control group found it so.

In future courses, we intend to extend both the duration of the study and the sample size. For example, we could investigate the effects of game mechanics over a longer period of time, preferably over 6 months, to see whether the novelty of points, badges, or leader boards wears off, and how this may affect students' motivation to participate in the course activities. Additional studies could also investigate the use of role-play scenarios and the redemption of material rewards, besides employing points, badges, and a leader board. We also plan to examine how gamification could affect students in other subject disciplines, or students in various age groups such as primary and secondary school learners. To date, the study of gamification in K-12 settings has been very scarce because most attention has focused on the higher education context.

In this study, we used a post-test that focused solely on remembering or recall questions, such as naming three advantages and three disadvantages of using questionnaires. Remembering is considered the lowest level cognitive process (Anderson & Krathwohl, 2001).
Further studies should employ a post-test that examines higher level cognitive processes such as analysing and evaluating (Anderson & Krathwohl, 2001). For example, students may be given several example questionnaires and asked to critique each one and make suggestions for improvement. A focus on higher level cognitive processes such as analysing and evaluating would encourage students to engage in a deep approach to learning.

Finally, in the present study we performed an experimental study because it can help draw conclusions about causality. We also used qualitative research methods, such as participant interviews, to provide richer data to help develop possible explanations for why an intervention might have an effect. However, there are other research designs that could also be employed to examine and evaluate gamification. One such design is design-based research (DBR) (Anderson & Shattuck, 2012). Design-based research allows one to iteratively adjust and improve a gamified course over a longer period of time while focusing on and advancing its theoretical underpinnings. This could potentially yield more generalisable practical design principles for using gamification, as opposed to a one-off experimental study.

References

Anderson, A., Huttenlocher, D., Kleinberg, J., & Leskovec, J. (2014). Engaging with massive online courses. In C. W. Chung et al. (Eds.), Proceedings of the 23rd International Conference on the World Wide Web (pp. 687–698). New York, NY: ACM Press. http://dx.doi.org/10.1145/2566486.2568042
Anderson, L., & Krathwohl, D. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York, NY: Longman.
Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25. http://dx.doi.org/10.3102/0013189x11428813
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. http://dx.doi.org/10.1037/0033-295x.84.2.191
Bartle, R. A. (1996). Hearts, clubs, diamonds, spades: Players who suit MUDs. Journal of MUD Research, 1(1). Retrieved from http://www.arcadetheory.org/wp-content/uploads/2014/03/1996bartle.pdf
Bhattacharyya, N. (2010). Individuality as a negative characteristic in students to carry out teamwork and the challenges for a soft skills trainer to groom management students – A critical review. Asian Journal of Management Research, 11(1), 566–577.
Biggs, J. B. (1987). Student approaches to learning and studying. Melbourne: Australian Council for Educational Research.
Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom (ASHE-ERIC Higher Education Report No. 1). Washington, DC: George Washington University.
Bunchball. (2010). Gamification 101: An introduction to the use of game dynamics to influence behavior. Retrieved from http://www.bunchball.com/gamification101
Cheong, C., Cheong, F., & Filippou, J. (2013). Quick quiz: A gamified approach for enhancing learning. In J.-N. Lee, J.-Y. Mao, & J. Tong (Eds.), Proceedings of the Asia Conference on Information Systems. Retrieved from http://aisel.aisnet.org/pacis2013/206
Coetzee, D., Fox, A., Hearst, M. A., & Hartmann, B. (2014). Should your MOOC forum use a reputation system? In S. Fussell & W. Lutters (Eds.), Proceedings of CSCW 2014 (pp. 1176–1187). New York, NY: ACM Press. http://dx.doi.org/10.1145/2531602.2531657
Coffrin, C., de Barba, P., Corrin, L., & Kennedy, G. (2014). Visualizing patterns of student engagement and performance in MOOCs. In M. Pistilli, J. Willis, D. Koch, & K. Arnold (Eds.), Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 83–92). New York, NY: ACM Press. http://dx.doi.org/10.1145/2567574.2567586
Deci, E., & Ryan, R. (2000). The 'what' and 'why' of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. http://dx.doi.org/10.1207/S15327965PLI1104_01
Deci, E., & Ryan, R. (2004). Handbook of self-determination research. Rochester, NY: University of Rochester Press.
Denny, P. (2013). The effect of virtual achievements on student engagement. In W. E. Mackay (Ed.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 763–772). New York, NY: ACM Press. http://dx.doi.org/10.1145/2470654.2470763
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining gamification. In A. Lugmayr (Ed.), Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9–15). New York, NY: ACM Press. http://dx.doi.org/10.1145/2181037.2181040
de Sousa Borges, S., Durelli, V. H. S., Reis, H. M., & Isotani, S. (2014). A systematic mapping on gamification applied to education. In Y. Cho & S. Y. Shin (Eds.), Proceedings of the 29th Annual ACM Symposium on Applied Computing (pp. 216–222). New York, NY: ACM Press. http://dx.doi.org/10.1145/2554850.2554956
Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: A systematic mapping study. Educational Technology & Society, 18(3), 1–14.
Dillenbourg, P. (1999). Introduction: What do you mean by 'collaborative learning'? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 1–19). Amsterdam: Pergamon Elsevier Science.
Domínguez, A., Saenz-de-Navarrete, J., de-Marcos, L., Fernández-Sanz, L., Pagés, C., & Martínez-Herráiz, J.-J. (2013). Gamifying learning experiences: Practical implications and outcomes. Computers & Education, 63, 380–392. http://dx.doi.org/10.1016/j.compedu.2012.12.020
Educause. (2011). 7 things you should know about gamification. Retrieved from https://net.educause.edu/ir/library/pdf/ELI7075.pdf
Furrer, C., & Skinner, E. (2003). Sense of relatedness as a factor in children's academic engagement and performance. Journal of Educational Psychology, 95(1), 148–162. http://dx.doi.org/10.1037/0022-0663.95.1.148
Hakulinen, L., Auvinen, T., & Korhonen, A. (2013). Empirical study on the effect of achievement badges in TRAKLA2 online learning environment. In A. Berglund & N. Thota (Eds.), Proceedings of the Learning and Teaching in Computing and Engineering (LaTiCE) Conference (pp. 47–54). Macau: IEEE Press. http://dx.doi.org/10.1109/LaTiCE.2013.34
Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? A literature review of empirical studies of gamification. In R. H. Sprague (Ed.), Proceedings of the 47th Hawaii International Conference on System Sciences (pp. 3025–3034). Piscataway, NJ: IEEE Press. http://dx.doi.org/10.1109/HICSS.2014.377
Hancock, D. R., & Flowers, C. P. (2001). Comparing social desirability responding on World Wide Web and paper-administered surveys. Educational Technology Research and Development, 49(1), 5–13. http://dx.doi.org/10.1007/BF02504503
Heeter, C., Lee, Y. H., Medler, B., & Magerko, B. (2011). Beyond player types. In T. L. Taylor (Ed.), Proceedings of the 2011 ACM SIGGRAPH Symposium on Video Games (pp. 43–46). New York, NY: ACM Press. http://dx.doi.org/10.1145/2037692.2037701
Hew, K. F., Huang, B., Chu, K. W. S., & Chiu, D. K. W. (2016). Engaging Asian students through game mechanics: Findings from two experiment studies. Computers & Education, 92–93, 221–236. http://dx.doi.org/10.1016/j.compedu.2015.10.010
Ibanez, M.-B., Di-Serio, A., & Delgado-Kloos, C. (2014). Gamification for engaging computer science students in learning activities: A case study. IEEE Transactions on Learning Technologies, 7(3), 291–301. http://dx.doi.org/10.1109/TLT.2014.2329293
Jung, J., Schneider, C., & Valacich, J. (2010). Enhancing the motivational affordance of information systems: The effects of real-time performance feedback and goal setting in group collaboration environments. Management Science, 56(4), 724–742. http://dx.doi.org/10.1287/mnsc.1090.1129
Ke, F. (2009). A qualitative meta-analysis of computer games as learning tools. In R. E. Ferdig (Ed.), Effective electronic gaming in education (pp. 1–32). Hershey, PA: Information Science Reference. http://dx.doi.org/10.4018/978-1-59904-808-6.ch001
Kumar, J. M., & Herger, M. (2013). Gamification at work: Designing engaging business software. Aarhus: The Interaction Design Foundation. http://dx.doi.org/10.1007/978-3-642-39241-2_58
Lee, J. J., & Hammer, J. (2011). Gamification in education: What, how, why bother? Academic Exchange Quarterly, 15(2), 146–151.
Li, W., Grossman, T., & Fitzmaurice, G. (2012). GamiCAD: A gamified tutorial system for first time AutoCAD users. In R. Miller (Ed.), Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (pp. 103–112). New York, NY: ACM Press. http://dx.doi.org/10.1145/2380116.2380131
Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance: 1969–1980. Psychological Bulletin, 90(1), 125–152.
Mekler, E. D., Bruhlmann, F., Opwis, K., & Tuch, A. N. (2013). Do points, levels and leaderboards harm intrinsic motivation? An empirical analysis of common gamification elements. In L. E. Nacke, K. Harrigan, & N. Randall (Eds.), Proceedings of the First International Conference on Gameful Design, Research, and Applications (pp. 66–73). New York, NY: ACM Press.
Meyers, C., & Jones, T. (1993). Promoting active learning: Strategies for the college classroom. San Francisco, CA: Jossey-Bass.
Nicholson, S. (2012). A user-centered theoretical framework for meaningful gamification. Paper presented at Games+Learning+Society 8.0, Madison, WI. Retrieved from http://scottnicholson.com/pubs/meaningfulframework.pdf
Punch, K. (2005). Introduction to social research. London: Sage.
Salen, K., & Zimmerman, E. (2004). Rules of play: Game design fundamentals. Cambridge, MA: MIT Press.
Skinner, B. F. (1957). The experimental analysis of behavior. American Scientist, 45(4), 343–371. http://dx.doi.org/10.1037/11324-008
Skinner, E., Furrer, C., Marchand, G., & Kindermann, T. (2008). Engagement and disaffection in the classroom: Part of a larger motivational dynamic? Journal of Educational Psychology, 100(4), 765–781. http://dx.doi.org/10.1037/a0012840
Sturges, J. E., & Hanrahan, K. J. (2004). Comparing telephone and face-to-face qualitative interviewing: A research note. Qualitative Research, 4(1), 107–118. http://dx.doi.org/10.1177/1468794104041110
Suls, J. E., & Wheeler, L. (2012). Social comparison theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (Vol. 1, pp. 460–482). Thousand Oaks, CA: Sage. http://dx.doi.org/10.4135/9781446249215.n23
Weinmann, T., Thomas, S., Brilmayer, S., Heinrich, S., & Radon, K. (2012). Testing Skype as an interview method in epidemiologic research: Response and feasibility. International Journal of Public Health, 57(6), 959–961. http://dx.doi.org/10.1007/s00038-012-0404-7
Yee, N. (2006). Motivations of play in online games. CyberPsychology & Behavior, 9(6), 772–775. http://dx.doi.org/10.1089/cpb.2006.9.772
Zichermann, G. (2011). Intrinsic and extrinsic motivation in gamification. Retrieved from http://www.gamification.co/2011/10/27/intrinsic-and-extrinsic-motivation-in-gamification/
Zichermann, G., & Cunningham, C. (2011). Gamification by design: Implementing game mechanics in web and mobile apps. Sebastopol, CA: O'Reilly Media.

Corresponding author: Khe Foon Hew, kfhew@hku.hk

Australasian Journal of Educational Technology © 2016.

Please cite as: Tan, M., & Hew, K. F. (2016). Incorporating meaningful gamification in a blended learning research methods class: Examining student learning, engagement, and affective outcomes. Australasian Journal of Educational Technology, 32(5), 19–34. http://dx.doi.org/10.14742/ajet.2232