Australasian Journal of Educational Technology, 2019, 35(1). 163

User acceptance model of computer-based assessment: Moderating effect of self-regulation

Jian-Wei Lin, Yung-Cheng Lai
Chien Hsin University of Science and Technology, Taiwan

Computer-based assessment (CBA) is an important area of e-learning research. Most studies of CBA technology add new constructs to existing user acceptance models and rarely consider moderating effects. However, self-regulation (SR) levels (i.e., high or low) in an e-learning environment substantially affect individual learning behaviour and performance. This study investigates the moderating effects of SR levels on the relationships between factors in the CBA acceptance model, which was based on the unified theory of acceptance and use of technology (UTAUT). The main findings are that, firstly, both perceived performance expectancy and social influence significantly affect CBA behavioural intention in all students, regardless of SR level. Secondly, the effect of effort expectancy on CBA behavioural intention is significantly larger in low-SR students than in high-SR students. Finally, behavioural intention significantly predicts CBA use behaviour in high-SR students but not in low-SR students.

Theoretical background

User acceptance of computer-based assessment (CBA)

Computer-based assessment (CBA) and web-based assessment (WBA) are critical issues in e-learning and have recently attracted the attention of researchers because of their many advantages for academics and practitioners. These advantages include flexibility in time and place, usability of a greater variety of media and test types, immediate scoring and feedback, low staff requirements, automatic record keeping for item analysis, and improved learning performance (Deutsch, Herrmann, Frese, & Sandholzer, 2012; Terzis & Economides, 2011a). Generally, CBA can be classified as formative or summative (Deutsch et al., 2012; Lin & Lai, 2013a).
Formative CBA methods, which are continuously embedded in the teaching and learning processes of a curriculum, are designed to improve learning achievement. The objective of formative CBA is not to evaluate students, but to provide feedback that helps them identify their strengths and weaknesses. In contrast, the objective of summative CBA, for example, midterm and final examinations, is to check learning achievements at the end of the curriculum (Lin & Lai, 2013a). Since CBA has many advantages and plays an important role in e-learning, many studies have investigated CBA technology acceptance (Acosta-Gonzaga & Walet, 2018; Cigdem & Oncu, 2015; Deutsch et al., 2012; Maqableh, Masa’deh, & Mohammed, 2015; Nikou & Economides, 2017; Terzis & Economides, 2011a, 2011b; Terzis, Moridis, & Economides, 2012a, 2012b; Terzis, Moridis, Economides, & Mendez, 2013). Most studies of CBA technology acceptance use the technology acceptance model (TAM) or the unified theory of acceptance and use of technology (UTAUT), both of which were initially developed for business settings to investigate determinants of information technology acceptance. According to de Oca and Nistor (2014), studies extending these acceptance models (e.g., TAM and UTAUT) to the educational context should consider complex educational settings. Nistor (2014) further claimed that, when extending technology acceptance models from the business context to the education context, the acceptance constructs should reflect the specific educational setting (e.g., higher education or lifelong learning; in schools, universities, and organisations) as well as both individual factors (e.g., personality or self-efficacy) and social factors (e.g., community, social capital) that are relevant to the learning process.
Accordingly, many TAM/UTAUT-based studies have added new constructs to investigate CBA user acceptance, including different countries (Terzis, Moridis, Economides, & Mendez, 2013), personalities (Terzis et al., 2012b), self-efficacy (Maqableh et al., 2015; Nikou & Economides, 2017; Terzis & Economides, 2011a), and social influence (Acosta-Gonzaga & Walet, 2018; Maqableh et al., 2015; Nikou & Economides, 2017; Terzis & Economides, 2011a). For example, Acosta-Gonzaga and Walet (2018) extended TAM by including measures of computer self-efficacy and social influence to investigate user acceptance of mathematical CBA. Terzis, Moridis, Economides, and Mendez (2013) built on TAM and UTAUT and considered different countries (i.e., Greece and Mexico), while Terzis et al. (2012b) added personalities as new constructs. Other constructs that have been added to the CBA user acceptance model include perceived playfulness (Maqableh et al., 2015; Terzis & Economides, 2011a), goal expectancy (Maqableh et al., 2015; Terzis & Economides, 2011a; Terzis et al., 2012b), and question content (Cigdem & Oncu, 2015; Maqableh et al., 2015; Nikou & Economides, 2017; Terzis et al., 2012a).

Moderating effects of self-regulation (SR) levels on CBA’s user acceptance

Although CBA technology acceptance has been discussed extensively by adding new constructs, the moderating effects on CBA have rarely been considered. For greater explanatory power, these moderating effects must be examined to clearly understand the factors that influence CBA user acceptance. To the best of our knowledge, only two studies, Deutsch et al. (2012) and Terzis and Economides (2011b), have investigated the moderating effect of gender on CBA user acceptance.
The likely reason is that investigating moderating effects requires detailed studies, possibly via experimental designs, which are costly and time consuming (Benbasat & Barki, 2007). Self-regulation (SR) is the extent to which learners are meta-cognitively, motivationally, and behaviourally active in their own learning process (Zimmerman, 2000). The self-regulation construct represents the degree to which students can regulate aspects of their thinking, motivation, and behaviour during learning (Pintrich & Zusho, 2007). A highly self-regulated (high self-regulatory skill, or high-SR herein) learner plays an active role in learning: setting task-oriented and appropriate goals, taking responsibility for and persisting in their own learning, monitoring their own learning, and maintaining their own learning motivation (Wang, 2011; Zimmerman, 1998). Several studies have investigated how different SR levels (high or low) affect learning behaviour and performance in an e-learning environment. For example, Wang (2011) compared the effects of two different e-learning systems (i.e., one with peer collaboration support and one without it) on individuals with different SR levels. Lin, Huang, and Chuang (2015) found that SR moderates the learning behaviour and efficiency of students with varying network centrality (i.e., students with high network centrality have many close friends within a community) in a computer-supported collaborative learning (CSCL) environment. Lin, Szu, and Lai (2016) also found that the learning behaviour of students in different CSCL systems (i.e., a system that provides the context of peer interaction back to a student versus a system without such support) depends on their SR levels. In sum, although students with different SR levels differ in their learning motivation, behaviour, and performance in e-learning environments, few studies have investigated how SR levels affect e-learning (CBA) user acceptance.
Additionally, few studies of CBA user acceptance have considered the construct of actual use behaviour: user acceptance is usually investigated after the subjects have used CBA systems only once (Deutsch et al., 2012; Terzis & Economides, 2011a; Terzis et al., 2012a), which makes it impossible to measure actual CBA usage. For example, Deutsch et al. (2012) examined attitudinal changes towards CBA after the students underwent CBA for the first time. A sufficiently long period of immersion in an e-learning environment with technological interventions is needed to explore the long-term trajectory of learning behaviour (Lin & Tsai, 2016). Benbasat and Barki (2007) recommended that longitudinal studies which view and assess system use over time are likely to be particularly revealing, as they improve understanding of the fluid relationships between an adoption model’s constructs and the various mutually influential behaviours typically exhibited by users, for example, their adaptation and hands-on usage behaviours. A formative assessment is often performed after students complete a chapter (unit), and several formative CBAs are often performed over the course of a semester. Deutsch et al. (2012) advocated that future research should collect longitudinal data to investigate how multiple CBA experiences affect student attitudes. Restated, many studies of CBA technology acceptance analysed subjects after they had experienced CBA only once and did not consider actual CBA use behaviour.

Research aims

As described above, our study addresses the moderating effects of self-regulation on CBA technology acceptance. Notably, moderating variables can impact the entire acceptance model (Šumak, Heričko, & Pušnik, 2011). This empirical study investigated the moderating effect of self-regulation on the relationships between the UTAUT constructs to examine the technology acceptance of formative CBA.
A 1-month experiment was designed so that subjects could experience the CBA for a sufficiently long period to allow us to measure actual CBA use behaviour. The objective was to identify trajectories of CBA behaviour so that effective measures to increase continued use of CBA could be adopted.

Research model and hypotheses

Khechine, Lakhal, Pascot, and Bytha (2014) argued that, in the context of e-learning, the UTAUT model is superior to previous models (e.g., TAM) in terms of explaining variance in the intention to use technologies. Meanwhile, several studies have asserted that many crucial UTAUT factors, including social influence, should be considered in educational technology acceptance (e.g., Nistor, 2014; Terzis & Economides, 2011a). Thus, this study extended UTAUT to the analysis of CBA technology use. Figure 1 shows the proposed model, which was based on the UTAUT (Venkatesh, Morris, Davis, & Davis, 2003). Performance expectancy (PE) is defined as the degree to which students believe that the system can help them improve their academic performance. Effort expectancy (EE) is defined as the ease of using a system. Social influence (SI) is defined as the degree to which an individual perceives that important others believe he or she should use the new system (i.e., individuals behave according to how they believe they will be viewed as a result of having used the technology). Behavioural intention (BI) is defined as the factors that motivate an individual to perform a behaviour. Notably, this study did not investigate the original relationships in UTAUT (dotted and black arrows in Figure 1: PE to BI, EE to BI, SI to BI, etc.) because many CBA studies agree that these common relationships are significant.
For example, many studies agree that perceived ease of use (PEOU), the counterpart of EE, is among the most important determinants of CBA acceptance (Terzis & Economides, 2011a; Terzis et al., 2012a). Regarding the facilitating condition (FC), the original UTAUT also shows that FC impacts use behaviour (UB). However, most studies of CBA technology acceptance did not consider UB and have instead focused on the impact of FC on PEOU (Maqableh et al., 2015; Nikou & Economides, 2017; Terzis & Economides, 2011a, 2011b; Terzis, Moridis, & Economides, 2013; Terzis, Moridis, Economides, & Mendez, 2013). Thus, this study did not consider FC; it only considered the effect of BI on UB. Additionally, our study did not consider the moderating effects of the UTAUT variables age and experience, because the experiment was performed at a Taiwanese university, in which most participants (students) were of a similar age and had similar internet experience. Another UTAUT moderator, voluntariness of use, was also excluded since all subjects were free to use the CBA system. Therefore, the original UTAUT moderators were excluded. Restated, this study focuses on the moderating effects of self-regulation on CBA technology acceptance (bold and blue arrows in Figure 1). Each of these moderating effects is discussed in the following subsections.

Figure 1. The proposed UTAUT-based research model

The moderating effect of self-regulation

Compared with low-SR students, high-SR students are intrinsically better at applying motivational strategies and performing learning behaviours (Pintrich, 2004). High-SR learners are capable of setting task-oriented and appropriate goals and taking responsibility for their own learning; moreover, high-SR students are highly independent and can systematically apply metacognitive, motivational, and behavioural learning strategies (Wang, 2011).
Zimmerman (1998) further reported that, compared to learners with low self-regulatory skills, those with high self-regulatory skills have more intrinsic motivation, more interest in the subject, and a greater ability to adapt to contextual cues. Thus, this study presumes that SR levels moderate the impacts of the UTAUT constructs (e.g., PE, EE) on BI. For example, Pintrich and Schunk (2002) stated that learners with high self-regulation usually have high self-efficacy. Self-efficacy refers to beliefs about one’s capabilities to implement the actions necessary to acquire the skill needed to perform a specific task (Zimmerman, 2000). Thus, when students use CBA, those with different SR levels (high or low) may have different effort expectancy. Students with low SR may have low confidence in their ability to operate a CBA, which can hinder their intention to use CBA. Olasehinde and Olatoye (2014) also found that social (peer) influence has a significant positive association with self-regulation. Lin et al. (2016) further found that, in an online project-based learning environment, the effects and duration of peer influence are larger in high-SR students than in low-SR students. This implies that the effect of social (peer) influence on CBA behavioural intention may differ between high-SR and low-SR students. Based on the above deduction, the first three research questions are the following:

RQ1. Do SR levels (high and low) moderate the effect of performance expectancy on CBA behavioural intention?
RQ2. Do SR levels moderate the effect of effort expectancy on behavioural intention to use CBA?
RQ3. Do SR levels moderate the effect of social influence on behavioural intention to use CBA?

As stated, most studies of CBA user acceptance have not considered UB.
Nistor (2014) and de Oca and Nistor (2014) further claimed that not all studies of educational technology acceptance have successfully demonstrated that BI to use a technology predicts actual usage (e.g., some yield weak or non-significant effects). A possible reason is the moderating effect of self-regulation. Specifically, skilful self-regulating learners are behaviourally active in their own learning process (Zimmerman, 2000). A self-regulated learner plays an active role in learning, takes responsibility for learning, and persists in learning (Wang, 2011). Since online learning (herein, CBA) is autonomous and requires high discipline, high-SR students often outperform low-SR students, who have low persistence and determination (Lin & Tsai, 2016). In contrast, low-SR students rarely apply learning strategies and are less likely to activate constructive learning behaviours (Finkel & Campbell, 2001). Hence, the effect of BI to use CBA on CBA UB may differ between high-SR and low-SR students. Thus, the fourth research question is:

RQ4. Does SR level moderate the effect of behavioural intention to use CBA on CBA use behaviour?

Methodology

The CBA system used

Figure 2 shows an example screenshot of one question of an online assessment in Chinese. An assessment includes single-choice and multiple-choice questions. The CBA system gives students several options: annotating a difficult question, pushing the “Next” button to advance to the next question, pushing the “Previous” button to return to the previous question, or selecting specific questions by entering the item number at the top of every screen. Before submitting their assessments, students can click on the “Review all answers” button to review the status of all items (i.e., answered, unanswered, annotated). After submission, the students can immediately review their results to discover whether their answers were correct or not, and to view the correct answers.
The system also records the assessment results, enabling students to log in and review their historical assessment results.

Figure 2. Screenshot of one question of an assessment in the CBA (showing the question content, navigation to the next or previous question, a jump to a specific question, review of all answers, annotation of the question, and assessment submission)

Participants

The experiment was administered to 186 third-year undergraduates at a Taiwanese university. The students had an average age of 20 years. Before the experiment, students in all classes were informed that their classes would be provided with some instructional methods as an intervention. The students were also informed that some data would be anonymously collected and analysed, and that they were free to drop the class section and take another class section taught by a different teacher.

Measurement

The 20-item self-regulation questionnaire used in this study was based on the questionnaire of Lin et al. (2015). Respondents answered each item using a 5-point Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree). The questionnaire contained five sub-scales: self-monitoring (7 items), deep strategy use (4 items), shallow processing (4 items), persistence (2 items), and environmental structuring (3 items). Cronbach’s alpha for the overall questionnaire was 0.81, and Cronbach’s alpha for each sub-scale was: self-monitoring (0.84), deep strategy use (0.80), shallow processing (0.92), persistence (0.65), and environmental structuring (0.79). Appendix A shows the constructs (PE, EE, SI, BI) with their corresponding questionnaire items, which were operationalised by modifying previously validated scales. The items were measured on a 5-point Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree). Two field specialists were consulted for the item pool.
In accordance with their comments, some changes were made, but no items were removed from the pool. The scale was revised to improve the clarity and understandability of the items. This study added two items for measuring UB: the cumulative number of assessments conducted by a student, and the cumulative number of system logins by a student, since the experiment began. Since these two items could be retrieved from the database log, their data were considered more reliable than self-reported data. According to Pituch and Lee (2006), each construct should be measured by at least two items; moreover, the items for a construct that has only two items should be similar to the items used by other researchers to measure the same construct. All constructs had three items, except UB, which was measured with the two items used in Deng, Liu, and Qi (2011). The final scale for the above five constructs (i.e., PE, EE, SI, BI, UB) was validated by the SEM measurement model as described below. Overall, the measurement model had an acceptable fit and acceptable psychometric properties, and each item within a construct had a strong association with its respective factor (i.e., factor loading).

Experiment procedure

The four lessons in the electronic commerce course used in this experiment were: Basic Electronic Commerce, The Strategy and Plan of the Electronic Commerce, Electronic Commerce Application, and E-Logistics and E-payment. To ensure that the question banks were sufficiently large, two teachers jointly collected and edited the question banks and generated 35, 30, 33, and 20 questions for lessons 1 to 4, respectively. The question content was mainly derived from teaching materials (e.g., textbook and handouts). Before starting the experiment, an instructor demonstrated how to use the CBA and answered questions raised by the students.
All students then registered in the system and completed the self-regulation questionnaire. Students who scored higher than average on the questionnaire were classified as high-SR students, and the rest were classified as low-SR students (Lin et al., 2015). A 1-week period was allotted to learning each lesson. During the experiment, all classes followed the same teaching and assessment schedules. Upon completion of face-to-face teaching for the lesson at the beginning of each week, all students in all classes were free to take the corresponding online assessment once at any time during that week. This procedure enabled the researchers to observe the learning behaviours of all classes across the four opportunities (lessons 1–4). The experimental course took 3 hours per week and ran for 4 weeks (one lesson per week). At the end of the experiment, students completed a summative assessment (i.e., the midterm) and then completed the questionnaire survey (Appendix A). To encourage participation in the survey, the participants were informed that their assessment scores and their questionnaire responses would not affect their final course grades. The participants were also offered bookstore gifts (e.g., cash coupons). The number of valid questionnaires retrieved was 180. The valid response rate was as high as those reported in Terzis and Economides (2011a) and Terzis et al. (2012a). Our high valid response rate might have resulted from the following method of questionnaire collection. The midterm score accounted for 40% of the final course grade, which heavily determined whether students passed the course, so almost all students took the midterm. To facilitate questionnaire collection and enhance the response rate, the instructor personally handed the paper questionnaires to every student in class directly after the midterm, for completion and return on the spot.
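The mean-split grouping described above can be sketched as follows. This is a minimal illustration, not the study's actual code; the student identifiers and scores are invented.

```python
# Hypothetical sketch of the mean split used to form the SR groups:
# students scoring above the sample average on the self-regulation
# questionnaire are labelled high-SR; the rest are labelled low-SR.

def classify_sr(sr_scores):
    """Label each student 'high' or 'low' relative to the mean SR score."""
    mean_score = sum(sr_scores.values()) / len(sr_scores)
    return {
        student: "high" if score > mean_score else "low"
        for student, score in sr_scores.items()
    }

# Invented example: four students, whose mean score is 3.5
groups = classify_sr({"s01": 4.2, "s02": 3.1, "s03": 3.9, "s04": 2.8})
# → {'s01': 'high', 's02': 'low', 's03': 'high', 's04': 'low'}
```

Note that a mean split (rather than a median split) can produce groups of unequal size, as in the study's 89/91 division.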
At the end of the experiment, six high-SR students and six low-SR students were randomly selected for interviews. To elicit their feelings about CBA intention and usage, the following questions were asked: (1) How do these constructs (e.g., PE) affect your CBA behavioural intention? and (2) Why do you (or other peers) state an intention to use the CBA but not follow through with it (de Oca & Nistor, 2014)? The purpose of these questions was to determine the cause of any moderating effect.

Data analysis

The AMOS 20 software was used to perform structural equation modelling (SEM). After confirmatory factor analysis (CFA) was performed to validate the measurement model, the structural model was used for hypothesis testing. The two criteria used to assess the structural model and hypotheses were: (1) the variance explained (R²) by the antecedent constructs; and (2) the significance of the path coefficients (beta values) (Terzis & Economides, 2011a).

Results

Measurement model

The measurement model, which included five latent variables (PE, EE, SI, BI, and UB), was validated by confirmatory factor analysis. The overall measurement model is shown in Table 1. All of the model-fit indices exceeded the minimum values suggested in the literature (Pituch & Lee, 2006).

Table 1
Goodness-of-fit measures of the research model

Goodness-of-fit measure                          Recommended value  Entire sample
χ²/df                                            <= 3.00            2.93
Normed fit index (NFI)                           >= 0.90            0.91
Non-normed fit index (NNFI)                      >= 0.90            0.93
Comparative fit index (CFI)                      >= 0.90            0.92
Root mean square error of approximation (RMSEA)  <= 0.10            0.07

Next, the reliability, convergent validity, and discriminant validity of the measurement model were examined. The reliability of the constructs was measured by composite reliability (CR) and by Cronbach’s alpha. Table 2 shows that all constructs exceeded the acceptable criterion of 0.80, which indicated acceptable reliability.
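The reliability and validity indices used in this kind of analysis follow standard formulas: Cronbach's alpha is computed from raw item scores, while CR and AVE are computed from standardised factor loadings. A minimal sketch with invented loadings and responses (not the study's data):

```python
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardised loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item (same respondents,
    # same order, in each list)
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Invented three-item construct:
loadings = [0.85, 0.80, 0.90]
print(round(composite_reliability(loadings), 2))             # 0.89
print(round(average_variance_extracted(loadings), 2))        # 0.72
print(round(average_variance_extracted(loadings) ** 0.5, 2)) # 0.85
```

The last value, the square root of the AVE, is the quantity compared against inter-construct correlations in the Fornell–Larcker discriminant validity test used below.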
Convergent validity was assessed in terms of unidimensionality and average variance extracted (AVE). Unidimensionality was assessed in terms of the factor loading and t-value of each item. Table 2 shows that the factor loadings of the items in the five-factor model ranged from 0.70 to 0.92, and their t-values were significant at the level of p < 0.05. All factor loading values exceeded the recommended threshold of 0.50. All AVEs ranged from 0.93 to 0.97, which also exceeded the recommended threshold of 0.50.

Table 2
Descriptive statistics, average variance extracted (AVE), composite reliability (CR) and factor loading of construct measurement

Variables                M     SD    Factor loading  t-value  AVE   CR    alpha
Performance expectancy   4.27  0.71                           0.97  0.99  0.87
  PE1                    4.26  0.82  0.91            14.60*
  PE2                    4.28  0.76  0.77            11.72*
  PE3                    4.28  0.81  0.80            12.18*
Effort expectancy        4.30  0.73                           0.97  0.99  0.91
  EE1                    4.27  0.81  0.89            14.67*
  EE2                    4.32  0.77  0.85            13.62*
  EE3                    4.33  0.79  0.89            14.53*
Social influence         4.00  0.81                           0.97  0.99  0.83
  SI1                    3.71  1.07  0.70             9.57*
  SI2                    4.14  0.89  0.82            12.71*
  SI3                    4.16  0.85  0.92            14.72*
Behavioural intention    4.08  0.80                           0.96  0.98  0.91
  BI1                    4.09  0.86  0.88            12.06*
  BI2                    4.13  0.84  0.78            10.93*
  BI3                    4.04  0.89  0.85            11.45*
Use behaviour            3.51  1.05                           0.93  0.96  0.81
  UB1                    3.24  1.02  0.78             3.50*
  UB2                    3.78  1.27  0.90             3.03*
Notes. a 1 = strongly disagree and 5 = strongly agree; * p < .05

Discriminant validity was tested by comparing the square root of the AVE of each construct with its correlation coefficients with the other constructs. Table 3 shows the comparison results. For all constructs, the square roots of the AVEs exceeded the correlation coefficients with the other constructs, which indicated good discriminant validity. In summary, all constructs in the measurement model had adequate reliability, convergent validity, and discriminant validity.

Table 3
Square root of average variance extracted (AVE) and correlations of all constructs

         1       2       3       4       5
1. PE    0.98
2. EE    0.74*   0.98
3. SI    0.62*   0.58*   0.98
4. BI    0.68*   0.60*   0.69*   0.98
5. UB   -0.09   -0.00   -0.13   -0.09   0.96

Structural model

Before the structural model was tested, the effects of self-regulation on PE, EE, SI, BI, and UB were examined by t-test. Table 4 shows the mean scores (M), standard deviations (SD), and t values. The t-test results showed that only EE and UB were significantly higher in high-SR students than in low-SR students.

Table 4
Descriptive statistics and t-test results

Construct  SR    n   M     SD    t
PE         high  89  4.32  0.65  1.02
           low   91  4.21  0.76
EE         high  89  4.43  0.70  2.09*
           low   91  4.20  0.73
SI         high  89  4.02  0.80  0.33
           low   91  3.98  0.84
BI         high  89  4.09  0.82  0.21
           low   91  4.07  0.77
UB         high  89  4.03  1.12  4.05*
           low   91  3.15  1.12
* p < 0.05

The structural model was tested with the entire data sample (i.e., students with high and low self-regulation pooled together) and with each of the subsamples (i.e., high self-regulation taken separately and low self-regulation taken separately). Table 5 presents the properties of the causal paths, including standardised path coefficients, significant differences, and the variance explained for behavioural intention to use CBA. Figure 3 further shows the structural equation model diagrams for the high-SR students (upper part) and low-SR students (lower part).

Table 5
Self-regulation difference in relationships of PE-BI, EE-BI, SI-BI, and BI-UB

         Entire sample    High-SR students   Low-SR students   Difference between
         R²     beta      R²     beta        R²     beta       high-SR and low-SR students
UB       0.03             0.11               0.19
BI       0.55             0.54               0.61
PE-BI           0.49*            0.49*              0.34*      ns
EE-BI           0.09             -0.05              0.35*      *
SI-BI           0.54*            0.51*              0.63*      ns
BI-UB           0.15             0.36*              -0.31      *
* p < 0.05; ns - not significant

Figure 3.
Results of structural equation model testing for the high-SR students (top) and low-SR students (bottom)

Hypothesised differences between high-SR students and low-SR students were tested by statistical comparison of the corresponding path coefficients in both structural models. This statistical comparison was performed using the procedure suggested by Chin (2000) for multi-group analysis (Padilla-Meléndez, Del Aguila-Obra, & Garrido-Moreno, 2013). According to this procedure, a t statistic was calculated with the following equation, which follows a t-distribution with m + n − 2 degrees of freedom:

t = (B_LSR − B_HSR) / √(SE_LSR² + SE_HSR²)

B_LSR and B_HSR represent the path values for the two groups (i.e., low-SR students and high-SR students), respectively, while SE_LSR and SE_HSR are the standard errors of the paths for these groups, respectively. Finally, m and n are the sample sizes of the two groups, respectively.

The multi-group analysis revealed significant differences in two paths of the proposed model. Firstly, the path coefficient from EE to BI was significantly higher in the structural model for low-SR students than in the structural model for high-SR students. EE significantly predicted BI for low-SR students but not for high-SR students. Secondly, the path coefficient from BI to use behaviour was significantly stronger in the structural model for high-SR students than in the structural model for low-SR students. BI was a significant predictor of UB in high-SR students but not in low-SR students. However, in the remaining paths of the model, no significant differences were found between students with low SR and students with high SR.

Discussion

Regarding RQ1, the means for both PE and BI were similar in high-SR and low-SR students (Table 4). Both groups (high-SR and low-SR) agreed that the CBA improved their academic performance, and the two groups had a similar BI to use the CBA.
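The multi-group path comparison described above is straightforward to compute once the per-group coefficients and standard errors are known. A minimal sketch: the EE-to-BI betas are taken from Table 5, but the standard errors shown are hypothetical, since the paper does not report them.

```python
import math

def path_difference_t(b_low, se_low, m, b_high, se_high, n):
    """t statistic (Chin, 2000, simple form) for the difference between
    two groups' standardised path coefficients, with m + n - 2 degrees
    of freedom."""
    t = (b_low - b_high) / math.sqrt(se_low ** 2 + se_high ** 2)
    return t, m + n - 2

# EE -> BI path: betas 0.35 (low-SR, n = 91) and -0.05 (high-SR, n = 89),
# with hypothetical standard errors of 0.10 and 0.11.
t, df = path_difference_t(0.35, 0.10, 91, -0.05, 0.11, 89)
# → t ≈ 2.69 with 178 degrees of freedom
```

The resulting t value would then be compared against the critical value of the t-distribution with 178 degrees of freedom to judge whether the two groups' path coefficients differ significantly.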
Both high-SR and low-SR interviewees agreed that the desire to get good grades drove their intentions. This result is in line with previous literature showing that PE (perceived usefulness) positively influences BI for all students (Cigdem & Oncu, 2015; Nikou & Economides, 2017). All students, regardless of their SR level, perceived that the formative CBAs were useful for improving learning performance (Table 4), which is consistent with Deutsch et al. (2012) and Lin and Lai (2013b).

Regarding RQ2, high-SR students had a significantly higher mean value for EE than low-SR students, while the mean values for BI did not significantly differ between high-SR and low-SR students (Table 4). The effect of perceived EE on BI to use CBA was significantly larger in low-SR students than in high-SR students (Table 5). EE significantly affected BI to use CBA in low-SR students, but not in high-SR students. According to Zimmerman (1998, 2000), learners with high SR also have high task-specific self-efficacy. Compared to low-SR students, high-SR students are generally more confident about using technology and are more proactive in seeking all available resources to address technology use (Lin et al., 2015). Thus, EE (ease of use) is apparently less of a hurdle for high-SR students than for low-SR students. These results provide the design-oriented advice (Benbasat & Barki, 2007) that providing an easy-to-use user interface particularly increases behavioural intention to use CBA for low-SR students.

Regarding RQ3, the mean value for social influence in high-SR students approximated that in low-SR students (Table 4). The effect of perceived SI on BI to use CBA did not significantly differ between high- and low-SR students (Table 5).
This result is in line with Nikou and Economides (2017), who found that SI positively affects the BI of all students, regardless of SR level. Both low-SR and high-SR interviewees generally agreed that "seeing or knowing peers conducting assessment will influence my intention to use the CBA."

Regarding RQ4, the two groups (high-SR and low-SR students) had similar BI to use CBA, but the mean score for CBA use behaviour observed in high-SR students was significantly higher than that observed in low-SR students (Table 4). Notably, the effect of BI on CBA UB significantly differed between high-SR students and low-SR students (Table 5). Regarding interviewee opinions about actual CBA use, one low-SR interviewee stated, "My roommates or classmates asked me to go out with them and then I usually went, instead of staying in class. Thus, I took fewer assessments." Another stated:

I tend to prepare until the week of an exam. At that time, I asked my classmate, who is also my friend, to print out his results of historical assessments so that I could review them on paper and concentrate on memorising question terms and answers. Thus, I took fewer assessments.

In contrast, one high-SR interviewee reported:

Conducting the assessments is boring, and thus I often performed the assessments with close classmates to work and discuss together. After completing an assessment, we go out for a yummy meal as a reward for our studying.

Some students, particularly those with high SR, may attempt to increase their extrinsic motivation to perform an academic task by promising themselves extrinsic rewards upon completing the task (Pintrich, 2004). High-SR students are better at incorporating friends into their study routines, which helps to regulate their studying and keep them focused on the task (Pintrich & Zusho, 2007). One high-SR interviewee said:

I can understand my learning status by taking assessments and make appropriate adjustments next time if my score for this assessment is low.
Additionally, I can review my historical assessment results before the exam.

Another high-SR interviewee reported, "I regularly took the assessments and addressed what I did not understand, and went back over the material or asked peers for answers. I do not like to do the work at the last minute." High-SR students have the self-control needed to focus on the task and optimise their effort (Zimmerman, 2000). Their good time (effort) management skills (Pintrich, 2004) improve their learning efficiency by reducing a task to its essential parts and preparing each part carefully (Zimmerman, 2000). The literature above agrees that high-SR students are markedly skilful at regulating their motivation (e.g., rewarding oneself), cognition (e.g., understanding learning status and comprehending questions), and behaviour (e.g., time management), all of which are conducive to completing the formative CBA tasks (i.e., greater engagement in CBA) even when the tasks might be boring. These regulating effects explain why a large body of empirical evidence shows that learners with high self-regulation are effective learners: they are more persistent and higher achievers (Nicol & Macfarlane-Dick, 2006; Pintrich, 2004).

Implications and conclusions

This study used the UTAUT model to investigate the moderating effects of self-regulation on CBA technology use. The results improve understanding of how self-regulation affects the relationships between the constructs (e.g., PE) and BI, and between BI and UB. This study makes three main contributions. First, this study revealed that both PE and SI have significant positive impacts on BI to use CBA in all students, regardless of SR level (high or low). Second, while perceived EE was significantly higher in high-SR students than in low-SR students, the effect of perceived EE on BI to use CBA was larger in low-SR students than in high-SR students.
Third, CBA use behaviour was significantly higher in high-SR students than in low-SR students; specifically, BI significantly predicted the CBA UB of high-SR students but not that of low-SR students. These findings confirm that individual traits can partially moderate the relationships between the factors within CBA technology adoption.

Our findings have four implications for the practical application of CBA and for future CBA research. Firstly, an effective CBA must contain high-quality questions (understandable and related to the course content). To facilitate understanding of the course material, teachers can disseminate question content using images, videos, and other multimedia resources. Including multimedia would increase students' use of CBA (Terzis, Moridis, Economides, & Mendez, 2013) and increase their perception that the system is useful for enhancing their learning performance or productivity. Secondly, since SI also has a very strong effect on BI to use CBA in all students (regardless of their SR level), the CBA platform can be modified to include a social network awareness mechanism (i.e., students can be aware of peers' learning activities and performance) (Lin & Lai, 2013b), which would increase the effect of SI on BI (i.e., enhance student participation in CBA). Thirdly, an effective CBA requires a user-friendly interface, particularly for low-SR students. Teachers and developers should collaboratively design a user-friendly CBA or provide training courses to demonstrate the ease of use of the CBA and to increase students' familiarity with it, a tactic which is particularly potent for low-SR students. Finally, low-SR students may need support in transforming intention into physical action. External scaffolds (teacher and peer stimulus) can effectively motivate students to perform self-regulated learning behaviour (Lin & Lai, 2013b; Lin et al., 2016).
For example, the teacher can remind students, particularly those with low SR, to apply cognitive and motivational strategies. Additionally, incorporating collaboration mechanisms in a CBA, whereby peers can request and offer help when encountering questions during assessments, can increase opportunities for peer interaction to collaboratively address incorrect answers or misconceptions. According to Lin et al. (2016), seeking help and responding to requests for help can perceptibly increase students' sense of self-responsibility and reinforce their self-regulated learning behaviour. The more learning becomes self-regulated, the more students assume control over their learning (Nicol & Macfarlane-Dick, 2006).

It is acknowledged that the questionnaire survey used in this study may have biased the results for the following reasons. The first concern is that, because the instructor asked students to hand in their questionnaires to him one by one, students' answers might have favoured the instructor. The second concern is that, as stated in the experiment procedure, students were free to use or not use the CBA system during the experiment; therefore, the completed questionnaires might have included some from students who did not perform assessments (i.e., had no CBA experience) during the experiment, and their responses might thus be unreliable. However, the system logs revealed that fewer than 2% of respondents did not perform assessments, so this should have had limited impact on the analysed results.

As in any empirical study, this study has limitations. One limitation is that this study did not consider other moderating variables (e.g., prior experience) to explain technology acceptance, especially in the case of UB, where the explanatory power of the model was lower. A second limitation is that data from other universities, regions, or countries were not used, which limits the generalisability of the results.

References

Acosta-Gonzaga, E., & Walet, N. R.
(2018). The role of attitudinal factors in mathematical on-line assessments: A study of undergraduate STEM students. Assessment & Evaluation in Higher Education, 43(5), 710-726. https://doi.org/10.1080/02602938.2017.1401976

Benbasat, I., & Barki, H. (2007). Quo vadis TAM? Journal of the Association for Information Systems, 8(4), 211-218. Retrieved from https://aisel.aisnet.org/jais/vol8/iss4/16

Chin, W. W. (2000). Frequently asked questions – partial least squares and PLS-graph [Home page]. Retrieved from http://disc-nt.cba.uh.edu/chin/plsfaq.htm

Cigdem, H., & Oncu, S. (2015). E-assessment adaptation at a military vocational college: Student perceptions. Eurasia Journal of Mathematics, Science & Technology Education, 11(5), 971-988. Retrieved from https://eric.ed.gov/?id=EJ1074088

Deng, S., Liu, Y., & Qi, Y. (2011). An empirical study on determinants of web based question-answer services adoption. Online Information Review, 35(5), 789-798. https://doi.org/10.1108/14684521111176507

de Oca, A. M. M., & Nistor, N. (2014). Non-significant intention–behavior effects in educational technology acceptance: A case of competing cognitive scripts? Computers in Human Behavior, 34, 333-338. https://doi.org/10.1016/j.chb.2014.01.026

Deutsch, T., Herrmann, K., Frese, T., & Sandholzer, H. (2012). Implementing computer-based assessment – a web-based mock examination changes attitudes. Computers & Education, 58(4), 1068-1075. https://doi.org/10.1016/j.compedu.2011.11.013

Finkel, E. J., & Campbell, W. K. (2001). Self-control and accommodation in close relationships: An interdependence analysis. Journal of Personality and Social Psychology, 81(2), 263-277. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/11519931

Khechine, H., Lakhal, S., Pascot, D., & Bytha, A. (2014).
UTAUT model for blended learning: The role of gender and age in the intention to use webinars. Interdisciplinary Journal of E-Learning and Learning Objects, 10(1), 33-52. https://doi.org/10.28945/1994

Lin, J. W., Huang, H. H., & Chuang, Y. S. (2015). The impacts of network centrality and self-regulation on an e-learning environment with the support of social network awareness. British Journal of Educational Technology, 46(1), 32-44. https://doi.org/10.1111/bjet.12120

Lin, J. W., & Lai, Y. C. (2013a). Harnessing collaborative annotations on online formative assessments. Educational Technology & Society, 16(1), 263-274. Retrieved from https://www.j-ets.net/ETS/journals/16_1/23.pdf

Lin, J. W., & Lai, Y. C. (2013b). Online formative assessments with social network awareness. Computers & Education, 66, 40-53. https://doi.org/10.1016/j.compedu.2013.02.008

Lin, J. W., Szu, Y. C., & Lai, C. N. (2016). Effects of group awareness and self-regulation level on online learning behaviors. The International Review of Research in Open and Distributed Learning, 17(4), 224-241. https://doi.org/10.19173/irrodl.v17i4.2370

Lin, J. W., & Tsai, C. W. (2016). The impact of an online project-based learning environment with group awareness support on students with different self-regulation levels: An extended-period experiment. Computers & Education, 99, 28-38. https://doi.org/10.1016/j.compedu.2016.04.005

Maqableh, M., Masa’deh, R., & Mohammed, A. (2015). The acceptance and use of computer based assessment in higher education. Journal of Software Engineering and Applications, 8(10), 557. https://doi.org/10.4236/jsea.2015.810053

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. https://doi.org/10.1080/03075070600572090

Nikou, S. A., & Economides, A. A. (2017).
Mobile-based assessment: Investigating the factors that influence behavioral intention to use. Computers & Education, 109, 56-73. https://doi.org/10.1016/j.compedu.2017.02.005

Nistor, N. (2014). When technology acceptance models won’t work: Non-significant intention-behavior effects. Computers in Human Behavior, 34, 299-300. https://doi.org/10.1016/j.chb.2014.02.052

Olasehinde, K. J., & Olatoye, R. A. (2014). Self-regulation and peer influence as determinants of senior secondary school students’ achievement in science. Mediterranean Journal of Social Sciences, 5(7), 374-380. https://doi.org/10.5901/mjss.2014.v5n7p374

Padilla-Meléndez, A., Del Aguila-Obra, A. R., & Garrido-Moreno, A. (2013). Perceived playfulness, gender differences and technology acceptance model in a blended learning scenario. Computers & Education, 63, 306-317. https://doi.org/10.1016/j.compedu.2012.12.014

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 358-407. https://doi.org/10.1007/s10648-004-0006-x

Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education: Theory, research, and applications. Upper Saddle River, NJ: Merrill-Prentice Hall.

Pintrich, P. R., & Zusho, A. (2007). Student motivation and self-regulated learning in the college classroom. In R. P. Perry, & J. C.
Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 731-810). Dordrecht: Springer. https://doi.org/10.1007/1-4020-5742-3_16

Pituch, K. A., & Lee, Y. K. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47(2), 222-244. https://doi.org/10.1016/j.compedu.2004.10.007

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27(6), 2067-2077. https://doi.org/10.1016/j.chb.2011.08.005

Terzis, V., & Economides, A. A. (2011a). The acceptance and use of computer based assessment. Computers & Education, 56(4), 1032-1044. https://doi.org/10.1016/j.compedu.2010.11.017

Terzis, V., & Economides, A. A. (2011b). Computer based assessment: Gender differences in perceptions and acceptance. Computers in Human Behavior, 27(6), 2108-2122. https://doi.org/10.1016/j.chb.2011.06.005

Terzis, V., Moridis, C. N., & Economides, A. A.
(2012a). The effect of emotional feedback on behavioral intention to use computer based assessment. Computers & Education, 59(2), 710-721. https://doi.org/10.1016/j.compedu.2012.03.003

Terzis, V., Moridis, C. N., & Economides, A. A. (2012b). How student’s personality traits affect computer based assessment acceptance: Integrating BFI with CBAAM. Computers in Human Behavior, 28(5), 1985-1996. https://doi.org/10.1016/j.chb.2012.05.019

Terzis, V., Moridis, C. N., & Economides, A. A. (2013). Continuance acceptance of computer based assessment through the integration of user’s expectations and perceptions. Computers & Education, 62, 50-61. https://doi.org/10.1016/j.compedu.2012.10.018

Terzis, V., Moridis, C. N., Economides, A. A., & Mendez, G. R. (2013). Computer based assessment acceptance: A cross-cultural study in Greece and Mexico. Educational Technology & Society, 16(3), 411-424. Retrieved from https://www.j-ets.net/ETS/journals/16_3/31.pdf

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540

Wang, T. H. (2011). Developing web-based assessment strategies for facilitating junior high school students to perform self-regulated learning in an e-learning environment. Computers & Education, 57(2), 1801-1812. https://doi.org/10.1016/j.compedu.2011.01.003

Zimmerman, B. J. (1998). Developing self-fulfilling cycles of academic regulation: An analysis of exemplary instructional models. In D. H. Schunk, & B. J. Zimmerman (Eds.), Self-regulated learning: From teaching to self-reflective practice (pp. 1-19). New York, NY: Guilford Press. Retrieved from https://psycnet.apa.org/record/1998-07519-001

Zimmerman, B. J. (2000). Attaining self-regulated learning: A social-cognitive perspective. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13-39). San Diego, CA: Academic Press.
https://doi.org/10.1016/B978-012109890-2/50031-7

Corresponding author: Jian-Wei Lin, jwlin@uch.edu.tw

Please cite as: Lin, J. W., & Lai, Y. C. (2019). User acceptance model of computer-based assessment: Moderating effect of self-regulation. Australasian Journal of Educational Technology, 35(1), 163-176. https://doi.org/10.14742/ajet.4684

Appendix A

Questionnaire items (item codes and operational definitions; literature sources in parentheses)

Performance expectancy (Terzis & Economides, 2011a; Terzis, Moridis, & Economides, 2012a)
PE1. Using the system improves my learning performance.
PE2. Using the system increases my learning effectiveness.
PE3. Using the system enables me to achieve a high academic performance.

Effort expectancy (Terzis & Economides, 2011a; Terzis, Moridis, & Economides, 2012a)
EE1. Learning to operate the system is easy for me.
EE2. I can easily become skillful at using the system.
EE3. The system has a clear and friendly user interface.

Social influence (Terzis & Economides, 2011a; Terzis, Moridis, & Economides, 2012a)
SI1. Classmates who are important to me affect my use of the system.
SI2. People who influence my behaviour think that I should use CBA.
SI3. My university generally supports the use of CBA.

Behavioural intention
BI1. I intend to use the system in the following months. (Terzis & Economides, 2011a; Venkatesh, Morris, Davis, & Davis, 2003)
BI2. I plan to use the system in the following months.
BI3. I predict I would use the system in the following months.