The Validation of the Information Systems Success Model: LMS Integration during Covid-19

MIFTAHUDDIN 1, LANTIP DIAT PRASOJO 2,*, AND AWNIS AKALILI 3

Abstract

This study adapted the DeLone & McLean information system success model (D&M IS success model) to the implementation of a learning management system (LMS) during Covid-19. Six variables are included: system quality, information quality, service quality, system usage, user satisfaction, and net benefits, with 24 initial items. A total of 279 undergraduate students from a public university in Indonesia participated in this study. The factor structure of the instrument was investigated using a survey design. The survey data were analyzed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Six variables emerged from the EFA, establishing a valid and reliable model; a few items were eliminated due to cross-loading. The suggested model was successfully mapped as a consequence of the results. The CFA confirmed that the instrument was suitable for the Indonesian setting. The findings led to the development of 19 reliable and valid items. The measured scale has sound psychometric qualities, providing future research with a tool to capture D&M IS success model technology integration.

Keywords

CFA, EFA, IS success model, Validation

1 Senior lecturer of history education at Universitas Negeri Yogyakarta, Indonesia.
2 Professor at the Fakultas Ilmu Pendidikan, Universitas Negeri Yogyakarta, Yogyakarta, Indonesia; corresponding email: lantip@uny.ac.id
3 Lecturer of Communication science, social science faculty, Universitas Negeri Yogyakarta, Indonesia.

Introduction

The use of technology has transformed traditional teaching and learning techniques into a more active and dynamic condition. Face-to-face contact is no longer the sole way for instructors and students to communicate (Hernandez-de-Menendez et al., 2020). In recent years, the use of technology in education, particularly in higher education, has increased student access and engagement. Since then, the educational methodology has evolved from traditional techniques toward electronic learning (e-learning). The learning management system (LMS) is one type of e-learning used in higher education. The demand for quick transmission of knowledge and information at any time and from any location has fueled the rise of the LMS throughout the world, making it a critical component of the success of educational activities. The prevalence of coronavirus disease 2019 (Covid-19) has made the LMS all the more indispensable (Abazi-Bexheti et al., 2018; Rossini et al., 2021). The LMS is a web-based learning platform that allows users to access information and knowledge regardless of time or location.
The LMS is a comprehensive e-learning platform with full multimedia integration, instructor-led and real-time instruction, and a collaborative environment. Real-time, synchronous distribution and asynchronous distribution are the two forms of LMS delivery (Namada, 2021). The LMS has become an essential component of remote learning, an idea that must be fully grasped for the new learning environment paradigm. Humanity has encountered numerous challenges since the Spanish flu outbreak; without question, Covid-19 is unparalleled. With almost three billion individuals under quarantine since its start, its scope and impact are unprecedented in contemporary global history. Technology, on the other hand, substantially separates the contemporary situation from the past, altering the history of quarantine. The employment of technology in all parts of life is unavoidable in this Covid-19 environment (Gabr et al., 2021). Several studies have looked at how technology is used in schools during the pandemic. However, there are few empirical studies on the success of LMS implementation during Covid-19. The current study is part of a larger attempt to develop a valid and reliable scale for measuring LMS implementation using the D&M IS success model. The research was carried out at one university, where students utilized the LMS on a regular basis during the Covid-19 teaching and learning process.

Literature Review

Understanding information system success is a topic that many scholars, practitioners, and management stakeholders are interested in. This knowledge helps to emphasize the system's worth and may be used to inform future decisions about similar systems. There are several methods to measure success; the D&M IS model is among the most widely used and well-validated. The model was initially proposed in 1992 and revised in 2003 (DeLone & McLean, 2003). The model comprises six interrelated dimensions: system quality, information quality, service quality, system usage, user satisfaction, and net benefits (Delone & McLean, 2014; DeLone & McLean, 2003; Wang, 2008). In previous research, the model has been used and verified for e-commerce platforms (Sharma & Aggarwal, 2019; Tam et al., 2020), knowledge sharing (Halonen et al., 2010; Sarkheyli & Song, 2019), e-government (Lessa & Tsegaye, 2019; Mellouli et al., 2020), and technology integration in education (Al-Azawei, 2019; Safsouf et al., 2020; Shahzad et al., 2021). It is worth noting that the majority of research using the D&M IS model to measure success has been conducted in developed nations, with just a handful specifically confirming the model for LMS integration in developing ones, particularly during Covid-19 (Wang, 2008). As a result, the primary goal of the current research is to evaluate the proposed D&M IS model for assessing LMS integration success during Covid-19 in Indonesia. The D&M IS model, which presents six interconnected constructs of information system success metrics, is used to drive this research (Al-Azawei, 2019; Halonen et al., 2010; Lessa & Tsegaye, 2019; Mellouli et al., 2020; Safsouf et al., 2020; Sarkheyli & Song, 2019; Shahzad et al., 2021). 1) System quality assesses an information system's desired technical qualities. This has been assessed in several information systems studies using factors such as system tools, reaction speed, and adaptability.
This research, on the other hand, evaluated system quality by looking at the ease of use of the LMS and its function and adaptability. 2) Information quality concerns the content and output qualities of information systems. It has been assessed by evaluating an information system's output for timeliness, correctness, dependability, and trustworthiness. 3) Service quality is determined by the level of assistance provided by the creator of the information system. Service quality characteristics such as assurance and responsiveness of the systems support department, and the provision of user training, have been used in studies to measure this. Service quality was assessed in this study by looking at the technical assistance provided to system users, the network infrastructure in place, and the system's dependability. 4) Intention to use focuses on evaluating how an information system is utilized. Various studies have looked at actual usage or, in certain cases, frequency of use to determine this. 5) User satisfaction is one of the most significant indicators of a system's performance; it is frequently assessed by overall user satisfaction. 6) Net benefits is also one of the most significant indicators of information system performance, as it indicates how much an information system contributes, positively or negatively, to the success of various stakeholders. It has been assessed by measuring individual or organizational impact.

Need for an instrument

Statistical data on technological integration, as seen by teachers, has been the subject of many studies (Dong et al., 2015; M. Liu, 2013; S. H. Liu, 2013; Ndongfack, 2015; Polly et al., 2010). Students have also been involved in technology integration research (Al-Ani, 1979; Dasig & Pascua, 2016; Ervin, 2014; Lisenbee & Ford, 2018). However, just a few studies provided enough data on the success of information technology (DeLone & McLean, 2003; Halonen et al., 2010; Sarkheyli & Song, 2019; Wang, 2008), especially in developing countries. Therefore, this study's objective is to examine the validity and reliability of the instrument in the context of a developing country, as perceived by students, regarding the success of LMS implementation during Covid-19. Instrument development should include a sufficient number of indicators to fit the setting and context (Connell et al., 2018; Hosseini & Kamal, 2012; Jamieson-Proctor et al., 2013; Valtonen et al., 2015; Zelkowski et al., 2013). It aims at capturing critical aspects of the constructs of the study. This study refers to the Indonesian context and setting through the use of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) (James, 2009; Kyriazos, 2018; Padgett & Morgan, 2021). Specifically, the study was guided by the following research question: How valid and reliable is the proposed instrument regarding LMS implementation success among Indonesian students during Covid-19?

Methodology

Research design and participants

This study applied a survey as the main data collection method (Ball, 2019; Geldsetzer, 2020; Weiss et al., 2016). We initiated the survey instrument by thoroughly evaluating related previous studies (Andrews & Diego-Mantecón, 2015; Hanniball et al., 2021).
Afterwards, the instrument indicators were validated through content validity and distributed for a pilot study (Hazzi & Maaldaon, 2015; Leon et al., 2011). After the validation of data normality (Alejo et al., 2015; Miot, 2017; Noel, 2021), the data were assessed for validity and reliability through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The present study's population was made up of all undergraduate students at an Indonesian public university. Cluster random sampling was used because it allowed the researchers to investigate the selection of groups rather than individuals. Three hundred undergraduate students were given the online survey instrument, and a total of 279 students responded. To gain a better comprehension of the items, the instrument was prepared in the participants' native language, Indonesian.

Data collection

A questionnaire was developed to gather the data required for analysis. To ensure validity, 24 items were adapted from prior studies with validated scales (Cho et al., 2015; Ojo & Popoola, 2015; Pai & Huang, 2011; Tilahun & Fritz, 2015). We discussed the questionnaire with five experts and five users in order to assess all items through a face and content validity process. Adjustments were made as needed, and 21 items were submitted for the primary data collection; three were removed because they were inappropriate for the study's topic, context, and setting. The current study employs a survey design, a quantitative approach in which surveyors distribute an instrument to a sample or an entire population to gather information on the respondents' views and perceptions. Individual accounts of social reality make it easier to build a foundation for further study. Survey research is very important in education. A survey is described as a collection of data obtained from the responses of a sample of people (Liang et al., 2013; Williams, 2013). Creswell (2014) noted that the survey design differs from the experimental design in that it does not provide a treatment to the participants or subjects. Because investigators do not manipulate the setting as experimental researchers do, they are unable to draw conclusions about cause and effect. Rather than supporting strong causal interpretations, surveys describe patterns in the data (Rowan et al., 2001). In quantitative inquiry, survey research is a technique in which researchers disseminate a data collection device to a sample or an entire population in order to elicit their views, beliefs, behaviors, or traits (Chan, 2012).

Data analysis

The current study addressed several data screening concerns, namely missing data, multicollinearity, outliers, and normality, before moving on to the primary data analysis. The statistical results were computed using SPSS 23.0. For each variable, we used a box plot to identify outliers. Skewness and kurtosis were assessed to ensure that the data were normal (-1.96 to +1.96 at the 0.05 significance level) (Chou et al., 1998; Singh & Masuku, 2014). Multicollinearity was flagged when any value in the correlation matrix exceeded the cutoff of .900. Following that, the data were examined in two phases for each construct: EFA and CFA.
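For readers who wish to replicate these screening steps outside SPSS, the sketch below illustrates them in Python. It is illustrative only; the file name lms_survey.csv and the DataFrame df are hypothetical stand-ins for the 279 item-level responses.

```python
# Illustrative replication of the data-screening steps (the study used SPSS 23.0).
import pandas as pd
from scipy import stats

df = pd.read_csv("lms_survey.csv")  # hypothetical file: one column per item

# 1. Missing data: share of missing values per item (0% to 0.4% in this study).
print(df.isna().mean())

# 2. Normality: skewness and kurtosis should fall within -1.96 to +1.96.
for col in df.columns:
    print(col,
          round(stats.skew(df[col], nan_policy="omit"), 3),
          round(stats.kurtosis(df[col], nan_policy="omit"), 3))

# 3. Outliers: values beyond the box-plot whiskers (1.5 x IQR rule).
q1, q3 = df.quantile(0.25), df.quantile(0.75)
iqr = q3 - q1
outliers = df.lt(q1 - 1.5 * iqr) | df.gt(q3 + 1.5 * iqr)
print(outliers.sum())

# 4. Multicollinearity: flag any inter-item correlation above the .900 cutoff.
corr = df.corr()
print((corr.abs() > 0.900).sum() - 1)  # minus one removes each diagonal entry
```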
The Kaiser-Meyer-Olkin (KMO) value, Bartlett's test, factor loadings, eigenvalues, the scree plot, and varimax rotation were all included in the EFA. The KMO index needs to be higher than .500; a KMO score of less than .500 indicates that the sample size for the EFA technique is insufficient and the results may not be trustworthy. Bartlett's test of sphericity should be significant at p < .050. Each indicator's factor loading should be greater than .500 (Habibi et al., 2020; James, 2009). An eigenvalue is the proportion of variance contribution extracted by each factor in factor analysis; a factor with an eigenvalue of less than 1.000 must be eliminated, and communality must be greater than .300. For the CFA, quality of fit was assessed using several measurements: loadings of > .500 (Kyriazos, 2018; Truong et al., 2010; Zainudin, 2012), chi-square at p > .05, a Comparative Fit Index (CFI) of > .800, and a Root Mean-Square Error of Approximation (RMSEA) of < .080 (Padgett & Morgan, 2021); details are presented in Table 4. Reliability is characterized as the stability of the values acquired. Cronbach's alpha, composite reliability (CR), and average variance extracted (AVE) values were computed to determine the data's reliability. Cronbach's alpha must be larger than .700, CR values should be greater than .600, and AVE values should be greater than .500.

Findings

Preliminary analysis

The quantity of missing data in the current study ranged from 0% to 0.4% per item. Multiple imputation, an iterative type of stochastic imputation, was employed to cope with the missing data: the observed data distribution was used to estimate several values that reflected the uncertainty around the real value, instead of replacing it with a single value. Table 1 shows the correlation matrix, skewness, and kurtosis, indicating satisfactory data. The preliminary analysis of system quality, information quality, service quality, system usage, user satisfaction, and net benefits found that they were all univariate normal (skewness and kurtosis values ranging from -.419 to -.132 and from .137 to 1.193, respectively). Inter-correlations among the six variables ranged from .295 to .615, below the multicollinearity cutoff; discriminant validity was achieved (all correlations < .900).

Table 1. Correlation matrix, skewness, and kurtosis

                      SQ        IQ        SerQ      SU        USat      NB
System quality        1         .615**    .475**    .465**    .393**    .436**
Information quality   .615**    1         .482**    .495**    .354**    .455**
Service quality       .475**    .482**    1         .418**    .360**    .392**
System usage          .465**    .495**    .418**    1         .457**    .379**
User satisfaction     .393**    .354**    .360**    .457**    1         .295**
Net benefits          .436**    .455**    .392**    .379**    .295**    1
Mean                  3.6254    3.5968    3.4719    3.7145    3.6476    3.7195
SD                    .45867    .53941    .62578    .58705    .61913    .62686
Skewness              -.318     -.154     -.318     -.419     -.214     -.132
Kurtosis              .137      1.145     1.121     1.193     1.159     1.137

Exploratory factor analysis

All 21 items entered the EFA. One item (SQ1, "The system is easy to use") with a loading value below .500 was dropped. The KMO value (.865) was above .500, and Bartlett's test was significant at p < .001 (Table 2). Six factors were extracted with eigenvalues greater than 1.0: system quality (1.024), information quality (1.590), service quality (1.377), system usage (1.240), user satisfaction (1.936), and net benefits (6.937). The factor loadings of all variables are above .400: system quality (.482 to .769), information quality (.539 to .745), service quality (.727 to .809), system usage (.585 to .773), user satisfaction (.759 to .813), and net benefits (.681 to .747). All communality values exceed .300, showing sufficient communality. Table 3 presents all loadings, communalities, and eigenvalues.

Table 2. KMO and Bartlett's test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy           .865
Bartlett's Test of Sphericity    Approx. Chi-Square       2585.691
                                 df                       190
                                 Sig.                     .000

Table 3. Loadings, communalities, eigenvalues, cross-loadings

Variable (eigenvalue)         Item    M        SD        Comm.    Loading
Net benefits (6.937)          NB3     3.6237   .72341    .747     .809
                              NB1     3.8136   .84459    .681     .764
                              NB2     3.7885   .74071    .701     .740
                              NB4     3.6523   .79403    .669     .738
User satisfaction (1.936)     Usat2   3.6344   .68570    .759     .859
                              Usat3   3.6631   .68470    .764     .833
                              Usat1   3.6452   .72437    .813     .818
Information quality (1.590)   IQ2     3.6631   .67411    .745     .818
                              IQ1     3.5305   .69295    .714     .744
                              IQ4     3.5197   .69334    .539     .644
                              IQ3     3.6738   .67141    .605     .553
Service quality (1.377)       SerQ3   3.5878   .69840    .809     .834
                              SerQ2   3.3154   .70518    .747     .803
                              SerQ1   3.5125   .76268    .727     .750
System usage (1.240)          SU3     3.7706   .71298    .773     .812
                              SU2     3.5161   .72389    .706     .788
                              SU1     3.8566   .70524    .585     .616
System quality (1.024)        SQ4     3.5699   .53774    .769     .844
                              SQ3     3.5806   .52947    .767     .836
                              SQ2     3.7240   .67789    .482     .499

Note. Loadings are shown on each item's own factor; suppressed cross-loadings are omitted.
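As a companion to the SPSS output above, a minimal EFA sketch using the open-source factor_analyzer package is shown below; df is again the hypothetical DataFrame of the 21 item responses, and principal-component extraction with varimax rotation mirrors the procedure reported in the Discussion.

```python
# A minimal EFA sketch with factor_analyzer; decision rules follow the Data
# analysis section (KMO > .500, significant Bartlett's test, loadings > .500,
# eigenvalues > 1.000, communalities > .300).
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

chi_square, p_value = calculate_bartlett_sphericity(df)  # expect p < .05
_, kmo_overall = calculate_kmo(df)                       # expect KMO > .500

fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(df)

loadings = fa.loadings_                  # drop items below .500 or cross-loading
eigenvalues, _ = fa.get_eigenvalues()    # retain factors with eigenvalues > 1.000
communalities = fa.get_communalities()   # each should exceed .300
```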
Confirmatory factor analysis

CFA was conducted to test the factorial validity of the six variables. The measurement model was satisfactory after one error covariance was added between IQ1 and IQ3. One item (NB4, "The system is an important and valuable aid to me in the performance of my classwork") was dropped since its loading value was less than .500 (Truong et al., 2010; Zainudin, 2012). After the dropping, all CFA loading values surpass the standard cutoff value of .500 (Figure 1). Figure 1 also shows the standardized coefficients of the CFA, addressing the correlation between factors and items for all variables: χ2/df = 1.830, CFI = .800, and RMSEA = .055. Six popular model-fit metrics were used to assess the model's goodness-of-fit: the chi-square ratio (χ2/df), goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), comparative fit index (CFI), root mean square residual (RMSR), and root mean square error of approximation (RMSEA). Table 4 indicates that the model fit indices met their suggested acceptability thresholds, indicating that the measurement model fits the data quite well.

Table 4. Model fit indices

Parameter                    Threshold    Obtained value
Chi-square ratio (χ2/df)     ≤ 3.406      1.830
GFI                          ≥ 0.90       .906
AGFI                         ≥ 0.80       .868
CFI                          ≥ 0.80       .800
RMSR                         ≤ 0.10       .065
RMSEA                        ≤ 0.08       .055

Figure 1. CFA results
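The CFA itself was estimated in SPSS AMOS 23.0. As a hedged, open-source alternative, the same measurement model can be written in the semopy package; the lavaan-style description below encodes the six constructs with their retained items and the IQ1-IQ3 error covariance, with item names taken from the paper.

```python
# A sketch of the final measurement model in semopy (the study used AMOS 23.0).
import semopy

MODEL_DESC = """
SQ   =~ SQ2 + SQ3 + SQ4
IQ   =~ IQ1 + IQ2 + IQ3 + IQ4
SerQ =~ SerQ1 + SerQ2 + SerQ3
SU   =~ SU1 + SU2 + SU3
USat =~ Usat1 + Usat2 + Usat3
NB   =~ NB1 + NB2 + NB3
IQ1 ~~ IQ3
"""

model = semopy.Model(MODEL_DESC)
model.fit(df)                       # df: DataFrame of the retained item responses

print(model.inspect(std_est=True))  # standardized loadings should exceed .500
print(semopy.calc_stats(model).T)   # chi2/df, GFI, AGFI, CFI, RMSEA vs. Table 4
```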
Cronbach's alpha and composite reliability (CR) values of all variables were found to be satisfactory, above .700: system quality (CR = 0.772; α = .706), information quality (CR = 0.814; α = .799), service quality (CR = 0.830; α = .833), system usage (CR = 0.767; α = .760), user satisfaction (CR = 0.853; α = .864), and net benefits (CR = 0.814; α = .786). In addition, the AVE for all variables also exceeds the desirable threshold of .500, denoting acceptable convergent validity (Table 5). Taken together, the EFA and CFA results establish the validity and reliability of the survey instrument as a scale for measuring the success of LMS implementation through the D&M IS success model.

Table 5. CFA results of all constructs

Variable             Item (statement)                                              Loading   CR      AVE     α
Net benefits         NB3 (Overall, the system is successful)                       .720      0.814   0.770   .786
                     NB1 (The system has a positive impact on my learning)         .820
                     NB2 (Overall, the performance of the system is good)          .770
User satisfaction    Usat2 (I think the system is very helpful)                    .760      0.853   0.891   .864
                     Usat3 (Overall, I am satisfied with the system)               .900
                     Usat1 (I have a positive attitude or evaluation about the
                     way the system functions)                                     .900
Information quality  IQ2 (Information I get from the system is accurate)           .740      0.814   0.723   .799
                     IQ1 (Information from the system is relevant to my work)      .780
                     IQ4 (The information is presented in a useful format)         .630
                     IQ3 (It is easy to understand information from the system)    .740
Service quality      SerQ3 (Overall, the support services meet my needs)           .810      0.830   0.787   .833
                     SerQ2 (The support services give me individual attention)     .740
                     SerQ1 (The support services for the system are dependable)    .810
System usage         SU3 (I only use the system when it is absolutely necessary
                     for learning)                                                 .770      0.767   0.723   .760
                     SU2 (I depend upon the system)                                .710
                     SU1 (I frequently use the system)                             .690
System quality       SQ4 (I can retrieve information I need easily)                .750      0.772   0.723   .706
                     SQ3 (The system is easy to learn)                             .850
                     SQ2 (The system is useful)                                    .570
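The reliability indices in Table 5 can be spot-checked from the standardized loadings. The sketch below applies the conventional formulas (CR as the squared sum of loadings divided by that quantity plus the summed error variances; AVE as the mean of the squared loadings); applied to the system usage loadings, it reproduces the reported CR of 0.767.

```python
# Spot-check of CR and AVE from standardized loadings (conventional formulas).
import numpy as np

def cr_ave(loadings):
    l = np.asarray(loadings, dtype=float)
    error = 1.0 - l ** 2                              # indicator error variances
    cr = l.sum() ** 2 / (l.sum() ** 2 + error.sum())  # composite reliability
    ave = (l ** 2).mean()                             # average variance extracted
    return cr, ave

cr, ave = cr_ave([0.770, 0.710, 0.690])               # SU3, SU2, SU1 from Table 5
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")              # thresholds: CR > .600, AVE > .500
```

Because AVE here averages the squared loadings, its value may differ from tabulations computed with other variants of the formula.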
Discussion

The scale development process of the current study was conducted in several stages, with the aim of producing a scale with examined validity and reliability. Initially, twenty-four items were established, adapted from prior studies (Cho et al., 2015; Ojo & Popoola, 2015; Pai & Huang, 2011; Tilahun & Fritz, 2015). To filter the instrument, three items were removed from the list during the face and content validity processes. The items (n = 21) were distributed to respondents (279 undergraduate students) from one public university in Indonesia, and EFA was conducted using varimax rotation with principal component analysis. Through this process, one item was dropped. The dropping process did not remove any important content of the scale; it helped improve the reliability and validity of the scale, leaving twenty items for the CFA process. The procedures for determining the validity and reliability of an instrument should contain a sufficient number of items that are appropriate for the setting and context. As a result, the instrument may be able to capture important features of the constructs in a study (MacLeod et al., 2018; Maul, 2017). The technique aims to reveal the unknown elements that impact the co-variation of various LMS integration observations in the Indonesian setting. The technique was then repeated using CFA to confirm the factorial validity of the D&M IS success model components in relation to the usage of the LMS during Covid-19. The EFA-based data were calculated for CFA using SPSS AMOS 23.0; just one item was eliminated since its loading value was less than the cutoff value. The reliability of the remaining 19 indicators was investigated. Cronbach's alpha, AVE, and CR values are adequate in this procedure, resulting in a valid and trustworthy scale. Previous research using similar techniques has employed the CFA process to corroborate EFA results. This method is critical for determining sub-construct measurements that are compatible with our knowledge of their nature (Kyriazos, 2018; Miller & Pellegrino, 2018). The scale is appropriate for the measurement model and can help future researchers perform comparative studies.

Conclusion

During the Covid-19 pandemic, the current study sought to create and validate the D&M IS success model in the setting of a learning management system (LMS) at a university in a developing nation. The final scale consists of 19 items divided into six constructs (system quality, information quality, service quality, system usage, user satisfaction, and net benefits). The measured scale has adequate psychometric characteristics and can be used in future research. The scale's reliability and validity were only tested at one university; it is necessary to incorporate a broader range of samples in future research and to evaluate the relationships among the constructs. Other study settings and contexts are also suggested. Furthermore, more specific and applicable definitions of extended constructs and sub-constructs for measuring LMS integration during Covid-19 would aid the development of more consistent and exact survey instruments. Valid and trustworthy indicators linked to technology integration might be included in such instruments during a future outbreak of a similar nature. Furthermore, more relevant definitions of technological integration during a pandemic are required, both conceptually and practically. Mapping technology use in this context will move the focus away from the method of instruction and onto the subject.

Disclosure statement

No potential conflict of interest was reported by the authors.

Acknowledgments

The research/publication of this article was funded by Universitas Negeri Yogyakarta.

References

Abazi-Bexheti, L., Kadriu, A., Apostolova-Trpkovska, M., Jajaga, E., & Abazi-Alili, H. (2018). LMS solution: Evidence of Google Classroom usage in higher education. Business Systems Research, 9(1), 31–43. https://doi.org/10.2478/bsrj-2018-0003
Al-Ani, F. Y. (1979). Re-examination of transformation within different species of Rhizobium. Zentralblatt für Bakteriologie, Parasitenkunde, Infektionskrankheiten und Hygiene. Zweite Naturwissenschaftliche Abteilung: Mikrobiologie der Landwirtschaft der Technologie und des Umweltschutzes, 134(4), 301–309. https://doi.org/10.1016/S0323-6056(79)80002-6
Al-Azawei, A. (2019). What drives successful social media in education and e-learning? A comparative study on Facebook and Moodle. Journal of Information Technology Education: Research, 18.
https://doi.org/10.28945/4360
Alejo, J., Montes-Rojas, G., Galvao, A., & Sosa-Escudero, W. (2015). Tests for normality in linear panel-data models. Stata Journal, 15(3). https://doi.org/10.1177/1536867x1501500314
Andrews, P., & Diego-Mantecón, J. (2015). Instrument adaptation in cross-cultural studies of students' mathematics-related beliefs: Learning from healthcare research. Compare, 45(4). https://doi.org/10.1080/03057925.2014.884346
Ball, H. L. (2019). Conducting online surveys. Journal of Human Lactation, 35(3). https://doi.org/10.1177/0890334419848734
Chan, L. (2012). A survey of the "new" discipline of adaptation studies: Between translation and interculturalism. Perspectives: Studies in Translatology, 20(4), 411–418.
Cho, K. W., Bae, S. K., Ryu, J. H., Kim, K. N., An, C. H., & Chae, Y. M. (2015). Performance evaluation of public hospital information systems by the information system success model. Healthcare Informatics Research, 21(1). https://doi.org/10.4258/hir.2015.21.1.43
Chou, Y. M., Polansky, A. M., & Mason, R. L. (1998). Transforming non-normal data to normality in statistical process control. Journal of Quality Technology, 30(2). https://doi.org/10.1080/00224065.1998.11979832
Connell, J., Carlton, J., Grundy, A., Taylor Buck, E., Keetharuth, A. D., Ricketts, T., Barkham, M., Robotham, D., Rose, D., & Brazier, J. (2018). The importance of content and face validity in instrument development: Lessons learnt from service users when developing the Recovering Quality of Life measure (ReQoL). Quality of Life Research, 27(7), 1893–1902. https://doi.org/10.1007/s11136-018-1847-y
Dasig, D. D., & Pascua, S. M. (2016). Effects of digital learning objects in teaching real-time system. SITE, 2013, 1488–1498.
DeLone, W. H., & McLean, E. R. (2014). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 1222(November).
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Dong, Y., Chai, C. S., Sang, G. Y., Koh, J. H. L., & Tsai, C. C. (2015). Exploring the profiles and interplays of pre-service and in-service teachers' technological pedagogical content knowledge (TPACK) in China. Educational Technology and Society, 18(1), 158–169.
Ervin, L. (2014). Assessing student learning with technology: A descriptive study of technology-using teacher practice and technological pedagogical content knowledge (TPACK). ProQuest Dissertations and Theses (Issue December). https://search.proquest.com/docview/1658234466?accountid=15272
Gabr, H. M., Soliman, S. S., Allam, H. K., & Raouf, S. Y. A. (2021). Effects of remote virtual work environment during COVID-19 pandemic on technostress among Menoufia University staff, Egypt: A cross-sectional study. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-14588-w
Geldsetzer, P. (2020). Use of rapid online surveys to assess people's perceptions during infectious disease outbreaks: A cross-sectional survey on COVID-19. Journal of Medical Internet Research, 22(4). https://doi.org/10.2196/18790
Habibi, A., Yusop, F. D., & Razak, R. A. (2020).
The role of TPACK in affecting pre-service language teachers' ICT integration during teaching practices: Indonesian context. Education and Information Technologies, 25(3), 1929–1949. https://doi.org/10.1007/s10639-019-10040-2
Halonen, R., Thomander, H., & Laukkanen, E. (2010). DeLone & McLean IS success model in evaluating knowledge transfer in a virtual learning environment. International Journal of Information Systems and Social Change, 1(2). https://doi.org/10.4018/jissc.2010040103
Hanniball, K. B., Hohn, R., Fuller, E. K., & Douglas, K. S. (2021). Increasing the utility of the Comprehensive Assessment of Psychopathic Personality–Lexical Rating Scale (CAPP-LRS): Instrument adaptation and simplification. Assessment. https://doi.org/10.1177/10731911211040108
Hazzi, O. A., & Maaldaon, I. S. (2015). A pilot study: Vital methodological issues. Business: Theory and Practice, 16(1). https://doi.org/10.3846/btp.2015.437
Hernandez-de-Menendez, M., Escobar Díaz, C. A., & Morales-Menendez, R. (2020). Educational experiences with Generation Z. International Journal on Interactive Design and Manufacturing, 14(3). https://doi.org/10.1007/s12008-020-00674-9
Hosseini, Z., & Kamal, A. (2012). Developing an instrument to measure perceived technology integration in teaching. International Magazine on Advances in Computer Science and Telecommunications, 3(1), 78–89.
James, D. B. (2009). Choosing the right number of components or factors in PCA and EFA. JALT Testing & Evaluation SIG, 13(May).
Jamieson-Proctor, R., Albion, P., Finger, G., Cavanagh, R., Fitzgerald, R., Bond, T., & Grimbeek, P. (2013). Development of the TTF TPACK survey instrument. Australian Educational Computing, 27(3), 26–35.
Kyriazos, T. A. (2018). Applied psychometrics: Sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology, 9(8). https://doi.org/10.4236/psych.2018.98126
Leon, A. C., Davis, L. L., & Kraemer, H. C. (2011). The role and interpretation of pilot studies in clinical research. Journal of Psychiatric Research, 45(5). https://doi.org/10.1016/j.jpsychires.2010.10.008
Lessa, L., & Tsegaye, A. (2019). Evaluation of the public value of e-government services in Ethiopia: Case of court case management system. ACM International Conference Proceeding Series, Part F148155. https://doi.org/10.1145/3326365.3326369
Liang, J.-C., Chai, C. S., Koh, J. H. L., Yang, C.-J., & Tsai, C.-C. (2013). Surveying in-service preschool teachers' technological pedagogical content knowledge. Australasian Journal of Educational Technology, 29(4), 581–594. http://ascilite.org.au/ajet/submission/index.php/AJET/article/view/299
Lisenbee, P. S., & Ford, C. M. (2018). Engaging students in traditional and digital storytelling to make connections between pedagogy and children's experiences. Early Childhood Education Journal, 46(1), 129–139. https://doi.org/10.1007/s10643-017-0846-x
Liu, M. (2013). Blended learning in a university EFL writing course: Description and evaluation. Journal of Language Teaching and Research, 4(2). https://doi.org/10.4304/jltr.4.2.301-309
Liu, S. H. (2013). Exploring the instructional strategies of elementary school teachers when developing technological, pedagogical, and content knowledge via a collaborative professional development program. International Education Studies, 6(11), 58–68.
https://doi.org/10.5539/ies.v6n11p58
MacLeod, J., Yang, H. H., Zhu, S., & Li, Y. (2018). Understanding students' preferences toward the smart classroom learning environment: Development and validation of an instrument. Computers and Education, 122. https://doi.org/10.1016/j.compedu.2018.03.015
Maul, A. (2017). Rethinking traditional methods of survey validation. Measurement, 15(2). https://doi.org/10.1080/15366367.2017.1348108
Mellouli, M., Bouaziz, F., & Bentahar, O. (2020). E-government success assessment from a public value perspective. International Review of Public Administration, 25(3). https://doi.org/10.1080/12294659.2020.1799517
Miller, B., & Pellegrino, J. L. (2018). Measuring intent to aid of lay responders: Survey development and validation. Health Education and Behavior, 45(5). https://doi.org/10.1177/1090198117749257
Miot, H. A. (2017). Assessing normality of data in clinical and experimental trials. Jornal Vascular Brasileiro, 16(2). https://doi.org/10.1590/1677-5449.041117
Namada, J. M. (2021). Learning management systems in the era of e-learning. https://doi.org/10.4018/978-1-7998-5009-0.ch007
Ndongfack, M. N. (2015). Teacher profession development on technology integration using the Mastery of Active and Shared Learning for Techno-Pedagogy (MASLEPT) model. Creative Education, 6(3), 295–308. https://doi.org/10.4236/ce.2015.63028
Noel, D. D. (2021). Normality assessment of several quantitative data transformation procedures. Biostatistics and Biometrics Open Access Journal, 10(3). https://doi.org/10.19080/bboaj.2021.10.555786
Ojo, A. I., & Popoola, S. O. (2015). Some correlates of electronic health information management system success in Nigerian teaching hospitals. Biomedical Informatics Insights, 7. https://doi.org/10.4137/bii.s20229
Padgett, R. N., & Morgan, G. B. (2021). Multilevel CFA with ordered categorical data: A simulation study comparing fit indices across robust estimation methods. Structural Equation Modeling, 28(1). https://doi.org/10.1080/10705511.2020.1759426
Pai, F. Y., & Huang, K. I. (2011). Applying the Technology Acceptance Model to the introduction of healthcare information systems. Technological Forecasting and Social Change, 78(4). https://doi.org/10.1016/j.techfore.2010.11.007
Polly, D., McGee, J. R., & Sullivan, C. (2010). Employing technology-rich mathematical tasks to develop teachers' technological, pedagogical, and content knowledge (TPACK). Journal of Computers in Mathematics and Science Teaching, 29(4), 455–472. http://www.editlib.org/p/33276/
Rossini, T. S. S., do Amaral, M. M., & Santos, E. (2021). The viralization of online education: Learning beyond the time of the coronavirus. Prospects. https://doi.org/10.1007/s11125-021-09559-5
Rowan, B., Schilling, S., & Ball, D. (2001). Measuring teachers' pedagogical content knowledge in surveys: An exploratory study. Study of Instructional Improvement. http://www.sii.soe.umich.edu/newsite_temp/documents/pck final report revised BR100901.pdf
Safsouf, Y., Mansouri, K., & Poirier, F. (2020). Smart learning environment, measure online student satisfaction: A case study in the context of higher education in Morocco. 2020 International Conference on Electrical and Information Technologies, ICEIT 2020. https://doi.org/10.1109/ICEIT48248.2020.9113189
Sarkheyli, A., & Song, W. W. (2019).
DeLone and McLean IS success model for evaluating knowledge sharing. Lecture Notes in Computer Science, 11235. https://doi.org/10.1007/978-3-030-19143-6_9
Shahzad, A., Hassan, R., Aremu, A. Y., Hussain, A., & Lodhi, R. N. (2021). Effects of COVID-19 in e-learning on higher education institution students: The group comparison between male and female. Quality and Quantity, 55(3). https://doi.org/10.1007/s11135-020-01028-z
Sharma, H., & Aggarwal, A. G. (2019). Finding determinants of e-commerce success: A PLS-SEM approach. Journal of Advances in Management Research, 16(4). https://doi.org/10.1108/JAMR-08-2018-0074
Singh, A. S., & Masuku, M. B. (2014). Normality and data transformation for applied statistical analysis. International Journal of Economics, Commerce and Management, 2(7).
Tam, C., Loureiro, A., & Oliveira, T. (2020). The individual performance outcome behind e-commerce: Integrating information systems success and overall trust. Internet Research, 30(2). https://doi.org/10.1108/INTR-06-2018-0262
Tilahun, B., & Fritz, F. (2015). Modeling antecedents of electronic medical record system implementation success in low-resource setting hospitals. BMC Medical Informatics and Decision Making, 15(1). https://doi.org/10.1186/s12911-015-0192-0
Truong, Y., McColl, R., & Kitchen, P. J. (2010). Uncovering the relationships between aspirations and luxury brand preference. Journal of Product and Brand Management, 19(5). https://doi.org/10.1108/10610421011068586
Valtonen, T., Sointu, E. T., Mäkitalo-Siegl, K., & Kukkonen, J. (2015). Developing a TPACK measurement instrument for 21st century pre-service teachers. Seminar.Net, 11(2), 87–100.
Wang, Y. S. (2008). Assessing e-commerce systems success: A respecification and validation of the DeLone and McLean model of IS success. Information Systems Journal, 18(5). https://doi.org/10.1111/j.1365-2575.2007.00268.x
Weiss, K., Khoshgoftaar, T. M., & Wang, D. D. (2016). A survey of transfer learning. Journal of Big Data, 3(1). https://doi.org/10.1186/s40537-016-0043-6
Williams, S. (2013). Gathering feedback from early-career faculty: Speaking with and surveying agricultural faculty members about research data. Journal of eScience Librarianship, 2(2). https://doi.org/10.7191/jeslib.2013.1048
Zainudin, A. (2012). Analyzing the moderating. In A handbook on structural equation modeling (SEM) using AMOS (pp. 134–162).
Zelkowski, J., Gleason, J., Cox, D. C., & Bismarck, S. (2013). Developing and validating a reliable TPACK instrument for secondary mathematics preservice teachers. Journal of Research on Technology in Education, 46(2), 173–206. https://doi.org/10.1080/15391523.2013.10782618

Biographical Notes

Dr. MIFTAHUDDIN is a senior lecturer of history education at Universitas Negeri Yogyakarta. He earned his doctoral degree from UIN Sunan Kalijaga, Yogyakarta, in 2017 and studied history at Universitas Gadjah Mada for his master's degree. LANTIP DIAT PRASOJO is a Professor in Educational Management and Information Systems and the head of the Institute of Educational Development and Quality Assurance at Universitas Negeri Yogyakarta.
He completed his bachelor's degree at Universitas Gadjah Mada in 2001, finished his master's degree in 2005 at Universitas Negeri Yogyakarta, and earned his doctoral degree in 2015 from Universitas Pendidikan Indonesia, Bandung. He has published more than 20 books and 15 articles in reputable journals such as Heliyon, Quality Access to Success, Current Science, and Data in Brief. AWNIS AKALILI is a lecturer in Communication science at the social science faculty, Universitas Negeri Yogyakarta. She is a postgraduate student at Universitas Gadjah Mada, Indonesia.