Australasian Journal of Educational Technology, 2018, 34(1). 88

Assessing the dimensionality and educational impacts of integrated ICT literacy in the higher education context

Tefera Tadesse, Jimma University, Jimma, Ethiopia
Robyn M. Gillies, The University of Queensland, Brisbane, Australia
Chris Campbell, Griffith University, Brisbane, Australia

The purpose of this paper is threefold: first, to introduce a conceptual model for assessing undergraduate students' integrated information and communication technology (ICT) literacy capacity that involves 12 items generated from the modified version of the Australasian Survey of Student Engagement (AUSSE) questionnaire (Coates, 2010); second, to illustrate the construct validity and internal consistency of the model as implemented in a sample of undergraduate students (n = 536) enrolled in two colleges within a large Ethiopian university; and third, to demonstrate the criterion validity of the model by examining the predictive validity of the identified ICT literacy factors on student learning outcomes. A multi-method approach is used, comprising correlation analysis, multiple regression analysis and structural equation modelling (SEM) techniques. The main finding is the support found for the 4-factor model consisting of ICT use, cognitive process, reading task and writing task. Results of the multi-method approach provide specific guidelines to higher education (HE) institutions using this approach to evaluate ICT literacy capacity and the resultant learning outcomes among their undergraduate students. The paper provides a conceptual model and supporting tools that other HE institutions can use to assist in the evaluation of students' ICT literacy capacities.

Introduction

The use of technology in everyday life is becoming increasingly pervasive, appearing in different settings and affecting many aspects of our social and economic lives.
Information and communication technology (ICT) has transformed people's daily activities (Keane, Keane, & Blicblau, 2016). Technologies are a driving force behind much of the development and innovation in both developed and developing countries, as the current knowledge economy and its functions depend heavily on ICT use (Deng & Tavares, 2015; Russell, Malfroy, Gosper, & McKenzie, 2014). Higher education (HE) institutions have adopted ICT as a means of enabling students to access the knowledge and skills required to meet the demands of the ever-changing global environment (Altbach, Reisberg, & Rumbley, 2009). ICT also adds value to the processes of learning and to the organisation and management of learning institutions (OECD, 2012), and its use in instruction is essential to the growth and development of teachers and students (Khan, Butt, & Baba, 2013).

For the past decade, HE researchers and experts in ICT have argued for an extended version of ICT literacy due to the advent of new ICT platforms (Leu et al., 2011). Rather than continuing with a traditional concept of ICT literacy, restricted to the technical understanding and use of software and hardware technologies, they argue that attempts should be made to understand the holistic nature of ICT literacy (Safar & AlKhezzi, 2013). Among the topics that have received particular attention is integrated ICT literacy and its relationship to student achievement in HE institutions (Irvin & Alexius Smith, 2007). Research suggests that it is important for researchers in HE to understand the inter-relationships between ICT use, learning and development, and learning conditions (Canchu & Louisa, 2009). However, little is known in the literature about students' ICT literacy capacity and its relationship to other forms of engagement as well as learning outcomes (Luu & Freeman, 2011), particularly in the context of developing countries (Alemu, 2015; Tibebu, Bandyopadhyay, & Negash, 2010).
The need for integrated ICT literacy measurements

In a report released in 2012, the OECD proposed that ICT literacy is a multidimensional construct consisting of three separate components: the technical use of software and hardware, engagement in cognitive processes, and the extent of literacy tasks via reading and writing of digital materials (Asiyai, 2014). However, most existing research treats the cognitive and technical components combined as if they were homogenous proxies for overall ICT literacy, suggesting it is one-dimensional in nature (Oliver & Goerke, 2008; Slechtova, 2015). Although there is a persistent framing of ICT as tools (Asiyai, 2014), this may be somewhat spurious, as it reduces the broader meaning of ICT, which includes digital hardware, software and infrastructure (Wilson, Scalise, & Gochyyev, 2015). Numerous studies (e.g., Wilson et al., 2015) have highlighted the importance of ICT literacy in the 21st century and ask how it can be systematically employed for students' effective participation in society. However, empirical measures for assessing students' ICT literacy capacity remain insufficient; even recent attempts have dealt with secondary students instead of tertiary ones (Lau & Yuen, 2014). Moreover, when studying students' ICT literacy capacities and their impact, the common focus is often on measuring linear relationships rather than complex ones (Luu & Freeman, 2011). If access and technology skills are indeed only parts of the digital divide (Keane et al., 2016), it is quite logical to look for more data to help us understand the digital divide in terms of its broader conception of ICT literacy and effective performance (Wilson et al., 2015).
One possible way of providing such evidence may be through examining students' learning experiences in integrating ICT literacy during the undergraduate years and assessing the extent to which progress in learning is predicted by a set of ICT-related capacities (Zylka, Christoph, Kroehne, Hartig, & Goldhammer, 2015). This helps to determine the effects of the different capacities as well as other related variables such as demographics, and their collective effects on learning and academic achievement (Luu & Freeman, 2011).

Over the years, several efforts have been underway to develop methodologies for measuring the educational uses of ICT that surpass the narrower definition restricted to the utilisation of hardware and software. Although the conceptual understanding of a broader meaning of ICT extends to embody other components (Keane et al., 2016), less is known about how to assess the dimensionality of integrated ICT literacy and measure its resultant effects using a structural equation modelling (SEM) approach (Lau & Yuen, 2014). It is true that "simplistic, negative correlations between numbers of classroom computers and standardized literacy and numeracy test results provide headlines for the media but do little to illuminate the full impact of ICT on teaching and learning" (Watson, Finger, & Proctor, 2004, p. 67).

This study provides a comprehensive examination of the dimensionality and educational effects of an integrated ICT literacy construct. It does so by testing the convergent and discriminant validity of measures of the proposed component factors and the overall factor structure. Specifically, we ask whether a 1-factor (integrated ICT literacy), 2-factor (ICT use and academic challenge), 3-factor (ICT use, cognitive process and literacy task) or 4-factor (ICT use, cognitive process, reading task, and writing task) structure best fits the data. Additionally, the study examines the impact of ICT on various learning measures.
Aims of the study

This study seeks to address the following key aims:
(1) To introduce a conceptual model for assessing undergraduate student ICT literacy capacity that involves 12 items generated from the modified version of the Australasian Survey of Student Engagement (AUSSE) questionnaire (Coates, 2010)
(2) To illustrate the factorial validity and internal consistency of the model as implemented in a sample of undergraduate students (n = 536) enrolled in two colleges within a large Ethiopian university
(3) To demonstrate the criterion validity of the model by examining the predictive validity of the identified ICT literacy factors on student learning outcomes.

Conceptual model of the study

ICT literacy is defined as a set of transferable capacities related to ICT use (e.g., Bogel, 2007, p. 72; Keane et al., 2016, p. 769; Lonsdale & McCurry, 2004, p. 5). In this definition, the word ICT distinguishes ICT-related capacities from a broader set of capacities, such as generic skills (Ahmad, Karim, Din, & Albakri, 2013) or 21st century literacies (Keane et al., 2016). The word transferable points out that ICT literacy is a universal or generic tool, which could be applied for a variety of purposes in academic study or the workplace (Wilson et al., 2015). The word set indicates that ICT literacy itself is not homogenous, as it is made up of various component capacities (Deng & Tavares, 2015). The term capacity includes not only knowledge and instrumental abilities, but also personal and interpersonal attributes, as well as capabilities to apply them in specific contexts (Lonsdale & McCurry, 2004; Zylka et al., 2015). This paper reports on aspects of students' engagement during their undergraduate years in Ethiopia with a particular focus on ICT literacy capacities.
ICT literacy is conceptualised as an overarching term encompassing three perspectives: a basic ICT skills perspective, a cognitive capabilities perspective and a literacy perspective representing applications of reading and writing (Lonsdale & McCurry, 2004). The conceptual model of this study is presented in Figure 1.

Figure 1. Conceptual model of the study

As shown in Figure 1, the conceptual model is composed of two major parts: ICT literacy and learning outcomes. ICT literacy consists of three elements: ICT use, cognitive processes and literacy tasks. For learning outcomes, the model describes self-reported gains and explores their causal relationships with students' ICT literacy capacity from multiple perspectives, including self-reported gains in general education, personal and social development and higher-order thinking.

Methodology

Participants

Participants were volunteers recruited from the student population in the College of Natural Sciences (CNS) and College of Social Sciences and Law (CSSL) at a large public university in Ethiopia. Table 1 presents a summary of the participant characteristics as a percentage of the sample across the colleges.

Table 1
Individual and entry characteristics of participants across colleges (n = 536)

Characteristic    College of Natural Sciences   College of Social Sciences and Law
                  206 (38.4%)                   330 (61.6%)
Gender
  Women           37 (18%)                      70 (21%)
  Men             165 (82%)                     260 (81%)
Classification
  Year II         111 (54%)                     115 (35%)
  Year III        95 (46%)                      176 (53%)
  Years IV & V    0                             39 (12%)
Age               M = 21.33 (SD 1.35)           M = 21.51 (SD 1.35)
CGPA1             M = 2.90 (SD 0.46)            M = 3.05 (SD 0.47)

Note. 1Cumulative grade point average

Participants in the study were 536 (nCNS = 206 and nCSSL = 330) undergraduate students. The sample's gender composition (nfemale = 107 and nmale = 429) was male dominated. The mean age of students in the two colleges (MCNS = 21.33 and MCSSL = 21.51) was similar.
In terms of CGPA, the mean score values (MCNS = 2.90 and MCSSL = 3.05) differ slightly between the two colleges studied.

Data sources

The dataset for the present study was extracted from a student engagement survey designed to assess students' learning experiences. This student engagement survey was a modified version of the AUSSE (Coates, 2010). The survey is a self-report measure, which is the most direct method of assessing participants' thoughts and feelings about their lived experiences. The survey correlates with non-self-report measures and appears to be a valid tool for measuring university students' lived experiences (Gonyea & Miller, 2011). Although the student engagement survey consisted of broad indicators of students' experience, this study used only those items measuring aspects of students' enriching learning experiences, academic challenges and reading and writing. The ICT use items began with, "In your experience at your college during the current academic year, about how frequently have you completed each of the following?" and were scaled from 1 (never) to 4 (very often). Cognitive process items began with, "During the current academic year, to what extent did your coursework emphasise the following intellectual activities?" and again, students were asked to respond on a scale of 1 (very little) to 4 (very much). Reading and writing items began with, "During the current academic year, about how much reading and writing have you undertaken for each of the following categories?" and were scaled from 1 (never) to 4 (very often). The student engagement survey was distributed to 621 students from both colleges, with 596 students responding to it.
After excluding surveys that omitted socio-demographic background information, 536 surveys were included in the final analysis, giving a response rate of approximately 86%.

Data analytic approach

A multi-method approach was used, comprising correlation, reliability, multiple regression and factor analyses. Three scales reported in the student engagement dataset were constructed from individual items. The ICT-related measures were used as proxies for ICT use, the academic challenge measures served as proxies for cognitive processes, and the reading and writing measures served as proxies for literacy tasks. Pearson's correlation coefficient was used to compute associations at both the factor and item levels. Reliability was measured using Cronbach's alpha at both item and structure levels. All analyses were conducted using the Stata version 12 data analysis and statistical software package (Cleves, 2008). Confirmatory factor analysis (CFA) was computed using SEM with maximum likelihood (ML) iteration procedures to provide further construct validity evidence and to refine the factor structure when necessary (Jöreskog, 1969). CFA was used to determine the fit and the number of factors to retain from the studied samples. CFA models define the relationships between items and factors (Snook & Gorsuch, 1989). The data analysis was conducted at two levels – item and structure – to examine whether the proposed factor structure held a minimal number of measures with sound psychometric properties.

Results

This study assessed the convergent and discriminant validity, factorial validity and criterion validity of the integrated ICT literacy construct utilising correlational analysis, scale reliabilities, CFA and regression analysis. Item-level analyses included the 12 items of the integrated ICT literacy scale.
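The reliability analyses described in the data analytic approach (item-test correlations and Cronbach's alpha) can be reproduced from a raw item matrix. The sketch below is a minimal Python/NumPy illustration, not the Stata procedure the authors used; the simulated 536 × 12 response matrix is a hypothetical stand-in for the survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_test_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total scale score."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical data: 536 respondents, 12 items on a 1-4 scale, sharing a
# common signal so that the items hang together as one scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(536, 1))
sim = np.clip(base + rng.integers(-1, 2, size=(536, 12)), 1, 4)

alpha = cronbach_alpha(sim)
r_it = item_test_correlations(sim)
```

An all-positive `r_it` vector corresponds to the all-plus "Sign" column reported later in Table 2, and `alpha` plays the role of the scale-level reliability coefficient.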
Self-reported gain items and scales were used for the regression analysis, but this study addresses those measures in the scale-level analyses only. The causal relationships of the integrated ICT literacy model were examined across the measures of self-reported gains. In the subsequent sections, the major findings of the study are presented, followed by the analytic framework.

Item-level statistics

First, we assessed the descriptive statistics, inter-item correlations and scale alpha coefficients for the 12-item ICT literacy scale. Table 2 contains the 12 items with their descriptive statistics, inter-item correlations and scale alpha coefficients.

Table 2
Items with descriptive statistics, correlations, and the reliability coefficient for the proposed integrated ICT literacy framework

Item                                                              n    M    SD    Sign  Item-test  Inter-item  Alpha (α)
Used computer and information technology for
learning-related purposes. (ictuse1a)                             536  2.5  0.97  +     .57        .22         .76
Used an electronic medium (Internet) to discuss or
complete an assignment. (ictuse2)                                 534  2.8  0.95  +     .57        .22         .76
Understanding facts, ideas or methods from your
subjects and readings. (cpro1b)                                   536  3.1  0.74  +     .57        .22         .76
Analysing the basic elements of an idea, experience
or theory. (cpro2)                                                536  3.1  0.73  +     .59        .22         .76
Synthesising and organising ideas, information or
experiences into new, more complex interpretations
and relationships. (cpro3)                                        536  2.8  0.79  +     .62        .22         .75
Making judgments about the value of information,
arguments or methods. (cpro4)                                     535  2.8  0.78  +     .59        .22         .76
Applying theories or concepts to practical problems
or in new situations. (cpro5)                                     536  2.8  0.91  +     .60        .22         .76
Number of readings on assigned textbooks or part of
subject readings. (rt1c)                                          532  2.9  0.81  +     .44        .24         .78
Number of books read on your own for personal
enjoyment or academic enrichment. (rt2)                           536  2.7  0.88  +     .48        .24         .77
Number of written assignments below a page
(< 500 words). (wt1d)                                             534  2.6  0.98  +     .48        .23         .77
Number of written assignments between 2 and 3 pages
(500–1000 words). (wt2)                                           534  2.8  0.94  +     .44        .24         .78
Number of written assignments more than 3 pages
(> 1000 words). (wt3)                                             534  3.1  0.93  +     .52        .23         .77
Overall scale mean                                                     2.83
Scale reliability                                                                                              .78

Note. aICT use; bcognitive process; creading task; dwriting task

As can be seen from Table 2, the mean score for each item ranged between 2.5 and 3.1 on a 4-point scale. The overall scale mean was 2.83. The inter-item correlations show the items to be statistically related, with only small differences across items. With regard to the item-level correlation analysis, the results show that the items used to measure ICT literacy were psychometrically sound. This is evident in the column labelled "Sign", which contains all plus signs, indicating that all items are positively correlated with the scale. The Cronbach's alpha coefficient across all items is sufficiently high, with an alpha value of .78, which indicates that the scale has high reliability (Nunnally & Bernstein, 1994).

CFA

This study employed a principal axis factoring extraction with an orthogonal varimax rotation (Kaiser off) on the 12 items in the sample (n = 536) to identify the factor loadings, factor structure and unique variance accounted for by each variable included in the scale. The Kaiser-Meyer-Olkin measure verified the sampling adequacy of the analysis (KMO = .77). Bartlett's test of sphericity, χ2(55) = 1352.49, p < .001, indicates that correlations between items were sufficiently large for factor extraction. Results reveal that four factors had an eigenvalue over Kaiser's criterion of 1 and together explained 61.99% of the variance. Given the large sample size, Kaiser's criterion and the convergence of the scree plot, the final analysis retained the following factors: ICT use, cognitive process, reading task and writing task.
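The Kaiser criterion step just described (retain factors whose eigenvalues exceed 1, then report the share of variance they explain) can be illustrated directly from a correlation matrix. The sketch below is a minimal NumPy example on a hypothetical block-structured correlation matrix mirroring the four item groups (2 ICT use, 5 cognitive process, 2 reading, 3 writing items); it is not the study's data.

```python
import numpy as np

def kaiser_retained(corr: np.ndarray) -> int:
    """Number of factors with eigenvalue > 1 (Kaiser's criterion)."""
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())

def variance_explained(corr: np.ndarray, n_keep: int) -> float:
    """Share of total variance carried by the n_keep largest eigenvalues."""
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals[:n_keep].sum() / eigvals.sum()

# Hypothetical 4-block structure: 12 items, within-block r = .5,
# between-block r = .1, mirroring the four ICT literacy sub-scales.
blocks = [2, 5, 2, 3]
corr = np.full((12, 12), 0.1)
start = 0
for size in blocks:
    corr[start:start + size, start:start + size] = 0.5
    start += size
np.fill_diagonal(corr, 1.0)
```

For this toy matrix, exactly four eigenvalues exceed 1, so `kaiser_retained(corr)` returns 4, matching the kind of evidence the scree plot and Kaiser's criterion provided in the study.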
Table 3 presents the summary of the factor analysis results.

Table 3
Rotated factor loadings (pattern matrix) and unique variances for the integrated ICT literacy scale (n = 536)

Variable    Cognitive process   ICT use   Writing task   Reading task   Uniqueness
ictuse1a         .13              .88         .12            .02           .19
ictuse2          .12              .90         .06            .06           .16
cpro1b           .65              .23        -.11            .21           .46
cpro2            .54              .32        -.10            .33           .48
cpro3            .69              .20         .07            .13           .46
cpro4            .77              .08         .18           -.04           .36
cpro5            .73              .07         .18            .07           .43
rt1c             .09              .01         .10            .81           .32
rt2              .10              .09         .17            .73           .42
wt1d             .12              .08         .67            .19           .50
wt2              .02              .06         .84            .07           .29
wt3              .16              .22         .66            .09           .49
Percentage of variance explained
                 20.15%           15.62%      14.33%         11.89%
Eigenvalues      2.42             1.87        1.72           1.43
Cronbach's alpha (α)
                 .77              .81         .63            .48

Note. aICT use; bcognitive process; creading task; dwriting task

A closer examination of the measured variables shows that each indicator variable had a factor loading on its own factor well above the recommended .40 level (Stevens, 2002). The unique variances, which represent the variance not explained by the common factors, are at low to moderate levels (.19–.50), indicating a minimal amount of variance that is not shared with the other variables; this provides supporting evidence of the good quality of the instrument. The ICT use items had the highest factor loadings, with values of .88 and .90. Each factor accounted for more than 10% of the variance explained by the construct, and the eigenvalues confirmed the substantial contribution of each factor to the construct. While the cognitive process scale represented the construct the most, the ICT use scale was the most reliable measure (α = .81) (see Table 3). In terms of reliability, the writing task scale is marginally reliable and the reading task scale is poor.

In order to assess model adequacy and estimate relations among the domains, we performed CFA using SEM. Figure 2 illustrates the 4-factor ICT literacy model developed based on earlier theories of ICT literacy in HE.
Rectangles represent observed variables. Ovals represent latent variables. The ε's are residual terms that denote measurement errors. A single-headed arrow drawn from an oval to a rectangle represents the path connecting the latent factor and the measured variable. A double-headed arrow drawn between two ovals represents a correlation.

Figure 2. A 4-factor ICT literacy measurement model

It is clear from Figure 2 that the factor loading of each item used in the scale improved from the 1-factor model to the 4-factor model. For example, in the 1-factor model the factor loadings range between .26 and .50, whereas in the 4-factor model they range from .55 to .84. Moreover, the correlations between the factors in the 4-factor model are low to moderate. These correlations, together with the moderate to high factor loadings, provide evidence of the discriminant validity of the scale.

Next, we examined the data at the scale level to validate the individual-level results, given that aggregate data have several potential psychometric advantages. For this purpose, equation-level tests and practical fit indexes were used to assess the goodness of fit of the different models. The summary of the results of these tests is presented in Table 4.

Table 4
Values of fit statistic tests across different factor models for the integrated ICT literacy scale (n = 536)

Model             χ2      df5  χ2/df  TLI  CFI  RMSEA  SRMR  CD   AIC    Δχ2   p     Δdf
1-factor model1   571.75  54   10.59  .56  .64  .135   .087  .81  15197        .000
2-factor model2   338.43  53   6.39   .75  .80  .101   .072  .95  14965  233   .000  1
3-factor model3   192.85  51   3.78   .87  .90  .073   .053  .98  14990  146   .000  2
4-factor model4   140.94  48   2.94   .91  .94  .061   .042  .99  14778  52    .000  3

Note. 1Integrated ICT literacy including all variables used in the scale. 2ICT use and academic challenge.
3ICT use, cognitive process and literacy tasks. 4ICT use, cognitive process, reading task and writing task. 5Degrees of freedom. Good model fit is indicated by CFI and TLI values of at least .90; RMSEA and SRMR should be below .06 and .08, respectively.

As with the item-level data, we re-examined the model at the scale level with a 1-, 2-, 3- and 4-factor structure, and the model fit improved significantly across measures (see Table 4). It is clear from the results that the 4-factor model has a relative chi-square (χ2/df) of 2.94, which indicates the adequacy of the model. Further, the Tucker-Lewis index (TLI) and the comparative fit index (CFI), with values of .91 and .94 respectively, provide additional supporting evidence for the adequacy of the model. Moreover, the root mean square error of approximation (RMSEA) of .061 and standardised root mean squared residual (SRMR) of .042 indicate a good fit.

The study also assessed the correlations among the factors in the integrated ICT literacy and self-reported gains scales. Table 5 presents the summary of the scores.

Table 5
Summary of inter-correlations, means and standard deviations of scores of integrated ICT literacy sub-scales as functions of students' self-reported gains measures

Measure               General education   Personal and social development   Higher-order thinking   M    SD
1. ICT use            .42                 .38                               .52                     2.6  .96
2. Cognitive process  .53                 .54                               .56                     2.9  .79
3. Reading task       .43                 .44                               .44                     2.8  .85
4. Writing task       .32                 .33                               .35                     2.8  .95
M                     3.1                 3.3                               2.8
SD                    .79                 .74                               .87

Note. Means and standard deviations for the integrated ICT literacy sub-scales are presented in the rightmost columns, and means and standard deviations for the self-reported gains sub-scales are presented in the bottom rows. For all scales, higher scores are indicative of more extreme responding in the direction of the construct assessed. All correlations are significant at the p < .001 level.
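As a cross-check on Table 4, the relative chi-square and RMSEA are fully determined by the reported chi-square statistics, degrees of freedom and sample size. The sketch below uses the standard formulas; small discrepancies from the published values (e.g., .060 vs. the reported .061) reflect rounding and estimator details, and the χ2(3) critical value quoted in the comment is the conventional p = .001 threshold.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from a model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

n = 536  # sample size reported in the study

# Chi-square and degrees of freedom as reported in Table 4.
models = {"1-factor": (571.75, 54), "4-factor": (140.94, 48)}

# For each model: (relative chi-square, RMSEA).
fit = {name: (chi2 / df, rmsea(chi2, df, n)) for name, (chi2, df) in models.items()}

# Chi-square difference test between the nested 3- and 4-factor models.
d_chi2 = 192.85 - 140.94   # = 51.91, reported as 52 in Table 4
d_df = 51 - 48             # = 3
significant = d_chi2 > 16.27  # chi2 critical value at p = .001 for df = 3
```

Here `fit["4-factor"]` recovers the reported χ2/df of 2.94 and an RMSEA of about .06, and the Δχ2 of roughly 52 on 3 degrees of freedom far exceeds the .001 critical value, matching the nested-model comparisons in Table 4.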
The mean scores for the integrated ICT literacy sub-scales range between 2.6 and 2.9, whereas the mean scores for the self-reported gains measures range between 2.8 and 3.3 on a 4-point scale. Variation in the correlations of the four integrated ICT literacy measures was minimal, indicating a stable pattern with slight fluctuations across measures. Overall, the integrated ICT literacy sub-scales related positively to the three outcomes.

To measure student learning outcomes, participants also responded to the 13-item student learning gains scale. This scale consists of three sub-scales – composite scores of self-reported gains in general education, personal and social development and higher-order thinking – with each sub-scale comprising 3–6 items. Participants responded to all gains items using a 4-point Likert rating scale ranging from 1 (very little) to 4 (very much) (see Table 6). Coates (2010) provided strong reliability and validity evidence for the gains scale. In this paper, we present the learning gains scale as a tool for testing the criterion validity of the ICT literacy capacities.

Table 6
The 13-item learning gains scale

Gains in personal and social development
1. Developing a personal code of values and ethics
2. Understanding people of other ethnic backgrounds
3. Improving the welfare of the community with which you are in contact
4. Learning effectively on your own
5. Working effectively with others

Gains in general education
6. Writing clearly and effectively
7. Speaking clearly and effectively
8. Thinking critically and analytically
9. Acquiring broad general education

Gains in practical competence
10. Acquiring job or work-related knowledge and skills
11. Using computing and information technology
12. Analysing quantitative problems
13. Solving complex real-world problems

Regression models

Two-step hierarchical regressions were used, including three controlling variables along with the 4-factor ICT literacy scale as predictors.
The two-step hierarchical regressions were employed to evaluate the effects of the controlling variables and the integrated ICT literacy variables in predicting students' self-reported gains. The first step consisted of the controlling variables – college, class year and CGPA – predicting students' self-reported gains as measured by general education, personal and social development and higher-order thinking. In the second step, the 4-factor integrated ICT literacy domains were added to the controlling variables to predict the self-reported gain outcomes across the three regression equations. The second step reveals the proportion of variation in the general education, personal and social development and higher-order thinking outcomes explained by integrated ICT literacy over and above that explained by the controlling variables. Table 7 presents a summary of the hierarchical regression analysis for variables predicting general education, personal and social development and higher-order thinking.
Table 7
Two-step hierarchical multiple regression models predicting general education, personal and social development and higher-order thinking outcomes (n = 513)

Model 1: General education
                      Step 1                     Step 2
Predictor             B    SE1  t     β          B    SE   t     β
College               .12  .04  3.12  .14**      .01  .03  0.44  .02
Class year            .11  .04  2.92  .13**      .06  .03  1.68  .06
CGPA                  .18  .04  4.50  .20***     .09  .03  2.90  .11**
ICT use                                          .10  .03  3.93  .18***
Cognitive process                                .33  .06  5.80  .32***
Reading task                                     .17  .07  2.47  .14*
Writing task                                     .02  .05  0.16  .02
R2                    .05                        .35
F for change in R2    9.82***                    38.13***

Model 2: Personal and social development
                      Step 1                     Step 2
Predictor             B    SE   t     β          B    SE   t     β
College               .13  .04  3.13  .14**      .03  .04  0.69  .03
Class year            .12  .04  2.94  .15**      .06  .04  1.76  .07
CGPA                  .20  .04  4.73  .21***     .12  .04  3.19  .12**
ICT use                                          .08  .03  2.71  .13**
Cognitive process                                .37  .06  6.01  .33***
Reading task                                     .21  .08  2.66  .16**
Writing task                                     .03  .05  0.52  .04
R2                    .06                        .34
F for change in R2    10.48***                   37.38***

Model 3: Higher-order thinking
                      Step 1                     Step 2
Predictor             B    SE   t     β          B    SE   t     β
College               .23  .04  5.42  .24***     .09  .03  2.64  .10**
Class year            .16  .04  3.83  .17***     .09  .03  2.61  .09**
CGPA                  .15  .04  3.48  .15**      .06  .03  1.66  .05
ICT use                                          .17  .03  6.47  .28***
Cognitive process                                .33  .06  5.73  .30***
Reading task                                     .18  .07  2.37  .13*
Writing task                                     .02  .05  0.33  .02
R2                    .08                        .42
F for change in R2    13.90***                   52.61***

Note. 1Standard error. Significance levels: * p < .05, ** p < .01, *** p < .001

As shown in Table 7, in the first step the controlling variables statistically predicted students' gains in general education, personal and social development and higher-order thinking when entered first into the regression models (Step 1: Model 1 R2 = .05, F[3, 509] = 9.82, p < .001; Model 2 R2 = .06, F[3, 509] = 10.48, p < .001; and Model 3 R2 = .08, F[3, 509] = 13.90, p < .001). When the integrated ICT literacy variables were added to the regression models, they brought significant changes in predictions across the three models.
Step 2: Model 1 R2 = .35, ∆R2 = .30, F change [7, 505] = 38.13, p < .001; Model 2 R2 = .34, ∆R2 = .28, F change [7, 505] = 37.38, p < .001; and Model 3 R2 = .42, ∆R2 = .34, F change [7, 505] = 52.61, p < .001. These results show that the inclusion of the integrated ICT literacy variables substantially improved the prediction of the three measured outcomes, with the strongest prediction for higher-order thinking.

It is clear from Table 7 that in step 2 it was the integrated ICT literacy variables, rather than the control variables, that contributed most to the predictions of the measured outcomes. In model 1, the ICT use domain (β = .18, t[505] = 3.93, p < .001), the cognitive process domain (β = .32, t[505] = 5.80, p < .001) and the reading task domain (β = .14, t[505] = 2.47, p = .014) contributed to the model. Similarly, in the second model, the ICT use domain (β = .13, t[505] = 2.71, p = .007), the cognitive process domain (β = .33, t[505] = 6.01, p < .001) and the reading task domain (β = .16, t[505] = 2.66, p = .008) contributed to the model. Also, in the final regression model, the ICT use domain (β = .28, t[505] = 6.47, p < .001), the cognitive process domain (β = .30, t[505] = 5.73, p < .001) and the reading task domain (β = .13, t[505] = 2.37, p = .018) contributed to the model. In the second step, CGPA contributed to predictions of general education and personal and social development, while college and class year contributed to the prediction of higher-order thinking. The results of the multivariate analyses show that the variation in students' self-reported gains can be attributed to the four integrated ICT literacy variables, over and above the controlling variables, where .34 ≤ R2 ≤ .42.
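The incremental-variance logic behind a two-step hierarchical regression can be written directly from the R² values. The sketch below uses Model 1's rounded figures (n = 513, three controls at Step 1, seven predictors at Step 2) as an illustration; note that the increment F it produces differs from an overall Step 2 model F, and which of the two a statistics package reports varies, so neither value is expected to match Table 7 exactly.

```python
def f_change(r2_full: float, r2_reduced: float, n: int,
             k_full: int, k_reduced: int) -> float:
    """F statistic for the increment in R2 when predictors are added."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1.0 - r2_full) / (n - k_full - 1)
    return num / den

def overall_f(r2: float, n: int, k: int) -> float:
    """Overall model F for a regression with k predictors."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

# Model 1 (general education): controls only vs. controls + 4 ICT factors.
f_inc = f_change(r2_full=0.35, r2_reduced=0.05, n=513, k_full=7, k_reduced=3)
f_all = overall_f(0.35, n=513, k=7)
```

With these rounded inputs the increment F is larger than the overall model F, which is why the change from Step 1 to Step 2 (∆R² = .30 here) is the quantity that speaks to the added value of the ICT literacy variables.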
In summary, the models suggest that students who report higher frequencies of integrated ICT literacy experiences also tend to report higher levels of self-reported gains in general education, personal and social development and higher-order thinking. They also suggest that the writing task has no observable effect on students' self-reported gains. It was also found that the relationships between students' major areas (colleges) and their gains in general education and personal and social development, as well as the relationship between CGPA and gains in higher-order thinking, may be spurious: these apparent relationships disappear when the integrated ICT literacy sub-scales are taken into account.

Discussion

HE institutions around the world are investing considerable amounts of money to create ICT resources that meet their students' and teachers' instructional needs (Ansyari, 2015; John, 2015; Watty, McKay, & Ngo, 2016). This is because ICT is now integral to student learning experiences in most HE institutions. ICT literacy has become recognised as the critical literacy of the 21st century, a term used to describe various sets of ICT-related capacities (Zylka et al., 2015). In the scholarly literature, these capacities embody six perspectives: fundamental ICT knowledge, basic ICT skills, cognitive capabilities, inter-literacy, situated literacy and metacognitive capabilities (Lonsdale & McCurry, 2004). Although we are cognisant of this, the model proposed in this study is intended to provide a factor structure for the ICT literacy items that organises them into their underlying perspectives based on the student engagement data set. On investigating the statistics and fit indices, a 4-factor solution was shown to be adequate for the ICT literacy scale.
Establishment of the baseline model begins with specification and testing of the hypothesised model (i.e., the postulated structure of the measurement instrument under study) for the observed variables. Although this model has adequate theoretical and empirical support, it is, in a sense, exploratory. As such, this baseline model specifies the number of subscales (i.e., factors), the location of the items (i.e., the pattern by which the items load onto each factor) and the postulated correlations among the subscales (i.e., the existence of factor correlations). The validity of this baseline model is tested through post-estimation testing procedures. Ideally, this model should fit well and therefore represent the best fit to the data from the perspectives of both parsimony and substantive meaningfulness. The factor loadings are the correlation coefficients between the measured variables and the latent factors; thus, they describe the common factors (Bollen, 2002). The factor loadings for the measured variables of the 4-factor model range from .55 to .84, demonstrating moderate to high item–factor correlations. Furthermore, the absence of excessively high or negative correlations among the four latent factors is another indication of the sound psychometric properties of the scale (Kline, 1998). The positive relationships found between the proposed ICT literacy model and the self-reported gains highlight the intimate relationship between perceptions of an ICT literacy experience at university and perceived attainment of gains (Coates, 2006; Kuh, 2008). The results reported in the current study are consistent with the literature in this field, confirming the significantly greater effects of the ICT-related variables after adjusting for demographic variables (Khan et al., 2013; Luu & Freeman, 2011; Porchea, Allen, Robbins, & Phelps, 2010).
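The idea that a standardised loading behaves like an item–factor correlation can be illustrated without a full CFA fit. A minimal sketch on synthetic data (all values are hypothetical; the item–total correlation used here is only a rough proxy for a maximum-likelihood CFA loading, and is slightly inflated because each item contributes to its own total):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 536  # matches the study's sample size; the data are simulated

# Four items driven by one latent factor, mimicking a single subscale
# of the 4-factor model, with hypothetical true loadings in the .55-.8
# range reported for the scale.
latent = rng.normal(size=n)
loadings_true = np.array([0.8, 0.7, 0.6, 0.55])
items = (latent[:, None] * loadings_true
         + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings_true**2))

# Proxy for standardized loadings: correlation of each item with the
# subscale total (a CFA estimates these jointly via maximum likelihood).
total = items.sum(axis=1)
proxy_loadings = np.array(
    [np.corrcoef(items[:, j], total)[0, 1] for j in range(4)])
print(proxy_loadings.round(2))
```

With moderate true loadings the proxies come out moderate to high, mirroring the .55–.84 range discussed above, though the exact values depend on the simulated draw.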
Further, the positive associations found between the ICT literacy sub-scales and the self-reported gains in the current study offer predictive validity and additional convergent validity evidence in support of the scale. However, this initial validation provides evidence only for selected variables and colleges; further study involving a wider set of variables and more colleges or institutions is needed to support generalisations in a broader sense. The results of this study support earlier research on the positive relationships and differential effects of students’ experience of integrated ICT literacy (Zylka et al., 2015). It is clear from Table 6 that students’ ICT use, cognitive processes and their accomplishment of the reading task were positively related to student outcomes, but with varying explanatory power ranging between β = .13 and .33. Effect size analysis revealed a substantial change in the outcome variables attributable to the integrated ICT literacy variables, except for the writing task variable. There are several credible explanations for why the writing task did not significantly predict the three outcome measures. One possibility is that the writing task may be replicated from year to year, so that students depend on earlier assignment reports rather than investing time and energy in completing the writing assignments; alternatively, students may complete most writing tasks in groups, so that more able students take charge of completing the task and individual efforts are masked (Imran & Nordin, 2013). The notion of integrated ICT literacy could have significant implications for the field, which are still largely unexplored.
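The pattern just described, where three sub-scales carry unique predictive weight while the writing task does not, can be illustrated with a small simulation. Everything here is synthetic and hypothetical; it simply shows how entering all four z-scored sub-scales simultaneously recovers their distinct standardised betas:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 536  # sample size as in the study; the data themselves are simulated

# Hypothetical sub-scale scores (ICT use, cognitive process, reading
# task, writing task) and an outcome constructed so that the writing
# task carries no unique weight, echoing the reported pattern.
X = rng.normal(size=(n, 4))
true_beta = np.array([0.28, 0.30, 0.13, 0.0])
y = X @ true_beta + rng.normal(size=n) * 0.8

# Standardized betas via least squares on z-scored variables.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print(beta.round(2))
```

The recovered coefficients for the first three predictors fall near their true values while the fourth hovers around zero, which is the multivariate signature of a non-contributing sub-scale.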
The analyses suggest that cognitive and technical measures of ICT literacy are not equivalent, and that measuring one without the other may ignore significant interactions between the two, potentially distorting the results. Furthermore, other constructs (e.g., self-reported gains) tend to relate differently to each dimension, such that important relationships might be overlooked if ICT use, cognitive process, reading task and writing task are not considered simultaneously. This has implications for theory as well as measurement, because it means that integrated ICT literacy is a multidimensional construct far more complex than previously suggested by cognitive or technical measures alone (Bogel, 2007; Wilson et al., 2015). In summary, although we support the call for greater focus on ICT literacy capabilities, such conceptual modelling can only be properly undertaken once an appropriate measurement strategy has been developed, including the four-dimensional measures identified in this study.

Limitations and directions for future research

Further research will be necessary to determine whether the findings of the present study generalise to larger samples of students and to other, non-self-reported measures of ICT literacy. Using self-reported data in the present study allowed testing of the predictive validity of the proposed construct against pre-established outcome measures; the benefits of this choice outweigh its potential disadvantages. Based on the results, certain scales may prove more useful than others in future research. Among the four sub-scales, the ICT use domain demonstrated the highest factor loadings. However, the number of items included in the scale needs to be expanded, and further validity and reliability tests need to be undertaken. With respect to reading and writing, other similar indexes are very much needed, as these scales displayed relatively weaker psychometric properties.
We framed the integrated ICT literacy scale specifically for use with undergraduate student participants. As a result, we do not advocate its use with other student populations or in other HE contexts; we suggest researchers employ questionnaires designed or adapted specifically for those contexts (Lau & Yuen, 2014; Lonsdale & McCurry, 2004). The notion of integrated ICT literacy is both new and somewhat contentious, so it requires extensive research to assess its reliability, validity and potential contribution to the larger literature on ICT in HE. The first new finding that requires extensive future research is the definitional claim that integrated ICT literacy is better conceptualised as a 4-component construct than as a 3-component construct. Based on the findings of this study, future research needs to provide further replications of the model in diverse educational settings beyond public universities and test the model hypotheses using rigorous statistical methods, such as testing for configural, measurement and structural invariance of the scale across demographic variables such as gender and socioeconomic status.

Conclusions

This study offers a conceptual model for developing the tasks used to measure the ICT literacy capacities of undergraduate students in an HE context. Results demonstrate that a 4-factor model adequately represents the underlying construct, satisfying the various model goodness-of-fit statistics and practical indexes. The observed moderate to high factor loadings of the variables within each factor and the low to moderate correlations between the factors provide supporting evidence of convergent validity and discriminant validity, respectively. Moreover, an assessment of the predictive validity of these factors against the self-reported gains measures demonstrated significant positive predictive associations of varying strength.
This provides further supporting evidence of criterion validity and discriminant validity. Additionally, these analyses establish constructs that can be used for research in ICT literacy with Ethiopian HE undergraduate students. In conclusion, students’ integrated ICT literacy capacity can be better explained by a 4-factor model, and these factors, as evidenced in this paper, can potentially impact students’ outcomes. It is hoped that the evidence presented in this article will be relevant to educational administrators, teachers and instructional designers who aim to improve the quality of student experiences in undergraduate programs (Alemu, 2015). The analyses generally confirm the practicability of the theoretical construct for framing the integrated ICT literacy capacities of undergraduate students and the hypothesised effects of integrated ICT literacy on student learning gains. Overall, it is clear that improved capacity in integrated ICT literacy could also impact students’ learning and development.

Implications for further research, theory and practice in undergraduate education

The integrated ICT literacy measurement scale has been established from a sound theoretical basis in the international literature relating to ICT literacy and the student engagement scale. This scale enables the measurement of students’ ICT literacy capacities. Although a systematic study of the 12 items, as applied in this study, demonstrated a degree of psychometric quality, further trialling of the instrument with larger samples and using a CFA approach is highly recommended to provide additional robustness that could help to refine the factor structure.
Also, the proposed scale should be supplemented with other, qualitative methodologies, such as interviews, sampling of students’ practical work and classroom observations, to cope with the ambiguities that might result from the self-report nature of the scale. Students should acquire a certain level of ICT literacy capacity during their university years (Deng & Tavares, 2015), and teachers should be able to assess students’ ICT literacy capacities empirically and to integrate ICT into teaching and learning (Ansyari, 2015). Teaching and learning in undergraduate degrees have continued to focus on coverage of content, superficial learning and a feeling of dependence on the teacher. However, the relationship between ICT literacy capacities and students’ learning outcomes is extremely important for understanding the causes of student learning outcomes. Many students who were involved in this study experienced integrated ICT literacy capacities that positively influenced their learning during their university years. It would be valuable for teachers working with undergraduate students to assess the nature of the integrated ICT literacy capacities included in their undergraduate courses, along with the course content, in order to help students obtain greater learning gains. Instructional designers may also need to consider how instructional practices comprising integrated ICT literacy capacity affect student learning outcomes. Findings suggest that undergraduate students should acquire an enhanced level of ICT literacy capacity. For this to be realised, student empowerment, staff capacity building and an adequate ICT infrastructure are very much needed (Ansyari, 2015). Results of the multi-method approach provide specific guidelines to HE institutions using this approach to evaluate ICT literacy capacity and the resultant learning outcomes among their undergraduate students.
The paper provides a conceptual model and supporting tools that can be used by other HE institutions to assist in the evaluation of students’ ICT literacy capacities. This study expands our current understanding of the digital divide by examining the nature and resultant impacts of integrated ICT literacy capacities among undergraduate students in Ethiopia. In the last two decades, researchers have gradually refined the conceptualisation of the digital divide, moving from a dichotomous model based mainly on access to a multidimensional model accounting for differences in usage levels and actors’ perspectives (Bogel, 2007; Wilson et al., 2015). From the perspective of the digital divide, the primary focus is on groups of users and user characteristics rather than on different processes of use (Murray & Pérez, 2014). Although ICT literacy is an important factor in digital divide research, and studies examine user characteristics with respect to ICT literacy, few make the process of ICT literacy knowledge and skills acquisition their focal point (Keane et al., 2016). The findings of this study further our thinking by expanding the notion of user characteristics beyond demographic and socioeconomic differences to differences in the processes leading to integrated ICT literacy capacities (Safar & AlKhezzi, 2013).

References

Ahmad, M., Karim, A. A., Din, R., & Albakri, I. S. M. A. (2013). Assessing ICT competencies among postgraduate students based on the 21st century ICT competency model. Asian Social Science, 9(16), 32–39. doi:10.5539/ass.v9n16p32 Alemu, B. M. (2015). Integrating ICT into teaching-learning practices: Promise, challenges and future directions of higher educational institutes. Universal Journal of Educational Research, 3(3), 170–189. Retrieved from http://www.hrpub.org/journals/article_info.php?aid=2389 Altbach, P. G., Reisberg, L., & Rumbley, L.
E. (2009). Trends in global higher education: Tracking an academic revolution. Paris: United Nations Educational, Scientific and Cultural Organization. Retrieved from http://unesdoc.unesco.org/images/0018/001831/183168e.pdf Ansyari, M. F. (2015). Designing and evaluating a professional development programme for basic technology integration in English as a foreign language (EFL) classrooms. Australasian Journal of Educational Technology, 31(6), 699–712. doi:10.14742/ajet.1675 Asiyai, R. (2014). Assessment of information and communication technology integration in teaching and learning in institutions of higher learning. International Education Studies, 7(2), 25–36. doi:10.5539/ies.v7n2p25 Bogel, G. (2007). Dimensions of ICT literacy. Knowledge Quest: Journal of the American Association of School Librarians, 35(5), 72–73. Bollen, K. A. (2002). Latent variables in psychology and the social sciences. Annual Review of Psychology, 53(1), 605–634. doi:10.1146/annurev.psych.53.100901.135239 Canchu, L., & Louisa, H. (2009). Subcultures and use of communication information technology in higher education institutions. The Journal of Higher Education, 80(5), 564–590. doi:10.1353/jhe.0.0064 Cleves, M. A. (2008). An introduction to survival analysis using Stata. College Station, TX: Stata Press. Coates, H. (2006). Student engagement in campus-based and online education: University connections. New York, NY: Routledge. Coates, H. (2010). Development of the Australasian survey of student engagement (AUSSE). Higher Education, 60(1), 1–17. doi:10.1007/s10734-009-9281-2 Deng, L., & Tavares, N. J. (2015). Exploring university students’ use of technologies beyond the formal learning context: A tale of two online platforms. Australasian Journal of Educational Technology, 31(3), 313–327. doi:10.14742/ajet.1505 Gonyea, R., & Miller, A. (2011). Clearing the AIR about the use of self-reported gains in institutional research. New Directions for Institutional Research, 2011(150), 99–111.
doi:10.1002/ir.392 Imran, A. M., & Nordin, M. S. (2013). Predicting the underlying factors of academic dishonesty among undergraduates in public universities: A path analysis approach. Journal of Academic Ethics, 11(2), 103–120. doi:10.1007/s10805-013-9183-x Irvin, R., & Smith Macklin, A. (2007). Information and communication technology (ICT) literacy: Integration and assessment in higher education. Journal of Systemics, Cybernetics and Informatics, 5(4), 50–55. Retrieved from http://www.iiisci.org/journal/cv$/sci/pdfs/p890541.pdf John, S. P. (2015). The integration of information technology in higher education: A study of faculty’s attitude towards IT adoption in the teaching process. Contaduría y Administración, 60, 230–252. doi:10.1016/j.cya.2015.08.004 Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34(2), 183–202. doi:10.1007/BF02289343 Keane, T., Keane, W., & Blicblau, A. (2016). Beyond traditional literacy: Learning and transformative practices using ICT. Education and Information Technologies, 21(4), 769–781. doi:10.1007/s10639-014-9353-5 Khan, S. M., Butt, M. A., & Baba, M. Z. (2013). ICT: Impacting teaching and learning. International Journal of Computer Applications, 61(8), 7–10. doi:10.5120/9946-4589 Kline, R. (1998). Principles and practice of structural equation modeling. New York, NY: Guilford Press. Kuh, G. (2008). Why integration and engagement are essential to effective educational practice in the twenty-first century. Peer Review, 10(4), 27–28. Retrieved from https://www.aacu.org/sites/default/files/files/peerreview/PRFall08.pdf Lau, W. W. F., & Yuen, A. H. K. (2014). Developing and validating of a perceived ICT literacy scale for junior secondary school students: Pedagogical and educational contributions. Computers & Education, 78, 1–9. doi:10.1016/j.compedu.2014.04.016 Leu, D. J., McVerry, J. G., O’Byrne, W.
I., Kiili, C., Zawilinski, L., Everett-Cacopardo, H., … Forzani, E. (2011). The new literacies of online reading comprehension: Expanding the literacy and learning curriculum. Journal of Adolescent & Adult Literacy, 55(1), 5–14. doi:10.1598/JAAL.55.1.1 Lonsdale, M., & McCurry, D. (2004). Literacy in the new millennium. Adelaide: Commonwealth of Australia. Retrieved from http://www.ncver.edu.au/publications/1490.html Luu, K., & Freeman, J. G. (2011). An analysis of the relationship between information and communication technology (ICT) and scientific literacy in Canada and Australia. Computers & Education, 56(4), 1072–1082. doi:10.1016/j.compedu.2010.11.008 Murray, M. C., & Pérez, J. (2014). Unraveling the digital literacy paradox: How higher education fails at the fourth literacy. Issues in Informing Science and Information Technology, 11, 85–100. Retrieved from http://iisit.org/Vol11/IISITv11p085-100Murray0507.pdf Nunnally, J., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw Hill. OECD. (2012).
Literacy, numeracy and problem solving in technology-rich environments: Framework for the OECD survey of adult skills. Paris: Author. Oliver, B., & Goerke, V. (2008). Undergraduate students’ adoption of handheld devices and Web 2.0 applications to supplement formal learning experiences: Case studies in Australia, Ethiopia and Malaysia. International Journal of Education and Development using Information and Communication Technology, 4(3), 78–94. Retrieved from http://ijedict.dec.uwi.edu/include/getdoc.php?id=4577&article=522&mode=pdf Porchea, S. F., Allen, J., Robbins, S., & Phelps, R. P. (2010). Predictors of long-term enrollment and degree outcomes for community college students: Integrating academic, psychosocial, socio-demographic, and situational factors. The Journal of Higher Education, 81(6), 680–708. doi:10.1080/00221546.2010.11779077 Russell, C., Malfroy, J., Gosper, M., & McKenzie, J. (2014). Using research to inform learning technology practice and policy: A qualitative analysis of student perspectives. Australasian Journal of Educational Technology, 30(1), 1–15. doi:10.14742/ajet.629 Safar, A. H., & AlKhezzi, F. A. (2013). Beyond computer literacy: Technology integration and curriculum transformation. College Student Journal, 47(4), 614–626. Retrieved from ERIC database. (EJ1029288) Slechtova, P. (2015). Attitudes of undergraduate students to the use of ICT in education. Procedia – Social and Behavioral Sciences, 171, 1128–1134. doi:10.1016/j.sbspro.2015.01.218 Snook, S. C., & Gorsuch, R. L. (1989). Component analysis versus common factor analysis: A Monte Carlo study. Psychological Bulletin, 106(1), 148–154. doi:10.1037/0033-2909.106.1.148 Stevens, J. (2002). Applied multivariate statistics for the social sciences. Mahwah, NJ: Lawrence Erlbaum Associates. Tibebu, D., Bandyopadhyay, T., & Negash, S. (2010). ICT integration efforts in higher education in developing economies: The case of Addis Ababa University, Ethiopia.
International Journal of Information and Communication Technology Education, 5(3), 34–58. doi:10.4018/978-1-61520-869-2.ch019 Watson, G., Finger, G., & Proctor, R. (2004). Measuring information and communication technology (ICT) curriculum integration. Computers in the Schools, 20(4), 67–87. doi:10.1300/J025v20n04_06 Watty, K., McKay, J., & Ngo, L. (2016). Innovators or inhibitors? Accounting faculty resistance to new educational technologies in higher education. Journal of Accounting Education, 36, 1–15. doi:10.1016/j.jaccedu.2016.03.003 Wilson, M., Scalise, K., & Gochyyev, P. (2015). Rethinking ICT literacy: From computer skills to social network settings. Thinking Skills and Creativity, 18, 65–80. doi:10.1016/j.tsc.2015.05.001 Zylka, J., Christoph, G., Kroehne, U., Hartig, J., & Goldhammer, F. (2015). Moving beyond cognitive elements of ICT literacy: First evidence on the structure of ICT engagement. Computers in Human Behavior, 53, 149–160. doi:10.1016/j.chb.2015.07.008 Corresponding author: Chris Campbell, chris.campbell@griffith.edu.au Australasian Journal of Educational Technology © 2018. Please cite as: Tadesse, T., Gillies, R. M., & Campbell, C. (2018). Assessing the dimensionality and educational impacts of integrated ICT literacy in the higher education context. Australasian Journal of Educational Technology, 34(1), 88–101.
https://doi.org/10.14742/ajet.2957