Copyright © 2021, REiD (Research and Evaluation in Education), 7(2), 2021 
ISSN: 2460-6995 (Online) 

REID (Research and Evaluation in Education), 7(2), 2021, 118-131 

Available online at: http://journal.uny.ac.id/index.php/reid 
 

 

 
Empirical lecturers’ and students’ satisfaction assessment in e-learning 
systems based on the usage metrics 

 
Sulis Sandiwarno* 
Universitas Mercu Buana, Indonesia 
*Corresponding Author. E-mail: sulis.sandiwarno@mercubuana.ac.id 

 

 

Article History
Submitted: 25 March 2021
Revised: 14 November 2021
Accepted: 6 December 2021

Keywords
e-learning; satisfaction; usage-based metrics; SUS

ABSTRACT

Nowadays, in the COVID-19 pandemic, e-learning systems have been widely used to facilitate teaching and learning processes between lecturers and students. Assessing lecturers’ and students’ satisfaction with e-learning systems has become essential in improving the quality of education for higher learning institutions. Most existing approaches have attempted to assess users’ satisfaction based on the System Usability Scale (SUS). On the other hand, different studies proposed usage-based metrics (completion rate, task duration, and mouse or cursor distance), which assess users’ satisfaction based on how they use and interact with the system. However, the cursor or mouse distance metric does not consider the effectiveness of navigation in e-learning systems, and such approaches measure either lecturers’ or students’ satisfaction independently. Towards this end, we propose a lostness metric to replace the click or cursor distance metric for assessing lecturers’ and students’ satisfaction with using e-learning systems. Furthermore, to obtain a deeper analysis of users’ satisfaction, we use the usage-based metrics (i.e., completion rate, task duration, and lostness) in tandem with the SUS metric. The evaluation results indicate that the proposed approach can precisely predict users’ satisfaction with e-learning systems.

This is an open access article under the CC-BY-SA license (https://creativecommons.org/licenses/by-sa/4.0/).

How to cite:
Sandiwarno, S. (2021). Empirical lecturers’ and students’ satisfaction assessment in e-learning systems based on the usage metrics. REID (Research and Evaluation in Education), 7(2), 118-131. https://doi.org/10.21831/reid.v7i2.39642

INTRODUCTION

Information technology has been widely used in education to help lecturers and students enhance communication and interaction during the learning process. E-learning extends the use of information technology to move the education process from conventional learning to electronic-based learning (Caputi & Garrido, 2015; Sadikin, 2017; Sadikin et al., 2016). Hong et al. (2017) defined e-learning as online learning that provides a collaborative means to achieve knowledge, creation, and interaction among lecturers and students. The participation of lecturers and students is key to a desirable outcome in higher-level learning (Kim, 2013). An e-learning system helps lecturers as well as students to work and communicate (collaboratively) using web technology tools across different times and places (Casamayor et al., 2009; Gameel, 2017). Moreover, an e-learning system provides a new approach to orienting the learner in the learning process and is convenient to use anytime and anywhere (Navimipour & Zareie, 2015).

The discussion is a concept of interaction whereby users are responsible for learning activities and contribute to the e-learning system (Asoodar et al., 2016a; Haron et al., 2017; Lin, 2018; Zhang et al., 2017). To make communication in a forum successful based on students’ and lecturers’ feedback in the learning process, there are several activities users perform in an e-learning system

such as knowledge sharing (uploading course materials) and problem-solving (Horvat et al., 2015; Koohang et al., 2016; Sandiwarno, 2016). The quiz is also an indicator lecturers use to see the performance of students in the learning process. In the e-learning system, lecturers can upload questions such as multiple choice or essay and then give students time to answer and finish them. Quiz assessments are usually done weekly, using auto-graded and peer-graded assignments (Sun, 2016).

To facilitate the learning process, several platforms, such as Learning Management Systems (LMS), have been proposed to support activities in e-learning and help users. Moodle is one of the most common LMS platforms and is widely used to assist users in the learning process (Liberona & Fuenzalida, 2014). Moodle is publicly available LMS software and one of the pertinent e-learning systems widely used in learning institutions (Ifinedo et al., 2018; Muñoz et al., 2017). Kerimbayev et al. (2017) used Moodle to share materials and users’ knowledge to increase motivation in an online course.

Assessing users’ satisfaction with an e-learning system is necessary because it highlights users’ level of satisfaction with using the system. Satisfaction is an emotional state of users that can be viewed as a judgment based on personal experiences with, and beliefs about, products. Moreover, satisfaction is an important indicator of the effectiveness of the learning process between lecturers and students.

Several previous approaches have attempted to assess users’ satisfaction in e-learning systems. For instance, Almarashdeh (2016) measured lecturers’ satisfaction based on questionnaires that separate users by criteria such as gender and age. Asoodar et al. (2016b) assessed users’ satisfaction based on the learning process (i.e., course dimension, technology dimension, and system design) using an anonymous questionnaire and regression analysis; their evaluation results show that the proposed approach can be employed to explain and describe users’ satisfaction in the learning process. Cohen and Baruth (2017) proposed an anonymous questionnaire and Analysis of Variance (ANOVA) to evaluate differences in users’ satisfaction among online learning groups based on personality; the result indicates that their approach can be used to evaluate users’ satisfaction. According to Chen and Adesope (2016), measuring users’ satisfaction in the e-learning system involves different aspects, such as technology, user criteria, and features of web-based systems. Ku et al. (2013) contend and demonstrate that users’ satisfaction in an e-learning system can be measured in teamwork, which means dividing the learning participants into two teams of students.

Moreover, there are several usability methods which can be used to assess users’ satisfaction when they interact with e-learning systems, namely usage-based metrics (i.e., completion rate, task duration, and lostness). Usability measures how useful a product is and how easily users can reach their goals effectively and efficiently. Berkman et al. (2018) defined usability as a tool to evaluate software products from the subjective users’ perspective, with standardized questionnaires to confirm the dependability of satisfaction measures.

Harrati et al. (2016) argued that completion rate (notated as CR) is a metric used to measure the percentage of users who successfully finished the activities of a specific task in the e-learning system. A high completion rate on tasks indicates that users successfully completed the assigned tasks, whereas a low score implies that users did not achieve some of the tasks. Task duration (notated TD) is a metric used for measuring the total time that users require to finish the tasks. Task time is usually measured in minutes for long activities and seconds for short activities (Curcio et al., 2019). Lostness is a metric used to calculate the efficiency of navigation across the web pages that participants visited to complete the task step by step (Ahn et al., 2018; Curcio et al., 2019). Therefore, completion rate, task time, and lostness respectively describe to what extent users successfully finished each task, how long they took to complete such tasks, and how close their navigation was to the minimum number of steps needed to finish the tasks.


Harrati et al. (2016) attempted to assess lecturers’ satisfaction based on usage-based metrics (i.e., completion rate, task duration, and cursor distance or mouse clicks) and the System Usability Scale (SUS) metric. SUS is a metric used to assess users’ satisfaction based on questionnaires. Cursor distance is a metric employed to assess the effort users expend in the system through the hand movements used to move the cursor on the screen. The authors measured the correlation between the completion rate and SUS metrics by adopting the Pearson Correlation Coefficient (PCC). The results indicate that there is a correlation between the completion rate and SUS metrics.

Although the previous approaches have attempted to assess users’ satisfaction and obtained good results, the SUS questionnaire alone is not sufficient for expressing the level of users’ satisfaction. Additionally, the previous approaches assess lecturers and students separately. Moreover, the aforementioned approaches do not consider evaluating the effectiveness of navigation in e-learning systems.

To this end, in this paper, we propose an approach to assess lecturers’ and students’ satisfaction in using an e-learning system, unlike other works which consider lecturers or students separately. Moreover, in conducting the satisfaction assessment, we propose a lostness metric, part of the usage-based metrics, to replace cursor distance or mouse clicks. The choice of this metric (lostness) was motivated by previous approaches (Ahn et al., 2018). To the best of our knowledge, this paper is the first attempt to assess users’ satisfaction with the addition of the lostness metric. Our proposed approach consists of two parts: (1) employing usage-based metrics to assess users’ satisfaction based on task modelling and (2) usability data analysis based on the SUS metric. Task modelling is used to capture the activities and track the navigation of users. In addition, we exploit well-known metrics in usability data analysis to assess lecturers’ and students’ satisfaction. Further, we analyze the correlation between the results of the usage-based metrics and the SUS metric. The main contributions of this study are summarized as follows. First, we propose a new way to assess lecturers’ and students’ satisfaction based on usage-based metrics with the added lostness metric. Second, the proposed approach has been evaluated with data from users of e-learning systems. Third, we compared and examined the correlation between the usage-based metrics and the SUS metric. The evaluation results of this study indicate that there is a significant correlation.

The rest of this paper is organized as follows. We first highlighted several related works in assessing users’ satisfaction. The next section describes the research method of our study, followed by the results and discussion. Finally, we conclude the paper and highlight future work.

METHOD 

In this section, we present an approach for assessing users’ satisfaction in an e-learning system (Moodle). Moodle version 3.6.2 was installed on a remotely accessible web server, with a logger scriptlet integrated within the HTML pages of the website. To assess users’ satisfaction empirically, we divided users into two groups (trained and non-trained) in order to assess the influence of user training on the level of users’ satisfaction. Trained users are those with experience who are familiar with using an e-learning system, whereas non-trained users are those who do not have experience or are not familiar with using an e-learning system.

In assessing lecturers’ and students’ satisfaction, we follow the framework of the proposed approach shown in Figure 1, which has two steps. First, we collected users’ activity logs from the e-learning system, covering the discussion forum, quizzes, and uploads of educational materials (e.g., documents, music, and pictures), and recorded all user activities in a database. Second, to support the satisfaction assessment, the users had to fill in a SUS questionnaire. The following subsections present each of the key steps of the proposed approach in detail.


 

Figure 1. The Framework of the Proposed Approach 

Usage-based Metrics 

To assess the level of satisfaction based on usage-based metrics, we define a series of tasks (the task descriptor) that are generally conducted by lecturers and students in an e-learning system. The task descriptors of lecturers and students are similar, but some tasks differ. The task descriptor (task model) of lecturers consists of logging in, opening and choosing the course, creating a discussion forum, responding to the discussion forum, and uploading a quiz. On the other hand, the task descriptor of students consists of logging in, opening and choosing the course, responding to the discussion forum, uploading to the discussion forum, and responding to quizzes. The activities performed by the lecturers and students are the same in task 1 and task 2. In task 1, users must open the e-learning system by typing its address into the browser address bar. Once open, on the start page of the e-learning system, users see the login form, which should be filled in with their login credentials. This step is a validation process: if users are registered or have credentials for the e-learning system, they are given access to the system. After a successful login, users have access to the main menu of the e-learning system. To create a discussion forum or quiz that is shown to students, the lecturer opens the forum or quiz page, fills out the forum or quiz form, and uploads materials for discussion. After the lecturer creates a forum, students can interact with the lecturer; after students post to the forum, the lecturer can provide feedback, and the student can in turn reply to the forum posts provided by the lecturer.
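A task descriptor of this kind can be represented as a simple data structure. The sketch below is purely illustrative: the task names and minimum click paths are our own invented examples, not the paths recorded in the study, and only show how a task model could supply the minimum navigation path used later by the lostness metric.

```python
# Hypothetical task model: each task maps to the minimum sequence of
# pages a user must visit to finish it (illustrative labels only).
LECTURER_TASKS = {
    "task1_login":         ["start_page", "login_form", "main_menu"],
    "task2_open_course":   ["main_menu", "course_list", "course_page"],
    "task3_create_forum":  ["course_page", "forum_form", "forum_page"],
    "task4_respond_forum": ["forum_page", "thread", "reply_form"],
    "task5_upload_quiz":   ["course_page", "quiz_form", "quiz_page"],
}

# Minimum number of pages per task, i.e., the r used by the lostness metric.
min_pages = {task: len(steps) for task, steps in LECTURER_TASKS.items()}
print(min_pages["task1_login"])  # 3
```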

To acquire data on the courses in the e-learning system, we collected the system’s data logs and embedded JavaScript code into the e-learning system to capture the activities performed by the lecturers and students in the course. The events recorded by JavaScript allow us to assess users’ satisfaction based on system usage against the task descriptors defined above. In supporting the assessment of users’ satisfaction based on activities, we employ commonly used usage-based metrics: completion rate, task duration, and lostness.


Completion Rate 

Completion rate is a metric used to measure the success of the activities performed by users, calculated by Formula (1). The percentage of this success ranges from 0% (failure) to 100% (success) (Harrati et al., 2016; Tullis & Albert, 2013).

Completion Rate = (Number of successfully completed tasks / Total number of tasks undertaken) × 100% …………………………… (1)
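As a minimal sketch (the function and variable names are ours, not from the paper), Formula (1) can be computed as:

```python
def completion_rate(completed: int, attempted: int) -> float:
    """Formula (1): percentage of tasks completed successfully
    out of all tasks undertaken (0% = failure, 100% = success)."""
    return completed / attempted * 100

# e.g., 45 of 50 lecturers finished a task successfully:
print(completion_rate(45, 50))  # 90.0
```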

Task Time

Task time is a common way to measure the usability of a product. Task time is simply the time elapsed between the start of a task (St) and the end of a task (Ft), usually expressed in minutes and seconds, calculated as in Formula (2) (Tullis & Albert, 2013).

Task Time = (Ft - St) …………………………………………………. (2)
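A hedged sketch of Formula (2), with timestamps and names of our own choosing:

```python
from datetime import datetime

def task_time_seconds(st: datetime, ft: datetime) -> float:
    """Formula (2): elapsed time between task start (St) and end (Ft)."""
    return (ft - st).total_seconds()

st = datetime(2021, 3, 1, 10, 0, 0)   # task started
ft = datetime(2021, 3, 1, 10, 3, 30)  # task finished
print(task_time_seconds(st, ft) / 60)  # 3.5 (minutes)
```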

Lostness

Lostness is calculated with Formula (3) (Smith, 1996), where n represents the number of different web pages visited while performing the task, s is the total number of pages visited while performing the task (counting revisits), and r is the minimum number of pages that must be visited to finish the task (Ahn et al., 2018; Curcio et al., 2019).

Lostness = sqrt((n/s - 1)² + (r/n - 1)²) ………………………………. (3)
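The lostness measure (Smith, 1996; Tullis & Albert, 2013) can be sketched as follows; the example navigation counts are hypothetical.

```python
from math import sqrt

def lostness(n: int, s: int, r: int) -> float:
    """Smith's lostness measure.
    n: number of different pages visited while performing the task
    s: total number of pages visited, counting revisits
    r: minimum number of pages that must be visited to finish the task
    A perfectly efficient path (n == s == r) yields 0; values above
    0.5 suggest the user was lost.
    """
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# Optimal navigation: 4 required pages, each visited exactly once.
print(round(lostness(4, 4, 4), 2))   # 0.0
# Wandering: 4 required pages, but 6 distinct pages over 12 page views.
print(round(lostness(6, 12, 4), 2))  # 0.6
```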

System Usability Scale (SUS)

Following AlGhannam et al. (2017), the SUS with 10 questions was used, where each question reflects a concept of the SUS; the positive statements are presented in odd-numbered items and the negative statements in even-numbered items. Respondents choose from a five-point Likert scale represented by numbers from strongly disagree (1) to strongly agree (5). Each item’s score contribution ranges from 0 to 4: for odd-numbered items the contribution is the rating minus 1, and for even-numbered items it is 5 minus the rating. The sum of the contributions is multiplied by 2.5 to obtain the overall SUS score, so each respondent’s score ranges from 0 to 100, as formulated in Formula (4).

SUS = 2.5 × [Σ(odd items)(xᵢ - 1) + Σ(even items)(5 - xᵢ)] ……………………….. (4)
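The standard SUS scoring procedure can be sketched as below; this is a generic implementation of Brooke’s scoring rule, not code from the study.

```python
def sus_score(responses):
    """SUS score for one respondent.
    responses: ten Likert ratings (1-5) for items 1..10.
    Odd items are positively worded (contribute rating - 1), even items
    negatively worded (contribute 5 - rating); the summed contributions
    are scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10
    total = 0
    for i, x in enumerate(responses, start=1):
        total += (x - 1) if i % 2 == 1 else (5 - x)
    return total * 2.5

# Strongly agree on all positives, strongly disagree on all negatives:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```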

Experimental Setup 

In this section, we explain the experimental setup of the two methods: the usage-based metrics and the SUS metric.

Usage-based Metrics and SUS Metrics

For evaluation, we compared the proposed method against approaches that only use completion rate (notated as Cr), task duration (notated as Td), lostness (notated as L), and SUS metrics. Note that Cr results in the range of 70%-100% indicate that users are satisfied (Harrati et al., 2016; Tullis & Albert, 2013). The lostness result should be less than 0.5 to indicate satisfaction (Smith, 1996; Tullis & Albert, 2013). The SUS results indicate that users are satisfied if the score is not less than 70% (AlGhannam et al., 2017).
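These thresholds can be combined into a simple check. Treating the three criteria as jointly required is our own illustrative assumption, not a rule stated by the cited works.

```python
def is_satisfied(cr: float, lost: float, sus: float) -> bool:
    """Combined satisfaction check using the thresholds above:
    Cr of 70%-100% (Harrati et al., 2016; Tullis & Albert, 2013),
    lostness below 0.5 (Smith, 1996), and SUS of at least 70%
    (AlGhannam et al., 2017). Requiring all three at once is an
    illustrative assumption."""
    return cr >= 70 and lost < 0.5 and sus >= 70

print(is_satisfied(93.8, 0.14, 90.5))  # True
print(is_satisfied(64.9, 0.56, 69.2))  # False
```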


Data Collection from e-Learning 

The data of this study are the logs of the activities performed by lecturers and students in online courses at a university located in Jakarta, Indonesia. The lecturers were grouped based on gender, age, and academic qualification, whereas the students were grouped based on gender and age. The total number of users is 1,906, of whom 50 are lecturers and 1,856 are students.

Table 1. The Distribution of Lecturers and Students

Lecturers                                    n-data    %
Gender: Male                                 30        60
Gender: Female                               20        40
Age: 25-35                                   25        50
Age: 36-45                                   10        20
Age: 46-56                                   10        20
Age: 57-67                                   5         10
Qualification: Junior Lecturer (AA)          25        50
Qualification: Senior Lecturer (L)           15        30
Qualification: Associate Professor (LK)      5         10
Qualification: Professor (Prof.)             5         10

Students                                     n-data    %
Gender: Male                                 464       50
Gender: Female                               464       50
Age: 17-19                                   464       50
Age: 20-22                                   464       50

 
Table 1 shows the distribution of lecturers and students, in which the lecturers consist of junior lecturers (Asisten Ahli or AA), senior lecturers (Lektor or L), associate professors (Lektor Kepala or LK), and professors (Prof.). To acquire data on the courses in the e-learning system, we collected the data logs of the e-learning system (Moodle) and embedded JavaScript into the system to capture the activities performed by the lecturers and students in the course. The events recorded by JavaScript include items such as how many clicks users make to reach the intended page; before the users used the e-learning system, we identified the minimum number of link clicks required to reach each part of the system.

FINDINGS AND DISCUSSION 

In this section, we present the results obtained from the experiment based on the usage-based metrics and the SUS metric, followed by a discussion of the results.

Findings 

Usage-based Metrics Evaluation 

Table 2 depicts the results for the completion rate, duration, and lostness for the two groups of lecturers. We note that, on average, all of the trained lecturers were able to complete the assigned activities successfully, except in tasks 4 and 5: a total of eight and 14 lecturers out of 50, respectively, failed to fully complete the assigned activities in task 4 and task 5. We further noted that, among the non-trained lecturers, the majority were able to complete the assigned activities, but some of them failed to fully complete the activities in all the tasks. Moreover, the results generally suggest that the trained lecturers spent less time completing the tasks (an average of 16 minutes for the long task), whereas non-trained lecturers spent about 37 minutes on the same task (task 5). Finally, according to the recorded values for the lostness metric, the results suggest that the trained lecturers navigated the system more efficiently than the non-trained lecturers. In addition, Table 3 provides the detailed results for the lecturers based on their biographical information. The comparison of the trained and non-trained lecturers on the three usage-based metrics is further depicted in Figure 2.


Table 2. Lecturers’ Satisfaction Assessment

                  Task 1    Task 2    Task 3    Task 4    Task 5
Completion Rate
  Trained         100       100       100       97.5      93.8
  Non-Trained     64.9      73.2      68.9      55.8      77.5
  SD              24.82     18.95     21.99     29.49     11.53
Duration
  Trained         3.18      3.32      11.14     12.32     15.65
  Non-Trained     8.39      12.09     35.03     30.24     37.65
  SD              3.68      6.2       16.89     12.67     15.56
Lostness
  Trained         0.03      0.13      0.16      0.25      0.14
  Non-Trained     0.37      0.44      0.53      0.55      0.66
  SD              0.24      0.22      0.26      0.21      0.37

Table 3. Lecturers’ Satisfaction Assessment based on Gender, Age, and Academic Qualification

                  Gender           Age                               Academic Qualification
                  Male     Female  26-35   36-45   46-56   57-67    AA      L       LK      Prof.
Completion Rate
  Trained         99.08    97.68   98.7    99.24   96.24   99.24    98.7    99      95.64   96.94
  Non-Trained     67.44    68.98   65.9    62.68   67.38   65.56    68.64   65.1    60.76   73.86
  SD              18.28    15.81   18.76   20.12   16.52   18.24    17.83   18.36   19.78   14.63
Duration
  Trained         9.31     9.64    9.32    8.87    8.92    8.92     9.32    8.94    9.14    8.42
  Non-Trained     24.48    24.98   24.51   24.73   24.92   24.93    24.51   24.76   24.48   25.27
  SD              12.67    12.91   12.78   12.61   12.73   13.01    12.78   12.7    12.54   13.01
Lostness
  Trained         0.19     0.09    0.17    0.19    0.07    0.14     0.17    0.19    0.12    0.05
  Non-Trained     0.5      0.53    0.53    0.5     0.65    0.57     0.53    0.5     0.61    0.67
  SD              0.19     0.25    0.2     0.2     0.3     0.26     0.2     0.2     0.28    0.34

 

 

Figure 2. Trained and Non-trained Lecturers’ Comparison 

Table 4 reports the results for the completion rate, duration, and lostness for the two groups of students. A total of 1,856 students were grouped into two groups (trained and non-trained) of 928 students each. Generally, the results suggest that, on average, all of the 928 trained students were able to successfully complete the assigned activities in all of the five tasks. Furthermore, a total of 25 non-trained students (about 3%, i.e., 25/928) were unable to fully complete the assigned activities in all five tasks. It was also noted that, in general, most of the non-trained students did not fully complete the tasks, resulting in an average 60% completion rate.

Moreover, the results generally suggest that trained students spent less time completing the tasks (an average of 13.5 minutes for the long task), whereas non-trained students spent about 31 minutes on the same task (task 5). Finally, according to the reported values for the lostness metric, the results indicate that the trained students navigated the system more efficiently than the non-trained students. Table 5 further provides the detailed results for the students based on their biographical data. The comparison of the trained and non-trained students based on the three usage-based metrics is further depicted in Figure 3.

In summary, the results indicate that there is a significant difference in the recorded results of the three metrics (completion rate, duration, and lostness) between the trained and non-trained users (lecturers and students). The results suggest that trained users are significantly better at, and also more satisfied with, using the e-learning system than non-trained users. Furthermore, it is worth noting that trained students generally reported better results than the trained lecturers, whereas non-trained students and non-trained lecturers reported almost similar results.

Table 4. Students’ Satisfaction Assessment

                  Task 1    Task 2    Task 3    Task 4    Task 5
Completion Rate
  Trained         100       100       100       100       100
  Non-Trained     60.4      64.6      46.1      51.5      59.58
  SD              28        25.03     38.1      34.29     28.58
Duration
  Trained         2.9       3.1       11.16     12.65     13.55
  Non-Trained     16.69     14.95     31.49     29.71     30.78
  SD              9.75      8.38      14.38     12.06     12.18
Lostness
  Trained         0.08      0.16      0.14      0.18      0.18
  Non-Trained     0.38      0.48      0.51      0.51      0.61
  SD              0.21      0.23      0.26      0.23      0.3

Table 5. Students’ Satisfaction Assessment based on Gender and Age

                  Gender            Age
                  Male     Female   18-20    21-22
Completion Rate
  Trained         100      100      100      100
  Non-Trained     56.24    56.7     47.38    33.7
  SD              23.56    23.44    27.44    35.45
Duration
  Trained         8.49     8.85     8.49     10.34
  Non-Trained     26.35    27.12    24.32    25.13
  SD              10.25    11.09    10.25    11.09
Lostness
  Trained         0.14     0.14     0.12     0.12
  Non-Trained     0.49     0.49     0.45     0.59
  SD              0.2      0.2      0.19     0.27

 

 

Figure 3. Trained and Non-trained Students’ Comparison 


System Usability Scale 

The system usability analysis aimed at quantifying how students and lecturers perceive the usability of the e-learning system. Table 6 depicts the SUS results for both trained and non-trained lecturers based on gender, age, and academic qualification. Generally, the results suggest that, on average, the trained lecturers reported SUS scores above 90%, whereas non-trained lecturers reported about 69%, with an average standard deviation (SD) of 15.02. As depicted in Table 6, it is evident that trained lecturers perceived the system as more usable than non-trained lecturers did.

Table 6. Lecturers’ SUS Analysis

                  Gender           Age                               Academic Qualification
                  Male     Female  26-35   36-45   46-56   57-67    AA      L       LK      Prof.
SUS (%)
  Trained         90       92.86   90      95      90.42   92.5     88      94.64   85      86.25
  Non-Trained     69.47    67.08   68.96   70      68.75   74.17    69.79   71.94   66.25   65.83
  SD              14.52    18.23   14.88   17.68   15.32   12.98    12.88   16.05   13.26   14.44

 
Furthermore, in Table 7, we report the results of the usability analysis of students based on gender and age. The average SUS score of trained students was recorded at 87.69% and that of non-trained students at 69.84%, with an average SD of 12.62. As with the trained lecturers, the trained students reported higher SUS scores than non-trained students, which implies that trained students perceived the system as more usable than non-trained students did. Finally, the overall results for both lecturers and students suggest that the lecturers ranked the usability of the system higher than the students did; in other words, the lecturers were more satisfied with it than the students.

Table 7. Students’ SUS Analysis

                  Gender            Age
                  Male     Female   18-20    21-22
SUS (%)
  Trained         86.18    88.27    87.49    88.81
  Non-Trained     69.42    70.31    69.16    70.47
  SD              11.84    12.7     12.96    12.97

 
The Cronbach alpha (α), which refers to the reliability of the assessment, is estimated at 0.865 across all task scores. This indicates that the SUS questionnaire is a strongly reliable instrument for e-learning evaluation according to Borkowska and Jach (2017), who argued that the internal consistency of the α scale should reach a value above 0.8.
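For reference, Cronbach’s α can be estimated from respondents’ per-item scores with the classical formula; the sketch below is generic (the toy data are invented, not the study’s responses).

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of respondents' total scores), using population variance.
    `scores` holds one list of k item ratings per respondent."""
    k = len(scores[0])
    item_vars = sum(pvariance(col) for col in zip(*scores))   # per-item spread
    total_var = pvariance([sum(row) for row in scores])       # total-score spread
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: four respondents rating three items consistently.
print(round(cronbach_alpha([[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2]]), 2))  # 0.98
```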

Comparison between SUS and Lostness  

The study further compared SUS and lostness to deduce whether the perceived satisfaction with the system reflects users’ actual performance when using it. In that regard, we compared the SUS scores against lostness because SUS reveals how users rate the ease of using the system, whereas lostness reflects how well users could actually use the system by measuring the ease of navigating within it. We compared lostness with the SUS metric and completion rate with the SUS metric, examining the correlation between the respective results. We computed the Pearson Correlation Coefficient (PCC) for lostness and SUS and obtained an average PCC value of r = 0.658; for completion rate and SUS, we achieved an average PCC value of r = 0.736. Figure 4 depicts the comparison of SUS and lostness for lecturers and students, respectively. The results generally suggest that there is a close correlation between SUS and lostness.
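The PCC used in this comparison can be computed directly from paired score series; this is a standard textbook implementation with hypothetical input values, not the study’s data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson Correlation Coefficient between two paired score series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear hypothetical scores correlate with r = 1.0:
print(round(pearson_r([70, 80, 90, 100], [0.35, 0.40, 0.45, 0.50]), 6))  # 1.0
```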


 

Figure 4. Lostness versus SUS in Lecturers and Student Activities 

For example, in Figure 4, comparing the trained and non-trained lecturers, the trained lecturers reported higher SUS scores, which implies they perceived the system to be easy to use; correspondingly, their lostness scores were lower. Therefore, the SUS assessment evaluated the system fairly. The same pattern was observed for trained and non-trained students, as can be seen in Figure 4. Moreover, it is worth noting that there were no significant differences between the results reported for SUS and lostness across different ages, genders, and academic qualifications.

Discussion 

This research presents an approach for evaluating satisfaction with an e-learning system based 
on SUS and usage-based metrics. The experimental results show that trained users are skilled 
and experienced in using the e-learning system, while not all non-trained users have experience 
in using it. Users' satisfaction was measured with four widely used metrics: completion rate, 
task time, lostness, and SUS. Table 6 shows the results of the lecturers: trained lecturers 
achieved an average completion rate of 90.46% and non-trained lecturers 69.2%. Table 7 shows 
an average completion rate of 87.69% for trained students and 69.84% for non-trained students. 
Combined, trained users reported an average completion rate of 89.08% and non-trained users 
69.53%. For lostness in Table 4, trained lecturers reported an average of 0.14 and non-trained 
lecturers 0.56. Trained lecturers scored better than trained students on SUS, with an average 
SUS score of 90.5% against 87.7%, whereas the average lostness of trained lecturers was 0.14 
against 0.13 for trained students. In summary, there are two indicators from lostness and SUS: 
(1) based on lostness, trained students are more capable than trained lecturers in using the 
e-learning system, and (2) based on SUS, trained lecturers are more satisfied with the e-learning 
system than trained students. Harrati et al. (2016) and Tullis and Albert (2013) also argued 
that a SUS score in the range of 70-100% indicates a satisfactory level. 
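The two metrics discussed here can be sketched concretely: lostness follows Smith's (1996) formulation, and SUS follows the standard ten-item scoring described by Tullis and Albert (2013). The example inputs are hypothetical, not taken from the study:

```python
import math

def lostness(unique_pages, total_pages, minimum_pages):
    """Smith's (1996) lostness measure.
    N = distinct pages visited, S = total pages visited,
    R = minimum pages needed to complete the task.
    L = sqrt((N/S - 1)^2 + (R/N - 1)^2); 0 means perfect navigation."""
    n, s, r = unique_pages, total_pages, minimum_pages
    return math.sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses:
    odd-numbered items contribute (x - 1), even-numbered items (5 - x);
    the sum is scaled to 0-100 by multiplying by 2.5."""
    total = sum((x - 1) if i % 2 == 0 else (5 - x)
                for i, x in enumerate(responses))
    return total * 2.5

# Hypothetical single-user figures:
print(round(lostness(unique_pages=6, total_pages=9, minimum_pages=4), 2))  # 0.47
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

A user who visits exactly the minimum required pages with no revisits scores a lostness of 0, which is why lower lostness values in Tables 4 and the figures indicate better navigation.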

In summary, this research presents a number of unique contributions to assessing users' satis-
faction with an e-learning system and explores factors that can affect the satisfaction level and 
interaction performance of lecturers and students in the university when employing educational 
technologies. Firstly, the results of the directed experiments confirm that the SUS metric alone 
is insufficient to reveal users' true approval and level of satisfaction in using the e-learning 
system. The SUS evaluation should be carried out in tandem with usage-based metrics. This 
helps to cluster the different lecturers and students and to deepen the reported usability 
analysis based on the participants' actual performance. 

Therefore, the satisfaction results reported by means of SUS questionnaires administered to a 
set of users can potentially be interpreted differently by the users when expressing their level 
of acceptance. In other words, are the lecturers and students satisfied because the e-learning 
system is easy to adopt, or because they enjoyed and felt happy about experiencing a new 
learning management system regardless of the expected results? 

The experimental results have revealed that the distinct usage-based metrics, including task 
duration, completion rate, and lostness, play an equivalent part in expressing and analyzing the 
usability of lecturers' and students' interaction. Regarding other factors related to the partici-
pants themselves, younger users showed greater motivation and skill in using technological 
products, while older users struggled to use the e-learning system. This is in line with a number 
of recent studies which arrived at the same conclusion (Bringula, 2013; Wagner et al., 2014), 
arguing that age has a pronounced impact on users' performance. Moreover, lecturers with the 
highest academic qualifications achieved high completion rates, which is intuitively explained 
by the connection between age and academic qualification. Mentes and Turan (2012) stated 
that gender is a factor that impacts users' performance; our results confirm that both genders 
and all age groups produced almost the same usage-based metrics, with small variances, except 
that male users declared greater satisfaction with the e-learning system. 

We noted from the usage metrics that the lecturers and students in the university attempted to 
engage with the e-learning platform when dealing with web pages that offer rich graphical 
navigation and tools. This points to partially impoverished usability of the lecturers' and 
students' interface, which should be improved at the stages where many lecturers and students 
failed to complete the e-learning tasks. Meanwhile, minimal interfaces proved preferable for 
achieving objectives with ease and consistency, reflecting the correlation between task 
complexity, task duration, and page navigation on the one hand and the number of elements 
and options within the e-learning interface on the other. Moreover, lecturers and students 
expressed satisfaction with adopting e-learning in the future to support online teaching, while 
clearly requiring more practice and guidance on how to use the e-learning system. 

CONCLUSION 

In this paper, an empirical study was conducted to assess the satisfaction of lecturers and 
students in using an e-learning system. In the experiment, we adopted a widely used e-learning 
system (Moodle) for tracking users' activities and evaluating them. We used four key metrics 
(completion rate, task time, lostness, and SUS) to assess users' satisfaction and to quantify 
users' performance in using the e-learning system. The findings of this study reveal that trained 
students and trained lecturers are more satisfied with the e-learning system than non-trained 
lecturers and students. The findings therefore suggest that formal training in using the 
e-learning system is essential to obtain satisfied and experienced users. In our future work, we 
aim to expand the usage-based metrics to assess the speed and accuracy of communication 
between lecturers and students within the forum. 

REFERENCES 

Ahn, J., Kim, K., & Proctor, R. W. (2018). Comparison of mobile web browsers for 
smartphones. Journal of Computer Information Systems, 58(1), 10-18. 
https://doi.org/10.1080/08874417.2016.1180652 

AlGhannam, B. A., Albustan, S. A., Al-Hassan, A. A., & Albustan, L. A. (2017). Towards a 
standard Arabic system usability scale: Psychometric evaluation using communication 
disorder app. International Journal of Human–Computer Interaction, 34(9), 1–6. 
https://doi.org/10.1080/10447318.2017.1388099 

Almarashdeh, I. (2016). Sharing instructors experience of learning management system: A 
technology perspective of user satisfaction in distance learning course. Computers in Human 
Behavior, 63, 249–255. https://doi.org/10.1016/j.chb.2016.05.013 

Asoodar, M., Vaezi, S., & Izanloo, B. (2016a). Framework to improve e-learner satisfaction and 
further strengthen e-learning implementation. Computers in Human Behavior, 63, 704–716. 
https://doi.org/10.1016/j.chb.2016.05.060 

Asoodar, M., Vaezi, S., & Izanloo, B. (2016b). Framework to improve e-learner satisfaction and 
further strengthen e-learning implementation. Computers in Human Behavior, 63, 704–716. 
https://doi.org/10.1016/j.chb.2016.05.060 

Berkman, M. İ., Karahoca, D., & Karahoca, A. (2018). A measurement and structural model for 
usability evaluation of shared workspace groupware. International Journal of Human-Computer 
Interaction, 34(1), 35–56. https://doi.org/10.1080/10447318.2017.1326578 

Borkowska, A., & Jach, K. (2017). Pre-testing of Polish translation of System Usability Scale 
(SUS). Advances in Intelligent Systems and Computing, 521, 143-153. 
https://doi.org/10.1007/978-3-319-46583-8_12 

Bringula, R. P. (2013). Influence of faculty- and web portal design-related factors on web portal 
usability: A hierarchical regression analysis. Computers and Education, 68, 187-198. 
https://doi.org/10.1016/j.compedu.2013.05.008 

Caputi, V., & Garrido, A. (2015). Student-oriented planning of e-learning contents for Moodle. 
Journal of Network and Computer Applications, 53, 115–127. 
https://doi.org/10.1016/j.jnca.2015.04.001 

Casamayor, A., Amandi, A., & Campo, M. (2009). Intelligent assistance for teachers in 
collaborative e-learning environments. Computers and Education, 53(4), 1147–1154. 
https://doi.org/10.1016/j.compedu.2009.05.025 

Chen, P.-H., & Adesope, O. (2016). The effects of need satisfaction on EFL online learner 
satisfaction. Distance Education, 37(1), 89–106. 
https://doi.org/10.1080/01587919.2016.1155962 

Cohen, A., & Baruth, O. (2017). Personality, learning, and satisfaction in fully online academic 
courses. Computers in Human Behavior, 72, 1–12. https://doi.org/10.1016/j.chb.2017.02.030 

Curcio, K., Santana, R., Reinehr, S., & Malucelli, A. (2019). Usability in agile software 
development: A tertiary study. Computer Standards and Interfaces, 64, 61-77. 
https://doi.org/10.1016/j.csi.2018.12.003 

Gameel, B. G. (2017). Learner satisfaction with massive open online courses. American Journal of 
Distance Education, 31(2), 98–111. https://doi.org/10.1080/08923647.2017.1300462 

Haron, H., Aziz, N. H. N., & Harun, A. (2017). A conceptual model participatory engagement 
within e-learning community. Procedia Computer Science, 116, 242–250. 
https://doi.org/10.1016/j.procs.2017.10.046 

Harrati, N., Bouchrika, I., Tari, A., & Ladjailia, A. (2016). Exploring user satisfaction for e-
learning systems via usage-based metrics and system usability scale analysis. Computers in 
Human Behavior, 61, 463–471. https://doi.org/10.1016/j.chb.2016.03.051 

Hong, J. C., Tai, K. H., Hwang, M. Y., Kuo, Y. C., & Chen, J. S. (2017). Internet cognitive failure 
relevant to users’ satisfaction with content and interface design to reflect continuance 
intention to use a government e-learning system. Computers in Human Behavior, 66, 353–362. 
https://doi.org/10.1016/j.chb.2016.08.044 

Horvat, A., Dobrota, M., Krsmanovic, M., & Cudanov, M. (2015). Student perception of Moodle 
learning management system: A satisfaction and significance analysis. Interactive Learning 
Environments, 23(4), 515–527. https://doi.org/10.1080/10494820.2013.788033 

Ifinedo, P., Pyke, J., & Anwar, A. (2018). Business undergraduates’ perceived use outcomes of 
Moodle in a blended learning environment: The roles of usability factors and external 
support. Telematics and Informatics, 35(1), 93–102. 
https://doi.org/10.1016/j.tele.2017.10.001 

Kerimbayev, N., Kultan, J., Abdykarimova, S., & Akramova, A. (2017). LMS Moodle: Distance 
international education in cooperation of higher education institutions of different 
countries. Education and Information Technologies, 22(5), 2125–2139. 
https://doi.org/10.1007/s10639-016-9534-5 

Kim, J. (2013). Influence of group size on students’ participation in online discussion forums. 
Computers & Education, 62, 123-129. https://doi.org/10.1016/j.compedu.2012.10.025 

Koohang, A., Paliszkiewicz, J., Gołuchowski, J., & Nord, J. H. (2016). Active learning for 
knowledge construction in e-learning: A replication study. Journal of Computer Information 
Systems, 56(3), 238–243. https://doi.org/10.1080/08874417.2016.1153914 

Ku, H. Y., Tseng, H. W., & Akarasriworn, C. (2013). Collaboration factors, teamwork 
satisfaction, and student attitudes toward online collaborative learning. Computers in Human 
Behavior, 29(3), 922–929. https://doi.org/10.1016/j.chb.2012.12.019 

Liberona, D., & Fuenzalida, D. (2014). Use of Moodle platforms in higher education: A Chilean 
case. Communications in Computer and Information Science, 446, 124–134. 
https://doi.org/10.1007/978-3-319-10671-7_12 

Lin, J. W. (2018). Effects of an online team project-based learning environment with group 
awareness and peer evaluation on socially shared regulation of learning and self-regulated 
learning. Behaviour and Information Technology, 37(5), 445–461. 
https://doi.org/10.1080/0144929X.2018.1451558 

Mentes, A., & Turan, A. H. (2012). Assessing the usability of university websites: An empirical 
study on Namik Kemal University. Turkish Online Journal of Educational Technology, 
11(3), 61-69. http://www.tojet.net/articles/v11i3/1136.pdf 

Muñoz, A., Delgado, R., Rubio, E., Grilo, C., & Basto-Fernandes, V. (2017). Forum participation 
plugin for Moodle: Development and discussion. Procedia Computer Science, 121, 982–989. 
https://doi.org/10.1016/j.procs.2017.11.127 

Navimipour, N. J., & Zareie, B. (2015). A model for assessing the impact of e-learning systems 
on employees’ satisfaction. Computers in Human Behavior, 53, 475–485. 
https://doi.org/10.1016/j.chb.2015.07.026 

Sadikin, M. (2017). Mining relation extraction based on pattern learning approach. Indonesian 
Journal of Electrical Engineering and Computer Science, 6(1), 50-57. 
https://doi.org/10.11591/ijeecs.v6.i1.pp50-57 

Sadikin, M., Fanany, M. I., & Basaruddin, T. (2016). A new data representation based on training 
data characteristics to extract drug name entity in medical text. Computational Intelligence and 
Neuroscience, 3483528. https://doi.org/10.1155/2016/3483528 


Sandiwarno, S. (2016). Perancangan model e-learning berbasis collaborative video conference 
learning guna mendapatkan hasil pembelajaran yang efektif dan efisien. Jurnal Ilmiah FIFO, 
8(2), 191-200. https://doi.org/10.22441/fifo.v8i2.1314 

Smith, P. A. (1996). Towards a practical measure of hypertext usability. Interacting with Computers, 
8(4), 365–381. https://doi.org/10.1016/S0953-5438(97)83779-4 

Sun, J. (2016). Multi-dimensional alignment between online instruction and course technology: A 
learner-centered perspective. Computers and Education, 101, 102–114. 
https://doi.org/10.1016/j.compedu.2016.06.003 

Tullis, T., & Albert, B. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability 
metrics (2nd ed.). Elsevier. https://doi.org/10.1016/C2011-0-00016-9 

Wagner, N., Hassanein, K., & Head, M. (2014). The impact of age on website usability. Computers 
in Human Behavior, 37, 270-282. https://doi.org/10.1016/j.chb.2014.05.003 

Zhang, S., Liu, Q., Chen, W., Wang, Q., & Huang, Z. (2017). Interactive networks and social 
knowledge construction behavioral patterns in primary school teachers’ online 
collaborative learning activities. Computers and Education, 104, 1–17. 
https://doi.org/10.1016/j.compedu.2016.10.011