Can Medical Students Evaluate Medical Websites?
A mixed-methods study from Oman

Teresa Loda,1 *Ken Masters,2 Stephan Zipfel,1 Anne Herrmann-Werner1

1Medical Department VI/Psychosomatic Medicine and Psychotherapy, University Hospital Tübingen, Tübingen, Germany; 2Medical Informatics, Medical Education Unit, Sultan Qaboos University, Muscat, Oman
*Corresponding Author’s e-mail: itmeded@gmail.com

CLINICAL & BASIC RESEARCH

Sultan Qaboos University Med J, August 2022, Vol. 22, Iss. 3, pp. 362–369, Epub. 25 Aug 22
Submitted 1 Feb 21; Revision Req. 18 Mar 21; Revision Recd. 16 Apr 21; Accepted 24 May 21
https://doi.org/10.18295/squmj.8.2021.114

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

abstract: Objectives: This study aimed to discover the extent to which medical students can evaluate medical 
websites, evaluation criteria used, factors affecting their abilities and whether a teaching intervention could rectify 
problems. Medical students and practitioners are required to evaluate medical information available on the Internet. 
Most current medical students are familiar with the Internet, but their ability to evaluate material may require 
improvement. Methods: A class of undergraduate medical students evaluated an unreliable medical website, received 
a teaching intervention on website evaluation criteria and re-evaluated the same site. This mixed-methods study 
was conducted at Sultan Qaboos University, Muscat, Oman, from September to December 2018. Results: A total of 
149 (response rate: 82.3%) students participated. Students spent, on average, 4.69 hours per day on the Internet. No 
significant correlations were found between demographic indicators and Internet time. On a 10-point Likert scale, 
students’ mean scores ranged from 5 to 6, with no significant differences between the pre- and post-intervention evaluations,
except for increased polarisation away from the mean. Qualitative comments indicated an awareness of relevant 
criteria but an overall inability to critically apply them. Conclusion: The results indicate that one cannot make a 
blanket statement about medical students’ ability to evaluate medical websites despite their familiarity with technology. 
Moreover, the results suggest that website evaluation should be viewed primarily from an information perspective and that critical thinking
ability may play a major role. Due to these overriding factors, short interventions are unlikely to have an impact, and
other educational strategies should be developed. These are necessary to ensure that medical students can function 
independently as life-long learners and medical professionals.

Keywords: Internet; Medical Students; Oman.

Advances in Knowledge
- Students of Medical and Basic Medical Sciences at Sultan Qaboos University appear to lack the skills required for appropriately evaluating the trustworthiness and value of medical-related information.
- A single intervention that identifies and teaches evaluation criteria produced mixed results. Although some students appeared to be initially aware of valid judging criteria, these criteria were frequently incorrectly applied. Post-intervention evaluation indicated a similar result of incorrect and inconsistent application of the criteria.
- Part of the reason for the mixed results may be a lack of critical reasoning skills that should have been developed during schooling.

Applications to Patient Care
- Healthcare professionals need to keep themselves abreast of new information so that they may deliver high-quality healthcare.
- In the absence of traditional gatekeepers of knowledge, healthcare professionals must now rely on their own ability to critically and appropriately evaluate new information.
- The inability of the Medical and Basic Medical Sciences students to perform this evaluation points to the need for some form of systematic training to develop these evaluation skills.

In the 21st century, the Internet is an essential source of medical information for medical practitioners, students and patients.1–5

The problem with the Internet, however, is that it contains too much information, and distinguishing good (e.g. accurate, evidence-based and appropriate) information from bad (e.g. inaccurate, unsubstantiated or inappropriate) is time-consuming and difficult.6

Before the Internet, medical practitioners and students considered librarians to be the gatekeepers of information and relied on them accordingly; for information on the Internet, in many cases, such human gatekeepers have been removed. Some physicians rely on medical search engines (e.g. PubMed [US National Library of Medicine, Bethesda, Maryland, USA]) or broader search systems (e.g. EBSCOHost [EBSCO Information Services, Ipswich, Massachusetts, USA]) to perform this gatekeeping, while most medical practitioners mainly use general search engines such as Google (Google LLC, Mountain View, California, USA).7,8 Physicians are therefore required to critically appraise and evaluate the information they find.8

Experienced physicians can rely on their medical 
expertise and experience to determine information 

accuracy; however, information changes over time and 
medical students and newly-qualified physicians do 
not always have the required knowledge and expertise 
to evaluate medical information accuracy and 
quality.9,10 Medical educators are, therefore, concerned 
about medical students’ ability to critically analyse and 
review literature.8,11,12 Studies of these skills frequently focus on theoretical aspects rather than on students’ application of these skills in practice.10,12 The development of these
skills requires the ability to critically evaluate and 
appraise medical literature.13 The need to teach these 
skills has been recognised since 2009 by the UK’s 
General Medical Council.14

Students’ familiarity with computers and 
the Internet may not translate into an ability to 
appropriately deal with information from the Internet. 
Just as knowing how to read and write does not necessarily mean one knows how to read and write for academic and medical research purposes, knowing how to use the Internet does not necessarily mean one knows how to use it for academic, research or medical work; other skills may be needed. Even if students are
familiar with the technology, one should not assume 
they can reliably evaluate websites and can, therefore, 
quickly filter out unreliable websites for themselves.

No set of internationally recognised website evaluation criteria exists,8 although there is the HONCode system (https://www.hon.ch/HONcode/) as well as
several guides. A widely cited and popular system for 
website evaluation is Kapoun’s five criteria: accuracy, 
authority, objectivity, currency and coverage.15 
Kapoun’s criteria cover most issues or concerns on 
the accuracy and credibility of any website (or any 
document) and form a simple and short list which is 
ideal for introducing students to the skills required for 
evaluating the information on the Internet. 
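
As a rough illustration of how such a short list can be operationalised, the sketch below encodes the five criteria as guiding questions with a simple pass/fail tally. The criterion names come from Kapoun (1998); the wording of the questions and the fraction-based score are illustrative assumptions, not part of Kapoun’s published guide.

```python
# A minimal sketch (assumptions noted above) of Kapoun's five criteria as a
# checklist. The guiding questions and the scoring are illustrative only.
KAPOUN_CRITERIA = {
    "accuracy":    "Is the information correct and verifiable?",
    "authority":   "Are the authors or owners identified and qualified?",
    "objectivity": "Is the content free of bias and commercial motives?",
    "currency":    "Is the content dated and kept up to date?",
    "coverage":    "Is the topic covered in adequate depth?",
}

def checklist_score(answers: dict) -> float:
    """Return the fraction of criteria (0.0-1.0) a reviewer judged satisfied."""
    return sum(bool(answers.get(c)) for c in KAPOUN_CRITERIA) / len(KAPOUN_CRITERIA)

# Example: a site that looks current and broad but hides authorship and motive.
print(checklist_score({"currency": True, "coverage": True}))  # 0.4
```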

Considering that the literature has identified the 
need to understand and develop medical students’ 
ability to critically analyse textual information and that 
so much of their information comes unfiltered from the
Internet, this study attempts to answer three research 
questions: (1) Prior to any teaching, to what extent can 
undergraduate medical students evaluate the quality of 
a medical-related website, and on what criteria do they 
base their evaluation?; (2) Is their evaluation related to 
prior computer usage or other experience?; (3) After 
receiving basic instruction on website evaluation, 
how would the new knowledge affect their ability to 
evaluate the same website?

Methods

This mixed-methods study was conducted at Sultan
Qaboos University, Muscat, Oman, from September 

to December 2018, with 181 Medical and Basic Medical 
Sciences students from the Medical Informatics I 
course. Students were taught in three sections, on 
three consecutive days, by the same teacher, at the 
same venue, using the same notes and methods. 

As part of their Medical and Basic Medical
Sciences undergraduate degrees, students complete
a semester-long Medical Informatics I course. 
Highlighting and teaching the basics of website 
evaluation is part of the course. Most of the students 
have been admitted straight from school and may have 
attended a foundation year at the University, which 
includes computer literacy. Some students are in their 
first semester and others are in their third. 

A US-based, health-related website was used in 
the study. The website’s complete name and URL were
disclosed to the Sultan Qaboos University College 
of Medicine and Health Sciences Medical Research 
Ethics Committee for ethical approval and is available 
upon request. It is a public website and contains 
health-related information with superficial indicators 
of authenticity. The name of the website is ‘Global 
[medical procedure] Institute’ and it offers access to 
textbooks with medical titles. The topics on the website 
are medical related and it claims to contain open and 
uncensored information on these medical topics. The 
‘About Us’ link on the website describes the institute’s 
history. Closer inspection of the website reveals several 
problems: it lists no physical address and provides no identity or qualifications for the website’s authors or owners.
Moreover, it is a publishing house. On the ‘About Us’ 
page, only after clicking on a single ‘Disclaimer’ link 
does one find that the information on the website is 
for ‘educational and informational purposes only’ and 
is not to be taken as medical advice, that all data on the 
website should be verified, that it is not endorsed by 
“the American Academy of Pediatrics, the FDA, CDC or any other federal, state or ‘official’ organization”
and that the website does not carry an HONCode or 
any similar certification. Finally, the disclaimer’s last 
line says that the website’s authors are not medical 
practitioners. All this information is ‘buried’ away 
from the front page. No medical knowledge is required 
to determine the reliability of the information on the 
website. 

An electronic questionnaire was created for the 
students to complete anonymously through their 
learning management system. In addition to collecting students’ demographic data, the questionnaire drew partially on a study that examined students’ ability to create mobile apps, in which students had been asked about their previous information technology (IT) and health sciences education and training (examples were included in the questionnaire), experience as a
programmer, electronic device usage and daily hours 
spent on the Internet.16 It was believed that asking 
about this broad spectrum of experiences would help 
identify any experiential subtleties that may impact 
students’ ability to evaluate the website. 

Regarding students’ perception of the website, three questions were asked on the site’s trustworthiness, whether they would recommend the website to a patient and the website’s overall quality, using a Likert
scale (0–10). Finally, an open-ended question was 
asked on the students’ reasons and the criteria followed 
to answer the questions regarding the website’s quality.

The overall process followed the standard, established format of a pre-test, single intervention (with practice) and post-test design commonly performed in clinical and non-clinical medical education and training interventions.17–19

The process was as follows: (1) students were directed to the website and allowed to explore it for 10–15 minutes; (2) students completed the anonymous questionnaire (using temporary identifications), including a consent form; (3) for the intervention, the teacher didactically taught the students Kapoun’s evaluation criteria and focused on his criteria and related questions (this took approximately 45 minutes and the students were provided notes so that they could refer to them during the subsequent evaluations);15 (4) students worked in pairs or groups of three and evaluated different websites to practise their new skills (they chose a website from a list that excluded the website listed in step 1); and (5) after gaining feedback on and participating in discussions about their practice websites, the students re-evaluated the original website and completed the second questionnaire, which asked for the identification code and the same questions on website evaluation as before.

Data were included in the analysis only if the student completed both the pre- and post-intervention questionnaires and identified themselves with their temporary usernames consistently. Data collected pre- and post-intervention were compared to track changes in students’ perceptions.
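
A minimal sketch of this inclusion rule, assuming the responses are held in two tables keyed by the temporary ID (the table and column names are assumptions for illustration):

```python
# A hedged sketch of the inclusion rule described above: a student's data are
# kept only when the same temporary ID appears in both questionnaires.
import pandas as pd

pre = pd.DataFrame({"temp_id": ["s01", "s02", "s03"], "trust_pre": [6, 5, 7]})
post = pd.DataFrame({"temp_id": ["s01", "s03", "s04"], "trust_post": [4, 8, 5]})

# An inner join drops anyone who completed only one questionnaire (s02, s04).
paired = pre.merge(post, on="temp_id", how="inner")
print(paired)  # s01 and s03 remain; their pre/post scores can now be compared
```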

The raw quantitative data were captured in Microsoft Excel, Version 2016 (Microsoft Corp., Redmond, Washington, USA) by one researcher, and statistical tests were performed. A second researcher independently performed the same statistical tests using the Statistical Package for the Social Sciences (SPSS), Version 25.0 (IBM Corp., Armonk, New York, USA). The results were inspected and verified by all researchers.

As per the Kolmogorov–Smirnov test, quantitative data were normally distributed. Means, standard deviations and frequencies were calculated. Analyses of variance were conducted to identify significant age differences. To evaluate pre- and post-intervention data, t-tests for dependent samples were used. Pearson’s correlation coefficients were used to assess correlations. Associations between variables (based on information from the literature) and differences regarding the evaluations were tested. A difference was considered statistically significant at P <0.05.
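
As an illustration of this pipeline, the sketch below runs the named tests in Python on simulated scores; the means and standard deviations are taken from Table 2 and the sample size from the study, but the data themselves, the variable names and the use of SciPy are assumptions for demonstration only.

```python
# Illustrative re-creation of the statistical pipeline on simulated scores
# (NOT the study data): a Kolmogorov-Smirnov normality check, a
# dependent-samples t-test and a Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
pre = rng.normal(5.60, 2.50, 149)    # simulated pre-intervention trust scores
post = rng.normal(5.45, 2.98, 149)   # simulated post-intervention trust scores
age = rng.normal(18.86, 0.80, 149)   # simulated ages
hours = rng.normal(4.69, 2.00, 149)  # simulated daily Internet hours

# Normality check (standardised scores against the standard normal CDF).
print(stats.kstest((pre - pre.mean()) / pre.std(ddof=1), "norm"))

# t-test for dependent samples, as used for the pre/post comparison.
print(stats.ttest_rel(pre, post))

# Pearson's correlation, as used for e.g. hours on the Internet vs. age.
print(stats.pearsonr(hours, age))
```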

Qualitative data were themed by one researcher 
using QDA Miner Lite, Version 2.0.6 (Provalis 
Research, Montreal, Canada) by employing Kapoun’s 
five criteria: accuracy, authority, objectivity, currency 
and coverage. The comments were subjectively 
classified as ‘negative’ or ‘positive’, based on the attitude 
expressed. Themes and raw data were inspected and 
verified by the other researchers. As many students 
also referred directly to whether or not they would 
recommend the page to patients, this theme was 
added. Finally, students made more general comments 
on design and security; therefore, an ‘Other’ theme 
was added as well. 
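
To make the tabulation step concrete, the minimal sketch below counts invented (theme, rating) codes; the manual coding itself was performed in QDA Miner Lite, so this only illustrates how the per-theme counts in Tables 4 and 5 could be aggregated once codes exist.

```python
# A minimal sketch of the counting step only; the thematic coding was manual.
# The records below are invented examples of codes assigned to comments.
from collections import Counter

coded_comments = [
    ("Authority", "Negative"), ("Authority", "Positive"),
    ("Objectivity", "Negative"), ("Currency", "Negative"),
    ("Other", "Positive"), ("Other", "Negative"),
]
for (theme, rating), n in sorted(Counter(coded_comments).items()):
    print(f"{theme:<12} {rating:<9} {n}")
```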

Ethics approval for the study was obtained from the Sultan Qaboos University College of Medicine and Health Sciences Medical Research Ethics Committee (MREC#1792).

Table 1: Number of hours spent on the Internet per day by the included students (N = 149)

Hours    n (%)
<1       1 (0.67)
1–2      17 (11.4)
3–4      65 (43.6)
5–6      37 (24.8)
7–8      20 (13.4)
9–10     6 (4.0)
11–12    3 (2.0)

Table 2: Students’ pre- and post-intervention Likert scale mean scores when evaluating a medical website (N = 149)

Question                                                      Pre-intervention   Post-intervention   P value
                                                              (mean ± SD)        (mean ± SD)
How trustworthy would you judge the website to be?            5.60 ± 2.50        5.45 ± 2.98         0.471
How likely are you to recommend this website to a patient?    5.28 ± 2.60        4.95 ± 2.82         0.105
How would you judge the quality of the website?               5.30 ± 2.50        5.36 ± 2.79         0.757




Results

Of the 181 registered students, a total of 149 (response 
rate: 82.3%) were included in the study. Of these 149 
students, 70 (47.0%) were female, 69 (46.3%) were 
male and 10 (6.7%) did not indicate their gender. The 
sample’s gender proportions were not statistically 
different from those of the class population (P = 0.100).
The students’ ages ranged from 17 to 21 years (mean 
age: 18.86 ± 0.80 years). 

Information was obtained on the students’ prior 
training. Twelve (8.1%) students had health-related training, 23 (15.4%) had IT-related training and 29 (19.5%) had programming experience.

On average, the students spent 4.69 hours on the 
Internet per day [Table 1]. No correlation was found 
between the number of hours spent on the Internet 
and age (r = 0.079; P = 0.340) or gender (P = 0.513). 
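
The reported mean of 4.69 hours is consistent with a bin-midpoint calculation over the Table 1 frequencies; the short sketch below shows this plausible reconstruction (the midpoints are assumptions, as the paper does not state how the mean was derived).

```python
# Plausible reconstruction (not the authors' stated method) of the 4.69-hour
# mean from Table 1's binned frequencies, using assumed bin midpoints.
bin_counts = {0.5: 1, 1.5: 17, 3.5: 65, 5.5: 37, 7.5: 20, 9.5: 6, 11.5: 3}
mean_hours = sum(mid * n for mid, n in bin_counts.items()) / sum(bin_counts.values())
print(round(mean_hours, 2))  # 4.69
```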

On average, the students spent 22.3% of their Internet time on health-related searches. No significant differences in this proportion were present based on age (r = 0.021; P = 0.713) or gender (male = 19.0 ± 0.77, female = 18.7 ± 0.83; P = 0.069).

Students’ pre- and post-intervention Likert scale 
scores and the reasons for their scores were obtained. 

Two important results stand out. First, on the 10-point Likert scale, students rated the website as being slightly above average. Second, no significant change was observed in the ratings made pre- and post-intervention [Table 2].

These mean values, however, hide important 
information on the results’ distribution. The students did not merely give the same answers pre- and post-intervention; rather, a tendency towards score polarisation was present, with scores shifting both upwards and downwards [Figure 1].

This polarisation is seen again in the score changes, which indicates that, while many students adjusted their ratings correctly after the intervention, many others changed their ratings in the opposite direction. The polarisation is obscured by the nominal shift in the mean scores [Table 3].
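
A hedged sketch of how such a Lower/Equal/Higher split (as in Table 3) can be derived from paired scores, using invented data and assumed column names:

```python
# Derive the direction of change for each student from paired pre/post scores.
import numpy as np
import pandas as pd

paired = pd.DataFrame({"trust_pre": [6, 5, 7, 4], "trust_post": [4, 5, 8, 2]})
change = np.sign(paired["trust_post"] - paired["trust_pre"])
print(change.map({-1: "Lower", 0: "Equal", 1: "Higher"}).value_counts())
```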

Associations were sought between the other 
variables and the scores allocated for these questions. 
No demographic or activity variables (age, gender, 
amount of IT training, health training, hours spent 
on the Internet or the usage of the Internet for health-
related searches) were found to be associated with any 
scores (all P >0.05). 

As the qualitative data were themed according to 
Kapoun’s criteria, the data were laid out in that format 
[Tables 4 and 5]. In the pre-intervention evaluation, it 
was found that 54 negative and 46 positive comments 
could be classified under ‘Other’, many of which were 
unspecific comments about the website’s being of 
good or bad quality, unattractive, not secure or boring 
and other personal opinions. Additionally, 15 students 
commented that they did not have the knowledge or 
expertise to properly comment on the website. In total, 
students made 134 (53.2%) negative comments and 
118 (46.8%) positive comments. Of these, 80 (59.7%) of 
the negative comments and 72 (61.0%) of the positive 
comments aligned with Kapoun’s criteria or were 
aimed at the site’s value to patients.

In the post-intervention evaluation, it was found that 40 negative and 47 positive comments of the students fell under the ‘Other’ category. In total, students made 245 (54.2%) negative comments and 207 (45.8%) positive comments. Of these, 205 (83.7%) of the negative comments and 160 (77.3%) of the positive comments aligned with Kapoun’s criteria or were aimed at the website’s value to patients.

Figure 1: Pre- and post-intervention distribution of scores on students evaluating a medical website.

Table 3: Students’ changes in scores when evaluating a medical website (N = 149)

Question                                                      Lower n (%)   Equal n (%)   Higher n (%)
How trustworthy would you judge the website to be?            59 (39.6)     43 (28.9)     47 (31.5)
How likely are you to recommend this website to a patient?    62 (41.6)     48 (25.5)     39 (32.9)
How would you judge the quality of the website?               60 (40.3)     37 (24.8)     52 (34.9)





Discussion

This study examined medical students’ ability to 
evaluate websites, particularly as they would be 
expected to do so in the absence of traditional 
gatekeepers such as librarians. The average amount of 
time these students spent on the Internet was typical 
of students internationally.20 Students evaluated a 
medical-related website, received a teaching intervention and then re-evaluated that same website. No
examples of a comparative exercise had been found 
in the literature; previous studies tested students on 
reputable or well-controlled websites or students self-
selected a broad range of websites and commented on them.8,21

Table 4: Theme, rating (negative or positive), number of comments and examples before the teaching intervention on students evaluating a medical website

Accuracy, negative (n = 12): “[N]ot all information in the website are correct. Some information needs more statistics.” [#60]
Accuracy, positive (n = 2): “[T]he article and studies help to have more accuracy.” [#15]
Authority, negative (n = 22): “They have not mentioned their level of education or the field they are working in.” [#95]
Authority, positive (n = 27): “The website has a lot of references where you know that the information are true and right and know from where they got the information.” [#5]
Objectivity, negative (n = 11): “[T]he website uses false information to promotes the sales of his book .... the reason for that website is not to help further the medical research domain but for commercial reasons.” [#59]
Objectivity, positive (n = 0)
Currency, negative (n = 6): “The articles are old. So, its information may had changed and not updated.” [#65]
Currency, positive (n = 0)
Coverage, negative (n = 3): “It is true that this website have a large information about vaccination but that does not mean that it have everything we need to know.” [#55]
Coverage, positive (n = 11): “[I]t gives access to pdf 's that help a person with their inquiry and provides alternatives for your problem.” [#101]
Appropriate for Patients, negative (n = 26): “Some patient will misunderstand the information because they do not have enough knowledge.” [#149]
Appropriate for Patients, positive (n = 32): “[T]his website is useful and make the patient life more easily because it has the necessary information and data for make the right decision.” [#30]

Table 5: Theme, rating (negative or positive), number of comments and examples after the teaching intervention on students evaluating a medical website

Accuracy, negative (n = 23): “First point is the accuracy the website promotes false information.” [#59]
Accuracy, positive (n = 29): “[V]ery good website with a high accuracy.” [#14]
Authority, negative (n = 63): “[I]t does not provide secure information, from trust sources” [#109]
Authority, positive (n = 59): “[A]ll the information have a reference and copy right which varify information.” [#89]
Objectivity, negative (n = 54): “[I]t is looks like an advertisement” [#50]
Objectivity, positive (n = 12): “This website is a good website because it is accurate and objective.” [#85]
Currency, negative (n = 27): “[N]o updated studies, most of them are old.” [#32]
Currency, positive (n = 17): “[I]t was updated recently.” [#17]
Coverage, negative (n = 21): “[T]he coverage looks incomplete, there are no sources given.” [#1]
Coverage, positive (n = 24): “[T]here are sources for additional information.” [#25]
Appropriate for Patients, negative (n = 17): “I will not prefer to recommend it for my patients, as it contain some difficult articles.” [#132]
Appropriate for Patients, positive (n = 19): “It covered most of the information so it can [be] rated as a good website. I recommend this website for the patients.” [#85]




For the current study, a highly questionable
website was chosen to determine whether or not 
the students could identify the problems within it. 
The choice of a single website (rather than multiple) 
allowed for a more comprehensive view of the website 
across the full sample of students. While the broad 
results indicate a positive view of the website, a more 
detailed evaluation of the data reveals other subtleties 
and indicates that universal statements on current 
medical students’ ability to evaluate websites should 
be treated carefully.

With regards to the first research question (prior 
to any teaching, to what extent can undergraduate 
medical students evaluate the quality of a medical-
related website, and on what criteria do they base 
their evaluation?), students made more negative 
than positive comments but their overall rating was 
positive. Despite this positive tendency, results show a 
disparity across the student population, a mixed ability 
and that one cannot make a blanket statement about 
their evaluation ability [Figure 1A].

The high percentage of alignment between 
the students’ comments and Kapoun’s criteria is 
encouraging. The high number of positive comments, 
however, is discouraging. It indicates that even though 
students are aware of the criteria, their ability to match 
the case to the criteria is not optimal. 

These results extend previous researchers’ arguments that these skills are necessary for medical students.10–13
Furthermore, the current research demonstrates the 
extent to which these skills are lacking among these 
students. 

With regards to the second research question 
(is their evaluation related to prior computer usage 
or other experience?), studies have demonstrated that familiarity with one technology can lead to ease of use with another.16,22–24 In
this study, no association was found between familiarity 
with technology and the ability to evaluate websites 
or improve that ability. This is in agreement with the 
argument that teaching students the mechanics of 
using academic and medical search engines is only 
part of the solution; “the problem remains on how 
to educate students to critically evaluate information 
obtained using popular search engines”.8

As no correlation was found between students’ health-related training and their evaluation scores, such training appears to have no bearing on students’ ability to evaluate websites.

With regards to the third research question (after 
receiving basic instruction on website evaluation, 
how would this new knowledge affect their ability 
to evaluate the same website?), when examining the 
mean scores only, it appears the teaching intervention 

had no impact on students’ website evaluation ability. 
The polarisation, however, indicates that the criteria 
are not necessarily being applied correctly. 

Thus, the answer to this question is that students 
demonstrated a greater awareness of the criteria 
taught and, while many applied the criteria correctly, a 
similar number applied them incorrectly. 

This situation appears to echo a common 
complaint from clinical teachers that many students 
can recite rote-learnt lists of conditions but, when 
faced with a patient, are unable to match the patient 
to the list of conditions and arrive at a diagnosis. This 
indicates that broader critical thinking skills need to 
be focused on; these skills are derived within a broader 
educational and sociological context.

The lack of association between the level of 
technological prowess and website evaluation may 
not be entirely surprising. The likely reason for this lack of association is that the skillsets required for each are different, and it would be a mistake to consider a
website as only a technological entity rather than a 
source of information requiring critical thought and 
evaluation. 

Whether one uses Kapoun’s criteria or any other 
system, critical evaluation of information and the 
required skills for this activity appear to have little 
to do with technological familiarity. Instead, critical 
insights, reasoning and evaluation skills are needed. 
An examination of students’ critical thinking skills 
may, indeed, point to the reasons behind students’ 
poor evaluation ability. 

A 2003 United Nations (UN) report on 
development in the Arab World reported a severe 
lack of critical thinking skills among school-leaving 
Omanis.25 Since then, Oman’s higher education 
institutions have attempted to measure and address 
this. Unfortunately, follow-up studies indicate 
that Omani university students’ critical thinking, 
interpretation and evaluation scores are significantly 
below international standards.26–28 

Critical thinking and critical appraisal skills are 
essential for medical students; they cannot be assumed 
and need to be developed.10,11,13 In the current study, the causes of poor critical thinking skills likely stem from a poor schooling system. The UN report
argues, “[T]he curricula taught in Arab countries seem 
to encourage submission, obedience, subordination 
and compliance, rather than free critical thinking. 
In many cases, the contents of these curricula do 
not stimulate students to criticise political or social 
axioms. Instead, they smother their independent 
tendencies and creativity”.25 Furthermore, the report 
goes on to say, “Generally speaking, the assigned 
curricula, starting from preliminary school or even 




before, embody a concept that views education as an 
industrial production process, where curricula and 
their content serve as moulds into which fresh minds 
are supposed to be poured…. Students can do little but 
memorise, recite and perfect rote learning”.25 

Thus, when considering medical students’ ability to evaluate a website, the results of the current study point to the influence of factors much bigger than level of knowledge alone, and correction is certainly needed at a more profound level than can be accomplished by a single intervention. Further research assessing critical thinking skills and their relationship to this evaluative ability would be required for a more definitive understanding of these factors.

The current study was limited because it was conducted in a single year on one group of students. Moreover, the long-term impact of the teaching was not assessed; this could be examined in follow-up research.

Conclusion

This study found that the included undergraduate 
medical students’ ability to evaluate the quality of 
health-related websites is mixed. Prior exposure to 
and use of technology has no bearing on this ability. 
A single intervention has a limited and mixed impact, 
possibly as a result of poor prior critical thinking skills. 
Given that medical students and health professionals 
increasingly rely upon websites and other information 
sources that do not undergo quality control, it is 
recommended that training and practice of the 
required skills be reinforced.

Authors’ Contributions
TL, KM, SZ and AH-W conceptualised and 
designed the study. TL, KM and AH-W created 
the instrument used. Instrument and intervention 
delivery was conducted by KM. TL and KM analysed 
the quantitative data. KM produced the themes of 
the qualitative data, while TL, KM, SZ and AH-W 
verified the qualitative data. TL, KM and AH-W 
interpreted the data. All authors drafted and revised 
the manuscript. All authors approved the final version 
of the manuscript.

Conflict of Interest
The authors declare no conflicts of interest. 

Funding

No funding was received for this study.

References
1. Masters K. Access to and use of the internet by South African general practitioners. Int J Med Inform 2008; 77:778–86. https://doi.org/10.1016/j.ijmedinf.2008.05.008.

2. Masters K. For what purpose and reasons do doctors use the internet: A systematic review. Int J Med Inform 2008; 77:4–16. https://doi.org/10.1016/j.ijmedinf.2006.10.002.

3. Sommerhalder K, Abraham A, Zufferey MC, Barth J, Abel T. Internet information and medical consultations: Experiences from patients’ and physicians’ perspectives. Patient Educ Couns 2009; 77:266–71. https://doi.org/10.1016/j.pec.2009.03.028.

4. Moick M, Terlutter R. Physicians’ motives for professional internet use and differences in attitudes toward the internet-informed patient, physician-patient communication, and prescribing behavior. Med 2 0 2012; 1:e2. https://doi.org/10.2196/med20.1996.

5. MacLeod A, Fournier C. Residents’ use of mobile technologies: Three challenges for graduate medical education. BMJ Simul Technol Enhanc Learn 2017; 3:99–105. https://doi.org/10.1136/bmjstel-2016-000185.

6. Battineni G, Baldoni S, Chintalapudi N, Sagaro GG, Pallotta G, Nittari G, et al. Factors affecting the quality and reliability of online health information. Digit Health 2020; 6:2055207620948996. https://doi.org/10.1177/2055207620948996.

7. Kritz M, Gschwandtner M, Stefanov V, Hanbury A, Samwald M. Utilization and perceived problems of online medical resources and search tools among different groups of European physicians. J Med Internet Res 2013; 15:e122. https://doi.org/10.2196/jmir.2436.

8. Ghezzi P, Chumber S, Brabazon T. Educating medical students to evaluate the quality of health information on the web. In: Floridi L, Illari P, Eds. The Philosophy of Information Quality. Cham, Switzerland: Springer International Publishing, 2014. pp. 183–99. https://doi.org/10.1007/978-3-319-07121-3_10.

9. Burd A, Chiu T, McNaught C. Screening internet websites for educational potential in undergraduate medical education. Med Inform Internet Med 2004; 29:185–97. https://doi.org/10.1080/14639230400005982.

10. Watson HR, Burr S. Research skills in medical education. MedEdPublish 2018; 7:151. https://doi.org/10.15694/mep.2018.0000151.1.

11. Stark P, Ellershaw J, Newble D, Perry M, Robinson L, Smith J, et al. Student-selected components in the undergraduate medical curriculum: A multi-institutional consensus on assessable key tasks. Med Teach 2005; 27:720–5. https://doi.org/10.1080/01421590500271530.

12. Murdoch-Eaton D, Drewery S, Elton S, Emmerson C, Marshall M, Smith JA, et al. What do medical students understand by research and research skills? Identifying research opportunities within undergraduate projects. Med Teach 2010; 32:e152–60. https://doi.org/10.3109/01421591003657493.

13. Laidlaw A, Aiton J, Struthers J, Guild S. Developing research skills in medical students: AMEE Guide No. 69. Med Teach 2012; 34:e754–71. https://doi.org/10.3109/0142159X.2012.704438.

14. General Medical Council. Tomorrow’s Doctors: Outcomes and Standards for Undergraduate Medical Education. London, UK: General Medical Council, 2009.

15. Kapoun J. Teaching undergrads WEB evaluation: A guide for library instruction. Coll Res Libr News 1998; 59:522–3. https://doi.org/10.5860/crln.59.7.522.

16. Masters K. Health professionals as mobile content creators: Teaching medical students to develop mHealth applications. Med Teach 2014; 36:883–9. https://doi.org/10.3109/0142159X.2014.916783.


17. Ashokka B, Dong C, Law LS, Liaw SY, Chen FG, Samarasekera DD. A BEME systematic review of teaching interventions to equip medical students and residents in early recognition and prompt escalation of acute clinical deteriorations: BEME Guide No. 62. Med Teach 2020; 42:724–37. https://doi.org/10.1080/0142159X.2020.1763286.

18. Deliz JR, Fears FF, Jones KE, Tobat J, Char D, Ross WR. Cultural competency interventions during medical school: A scoping review and narrative synthesis. J Gen Intern Med 2020; 35:568–77. https://doi.org/10.1007/s11606-019-05417-5.

19. Omer U, Danopoulos E, Veysey M, Crampton P, Finn G. A rapid review of prescribing education interventions. Med Sci Educ 2020; 31:273–89. https://doi.org/10.1007/s40670-020-01131-8.

20. Galanek JD, Gierdowski DC, Brooks DC. ECAR Study of Undergraduate Students and Information Technology. Louisville, CO, USA: ECAR, 2018.

21. Tannery NH, Foust JE, Gregg AL, Hartman LM, Kuller AB, Worona P. Use of web-based library resources by medical students in community and ambulatory settings. J Med Libr Assoc 2002; 90:305–9.

22. Wiedenbeck S, LaBelle D, Kain V. Factors affecting course outcomes in introductory programming. In: Dunican E, Green T, Eds. Proceedings of the 16th Workshop of the Psychology of Programming Interest Group. Carlow, Ireland: PPIG, 2004. pp. 97–110.

23. Ivins J, Ong MP. Psychometric assessment of computing undergraduates. In: Romero P, Good J, Chaparro EA, Bryant S, Eds. 17th Workshop of the Psychology of Programming Interest Group. Sussex, UK: PPIG, 2005. pp. 305–19.

24. Bosch N, D’Mello S, Mills C. What emotions do novices experience during their first computer programming learning session? In: Lane H, Yacef K, Mostow J, Pavlik P, Eds. Lecture Notes in Computer Science. Berlin/Heidelberg, Germany: Springer-Verlag, 2013. pp. 11–20.

25. UNDP. Arab Human Development Report 2003: Building a Knowledge Society. New York, USA: United Nations Development Programme, Arab Fund for Economic and Social Development, 2003.

26. Al Barwani T, Al Yahmadi H, Clayton D, Al Kalbani M, Neisler O, Al Sulaimani H, et al. Measuring against expectations: Development of a multidimensional profile of university readiness of Omani higher education intake 2011–2013 (Case of SQU). In: Quality Management & Enhancement in Higher Education, 20–21 February 2012. Muscat, Oman: Sultan Qaboos University, 2012.

27. Kumar R, James R. Evaluation of critical thinking in higher education in Oman. Int J High Educ 2015; 4:33–43. https://doi.org/10.5430/ijhe.v4n3p33.

28. Neisler O, Clayton D, Al-Barwani T, Al Kharusi H, Al-Sulaimani H. 21st century teacher education: Teaching, learning and assessment of critical thinking skills at Sultan Qaboos University. In: Redefining Teacher Education for the Post-2015 Era: Global Challenges and Best Practices. New York, USA: Nova, 2016. pp. 77–95.
