 

BlindSense: An Accessibility-inclusive Universal 
User Interface for Blind People 

 

Akif Khan
Department of Computer Science
University of Peshawar
Peshawar, Pakistan
akif@uop.edu.pk

Shah Khusro
Department of Computer Science
University of Peshawar
Peshawar, Pakistan
khusro@uop.edu.pk

Iftikhar Alam
Department of Computer Science
University of Peshawar
Peshawar, Pakistan
iftikharalam@uop.edu.pk

Abstract—A large number of blind people use smartphone-based assistive technology to perform their common activities. To provide a better user experience, the existing user interface paradigm needs to be revisited. A simplified, semantically consistent, and blind-friendly adaptive user interface model is proposed. The proposed solution is evaluated through an empirical study with 63 blind people, which showed an improved user experience in performing common activities on a smartphone.

Keywords—adaptive UI; blind people; smartphone; blind-friendly

I. INTRODUCTION 
A large number of blind people use state-of-the-art assistive technologies to perform their daily life activities [1-3]. Smartphone-based assistive technologies are an emerging trend for blind people due to built-in features such as accessibility, usability, and enhanced interaction [4, 5]. Accessibility services such as screen reading (talk-back), gesture controls, haptic feedback, screen magnification, large text, color contrast, inverted colors, screen brightness control, shortcuts, and virtual assistants help blind and visually impaired people perform many operations on smartphones. However, existing smartphone-based interfaces face several issues in delivering a unified, usable, and adaptive solution to blind people. Navigational complexity in interface design, inconsistency in buttons, icons, and screen layouts, difficulty identifying and selecting non-visual items on the screen, and traditional input mechanisms all contribute to increased cognitive load [6]. In addition, every mobile application has its own flow of interaction, placement of non-visual items, layout preferences, and distinct functionality. Notably, it is difficult to establish a balance between the accessibility and the usability of a mobile application: in most cases, mobile apps are either accessible but barely usable, or usable but barely accessible [7]. Most currently available mobile apps are inaccessible to blind people, either because they have limited usability or because they do not adhere to web/mobile accessibility guidelines [8]. The usability of smartphone-based user interfaces can be improved by using adaptive user interface paradigms. Adaptive user interfaces support context-awareness and can generate a new instance of the interface in response to changes in the environment, user preferences, and device usage [9]. This can help blind people personalize their smartphone user interface (UI) layouts, widgets, and UI controls for a particular application, irrespective of their technical ability, skill set, and device handling capabilities. Achieving a considerably improved blind-friendly interface design requires an extensive revision of existing designs to meet the requirements and needs of blind people. This may require a technical framework supported by an adaptation mechanism that addresses diverse user capabilities, needs, and contexts of use in order to ensure a high degree of usability and acceptability [10].

This paper aims to devise a universal UI design for blind people that customizes the interface components of commonly available mobile applications into a blind-friendly, simplified UI. This provides a simplified, semantically consistent, easy-to-use interface for operating common mobile apps on a smartphone. In addition, blind people gain better control over interface customization and the re-organization of interface elements according to their requirements. The vital contribution of this paper is to improve the user experience of blind people. The next section provides an overview of related work pertaining to accessibility-inclusive user interface design.

II. RELATED WORK 
Through a series of studies, researchers have analyzed and identified recommendations for accessibility-inclusive UIs for blind people [11, 12]. The emergence of smartphone-based UIs has opened new vistas for visually impaired and blind people. However, it also raises questions of usability and accessibility, and the challenge of how to make these devices more usable for blind people [13]. In the pre-touchscreen era, mobile devices had physical controls for navigation and operation. Existing touchscreen interfaces, by contrast, are prone to a number of issues due to the absence of physical buttons and tangible user interface controls, making these devices difficult to operate [14, 15]. Some common usability issues are summarized in Table I.
 



 

TABLE I.  COMMON USABILITY ISSUES IN TOUCHSCREEN USER INTERFACES FOR BLIND PEOPLE

Parameter | Description | HCI Model Coverage [26] | Limitations [14, 16-19]
Cognitive overload [21, 22] | Limited organizational and operational capabilities of existing accessibility services result in increased cognitive load. Complex UI design, inadequate labels of UI components, missing links, accessing non-visual items on screen, task performance, and inconsistency are some of the factors contributing to cognitive overload for blind people. | Task, Domain, Dialog, Presentation, Platform, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off, Device independence
Placement and selection of non-visual items, layouts, search [12, 23-27] | Placing non-visual items on the screen and locating and identifying a particular item of interest are key issues. Remembering user action status and following a pattern of activities is a challenge. Searching for and retrieving particular information is also a difficult activity to perform. | Task, Dialog, Presentation, User | Semantic loss, Navigational complexity, Task adequacy
Keys with multiple functions [28] | The lack of physical keys on a soft keypad results in a higher chance of wrong touches. Besides, many actions are associated with one key, which creates confusion; users are often unaware of which functionality is associated with a particular key. | Task, Presentation, User | Semantic loss, Task adequacy
Automated assistance [29] | Automated assistance tools push information proactively, without a user request. Extensive use of such assistance systems may burden blind people. | User, Platform | Semantic loss, Task adequacy, Cognitive overload
Haptic feedback [30] | The use of haptic feedback and gesture controls is an emerging issue for blind people; e.g., consistent and appropriate feedback at the right time is inadequate in existing interfaces. | Task, Dialog, Presentation, User | Task adequacy, Dimensional trade-off
User control over interface components (UI adaptation) [31] | Inadequate UI flexibility and limited control over UI personalization are key issues. Every mobile application should provide meaningful entry and exit paths/points, accommodate user requirements, and allow the user to customize interface layouts and manipulate non-visual objects directly. | Task, Dialog, Presentation, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off
Device incompatibility [32] | The final user interface generated on different devices behaves differently, and the generated outcome does not offer interoperability across operating systems and devices. Certain applications require pre-installed libraries and utilities to operate properly. Cross-mobile and cross-platform support is a primary aspect lacking in currently available interfaces. | Task, Domain, Dialog, Presentation, Platform, User | Semantic loss, Dimensional trade-off, Device independence
Context of use [12, 27] | Lack of use of the context of use, and of subsequent adaptation support, results in the loss of precise and accurate information about the device, environment, and user. | Task, Domain, Dialog, Presentation, Platform, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off, Device independence
Lack of logical order, navigational items, and menu hierarchy [24, 32] | The logical ordering of non-visual items, contents, and UI elements is a challenge. A complex menu hierarchy creates difficulties in making the right menu selection at the right time. | Task, Dialog, Presentation, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off
Limited consistency and persistency [31] | Existing interfaces have limited persistency and consistency, which makes it difficult for blind people to remember every action on the screen. | Task, Presentation | Semantic loss, Navigational complexity, Task adequacy
Learnability and discoverability of UIs [12] | Learnability and discoverability are key challenges in currently available applications. Discoverability concerns the time and ease with which the user can begin an effective interaction with the system. | Task, Presentation, User | Semantic loss, Navigational complexity, Task adequacy
Inadequate mapping of feedback [10] | Although first-generation haptic feedback is available in the form of vibratory motors, it provides only a limited sensation for blind people operating smartphones. | Task, Dialog, Presentation, User | Semantic loss, Task adequacy, Dimensional trade-off
Exhaustive text entry [12] | A typical keypad, inadequate labels, small UI elements, and text-to-speech responses during text entry reduce the efficiency of blind people. The error rate and the number of missed touches with traditional keypads are usually high. | Task, Presentation | Semantic loss, Task adequacy
Screen orientation, size, resolution [17] | The usability of touchscreen interfaces is affected by screen characteristics such as screen size and orientation changes. Small buttons and UI elements have an adverse effect on performance. Orientation changes also increase the difficulty of learnability and discoverability. | Task, Dialog, Presentation, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off, Device independence
User model fragmentation [13, 17] | Every application preserves several models locally, storing and retrieving model information from a local repository to enable model reuse. However, heterogeneous application models may reflect only a partial view of user behavior and application usage in a particular scenario. | Task, Domain, Dialog, Presentation, Platform, User | Semantic loss, Navigational complexity, Task adequacy, Dimensional trade-off, Device independence



 

The inclusion of accessibility in performing daily tasks through different applications and systems is highlighted in [14, 16-19]. Screen readers and built-in accessibility services have considerably improved the usability of devices for blind people [20]. The preliminary focus was the ability to interact with smartphones for common tasks such as reading a text message and identifying objects of interest and colors [4]. The advent of touchscreen technology replaced physical controls, elements, and directional anchors, creating difficulties in several operations. However, tactile feedback, haptics, and multimodal interaction provide a better basis for visual/auditory interaction [17, 21]. Touchscreens also offer a number of enabling technologies, such as haptic feedback, gesture control systems [22], text-to-speech systems, and screen reading accessibility services (e.g., TalkBack for Android, VoiceOver for Apple), that allow blind people to have the contents of the screen read aloud and to operate smartphone interfaces [23]. Blind people usually avoid content that creates accessibility problems for them [24]. Even sighted people spend 66% of their time editing and correcting text in automatic speech recognizer output on desktop systems [25]. Beyond the issues reported above, Table I summarizes the usability issues blind people face in performing various activities on smartphones. These problems are identified and analyzed in the specific context of the HCI model [26], including the task, domain, dialog, presentation, platform, and user models.

In summary, the usability of touchscreen UIs merits further investigation. This requires revamping existing UIs based on the needs and expectations of blind people. Many researchers now emphasize a user-adaptive paradigm for designing simple-to-use, accessible, and user-friendly interfaces based on HCI guidelines [14, 27-29]. In addition, a few studies have proposed usable and accessibility-inclusive UIs, but the results need further improvement. Researchers should consider improving the accessibility, usability, and technical and operational effectiveness of smartphone-based UIs for blind people. From the findings of a literature review covering human-computer interaction, usability, accessibility, and the diversified requirements of blind people, we developed a universal accessibility framework for smartphone UIs for blind people. The proposed framework is designed in view of the related work summarized in Table I. BlindSense, the proposed universal UI design, is discussed in the next section.

III. BLIND-FRIENDLY UNIVERSAL USER INTERFACE DESIGN 
The technical abilities and tasks involved in the design of a smartphone-based blind-friendly UI were analyzed in the previous section. We analyzed common mobile applications by capturing the nature of each app, its category, the total number of activities, the number of inputs and outputs, the number of UI controls used, the context of use, and the minimal feature set. These common applications include SMS, Call, Contacts, Email, Skype, WhatsApp, Facebook, Twitter, Calendar, Location, Clock, Reminders, Reading Books, Reading Documents, Identifying Products, Reading News, Weather, Instagram, and Chrome. However, these applications were designed for sighted people; thus, a number of activities and sub-activities are redundant or repetitive, have complex navigational structures, or require long routes to follow. The minimal feature sets were extracted through manual usability heuristics and are reported in Table II. A minimal set of activities, inputs, outputs, and contents for operating common applications was thus outlined prior to the design of our proposed architecture.

The proposed BlindSense is a simplified, consistent, usable, adaptive universal UI model based on user preferences, device logging, and context of use. Its novel contribution is to customize and generate an optimal interface extracted from the UI controls, layouts, interfaces, and widgets of the existing common applications. This automatic transformation relies on semantic web technologies to model and transform user interfaces, turning the complicated designs of existing mobile applications into a blind-friendly, simplified UI. BlindSense is a pluggable, layer-based architecture promoting openness and flexibility in the technical design of the system. Designers or users can define their own screen layouts, text-entry plug-ins, adaptation rules, templates, themes, and modes/patterns of interaction. The proposed architecture is illustrated in Figure 1 and detailed below.

TABLE II.  SMARTPHONE APPLICATIONS FOR PERFORMING COMMON ACTIVITIES

Application | Category | Activities | Inputs | UI Controls | Minimal Feature Set
Call | Communication | 13 | 43 | 12 | 6
SMS | Communication | 11 | 35 | 5 | 5
Contacts | Communication | 15 | 44 | 4 | 5
Emails | Communication | 25 | 67 | 12 | 8
Calendar | Productivity | 7 | 32 | 8 | 3
Clock | Productivity | 13 | 35 | 5 | 4
Location | Navigation | 14 | 32 | 6 | 4
Facebook | Communication | 26 | 58 | 7 | 5
Twitter | Communication | 18 | 40 | 4 | 4
Skype | Communication | 30 | 45 | 9 | 4
WhatsApp | Communication | 24 | 35 | 9 | 4
Chrome | Surfing Internet | 19 | 26 | 8 | 3
Instagram | Picture Sharing | 11 | 30 | 8 | 5
News | Information access | 14 | 40 | 5 | 4
Weather | Information access | 5 | 14 | 4 | 3
Reading Books | Text reader | 14 | 25 | 6 | 2
Reading Documents | Text reader | 16 | 28 | 8 | 3

 

A. User Interface Layer
The UI layer serves as the interaction point between the smartphone and the blind user. The BlindSense application presents a wizard that lets blind people customize their UI. User input is captured through text entry, gesture controls, and voice commands for personalization and other operations. The application transforms the features extracted from the Common Element Set (CES) into a Minimal Feature Set (MFS) through a process of abstraction and adaptation. The CES describes features of the UI, layouts, themes, widgets, etc. The system automatically derives the MFS from the user preferences, device logging history, context of use, and environment. These feature sets are deployed at the activity or application level depending on the number of input/output UI elements at that level, as reported in Table II.
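As an illustration of the CES-to-MFS reduction described above, the Java sketch below filters a common element set down to the features a user actually needs. All names (FeatureReducer, usageCounts, the usage threshold) are hypothetical simplifications; the published prototype derives the MFS from user preferences, device logs, and context rather than this single heuristic.

import java.util.*;
import java.util.stream.Collectors;

final class FeatureReducer {

    // Keep a feature if the user explicitly prefers it or if the device
    // log shows it was used at least `usageThreshold` times.
    static Set<String> toMinimalFeatureSet(Set<String> commonElementSet,
                                           Map<String, Integer> usageCounts,
                                           Set<String> preferredFeatures,
                                           int usageThreshold) {
        return commonElementSet.stream()
                .filter(f -> preferredFeatures.contains(f)
                        || usageCounts.getOrDefault(f, 0) >= usageThreshold)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> ces = Set.of("dial_pad", "call_log", "video_call",
                                 "speed_dial", "conference", "voicemail");
        Map<String, Integer> log = Map.of("dial_pad", 42, "call_log", 17);
        Set<String> preferred = Set.of("speed_dial");

        // Prints the reduced set, e.g. [dial_pad, call_log, speed_dial]
        System.out.println(toMinimalFeatureSet(ces, log, preferred, 10));
    }
}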

 

 
Fig. 1.  Universal User Interface Architecture  

B. Transformation Layer
This layer ensures the delivery of a simplified and personalized UI, drawing on the user, impairment, accessibility, device, UI component, and adaptation models. The adaptation knowledge base contains a set of personalization and adaptation rules. Input from the User Information Model (UIM) and the context model is processed at this layer, resulting in the generation of the simplified UI. A user model may contain static information (such as screen partitions) or dynamic information (such as the level of abstraction). The UIM consists of the following profiles: user capability, interest (interest level: high, medium, low; interest category: computers, sports, entertainment, food, reading, etc.), education, health, impairment (visually impaired, blind, deaf-blind, motor-impaired, etc.), emergency, and social profiles. The adaptation manager consists of classes representing information from the several models used for UI adaptation. It retrieves abstraction levels and adaptation rules related to a specific disability from the adaptation repository. The abstraction mechanism can be applied at the element, group-element, presentation, and application levels. For instance, an adaptation rule for generating a sequence of actions and activities would be applied at the task and domain levels. The final UI is generated using Android XML layouts. When user preferences change, the adaptation components are updated with the latest information retrieved from the user profile ontology, and a new instance of the UI is generated.
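A minimal sketch of the adaptation step described above follows, assuming rules are plain functions over a layout specification keyed by impairment profile. LayoutSpec, AdaptationManager, and the profile keys are illustrative names; in the actual system the rules live in an ontology-backed adaptation repository and the result is rendered as Android XML layouts.

import java.util.*;
import java.util.function.UnaryOperator;

final class LayoutSpec {
    int screenPartitions = 2;   // static user-model information
    int abstractionLevel = 0;   // dynamic user-model information
    String theme = "default";
}

final class AdaptationManager {
    private final Map<String, List<UnaryOperator<LayoutSpec>>> repository = new HashMap<>();

    // Register an adaptation rule for a given impairment profile.
    void register(String profile, UnaryOperator<LayoutSpec> rule) {
        repository.computeIfAbsent(profile, k -> new ArrayList<>()).add(rule);
    }

    // Apply every rule stored for the profile; when no rules exist,
    // the baseline specification is returned unchanged.
    LayoutSpec adapt(String profile, LayoutSpec baseline) {
        LayoutSpec spec = baseline;
        for (UnaryOperator<LayoutSpec> rule : repository.getOrDefault(profile, List.of())) {
            spec = rule.apply(spec);
        }
        return spec;
    }
}

A rule such as "blind users get four screen partitions and a higher abstraction level" would then be registered once and re-applied whenever the preferences change, regenerating the UI instance.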

C. Context Layer
This layer captures and stores information pertaining to the device, environment, user, and context through the context extractor. The context model is composed of user, platform, and environment models. The user model describes needs and preferences, while the platform model provides information related to the device and platform, including screen resolution and size, screen divisions, button size, keypads, aspect ratio, etc. The environment model represents information specific to the location of the user's point of interaction, the level of ambient light, etc. Only selective context sensing is performed in our case; for example, light and noise sensing are not required continuously for UI updating, and can be set up once, stored, and retrieved at any time. The smartphone sensors store data as key-value pairs, as nested structures, and in the formal ontology. A user context model containing information about the context provider, context property, and context status is generated at the end. The context data extractor filters the context data for relevance to UI adaptation.
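The provider/property/status triple described above can be modeled as simple key-value records; the sketch below is an assumption about the shape of the context store, not the paper's implementation. It reflects the selective-sensing idea: a value is sensed once, cached, and reused rather than polled continuously.

import java.util.*;

record ContextEntry(String provider, String property, String status) {}

final class ContextExtractor {
    private final Map<String, ContextEntry> store = new HashMap<>();

    // Sense once and cache; no continuous polling of light/noise sensors.
    void update(String provider, String property, String status) {
        store.put(provider + ":" + property,
                  new ContextEntry(provider, property, status));
    }

    // Filter the cached context down to properties relevant to UI adaptation.
    List<ContextEntry> relevantTo(Set<String> adaptationProperties) {
        return store.values().stream()
                .filter(e -> adaptationProperties.contains(e.property()))
                .toList();
    }
}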

D. Semantic Layer
The deeper aspects of UI adaptation involve handling model information, context awareness, and their associated semantics. This layer encapsulates access to a comprehensive UI adaptation ontology used for user profiling and preferences, together with the adaptation, context, device, and accessibility ontologies. It provides technology-independent access to the metadata encoded in the ontology. Additional content may also be associated with the activities and tasks related to UI modeling, e.g., multimedia captions, audio descriptions, and interpretations of several other patterns. The architecture is developed using a re-configurable, modular approach to realize the inclusion of semantic web technologies.
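One plausible way to realize the technology-independent ontology access described above is a SPARQL query over the adaptation ontology, e.g., with Apache Jena. The ontology file name, namespace, and property IRIs below are assumptions for illustration; the paper does not specify them.

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;

public final class OntologyAccess {
    public static void main(String[] args) {
        // Load the UI-adaptation ontology (file name is hypothetical).
        Model model = ModelFactory.createDefaultModel();
        model.read("ui-adaptation.owl");

        // Retrieve adaptation rules that apply to blind users.
        String sparql =
            "PREFIX bs: <http://example.org/blindsense#> " +
            "SELECT ?rule WHERE { ?rule bs:appliesToImpairment bs:Blind }";

        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("rule"));
            }
        }
    }
}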

E. Storage Layer
The storage layer is responsible for managing several storage sources, including ontologies and data stores. Information about user profiles, preferences, contextual data, adaptation rules, and layout details is stored in and retrieved from this layer once all required data has been articulated from the relevant models. BlindSense uses the semantic reasoning capabilities of the ontological modeling to present a final UI to blind people. The user may change preferences related to layout, theme, and interaction type at runtime. The structural model of the universal UI is represented as a state transition diagram (STD) in Figure 2.
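For the runtime preference changes mentioned above, Android's SharedPreferences is one natural persistence mechanism on the prototype's platform; the sketch below is illustrative only (the key names and default theme are assumptions, and the published system stores richer data in ontologies).

import android.content.Context;
import android.content.SharedPreferences;

final class PreferenceStore {
    private final SharedPreferences prefs;

    PreferenceStore(Context ctx) {
        prefs = ctx.getSharedPreferences("blindsense_prefs", Context.MODE_PRIVATE);
    }

    // Persist a layout/theme preference changed at runtime.
    void saveTheme(String theme) {
        prefs.edit().putString("theme", theme).apply();
    }

    // Fall back to a baseline value when nothing has been stored yet.
    String loadTheme() {
        return prefs.getString("theme", "high_contrast");
    }
}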

 

 
Fig. 2.  STD for system initialization 



 

All these states are stored in the system and are executed in a specific order. The diagram begins at START, where the system waits for input. Once the user provides a particular input, other processes are initiated by switching through several states to complete an activity or perform a particular action. In case of error, the system returns to the initial state, and the error is recorded in system memory.
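A minimal Java sketch of this state handling follows, assuming four illustrative states (the paper's Figure 2 does not name them): on any error the machine returns to the initial state and records the error, as described above.

import java.util.ArrayDeque;
import java.util.Deque;

enum UiState { START, AWAIT_INPUT, PROCESS, COMPLETE }

final class ActivityStateMachine {
    private UiState current = UiState.START;
    private final Deque<String> errorLog = new ArrayDeque<>();

    UiState step(String input) {
        try {
            switch (current) {
                case START:       current = UiState.AWAIT_INPUT; break;
                case AWAIT_INPUT: current = input.isBlank() ? UiState.AWAIT_INPUT
                                                            : UiState.PROCESS; break;
                case PROCESS:     current = UiState.COMPLETE; break;
                case COMPLETE:    current = UiState.START; break;
            }
        } catch (RuntimeException e) {
            errorLog.push(String.valueOf(e));  // record the error in "system memory"
            current = UiState.START;           // return to the initial state
        }
        return current;
    }
}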

F. Perspective Workflow
BlindSense can be used as an accessibility service or as an individual application. When the accessibility service is enabled for the first time, the system loads specific installed applications of common use and extracts their common element features. The user starts personalizing the layout by selecting a few preferences about screen divisions, mode of interaction, etc. The device logging, context of use, and user profiling are articulated automatically. The rules used to generate the simplified UI are looked up in the adaptation repository, where the specific adaptation rules are applied. When no specific rule is available for a given transformation, the transformation falls back to the default or baseline specification. The complete application simplification process is illustrated in Figure 3. The prototype was developed using the Android SDK.
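The rule lookup with a baseline fallback can be sketched as below; SimplificationPipeline and the string-based rule representation are hypothetical simplifications of the ontology-backed repository described earlier.

import java.util.Map;
import java.util.Optional;

final class SimplificationPipeline {
    private final Map<String, String> adaptationRepository;

    SimplificationPipeline(Map<String, String> adaptationRepository) {
        this.adaptationRepository = adaptationRepository;
    }

    // Look up the transformation rule for an application; when no specific
    // rule exists, fall back to the default/baseline specification.
    String transformationFor(String appId) {
        return Optional.ofNullable(adaptationRepository.get(appId))
                       .orElse("baseline-specification");
    }
}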

 

 
Fig. 3.  BlindSense proof-of-concept  

IV. AIMS AND HYPOTHESES
We studied the user experience by analyzing user satisfaction in performing several activities on the proposed universal UI design. Each participant rated perceived usefulness, ease of use, system usability scale, and user experience. To the best of our knowledge, a similar universal UI design for blind people has not been presented before. Thus, the aim was to investigate the effect on user experience of using common applications on a smartphone through the universal UI design, in order to gain a systematic understanding of the user experience across overall operations. We aimed to formulate an assumption about which variables are the most central to the user's experience. The following hypotheses were made:

• H1: The perception of usefulness in performing common activities through a universal UI for blind people, in terms of success in solving tasks/activities on a smartphone, positively influences user satisfaction.

• H2: The ease of use in personalizing a universal UI for blind people, in terms of task completion, personalization, and the number of accurate touches, will improve the user experience.

• H3: Consistency will lead to a more positive attitude towards an improved user experience in accessing non-visual items and skipping irrelevant items on a universal UI for blind people.

• H4: An improved system usability scale score will lead to a more positive attitude towards the use of a universal UI for blind people.

• H5: Consistency in the interface elements will lead to a more positive attitude towards the ease of use in accessing and operating a universal UI for blind people on a smartphone.

Besides, we analyze whether a specific usability parameter was most influential in this particular case. The key variables include user satisfaction, perceived usefulness, and ease of use; the system usability scale is predicated to have a positive/negative influence on blind people's user experience.

V. EVALUATION AND RESULTS
The proposed solution was evaluated through an empirical study. The usability of the proposed UI design and the individual components of the architecture were evaluated using established HCI methods, metrics, and usability parameters. We were interested in the user experience of blind people performing a number of tasks associated with user interface customization and operating smartphone applications with ease.

A. Participation
Sixty-three participants (4 female, 59 male) took part in this study. The median age of the participants was 39 years, within a range of 22-56 years. In the pre-application assessment, participant experience was rated on a four-item scale: beginner, intermediate, advanced, and expert. The participants' smartphone usage experience varied from beginner to advanced. Usability experts observed the navigational and orientation skills of the blind participants in performing common tasks/activities, mainly judging their confidence and frustration levels. Table III summarizes general information about the participants along with other indicators, i.e., their background, age, gender, and smartphone usage experience. The participants reported their level of experience during the initial trials.

B. Procedure
The participants were introduced to the BlindSense framework, and the required steps were demonstrated one by one. The study spanned eleven weeks and consisted of the following components: (1) pre-application usage assessment and collection of background data, (2) an introductory session with our universal UI framework and initial trials, (3) in-the-wild device usage, and (4) interviews and observations. A practice trial session covered general tasks and the operational usage of several scenarios; participants were allowed to practice general tasks and activities such as unlocking the phone, placing a call, sending a message, etc. The participants were asked to perform 121 predefined tasks. The average time spent with each participant was about 66 minutes. The researchers were directly involved in observing the execution of the tasks. Besides, we acquired the services of nine professional facilitators who assisted the participants during the entire study. We continued the same pattern of grouping and interviewing until all participants had finished. All participants were provided with a Samsung Galaxy S6 and an HTC One smartphone running Android. The TalkBack screen reading application, a data collection service, and the BlindSense application were pre-installed on the devices. For each task, we recorded the time of completion and the degree of accuracy in performing common activities like placing a call, sending messages, etc. The results section presents the responses collected through a structured questionnaire, interviews, and observations. The university ethics committee/IRB approved the consent procedure for this study. Written consent was obtained from the caretakers of the participants. The participants were informed about the study objective, the study procedure, potential risks, etc. The study checklist was communicated verbally to all blind participants, who gave verbal approval, while the caretakers provided the written consent.

TABLE III.  PARTICIPANTS INFORMATION DETAILS

Variable | Group | Number of Participants | Percentage | SD
Gender | Female | 4 | 6.35 | 38.89087
 | Male | 59 | 93.65 |
Age (Years) | 22 to 35 | 36 | 57.14 | 16.97056
 | 36 to 45 | 12 | 19.05 |
 | 45 to 56 | 15 | 23.81 |
Background | Educated | 45 | 71.43 | 19.09188
 | Literate | 18 | 28.57 |
Smartphone Usage Experience | 1 Month | 14 | 38.10 | 3.535534
 | 3 Months | 19 | 30.16 |
 | 6 Months | 9 | 14.29 |
 | 9 Months | 17 | 11.11 |
 | 12 Months | 4 | 6.35 |
Observed Expertise | Beginner | 17 | 26.98 | 7.67
 | Intermediate | 19 | 11.97 |
 | Advanced | 21 | 33.33 |
 | Experts | 4 | 6.34 |

 

C. Data Analysis and Validation Procedures
We ran a statistical correlation analysis of the observations to define the relationships between the UX attributes of the universal UI: attitude, intention to use, perceived usefulness, understandability and learnability, operability, ease of use, system usability scale, minimal memory load, consistency, and user satisfaction. SPSS 21 with AMOS 21 was used for the analysis and structural modeling. The first step was to define a measurement model and test the relationships among several dependent and independent variables. The validity of the measurement model was assessed by checking goodness-of-fit indices (GFI). We used confirmatory factor analysis (CFA) with maximum likelihood estimation to verify the reliability, convergent validity, composite reliability, and average variance extracted of each construct. The measurement model had 60 observed variables for 10 latent variables. To confirm the fitness of the proposed model, the Chi-square, Chi-square/d.f., GFI, incremental fit index (IFI), normed fit index (NFI), comparative fit index (CFI), Tucker-Lewis index (TLI), parsimony goodness-of-fit index (PGFI), and root mean square error of approximation (RMSEA) were assessed. These measures indicated that the estimated covariance matrices of the proposed measurement and observed models were satisfactory. Reliability was assessed through Cronbach's alpha. The CFA indicates that the overall measurement model fit was satisfactory: χ²/df = 1.577, RMSEA = 0.076, CFI = 0.727, NFI = 0.939, IFI = 0.949, TLI = 0.696, PGFI = 0.539.
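For reference, the two headline indices reported above have the following standard definitions (textbook formulas, not reproduced from the paper), where N is the sample size and df the degrees of freedom:

\[
\frac{\chi^2}{\mathrm{df}} \le 3.0 \ \text{(acceptable fit)}, \qquad
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2 - \mathrm{df},\, 0\right)}{\mathrm{df}\,(N-1)}}
\]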

In addition, the measurement model was found to have strong internal reliability and convergent validity. The Cronbach's alpha values, item-total correlations, factor loadings, composite reliability, and average variance extracted reported by the analysis indicate a robust fit. Tables V-VI show the confirmatory factor loadings of each item with their respective reliability scores. Factor loadings above 0.5 are generally considered acceptable, and the reported factor loadings exceed 0.6. Similarly, a Cronbach's alpha of 0.70 is considered an acceptable reliability score; in the reported data, the scores are above 0.70. In addition, to verify the internal consistency of each latent variable, we also measured the construct reliability, which is acceptable when the composite reliability is higher than 0.7 and the AVE is higher than 0.5. The reported scores are mostly above this acceptable range of construct reliability.
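The reliability and validity statistics used above follow the standard definitions (again textbook formulas, not taken from the paper), with k items of variance σ²_Yi, total score variance σ²_X, standardized loadings λ_i, and error variances θ_i over n indicators:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right), \qquad
\mathrm{CR} = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i}, \qquad
\mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n} \lambda_i^2
\]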
Figure 4 shows the final structural model generated from the relationships among the latent variables. The results are depicted as standardized regression weights on the different paths. All paths were significant at the level of p < 0.001. As depicted, perceived usefulness has an impact on user satisfaction with a high path weight (path coefficient = 0.22). Overall, the research model explained a satisfactory amount of variance in the user experience of operating adaptive user interfaces. Table VII presents a summary.

TABLE IV.  MEASUREMENT AND STRUCTURAL MODEL FIT INDEXES

Fit Index | Recommended Value | Structural Model
χ²/df | ≤ 3.00 | 2.877
CFI | ≥ 0.90 | 0.898
GFI | ≥ 0.90 | 0.937
NFI | ≥ 0.90 | 0.954
IFI | ≥ 0.90 | 0.900
TLI | ≥ 0.50 | 0.879
PCFI | ≥ 0.50 | 0.754
RFI | ≤ 1.00 | 0.825
PNFI | ≥ 0.50 | 0.716
RMSEA | < 0.08 | 0.070

 



 

TABLE V.  INTERNAL RELIABILITY AND CONVERGENT VALIDITY – PART I

Latent Construct | Item | Mean | SD | Cronbach's alpha | Item-total correlation | Factor loading | Composite reliability | AVE
Attitude | ATT1 | 2.75 | 0.011 | 0.825 | 0.652 | 0.802 | 0.511 | 0.721
 | ATT2 | 3.18 | 0.904 | | 0.641 | 0.838 | |
 | ATT3 | 2.61 | 1.005 | | 0.693 | 0.822 | |
 | ATT4 | 2.74 | 1.079 | | 0.625 | 0.929 | |
Intention to use | IU1 | 3.89 | 0.635 | 0.860 | 0.711 | 0.796 | 0.525 | 0.736
 | IU2 | 3.82 | 0.619 | | 0.760 | 0.850 | |
 | IU3 | 3.97 | 0.730 | | 0.727 | 0.808 | |
 | IU4 | 3.84 | 0.734 | | 0.668 | 0.967 | |
Perceived usefulness | PU1 | 2.79 | 0.864 | 0.959 | 0.662 | 0.692 | 0.816 | 0.711
 | PU2 | 2.78 | 0.870 | | 0.919 | 0.922 | |
 | PU3 | 2.73 | 0.865 | | 0.894 | 0.917 | |
 | PU4 | 2.98 | 0.889 | | 0.793 | 0.794 | |
 | PU5 | 2.97 | 0.915 | | 0.803 | 0.831 | |
 | PU6 | 2.78 | 0.870 | | 0.868 | 0.894 | |
 | PU7 | 2.86 | 0.913 | | 0.872 | 0.890 | |
 | PU8 | 2.89 | 0.969 | | 0.751 | 0.805 | |
 | PU9 | 2.90 | 0.928 | | 0.848 | 0.867 | |
Understandability and learnability | UL1 | 2.84 | 0.884 | 0.528 | 0.361 | 0.787 | 0.088 | 0.701
 | UL2 | 3.13 | 0.975 | | 0.361 | 0.885 | |
Operability | OP1 | 2.89 | 0.863 | 0.879 | 0.541 | 0.758 | 0.832 | 0.706
 | OP2 | 2.70 | 0.994 | | 0.515 | 0.890 | |
 | OP3 | 3.06 | 0.914 | | 0.575 | 0.902 | |
 | OP4 | 2.56 | 0.980 | | 0.578 | 0.819 | |
 | OP5 | 2.70 | 1.087 | | 0.556 | 0.907 | |
 | OP6 | 2.54 | 1.045 | | 0.680 | 0.769 | |
 | OP7 | 2.43 | 1.011 | | 0.803 | 0.869 | |
 | OP8 | 2.33 | 0.803 | | 0.766 | 0.838 | |
 | OP9 | 2.44 | 0.947 | | 0.706 | 0.797 | |
 | OP10 | 2.44 | 0.963 | | 0.587 | 0.798 | |
 | OP11 | 3.13 | 0.975 | | 0.240 | 0.885 | |
 

 
Fig. 4.  Structural model for improving user satisfaction 

 



 

TABLE VI.  INTERNAL RELIABILITY AND CONVERGENT VALIDITY – PART II

Latent Construct | Item | Mean | SD | Cronbach's alpha | Item-total correlation | Factor loading | Composite reliability | AVE
Ease of use | EU1 | 4.08 | 0.666 | 0.889 | 0.725 | 0.771 | 0.746 | 0.770
 | EU2 | 4.03 | 0.682 | | 0.772 | 0.854 | |
 | EU3 | 4.05 | 0.644 | | 0.802 | 0.881 | |
 | EU4 | 4.07 | 0.629 | | 0.847 | 0.916 | |
 | EU5 | 4.03 | 0.632 | | 0.733 | 0.794 | |
 | EU6 | 4.13 | 0.645 | | 0.692 | 0.742 | |
 | EU7 | 2.33 | 0.961 | | 0.215 | 0.791 | |
 | EU8 | 4.16 | 0.835 | | 0.816 | 0.806 | |
System usability scale (SUS) | SUS1 | 3.58 | 0.897 | 0.926 | 0.717 | 0.989 | 0.763 | 0.705
 | SUS2 | 3.40 | 0.931 | | 0.751 | 0.763 | |
 | SUS3 | 3.60 | 0.877 | | 0.783 | 0.722 | |
 | SUS4 | 3.69 | 0.801 | | 0.845 | 0.845 | |
 | SUS5 | 3.45 | 0.862 | | 0.782 | 0.798 | |
 | SUS6 | 3.65 | 0.812 | | 0.732 | 0.879 | |
 | SUS7 | 3.69 | 0.861 | | 0.825 | 0.899 | |
 | SUS8 | 3.74 | 0.700 | | 0.559 | 0.795 | |
Minimal memory load | MML1 | 3.71 | 0.705 | 0.894 | 0.611 | 0.689 | 0.761 | 0.702
 | MML2 | 3.97 | 0.647 | | 0.710 | 0.904 | |
 | MML3 | 3.87 | 0.772 | | 0.711 | 0.785 | |
 | MML4 | 3.62 | 0.851 | | 0.730 | 0.804 | |
 | MML5 | 3.67 | 0.783 | | 0.804 | 0.892 | |
 | MML6 | 3.78 | 0.888 | | 0.685 | 0.866 | |
 | MML7 | 3.75 | 0.761 | | 0.693 | 0.771 | |
 | MML8 | 3.13 | 0.942 | | 0.507 | 0.769 | |
User satisfaction | US1 | 2.56 | 1.028 | 0.940 | 0.745 | 0.820 | 0.676 | 0.813
 | US2 | 2.40 | 1.040 | | 0.897 | 0.933 | |
 | US3 | 2.30 | 0.835 | | 0.909 | 0.939 | |
 | US4 | 2.38 | 0.958 | | 0.881 | 0.932 | |
 | US5 | 2.33 | 0.916 | | 0.794 | 0.880 | |
Consistency | CO1 | 2.71 | 0.974 | 0.807 | 0.604 | 0.889 | 0.769 | 0.716
 | CO2 | 2.92 | 0.955 | | 0.555 | 0.887 | |
 | CO3 | 2.94 | 1.014 | | 0.577 | 0.900 | |
 | CO4 | 3.89 | 0.743 | | 0.489 | 0.966 | |
 | CO5 | 3.68 | 0.820 | | 0.535 | 0.847 | |
 | CO6 | 3.79 | 0.626 | | 0.567 | 0.859 | |
 | CO7 | 3.89 | 0.675 | | 0.368 | 0.734 | |
 | CO8 | 3.11 | 1.002 | | 0.452 | 0.650 | |

TABLE VII.  SUMMARY OF HYPOTHESIS TESTS

Hypothesis | UC | SC | T | SE | P
H1: PU→US | 0.138 | 0.303 | 2.483 | 0.056 | 0.016
H2: EU→US | 0.777 | 0.469 | 4.153 | 0.180 | 0.000
H3: CO→US | 0.442 | 0.287 | 2.336 | 0.189 | 0.023
H4: SUS→US | 0.506 | 0.400 | 3.407 | 0.148 | 0.001
H5: CO→EU | 0.298 | 0.320 | 2.635 | 0.113 | 0.011

UC: Unstandardized Coefficient, SC: Standardized Coefficient, SE: Standard Error, P: Significance

 
With respect to the hypotheses: perceived usefulness was positively associated with user satisfaction (H1, β = 0.303, p = 0.016), as was ease of use (H2, β = 0.469, p < 0.001). Consistency (H3, β = 0.287, p = 0.023), system usability scale (H4, β = 0.400, p = 0.001), and consistency with respect to ease of use (H5, β = 0.320, p = 0.011) also had a positive effect on the user experience of blind people using the adaptive UI. The significance of all hypotheses was below 0.05; thus, each hypothesis is accepted.

VI. DISCUSSION
Recognizing the need for an accessibility-inclusive UI for blind people, our research articulates usability, ease of use, consistency, usefulness, and accessibility to generate a simplified, consistent, universal UI design for blind people. The study proposed, developed, and validated a blind-friendly universal UI design for operating common applications on a smartphone, resulting in an enriched user experience.

As hypothesized, the parameters measured, i.e., ease of use, consistency, operability, perceived usefulness, minimal memory load, and system usability scale, were found to have a positive effect on user satisfaction and user experience. Ultimately, a consensus was reached on the acceptance of the universal user interface model. The users' attitude towards the suggested application was reported as effective, pleasant, and enjoyable. For statistical validation, this study measured ease of use, consistency, operability, perceived usefulness, minimal memory load, system usability scale, and understandability and learnability (i.e., the fundamental determinants of user acceptance in the Technology Acceptance Model (TAM)) through a survey questionnaire, which yielded satisfactory responses. Through a series of model evaluations and validations, the hypothesis that user satisfaction is positively affected by the adoption of the universal UI design for blind people is accepted. The study also verified the relationship between UI usability and user satisfaction; user satisfaction is an important factor in the design of smartphone-based UIs. In addition, the study results are consistent with earlier studies on the usability and accessibility of smartphone applications. The findings collectively indicate that various features of smartphone-based UIs and layouts, such as screen size, user controls, navigational complexity, user interaction, and feedback, convey positive psychological effects in a particular user context [30]. Methodologically, a potential threat to the investigation is applying this approach to visually dense interfaces such as game and entertainment applications. Besides, smartphone capabilities can be used for both hedonic and utilitarian purposes [31]. As the results depict, some users found the universal interface design convenient and efficient for completing their tasks, while others perceived it as somewhat uncomfortable and annoying. Therefore, the inclusion of more visually complex tasks should be investigated further.

VII. CONCLUSION
A large number of smartphone applications do not comply with mobile accessibility guidelines and do not specifically meet the requirements of blind people. Thus, blind people face numerous challenges in accessing and operating smartphone interface components, such as finding a button, understanding layouts, and navigating the interface. Besides, a blind person has to learn every new application and its features, which hampers learnability and discoverability; they have to apply their previous experience to each new interface, which may result in a varying user experience. The findings of this study illustrate that a simplified, semantically consistent, and context-sensitive universal UI design contributes to a satisfactory positive evaluation. The main contribution of this research was to improve the user experience of blind people in operating smartphones through a universal interface design, using the adaptive UI paradigm for personalization. We adopted measurement items from existing web/mobile usability research and revamped a number of parameters for this study. The proposed solution addresses the problems of simplicity, reduction, organization, and prioritization [32] by providing a semantically consistent, simplified, task-oriented, and context-sensitive UI design. During the study, the proposed intervention significantly reduced the users' cognitive load. The consistency in the division of the smartphone screen enables blind people to memorize the flow of activities and actions with ease; thus, there is little chance of getting lost in a given navigation workflow.

Our results illustrate that our proposed solution is more robust, easier to use, and more adaptable than other solutions operated through accessibility services. Our future work will focus on extending this framework to visually complex/navigationally dense applications. Emotion-based UI design may also be investigated further. Moreover, the optimization of GUI layouts and elements will be considered, with a particular focus on gesture control and eye-tracking systems.

ACKNOWLEDGMENT 

The authors would like to acknowledge the support of the 
Higher Education Commission (HEC) of Pakistan. 

REFERENCES
[1] J. A. Kientz, S. N. Patel, A. Z. Tyebkhan, B. Gane, J. Wiley, G. D. Abowd, "Where's my stuff?: design and evaluation of a mobile system for locating lost items for the visually impaired", 8th International ACM SIGACCESS Conference on Computers and Accessibility, Portland, USA, pp. 103-110, October 23-25, 2006
[2] J. R. Fruchterman, "In the palm of your hand: a vision of the future of technology for people with visual impairments", Journal of Visual Impairment and Blindness, Vol. 97, No. 10, pp. 585-591, 2003
[3] J. McCarthy, P. Wright, "Technology as experience", Interactions, Vol. 11, No. 5, pp. 42-43, 2004
[4] L. Hakobyan, J. Lumsden, D. O'Sullivan, H. Bartlett, "Mobile assistive technologies for the visually impaired", Survey of Ophthalmology, Vol. 58, No. 6, pp. 513-528, 2013
[5] F. F.-H. Nah, D. Zhang, J. Krogstie, S. Zhao, "Editorial of the special issue on mobile human-computer interaction", International Journal of Human-Computer Interaction, Vol. 33, No. 6, pp. 429-430, 2017
[6] B. Adipat, D. Zhang, "Interface design for mobile applications", AMCIS 2005 Proceedings, AIS Electronic Library, p. 494, 2005
[7] B. Leporini, F. Paterno, "Applying web usability criteria for vision-impaired users: does it really improve task performance?", International Journal of Human-Computer Interaction, Vol. 24, No. 1, pp. 17-47, 2008
[8] W3C, Web Accessibility Initiative, available at: http://www.w3.org/WAI/
[9] P. A. Akiki, A. K. Bandara, Y. Yu, "Adaptive model-driven user interface development systems", ACM Computing Surveys, Vol. 47, No. 1, 2015
[10] D. Weld, C. Anderson, P. Domingos, O. Etzioni, K. Gajos, T. Lau, S. Wolfman, "Automatically personalizing user interfaces", IJCAI '03 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico, pp. 1613-1619, 2003
[11] A. Bhowmick, S. M. Hazarika, "An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends", Journal on Multimodal User Interfaces, Vol. 11, No. 2, pp. 149-172, 2017
[12] W. Grussenmeyer, E. Folmer, "Accessible touchscreen technology for people with visual impairments: a survey", ACM Transactions on Accessible Computing, Vol. 9, No. 2, Article No. 6, 2017
[13] H. Sieverthson, M. Lund, Usability challenges for the mobile web: an enterprise perspective, BSc Thesis, University of Bora, 2017
[14] S. K. Kane, C. Jayant, J. O. Wobbrock, R. E. Ladner, "Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities", 11th International ACM SIGACCESS Conference on Computers and Accessibility, ACM, pp. 115-122, 2009
[15] P. Strumillo, P. Skulimowski, M. Polanczyk, "Programming Symbian smartphones for the blind and visually impaired", in: Computers in Medical Activity, Advances in Intelligent and Soft Computing, Vol. 65, pp. 129-136, Springer, 2009
[16] M. L. Dorigo, B. Harriehausen-Mühlbauer, I. Stengel, P. S. Haskell-Dowland, "Nonvisual presentation and navigation within the structure of digital text-documents on mobile devices", Lecture Notes in Computer Science, Vol. 8011, pp. 311-320, 2013
[17] R. J. P. Damaceno, J. C. Braga, J. P. Mena-Chalco, "Mobile device accessibility for the visually impaired: problems mapping and recommendations", in: Universal Access in the Information Society, pp. 1-15, Springer-Verlag, Berlin, Heidelberg, 2017
[18] G. E. Legge, P. J. Beckmann, B. S. Tjan, G. Havey, K. Kramer, D. Rolkosky, R. Cage, M. Chen, S. Puchakayala, A. Rangarajan, "Indoor navigation by people with visual impairment using a digital sign system", PLoS ONE, Vol. 8, No. 10, p. e76783, 2013
[19] M. Rodriguez-Sanchez, M. Moreno-Alvarez, E. Martin, S. Borromeo, J. Hernandez-Tamames, "Accessible smartphones for blind users: a case study for a wayfinding system", Expert Systems with Applications, Vol. 41, No. 16, pp. 7210-7222, 2014
[20] E. Brady, M. R. Morris, Y. Zhong, S. White, J. P. Bigham, "Visual challenges in the everyday lives of blind people", SIGCHI Conference on Human Factors in Computing Systems, Paris, France, pp. 2117-2126, 2013
[21] N. Mi, L. A. Cavuoto, K. Benson, T. Smith-Jackson, M. A. Nussbaum, "A heuristic checklist for an accessible smartphone interface design", Universal Access in the Information Society, Vol. 13, No. 4, pp. 351-365, 2014
[22] M. C. Buzzi, M. Buzzi, B. Leporini, A. Trujillo, "Analyzing visually impaired people's touch gestures on smartphones", Multimedia Tools and Applications, Vol. 76, No. 4, pp. 5141-5169, 2017
[23] T. Paek, D. M. Chickering, "Improving command and control speech recognition on mobile devices: using predictive user models for language modeling", User Modeling and User-Adapted Interaction, Vol. 17, No. 1-2, pp. 93-117, 2007
[24] J. P. Bigham, A. C. Cavender, J. T. Brudvik, J. O. Wobbrock, R. E. Ladner, "WebinSitu: a comparative analysis of blind and sighted browsing behavior", 9th International ACM SIGACCESS Conference on Computers and Accessibility, Tempe, USA, pp. 51-58, 2007
[25] S. K. Kane, J. O. Wobbrock, R. E. Ladner, "Usable gestures for blind people: understanding preference and performance", ACM CHI Conference on Human Factors in Computing Systems, Vancouver, Canada, pp. 413-422, May 7-12, 2011
[26] P. Szekely, P. Luo, R. Neches, "Beyond interface builders: model-based interface tools", in: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, ACM, pp. 383-390, 1993
[27] J. Abascal, C. Nicolle, "Moving towards inclusive design guidelines for socially and ethically aware HCI", Interacting with Computers, Vol. 17, No. 5, pp. 484-505, 2005
[28] U. Persad, P. Langdon, J. Clarkson, "Characterising user capabilities to support inclusive design evaluation", Universal Access in the Information Society, Vol. 6, No. 2, pp. 119-135, 2007
[29] O. Plos, S. Buisine, A. Aoussat, F. Mantelet, C. Dumas, "A universalist strategy for the design of assistive technology", International Journal of Industrial Ergonomics, Vol. 42, No. 6, pp. 533-541, 2012
[30] J. P. Forgas, "Mood and judgment: the affect infusion model (AIM)", Psychological Bulletin, Vol. 117, No. 1, pp. 39-66, 1995
[31] K. J. Kim, S. S. Sundar, "Does screen size matter for smartphones? Utilitarian and hedonic effects of screen size on smartphone adoption", Cyberpsychology, Behavior, and Social Networking, Vol. 17, No. 7, pp. 466-473, 2014
[32] J. Maeda, The Laws of Simplicity (Simplicity: Design, Technology, Business, Life), MIT Press, 2006