
 

 

RETINA FUNDUS IMAGE MASK GENERATION USING 
PSEUDO PARAMETRIC MODELING TECHNIQUE 

 
A.M. AIBINU, M.J.E. SALAMI AND A.A. SHAFIE

 
Department of Mechatronics Engineering, 

Faculty of Engineering, International Islamic University Malaysia  
P. O. Box 10, 50728 Kuala Lumpur, Malaysia.

 
E-mail: maibinu@iium.edu.my                                       

ABSTRACT: This paper discusses a new pseudo modeling technique for the generation of the retina fundus image (RFI) mask. The model coefficients necessary for the generation of the mask have been estimated from the synaptic weights of a real-valued neural network. Performance analysis of the newly proposed three-step technique has been carried out using the DRIVE database and RFI obtained from other sources. The accuracy obtained by applying the proposed technique to the RFI contained in the DRIVE database varies between 99.62% and 99.97%.

KEYWORDS: Diabetes, Parametric Modeling Technique, Real-Valued Neural Network (RVNN), 
Retina Fundus Image 

 

1. INTRODUCTION  

In recent times, there has been a tremendous increase in age- and society-related diseases such as diabetes. In Sweden, 4% of the country's population has been diagnosed with diabetes [1], while approximately 2% of the working class in the United Kingdom are diabetic patients [2]. A similar study in Malaysia shows a steady rise in the proportion of diabetic patients, from 0.65% in 1960 to 2% in 1980 [3], and according to the American Diabetes Association, 6.3% of the total population of the United States has been diagnosed with this disease [4]. It has also been estimated that over 13% of Egyptians aged above 20 years will have diabetes by the year 2025 [5, 6], and about 5.5% of the world population has been diagnosed with the disease [6, 7].

Diabetes can be described as a disorder of body metabolism. Digested food enters the bloodstream as glucose, which is absorbed from the blood into the cells with the aid of insulin, a hormone that the pancreas normally produces automatically in the correct amount. For individuals with diabetes, the pancreas either produces too little or no insulin, or the cells do not react properly to the insulin that is produced. Glucose then builds up in the blood, overflows into the urine and passes out of the body. The body therefore loses its main source of fuel even though the blood contains large amounts of glucose [8].




 

The effect of diabetes on the eye is known as Diabetic Retinopathy (DR), and this can be loosely classified into Background Diabetic Retinopathy (BDR), Proliferative Diabetic Retinopathy (PDR) and Severe Diabetic Retinopathy (SDR) [9, 10, 11, 12]. If DR is left untreated, the disease can degenerate and lead to blindness. According to a World Health Organization (WHO) report released in 2002, 8.7% of blindness cases have been linked to DR, and in the United States alone about 8% of blindness in the 20 to 74 age bracket has been attributed to DR [4, 13].

Early detection, diagnosis and treatment have been identified as one of the ways to reduce the percentage of visual impairment caused by diabetes [1, 9]. Different approaches have been suggested for early detection and monitoring of DR, such as computer aided automatic diagnosis of DR from retina fundus images (CAADRFI). CAADRFI and its variants (as suggested by other researchers) have the potential to reduce blindness due to DR by almost 50%, while providing considerable time savings for patients and ophthalmologists and reducing wear and tear on equipment [6, 11-12]. However, most acquired RFI suffer from severe limitations such as poor quality and acquisition problems. To ensure that screeners and ophthalmologists have a good view of the whole retina fundus image (RFI), more images than needed are sometimes acquired. The existence of good image processing algorithms that can improve and consistently guarantee good image quality will lead to a drastic reduction in RFI acquisition and storage space requirements, since only a sufficient number of images will need to be saved [5, 9-10].

Various algorithms have been developed for the CAADRFI system, the effectiveness of which greatly depends on the quality of the acquired RFI [2-21]. Most of these techniques can be regarded as three-stage approaches consisting of an RFI preprocessing stage, which addresses RFI mask generation, object motion, poor lighting and illumination problems; an image segmentation stage; and a disease classification stage for anatomy detection and classification.

A typical RFI used for CAADRFI is shown in Fig. 1a. The regions contained in this RFI can be grouped into two main classes, namely the white semi-circular region of interest (ROI) and the black background [6]. Figure 1b is subsequently referred to in this paper as the RFI mask. The process of generating the RFI mask involves labeling pixels belonging to the ROI as one group and those belonging to the surrounding areas as background. Incorrect delineation of the background from the ROI greatly affects the results of the CAADRFI system, hence the need for accurate delineation of the ROI from the background in RFI for a CAADRFI system. A typical RFI mask is shown in Fig. 1b.

Different methods of generating the RFI mask have been suggested in the literature; in this paper, a new method of generating the RFI mask using a pseudo parametric modeling approach is presented. A review of related work is given in Section 2, while the newly proposed technique is presented in Section 3. Performance analysis of the proposed technique and the conclusion are contained in Section 4 and Section 5 respectively.

 




 

       

Fig. 1: (a) Retina Fundus Image (RFI); (b) Retina Fundus Image (RFI) mask.

   

2.  LITERATURE REVIEW 

A comparative performance measure applicable to the three major RFI preprocessing steps, namely mask generation, illumination equalization and color normalization, has been reported in [6]. Three methods of automatic mask generation were evaluated in that work, and it was observed that using morphological operators on the thresholded red band results in superior performance over either the simple thresholding of the green band proposed by Goatman et al. [2] or thresholding of the three RGB channels proposed in [19].

In a related work by Gagnon et al. [19], several methods of detecting anatomical structures in RFI were presented. The algorithm proceeds through five main steps: automatic mask generation using pixel value statistics and color thresholds; visual image quality assessment using histogram matching and Canny edge distribution modeling; optic disk localization using pyramidal decomposition, Hausdorff-based template matching and confidence assignment; macula localization using pyramidal decomposition; and vessel network tracking using recursive dual edge tracking and connectivity recovering. Results obtained in the RFI mask generation stage show that the proposed technique is robust against low visual quality of the images and performs independently of whether the image is macula-centered or optic disk-centered when detecting the optic disk. A success rate of 100% was reached for optic disk detection and 95% for macula detection.

Early detection of DR from RFI has been proposed in [20]. An RFI model was developed based on the probability distribution functions of macular pigment, haemoglobin and melanin to represent the macular region, retinal vasculature and background, respectively. Independent component analysis based on the spectral absorbance of the model was then applied to determine retinal pigments from RFI. The proposed technique outperforms other non-invasive enhancement methods, such as contrast stretching, histogram equalization and contrast limited adaptive histogram equalization; however, little or no mention was made of RFI mask generation in this work.

In another related work, an expert system for early diagnosis of signs of DR was presented [21]. The process starts with the extraction of a sub-image containing the optic disk (OD), followed by OD boundary extraction in parallel from both the red and green channels of this sub-image by means of morphological operations, after which edge detection techniques are applied. Both OD boundaries are approximated by a circumference using the circular Hough transform. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s, while the segmentation algorithm produced an average overlap between the automated segmentations and the true OD regions of 86%, with an average computational time of 5.69 s and a standard deviation of 0.54 s.

In [14], a computer-aided diagnosis (CAD) system that facilitates the interpretation of RFI was developed. In this technique, an RFI is converted into a feature vector created from histograms obtained at multiple image resolutions. An accuracy of about 96.34% was obtained with this technique. Similarly, Kee et al. developed a computer-based approach for the automatic segmentation of blood vessels for screening large numbers of vessel abnormalities in RFI [15]. Although the results of this method were compared to a manually segmented image, no empirical interpretation of the accuracy was given.

A three-stage framework for the detection of vascular intersections, namely bifurcation points (BP) and cross points, in RFI using a hybrid combined cross-point number (CCN) approach was proposed in [11, 12]. Using a receiver operating characteristic (ROC) based analysis of the proposed hybrid algorithm, a very high precision and true positive rate with a very small false error rate were obtained, indicating an improved performance compared with the individual performance of the modified cross-point number (MCN) and simple cross-point number methods proposed in [12, 16]. Another variant of the 5 x 5 window-based CCN was suggested in [11, 12] and also proposed in [16]. The hybrid technique was recently modified by using a 7 x 7 window for detecting vascular intersections in RFI [16]. The newly modified technique was tested on 100 RFI, as opposed to the test performed in [11] on two publicly available RFI databases.

Detection of abnormalities related to the inner structures of RFI using image enhancement, Max-Tree image representation and a filtering procedure was proposed in [18]. This method produced accuracies of 93.95% and 94.21% in detecting the associated abnormalities on 40 and 20 RFI respectively.

Detection of the macula in the processing of RFI was proposed in [13]. Regions of dark spots were detected by finding the coordinates with the lowest pixel intensity and determining the average pixel neighborhood intensities, which were then ranked to obtain the region containing the macula. In evaluating the performance of this technique, a total of 162 images (81 RFI of the left eye and 81 RFI of the right eye) were used, and 98.8% accuracy was obtained when the algorithm was applied to high quality RFI. However, the proposed technique was unable to accurately detect the macula in low quality RFI.

An edge sharpening technique was employed to highlight the boundaries of the RFI in the development of automatic lesion detection in RFI [3]. Also, automatic detection and extraction of exudates and the optic disk in RFI was proposed in [1]. The RFI was preprocessed to improve the contrast and the overall color saturation of the image, the optic disk was then eliminated and the exudates were segmented using spatially weighted fuzzy c-means clustering. The proposed algorithm for optic disk detection produced 92.53% accuracy, while the sensitivity and specificity of the proposed algorithm for exudate detection were 86% and 98% respectively.

The development of automatic measurement and analysis of vessel tortuosity in color RFI was proposed in [35]. Two sets of RFI, for normal and diabetic persons, were considered in that work, and the proposed approach may be suitable for the prediction or early diagnosis of diabetes or cardiovascular diseases. It was observed that higher vessel tortuosity appears in diabetic persons, but this requires confirmation in studies with detailed clinical information and an adequate sample size.

Villalobos and Riveron [36] presented a fast automatic retinal vessel segmentation and vascular landmark extraction method for biometric applications using a three-stage approach consisting of preprocessing, main processing and postprocessing steps. Application of this technique to images in the DRIVE database shows that the proposed method can accurately detect and extract information from RFI.

 

3.  DEVELOPMENT OF PSEUDO INTELLIGENT TECHNIQUE FOR RFI

Parametric modeling techniques represent data in an efficient and parsimonious form using a minimum number of adjustable parameters [22-27]. The steps involved are model selection, model parameter determination and model validation. Model selection is basically about choosing an appropriate parametric form for the signal or data to be modeled; some of the well-known models include the autoregressive (AR) model, the moving average (MA) model, the autoregressive moving average (ARMA) model and their variants (such as the autoregressive with external input (ARX) model, the vector autoregressive (VAR) model, the autoregressive moving average with external inputs (ARMAX) model and the autoregressive integrated moving average (ARIMA) model) [22-26]. Similarly, model parameter determination is concerned with the use of computationally efficient algorithms to determine the model parameters, while model validation involves evaluating how well the selected model captures the key features of the modeled data using the estimated model parameters [22-27].
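As an illustration of these three steps, the short sketch below (in Python, using NumPy) fixes an AR(2) model form, estimates its coefficients by least squares and then validates them against the synthetic sequence used to generate the data. The function name and the synthetic data are illustrative assumptions and not part of the original work.

    import numpy as np

    def fit_ar(y, p):
        """Estimate AR(p) coefficients a_1..a_p by least squares for the model
        y(n) = -(a_1 y(n-1) + ... + a_p y(n-p)) + e(n), as in Eq. (2)."""
        N = len(y)
        # Regression matrix: the row for sample n holds [y(n-1), ..., y(n-p)]
        X = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
        target = y[p:N]
        # Least-squares solution of  X a = -target
        a, *_ = np.linalg.lstsq(X, -target, rcond=None)
        return a

    # Model validation on a synthetic AR(2) sequence with known coefficients
    rng = np.random.default_rng(0)
    true_a = np.array([-0.75, 0.25])
    e = rng.normal(size=2000)
    y = np.zeros(2000)
    for n in range(2, 2000):
        y[n] = -(true_a[0] * y[n - 1] + true_a[1] * y[n - 2]) + e[n]

    a_hat = fit_ar(y, p=2)
    print("estimated AR coefficients:", a_hat)   # expected to be close to true_a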

In this work, a new method of generating the RFI mask using an RVNN is proposed. Detailed knowledge about RVNNs can be obtained from [28, 29], and the differences between an RVNN and its counterpart, the complex-valued neural network, can be found in [30-34]. The required model coefficients are obtained from the synaptic weights and the adaptive coefficients of the activation functions in a trained RVNN.

Consider the RFI acquired in Red-Green-Blue (RGB) mode shown in Fig. 1a, with the mask shown in Fig. 1b. The image consists of three components, namely the red (R), green (G) and blue (B) channels. These components fuse to give the final image shown in Fig. 1a using the RGB scheme depicted in Fig. 2.

   




 

 

Fig. 2: RGB color scheme generation as applicable to RFI.

 

The final image F(x, y) can be represented by its color components as

    F(x, y) = a_R F_R(x, y) + a_G F_G(x, y) + a_B F_B(x, y)                 (1)

where a_R, a_G and a_B are the coefficients required to generate the RFI mask (Fig. 1b).

Similarly, consider the AR model shown in Fig. 3, which is driven by a white noise sequence x(n) to produce an output sequence y(n) according to

    y(n) = -Σ_{k=1}^{p} a_k y(n-k) + b_0 x(n)
         = -[a_1 y(n-1) + a_2 y(n-2) + ... + a_p y(n-p)] + b_0 x(n)          (2)

where a_k, 1 ≤ k ≤ p, and p are the AR model coefficients and the model order respectively.

 
By comparing (1) and (2), the problem of generating the required RFI mask reduces to that of estimating the required coefficients in (1); thus the problem reduces to a pseudo parametric modeling approach. Hence, a new three-stage pseudo parametric modeling approach for the determination of the model coefficients a_R, a_G and a_B from the synaptic weights and adaptive activation function coefficients of an RVNN is shown in Fig. 4.
 
 




 

 

                                        Fig. 3: AR model representation. 

      

 

           Fig. 4: RVNN-based pseudo AR model RFI mask generation. 

   




 

By considering the RVNN-based RFI mask generation shown in Fig. 5, the output y(n) is expressed as

    y(n) = α τ( Σ_{l=1}^{M} w_{l1} τ( β_l ( Σ_{k=1}^{P} v_{kl} y_k(n) + g_l ) ) + h_{10} )        (3)

where M is the number of neurons in the hidden layer, w_{l1} is the weight connecting node l in the hidden layer to the output layer, h_{10} is the bias term of the output neuron, ϑ_l is the output of hidden node l, α is the adaptive coefficient of the linear output activation function, v_{kl} is the weight connecting input node k to hidden node l, g_l is the bias of hidden node l, β_l is the adaptive coefficient of the hidden node linear activation function, y_k(n) is the k-th input and P is the number of input nodes. For an RGB-based RFI, P = 3.

 

 

Fig. 5: RVNN-based RFI mask neural network diagram. 

   

Using linear transfer activation functions in both the hidden and output layers (i.e. τ(·) is the activation function), the required RFI mask generation coefficients are given as

    a_k = α Σ_{l=1}^{M} w_{l1} β_l v_{kl},    1 ≤ k ≤ 3                      (4)




 

Thus, a_1, a_2 and a_3 are the model coefficients for the R, G and B components of the RGB-based RFI respectively, and the coefficients obtained in (4) are equivalent to the required coefficients a_R, a_G and a_B given in (1).
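Under the linear-activation assumption, Eq. (4) can be evaluated directly from the synaptic weights and adaptive coefficients of a trained network. The following sketch illustrates the computation; the weight and coefficient arrays are hypothetical placeholders standing in for the values read out of a converged RVNN.

    import numpy as np

    # Hypothetical quantities extracted from a trained RVNN (cf. Fig. 5):
    # v[k, l]  : weight from input node k (R, G, B) to hidden node l
    # beta[l]  : adaptive coefficient of the linear activation of hidden node l
    # w[l]     : weight from hidden node l to the single output node
    # alpha    : adaptive coefficient of the linear output activation
    M = 4                                   # assumed number of hidden neurons
    rng = np.random.default_rng(1)
    v = rng.normal(size=(3, M))
    beta = rng.normal(size=M)
    w = rng.normal(size=M)
    alpha = 1.2

    # Eq. (4): a_k = alpha * sum_l w_{l1} * beta_l * v_{kl},  1 <= k <= 3
    a = alpha * (v * (w * beta)).sum(axis=1)
    a_R, a_G, a_B = a
    print("pseudo parametric model coefficients:", a_R, a_G, a_B)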

 

4.  PERFORMANCE ANALYSIS 

The evaluation of the proposed technique is presented in this section. Performance evaluation has been carried out using the Digital Retinal Images for Vessel Extraction database, popularly known as the DRIVE database. The images in the database were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45 degree field of view (FOV). The dataset contains 40 RFI of 584 x 565 pixels, divided into a training set of 20 RFI and a test set of 20 RFI [37]. In the proposed technique, a single image is enough for training the RVNN-based pseudo modeling technique, and the remaining 39 images can be used for testing. In evaluating the performance of this technique, the following definitions are used:

 
    Accuracy (ACC) = (TP + TN) / (P + N)

    Precision (PR) = TP / (TP + FP)

    False Positive Rate (FPR) = FP / N

    True Positive Rate (TPR) = TP / P

where TP, TN, FP and FN represent the numbers of true positives, true negatives, false positives and false negatives respectively, and P and N are the total numbers of actual positives and negatives [38].
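For reference, these measures can be computed from a generated mask and a ground-truth mask as in the minimal sketch below; it assumes both masks are binary arrays of the same shape, with 1 marking the ROI, and takes the false positive rate over the actual negatives.

    import numpy as np

    def mask_metrics(pred, truth):
        """Accuracy, precision, TPR and FPR for binary masks (1 = ROI, 0 = background)."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        TP = np.sum(pred & truth)
        TN = np.sum(~pred & ~truth)
        FP = np.sum(pred & ~truth)
        FN = np.sum(~pred & truth)
        P, N = TP + FN, TN + FP              # actual positives and negatives
        return {"accuracy": (TP + TN) / (P + N),
                "precision": TP / (TP + FP),
                "TPR": TP / P,
                "FPR": FP / N}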

In the training phase, a single RFI is used to generate the training data: samples of the ROI and of the background serve as the input to the RVNN. The ROI is assigned a target value of 1 while the background is assigned a target value of 0. The input and target data are concatenated and normalized separately and then fed to the RVNN. Once the RVNN has converged, the synaptic weights and adaptive coefficients are extracted and the required coefficients are computed. In the testing mode, the obtained pseudo modeling coefficients are applied to the input image, and the output is then clustered into two distinct classes, the ROI and the background.
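A minimal sketch of this training step is given below. It assumes a standard single-hidden-layer real-valued network with linear activations, with the adaptive activation coefficients taken as unity so that they are absorbed into the weights, and it uses scikit-learn's MLPRegressor only as a convenient stand-in for the authors' RVNN implementation; the pixel samples are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Illustrative training data: RGB values sampled from the ROI and the background
    # (in the paper a single DRIVE image supplies these samples).
    roi_pixels = np.array([[180, 90, 40], [170, 85, 35], [165, 80, 30]], float)
    bg_pixels = np.array([[5, 3, 2], [8, 6, 4], [3, 2, 1]], float)

    X = np.vstack([roi_pixels, bg_pixels]) / 255.0          # normalized inputs
    t = np.hstack([np.ones(len(roi_pixels)),                # ROI target = 1
                   np.zeros(len(bg_pixels))])               # background target = 0

    net = MLPRegressor(hidden_layer_sizes=(4,), activation="identity",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, t)

    # With linear activations the trained network collapses to Eq. (4):
    v, w = net.coefs_                       # input-to-hidden and hidden-to-output weights
    a_R, a_G, a_B = (v @ w).ravel()
    print("extracted pseudo model coefficients:", a_R, a_G, a_B)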

In this paper, two clustering techniques have been evaluated, namely simple thresholding and the k-means algorithm. One RFI in the DRIVE database has been used for the extraction of the mask coefficients, from which the values of the required coefficients a_R, a_G and a_B were obtained as -2.7677, 0.5609 and -0.5417 respectively. Results obtained from the application of these coefficients to the RFI contained in the DRIVE database are discussed herein. Each experiment was repeated 30 times and the average accuracy value was then computed; the RFI mask obtained with this technique was compared with the mask given in the DRIVE database, and the performance measures discussed earlier were then computed.
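In testing mode, the reported coefficients can be applied pixel-wise to the RGB channels and the resulting output clustered into ROI and background. The sketch below illustrates this with both simple thresholding and k-means; the image path, the rescaling of the output to [0, 1] before thresholding and the use of scikit-learn's KMeans are assumptions made for illustration, since the exact normalization and the k-means implementation are not specified here.

    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    a_R, a_G, a_B = -2.7677, 0.5609, -0.5417          # coefficients reported above

    # Hypothetical path to a DRIVE image; any RGB fundus image will do.
    rgb = np.asarray(Image.open("21_training.tif"), dtype=float) / 255.0
    y = a_R * rgb[..., 0] + a_G * rgb[..., 1] + a_B * rgb[..., 2]     # Eq. (1)

    # Rescale the pseudo-model output to [0, 1] so a fixed threshold such as 0.20 is meaningful.
    y_norm = (y - y.min()) / (y.max() - y.min())

    # (a) Simple thresholding: pixels on one side of the threshold form one class.
    mask_thresh = y_norm > 0.20

    # (b) k-means with two clusters applied to the same output values.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y_norm.reshape(-1, 1))
    mask_kmeans = labels.reshape(y_norm.shape).astype(bool)

    # Which side of the threshold (or which cluster) corresponds to the ROI can be
    # identified from the brightness of the original image, since the ROI is the
    # bright semi-circular region.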

Table 1 shows the results obtained by applying the extracted coefficients and the simple thresholding technique to one of the RFI from the database, while Table 2 depicts the results obtained using the k-means algorithm. As observed from the tables, simple thresholding of the RVNN output performs far better than the k-means algorithm, although the performance of the simple thresholding technique degrades at very low or very high threshold values.

 
 

Table 1: RFI mask generation using the simple thresholding clustering technique.

Threshold   Accuracy   Precision   TPR      FPR
0.0500      0.7074     0.7023      1.0000   0.9443
0.1000      0.9433     0.9240      1.0000   0.1831
0.1500      0.9954     0.9933      1.0000   0.0149
0.2000      0.9996     0.9995      1.0000   0.0012
0.2500      0.9995     1.0000      0.9993   0.0000
0.3000      0.9990     1.0000      0.9986   0.0000
0.3500      0.9985     1.0000      0.9978   0.0000
0.4000      0.9981     1.0000      0.9972   0.0000
0.4500      0.9976     1.0000      0.9966   0.0000
0.5000      0.9971     1.0000      0.9958   0.0000

   

Table 2: RFI mask generation using the k-means clustering technique.

Accuracy   Precision   TPR      FPR
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
0.9916     1.0000      0.9878   0.0000
     

Table 3 shows the results obtained by applying the extracted coefficients to all the RFI contained in the DRIVE training subset. The accuracy of the k-means clustering technique is compared with that of the simple thresholding technique over all the RFI in the training subset. From the results shown in this table, the simple thresholding technique outperforms the k-means algorithm.

  

Table 3: Comparing k-means clustering with simple thresholding (with 0.20 threshold value) for RFI mask generation.

                      K-means                            Simple Threshold
Image Number   Accuracy   Precision   TPR        Accuracy   Precision   TPR
21             0.9990     1.0000      0.9985     0.9934     0.9904      1.0000
22             0.9914     1.0000      0.9876     0.9997     0.9996      1.0000
23             0.9915     1.0000      0.9878     0.9996     0.9994      1.0000
24             0.9916     1.0000      0.9878     0.9996     0.9995      1.0000
25             0.9907     1.0000      0.9864     0.9996     0.9995      1.0000
26             0.9992     1.0000      0.9988     0.9934     0.9905      1.0000
27             0.9921     1.0000      0.9886     0.9997     0.9995      1.0000
28             0.9924     1.0000      0.9889     0.9997     0.9995      1.0000
29             0.9875     1.0000      0.9819     0.9996     0.9994      1.0000
30             0.9931     1.0000      0.9900     0.9996     0.9994      1.0000
31             0.9919     1.0000      0.9883     0.9996     0.9994      1.0000
32             0.9992     1.0000      0.9989     0.9916     0.9879      1.0000
33             0.9923     1.0000      0.9888     0.9997     0.9995      1.0000
34             0.9701     1.0000      0.9565     0.9972     0.9959      1.0000
35             0.9913     1.0000      0.9874     0.9997     0.9995      1.0000
36             0.9928     1.0000      0.9895     0.9996     0.9995      1.0000
37             0.9924     1.0000      0.9890     0.9997     0.9995      1.0000
38             0.9963     1.0000      0.9946     0.9961     0.9943      1.0000
39             0.9930     1.0000      0.9898     0.9997     0.9995      1.0000
40             0.9922     1.0000      0.9887     0.9995     0.9993      1.0000

  

Furthermore, in another experiment using RFI acquired from a hospital in Sweden [9-10], the coefficients obtained previously from the DRIVE database were applied to the RFI from [9-10]; the result obtained is shown in Fig. 6. The unavailability of a ground truth mask allows only a subjective evaluation of the obtained mask.



Fig. 6: (a) Original RFI; (b) RFI mask using the k-means clustering technique; (c) RFI mask using the simple thresholding technique.

   

Lastly, the performance of the proposed pseudo modeling algorithm was evaluated on RFI acquired from another fundus camera in Malaysia; the result obtained is shown in Fig. 7. The unavailability of a ground truth mask again allows only a subjective evaluation of the obtained mask. The superior performance of the simple thresholding technique is evident from a comparison of Figs. 7b and 7c.

 

 

Fig. 7: (a) Original RFI; (b) RFI mask using the k-means clustering technique; (c) RFI mask using the simple thresholding technique.

   

5.   CONCLUSION 

A new method of generating the RFI mask using a pseudo modeling technique has been presented in this paper, and several existing works on CAADRFI have been discussed. The accuracy obtained by applying the proposed technique to the RFI contained in the DRIVE database varies between 99.62% and 99.97%. It has also been shown that using simple thresholding to classify the output of the RVNN leads to improved performance compared with the k-means clustering technique. Furthermore, the proposed technique has been used to generate RFI masks from low visual quality RFI using the same extracted pseudo model coefficients.




 

ACKNOWLEDGMENT 

This work is partially supported by Malaysian e-Science grant 01-01-08-SF0083.

 

REFERENCES 

[1]    Conference Report: Screening for Diabetic Retinopathy in Europe 15 years after the St. Vincent Declaration: the Liverpool Declaration 2005. Retrieved March 18, 2010, from website: http://reseauophdiat.aphp.fr/Document/Doc/confliverpool.pdf#search='www.drsceening2005.org.uk'

[2]   K. A. Goatman, A. D. Whitwam, A. Manivannan, J. A. Olson, and P. F. Sharp, Colour 
normalisation of retinal images,  Proc. Med. Imag. Understanding and Analysis, 2003. 

[3]   H. Yazid, H. Arof and N. Mokhtar, Edge Sharpening for Diabetic Retinopathy Detection,  
IEEE Conference on Cybernetics and Intelligent Systems, pp 41-44, 2010. 

[4]    P. Kahai, K. R. Namuduri and H. Thompson, A Decision Support Framework for Automated Screening of Diabetic Retinopathy, Int. Journal of Biomedical Imaging, pp. 1-8, Volume 2006, Article ID 45806.

[5]   W. H. Herman, R. E. Aubert, M. A. Ali, E. S. Sous, and A. Badran, Diabetes mellitus in 
Egypt: risk factors, prevalence and future burden,   Eastern Mediterranean Health J., (3), 
144-148, 1997. 

[6]   A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, A Comparative Evaluation of 
Preprocessing Methods for Automatic Detection of Retinal Anatomy,  Proceedings of the 
Fifth International Conference on Informatics and Systems (INFOS 07), pp 24 -30, March 
24-26, 2007. 

[7]    J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever and B. van-Ginneken, Ridge-based 
vessel segmentation in color images of the retina,  IEEE Trans. Med. Imag.,(23), 501-509, 
April 2004. 

[8] Abate Diabetes: Diabetes. Accessed March 21, 2006, from Website: 
http://www.abatediabetes.com/diabetes.html 

[9]     A. M. Aibinu, M. I. Iqbal, M. Nilsson and M. J. E. Salami, Automatic Diagnosis of Diabetic 
Retinopathy from Fundus Images Using Digital Signal and Image Processing Techniques,  
International Conference on Robotics, Vision, Information, and Signal Processing, Penang, 
Malaysia, pp. 510 - 515, Nov. 2007. 

[10]  A. M. Aibinu, M. I. Iqbal, M. Nilsson and M. J. E. Salami, A New Method of Correcting 
Uneven Illumination Problem in Fundus Images,  International Conference on Robotics, 
Vision, Information, and Signal Processing, Penang, Malaysia, pp. 445 - 449, Nov.2007. 

[11]  A.M. Aibinu, M.I. Iqbal, A.A. Shafie, M.J.E. Salami, M. Nilson, Vascular intersection 
detection in retina fundus images using a new hybrid approach, Computers in Biology and 
Medicine, (40), 81-89, January, 2010.  

[12]  M. I. Iqbal, A. M. Aibinu, M. Nilsson, I. B. Tijani, and M. J. E. Salami, Detection of Vascular Intersection in Retina Fundus Image Using Modified Cross Point Number and Neural Network Technique, Proceedings of the International Conference on Computer and Communication Engineering, pp. 241-246, 2008.




 

[13]  N. M. Tan, D.W.K. Wong, J. Liu, W.J. Ng, Z. Zhang, J.H. Lim, Z. Tan, Y. Tang, H. Li, S. Lu 
and T.Y. Wong, Automatic Detection of the Macula in the Retinal Fundus Image by 
Detecting Regions with Low Pixel Intensity,  International Conference on Biomedical and 
Pharmaceutical Engineering, ICBPE '09, pp 1-5, 2-4 Dec. 2009, Singapore. 

[14]  S. Lu, J. Liu, J. H. Lim, Z. Zhang, T. N. Meng, W. K. Wong, H. Li, and T. Y. Wong, 
Automatic Fundus Image Classification for Computer-Aided Diagonsis,  31st Annual 
International Conference of the IEEE EMBS Minneapolis, pp 1453-1456, 2009.  

[15]  Y.P. Kee, I. Lila Iznita, M.H. Ahmad Fadzil, A.N. Hanung, N. Hermawan and S.A. Vijanth, 
Conference on Innovative Technologies in Intelligent Systems and Industrial Applications 
(CITISIA 2009), 2009. 

[16]  V. Bevilacqua, S. Cambò, L. Cariello, and G. Mastronardi, A Combined Method to Detect Retinal Fundus Features, European Conference on Emergent Aspects in Clinical Data Analysis, Pisa, Italy, Sept. 2005.

[17]  P. Aravindhan and P.N. Jebarani Sargunar, Automatic Exudates Detection in Diabetic 
Retinopathy Images Using Digital Image Processing Algorithms,  Proceedings of the Int. 
Conf. on Information Science and Applications ICISA 2010, pp 26-30, February, 2010. 

[18]  I.K.E. Purnama, K.Y.E. Aryanto. Branches Filtering Approach to Extract Retinal Blood 
Vessels in Fundus Image, 

[19]   L. Gagnon, M. Lalonde, M. Beaulieu, and M. C. Boucher, Procedure to detect anatomical 
structures in optical fundus images,  Proc. Conf. Med. Imag. 2001, Image Processing (SPIE 
4322), San Diego, pp.1218-1225, 2001. 

[20]   A. F. M. Hani and H. A. Nugroho, Model-Based Retinal Vasculature Enhancement in 
Digital Fundus Image using Independent Component Analysis,  IEEE Symposium on 
Industrial Electronics and Applications (ISIEA 2009), pp 160-164, 2009. 

[21]  A. Aquino, M. E. Gegúndez-Arias and D. Marín, Detecting the Optic Disc Boundary in Digital Fundus Images Using Morphological, Edge Detection, and Feature Extraction Techniques, IEEE Transactions on Medical Imaging, (29), 1860-1869, Nov. 2010.

[22]   E. N. Bruce, Biomedical Signal Processing and Signal Modeling, John Wiley and Sons, 2000.

[23]   S. L. Marple, Digital Spectral Analysis with Applications, Prentice Hall, Inc., 1987. 

[24]   M. H. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley and Sons, New 
York, 1996 

[25]  J. G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms and 
Applications, 4th ed., Pearson Prentice Hall, 2007. 

[26]  D.G. Manolakis, V. K. Ingle and S. M. Kogon, Statistical and Adaptive Signal Processing, 
Mc. Graw Hill, 2000. 

[27]  M. Aibinu, M. Nilsson, M. J. E. Salami and A. A. Shafie , Voice Activity Detection Using 
Modeling Approach   submitted for publication in Computers in Medicine and Biology, 
2010. 

[28]  Y. H. Hu and J Hwang, Introduction to Neural Networks for Signal Processing,  CRC Press 
LLC, 2002. 

[29]  S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Englewood Cliffs, NJ: Prentice Hall.




 

[30]  A. M. Aibinu, M. J. E. Salami, and A. A. Shafie, Determination of Complex-Valued 
Parametric Model Coefficients Using Artificial Neural Network Technique,  AANS, volume 
2010 (2010), Article ID 984381  

[31] A. Hirose, Complex-Valued Neural Networks, Series on Studies in Computational 
Intelligence, pp. 176, ISBN-10: 3-540-33456-4, ISBN-13: 978-3-540-33456-9, New York, 
NY- Springer-Verlag, 2006. 

[32]  A. Hirose (Editor), Complex-Valued Neural Networks, Theories and Applications, Series on 
Innovative Intelligence, pp. 363, ISBN-981-238-464-2, 2003. 

[33]  A. Hirose, Complex-valued neural networks for more fertile electronics, Journal of the IEICE, 87(6):447-449, June 2004.

[34]  A. Hirose, Complex-valued neural networks: The merits and origin, Proceedings of Int. Joint Conf. on Neural Networks, pp. 257-264, June 2009.

[35]  A. Bhuiyan, B. Nath, K. Ramamohanarao, R. Kawasaki and T. Y. Wong, Automated Analysis of Retinal Vascular Tortuosity on Color Retinal Images, Science+Business Media, LLC, 25 May 2010.

[36]  F. Villalobos and E. Riveron, Fast Automatic Retinal Vessel Segmentation and Vascular Landmarks Extraction Method for Biometric Applications, International Conference on Biometrics, Identity and Security (BIdS 2009), pp. 1-10, 22-23 Sept. 2009.

[37]  The DRIVE database, Image Sciences Institute, University Medical Center Utrecht, The Netherlands, http://www.isi.uu.nl/Research/Databases/DRIVE/, last accessed on 30th April, 2009.

[38]  T. Fawcett, ROC Graphs: Notes and Practical Considerations for Researchers, Technical Report MS 1143 – Extended version of HPL-2003-4, HP Laboratories, 2004.