Engineering, Technology & Applied Science Research, Vol. 13, No. 2, 2023, pp. 10425-10431

Improvement of Classification Accuracy of Four-
Class Voluntary-Imagery fNIRS Signals using 
Convolutional Neural Networks 

 

Md. Mahmudul Haque Milu  
Department of Biomedical Engineering, Jashore University of Science and Technology (JUST), 
Bangladesh 
mahmudhmilu@gmail.com 
 
Md. Asadur Rahman 
Department of Biomedical Engineering, Military Institute of Science and Technology (MIST), 
Bangladesh 
bmeasadur@gmail.com  
 
Mohd Abdur Rashid 
Department of EEE, Noakhali Science and Technology University, Bangladesh | Division of Electronics 
and Informatics, Gunma University, Japan 
marashid.eee@nstu.edu.bd 
(corresponding author) 
 
Anna Kuwana 
Division of Electronics and Informatics, Gunma University, Japan 
kuwana.anna@gunma-u.ac.jp  
 
Haruo Kobayashi 
Division of Electronics and Informatics, Gunma University, Japan 
koba@gunma-u.ac.jp 
 

Received: 23 January 2023 | Revised: 6 February 2023 | Accepted: 8 February 2023 

 

ABSTRACT 

Multiclass functional Near-Infrared Spectroscopy (fNIRS) signal classification has become a convenient way to realize an optical brain-computer interface. Classifying fNIRS signals with high accuracy is challenging when the signals are produced by voluntary and imagery movements of the same limb. Since voluntary and imagery movements show similar activation patterns in time and space, the classification accuracy of conventional shallow classifiers cannot reach an acceptable range. This paper proposes an accuracy improvement approach based on Convolutional Neural Networks (CNNs). In this work, voluntary and imagery hand movements (left hand and right hand) were performed by several participants, and these four-class signals were acquired using an fNIRS device. The signals were separated by task and filtered. With manual feature extraction, the signals were classified by support vector machine and linear discriminant analysis. The automatic feature extraction and classification mechanism of the CNN was then applied to the fNIRS signals. The results show that the CNN improves the classification accuracy to an acceptable range, which was not achieved by the conventional classifiers.

Keywords-fNIRS; voluntary and imagery fNIRS signal; classification accuracy; conventional classifiers; Convolutional Neural Network (CNN)

I. INTRODUCTION  

Brain-Computer Interface (BCI) is a remarkable concept of current neuro-computational research that offers the possibility of controlling a computer with brain commands. Research and development regarding BCI can strongly contribute to neuro-prosthetics applications [1-2]. A brain-controlled computer needs to read brain functionality properly.



Brain functionality can be assessed by both invasive and non-invasive procedures, with non-invasive procedures being the primary choice of modern BCI [3]. Electroencephalography (EEG) and magnetoencephalography (MEG) are well-known noninvasive methods based on the electrical activity of the brain. However, due to drawbacks such as noise sensitivity, poor spatial resolution, and motion sensitivity, these modalities need to be replaced by newer ones [4-5].

To overcome these limitations, functional Near-Infrared Spectroscopy (fNIRS) has become a widely used noninvasive optical modality that can measure functional neuro-activations based on the hemodynamics of the brain tissue [6-7]. Although this modality responds more slowly to brain activity than EEG and MEG, its very high spatial resolution and robust activation level give fNIRS significant importance in BCI research and applications [8-10]. For BCI, the brain has to be stimulated externally or internally by procedures termed stimuli. In BCI research, most stimuli are provided to the brain through imagery or voluntary motor actions [11]. These movement-related functions of the brain correlate with the motor cortex, which is located in the frontal lobe. Several research works have addressed the classification of movement-related (either voluntary or imagery) hemodynamics for BCI applications.

The activation patterns of voluntary and imagery movement-based stimuli have been widely examined in [12-15]. These studies suggested that several areas of the motor cortex and parts of the prefrontal cortex become active due to movement execution and imagination; however, voluntary and imagery movement-related tasks were not classified by machine learning algorithms in these works. In [16], the two-class problem of imagery left- and right-wrist movements was classified with the Linear Discriminant Analysis (LDA) method. A three-class BCI was studied in [17], with two of the classes being motor imagery tasks. Voluntary movement-related tasks of the left and right hand were classified in [18] from the fNIRS signals of optimally selected channels. Multiple motor imagery tasks (movements of the left hand, right hand, left foot, and right foot) were classified in [19]. In these works, the authors classified either voluntary or imagery movement-related fNIRS signals. To the best of our knowledge, no work has classified fNIRS signals involving both voluntary and imagery movement stimuli. Classifying voluntary and imagery movement-related fNIRS signals simultaneously is challenging, because the activation area of voluntary and imagery movements of the same organ is the same in the brain [13] and their activation strengths are similar. Achieving high classification accuracy for this problem is the challenge addressed in this work.

In this work, both voluntary and imagery movement fNIRS signals were first classified separately utilizing Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). We found satisfactory results for the two-class voluntary and the two-class motor imagery fNIRS signals. Additionally, we formed a four-class problem by combining the voluntary and imagery fNIRS signals and classified them using the same LDA- and SVM-based procedure. In this approach, the classification accuracy was only 55-65%. This is a great barrier to implementing neuro-rehabilitation prosthetics, where several events occur and the classification must be accurate and precise. To improve the classification accuracy for such a problem, we deployed a Convolutional Neural Network (CNN), which has an automatic feature extraction capability. Since the fNIRS signals have multiple channels, the time series of the changes in the concentrations of oxygenated hemoglobin (HbO2) and deoxygenated hemoglobin (dHb) were used to prepare two-dimensional data. These data were used to prepare topographic images of the hemodynamic activations, which were fed to the proposed CNN structure. The proposed CNN method can classify the 4-class fNIRS signal with 86.14% accuracy (on average), which is a significant improvement. Therefore, the contributions of this research work can be summarized as:

 Developing a shallow deep neural network to classify multi-class fNIRS signals with high accuracy.

 Proposing a method for an effective BCI based on same-limb voluntary-imagery fNIRS signal classification in subject-dependent and subject-independent approaches.

II. MATERIALS AND METHODS 

A. Dataset  

The dataset used in this research is available in [20] and was collected with permission from the authors. According to the description given in [20], the participants were healthy. In their experiment, four types of tasks were performed by the subjects: left-hand (LH) movement, right-hand (RH) movement, imagery left-hand (iLH) movement, and imagery right-hand (iRH) movement. Therefore, the subjects performed two voluntary and two imagery hand-movement tasks in different sessions. In each trial, a subject performed a 10-second task of two different movements with 20 seconds of rest after each task. At first, 20 seconds passed in the resting condition to establish the baseline values of the data. A 10-second task followed, which was either LH or iLH. This was followed by another 20-second resting period and then another 10-second task, which was either RH or iRH. This trial was performed repeatedly, with each subject performing 5 trials in each session. In total, sixteen (16) sessions (8 voluntary and 8 imagery) were performed by every participant, giving 40 trials of each task. Eventually, 40 trials of 4-class data were acquired from every subject. The fNIRS data were acquired using the fNIR device described in [21], which has 4 NIR sources and 10 detectors. The optode arrangement of this device covers 16 channels. The device provides a continuous real-time measure of the concentration changes of HbO2 and Hb. The Cognitive Optical Brain Imaging (COBI) Studio software was used to log the fNIRS data in computer memory [22].
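To make the timing of the protocol concrete, the following minimal Python sketch (not the authors' code) reconstructs the event timeline of one trial from the description above; the 2 Hz sampling rate is the one reported for the device in Section II.B.2, and the helper name and label strings are illustrative.

```python
# Illustrative sketch (not the authors' code): build the event timeline of one trial
# from the protocol described above, assuming a 2 Hz sampling rate.
FS = 2  # Hz, sampling rate of the fNIRS device (stated in Section II.B.2)

def trial_events(first_task="LH", second_task="RH"):
    """Return (label, start_s, end_s) tuples for one trial:
    20 s baseline rest, 10 s task 1, 20 s rest, 10 s task 2, 20 s rest."""
    blocks = [("rest", 20), (first_task, 10), ("rest", 20), (second_task, 10), ("rest", 20)]
    events, t = [], 0
    for label, dur in blocks:
        events.append((label, t, t + dur))
        t += dur
    return events

if __name__ == "__main__":
    for label, t0, t1 in trial_events("iLH", "iRH"):
        print(f"{label:>4}: samples {t0 * FS:3d}-{t1 * FS:3d}  ({t0}-{t1} s)")
```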

B. Methods 

In this section, all the data processing steps from signal 
acquisition to event classification for the performance test are 
explained. After designing the data acquisition protocol, data 
from prefrontal hemodynamics were collected. Then, in order to reduce the dimensionality, data compression using Principal Component Analysis (PCA) was applied.




After that, the baseline of the fNIRS data was corrected. As a part of the pre-processing, the data were filtered. Then, the data were separated into sequential events and prominent features were extracted. Using these features, the classification accuracy was calculated. The main steps of the proposed data processing method for fNIRS data classification are presented briefly in Figure 1. The figure focuses mainly on the conventional data classification technique. The steps are also described concisely with technical details in this subsection. The feature extraction and classification procedure of the CNN is also discussed, in order to clarify how the CNN structure was customized to achieve the objectives of the proposed method.

 

 

Fig. 1.  Block diagram of the overall signal processing steps used to create a machine learning-based predictive model and to evaluate the accuracy on the testing data.

1) Data Compression 

Since the data were acquired with 16 channels, the processing could face the issue of high dimensionality. To check the actual dimensionality, the data were transformed by PCA, and it was found that the actual dimension of the data was 4 instead of 16. The result of the PCA is shown in Figure 2, where channels belonging to the same underlying dimension cluster together. The resulting 4 dimensions are termed Left Lateral (LL: channels 1, 2, 3, 4), Left Medial (LM: channels 5, 6, 7, 8), Right Medial (RM: channels 9, 10, 11, 12), and Right Lateral (RL: channels 13, 14, 15, 16). The signals belonging to the same dimension were averaged. Therefore, the 16-channel configuration is compressed from i×16 to i×4. This compression helps reduce the feature dimension, which is very important for achieving high classification accuracy in the machine learning approach.
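As an illustration of this step, the following sketch (not the authors' code) checks the effective dimensionality with PCA and averages the 16 channels into the four regional signals; the random placeholder array and the use of scikit-learn are assumptions.

```python
# Illustrative sketch (not the authors' code): verify the effective dimensionality with
# PCA and compress the 16 channels into the 4 regional signals (LL, LM, RM, RL) by
# averaging, as described above. `data` stands in for an i x 16 fNIRS recording.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.standard_normal((600, 16))          # placeholder for an i x 16 recording

pca = PCA().fit(data)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

regions = {"LL": slice(0, 4),    # channels 1-4
           "LM": slice(4, 8),    # channels 5-8
           "RM": slice(8, 12),   # channels 9-12
           "RL": slice(12, 16)}  # channels 13-16
compressed = np.column_stack([data[:, s].mean(axis=1) for s in regions.values()])
print("compressed shape:", compressed.shape)   # i x 4
```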

2) Filtering and Baseline Correction 

Since fNIRS optodes collect data from the human body, the signal is full of artifacts and noise, so removing the different types of noise is the first step of data processing. The fNIRS data can be affected by three types of noise [23]: motion artifacts, physiological signals such as heart rate and respiration, and instrument- and environment-related noise. Motion artifacts occur due to head movement, which can shift the fNIRS detectors and make them lose contact with the skin, exposing them to ambient light or light emitted directly from the fNIRS source. These types of motion artifacts are easily recognized because they cause sudden, large spikes in the raw fNIRS data. Rapid head movements can also cause the blood to move toward or away from the monitored area, rapidly increasing or decreasing the blood volume. As the dynamics of this type of motion artifact are slower than LED pop, they can be confused with the actual hemodynamic response due to brain activation. Physiological signals such as heart rate (over 0.5 Hz) and respiration (over 0.2 Hz) lie in higher frequency ranges than the hemodynamic response, so they should also be eliminated. Therefore, the raw fNIRS signals were filtered by a 3rd-order Savitzky-Golay filter with a frame length of 21 (equivalent to a 0.1 Hz cut-off FIR low-pass filter for the 2 Hz fNIRS signal) [10]. The raw noisy and filtered fNIRS data are shown in Figure 3. Then, the data were separated into individual trials, and every trial of the fNIRS data was corrected by subtracting the baseline from the original signal. The baseline was calculated by averaging the data of the first task.
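A minimal sketch of this filtering and baseline-correction step (not the authors' code) is given below; it assumes the scipy implementation of the Savitzky-Golay filter with the reported settings (3rd order, 21-sample frame, 2 Hz signal) and, as a simplification, takes the baseline as the mean of the initial 20-second rest block.

```python
# Illustrative sketch (not the authors' code): 3rd-order Savitzky-Golay smoothing with
# a 21-sample frame (the settings reported above for the 2 Hz signal) followed by a
# baseline correction using the mean of the initial 20 s rest block (an assumption).
import numpy as np
from scipy.signal import savgol_filter

FS = 2                                          # Hz
t = np.arange(0, 80, 1 / FS)
raw = np.sin(2 * np.pi * 0.02 * t) + 0.3 * np.random.randn(t.size)  # placeholder channel

filtered = savgol_filter(raw, window_length=21, polyorder=3)

baseline = filtered[: 20 * FS].mean()           # first 20 s of the trial
corrected = filtered - baseline
```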

 

 
Fig. 2.  Principal component 1 vs. principal component 2, presenting the actual signal dimensions in eigenspace.

 
Fig. 3.  The raw fNIRS signal with physiological artifacts and the filtered signal after the Savitzky-Golay filtering technique (signal amplitude in micromole/liter).

3) Feature Extraction 

When the input data are too large to be processed and are suspected to be redundant, they can be transformed into a reduced set of features. Analyzing a large number of input variables requires a large amount of memory and computational power, and it can also cause the classification algorithms to over-fit. The selected features are expected to contain the relevant information of the input data so that the desired task can be performed using this reduced representation instead of the
complete initial data. In this study, the mean, slope, standard deviation, and skewness were extracted as the dominant features. Since the fNIRS signal does not exhibit complexity like EEG, MEG, or fMRI signals, simple time-domain features are sufficient to represent the signal characteristics [23]. In this experiment, we worked with the above features. Before training the machine learning-based network, all values of the training features were normalized between 0 and 1:

x' = (x − x_min) / (x_max − x_min)                                   (1)

where x' denotes the feature values rescaled between 0 and 1, x ∈ R^n are the actual feature values, and x_max and x_min are the maximum and minimum values of the features, respectively.
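The following sketch (not the authors' code) shows one way to compute these four features for a single event segment and to apply the min-max rescaling of (1); the use of scipy/numpy and the slope-from-linear-regression definition are assumptions.

```python
# Illustrative sketch (not the authors' code): extract the time-domain features named
# above (mean, slope, standard deviation, skewness) from one event segment and rescale
# each feature column with the min-max rule of (1).
import numpy as np
from scipy.stats import skew, linregress

def extract_features(segment, fs=2):
    """segment: 1-D array holding one event of one regional channel."""
    t = np.arange(segment.size) / fs
    slope = linregress(t, segment).slope        # slope of a fitted linear trend
    return np.array([segment.mean(), slope, segment.std(), skew(segment)])

def minmax_normalize(feature_matrix):
    """Rescale every feature column to [0, 1] as in (1)."""
    fmin = feature_matrix.min(axis=0)
    fmax = feature_matrix.max(axis=0)
    return (feature_matrix - fmin) / (fmax - fmin)
```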

4) Classification Methods    

Classification is the process of predicting the class (sometimes called the target) of given data points. Classification predictive modeling approximates a mapping function from input variables to discrete output variables. For the SVM- and LDA-based predictive models, training continues until the model achieves the desired level of accuracy on the training data. Since these machine learning models (SVM and LDA) gave low accuracy on the multiclass (4-class) problem, a CNN (a deep neural network) was also applied.

C. Deep Neural Network-based Classification   

A deep neural network needs no manually extracted features because it extracts features on its own. A CNN is one of the most widely used forms of deep neural network, and in this work a CNN is utilized for fNIRS data classification. Since feature extraction and classification are both included in the CNN structure, it is very important to prepare the input and output layers of the CNN properly. The preparation of the fNIRS data for the CNN and the structure of the proposed CNN layers are described briefly below.

1) Input Preparation 

A CNN is applied when the inputs are images or two-dimensional matrices. A computer represents an image as an array of pixel values whose size depends on the image dimensions; for example, a 32×32 image is stored as a 32×32×3 array for RGB images or a 32×32×1 array for grayscale images, where each number between 0 and 255 describes the pixel intensity at that point. A CNN consists of convolutional, nonlinear, pooling, and fully connected layers. In a convolutional layer, a filter whose depth equals that of the input and whose spatial size is set by the kernel is convolved with the input data. To build the feature map, the output of the convolutional layer is passed through an activation function, as in ANNs. After each convolutional layer, subsampling operations such as max-pooling are performed to enhance the performance [24-25]. As with ANNs, hyper-parameters such as the learning rate, batch size, and number of epochs must be tuned for the CNN to improve its classification performance.
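The paper does not spell out exactly how the 384×384 input matrices were assembled from the HbO2/Hb time series; the sketch below is therefore only one plausible construction (stacking the channel signals row-wise and resizing), with the function name, the resizing step, and the [0, 1] scaling all being assumptions.

```python
# Illustrative sketch (not the authors' code): one possible way to form a two-dimensional
# CNN input from the multichannel HbO2/Hb time series of one event, resized to the
# 384 x 384 input size listed in Table I.
import numpy as np
from scipy.ndimage import zoom

def make_input_image(hbo2, hb, out_size=384):
    """hbo2, hb: arrays of shape (16, n_samples) for one event."""
    stacked = np.vstack([hbo2, hb])                        # 32 x n_samples
    factors = (out_size / stacked.shape[0], out_size / stacked.shape[1])
    img = zoom(stacked, factors, order=1)                  # bilinear resize
    # Scale to [0, 1] so the matrix behaves like a grayscale image.
    return (img - img.min()) / (img.max() - img.min())

hbo2 = np.random.rand(16, 60)                              # placeholder 30 s event at 2 Hz
hb = np.random.rand(16, 60)
print(make_input_image(hbo2, hb).shape)                    # (384, 384)
```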

2) Structure of the CNN Layers  

In this research, the two-class problems (motor imagery hand movements or motor execution hand movements) were first classified by two conventional shallow classifiers (SVM and LDA) with satisfactory accuracy. However, when the 4-class classification was performed, the accuracy became very low, which indicates that shallow classifiers like SVM and LDA are not adequate for more than two classes in this setting. To improve this result, the CNN algorithm is proposed. A general structure of a CNN, including its main layers, is given in Figure 4.

 

 
Fig. 4.  Layers (convolutional, ReLU, pooling, and fully connected) of a 
CNN. 

Deep learning is a class of machine learning algorithms that 
uses a cascade of multiple layers of nonlinear processing units 
for feature extraction and transformation. A CNN is highly 
capable of automatically learning appropriate features from the 
input data by optimizing the weight parameters of each filter, 
using forward and backward propagation to minimize 
classification errors. 

a) Convolution Layer 

The convolution layer is the first layer to extract features 
from an input image (as given in Figure 4). Convolution 
preserves the relationship between pixels by learning the image 
features using small squares of input data. It is a mathematical 
operation that takes two inputs, the image matrix and a filter or 
kernel. Consider an image matrix of dimension h × w × d and a filter of dimension f_h × f_w × d. Then the output matrix has dimension (h − f_h + 1) × (w − f_w + 1) × 1.

Importantly, the depth of the filter must equal the depth of the input image; for example, a filter applied to a 5×5×3 image must also have a depth of 3. Starting from the first position of the input image, the filter slides (convolves) over the image, multiplying its values with the corresponding pixel values of the image. The method used in this work to prepare the input images from the data (HbO2 and Hb), and the way the convolutional filter runs over them, is illustrated in Figure 5. The products are summed to yield a single number, which corresponds to the filter positioned at the starting corner of the input image. This process is repeated for every location of the image.
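To make the output-size formula tangible, the following numpy sketch (not the authors' code) performs a single "valid" convolution of one filter over an h × w × d input and prints the resulting (h − f_h + 1) × (w − f_w + 1) map size.

```python
# Illustrative sketch (not the authors' code): a single "valid" convolution of one filter
# over an h x w x d input, confirming the (h - f_h + 1) x (w - f_w + 1) x 1 output size.
import numpy as np

def conv2d_valid(image, kernel):
    """image: h x w x d, kernel: f_h x f_w x d (same depth). Returns a 2-D feature map."""
    h, w, d = image.shape
    fh, fw, fd = kernel.shape
    assert d == fd, "filter depth must match image depth"
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw, :] * kernel)
    return out

image = np.random.rand(5, 5, 3)
kernel = np.random.rand(3, 3, 3)
print(conv2d_valid(image, kernel).shape)        # (3, 3): 5 - 3 + 1 = 3
```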

 




 

 

Fig. 5.  The input data consisting of the concentration changes of HbO2 (red) and Hb (blue) over all channels. A convolutional filter runs through the input data along the vertical axis.

b) ReLU Layer 

The ReLU (Rectified Linear Unit) layer induces nonlinearity in the values of the incoming layer. The ReLU function only passes the values x which are greater than zero:

f(x) = x, if x > 0;  f(x) = 0, if x ≤ 0                                   (2)

Other functions are also used to increase nonlinearity, for 
example f(x) = tanh(x), f(x) = |tanh(x)|, and the sigmoid 

function f(x) = 1 / (1 + e^(−x)). ReLU is often preferred to other

functions because it trains the neural network several times 
faster without a significant penalty to generalization accuracy.  
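As a small illustration (not from the paper), the sketch below evaluates the ReLU of (2) and the other nonlinearities mentioned above element-wise on a sample vector.

```python
# Illustrative sketch (not the authors' code): ReLU of (2) compared with the other
# nonlinearities mentioned above, applied element-wise.
import numpy as np

x = np.linspace(-3, 3, 7)
relu    = np.maximum(x, 0)                      # f(x) = x for x > 0, else 0
tanh    = np.tanh(x)
abstanh = np.abs(np.tanh(x))
sigmoid = 1.0 / (1.0 + np.exp(-x))
print(np.round(relu, 2), np.round(sigmoid, 2))
```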

c) Pooling Layer 

This layer reduces the number of parameters if the images 
are too large. Spatial pooling is also called downsampling and 
reduces the dimensionality of each map, but retains the 
important information. Spatial pooling can be of different 
types: Max pooling, average pooling, and sum pooling. In this 
work, the max pooling layer is used.  

d) Fully Connected Layer 

This layer takes the output of the convolution, ReLU, and 
pooling layers as input. With the fully connected layers, we 
combine these features to create a model. Finally, we have an 
activation function such as softmax or sigmoid to classify the 
outputs. 

e) Cross Validation 

k-fold cross-validation is used to estimate the classification 
performance of the predictive model. The first step in this 
process is to divide the data into k folds, where each fold 
contains an identical amount of input data. One fold is used as
a test dataset, while the remaining folds are used as training 
sets. Afterward, a classification procedure is applied to the 

selected test and training sets. This process is performed for 
each of the k folds. In this study, 5-fold cross-validation was 
performed. The parameters of different layers and the structure 
of the proposed CNN are summarized in Table I.   
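A minimal sketch of the 5-fold split (not the authors' code) is shown below, using scikit-learn's KFold; the shuffling and the per-subject trial count of 40 (from Section II.A) are assumptions about how the folds were formed.

```python
# Illustrative sketch (not the authors' code): 5-fold cross-validation indices; each
# fold serves once as the test set while the rest form the training set.
import numpy as np
from sklearn.model_selection import KFold

n_trials = 40                                   # 40 trials of 4-class data per subject
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(np.arange(n_trials)), start=1):
    print(f"fold {fold}: {len(train_idx)} training trials, {len(test_idx)} test trials")
```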

TABLE I.  CNN LAYER PARAMETERS 

Parameters Value 

Matrix input layer 384×384 
Convolution layer 1 2, 8 
Max-pooling layer 1 2 (pool size) 
Convolution layer 2 4, 16 
Max-pooling layer 2 2 (pool size) 
Fully connected layer 4 
Learning rate 0.01 
Epoch 15 
Validation frequency 3 
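For reference, the sketch below (not the authors' code) assembles a CNN that approximates the layer parameters of Table I in Keras; interpreting "2, 8" as (filter size 2, 8 filters) and "4, 16" as (filter size 4, 16 filters), as well as the choice of SGD and the softmax output, are assumptions.

```python
# Illustrative sketch (not the authors' code): a Keras model approximating Table I.
# The (filter size, number of filters) reading of the table entries is an assumption.
import tensorflow as tf

def build_cnn(input_size=384, n_classes=4):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, kernel_size=2, activation="relu",
                               input_shape=(input_size, input_size, 1)),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(16, kernel_size=4, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # fully connected layer
    ])

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=15, validation_data=(x_val, y_val))
```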

 

III. RESULTS AND DISCUSSION 

The activation due to imagery movement planning occurs in the prefrontal cortex and can be measured through the increased concentration of HbO2. For left-hand and right-hand imagery movement planning, the HbO2 concentration increases in the right and left hemisphere, respectively. Four different portions were selected to observe the actual concentration change of HbO2 during LH and RH movement planning. The variation of HbO2 concentration in the LL, LM, RM, and RL portions of the prefrontal cortex due to planning LH and RH movements of a randomly selected subject is presented in Figures 6 and 7.

 

 
(a) 

 
(b) 

Fig. 6.  Comparison of the concentration changes in HbO2 between LH and 
RH movement planning in (a) LL and (b) LM portion of the prefrontal cortex. 

 
(a) 

 
(b) 

Fig. 7.  Comparison of the concentration changes in HbO2 between LH and 
RH movement planning in (a) RM and (b) RL portion of the prefrontal cortex. 

It is easily observable from Figures 6 and 7 that all the 
proposed portions of the prefrontal cortex exhibit significant 
variations in the concentration of HbO2. 




 

Fig. 8.  Training and validation progress with loss reduction process in 
CNN. 

 

 
Fig. 9.  Classification accuracy comparison of SVM, LDA, and CNN. 

TABLE II.  CLASSIFICATION ACCURACY OF LDA AND SVM FOR 2-CLASS AND 4-CLASS fNIRS SIGNAL

Subject | SVM LH vs RH (%) | SVM iLH vs iRH (%) | LDA LH vs RH (%) | LDA iLH vs iRH (%) | SVM 4-class (%) | LDA 4-class (%)
1       | 80               | 77.5               | 90               | 81.5               | 45.45           | 47.5
2       | 83.5             | 80                 | 92.5             | 82                 | 52.27           | 45
3       | 78               | 69.5               | 78               | 69.5               | 50              | 45
4       | 80               | 72                 | 85               | 73.5               | 54.54           | 55
5       | 83.5             | 81.5               | 88.5             | 80                 | 52.27           | 52.5

 

As a result, all these portions should be taken into account as significant sources for discriminating the imagery activities. Accordingly, features were extracted from the fNIR signals of the LL, LM, RM, and RL portions. The features were used to train SVM and LDA separately to check the feature-dependent accuracy of the predictive model. Several time-domain features, such as variance, total summation, and kurtosis, did not give good classification accuracy, whereas mean and slope did. The classification accuracies for two classes by SVM and LDA were satisfactory, but for 4 classes the classification accuracy was very low. The results are given in Table II. Therefore, CNN was applied in order to improve the classification accuracy for the 4-class problem.

The training progress of the CNN is shown in Figure 8, where both training accuracy and loss are shown graphically for a randomly chosen subject. The classification accuracy of the applied CNN is presented in Figure 9 along with the results of SVM and LDA. From the results given in Figure 9, it is clear that the classification accuracy of the CNN for the 4-class fNIRS signal is far more satisfactory than the results of LDA and SVM.

IV. CONCLUSION 

This research work investigates how a CNN can enhance the classification accuracy for data with more than two classes when the SVM and LDA classification accuracy is very low. The results show that for iLH, iRH, LH, and RH on five subjects, the CNN-based scheme provides 86.24% accuracy (on average), whereas the average classification accuracies of SVM and LDA are 50.9% and 49%, respectively. In the case of subject-independent data, the CNN shows 83.33% accuracy. These results reveal that a CNN can lead to the practical development of a BCI system. Since classification accuracy is the most essential factor in designing a practical BCI device, we will try to explore further improvements in the accuracy of fNIRS-based BCI by implementing several deep learning-based algorithms. We can also try to merge other modalities with fNIRS and check whether this provides better accuracy.

ACKNOWLEDGMENT 

The authors acknowledge the partial financial support 
provided by the Department of Biomedical Engineering, MIST, 
Dhaka-1216, Bangladesh. 

REFERENCES 

[1] S. Ajami, A. Mahnam, and V. Abootalebi, "Development of a practical 
high frequency brain–computer interface based on steady-state visual 
evoked potentials using a single channel of EEG," Biocybernetics and 
Biomedical Engineering, vol. 38, no. 1, pp. 106–114, Jan. 2018, 
https://doi.org/10.1016/j.bbe.2017.10.004. 

[2] A. Rezeika, M. Benda, P. Stawicki, F. Gembler, A. Saboor, and I. 
Volosyak, "Brain–Computer Interface Spellers: A Review," Brain 
Sciences, vol. 8, no. 4, Apr. 2018, Art. no. 57, https://doi.org/10.3390/brainsci8040057.

[3] F. Cincotti et al., "Non-invasive brain–computer interface system: 
Towards its application as assistive technology," Brain Research 
Bulletin, vol. 75, no. 6, pp. 796–803, Apr. 2008, https://doi.org/10.1016/j.brainresbull.2008.01.007.

[4] K. D. Tzimourta et al., "Evaluation of window size in classification of 
epileptic short-term EEG signals using a Brain Computer Interface 
software," Engineering, Technology & Applied Science Research, vol. 8, 
no. 4, pp. 3093–3097, Aug. 2018, https://doi.org/10.48084/etasr.2031. 

[5] "What Is MEG?" http://web.mit.edu/kitmitmeg/whatis.html (accessed 
Feb. 15, 2023). 

[6] H. Y. Kim, K. Seo, H. J. Jeon, U. Lee, and H. Lee, "Application of 
Functional Near-Infrared Spectroscopy to the Study of Brain Function in 
Humans and Animal Models," Molecules and Cells, vol. 40, no. 8, pp. 
523–532, Aug. 2017, https://doi.org/10.14348/molcells.2017.0153. 

[7] Md. A. Rahman and M. Ahmad, "Identifying appropriate feature to 
distinguish between resting and active condition from FNIRS," in 3rd 
International Conference on Signal Processing and Integrated 




Networks, Noida, India, Feb. 2016, pp. 671–675, https://doi.org/10.1109/SPIN.2016.7566781.

[8] N. Naseer and K.-S. Hong, "fNIRS-based brain-computer interfaces: a 
review," Frontiers in Human Neuroscience, vol. 9, 2015, Art. no. 3, 
https://doi.org/10.3389/fnhum.2015.00003. 

[9] S. N. Abdulkader, A. Atia, and M.-S. M. Mostafa, "Brain computer 
interfacing: Applications and challenges," Egyptian Informatics Journal, 
vol. 16, no. 2, pp. 213–230, Jul. 2015, https://doi.org/10.1016/j.eij.2015.06.002.

[10] Md. A. Rahman, M. A. Rashid, and M. Ahmad, "Selecting the optimal 
conditions of Savitzky–Golay filter for fNIRS signal," Biocybernetics 
and Biomedical Engineering, vol. 39, no. 3, pp. 624–637, Jul. 2019, 
https://doi.org/10.1016/j.bbe.2019.06.004. 

[11] K.-S. Hong, M. J. Khan, and M. J. Hong, "Feature Extraction and 
Classification Methods for Hybrid fNIRS-EEG Brain-Computer 
Interfaces," Frontiers in Human Neuroscience, vol. 12, 2018, Art. no. 
246, https://doi.org/10.3389/fnhum.2018.00246. 

[12] T. Hanakawa, I. Immisch, K. Toma, M. A. Dimyan, P. Van Gelderen, 
and M. Hallett, "Functional Properties of Brain Areas Associated With 
Motor Execution and Imagery," Journal of Neurophysiology, vol. 89, no. 
2, pp. 989–1002, Feb. 2003, https://doi.org/10.1152/jn.00132.2002. 

[13] S. C. Wriessnegger, J. Kurzmann, and C. Neuper, "Spatio-temporal 
differences in brain oxygenation between movement execution and 
imagery: A multichannel near-infrared spectroscopy study," 
International Journal of Psychophysiology, vol. 67, no. 1, pp. 54–63, 
Jan. 2008, https://doi.org/10.1016/j.ijpsycho.2007.10.004. 

[14] A. M. Batula, J. A. Mark, Y. E. Kim, and H. Ayaz, "Comparison of 
Brain Activation during Motor Imagery and Motor Movement Using 
fNIRS," Computational Intelligence and Neuroscience, vol. 2017, May 
2017, Art. no. e5491296, https://doi.org/10.1155/2017/5491296. 

[15] S. M. S. Galib, S. Md. R. Islam, and Md. A. Rahman, "A multiple linear 
regression model approach for two-class fNIR data classification," Iran 
Journal of Computer Science, vol. 4, no. 1, pp. 45–58, Mar. 2021, 
https://doi.org/10.1007/s42044-020-00064-0. 

[16] N. Naseer and K.-S. Hong, "Classification of functional near-infrared 
spectroscopy signals corresponding to the right- and left-wrist motor 
imagery for development of a brain–computer interface," Neuroscience 
Letters, vol. 553, pp. 84–89, Oct. 2013, https://doi.org/10.1016/j.neulet.2013.08.021.

[17] K.-S. Hong, N. Naseer, and Y.-H. Kim, "Classification of prefrontal and 
motor cortex signals for three-class fNIRS–BCI," Neuroscience Letters, 
vol. 587, pp. 87–92, Feb. 2015, https://doi.org/10.1016/j.neulet.2014.12.029.

[18] A. Janani and M. Sasikala, "Evaluation of classification performance of 
functional near infrared spectroscopy signals during movement 
execution for developing a brain–computer interface application using 
optimal channels," Journal of Near Infrared Spectroscopy, vol. 26, no. 4, 
pp. 209–221, Aug. 2018, https://doi.org/10.1177/0967033518787331. 

[19] A. M. Batula, H. Ayaz, and Y. E. Kim, "Evaluating a four-class motor-
imagery-based optical brain-computer interface," in 36th Annual 
International Conference of the IEEE Engineering in Medicine and 
Biology Society, Chicago, IL, USA, Aug. 2014, pp. 2000–2003, 
https://doi.org/10.1109/EMBC.2014.6944007. 

[20] Md. A. Rahman, M. S. Uddin, and M. Ahmad, "Modeling and 
classification of voluntary and imagery movements for brain–computer 
interface from fNIR and EEG signals through convolutional neural 
network," Health Information Science and Systems, vol. 7, no. 1, Oct. 
2019, Art. no. 22, https://doi.org/10.1007/s13755-019-0081-5. 

[21] "fnir-devices-technology.pdf." Accessed: Feb. 15, 2023. [Online]. 
Available: https://www.biopac.com/wp-content/uploads/fnir-devices-
technology.pdf. 

[22] "COBI fNIR Imager software | BIOPAC," BIOPAC Systems, Inc. 
https://www.biopac.com/upgrade/cobi-fnir-imager-software/. 

[23] Md. A. Rahman, M. A. Rashid, M. Ahmad, A. Kuwana, and H. 
Kobayashi, "Activation Modeling and Classification of Voluntary and 
Imagery Movements From the Prefrontal fNIRS Signals," IEEE Access, 
vol. 8, pp. 218215–218233, 2020, https://doi.org/10.1109/ACCESS.2020.3042249.

[24] N. C. Kundur and P. B. Mallikarjuna, "Deep Convolutional Neural 
Network Architecture for Plant Seedling Classification," Engineering, 
Technology & Applied Science Research, vol. 12, no. 6, pp. 9464–9470, 
Dec. 2022, https://doi.org/10.48084/etasr.5282. 

[25] S. Nuanmeesri, "A Hybrid Deep Learning and Optimized Machine 
Learning Approach for Rose Leaf Disease Classification," Engineering, 
Technology & Applied Science Research, vol. 11, no. 5, pp. 7678–7683, 
Oct. 2021, https://doi.org/10.48084/etasr.4455.