
AB-MTEDeep Classifier Trained with AAGAN 

for the Identification and Classification of 

Alopecia Areata 
 

Chinnaiyan Saraswathi  

Department of Computer and Information Science, Faculty of Science, Annamalai University, India 

saraswathichinnaiyan@gmail.com (corresponding author) 

 

Balasubramanian Pushpa 

Department of Computer and Information Science, Faculty of Science, Annamalai University, India 

pushpasidhu@gmail.com
 

Received: 15 March 2023 | Revised: 7 April 2023 | Accepted: 17 April 2023 

Licensed under a CC-BY 4.0 license | Copyright (c) by the authors | DOI: https://doi.org/10.48084/etasr.5852 

ABSTRACT 

Artificial Intelligence (AI) is widely used in dermatology to analyze trichoscopy imaging and assess 

Alopecia Areata (AA) and scalp hair problems. From this viewpoint, the Attention-based Balanced Multi-

Tasking Ensembling Deep (AB-MTEDeep) network was developed, which combined the Faster Residual 

Convolutional Neural Network (FRCNN) and Long Short-Term Memory (LSTM) network with cross 

residual learning to classify scalp images into different AA classes. This article presents a new data 

augmentation model called AA-Generative Adversarial Network (AA-GAN) to produce a huge number of 

images from a set of input images. The structure of AA-GAN and its loss functions are comparable to those 

of standard GAN, which encompasses a generator and a discriminator network. To generate high-quality 

AA structure-based images, the generator was trained to extract the 2D orientation and confidence maps 

along with the bust depth map from real hair and scalp images. The discriminator was also used to 

separate real from generated images, and its decisions were provided as feedback to the generator to create synthetic

images that are extremely close to the real input images. The created images were used to train the AB-

MTEDeep model for AA classification. Finally, the experimental results exhibited that the AA-GAN-AB-

MTEDeep achieved 96.94% accuracy. 

Keywords-artificial intelligence; Alopecia Areata; deep learning; data augmentation; GAN 

I. INTRODUCTION  

Hair loss or scalp problems are usually caused by stressful 
situations [1]. Nowadays, many individuals have scalp illnesses 
like psoriasis, baldness, etc., as a result of several issues, 
including bad daily routines, unequal weight growth, extreme 
weariness, and dangerous environments [2-3]. The function of 
iron and nutritional supplements in the diagnosis of baldness 
was examined in [4]. The most common kind of hair loss, 
known as Alopecia Areata (AA), affects up to 70% of men and 
40% of women [5]. Problems with scalp hair can be influenced 
by internal factors such as endocrine, hereditary, illness, and 
others. Many cases of psoriasis, cellulitis, and associated signs 
and symptoms were reported in France [6]. According to many 
studies, psoriasis affects about 18% of children in the US and 
Australia. As a result, both adults and children might 
experience scalp problems. Therefore, it is important to know 
how to properly manage the scalp and prevent diseases linked 
to baldness. Recently, specialized therapies have emerged to 
address severe scalp disorders [7]. 

The state of a patient's scalp is assessed manually in the 
most frequently used analytical processes for treating baldness. 
However, these manual diagnostic examinations can provide a 
wide range of findings and raise questions about the diagnosis 
and health of the scalp since they depend on the skills of the 
physiotherapist [8]. A substantial amount of effort and money 
have been invested in continuously developing physiotherapy 
skill sets [9]. Another significant problem is the variations in 
how scalp hair microscope pictures should be interpreted, even 
among licensed and experienced physiotherapists. Such 
discrepancies in medical interventions might be attributed to 
ignorance [10]. 

ScalpEye [11] is a deep learning-based scalp identification 
system developed to overcome these issues. A minority of 
people experience a patch of hair loss, while other people 
report more severe or uncommon issues [12]. In recent years, 
numerous scalp and dermoscopic images have been employed 
in recognizing and diagnosing AA. Trichoscopy and biopsies 
are frequently necessary to identify and diagnose AA as the 
reason for hair loss [13]. One of the main problems with these 
diagnostics is the number of tests required for a reliable diagnosis. Additional research on AA diagnosis and detection
using AI methods, such as Support Vector Machine (SVM), 
Artificial Neural Networks (ANNs), Convolutional Neural 
Networks (CNNs), and others, is highly encouraged [14]. 
According to this perspective, a correct diagnosis requires the 
simultaneous recognition of AA and scalp condition. To 
achieve this, an Ensemble Parameter Optimized LSTM-based Pre-learned DL (EPOLSTM-PDL) model, which combined LSTM and pre-trained CNNs such as AlexNet, ResNet, and InceptionNet, was proposed in [15]. The pre-trained CNNs were used to capture the deep features from the hair and scalp images, which were learned by the LSTM network. The LSTM's hyperparameters were chosen by the Battle Royale Optimization (BRO) algorithm. However, if the network depth is increased, a degradation problem occurs, causing overfitting.
Therefore, the Attention-based Balanced Multi-Tasking 
Ensembling Deep (AB-MTEDeep) model was developed, 
which adopted cross-residual learning in the LSTM with 
FRCNN to improve efficiency [16]. However, the robustness and generalization ability of deep learning models largely depend on the amount of data available in the training phase. Accordingly, it is essential to have an adequate number of images for efficient training.

This paper proposes a new data augmentation model called 
AA-GAN to generate a large number of images from given 
input images. The design and loss functions of this AA-GAN are analogous to those of the standard GAN, which comprises both a generator and a discriminator. From real hair and scalp images, the generator was trained to reconstruct high-quality AA structure-based images based on the extracted 2D orientation and confidence maps, along with a bust depth map. Moreover, the discriminator was used to distinguish real from generated images, and its feedback guided the generator to generate synthetic images very close to the real input images. Furthermore, the created images were used to
train the AB-MTEDeep model, which was used to classify the 
test images into different AA classes. Thus, the AA-GAN can 
increase the number of training images for more effective 
classification of AA conditions. 

II. RELATED WORKS 

In [17], a novel data augmentation method was proposed, 
called Random Image Cropping and Patching (RICAP), which 
randomly cropped and patched an input image to generate a 
new training image for a CNN. Also, the class labels of the actual images were mixed to gain the benefit of soft labels. However, appropriate hyperparameters had to be chosen for effective performance. In [18], StyleGAN was used to generate high-quality nodule images fed to Transfer-ResNet50 to classify tumors, but it needed further
enhancement in the diversity of the created images. In [19], an 
Enhanced Framework of GANs (EF-GANs) was designed for 
VGG16, integrating geometric transformation schemes and 
GANs for image augmentation. In [20], the Self-attention 
Progressive Growing of GANs (SPGGANs) was proposed to 
create more fine-grained nodule scans by fusing details from all 
feature positions, and the Two-Timescale Update Rule (TTUR) 
was applied to enhance the model's robustness. In [21], the 
Zero Shot Augmentation Learning (ZSAL) framework was 

presented for health signal processing. Initially, the contour of 
a lesion was recognized by a skilled physician, and a 
background image without a lesion was chosen. In [22], a 
novel Inception-Augmentation GAN (IAGAN) was proposed 
to create new X-ray scans that could help detect pneumonia 
and COVID-19. In [23], a generic adversarial data 
augmentation model called AdvChain was proposed to enhance 
the diversity and efficiency of learning data for medical image 
segmentation. In [24], a new model called XtremeAugment 
was developed for labeling and augmenting images, using 
Hardware Dataset Augmentation (HAD) and Object-Based 
Augmentation (OBA). The HAD was used to enable users to 
acquire more data, whereas the OBA was used to increase the 
training data variability and maintain the distribution of the 
augmented images being similar to the actual data. But, this 
approach needed annotated images for efficient training and 
was not effective with very limited images. 

III. PROPOSED METHODOLOGY 

Figure 1 presents an overall schematic of the proposed AA-
GAN. 

 

 

Fig. 1.  Schematic representation of the proposed study. 

A. Image Collection 

This system used the following 2 openly accessible datasets 
for analysis: 

 Figaro1k dataset, which is a public dataset containing 1050 hair photos, evenly allocated into distinct types such as straight, wavy, and curly [25].

 Dermnet dataset, which is a public dataset accessible on Dermnet [26], containing 23 types of dermatological illnesses, including AA. Overall, 108 photos were obtained for 3 distinct AA types: mild, moderate, and severe.

Figure 2 shows a few examples of scalp hair photos in 
multiple labels from the given databases. 

 




 

Fig. 2.  Scalp hair photo examples from the Dermnet and Figaro1k databases: Normal (0), Mild AA (1), Moderate AA (2), and Severe AA (3).

B. Data Preparation for AA-GAN 

A fused model space was initially defined to create the 
training dataset. At first, a bounding box was defined as the 
boundary of the model space, where the ground-truth 3D hair 
orientation volume was generated and 2D hair orientation and 
confidence maps were extracted. 

Bounding box: The model space was constrained by a bounding box defined by the bust model and each hair model of the dataset, excluding a few extremely long hairstyles. After that, the bounding box was divided into a 3D volume with a resolution of 128×128×128.

2D capture: To obtain the 2D information maps X under the defined model space, the center of the image plane was matched with the center of the bounding box. The 2D image was obtained by orthogonal projection with a scale of 1024/H, so the dimension of the obtained image was 1024×1024.

After that, the dataset was doubled by flipping all models, and constrained hairstyles such as braids and bounds were eliminated. In the considered dataset, 1050 hairstyles were available, ranging from straight to curly and from short to long. The hair around the center of the bounding box was rotated arbitrarily, with rotations ranging from -15° to 15° for the X-axis, -30° to 30° for the Y-axis, and -20° to 20° for the Z-axis. As each dataset model was composed of polygon strips, it was transformed into a dense 3D orientation volume regarded as the ground truth Y, from which hair strands were grown later. Afterward, the hair strands were rendered to the 2D image at the camera view pose. Additionally, the bust model was considered a condition for AA-GAN because hair grows on the scalp and falls around the body. The bust depth map was determined by ray tracing, pixel by pixel, to obtain the distance from the bust to the camera, and this distance was divided by D to normalize the value to [0,1]. Finally, the network input X was created using the 2D orientation map, confidence map, and bust depth map. Every 2D map was valued within [0,1], and the 3D and 2D orientation vectors were encoded in color space. For all dataset models, N pairs of X and Y were determined as training data.
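The assembly of the network input can be illustrated with a short sketch. The following Python/NumPy snippet is not the authors' code; the array names and shapes are assumptions based on the description above. It stacks the 2D orientation map, confidence map, and bust depth map into the 4-channel 1024×1024 tensor X:

```python
import numpy as np

def build_input_tensor(orientation_xy, confidence, bust_depth):
    # orientation_xy: (1024, 1024, 2) 2D orientation vectors, values in [0, 1] (RG channels)
    # confidence:     (1024, 1024)    confidence map in [0, 1] (grayscale channel)
    # bust_depth:     (1024, 1024)    bust-to-camera distance already divided by D, in [0, 1]
    x = np.concatenate(
        [orientation_xy, confidence[..., None], bust_depth[..., None]], axis=-1
    )
    assert x.shape == (1024, 1024, 4)
    return x.astype(np.float32)
```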

C. Design and Training of Alopecia Areata – Generative 
Adversarial Network 

With the 2D maps and the bust depth map captured from the input image, AA-GAN aimed to create a 3D orientation volume encoding both occupancy and orientation data to guide the AA scalp hair image augmentation. The input of the network was a 2D tensor X with a dimension of 1024×1024, consisting of 4 feature channels acquired in the unified model space: the hair orientation map (the 2D direction vector XY encoded in the RG color channels), the confidence map (the confidence value as a grayscale channel), and the bust depth map (the depth value as a grayscale channel). The output was a 3D tensor Y of dimension 128×128×96, where the hair orientation vectors were encoded in RGB.

1) Loss Functions 

The GAN is trained using a game-theoretic approach between the generator and the discriminator. The aim was to train a generator G that maps the input 2D tensor X to the required output 3D tensor $\hat{Y}$: $\hat{Y} = G(X)$. Meanwhile, the discriminator maximizes the Wasserstein-1 distance between the generator distribution of G(X) and the target distribution of Y with a conditional latent projection P(X). The objective of the discriminator is to reduce the loss:

$L_D = \mathbb{E}\big[D(G(X), P(X))\big] - \mathbb{E}\big[D(Y, P(X))\big] + \lambda\,\mathbb{E}\big[\big(\lVert \nabla_{\hat{Y}} D(\hat{Y}, P(X)) \rVert_2 - 1\big)^2\big]$  (1)

where the third term is the gradient penalty for random samples $\hat{Y}$, with $\hat{Y} \leftarrow \epsilon Y + (1-\epsilon)G(X)$ and $\epsilon$ a random number in [0,1]. The coefficient $\lambda$ was set to 10, $P(\cdot)$ is the CNN that maps the 2D tensor X into a 3D latent space to be combined with Y or $\hat{Y}$, and the parameters of $P(\cdot)$ are trained along with those of D.
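A minimal sketch of this discriminator objective, assuming a PyTorch-style implementation with a critic D(·,·) that takes the 3D volume together with the conditional projection P(X), is given below; the function and variable names are illustrative and not taken from the paper:

```python
import torch

def discriminator_loss(D, P, G, X, Y, lam=10.0):
    # Conditional latent projection of the 2D input and a detached fake volume
    PX = P(X)
    fake = G(X).detach()
    # Wasserstein terms of (1): score fakes against real target volumes Y
    loss = D(fake, PX).mean() - D(Y, PX).mean()
    # Gradient penalty on random interpolates Y_hat = eps*Y + (1 - eps)*G(X)
    eps = torch.rand(Y.size(0), *([1] * (Y.dim() - 1)), device=Y.device)
    y_hat = (eps * Y + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(y_hat, PX).sum(), y_hat, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return loss + lam * penalty   # lambda = 10 as stated above
```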
Similarly, the loss function for the generator is described by:

$L_G = -\mathbb{E}\big[D(G(X), P(X))\big]$  (2)
This function alone does not perform well in fine-tuning the generator, because the distribution difference between the actual and the counterfeit samples cannot be captured simply by the plus or minus signs. Since only selected layers of pre-learned networks are used as feature representations to transfer texture style from a source to a target image, style and content losses were adopted, where the features were defined in the domains of selected discriminator layers. So, the objective of fine-tuning the generator was to reduce the loss:

$L_G^* = \alpha L_{content} + \beta L_{style} = \alpha \sum_l L_{content}^l + \beta \sum_l L_{style}^l$  (3)

where α and β are the weighting factors. The content loss was taken as the square-error loss between feature representations:

$L_{content}^l = \frac{1}{2} \sum_{i,k} \big(f_{ik}^l(Y, P(X)) - f_{ik}^l(G(X), P(X))\big)^2$  (4)




 

where l is a selected layer, i is the i-th feature map, k is the index in the feature tensor, and f describes the discriminator features. The style loss is defined by the mean-squared distance between the Gram matrices, whose elements are computed by the inner product between the vectorized feature maps i and j: $G_{ij}^l = \sum_k f_{ik}^l f_{jk}^l$. The objective was:

$L_{style}^l = \frac{1}{2 N_l^2 M_l^2} \sum_{i,j} \big(G_{ij}^l(Y, P(X)) - G_{ij}^l(G(X), P(X))\big)^2$  (5)

where $N_l$ is the number of feature maps and $M_l$ is the size of the feature tensors in layer l.
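The content loss in (4) and the Gram-matrix style loss in (5) can be sketched as follows, assuming the per-layer discriminator features have already been extracted and vectorized to shape (batch, N_l, M_l); this is an illustrative PyTorch realization of the loss algebra, not the authors' implementation:

```python
import torch

def gram_matrix(f):
    # f: (batch, N_l, M_l) vectorized feature maps of layer l
    return torch.bmm(f, f.transpose(1, 2))            # G^l_ij = sum_k f^l_ik f^l_jk

def content_loss(f_real, f_fake):
    # Equation (4): half the squared error between feature representations
    return 0.5 * (f_real - f_fake).pow(2).sum()

def style_loss(f_real, f_fake):
    # Equation (5): mean-squared distance between Gram matrices
    _, n_l, m_l = f_real.shape
    diff = gram_matrix(f_real) - gram_matrix(f_fake)
    return diff.pow(2).sum() / (2.0 * n_l ** 2 * m_l ** 2)

def generator_finetune_loss(feats_real, feats_fake, alpha, beta):
    # Equation (3): weighted sum over the selected layers
    return alpha * sum(content_loss(r, f) for r, f in zip(feats_real, feats_fake)) + \
           beta * sum(style_loss(r, f) for r, f in zip(feats_real, feats_fake))
```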

2) Design 

Table I shows the generator and discriminator structures. The following notations were used to define the structure of the proposed AA-GAN: the input and output data of the processing units are in(resolution, feature channels) and out(resolution, feature channels), C(input channels, output channels, stride) is a convolutional layer with a ReLU activation, Ψ is the dimensional expansion layer, and ζ is the fully connected node. Also, + denotes the element-wise addition in the residual blocks created by C, and I is the input tensor of the current layer. The filter size was 5 for each 2D convolutional layer C2 and 3 for each 3D convolutional layer C3. The processing units for the X-, Y-, and Z-blocks follow similar strategies.

TABLE I.  GENERATOR AND DISCRIMINATOR STRUCTURE

Generator
    in(1024×1024, 4) ⇐ X
    C2(4,16,2) + [C2(4,8,2), C2(8,16,1)]
    C2(16,64,2) + [C2(16,32,2), C2(32,64,1)]
    C2(64,256,1) + [C2(64,128,2), C2(128,256,1)]
    I + [C2(256,256,1), C2(256,256,1)]
    out(128×128, 256)
  X-, Y-, Z-blocks:
    in(128×128, 256)
    I + [C2(256,256,1), C2(256,256,1)]
    I + [C2(256,256,1), C2(256,256,1)]
    C2(256,128,1)
    C2(128,96,1)
    Ψ
    out(128×128×96, 1)
  Concatenation of the outputs from the X-, Y-, Z-blocks:
    in(128×128×96, 3)
    I + [C3(3,3,1), C3(3,3,1)]
    I + [C3(3,3,1), C3(3,3,1)]
    out(128×128×96, 3) ⇒ Ŷ

Discriminator
  P(·) block:
    in(1024×1024, 4) ⇐ X
    C2(4,32,2)
    C2(32,64,2)
    C2(128,96,1)
    Ψ
    out(128×128×96, 1)
  Concatenation of Ŷ/Y with P(X):
    in(128×128×96, 4)
    C3(4,32,2)
    C3(32,64,2)
    C3(64,128,2)
    C3(128,256,2)
    C3(256,512,2)
    ζ

Generator: The initial block with input X, consisting of 4 residual units that element-wise add the activation from the previous layer to successive layers to obtain a residual correction from high- to low-level data, downsamples the feature maps to a latent code from 1024×1024 to 128×128, while the number of features increases from 4 to 256. After that, the X-, Y-, and Z-blocks independently encode the latent code into features with 96 channels, corresponding to the resolution along the Z-axis in the resulting volume. Then, Ψ transforms the series of 2D features into a single channel of 3D features, and the outputs from the X-, Y-, and Z-blocks are concatenated and given to the following 3D residual convolutional layers.
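One downsampling residual unit of the generator's input block, written in the C2(in, out, stride) + [C2(·), C2(·)] notation of Table I, could be realized as in the following hedged PyTorch sketch; the 5×5 filter size follows the notation above, while the padding and activation placement are assumptions:

```python
import torch.nn as nn

class ResidualDown2D(nn.Module):
    # E.g. ResidualDown2D(4, 8, 16) corresponds to C2(4,16,2) + [C2(4,8,2), C2(8,16,1)]
    def __init__(self, c_in, c_mid, c_out):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=5, stride=2, padding=2), nn.ReLU()
        )
        self.branch = nn.Sequential(
            nn.Conv2d(c_in, c_mid, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(c_mid, c_out, kernel_size=5, stride=1, padding=2), nn.ReLU(),
        )

    def forward(self, x):
        # "+" in Table I: element-wise addition of the two convolutional paths
        return self.main(x) + self.branch(x)
```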

Discriminator: Considering the correspondence between the 2D input X and the desired 3D output Ŷ/Y, the latter was concatenated with P(X), a feature map encoding X into a 3D latent space with the same resolution as Ŷ/Y. Subsequently, the concatenated 3D feature tensor was convolved by several filters up to the layer ζ to finally differentiate the actual from the counterfeit.
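A compact sketch of this conditional critic, assuming a PyTorch realization with the channel counts of Table I (kernel size 3 as stated above; the padding and the exact form of the fully connected node ζ are assumptions), is:

```python
import torch
import torch.nn as nn

class ConditionalCritic3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv3d(4, 32, 3, stride=2, padding=1), nn.ReLU(),    # C3(4,32,2)
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # C3(32,64,2)
            nn.Conv3d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # C3(64,128,2)
            nn.Conv3d(128, 256, 3, stride=2, padding=1), nn.ReLU(), # C3(128,256,2)
            nn.Conv3d(256, 512, 3, stride=2, padding=1), nn.ReLU(), # C3(256,512,2)
        )
        # Fully connected node zeta; 4x4x3 is the spatial size left from 128x128x96
        self.zeta = nn.Linear(512 * 4 * 4 * 3, 1)

    def forward(self, y, px):
        # y:  (B, 3, 128, 128, 96) candidate volume (real Y or generated Y_hat)
        # px: (B, 1, 128, 128, 96) conditional projection P(X)
        h = self.convs(torch.cat([y, px], dim=1))
        return self.zeta(h.flatten(1))
```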

3) Learning Policy 

The two-timescale update rule was applied to optimize the discriminator only once per iteration rather than many times, increasing time efficiency. The ADAM optimizer was applied with β₁ = 0 and β₂ = 0.9 for training. The learning rate was set to 0.0003 for the discriminator and 0.0001 for the generator. The proposed AA-GAN was designed to create a 128×128×96 3D volume encoding both the occupancy and orientation fields, utilizing 2D maps with a size of 1024×1024 as input. The batch size for learning was set to 5. For the generator objective, the content and style weighting factors were set as α = 1e−2 and β = 5e+2.
The selected layers for the content loss were l = 0, 3, 6, and l = 0, 1, 2, 3, 4 for the style loss. If l = 0, P(X) was eliminated from $L_{content}^0$ and $L_{style}^0$. Thus, by training the AA-GAN, more training images were generated and used to train the AB-MTEDeep classifier model. The trained classifier was applied to classify the test images into the mild, moderate, and severe AA classes.
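The optimizer configuration implied by this learning policy could look as follows in PyTorch, where `generator` and `discriminator` stand for the trained networks (placeholder names, not from the paper):

```python
import torch

# Only the hyperparameters below follow the learning policy described above.
opt_D = torch.optim.Adam(discriminator.parameters(), lr=3e-4, betas=(0.0, 0.9))
opt_G = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
# Two-timescale update rule: one discriminator update per generator update
# (no inner critic loop), with mini-batches of 5 (X, Y) pairs.
```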

IV. EXPERIMENTAL RESULTS 

The effectiveness of the AA-GAN-AB-MTEDeep model was assessed and compared with that of existing models by implementing them in MATLAB 2017b using the Figaro1k and Dermnet databases. Of the images collected, 70% were used for training and the remaining 30% were used for testing. The
considered existing models were the AB-MTEDeep [16], 
RICAP-CNN [17], EF-GAN-VGG16 [19], and IAGAN [22], 
which were applied to the Figaro1k and Dermnet databases for 
the AA classification, using the same proportions for the 
training and testing of the models. Figure 3 shows the generator 
and discriminator loss curves in the AA-GAN model, while 
Figure 4 depicts the training progress of the AA-GAN-AB-
MTEDeep model for AA classification. Table II presents the 
confusion matrix for the AA-GAN-AB-MTEDeep model for 
testing, and Table III presents the performance results of the 
models for AA classification. 
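For reference, metrics of this kind can be derived from a confusion matrix such as Table II. The following sketch computes per-class precision and recall, the F-measure, and the overall accuracy; macro averaging over the classes is an assumption about how summary values are obtained and is shown only for illustration:

```python
import numpy as np

# Confusion matrix of Table II (rows: actual class, columns: classified class)
cm = np.array([[29, 0, 1, 0],
               [ 0, 7, 0, 0],
               [ 1, 0, 9, 0],
               [ 0, 0, 0, 9]], dtype=float)

tp = np.diag(cm)
precision = tp / cm.sum(axis=0)                        # per-class precision
recall = tp / cm.sum(axis=1)                           # per-class recall
f_measure = 2 * precision * recall / (precision + recall)
accuracy = tp.sum() / cm.sum()
print(precision.mean(), recall.mean(), f_measure.mean(), accuracy)
```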

Figure 5 illustrates the accuracy values of various data 
augmentations with classification models on the Dermnet and 
Figaro1k databases. The accuracy of the AA-GAN-AB-



Engineering, Technology & Applied Science Research Vol. 13, No. 3, 2023, 10895-10900 10899  
 

www.etasr.com Saraswathi & Pushpa: The AB-MTEDeep Classifier with AGAN for the Identification and … 

 

MTEDeep was 17.6% higher than RICAP-CNN's, 14% higher 
than the EF-GAN-VGG16's, 8.6% higher than the IAGAN's, 
and 1.9% higher than that of the AB-MTEDeep model. This is 
due to the augmentation of the number of training images to 
create an effective classification model. 

 

 

Fig. 3.  Loss curve for generator and discriminator during training. 

 
Fig. 4.  Training progress of the AA-GAN-AB-MTEDeep model for AA 

classification (training accuracy curve and loss curve). 

TABLE II.  CONFUSION MATRIX OF THE AA-GAN-AB-MTEDEEP TESTING

Actual \ Classified    0    1    2    3
0                     29    0    1    0
1                      0    7    0    0
2                      1    0    9    0
3                      0    0    0    9

TABLE III.  PERFORMANCE ANALYSIS FOR THE AA CLASSIFICATION MODELS ON FIGARO1K AND DERMNET

Metrics         RICAP-CNN   EF-GAN-VGG16   IAGAN    AB-MTEDeep   AA-GAN-AB-MTEDeep
Precision (%)     81.48        84.05       88.19       94.06           95.8
Recall (%)        82.22        84.68       88.46       95.3            96.8
F-measure (%)     81.85        84.365      88.325      94.68           96.3
Accuracy (%)      82.31        84.86       89.22       95.11           96.94

 

 

Fig. 5.  Accuracy of AA-GAN-AB-MTEDeep and existing models. 

V. CONCLUSION 

This study presented the AA-GAN model designed to 
generate a large number of training images for AA 
classification. In this model, the structure and loss functions 
were similar to those of the standard GAN, which involves the 
generator and the discriminator network. The generator 
network was trained to create high-quality images based on the 
AA structure by retrieving the 2D orientation and confidence 
maps, along with the bust depth map from the original hair and 
scalp images. The discriminator was trained to distinguish original from synthetic images, and its output was given as feedback to the generator to minimize its error. Furthermore, the generated synthetic images were used to train the AB-MTEDeep model for AA classification. The test results showed that the AA-GAN-AB-MTEDeep had a 96.94% accuracy, which was higher than that of the other GAN variants combined with the AB-MTEDeep model for classifying AA and scalp conditions. In the future, hybrid deep learning models with
hyperparameter optimizers can be developed to improve AA 
classification performance. 

REFERENCES 

[1] K. York, N. Meah, B. Bhoyrul, and R. Sinclair, "A review of the 
treatment of male pattern hair loss," Expert Opinion on 
Pharmacotherapy, vol. 21, no. 5, pp. 603–612, Mar. 2020, 
https://doi.org/10.1080/14656566.2020.1721463. 

[2] A. Ahmad, F. Khatoon, B. Khan, M. Mohsin, Lucknow, and A. Aligarh, 
"A Critical Review of Daus-Sadaf (Psoriasis): Unani & Modern 
Perspectives," International Journal of Creative Research Thoughts, vol. 
8, no. 7, pp. 4570–4582, Jul. 2020, https://doi.org/10.13140/RG.2. 
2.25897.83040. 

[3] D. I. Conde Hurtado, J. I. Vergara Rueda, J. L. Bermudez Florez, S. C. 
Cadena Infante, and A. J. Rodriguez Morales, "Potential Dermatological 
Conditions Resulting from a Prolonged Stay at Home during the 
COVID-19 Pandemic: A Review," Acta dermatovenerologica Croatica: 
ADC, vol. 29, no. 3, pp. 135–147, Dec. 2021. 

[4] M. J. Adelman, L. M. Bedford, and G. A. Potts, "Clinical efficacy of 
popular oral hair growth supplement ingredients," International Journal 
of Dermatology, vol. 60, no. 10, pp. 1199–1210, Oct. 2021, 
https://doi.org/10.1111/ijd.15344. 

[5] A. Egger, M. Tomic-Canic, and A. Tosti, "Advances in Stem Cell-Based 
Therapy for Hair Loss," CellR4-- repair, replacement, regeneration, & 
reprogramming, vol. 8, Sep. 2020, Art. no. e2894. 

[6] F. Diotallevi, O. Simonetti, G. Rizzetto, E. Molinelli, G. Radi, and A. 
Offidani, "Biological Treatments for Pediatric Psoriasis: State of the Art 
and Future Perspectives," International Journal of Molecular Sciences, 
vol. 23, no. 19, Jan. 2022, Art. no. 11128, https://doi.org/10.3390/ 
ijms231911128. 

[7] L. C. Coates et al., "Group for Research and Assessment of Psoriasis and 
Psoriatic Arthritis (GRAPPA): updated treatment recommendations for psoriatic arthritis 2021," Nature Reviews Rheumatology, vol. 18, no. 8,
pp. 465–479, Aug. 2022, https://doi.org/10.1038/s41584-022-00798-0. 

[8] M. Sreenatha and P. B. Mallikarjuna, "A Fault Diagnosis Technique for 
Wind Turbine Gearbox: An Approach using Optimized BLSTM Neural 
Network with Undercomplete Autoencoder," Engineering, Technology 
& Applied Science Research, vol. 13, no. 1, pp. 10170–10174, Feb. 
2023, https://doi.org/10.48084/etasr.5595. 

[9] H. Reffad, A. Alti, and A. Almuhirat, "A Dynamic Adaptive Bio-
Inspired Multi-Agent System for Healthcare Task Deployment," 
Engineering, Technology & Applied Science Research, vol. 13, no. 1, 
pp. 10192–10198, Feb. 2023, https://doi.org/10.48084/etasr.5570. 

[10] V. C. Ho, T. H. Nguyen, T. Q. Nguyen, and D. D. Nguyen, "Application 
of Neural Networks for the Estimation of the Shear Strength of Circular 
RC Columns," Engineering, Technology & Applied Science Research, 
vol. 12, no. 6, pp. 9409–9413, Dec. 2022, https://doi.org/10.48084/ 
etasr.5245. 

[11] W.-J. Chang, L.-B. Chen, M.-C. Chen, Y.-C. Chiu, and J.-Y. Lin, 
"ScalpEye: A Deep Learning-Based Scalp Hair Inspection and Diagnosis 
System for Scalp Health," IEEE Access, vol. 8, pp. 134826–134837, 
2020, https://doi.org/10.1109/ACCESS.2020.3010847. 

[12] F. Zucchelli, N. Sharratt, K. Montgomery, and J. Chambers, "Men’s 
experiences of alopecia areata: A qualitative study," Health Psychology 
Open, vol. 9, no. 2, Jul. 2022, Art. no. 205510292211215, 
https://doi.org/10.1177/20551029221121524. 

[13] A. Alessandrini, F. Bruni, B. M. Piraccini, and M. Starace, "Common
causes of hair loss – clinical manifestations, trichoscopy and therapy," 
Journal of the European Academy of Dermatology and Venereology, 
vol. 35, no. 3, pp. 629–640, 2021, https://doi.org/10.1111/jdv.17079. 

[14] A. Elder, C. Ring, K. Heitmiller, Z. Gabriel, and N. Saedi, "The role of 
artificial intelligence in cosmetic dermatology—Current, upcoming, and 
future trends," Journal of Cosmetic Dermatology, vol. 20, no. 1, pp. 48–
52, 2021, https://doi.org/10.1111/jocd.13797. 

[15] C. Saraswathi and B. Pushpa, "Computer Imaging of Alopecia Areata and
Scalp Detection: A Survey," International Journal of Engineering 
Trends and Technology, vol. 70, no. 8, pp. 347-358, 2022, 
https://doi.org/10.14445/22315381/IJETT-V70I8P236. 

[16] C. Saraswathi and B. Pushpa, "Machine Learning Algorithm for 
Classification of Alopecia Areata from Human Scalp Hair Images," in 
Computational Vision and Bio-Inspired Computing, Singapore, 2023, 
pp. 269–288, https://doi.org/10.1007/978-981-19-9819-5_21. 

[17] R. Takahashi, T. Matsubara, and K. Uehara, "Data Augmentation Using 
Random Image Cropping and Patching for Deep CNNs," IEEE 
Transactions on Circuits and Systems for Video Technology, vol. 30, no. 
9, pp. 2917–2931, Sep. 2020, https://doi.org/10.1109/TCSVT. 
2019.2935128. 

[18] Z. Qin, Z. Liu, P. Zhu, and Y. Xue, "A GAN-based image synthesis 
method for skin lesion classification," Computer Methods and Programs 
in Biomedicine, vol. 195, Oct. 2020, Art. no. 105568, https://doi.org/ 
10.1016/j.cmpb.2020.105568. 

[19] H. Xu et al., "An Enhanced Framework of Generative Adversarial 
Networks (EF-GANs) for Environmental Microorganism Image 
Augmentation With Limited Rotation-Invariant Training Data," IEEE 
Access, vol. 8, pp. 187455–187469, 2020, https://doi.org/10.1109/ 
ACCESS.2020.3031059. 

[20] I. S. A. Abdelhalim, M. F. Mohamed, and Y. B. Mahdy, "Data 
augmentation for skin lesion using self-attention based progressive 
generative adversarial network," Expert Systems with Applications, vol. 
165, Mar. 2021, Art. no. 113922, https://doi.org/10.1016/j.eswa.2020.113922.

[21] K. Guo, T. Luo, M. Z. A. Bhuiyan, S. Ren, J. Zhang, and D. Zhou, "Zero 
shot augmentation learning in internet of biometric things for health 
signal processing," Pattern Recognition Letters, vol. 146, pp. 142–149, 
Jun. 2021, https://doi.org/10.1016/j.patrec.2021.03.012. 

[22] S. Motamed, P. Rogalla, and F. Khalvati, "Data augmentation using 
Generative Adversarial Networks (GANs) for GAN-based detection of 
Pneumonia and COVID-19 in chest X-ray images," Informatics in 
Medicine Unlocked, vol. 27, Jan. 2021, Art. no. 100779, https://doi.org/ 
10.1016/j.imu.2021.100779. 

[23] C. Chen et al., "Enhancing MR image segmentation with realistic 
adversarial data augmentation," Medical Image Analysis, vol. 82, Nov. 
2022, Art. no. 102597, https://doi.org/10.1016/j.media.2022.102597. 

[24] S. Nesteruk et al., "XtremeAugment: Getting More From Your Data 
Through Combination of Image Collection and Image Augmentation," 
IEEE Access, vol. 10, pp. 24010–24028, 2022, https://doi.org/10.1109/ 
ACCESS.2022.3154709. 

[25] "Figaro 1K," Figaro 1K | share Your Project. http://projects.i-ctm.eu/it/ 
progetto/figaro-1k. 

[26] "DermNet skin disease atlas," DermNet | Dermatology Resource. 
https://dermnet.com.