International Journal of Renewable Energy Development, 2023, 12(3), 478-487 | https://doi.org/10.14710/ijred.2023.49972 | ISSN: 2252-4940 | © 2023 The Author(s). Published by CBIORE.
Journal homepage: https://ijred.undip.ac.id

Research Article

Prediction of the output power of photovoltaic module using artificial neural networks model with optimizing the neurons number

Abdulrahman Th. Mohammad 1,*, Hasanen M. Hussen 2, Hussein J. Akeiber 3

1 Middle Technical University (MTU), Baqubah Technical Institute, Renewable Energy Department, Baghdad, 10074, Iraq
2 Ministry of Higher Education and Scientific Research, Department of Research and Development, Baghdad, 10074, Iraq
3 Ministry of Interior, Directorate of Arab and International Cooperation, Department of Educational Affairs, Baghdad, Iraq
* Corresponding author. Email: abd20091976@gmail.com (A. Th. Mohammad)

Abstract. Artificial neural networks (ANNs) are adaptive systems that can predict the relationship between input and output parameters without defining the physical and operating conditions. In this study, some common questions about the ANN methodology are clarified, especially regarding the number of neurons and its relationship with the input and output parameters. In addition, two ANN models are developed in MATLAB to predict the power production of a polycrystalline PV module under the real weather conditions of Iraq. The ANN models are then used to optimize the number of neurons in the hidden layers. The capability of the ANN models has been tested under the impact of several weather and operational parameters. In this regard, six variables are used as input parameters: ambient temperature, solar irradiance and wind speed (the weather conditions), and module temperature, short-circuit current and open-circuit voltage (the characteristics of the PV module). According to the performance analysis of the ANN models, the optimal number of neurons is 15 in a single hidden layer, with a minimum Root Mean Squared Error (RMSE) of 2.76%, and 10 in double hidden layers, with an RMSE of 1.97%. Accordingly, it can be concluded that the double hidden layers provide higher accuracy than the single hidden layer. Moreover, the ANN model has proven its accuracy in predicting the current and voltage of the PV module.

Keywords: Photovoltaic, Power production, Artificial neural networks, Neurons

© The author(s). Published by CBIORE. This is an open access article under the CC BY-SA license (http://creativecommons.org/licenses/by-sa/4.0/).
Received: 31st Oct 2022; Revised: 27th January 2023; Accepted: 16th Feb 2023; Available online: 30th March 2023

1. Introduction

Traditional statistical techniques are useful for estimating the behaviour of linear systems in different engineering disciplines. The presence of nonlinearity in some applications, however, makes traditional statistical techniques inefficient for predicting the relationship between input and output parameters (Nayak et al., 2017). Among the soft computing methodologies, artificial neural networks (ANNs) have recently been widely used to predict, optimize and classify the behaviour of many problems in our life (Abiodun et al., 2018). ANNs are machine learning (ML) models that have the ability to mimic basic biological neural systems, especially the human brain (Mubiru, 2011). ANNs can be classified into two main algorithms: the Feed-Forward Neural Network (FFNN) and the Feed-Backward Neural Network (FBNN) (Hertz, 2018).
The FFNN is a classification algorithm in which each neuron in a layer connects to the neurons in the other layers through weighted links (Abiodun et al., 2018). The weight is an indicator of the amount of knowledge stored in the network. In the FFNN, the information is transmitted in one direction, from the input layer to the output layer through the hidden layer. When the network operates normally and acts as a classifier, the backward (FBNN) process between the layers is not necessary (Hagan and Menhaj, 1994). The FBNN denotes the back-propagation training algorithm, which constructs an ordered graph from the connections between the neurons and can be used to minimize the loss function by adjusting or correcting the weights.

In general, the simple architecture of an ANN consists of input, hidden and output layers (Mohammad et al., 2020; Mohammad et al., 2013), composed of a number of interconnected elements called neurons. Each neuron receives an input signal from an external process or from another neuron. The output signal of each neuron is produced by a transfer function and passes to other neurons or to the external outputs (Zhang et al., 1998). The following are the key questions that numerous researchers have addressed:
a. What are the appropriate number of hidden layers and number of neurons?
b. Is there a relationship between the number of neurons and the input and output parameters?
c. Are there other parameters related to determining the number of neurons?

According to Zhang et al. (1998), the accuracy of ANNs is mostly influenced by the number of hidden layers and their neurons. Cybenko (1989) demonstrated that a single hidden layer is sufficient to achieve any desired accuracy in any complex nonlinear problem. However, this may require a large number of neurons, which is undesirable because it increases the training time and leads to poor generalization ability of the ANN. Several researchers have shown the advantages of double hidden layers over a single hidden layer in ANNs. According to Barron (1994), double hidden layers provide greater benefits in some cases. Srinivasan et al. (1994) found that higher efficiency in the training stage can be achieved using double hidden layers. In addition, Zhang (1994) demonstrated that high prediction accuracy can be achieved using double hidden layers. On the other hand, some other researchers demonstrated a relationship between the number of input parameters and the number of neurons in the hidden layers and introduced heuristic constraints on the number of neurons (Lachtermacher, 1995).
Kang (1991), Tang and Fishwick (1993), Wong (1991) and Lippmann (1987), and Hecht-Nielsen (1990) proposed relationships between the number of neurons and the number of input parameters, mathematically expressed as $j = n/2$, $j = n$, $j = 2n$ and $j = 2n+1$, respectively, where $j$ is the number of neurons and $n$ is the number of input parameters. Other researchers related the number of neurons to both the input and output parameters. For instance, Kalogirou et al. (1996) proposed that the neurons are set according to the formulas $j = \frac{2}{3}(n+o)$ and $j = \frac{3}{2}(n+o)$. Moreover, Mohanraj (2009) proposed that the number of neurons is set according to $j = \sqrt{n+o} + a$, where $a$ is a constant between 1 and 5 and $o$ is the number of output parameters. In this regard, the number of training data also plays an influential role in determining the number of neurons, mathematically expressed as $j = \frac{n+o}{2} + \sqrt{\text{number of training data points}}$ (Kalogirou, 1996; Kalogirou et al., 1997, 1998; Liu et al., 2017; Shaft et al., 2006).

A critical analysis of the above studies reveals a shortcoming of the conducted research: the number of neurons in the hidden layer was selected randomly or by trial and error. Furthermore, the compatibility between the number of hidden layers and the number of neurons has not been thoroughly addressed and discussed. Therefore, this research intends to optimize the number of neurons in single and double hidden layers in order to predict the output power of a PV module. To substantiate the contribution of this research, the results obtained with the optimal number of neurons are compared against the neuron numbers given by the mathematical formulas of other researchers. In this regard, the comparison between the performance of the single and double hidden layers is presented for the optimum number of neurons.

2. Related works

Photovoltaic (PV) solar cell technology has become widely used to generate electrical power over the last two decades. Generally, the output characteristics of PV modules can be represented by current-voltage (I-V) and power-voltage (P-V) curves (Singh and Ravindra, 2012). These characteristics are influenced by ambient conditions such as solar radiation, ambient temperature, dust and wind speed (Ziane, 2021; Kidegho et al., 2021). The relationship between the ambient-condition input parameters and the PV output is a complex nonlinear system. In the literature, numerous studies have analysed and estimated the output characteristics and performance of PV using experimental, analytical and numerical models. Among these models, machine learning using ANNs has proven to be an effective approach to predict the output electrical characteristics of PV modules. This section focuses on the most important aspects that can be used to predict the output power of a PV module using the ANN methodology.

Barhdadi et al. (2019) used an ANN model with the Levenberg-Marquardt back-propagation algorithm to predict the output power of a PV module. The ANN structure has six input parameters and a single hidden layer whose number of neurons was varied from 5 to 35. The results showed that 35 neurons achieved the best prediction in the ANN model.
Two ANN topologies, feed-forward and radial basis, were investigated by Gaur et al. (2018) to predict the performance of five PV module technologies under the influence of solar irradiance and temperature. In each topology, single and multiple hidden layers with 10 and 5 neurons were trained and tested using built-in functions and Levenberg-Marquardt with resilient back-propagation, respectively. The results of the proposed ANN models indicated mean bias error deviations of less than 1% compared with the dependent models. In the same context, Di-Falco et al. (2014) used three ANN topologies, a multilayer perceptron, a recursive neural network and a gamma memory network, trained to forecast the production power of a PV module under the influence of ambient conditions. Additionally, the module temperature, open-circuit voltage (Voc) and short-circuit current (Isc) were used as input parameters of the ANN model. The results showed that the error between the predicted and real power of the PV module ranged from 0.05 to 1%. Two neural network architectures were used by Enachescu et al. (2016) to forecast the production power of a PV module. The first architecture is a multilayer perceptron with back-propagation and two neuron counts (15 and 100). The second architecture is an Elman network with feed-forward connections. According to the obtained results, the Elman type performed better than the multilayer perceptron in the learning stage with small data sets. The multilayer perceptron (MLP) topology was used by Jumaat et al. (2018) to predict the maximum voltage (Vm) and current (Im) of a PV module. The ANN structure contains seven input parameters (solar irradiance, ambient temperature, relative humidity, humidity ratio, module temperature, Voc and Isc). The ANN architecture was built with a single hidden layer of 1 to 10 neurons and two output parameters. The results showed that the ANN model predicts the Vm and Im of the PV module with high accuracy. Kayri and Gencoglu (2019) employed the feed-forward ANN topology with a back-propagation algorithm to predict the output power of a mono-crystalline silicon PV module. Six weather input parameters were considered, including the solar irradiance, solar elevation angle, ambient temperature, wind speed, wind direction and relative humidity, besides two hidden layers to build the ANN structure. The comparison between the estimated and measured results showed that the maximum mean square error did not exceed 1.4% and the coefficient of determination (R2) ranged between 99.637 and 99.998%. A simple ANN structure was proposed by Mellit et al. (2013) to estimate the output power of a 50 Wp PV module in Turkey. The ANN model relies on input data measured over one year, including the solar irradiance, air temperature and output power on cloudy and sunny days. The ANN structure considered a single hidden layer with one neuron. The results showed that the model for sunny days is more accurate than the model for cloudy days: the determination coefficient ranged between 93% and 97% for cloudy days and between 96% and 97% for sunny days.

3. Methods and ANN applications

A polycrystalline PV module of type FRS-165W was selected and installed at the centre of Middle Technical University, Baghdad, Iraq, to carry out the experimental tests and collect the dataset.
The PV module has the following technical specifications: Pmax = 165 Wp, Isc = 9.81 A, Voc = 22.05 V, Imp = 9.17 A and Vmp = 18 V. An I-V tracer (SEAWARD PV200) was used to measure the output electrical characteristics of the PV module, including Voc, Isc, Vmp and Imp. In addition, a solar meter (Survey 200R) unit was used and synchronized with the I-V tracer to measure the ambient temperature, solar radiation and back-surface temperature of the PV module. A handheld anemometer was also used to measure the wind speed (Va). In total, 326 measured data points were obtained over the test period. These data were stored and reported by the solar data logger and displayed by the SolarCert software.

ANNs have been used effectively in photovoltaics to solve various problems, for example, the effects of atmospheric variables on the production power of PV modules. An ANN is defined as a mapping system that can represent a nonlinear relationship between the input and output parameters (Mellit et al., 2013). In general, the simple structure of an ANN consists of three layers: an input layer, a hidden layer and an output layer (Figure 1) (Pontes et al., 2012).

Fig. 1 Basic design of ANN structure.

The relationship between the input and output layers can be represented mathematically as:

$O_j = f\left(\sum_{i=1}^{m} W_{j,i} X_i + b_i\right)$  (1)

where $W$, $X$ and $b$ are the weight, input parameter and bias, respectively, and $f$ is the activation function. The output of neuron $k$ in the output layer can be expressed as:

$y_k = f\left(\sum_{j=1}^{n} W_{k,j} O_j\right)$  (2)

The relationship between the input parameters and $y_k$ is represented in Eq. (3) (MacKay, 1992):

$y_k = f\left(\sum_{j=1}^{n} W_{k,j}\, f\left(\sum_{i=1}^{m} W_{j,i} X_i + b_i\right)\right)$  (3)

The training process adjusts the connection weights to keep the predicted output $y_k$ as close as possible to the desired output $\bar{y}_k$ for the given input parameters $X_i$. The error function between the predicted and desired outputs is therefore minimized (Elsheikh et al., 2019; Raj et al., 2019):

$\min \; \frac{1}{2}\sum_{k=1}^{l}\left(\bar{y}_k - y_k\right)^2$  (4)

The transfer function is used to generate and send the signals between the layers. There are three popular types of transfer functions: the linear, sigmoid and hyperbolic tangent transfer functions (Mellit et al., 2013). These can be represented mathematically as:

$f(S) = \begin{cases} S & \text{linear function} \\ \dfrac{1}{1+e^{-S}} & \text{sigmoid function} \\ \dfrac{e^{S}-e^{-S}}{e^{S}+e^{-S}} & \text{hyperbolic tangent function} \end{cases}$  (5)

The sigmoid transfer function is considered the best choice among these transfer functions (Mellit et al., 2013); therefore, the current study uses it as the activation function.
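To make the notation above concrete, the following minimal MATLAB sketch evaluates the forward pass of Eqs. (1)-(3) for a single hidden layer with the sigmoid activation of Eq. (5) and a linear output neuron. The weight values and layer size shown are illustrative placeholders only, not parameters of the trained model.

```matlab
% Minimal sketch of the forward pass in Eqs. (1)-(3).
% All numerical values are illustrative placeholders.
m  = 6;                        % inputs: G, Ta, Va, Tc, Voc, Isc
nh = 15;                       % hidden neurons (example size)
X  = rand(m, 1);               % one normalized input pattern (cf. Eq. (11))
W1 = randn(nh, m); b1 = randn(nh, 1);   % hidden-layer weights and biases
W2 = randn(1, nh);                      % output-layer weights

sigmoid = @(S) 1 ./ (1 + exp(-S));      % sigmoid transfer function, Eq. (5)

Oj = sigmoid(W1*X + b1);       % hidden-layer outputs, Eq. (1)
yk = W2*Oj;                    % predicted output power, Eqs. (2)-(3)
```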
With the sigmoid activation, the error can be expressed as:

$E = \frac{1}{2}\left(\bar{y}_k - \frac{1}{1+e^{-S}}\right)^2$  (6)

$E = \frac{1}{2}\left(\bar{y}_k - \frac{1}{1+e^{-\sum_{i=1}^{m} W_{j,i} X_i + b_i}}\right)^2$  (7)

The chain rule is applied to calculate $\partial E / \partial W$ as follows:

$\frac{\partial E}{\partial W} = \frac{\partial E}{\partial y_k} \times \frac{\partial y_k}{\partial S} \times \frac{\partial S}{\partial W}$  (8)

According to the derivative of each component, the final change of the error with respect to the weight can be represented as:

$\frac{\partial E}{\partial W} = \left(\bar{y}_k - y_k\right)\,\frac{1}{1+e^{-S}}\left(1 - \frac{1}{1+e^{-S}}\right) X$  (9)

Therefore, if the predicted output $y_k$ is not close to the desired output $\bar{y}_k$, the weights must be adapted (Rumelhart et al., 1986):

$W_{new} = W_{old} - \eta \frac{\partial E}{\partial W}$  (10)

where $\eta$ is the learning rate ($0 \le \eta \le 1$). The quantitative input variables are normalized to a standard range such as [0, 1] or [-1, 1] before the training and testing processes begin (Mohammad et al., 2013). The normalization is carried out according to (Sanjay et al., 2006):

$X_i = \frac{0.8}{d_{max} - d_{min}}\left(d_i - d_{min}\right) + 0.1$  (11)

where $d_{max}$, $d_{min}$ and $d_i$ are the maximum, minimum and $i$-th values of the desired input/output data, respectively.

3.1. Neurons number

There is no exact mathematical formula that determines the number of neurons in the hidden and output layers. Therefore, most researchers have relied on a trial-and-error approach to select the number of neurons. In addition, some of them suggested a relationship between the number of neurons and the input parameters. For example, Kang (1991), Tang and Fishwick (1993), Wong (1991), Lippmann (1987) and Hecht-Nielsen (1990) suggested that the number of neurons can be specified from the number of input parameters:

$j = \begin{cases} n/2 \\ n \\ 2n \\ 2n+1 \end{cases}$  (12)

where $j$ and $n$ are the number of neurons and the number of input parameters, respectively. Other researchers showed that the number of neurons can be specified from the input and output parameters. Mohanraj (2009) proposed the following relationship:

$j = \sqrt{n+o} + a$  (13)

Kalogirou et al. (1996) also introduced Eq. (14) to identify the number of neurons:

$j = \begin{cases} \frac{2}{3}(n+o) \\ \frac{3}{2}(n+o) \end{cases}$  (14)

where $o$ is the number of output parameters and $a$ is a constant between 1 and 5. Kalogirou (1996), Kalogirou et al. (1997), Kalogirou et al. (1998), Liu et al. (2017) and Shaft et al. (2006) stated that the number of training data also plays a key role in determining the number of neurons, besides the input and output parameters. This is represented in Eq. (15):

$j = \frac{n+o}{2} + \sqrt{\text{number of training data points}}$  (15)
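As a worked example under the conditions of this study (n = 6 inputs, o = 1 output), the candidate neuron counts given by Eqs. (12)-(15) can be evaluated directly in MATLAB. Rounding to the nearest integer and the use of the 70% training split (228 of the 326 measured patterns) in Eq. (15) are assumptions made here for illustration; the resulting values correspond to the network structures compared later in Table 2.

```matlab
% Candidate neuron counts from the heuristics of Eqs. (12)-(15),
% evaluated for n = 6 inputs and o = 1 output.
n = 6;  o = 1;
nTrain = round(0.70*326);                 % assumed 70 % training split (228 patterns)

j_eq12 = [n/2, n, 2*n, 2*n+1]             % Eq. (12): 3, 6, 12, 13
j_eq13 = round(sqrt(n+o) + (1:5))         % Eq. (13): 4, 5, 6, 7, 8 for a = 1..5
j_eq14 = round([2/3, 3/2]*(n+o))          % Eq. (14): 5, 11
j_eq15 = round((n+o)/2 + sqrt(nTrain))    % Eq. (15): 19
```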
3.2. Structure of proposed ANN

In the current study, two ANN architectures were proposed. The first is an ANN structure with a single hidden layer; the second has double hidden layers. The 326 measured data sets were used to run the ANN models. The block diagram of the overlap between the ANN model and the experimental setup is depicted in Figure 2. The minimum and maximum values of the measured parameters are summarized in Table 1.

Fig. 2 Concept of the overlap between the ANN model and experimental setup.

Table 1 Minimum and maximum values of measured data.
Parameters               Symbol   Unit    Min     Max
Input parameters
  Solar irradiance       G        W/m2    169.2   1003
  Ambient temperature    Ta       °C      32      51
  Wind speed             Va       m/s     0.3     2.5
  Module temperature     Tc       °C      32.6    69.1
  Open circuit voltage   Voc      V       19.6    20.6
  Short circuit current  Isc      A       4.116   9.1
Output parameter
  Output power           P        W       0       140

The characteristics of the ANN model can be summarized as follows:
• Six nodes are used in the input layer (solar irradiance, ambient temperature, wind speed, module temperature, open-circuit voltage and short-circuit current) and one node in the output layer (output power).
• The model uses the Levenberg-Marquardt back-propagation technique in the training stage.
• 70% of the dataset was used for training, 15% for testing and 15% for validation.
• Single and double hidden layers were evaluated in the ANN model.
• The sigmoid and purelin functions were used as the activation functions in the hidden and output layers, respectively.
• The number of neurons was varied from 1 to 100.
• The optimal number of neurons was selected using the RMSE of the validation and training stages (a minimal sketch of this search is given after this list).
• The Mean Squared Error (MSE) and the coefficient of determination (R2) are used to measure the effectiveness of the ANN model.
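The neuron-number search can be organized as in the following MATLAB sketch. This is a minimal illustration rather than the authors' original code: it assumes the Deep Learning (Neural Network) Toolbox, a 6-by-326 input matrix X normalized per Eq. (11) and a 1-by-326 target vector T of measured power, and it scores each candidate hidden-layer size by the validation-stage RMSE, as described in Section 4.2.

```matlab
% Minimal sketch of the neuron-number search for the single-hidden-layer model.
% X: 6-by-326 matrix of normalized inputs (Eq. 11); T: 1-by-326 measured power.
bestRMSE = Inf;  bestN = NaN;
for nNeurons = 1:100
    net = feedforwardnet(nNeurons, 'trainlm');    % Levenberg-Marquardt back-propagation
    net.layers{1}.transferFcn = 'logsig';         % sigmoid hidden layer
    net.layers{2}.transferFcn = 'purelin';        % linear output layer
    net.divideParam.trainRatio = 0.70;            % 70 % training
    net.divideParam.valRatio   = 0.15;            % 15 % validation
    net.divideParam.testRatio  = 0.15;            % 15 % testing
    net.trainParam.showWindow  = false;
    [net, tr] = train(net, X, T);
    Yval = net(X(:, tr.valInd));                  % predictions on the validation subset
    rmse = sqrt(mean((T(tr.valInd) - Yval).^2));  % validation RMSE
    if rmse < bestRMSE
        bestRMSE = rmse;  bestN = nNeurons;
    end
end
fprintf('Optimal number of neurons: %d (validation RMSE = %.3f)\n', bestN, bestRMSE)
```

For the double-hidden-layer model, the same loop applies with feedforwardnet([nNeurons nNeurons], 'trainlm') and the corresponding layer indices. How the reported RMSE values are expressed as percentages is not detailed in the paper, so the sketch simply returns the RMSE in the units of the target vector.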
4. Results and discussions

In this study, two ANN architectures with single and double hidden layers were used to predict the production power of a PV module under the real weather conditions of Iraq. The capability of the ANN models was evaluated under the impact of several weather and operational parameters. In total, six variables were used as input parameters: three related to the weather conditions (ambient temperature, solar irradiance and wind speed) and three related to the characteristics of the PV module (module temperature, short-circuit current and open-circuit voltage).

4.1. Visualization of weather and operational parameters

A scatter plot visualization technique was used to represent the data graphically, as shown in Figure 3. The main goal of the visualization process is to gain insight into the data. The measured output power of the PV module is plotted against the solar irradiance, ambient temperature, module temperature, wind speed, open-circuit voltage and short-circuit current. It is clear that the power increases linearly with the irradiance and the short-circuit current, while the linearity decreases with the wind speed. However, the relationship appears random with the ambient temperature, module temperature and open-circuit voltage.

Fig. 3 Visualization of measured data with power production of PV module.

4.2. Optimization of neurons number

Two MATLAB codes were used to optimize the number of neurons in the single and double hidden layers. The number of neurons was varied from 1 to 100 and evaluated according to the minimum Root Mean Squared Error (RMSE) in the validation stage, together with the minimum difference in RMSE between the training and validation stages. Figure 4 shows that the best number of neurons is 15 in the single hidden layer, with an RMSE of 2.76%, and 10 in the double hidden layers, with an RMSE of 1.97%. Furthermore, the difference between the RMSE of the validation and training stages was 0.18% and 0.21% for the single and double hidden layers, respectively.

Fig. 4 Optimal neurons number (a) single hidden layer (b) double hidden layers.

Generally, the optimal number of neurons in each of the single and double hidden layers showed a better RMSE than the neuron numbers proposed by other researchers (Kang, 1991; Liu et al., 2017; Shaft et al., 2006; H-Nielsen, 1990; Mohanraj et al., 2009; Kalogirou et al., 1996, 1997, 1998; Kalogirou, 1996; Wong, 1991; Lippmann, 1987). The comparison of RMSE in the single hidden layer also shows that 5 neurons (according to the formulas of Mohanraj et al., 2009 and Kalogirou et al., 1996) give a lower RMSE of 2.4% in the validation stage, but with a difference of about 2.31% from the training stage. In addition, 7 neurons (according to the formula of Mohanraj et al., 2009) give the same RMSE as the current study, but with a difference of about 0.36% from the training stage. However, 5 neurons in the double hidden layers (according to Mohanraj et al., 2009 and Kalogirou et al., 1996) recorded the same RMSE as the current study, but with a difference of about 2.94%. In more detail, Table 2 presents the numbers of neurons calculated according to the mathematical formulas proposed in the open literature. Their RMSE values in the validation and training stages were obtained from the results of the present ANN model when run with 1 to 100 neurons. Table 2 thus clarifies the comparison between the present ANN model and previous ANN models, based on the RMSE in the validation stage and the minimum difference between the RMSE of the validation and training stages.

4.3. Performance of ANN model

Figure 5 shows the best validation performance of the ANN model for the single and double hidden layers. The best validation performance in the single hidden layer was reached at epoch 6 with an MSE of 1.966x10-3, whereas in the double hidden layers it was reached at epoch 4 with an MSE of 1.950x10-3. Furthermore, the MSE of the training stage in both the single and double hidden layers was lower than the MSE of the testing and validation stages, which indicates that the model learned the data very well in the training stage.

Fig. 5 Best validation performance (a) single hidden layer (b) double hidden layers.

Figure 6 shows the regression curves of all the ANN stages (training, testing, validation and all) for the single and double hidden layers. The coefficient of determination (R2) is used as an indicator to evaluate the agreement between the predicted and target production power.

Fig. 6 Regression curves of all stages (a) single hidden layer (b) double hidden layers.
In the single hidden layer, the values of R2 were 0.9881, 0.981, 0.982 and 0.985 for the training, testing, validation and all stages, respectively. For the double hidden layers, the corresponding values of R2 were 0.9887, 0.985, 0.99 and 0.988. According to R2, the ANN model with double hidden layers achieves better accuracy than the model with a single hidden layer in all stages.

Figure 7 shows the pattern of the power predicted by the ANN model with single and double hidden layers for the training and validation stages. The predicted pattern is similar to the measured power, especially for the double hidden layers, although there is a small deviation for the single hidden layer in the training stage. The agreement between the patterns increases in the validation stage, more so for the double hidden layers than for the single hidden layer.

Fig. 7 Measured and predicted values of output power of PV.

Table 2 Present neurons number compared with literature studies.

Single hidden layer
ANN structure   Reference                                                                     Formula                                     RMSE-Valid (%)   Difference RMSE (%) (Valid-Train)
6-3-1           Kang (1991)                                                                   j = n/2                                     6.4              2.30
6-6-1           Tang et al. (1993)                                                            j = n                                       7.04             1.84
6-12-1          Wong (1991); Lippmann (1987)                                                  j = 2n                                      2.08             2.18
6-13-1          H-Nielsen (1990)                                                              j = 2n+1                                    8.03             5.90
6-4-1           Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 1                      3.2              0.58
6-5-1           Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 2                      2.4              2.31
6-6-1           Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 3                      7.04             1.84
6-7-1           Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 4                      2.76             0.36
6-8-1           Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 5                      4.06             0.68
6-5-1           Shaft et al. (2006)                                                           j = (2/3)(n+o)                              2.4              2.31
6-11-1          Shaft et al. (2006)                                                           j = (3/2)(n+o)                              8.53             4.88
6-19-1          Mohanraj et al. (2009); Kalogirou et al. (1996, 1997, 1998); Kalogirou (1996) j = (n+o)/2 + sqrt(training data points)    6.23             2.23
6-15-1          Present study                                                                 Optimization                                2.76             0.18

Double hidden layer
ANN structure   Reference                                                                     Formula                                     RMSE-Valid (%)   Difference RMSE (%) (Valid-Train)
6-3-3-1         Kang (1991)                                                                   j = n/2                                     4.03             1.31
6-6-6-1         Tang et al. (1993)                                                            j = n                                       2.96             2.06
6-12-12-1       Wong (1991); Lippmann (1987)                                                  j = 2n                                      6.69             4.96
6-13-13-1       H-Nielsen (1990)                                                              j = 2n+1                                    8.12             4.97
6-4-4-1         Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 1                      2.67             1.31
6-5-5-1         Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 2                      1.97             2.94
6-6-6-1         Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 3                      2.96             2.06
6-7-7-1         Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 4                      6.22             3.68
6-8-8-1         Liu et al. (2017)                                                             j = sqrt(n+o)+a, a = 5                      3.40             0.13
6-5-5-1         Shaft et al. (2006)                                                           j = (2/3)(n+o)                              1.97             2.94
6-11-11-1       Shaft et al. (2006)                                                           j = (3/2)(n+o)                              7.93             4.85
6-19-19-1       Mohanraj et al. (2009); Kalogirou et al. (1996, 1997, 1998); Kalogirou (1996) j = (n+o)/2 + sqrt(training data points)    5.45             2.02
6-10-10-1       Present study                                                                 Optimization                                1.97             0.21

4.4. Case study of (I-V) and (P-V) curves

To further confirm the reliability of the present ANN model, it was tested to predict the current (Imp) and voltage (Vmp) at the maximum power point. The predicted current and voltage are presented as (I-V) and (P-V) curves and compared against the measured data, as shown in Figure 8. A good match can be seen between the predicted and measured (I-V) and (P-V) curves. In addition, the ANN model with double hidden layers achieves closer agreement with the measured data than the ANN model with a single hidden layer. This indicates that the ANN model has proven its accuracy in predicting the current and voltage of the PV module.

Fig. 8 Validation of ANN model with measured data (a) I-V curve (b) P-V curve.
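For completeness, the error metrics quoted in this section (RMSE, MSE and R2) can be computed from the measured and predicted power of any stage as in the short MATLAB sketch below. The vectors Pmeas and Ppred are placeholders; whether the reported percentage values refer to normalized or absolute power is not stated in the paper, so the sketch returns the metrics in the units of its inputs.

```matlab
% Minimal sketch of the evaluation metrics used in Section 4.
% Pmeas, Ppred: placeholder column vectors of measured and predicted power
% for one stage (training, validation or testing).
e     = Pmeas - Ppred;
MSE   = mean(e.^2);                        % mean squared error
RMSE  = sqrt(MSE);                         % root mean squared error
SSres = sum(e.^2);                         % residual sum of squares
SStot = sum((Pmeas - mean(Pmeas)).^2);     % total sum of squares
R2    = 1 - SSres/SStot;                   % coefficient of determination
```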
5. Conclusion

Two ANN architectures with single and double hidden layers have been used to predict the power production of a polycrystalline PV module under the real weather conditions of Iraq. The main contribution of the ANN model is the optimization of the number of neurons in the hidden layers. The analysis of the results showed that the best prediction of the ANN model was obtained with 15 neurons in the single hidden layer, with a minimum Root Mean Squared Error (RMSE) of 2.76%, and with 10 neurons in the double hidden layers, with an RMSE of 1.97%. Thus, the present ANN model reaches the required prediction accuracy more readily than the other models considered from the literature. In addition, the ANN model with double hidden layers achieved higher accuracy than the single hidden layer. In summary, the ANN model has proven its accuracy in forecasting the current and voltage of the PV module.

References

Abiodun, O.I., Aman, J., Abiodun, E.O., Kemi, V.D., Nachaat, A.M. & Humaira, A. (2018). State-of-the-art in artificial neural network applications: A survey. Heliyon, 4(11), 1-41. https://doi.org/10.1016/j.heliyon.2018.e00938
Barron, A.R. (1994). Neural networks: A review from a statistical perspective. Statistical Science, 9(1), 33-35. https://doi.org/10.1214/ss/1177010638
Cristian-Dragos, D., Adrian, G. & Calin, E. (2016). Solar Photovoltaic Energy Production Forecast Using Neural Network. Procedia Technology, 22, 808-815. https://doi.org/10.1016/j.protcy.2016.01.053
Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2, 303-314. https://link.springer.com/article/10.1007/BF02551274
Elsheikh, A.H., Sharshir, S.W., Elaziz, M.A., Kabeel, A.E., Guilan, W. & Haiou, Z. (2019). Modelling of solar energy systems using artificial neural network: a comprehensive review. Solar Energy, 180(1), 622-639. https://doi.org/10.1016/j.solener.2019.01.037
Gideon, K., Francis, N., Christopher, M. & Robert, K. (2021). Evaluation of thermal interface materials in mediating PV cell temperature mismatch in PV-TEG power generation. Energy Reports, 7, 1636-1650. https://doi.org/10.1016/j.egyr.2021.03.015
Hagan, M.T. & Menhaj, M.B. (1994). Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6), 989-993. https://doi.org/10.1109/72.329697
Hecht-Nielsen, R. (1990). Neurocomputing. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, United States. https://dl.acm.org/doi/10.5555/103996
Hertz, J., Krogh, A. & Palmer, R.G. (2018). Introduction to the Theory of Neural Computation. CRC Press-Taylor & Francis Group, Boca Raton, London, New York. https://doi.org/10.1201/9780429499661
Ismail, K. & Gencoglu, M.T. (2019). Predicting power production from a photovoltaic panel through artificial neural networks using atmospheric indicators. Neural Computing and Applications, 31(8), 3573-3586. https://doi.org/10.1007/s00521-017-3271-6
Kalogirou, S.A., Neocleous, C.C. & Schizas, C.N. (1998). Artificial neural networks for modelling the starting-up of a solar steam generator. Applied Energy, 60(2), 89-100. https://doi.org/10.1016/S0306-2619(98)00019-1
Kalogirou, S.A. (1996). Artificial neural networks for predicting the local concentration ratio of parabolic trough collectors. Proceedings of the International Conference EuroSun'96, Freiburg, Germany, 470-475. http://ktisis.cut.ac.cy/handle/10488/822
Kalogirou, S.A., Neocleous, C.E. & Schizas, C.N. (1996). A comparative study of methods for estimating intercept factor of parabolic trough collectors. Proceedings of the International Conference EANN'96, London, UK, 5-8. http://users.abo.fi/abulsari/EANN96.html
Kalogirou, S.A., Neocleous, C.E. & Schizas, C.N. (1997). Artificial neural networks for the estimation of the performance of a parabolic trough collector steam generation system. Proceedings of the International Conference EANN'97, Stockholm, Sweden, 227-232. https://ktisis.cut.ac.cy/handle/10488/18149
Kang, S. (1991). An Investigation of the Use of Feedforward Neural Networks for Forecasting. Ph.D. Thesis, Kent State University. https://dl.acm.org/doi/10.5555/144978
Laarabi, B., May, T.O., Dahlioui, D., Bassam, A., Flota-Banuelos, M. & Barhdadi, A. (2019). Artificial neural network modeling and sensitivity analysis for soiling effects on photovoltaic panels in Morocco. Superlattices and Microstructures, 127, 139-150. https://doi.org/10.1016/j.spmi.2017.12.037
Lachtermacher, G. & Fuller, J.D. (1995). Back propagation in time-series forecasting. Journal of Forecasting, 14(4), 381-393. https://doi.org/10.1002/for.3980140405
Lippmann, R.P. (1987). An introduction to computing with neural nets. IEEE ASSP Magazine, 4(2), 4-22. https://doi.org/10.1109/MASSP.1987.1165576
Liu, L., Diran, L., Qie, S., Hailong, L. & Ronald, W. (2017). Forecasting Power Output of Photovoltaic System Using A BP Network Method. Energy Procedia, 142, 780-786. https://doi.org/10.1016/j.egypro.2017.12.126
MacKay, D.J.C. (1992). A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3), 448-472. https://doi.org/10.1162/neco.1992.4.3.448
Mellit, A., Saglam, S. & Kalogirou, S.A. (2013). Artificial neural network-based model for estimating the produced power of a photovoltaic module. Renewable Energy, 60, 71-78. https://doi.org/10.1016/j.renene.2013.04.011
Mittal, M., Birinchi, B., Sahaj, S. & Anshu, M.G. (2018). Performance prediction of PV module using electrical equivalent model and artificial neural network. Solar Energy, 176, 104-117. https://doi.org/10.1016/j.solener.2018.10.018
Mohammad, A.T., Al-Obaidi, M.A., Hameed, E.M., Basheer, I.M. & Mujtaba, I.M. (2020). Modelling the chlorophenol removal from wastewater via reverse osmosis process using a multilayer artificial neural network with genetic algorithm. Journal of Water Process Engineering, 33, 1-10. https://doi.org/10.1016/j.jwpe.2019.100993
Mohammad, A.T., Sohif, B.M., Sulaiman, M.Y., Sopian, K. & Al-abidi, A.A. (2013). Artificial neural network analysis of liquid desiccant dehumidifier performance in a solar hybrid air-conditioning system. Applied Thermal Engineering, 59(1-2), 389-397. https://doi.org/10.1016/j.applthermaleng.2013.06.006
Mohanraj, M., Jayaraj, S. & Muraleedharan, C. (2009). Performance prediction of a direct expansion solar assisted heat pump using artificial neural networks. Applied Energy, 86(9), 1442-1449. https://doi.org/10.1016/j.apenergy.2009.01.001
Mubiru, J. (2011). Using Artificial Neural Networks to Predict Direct Solar Irradiation. Advances in Artificial Neural Systems, 2011, 1687-7594. https://doi.org/10.1155/2011/142054
Nayak, S.C., Misra, B.B. & Behera, H.S. (2017). Artificial chemical reaction optimization of neural networks for efficient prediction of stock market indices. Ain Shams Engineering Journal, 8(3), 371-390. https://doi.org/10.1016/j.asej.2015.07.015
Pontes, F.J., de Paiva, A.P., Balestrassi, P.P., Ferreira, J.R. & da Silva, M.B. (2012). Optimization of radial basis function neural network employed for prediction of surface roughness in hard turning process using Taguchi's orthogonal arrays. Expert Systems with Applications, 39(9), 7776-7787. https://doi.org/10.1016/j.eswa.2012.01.058
Priyanka, S. & Ravindra, N.M. (2012). Temperature dependence of solar cell performance - an analysis. Solar Energy Materials and Solar Cells, 101, 36-45. https://doi.org/10.1016/j.solmat.2012.02.019
Raj, A.K., Kunal, G., Srinivas, M. & Jayaraj, S. (2019). Performance analysis of a double-pass solar air heater system with asymmetric channel flow passages. Journal of Thermal Analysis and Calorimetry, 136(1), 21-38. http://dx.doi.org/10.1007/s10973-018-7762-1
Rumelhart, D.E., Hinton, G.E. & Williams, R.J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536. https://www.nature.com/articles/323533a0
Sanjay, C., Jyothi, C. & Chin, C.W. (2006). A study of surface roughness in drilling using mathematical analysis and neural networks. International Journal of Advanced Manufacturing Technology, 29, 846-852. https://link.springer.com/article/10.1007/s00170-006-0717-x
Shaft, I.A., Shah, S.I. & Kashif, F.M. (2006). Impact of varying neurons and hidden layers in neural network architecture for a time frequency application. In Proceedings of the 10th IEEE International Multitopic Conference (INMIC), Islamabad, Pakistan, (23-24), 188-193. https://doi.org/10.1109/INMIC.2006.358160
Siti, A.J., Flora, C., Mohd, H.A.W., Nur Hanis, M.R. & Muhammad, F.O. (2018). Prediction of Photovoltaic (PV) Output Using Artificial Neural Network (ANN) Based on Ambient Factors. Journal of Physics: Conference Series, 1049, 012088. https://doi.org/10.1088/1742-6596/1049/1/012088
Srinivasan, D., Liew, A.C. & Chang, C.S. (1994). A neural network short-term load forecaster. Electric Power Systems Research, 28(3), 227-234. https://doi.org/10.1016/0378-7796(94)90037-X
Tang, Z. & Fishwick, P.A. (1993). Feedforward neural nets as models for time series forecasting. Journal on Computing, 5(4), 374-385. https://doi.org/10.1287/ijoc.5.4.374
Valerio, L., Brano, G.C. & Mariavittoria, D.F. (2014). Artificial Neural Networks to Predict the Power Output of a PV Panel. International Journal of Photoenergy, 2014, Article ID 193083, 1-12. https://doi.org/10.1155/2014/193083
Wong, F.S. (1991). Time series forecasting using backpropagation neural networks. Neurocomputing, 2(4), 147-159. https://doi.org/10.1016/0925-2312(91)90045-D
Yegnanarayana, B. (2009). Artificial Neural Networks. PHI Learning Pvt. Ltd., New York, USA.
Zhang, G., Eddy, P.B. & Michael, Y.H. (1998). Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, 14(1), 35-62. https://doi.org/10.1016/S0169-2070(97)00044-7
Zhang, X. (1994). Time series analysis and prediction by neural networks. Optimization Methods and Software, 4(2), 151-170. https://doi.org/10.1080/10556789408805584
Ziane, A., Necaibia, A., Sahouane, N., Dabou, R., Mostefaoui, M., Bouraiou, A., Khelifi, S., Rouabhia, A. & Blal, M. (2021). Photovoltaic output power performance assessment and forecasting: Impact of meteorological variables. Solar Energy, 220, 745-75. https://doi.org/10.1016/j.solener.2021.04.004
© 2023. The Author(s). This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA) International License (http://creativecommons.org/licenses/by-sa/4.0/).