A Pruning Algorithm Based on Relevancy Index of Hidden Neurons Outputs

Slim Abid
Control & Energy Management Lab (CEM LAB), National School of Engineering of Sfax, University of Sfax, Tunisia
abid_slim_enis@yahoo.fr

Mohamed Chtourou
Control & Energy Management Lab (CEM LAB), National School of Engineering of Sfax, University of Sfax, Tunisia
mohamed.chtourou@enis.rnu.tn

Mohamed Djemel
Control & Energy Management Lab (CEM LAB), National School of Engineering of Sfax, University of Sfax, Tunisia
mohamed.djemel@enis.rnu.tn

Abstract—Choosing the training algorithm and determining the architecture of artificial neural networks are important issues with wide application. There is no general method for estimating the adequate size of a neural network. To achieve this goal, a pruning algorithm based on the relevancy index of hidden neurons' outputs is developed in this paper. The relevancy index depends on the output amplitude of each hidden neuron and estimates its contribution to the learning process. The method is validated on an academic example and tested on a wind turbine modeling problem. Compared with two modified versions of the Optimal Brain Surgeon (OBS) algorithm, the developed approach gives interesting results.

Keywords—pruning algorithm; OBS approach; relevancy index; hidden neurons

I. INTRODUCTION

Feedforward Neural Networks (FNNs) [1, 2] have been successfully used to solve many problems, such as dynamic system identification, signal processing, pattern classification [3, 4] and intelligent control [5]. One of the difficulties in applying FNNs is the determination of the optimal network architecture. In general, if the network structure is too small, it may not be able to learn the training samples. On the other hand, large-sized networks learn easily but show poor generalization capacity due to over-fitting. Thus, algorithms that can determine the appropriate network architecture automatically are highly desirable. Research in this field can be classified into two categories: constructive [6] and pruning approaches [7]. Interest has been growing in pruning strategies [8-10], which start with a large-sized network and remove unnecessary hidden neurons or weights, either during the training phase or after convergence to a local minimum. The best-known methods are Optimal Brain Damage (OBD) [11] and Optimal Brain Surgeon (OBS) [12], which eliminate the neurons or weights with the smallest saliency one by one, which significantly increases the computational complexity and the running time of the procedure. In this paper, an improved pruning algorithm based on hidden neurons' outputs is investigated and compared with two algorithms derived from the OBS method.

The rest of the paper is organized as follows. Section 2 briefly analyzes the OBS algorithm and describes two modified versions of it, namely Unit-OBS [13] and Fast-Unit-OBS [14]. A pruning algorithm based on the relevancy index of hidden neurons' outputs is introduced in Section 3. Section 4 illustrates the obtained simulation results. Finally, the conclusion is presented in Section 5.
II. RELATED WORKS

A simple FNN with a single output is represented in Figure 1 (the generalization to more output units is straightforward).

Fig. 1. Feedforward Neural Network.

This neural network is parameterized in terms of its weights:

$w = [w_1, w_2, \ldots, w_m]^T \in \mathbb{R}^m$   (1)

The training data consist of N patterns {x_j, y_j}, j = 1, ..., N. The error function for a given pattern is defined as:

$J_p = \frac{1}{2}(y - y_d)^2$   (2)

The global error function is described as:

$J = \frac{1}{N}\sum_{p=1}^{N} J_p$   (3)

A. OBS approach

In [11], the OBD method, which calculates the saliency using only the diagonal elements of the Hessian matrix and does not retrain after the pruning process, was introduced. To overcome this limitation, the OBS algorithm, which determines and removes the weight that has the smallest effect on the neural network performance and adjusts the remaining weights according to the error function gradient, was proposed [12]. The OBS algorithm assumes that the network has been trained to a local minimum of the error, so the second-order Taylor expansion of the error function with respect to the weights can be expressed as:

$\delta J = \left(\frac{\partial J}{\partial w}\right)^T \delta w + \frac{1}{2}\,\delta w^T H\,\delta w + O(\|\delta w\|^3)$   (4)

where H denotes the Hessian matrix, composed of the second-order derivatives of the error function; at a local minimum the first term vanishes. In order to minimize the increase of the error given by (3), the OBS algorithm forces one of the weights to zero; deleting the particular weight w_q amounts to imposing the constraint:

$e_q^T \delta w + w_q = 0$   (5)

where e_q represents the unit vector corresponding to weight w_q, i.e., $e_q^T w = w_q$. The minimum error increase caused by deleting a weight is then obtained from the constrained problem:

$\min_q \left\{ \min_{\delta w} \left\{ \frac{1}{2}\,\delta w^T H\,\delta w \;\middle|\; e_q^T \delta w + w_q = 0 \right\} \right\}$   (6)

To solve this constrained optimization problem, Lagrange's method is introduced, which gives the optimum value of the Lagrange function L_q after deleting w_q:

$L_q = \frac{1}{2}\,\frac{w_q^2}{[H^{-1}]_{qq}}$   (7)

The remaining weights are updated according to the following equation:

$\delta w = -\frac{w_q}{[H^{-1}]_{qq}}\, H^{-1} e_q$   (8)

where $[H^{-1}]_{qq}$ denotes the diagonal element (q, q) of the inverse Hessian matrix H⁻¹. The need for the inverse Hessian matrix to update the weights is the main disadvantage of this algorithm. A procedure to calculate H⁻¹ recursively over the training patterns has been proposed in [12]:

$H_{n+1}^{-1} = H_n^{-1} - \frac{H_n^{-1} X_{n+1} X_{n+1}^T H_n^{-1}}{N + X_{n+1}^T H_n^{-1} X_{n+1}}$   (9)

with $H_0 = \alpha I$, $H_N = H$ and $10^{-8} \le \alpha \le 10^{-4}$, where X_n denotes the gradient vector of the network output with respect to the weights for the n-th pattern. However, one of the main difficulties of the OBS approach is that it requires a great amount of computation and a long running time for the pruning procedure.
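To make the procedure concrete, the following minimal sketch (our own NumPy illustration, not the authors' code) computes H⁻¹ with the recursion (9), the saliencies (7) and the update (8) for a network whose weights have been flattened into a single vector; X is assumed to hold the per-pattern gradient vectors used in (9).

```python
import numpy as np

def obs_prune_step(w, X, alpha=1e-6):
    """One OBS pruning step (illustrative sketch, not the authors' code).

    w     : flat vector of the m network weights
    X     : (N, m) array whose rows are the per-pattern gradient vectors
    alpha : initialization constant, with 1e-8 <= alpha <= 1e-4
    """
    N, m = X.shape
    H_inv = (1.0 / alpha) * np.eye(m)            # H_0^{-1} = (alpha I)^{-1}
    for x in X:                                  # recursion of equation (9)
        Hx = H_inv @ x
        H_inv -= np.outer(Hx, Hx) / (N + x @ Hx)
    saliency = 0.5 * w**2 / np.diag(H_inv)       # L_q of equation (7)
    q = int(np.argmin(saliency))                 # weight with least saliency
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]   # update of equation (8)
    w[q] = 0.0                                   # enforce exact removal of w_q
    return w, q, H_inv
```

Note that the loop over the patterns and the m x m inverse Hessian are exactly the computational burden criticized above: the cost grows with the square of the number of weights.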
B. Unit-OBS algorithm

The Unit-OBS pruning algorithm removes an unneeded neuron in one step with minimal increase in error [15]. This approach reduces both the computational complexity and the running time. The details of the Unit-OBS algorithm are as follows:

• Step 1: Train the neural network to a minimum of the error based on some local optimization method.
• Step 2: Compute H⁻¹.
• Step 3: For each neuron u:
  - Compute the number of output weights of neuron u and record it as m(u).
  - Form the matrix M for neuron u:

    $M = [e_{q_1}, e_{q_2}, \ldots, e_{q_{m(u)}}]$   (10)

    where q_1, q_2, ..., q_{m(u)} denote the indices of the output weights of neuron u, with:

    $e_{q_i} = [0, \ldots, 0, 1, 0, \ldots, 0]^T$   (11)

    the single 1 being in position q_i.
  - Calculate the error deviation after deleting neuron u:

    $\delta J(u) = \frac{1}{2}\, w^T M \left(M^T H^{-1} M\right)^{-1} M^T w$   (12)

• Step 4: Find the neuron u_0 that gives the smallest increase in error, noted δJ(u_0). If δJ(u_0) > E_t, where E_t indicates a preselected threshold, then the algorithm stops; otherwise go to Step 5.
• Step 5: Remove neuron u_0 and update the other weights using the following equation:

  $\delta w = -H^{-1} M \left(M^T H^{-1} M\right)^{-1} M^T w$   (13)

The unavoidable drawback of this algorithm is the complexity of computing the Hessian matrix, whose dimension is equal to the number of initial weights of the network. In order to overcome this disadvantage, a modified version of the Unit-OBS algorithm has been proposed in [14].

C. Fast-Unit-OBS algorithm

In this case, the size of the Hessian matrix depends on the number of hidden neurons. The relevance of each hidden neuron is described by the following equation:

$L_u = \frac{1}{2}\,\frac{\bar{w}_u^2}{[H^{-1}]_{uu}}$   (14)

where the weight mean is given by:

$\bar{w}_u = \frac{1}{m}\sum_{i=1}^{m} w_{ui}$   (15)

where w_ui denotes the output weights of the hidden neuron u and m indicates the number of w_ui. The steps of the Fast-Unit-OBS algorithm are as follows:

• Step 1: Train the neural network to a minimum of the error based on some local optimization method.
• Step 2: Compute H⁻¹.
• Step 3: For each hidden neuron u, compute the relevance L_u and find the neuron u_0 that gives the smallest value, noted L.
• Step 4: If L is greater than the preselected threshold L_t, then go to Step 5. Otherwise:
  - delete the selected hidden neuron and adjust the weights of the remaining neurons based on the following equation:

    $\delta w = -\frac{\bar{w}_u}{[H^{-1}]_{uu}}\, H^{-1} e_u$   (16)

    with:

    $e_u = [0, \ldots, 0, 1, 0, \ldots, 0]^T$   (17)

  - return to Step 2.
• Step 5: Stop the pruning procedure; it may be desirable to retrain the network at this stage.
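Both unit-level variants rest on the group-deletion algebra of (12) and (13). As a minimal sketch (our own NumPy illustration, assuming H⁻¹ has already been computed as in (9)), the two quantities for one candidate neuron can be obtained as follows:

```python
import numpy as np

def unit_obs_delta(w, H_inv, out_idx):
    """Error increase (12) and weight update (13) for deleting the hidden
    neuron whose output-weight indices are listed in out_idx.
    Illustrative sketch; notation follows equations (10)-(13)."""
    m, k = len(w), len(out_idx)
    M = np.zeros((m, k))
    M[out_idx, np.arange(k)] = 1.0        # M = [e_q1, ..., e_qm(u)], eq. (10)
    S = np.linalg.inv(M.T @ H_inv @ M)    # (M^T H^{-1} M)^{-1}
    Mw = M.T @ w                          # the output weights being removed
    dJ = 0.5 * Mw @ S @ Mw                # equation (12)
    dw = -H_inv @ M @ (S @ Mw)            # equation (13)
    return dJ, dw
```

The neuron u_0 with the smallest dJ is deleted whenever dJ(u_0) does not exceed the threshold E_t, and the returned dw re-balances the remaining weights in a single step.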
Despite the improvements brought by the Unit-OBS and Fast-Unit-OBS algorithms, which have reduced the computation time of the pruning procedure, these methods still require the calculation of the inverse of the Hessian matrix. To overcome this problem, a pruning algorithm that eliminates the hidden neurons having a low impact on the learning performance, without using the inverse of this matrix, is proposed in the next section.

III. PROPOSED ALGORITHM

The basic idea of the algorithm proposed in this paper is inspired from the pruning approach in [16], which consists of estimating the sensitivity of the global error to each connection during the training phase and removing the weight that presents the smallest sensitivity value. The neural network is then retrained and the pruning process repeated as long as the learning capacities are satisfactory. However, the success of this approach depends on the size of the initial neural structure. Indeed, for a large-sized network the number of weights increases hugely, which degrades the performance of the approach significantly. The proposed method consists instead of finding the contribution of each hidden neuron in the network, which reduces the complexity of the procedure, and of eliminating the one having the least effect on the cost function. This effect is estimated by calculating the relevancy index, which depends on the output amplitude of each hidden neuron and is defined as follows:

$RI_u = \frac{1}{N}\sum_{p=1}^{N} \frac{O_u^p}{\sum_{v=1}^{N_c} O_v^p}$   (18)

where $O_u^p$ indicates the output of the u-th hidden node for pattern p and N_c denotes the number of hidden neurons. The average value of the relevancy index is defined as:

$RI_{av} = \frac{1}{N_c}\sum_{u=1}^{N_c} RI_u$   (19)

Each hidden neuron with a relevancy index smaller than the average value should be deleted. It is to be noted that we are interested in a multi-input single-output (MISO) model and that the weight update law is based on the back-propagation algorithm:

$\Delta w = -\eta\,\frac{\partial J_p}{\partial w}$   (20)

where η denotes the learning rate. The proposed pruning algorithm can be described as follows:

• Step 1: Train the neural network with the back-propagation algorithm until a tolerated value of the training criterion, noted J_t, is reached.
• Step 2: Calculate the relevancy index of each hidden neuron using equation (18).
• Step 3: Remove each hidden neuron u that satisfies the following test:

  $RI_u < RI_{av}$   (21)

  If no neuron satisfies the test (21), then go to Step 5.
• Step 4: Keeping the same average value of the relevancy index, train the obtained structure for a reduced number of iterations, noted m, and return to Step 2.
• Step 5: Stop the pruning procedure and retrain the network at this stage.

It can be seen that the proposed algorithm is based on a pruning strategy that avoids the computation of the inverse of the Hessian matrix, which constitutes the main contribution of this approach.
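Because (18), (19) and (21) involve only the hidden-layer outputs, the whole relevancy test amounts to a few array operations. The following is a minimal sketch (our own illustration; with sigmoid activations the outputs $O_u^p$ are positive, so the denominator of (18) is never zero):

```python
import numpy as np

def relevancy_indexes(O):
    """Relevancy indexes of equations (18)-(19).
    O: (N, Nc) array of hidden-neuron outputs, with O[p, u] = O_u^p.
    Illustrative sketch of the proposed test."""
    ratios = O / O.sum(axis=1, keepdims=True)  # O_u^p / sum_v O_v^p
    RI = ratios.mean(axis=0)                   # equation (18)
    return RI, RI.mean()                       # RI_u and RI_av of (19)

# Step 3 of the algorithm keeps only the neurons with RI_u >= RI_av:
# RI, RI_av = relevancy_indexes(O); keep = np.where(RI >= RI_av)[0]
```

After each deletion, the reduced network is retrained for m iterations and the test is repeated (Steps 4 and 2), so no Hessian computation or matrix inversion is ever needed.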
IV. SIMULATION RESULTS AND DISCUSSION

In this section, we present the simulation results. Initially, the capacities of the proposed algorithm are verified on an academic example. Then, the algorithm is used for wind turbine neural modeling. Through the two simulation examples, the proposed algorithm is compared with the Unit-OBS and Fast-Unit-OBS algorithms according to some performance criteria.

A. First example

The goal is to present the learning set to the studied algorithms and to show that they can determine a number of hidden neurons close to that of the neural network used to generate the learning set. Figure 2 shows the neural network generating the learning set, with input vector [y(k−1), y(k−2), u(k−1), u(k−2)], 3 hidden neurons and y as the output, where the activation function of the hidden neurons and of the output neuron is the sigmoid one.

Fig. 2. Neural network generating the learning set.

Figures 3 and 4 represent the input signals used for the training and validation sets, respectively. The efficiency of each algorithm is computed as:

$\text{eff} = \left(1 - \frac{|N_{fc} - 3|}{N_c}\right) \times 100\%$   (22)

where N_fc indicates the final number of hidden neurons of the structure obtained after the pruning procedure (for example, starting from N_c = 8 neurons and ending with N_fc = 5 gives eff = (1 − 2/8) × 100 = 75%). The initial values of the weights and the numerical parameters are selected based on several simulations so as to obtain the best performance in terms of training capacity and convergence time.

Fig. 3. Training input signal.

Fig. 4. Validation input signal.

The simulation results describing the performance of the Unit-OBS, Fast-Unit-OBS and proposed algorithms are given in Table I, where for each algorithm a set of initial neural structures has been considered. Table II shows the average performance of the studied algorithms. It can be seen that the investigated approaches provide similar and satisfactory performance. On the other hand, the proposed algorithm avoids the complex calculation of the inverse of the Hessian matrix, which constitutes the main advantage of this method. To confirm these interpretations, we test the studied algorithms on the neural modeling problem of a wind turbine.

B. Second example

In recent years, a growing interest in renewable energy has been evident [17]. Wind energy has become competitive and considerable technological progress has been achieved in the field of wind turbines. The energy production depends on many factors, essentially the choice of robust methods for controlling wind turbines [18]. This step can be accomplished only if the model describes the system dynamics correctly, which shows the importance of the selection of the modeling strategy. In this work, the modeling of the turbine is provided via a neural model whose architecture is selected based on a pruning approach. The rigid model with only one degree of freedom [19-21] is given by:

$I_t\,\dot{\omega}_a = T_a - T_g - k_t\,\omega_a$   (23)

The dot designates the first-order time derivative, T_g is the generator torque, and I_r, I_g, k_r, k_g are the moment of inertia of the rotor-side masses, the moment of inertia of the generator-side masses, the mechanical damping on the rotor side and the mechanical damping on the generator side, respectively, where $I_t = I_r + n_g^2 I_g$ is the total inertia and $k_t = k_r + n_g^2 k_g$ is the equivalent mechanical damping. For model (23), $\omega_g = n_g\,\omega_a$ is satisfied, where $\omega_a = \dot{\theta}_a$ is the rotor rotational speed and $\omega_g = \dot{\theta}_g$ is the rotational speed of the high-speed shaft, n_g designates the gear ratio between the primary shaft and the secondary shaft, and θ_a and θ_g are the azimuthal rotor position and the azimuthal position of the high-speed shaft. The captured aerodynamic torque T_a is given in terms of the power coefficient C_p(λ, β) as:

$T_a(k) = \frac{1}{2}\,\rho\,\pi R^3\, v^2(k)\,\frac{C_p(\lambda,\beta)}{\lambda}$   (24)

where λ is the specific speed (tip-speed ratio) defined as:

$\lambda = \frac{\omega_a R}{v}$   (25)

v is the effective wind speed, ρ is the air density, and R denotes the blade rotor radius. The power coefficient C_p(λ, β) is estimated using aerodynamic data obtained from wind tunnel measurements. It is generally represented in the form of an analytical formula giving C_p(λ) for various values of the pitch angle β. In the literature [22], one finds the following approximation:

$C_p(\lambda, \beta) = A\,\exp[-B]$   (26)

where:

$A = c_1 G + c_4 \beta + c_5, \qquad B = c_6 G, \qquad G = \frac{1}{\lambda + c_2 \beta} - \frac{c_3}{\beta^3 + 1}$

and the coefficients c_i, i = 1, ..., 6, are identified from real C_p curves. Using the Euler approximation, and for small values of the sampling period, the discrete model describing the wind turbine can be expressed as follows:

$\omega_a(k+1) = \omega_a(k) + \Delta t\left[\frac{T_a(k)}{I_t} - \frac{T_g(k)}{I_t} - \frac{k_t}{I_t}\,\omega_a(k)\right]$   (27)

where Δt denotes the sampling period. The goal of this study is to determine an adequate neural model of the wind turbine.
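To make the simulation model concrete, the sketch below advances the rotor speed by one sampling period. The function and argument names are ours, and the expressions for G, A and B follow our reconstruction of (26) above, so they should be read as indicative rather than definitive:

```python
import numpy as np

def turbine_step(w_a, T_g, v, dt, rho, R, I_t, k_t, beta, c):
    """One Euler step of the rigid wind-turbine model, equations (24)-(27).
    w_a: rotor speed, T_g: generator torque, v: wind speed, c: (c1,...,c6).
    Illustrative sketch using the Cp approximation reconstructed in (26)."""
    lam = w_a * R / v                                  # tip-speed ratio (25)
    G = 1.0 / (lam + c[1] * beta) - c[2] / (beta**3 + 1)
    A = c[0] * G + c[3] * beta + c[4]
    B = c[5] * G
    Cp = A * np.exp(-B)                                # equation (26)
    T_a = 0.5 * rho * np.pi * R**3 * v**2 * Cp / lam   # equation (24)
    return w_a + dt * (T_a - T_g - k_t * w_a) / I_t    # equation (27)
```

Iterating this step over the wind-speed and generator-torque sequences produces the rotor-speed trajectory that the neural model is trained to reproduce.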
This model should be able to learn the nonlinear dynamics of the wind turbine and to give good generalization ability.

TABLE I. PERFORMANCE OF THE STUDIED ALGORITHMS

| Algorithm | Numerical simulation parameters | N_c | N_fc | Efficiency (%) | Generalization error | Run time |
|---|---|---|---|---|---|---|
| Unit-OBS | η=0.6, E_t=8·10⁻⁵, J_t=0.003, α=9·10⁻⁵ | 5 | 3 | 100 | 0.0036 | 0'11" |
| | | 8 | 5 | 75 | 0.0032 | 0'11" |
| | | 11 | 4 | 90.91 | 0.0035 | 0'18" |
| | | 14 | 3 | 100 | 0.0035 | 1'11" |
| | | 20 | 4 | 95 | 0.0037 | 1'12" |
| | | 25 | 3 | 100 | 0.0035 | 2'08" |
| Fast-Unit-OBS | η=0.6, L_t=J_t=0.003, α=10⁻⁸ | 5 | 3 | 100 | 0.0036 | 0'10" |
| | | 8 | 3 | 100 | 0.0033 | 0'09" |
| | | 11 | 4 | 90.91 | 0.0035 | 0'14" |
| | | 14 | 2 | 92.86 | 0.0034 | 1'36" |
| | | 20 | 5 | 90 | 0.0037 | 0'46" |
| | | 25 | 4 | 96 | 0.0036 | 0'43" |
| Proposed algorithm | η=0.6, J_t=0.003, m=20 | 5 | 2 | 80 | 0.0033 | 0'23" |
| | | 8 | 3 | 100 | 0.0034 | 1'09" |
| | | 11 | 3 | 100 | 0.0032 | 0'29" |
| | | 14 | 3 | 100 | 0.0035 | 0'19" |
| | | 20 | 5 | 90 | 0.0035 | 0'40" |
| | | 25 | 5 | 92 | 0.0032 | 1'43" |

The wind turbine parameters used in the simulations are the following [23, 24]: ρ = 1.225 kg m⁻³, R = 21.38 m, n_g = 43.165, I_r = 3.25·10⁵ kg m², I_g = 34.4 kg m², β = 1°, k_r = 1.5 N m rad⁻¹ s, k_g = 3.7 N m rad⁻¹ s, λ_opt = 7.5, c₁ = 1.1023·10⁻², c₂ = 0.02, c₃ = 0.003·10⁻², c₄ = 0.9082·10⁻³, c₅ = 9.36·10⁻⁶ and c₆ = 18.4.

The input-output neural model describing the wind turbine is presented in Figure 5. The model input vector consists of the actual and previous generator torque (T_g(k) and T_g(k−1)), the actual and previous rotor rotational speed (ω_a(k) and ω_a(k−1)) and the actual value of the wind speed v(k). The model output is the future value of the rotor rotational speed, ω_a(k+1). The selection of the input vector and the value of the sampling period have been made after several simulations. The mean wind speed was set to v_moy = 12 m s⁻¹ and Δt = 0.1 s. Figures 6 and 7 represent the input signals T_g(k) and v(k), respectively, used in the training (6.a and 7.a) and validation (6.b and 7.b) phases. It is to be noted that each training algorithm has been executed for different initial neural structures. The simulation results describing the performance of the algorithms studied in this paper are illustrated in Table III. It can be seen that the proposed algorithm provides better performance when compared to the Unit-OBS and Fast-Unit-OBS algorithms. For a better illustration, the results of Table III are summarized in Table IV, which presents the average performance in terms of final architecture, generalization capacity and convergence time. Table IV shows the contribution of the proposed algorithm. In fact, we note that the algorithm based on the relevancy index leads to the simplest neural structure and presents the shortest convergence time with satisfactory generalization abilities. Moreover, the proposed method avoids the complex computation of the inverse of the Hessian matrix, which constitutes the major drawback of the OBS approach. Figure 8 gives the evolution of the identified neural model output for the training (8.a) and validation (8.b) sets using the proposed algorithm.

Fig. 5. Input-output neural model.

Fig. 6. T_g(k) used in the training and validation phases ((a): training, (b): validation).

Fig. 7. v(k) used in the training and validation phases ((a): training, (b): validation).

Fig. 8. Training and validation performances for the proposed algorithm ((a): training, (b): validation).
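For completeness, the following minimal sketch (array and function names are ours) shows how the regression pairs of Figure 5 can be assembled from the simulated sequences T_g(k), ω_a(k) and v(k):

```python
import numpy as np

def build_dataset(T_g, w_a, v):
    """Arrange the simulated sequences into the regression pairs of Figure 5:
    input [T_g(k), T_g(k-1), w_a(k), w_a(k-1), v(k)] -> target w_a(k+1).
    Illustrative sketch; the input ordering follows the text above."""
    X, Y = [], []
    for k in range(1, len(w_a) - 1):
        X.append([T_g[k], T_g[k - 1], w_a[k], w_a[k - 1], v[k]])
        Y.append(w_a[k + 1])
    return np.array(X), np.array(Y)
```

The resulting pairs (X, Y) feed the back-propagation training of Step 1 of the proposed algorithm, after which the relevancy-index pruning of Steps 2-5 is applied.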
TABLE II. AVERAGE PERFORMANCE OF THE STUDIED ALGORITHMS

| Algorithm | Efficiency (%) | Generalization error | Run time |
|---|---|---|---|
| Unit-OBS | 93.49 | 0.0035 | 0'52" |
| Fast-Unit-OBS | 95.11 | 0.0035 | 0'36" |
| Proposed algorithm | 93.67 | 0.0033 | 0'47" |

TABLE III. ALGORITHM PERFORMANCE FOR THE WIND TURBINE NEURAL MODELING PROBLEM

| Algorithm | Numerical simulation parameters | N_c | N_fc | Generalization error | Run time |
|---|---|---|---|---|---|
| Unit-OBS | η=0.4, E_t=8·10⁻⁵, J_t=0.01, α=9·10⁻⁵ | 10 | 9 | 0.0058 | 1'42" |
| | | 15 | 9 | 0.0080 | 2'01" |
| | | 20 | 6 | 0.0056 | 2'44" |
| | | 25 | 7 | 0.0048 | 2'41" |
| Fast-Unit-OBS | η=0.4, J_t=0.01, L_t=0.002, α=10⁻⁸ | 10 | 8 | 0.0044 | 1'53" |
| | | 15 | 14 | 0.0048 | 1'34" |
| | | 20 | 13 | 0.0057 | 2'37" |
| | | 25 | 13 | 0.0067 | 2'02" |
| Proposed algorithm | η=0.4, J_t=0.01, m=10 | 10 | 6 | 0.0063 | 1'34" |
| | | 15 | 6 | 0.0084 | 1'39" |
| | | 20 | 7 | 0.0048 | 2'04" |
| | | 25 | 5 | 0.0084 | 1'52" |

TABLE IV. AVERAGE PERFORMANCE

| Algorithm | N_fc | Generalization error | Run time |
|---|---|---|---|
| Unit-OBS | 8 | 0.0061 | 2'17" |
| Fast-Unit-OBS | 12 | 0.0054 | 2'02" |
| Proposed algorithm | 6 | 0.0070 | 1'47" |

V. CONCLUSION

This paper presents a pruning algorithm based on a relevancy index that allows obtaining an adequate neural network structure. The main advantage of this method is that it avoids calculating the inverse of the Hessian matrix, which is indispensable for any pruning algorithm based on the OBS approach. To confirm the effectiveness of the developed algorithm, it has been applied to an academic example and to a wind turbine model. The simulation results demonstrate that the proposed algorithm, compared with the Unit-OBS and Fast-Unit-OBS algorithms, not only ensures comparable training and generalization performance but also shortens the run time and considerably simplifies the neural structure obtained after the pruning process, which proves the potential utility of this algorithm.
REFERENCES

[1] P. Mehra, B. W. Wah, Artificial Neural Networks: Concepts and Theory, IEEE Computer Society Press, 1992
[2] J. M. Zurada, Introduction to Artificial Neural Systems, St. Paul, MN: West, 1992
[3] V. E. Neagoe, C. T. Tudoran, "A neural machine vision model for road detection in autonomous navigation", U.P.B. Sci. Bull., Series C, Vol. 73, No. 2, pp. 167-178, 2011
[4] E. Şuşnea, "Using artificial neural networks in e-learning systems", U.P.B. Sci. Bull., Series C, Vol. 72, No. 4, pp. 91-100, 2010
[5] A. Mechernene, M. Zerikat, S. Chekroun, "Indirect field oriented adaptive control of induction motor based on neuro-fuzzy controller", J. Electrical Systems, Vol. 7, No. 3, pp. 308-319, 2011
[6] D. Liu, T. S. Chang, Y. Zhang, "A constructive algorithm for feedforward neural networks with incremental training", IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 49, No. 12, pp. 1876-1879, 2002
[7] R. Reed, "Pruning algorithms - a survey", IEEE Transactions on Neural Networks, Vol. 4, No. 5, pp. 740-747, 1993
[8] H. Honggui, Q. Junfei, "A novel pruning algorithm for self-organizing neural network", International Joint Conference on Neural Networks, Atlanta, Georgia, USA, pp. 22-27, 2009
[9] D. Juan, E. M. Joo, "A fast pruning algorithm for an efficient adaptive fuzzy neural network", 8th IEEE International Conference on Control and Automation, Xiamen, China, pp. 1030-1035, 2010
[10] Z. Zhang, J. Qiao, "A node pruning algorithm for feedforward neural network based on neural complexity", International Conference on Intelligent Control and Information Processing, Dalian, China, pp. 406-410, 2010
[11] Y. Le Cun, J. S. Denker, S. A. Solla, "Optimal brain damage", in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed., San Mateo, CA: Morgan Kaufmann, pp. 598-605, 1990
[12] B. Hassibi, D. Stork, G. Wolff, "Optimal brain surgeon and general network pruning", IEEE International Conference on Neural Networks, Vol. 1, pp. 293-299, 1993
[13] A. Stahlberger, M. Riedmiller, "Fast network pruning and feature extraction using the unit-OBS algorithm", Advances in Neural Information Processing Systems, Denver, Vol. 9, pp. 2-5, 1996
[14] J.-F. Qiao, Y. Zhang, H.-G. Han, "Fast unit pruning algorithm for feedforward neural network design", Applied Mathematics and Computation, Vol. 205, pp. 622-627, 2008
[15] B. Hassibi, D. G. Stork, "Second-order derivatives for network pruning: optimal brain surgeon", Advances in Neural Information Processing Systems, Vol. 5, pp. 164-171, 1993
[16] E. D. Karnin, "A simple procedure for pruning back-propagation trained neural networks", IEEE Transactions on Neural Networks, Vol. 1, pp. 239-242, 1990
[17] N. Ciprian, M. Florin, "Operational parameters evaluation for optimal wind energy systems development", U.P.B. Sci. Bull., Series C, Vol. 74, pp. 223-230, 2012
[18] A. Pintea, D. Popescu, "A comparative study of digital IMC and RST regulators applied on a wind turbine", U.P.B. Sci. Bull., Series C, Vol. 74, No. 4, pp. 27-38, 2012
[19] P. Christou, "Advanced materials for turbine blade manufacture", Reinforced Plastics, Vol. 51, No. 4, pp. 22-24, 2007
[20] L. Fingersh, M. Hand, A. Laxson, "Wind turbine design cost and scaling model", National Renewable Energy Laboratory, Technical Report NREL/TP-500-40566, 2006
[21] Y. D. Song, B. Dhinakaran, X. Y. Bao, "Variable speed control of wind turbines using nonlinear and adaptive algorithms", Journal of Wind Engineering and Industrial Aerodynamics, Vol. 85, pp. 293-308, 2000
[22] K. Reif, F. Sonnemann, R. Unbehauen, "Nonlinear state observation using H∞-filtering Riccati design", IEEE Transactions on Automatic Control, Vol. 44, No. 1, pp. 203-208, 1999
[23] A. Khamlichi, B. Ayyat, M. Bezzazi, L. El Bakkali, V. C. Vivas, C. L. F. Castano, "Modelling and control of flexible wind turbines without wind speed measurements", Australian Journal of Basic and Applied Sciences, Vol. 3, No. 4, pp. 3246-3258, 2009
[24] S. Abid, M. Chtourou, M. Djemel, "Incremental and stable training algorithm for wind turbine neural modeling", Engineering Review, Vol. 33, No. 3, pp. 165-172, 2013