INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL ISSN 1841-9836, e-ISSN 1841-9844, 14(6), 615-632, December 2019.

Parameter Estimation for PMSM based on a Back Propagation Neural Network Optimized by Chaotic Artificial Fish Swarm Algorithm

J.W. Jiang, Z. Chen, Y.H. Wang, T. Peng, S.L. Zhu, L.M. Shi

Jianwu Jiang
1. School of Computer Science and Technology, Soochow University, China
333 East GanJiang Way, Suzhou, 225300, China
2. College of Information Engineering, Taizhou Polytechnic College, China
47955024@qq.com

Zhi Chen
College of Information Engineering, Taizhou Polytechnic College, China
17784068@qq.com

Yihuai Wang*
School of Computer Science and Technology, Soochow University, China
333 East GanJiang Way, Suzhou, 225300, China
*Corresponding author: yihuaiw@suda.edu.cn

Tao Peng
School of Computer Science and Technology, Soochow University, China
sdpengtao401@gmail.com

Shilang Zhu
School of Computer Science and Technology, Soochow University, China
20154027006@stu.suda.edu.cn

Lianmin Shi
College of Information Engineering, Suzhou Institute of Trade & Commerce, China
20154027006@stu.suda.edu.cn

Abstract: The strong nonlinearity of the Permanent Magnet Synchronous Motor (PMSM) control system makes it difficult to accurately identify motor parameters such as the stator winding resistance, dq-axis inductances, and rotor flux linkage. To address the premature convergence of the traditional Back Propagation Neural Network (BPNN) in PMSM parameter identification, a new identification method is proposed. It uses the Chaotic Artificial Fish Swarm Algorithm (CAFSA) to optimize the initial weights and thresholds of the BPNN, and then refines them by BPNN training. The globally optimal network parameters are thus obtained by combining the global optimization ability of CAFSA with the local search ability of BPNN. Simulation results and experimental data show that the network model optimized by the CAFS-BPNN Algorithm is weakly sensitive to initial values, robust to parameter settings, and stable under complex working conditions. Compared with other intelligent algorithms, such as RLS and PSO, CAFS-BPNNA achieves higher identification accuracy and faster convergence for PMSM motor parameters.

Keywords: Permanent Magnet Synchronous Motor (PMSM), Back Propagation Neural Network (BPNN), Chaotic Artificial Fish Swarm Algorithm (CAFSA), parameter estimation, identification accuracy, convergence speed.

Copyright ©2019 CC BY-NC

1 Introduction

Permanent magnet synchronous motors (PMSM) are widely used in industrial robots, servo drive control, electric vehicle speed regulation and other high-precision control fields due to their outstanding advantages in power and torque density, speed control performance and system robustness [10]. PMSM parameters include the stator winding resistance, dq-axis inductances, rotor flux linkage, etc. They reflect the performance of the motor system during operation and can assist in motor condition monitoring [5] and fault diagnosis [12]. Accurate parameter identification is therefore very important for the motor control system. However, motor parameters are very vulnerable to external factors such as temperature, flux saturation and the skin effect [3]; for example, temperature changes affect the stator winding resistance, and permanent magnet demagnetization directly affects the dq-axis inductances [22].
With changes in working conditions and the surrounding environment, the actual parameters deviate from the nominal parameters [17]. A motor controller must therefore be able to self-tune and identify the motor parameters. Considering the technical difficulty and cost, it is difficult to measure motor parameters directly with auxiliary sensors such as temperature and magnetic sensors [5,15]. The PMSM control system has strongly nonlinear and time-varying characteristics, which makes accurate parameter identification very difficult; the identification algorithm needs to balance complexity, convergence and computing time.

2 Related works

In traditional control theory, parameter identification methods for PMSM include recursive least squares (RLS) [18,23], extended Kalman filter identification (EKF) [1,6,20], and model reference adaptive systems (MRAS) [2,26]. Because of the linear parameterization of the RLS algorithm, RLS estimators are usually noise sensitive, which may lead to lower identification accuracy. When estimating the winding resistance and rotor flux with an EKF, the estimator is noisy and unstable, so it cannot accurately estimate the actual parameters. An MRAS estimator cannot accurately estimate the winding resistance, inductance and rotor flux simultaneously. It is often impossible to find the optimal solution of the estimated parameters accurately with the above methods.

With the development of swarm intelligence theory, many scholars have applied this kind of optimization algorithm to motor parameter identification, including the ant colony algorithm [4], genetic algorithms [21], and particle swarm optimization [14]. The ant colony algorithm and particle swarm optimization easily fall into local optima, and the optimization error is relatively large. In [11], a genetic algorithm is used to identify the dq-axis inductances and rotor flux; because of its large space-time complexity, the convergence speed is slow. In [14], a dynamic particle swarm optimization (DPSO-LS) algorithm with a dynamic Gaussian-distribution-based learning strategy is used to enhance the global search ability, but the computational complexity becomes larger.

With the development of artificial intelligence technology, artificial neural networks are widely used in PMSM parameter identification, and the BPNN is the most widely used [21,25]. The BPNN is a multi-layer feedforward neural network trained by the error back-propagation algorithm. It has the advantages of a clear model structure and simple calculation, and it has been proved theoretically that a three-layer model can map any complex nonlinear relationship [16]. It is widely used for nonlinear curve fitting in signal processing and automatic control. The BPNN has excellent local optimization ability and can converge quickly and accurately in local regions. However, this advantage also results in poor global search performance, local prematurity, poor global convergence and long iteration periods, and its optimization can hardly achieve the desired convergence accuracy and speed on its own. Therefore, it is necessary to combine the BPNN algorithm with another intelligent algorithm that has stronger global optimization ability.
In the early stage of training, the cooperating algorithm is used to drive the network quickly towards the global extremum, and then the local optimization ability of BPNN is used to converge rapidly. CAFSA is a swarm intelligence algorithm with excellent global searching ability. It simulates fish swarm behaviors such as preying, swarming, following and random moving to optimize the system. It is weakly sensitive to initial values, robust to parameter settings, and offers good global search performance and fast convergence [19,23]. However, the CAFSA algorithm is highly random and converges poorly in the later search period; its optimization performance has to be improved by tuning the step size and the field of view and by adding a crowding factor [8]. Combining CAFSA and BPNN compensates for their respective shortcomings and improves the overall optimization performance in terms of convergence speed and accuracy.

This paper presents a PMSM motor parameter identification algorithm based on CAFS-BPNNA, which combines BPNN with CAFSA effectively. By injecting a pulse current into the d-axis of the PMSM control system, the sampled process data, including d-axis voltage, q-axis voltage, d-axis current, q-axis current and motor speed, are used as data sets for network training and testing. The parameter set {Rs, Ld, Lq, ψf} to be identified is taken as the network output. Combining the global search ability of CAFSA and the fast local convergence of BPNN, the convergence accuracy and speed of PMSM parameter identification are effectively improved. The main contributions of this paper are:

(1) Using the forward propagation topology of BPNN, a nonlinear mapping identification model from the sampled parameter set {ud, uq, id, iq, ω} to the identified parameter set {Rs, Ld, Lq, ψf} of the PMSM is constructed. The input data set of the mapping model is obtained by the d-axis current injection method, and the PMSM parameters to be identified are obtained through this simple and direct mapping model.

(2) The CAFS-BPNNA algorithm, a BPNN optimized by CAFSA, is introduced for online identification of PMSM parameters. The connection weights and thresholds of the BPNN are taken as the optimization objects of CAFSA. They are optimized quickly by the global search of CAFSA in the early stage of training, and the optimization is then handed over to the BPNN, which converges to the global optimal extremum in the later stage. By exploiting their respective optimization strengths and avoiding their shortcomings, the convergence accuracy and speed of the PMSM parameter identification model are effectively improved.

(3) The population aggregation degree is embedded in CAFS-BPNNA as the switching condition between the two intelligent algorithms. Owing to the optimization characteristics of the fish swarm algorithm, the aggregation degree of the population increases in the later stage of optimization. When it reaches a preset threshold, the fast-convergence stage of CAFSA has reached its late phase and the process can be transferred to the BPNN precision optimization stage, so the switch into precision optimization is made earlier.
3 Theoretical analysis of the swarm intelligence identification model for PMSM motor parameters

3.1 Mathematical model for PMSM motor identification

The PMSM control system is a complex system with strong coupling and time-varying nonlinearity. Its mathematical model can be established in the static three-phase coordinate system, the static αβ coordinate system, or the synchronously rotating dq coordinate system, and the models in the three coordinate systems can be transformed into each other. The model in the dq coordinate system is the most commonly used [11]. Neglecting the magnetic saturation of the PMSM and the eddy current and hysteresis losses of the core, the voltage equation and flux linkage equation in the dq coordinate system are expressed as (1) and (2):

  u_d = R_s i_d + dψ_d/dt − ω ψ_q
  u_q = R_s i_q + dψ_q/dt + ω ψ_d        (1)

  ψ_d = L_d i_d + ψ_f
  ψ_q = L_q i_q                          (2)

where u_d and u_q are the voltage components on the dq axes, i_d and i_q are the current components on the dq axes, R_s is the stator resistance, ψ_d and ψ_q are the flux linkages on the dq axes, L_d and L_q are the dq-axis inductances, ψ_f is the flux linkage generated by the permanent magnet, and ω is the electrical angular speed. In the synchronously rotating dq coordinate system, the mathematical model of the PMSM can be expressed as:

  di_d/dt = −(R_s/L_d) i_d + (L_q/L_d) ω i_q + u_d/L_d
  di_q/dt = −(R_s/L_q) i_q − (L_d/L_q) ω i_d + u_q/L_q − (ψ_f/L_q) ω        (3)

p = {R_s, L_d, L_q, ψ_f} is the set of parameters that need to be identified simultaneously. When i_d = 0, the dq-axis currents are decoupled, so the stator current has only the q-axis AC component. In the steady state, substituting (2) into (1) and discretizing gives:

  u_d0 = −ω_0 ψ_q0 = −ω_0 L_q0 i_q0
  u_q0 = R_s i_q0 + ψ_f0 ω_0             (4)

Formula (4) shows that the order of the motor equations is two, while the system needs to identify four parameters {R_s, L_d, L_q, ψ_f}; the equation of state is rank-deficient. By injecting a transient d-axis current with i_d ≠ 0 in the steady state, the motor model becomes:

  u_d1 = R_s i_d1 − L_q ω_1 i_q1
  u_q1 = R_s i_q1 + L_d ω_1 i_d1 + ψ_f ω_1        (5)

It can be seen from [2] that the estimation error of the rotor flux linkage, ψ_f,error, is caused by assuming L_d = L_q and ψ_f = ψ_f0. When ψ_f,error is negligible relative to ψ_f0, the estimate ψ_f,e and ψ_f0 are approximately equal, and |ψ_f,error| can be simplified as:

  |ψ_f,error| ≈ | (Δψ_f i_q1² + ΔL i_d1 i_q1²) / i_d1² | = | Δψ_f i_q1²/i_d1² + ΔL i_q1²/i_d1 |        (6)

When i_d1 is large enough, (6) shows that |ψ_f,error| → 0 and ψ_f,e ≈ ψ_f0, see Fig. 1.

Figure 1: Estimation error of the rotor flux relative to the d-axis current (x-axis: d-axis current i_d in A, from −3 to 3; y-axis: error in %, from 0 to 30; i_q = 3.55 A, i_q0 = 3.35 A, ψ_m0 = 0.175 Wb, L_d = 8.5 mH; Curve 1: Δψ_m = 1% ψ_m0, ΔL = 10% L_d; Curve 2: Δψ_m = 0.5% ψ_m0, ΔL = 5% L_d; Curve 3: Δψ_m = 0.1% ψ_m0, ΔL = 1% L_d; Curve 4: Δψ_m = 0.05% ψ_m0, ΔL = 0.5% L_d)
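To make the full-rank argument concrete, the sketch below (not taken from the paper; the function and variable names are illustrative assumptions) solves the two steady-state operating points (4) and (5) in closed form, assuming ideal noise-free measurements and constant inductances:

```python
import numpy as np

# Minimal sketch: closed-form solution of the steady-state equations (4)-(5).
# meas0 is taken with id = 0, meas1 with id != 0 (e.g. id = -2 A).
def solve_pmsm_params(meas0, meas1):
    ud0, uq0, iq0, w0 = meas0
    ud1, uq1, id1, iq1, w1 = meas1
    Lq = -ud0 / (w0 * iq0)                            # from ud0 = -w0*Lq*iq0
    Rs = (ud1 + w1 * Lq * iq1) / id1                  # from ud1 = Rs*id1 - w1*Lq*iq1
    psi_f = (uq0 - Rs * iq0) / w0                     # from uq0 = Rs*iq0 + w0*psi_f
    Ld = (uq1 - Rs * iq1 - w1 * psi_f) / (w1 * id1)   # from uq1 = Rs*iq1 + w1*Ld*id1 + w1*psi_f
    return Rs, Ld, Lq, psi_f

# Illustrative check with the nominal values of Tab. 3 (Rs=2.875, Ld=Lq=8.5 mH, psi_f=0.175):
Rs, Ld, Lq, pf = 2.875, 8.5e-3, 8.5e-3, 0.175
w = 2000 * 2 * np.pi / 60 * 4                         # electrical angular speed (Pp = 4 pole pairs)
iq0, id1, iq1 = 3.35, -2.0, 3.55
meas0 = (-w * Lq * iq0, Rs * iq0 + w * pf, iq0, w)
meas1 = (Rs * id1 - w * Lq * iq1, Rs * iq1 + w * Ld * id1 + w * pf, id1, iq1, w)
print(solve_pmsm_params(meas0, meas1))                # recovers (2.875, 0.0085, 0.0085, 0.175)
```

In practice the measurements are noisy and the parameters drift with the operating point, which is why the identification is instead formulated as the neural-network mapping of Section 3.3.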
3.2 Data set acquisition method of the PMSM motor identification model

Generally, a PMSM can be controlled at constant torque angle (torque angle of 90°); setting the d-axis current to i_d = 0 gives the maximum torque on the q-axis. In order to obtain the motor speed and the dq-axis currents and voltages needed for parameter estimation, Data0 and Data1 are obtained by alternately setting i_d = 0 and injecting a reverse d-axis current (i_d = −2 A) during the sampling period. The single sampling time is Tsample (= 10⁻⁴ s), and the total sampling time is Tsample·N. The time series model of the data sampling is shown in Fig. 2. The Data1 sampling starts 2 ms after the current switching, which avoids the interference caused by the switching transient and ensures the stability of the sampled data.

Figure 2: Time series model for data sampling (alternating windows with i_d = 0 and i_d = −2 A; each window is sampled for Tsample·N, starting 2 ms after the current switch)

Figure 3: BPNN-PMSM motor parameter identification model (inputs uq0, ud0, ω0, iq0, uq1, ud1, iq1, ω1, id1; hidden neurons m1, ..., mk; outputs Rs, Ld, Lq, ψf)

3.3 Parameter mapping model of PMSM motor identification based on BPNN

According to the principle of BPNN and the PMSM control strategy model of Section 3.1, the set {Rs, Ld, Lq, ψf} of PMSM parameters to be identified is taken as the output neuron set of the BPNN, and data0 = {ud0, uq0, iq0, ω0} (id = 0) and data1 = {ud1, uq1, id1, iq1, ω1} (id ≠ 0) are used as the input neuron set. The BPNN parameter identification model of the PMSM shown in Fig. 3 is constructed. The three-layer neural network model consists of nine input neurons and four output neurons. The hidden layer uses the Tansig function as the transfer function f1(·), and the output layer uses the linear purelin function as the transfer function f2(·). The hidden layer neurons output z_h = f1(Σ_{i=0..n} ω_ih x_i), h = 1, 2, ..., m, and the output layer neurons output y_j = f2(Σ_{h=0..m} v_hj z_h), j = 1, 2, 3, 4; thus the BPNN completes the mapping from the 9-dimensional input space to the 4-dimensional output space.

After determining the input layer, output layer and data set of the BPNN, the key factors affecting network performance are the number of hidden layers, the number of hidden layer neurons, the initial connection weights and thresholds, the learning rate and the learning algorithm. Adding hidden layers can reduce the network error and improve the accuracy, but it also increases the network complexity and training time and may lead to over-fitting. According to Hornik's theorem that a three-layer BPNN can fit an arbitrary nonlinear curve, the identification model in this paper adopts a three-layer network structure with one hidden layer. The number of hidden layer neurons directly determines whether the network will over-fit, but there is no theory that exactly determines this number for a BPNN; in this paper, empirical formulas and experimental verification are used to determine the optimal number. The setting of the initial connection weights and thresholds has a great impact on the convergence speed of the model; in this paper, the artificial fish swarm algorithm, which is robust to initial values, is introduced to optimize the initial values of the model, so as to improve the convergence speed and accelerate the global optimization.

From studies of BPNN networks, the following empirical formulas for the number of hidden layer neurons have been formed. Gagan pointed out a logarithmic relationship between the number of hidden layer neurons and the number of input neurons.
Formula 1: m1 = log2 n [20]. Kolmogorov's theorem indicates a relationship between the number of hidden neurons and the number of input neurons. Formula 2: m2 = 2n − 1. D. Gao [7] used the least squares method to simplify the relationship between the input, output and hidden layer neurons. Formula 3: m3 = √(0.43nl + 0.12l² + 2.54n + 0.77l + 0.35) + 0.51 [7]. The number of hidden layer neurons can also be calculated from the input and output neurons. Formula 4: m4 = √(nl). Finally, the number of hidden neurons can be obtained from the input and output neurons together with an adjustable parameter. Formula 5: m5 = √(n + l) + α, α = 1, 2, ..., 10. In the above formulas, m is the number of neurons in the hidden layer, n is the number of neurons in the input layer, and l is the number of neurons in the output layer.

Table 1: The number of neurons in the hidden layer of BPNN

  Condition                         m1   m2   m3   m4   m5     m
  n = 9, l = 4, α = 1, 2, ..., 10    4   17    8    6   5-14   4-17

Combining the above empirical formulas gives the candidate numbers of hidden layer neurons m ∈ {m1, m2, m3, m4, m5}. According to the BPNN-PMSM parameter identification model of Section 3.3, n = 9 and l = 4, and the resulting numbers of hidden layer neurons m are shown in Tab. 1.

The Weighted Mean Square Error (WMSE) is used to compute the training error. The individual error of sample p is E_p = (1/2) Σ_{j=1..l} (ŷ_j^p − y_j^p)² w_j, and the global error is WMSE = Σ_{p=1..P} E_p, where l is the number of output nodes, P is the number of training samples, ŷ_j^p is the expected output value of the network, y_j^p is the actual output value of the network, and w_j is the error weight of output node j.

With S_j the net input of neuron y_j, the hidden layer output z_h = f1(S_h) and the output layer output y_j = f2(S_j) as described in Section 3.3, v_hj are the output layer connection weights and ω_ih are the hidden layer connection weights. They are adjusted by the cumulative error algorithm: after each iteration the connection weight increments Δv_hj and Δω_ih are

  Δv_hj = Σ_{p=1..P} η (ŷ_j^p − y_j^p) w_j f2′(S_j) z_h,
  Δω_ih = Σ_{p=1..P} Σ_{j=1..l} η (ŷ_j^p − y_j^p) w_j f2′(S_j) v_hj f1′(S_h) x_i,

where η is the learning rate and η ∈ (0.01, 10). In the learning process, a weighted inertial momentum term is used to reduce learning oscillation and accelerate the convergence of the system. The adjusted weight update can be expressed as Δw(n) = −η1 ∇WMSE(n) + α Δw(n − 1), where η1 is the learning rate and α is the inertial momentum coefficient. To adapt the BPNN to the required convergence speed in different periods, the learning rate η1 is adjusted dynamically and adaptively using the error ratio φ = WMSE(n)/WMSE(n − 1). When φ is larger than the threshold value φth the learning rate is increased, and vice versa, which keeps the learning rate large in the early stage of learning; in the later stage, the learning rate decreases, oscillation is reduced, and the network settles towards the optimum.
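As an illustration of the training rule just described, the following minimal sketch (assumed hyper-parameters and names, not the authors' code) performs cumulative-error updates of the 9-15-4 network with the weighted error, inertial momentum, and one possible reading of the error-ratio learning-rate schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 9, 15, 4                       # input, hidden and output neurons of Fig. 3
W1 = rng.uniform(-1, 1, (m, n + 1))      # hidden weights, last column = thresholds
W2 = rng.uniform(-1, 1, (l, m + 1))      # output weights, last column = thresholds
wj = np.full(l, 0.25)                    # error weights of the four outputs (equal here)

def forward(X, W1, W2):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Z = np.tanh(Xb @ W1.T)               # f1 = Tansig
    Zb = np.hstack([Z, np.ones((len(Z), 1))])
    return Xb, Z, Zb, Zb @ W2.T          # f2 = purelin (linear)

def wmse(Y, T):
    return 0.5 * np.sum(((T - Y) ** 2) * wj) / len(Y)

def train_step(X, T, W1, W2, v1, v2, eta, alpha=0.9):
    """One cumulative-error update; v1, v2 are the previous increments (momentum)."""
    Xb, Z, Zb, Y = forward(X, W1, W2)
    err = (T - Y) * wj                             # weighted output error, f2' = 1
    dW2 = err.T @ Zb / len(X)
    dhid = (err @ W2[:, :m]) * (1.0 - Z ** 2)      # back-propagated through Tansig
    dW1 = dhid.T @ Xb / len(X)
    v1, v2 = eta * dW1 + alpha * v1, eta * dW2 + alpha * v2
    return W1 + v1, W2 + v2, v1, v2

def adapt_eta(eta, wmse_now, wmse_prev, phi_th=1.04):
    """Error-ratio rule phi = WMSE(n)/WMSE(n-1); the factors are illustrative."""
    return eta * 1.05 if wmse_now / wmse_prev > phi_th else eta * 0.7

# toy usage on random data
X, T = rng.normal(size=(400, n)), rng.normal(size=(400, l))
v1, v2, eta = np.zeros_like(W1), np.zeros_like(W2), 0.05
prev = wmse(forward(X, W1, W2)[-1], T)
for _ in range(20):
    W1, W2, v1, v2 = train_step(X, T, W1, W2, v1, v2, eta)
    cur = wmse(forward(X, W1, W2)[-1], T)
    eta, prev = adapt_eta(eta, cur, prev), cur
```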
Table 2: Relationship between the number of hidden layer neurons and the errors

  Hidden neurons   Training error (%)   Testing error (%)
  4                12.6334              12.0542
  5                10.4325              10.1453
  6                10.0836               9.2695
  7                 9.5836               9.0521
  8                 9.0452               8.8485
  9                 8.6142               8.5858
  10                8.0560               8.1524
  11                8.2683               7.0547
  12                7.5876               6.2482
  13                7.0218               5.0624
  14                6.6482               6.1486
  15                5.8473               5.1862
  16                5.8593               5.8864
  17                5.7324               5.3485

The data set is acquired with the sampling model of Section 3.2. The number of sampling runs is 500; 400 data sets are used as the training set and the rest as the test set. The number of hidden layer neurons is varied from 4 to 17. Because the four output parameters are equally important for motor performance evaluation, the training and testing results in Tab. 2 are averaged with equal weights. The error data show that as the number of hidden layer neurons increases, the training and test errors decrease gradually, but the test error fluctuates once the number of hidden layer neurons exceeds 11. Comparing the error data, the number of hidden layer neurons can be set to 13-15, and 15 is selected in this paper.

3.4 Chaotic artificial fish swarm algorithm

In this paper, the Chaotic Artificial Fish Swarm Algorithm (CAFSA) is used to optimize the BPNN, and a PMSM motor parameter identification model based on the CAFS-BPNNA algorithm is proposed.

Relevant definitions of the CAFS-BPNNA algorithm

The state of an individual artificial fish (AF) at time t is defined as X^S(t) = (x_1^S(t), x_2^S(t), ..., x_D^S(t)), S = 1, 2, ..., AF_Num, where AF_Num is the size of the artificial fish swarm, D is the state dimension of X^S(t) (the total number of connection weights and thresholds of the BPNN), and x_i^S(t), i = 1, 2, ..., D, are the variables to be optimized. E^S(t) = 1/e^S(t) is the current food concentration of the S-th artificial fish. Visual is the visual range of an artificial fish (perception distance), Step is the moving step of an artificial fish, and Δ is the crowding factor.

CAFS-BPNNA uses a three-layer BPNN with one hidden layer, in which the number of input layer neurons (Lin) is n = 9, the number of hidden layer neurons (Lh) is m = 15, and the number of output layer neurons (Lo) is l = 4. The dimension D of X^S(t) is therefore D = m(n + 1) + l(m + 1) = 15 × 10 + 4 × 16 = 214. The connection weights of Lin → Lh and Lh → Lo are assigned to the front part of X^S(t), and the thresholds of the neurons in Lh and Lo are stored in the back part of X^S(t). An individual fish X^S(t) thus contains the connection weights and thresholds needed to construct a BPNN, i.e. it represents one BPNN. When the swarm is initialized, AF_Num artificial fish are placed in the range [−1, 1] in a chaotic random manner. The initial food concentration is E^S(t) = 1/e^S(t), where e^S(t) = WMSE/N, WMSE is the global weighted mean square error, and N is the size of the training sample set. The smaller the error e^S(t) between the expected and actual network outputs, the higher the food concentration E^S(t) of the artificial fish, and the better the performance of the neural network constructed from that artificial fish.
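The following sketch (layout and names are assumptions for illustration, not the paper's implementation) shows how a single artificial fish of dimension D = 214 can be decoded into the BPNN weight and threshold matrices and scored by its food concentration:

```python
import numpy as np

n, m, l = 9, 15, 4
D = m * (n + 1) + l * (m + 1)            # 15*10 + 4*16 = 214 weights and thresholds

def unpack(fish):
    """Front part: Lin->Lh weights/thresholds; back part: Lh->Lo weights/thresholds."""
    W1 = fish[: m * (n + 1)].reshape(m, n + 1)
    W2 = fish[m * (n + 1):].reshape(l, m + 1)
    return W1, W2

def food_concentration(fish, X, T):
    """E = 1/e with e = WMSE/N: the smaller the network error, the richer the food."""
    W1, W2 = unpack(fish)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Z = np.tanh(Xb @ W1.T)                               # Tansig hidden layer
    Y = np.hstack([Z, np.ones((len(Z), 1))]) @ W2.T      # purelin output layer
    wmse = 0.5 * np.mean(np.sum((T - Y) ** 2, axis=1))
    return 1.0 / (wmse / len(X) + 1e-12)                 # guard against division by zero

# A fish initialised in [-1, 1], evaluated on a dummy data set:
rng = np.random.default_rng(1)
fish = rng.uniform(-1, 1, D)
X, T = rng.normal(size=(400, n)), rng.normal(size=(400, l))
print(food_concentration(fish, X, T))
```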
The distance d_PQ between any two artificial fish X^P(t) and X^Q(t) is calculated as

  d_PQ = (1/D) Σ_{i=1..D} |x_i^P(t) − x_i^Q(t)|,

where x_i^P(t) and x_i^Q(t) are the corresponding state values of the two artificial fish. When d_PQ < Visual, X^P(t) and X^Q(t) are within each other's visual field and are regarded as partners.

Behavior description of the artificial fish swarm algorithm

(a) Prey behavior: Fish tend to move towards high food concentrations, perceived through their sensory organs. Artificial fish X_i(t) traverses all partner fish in its search field, and the number of partners Frd_Num is counted. Initially the target fish X_target(t) is set to X_i(t), and its food concentration is compared with that of each partner fish X_j(t) during the traversal. If E_target(t) < E_j(t), the target fish is updated to partner X_j(t). After all partner fish have been examined, if a better target fish X_target(t) has been found, the artificial fish X_i(t) moves one step towards X_target(t) according to X_i(t+1) = X_i(t) + rounds() · Step · (X_target(t) − X_i(t)) / d_target,i; otherwise, the artificial fish X_i(t) moves one step towards the cluster center.

(b) Swarm behavior: Fish like to gather, forage collectively and avoid risks. Assuming the current state of the artificial fish is X_i(t), the number of companion fish Frd_Num and the central position X_c(t) are obtained by traversing the search field. If E_c(t)/Frd_Num > Δ·E_i(t), the food concentration at the partner center is high and the center is not too crowded, and the fish moves towards the center according to X_i(t+1) = X_i(t) + rounds() · Step · (X_c(t) − X_i(t)) / d_ci; otherwise the prey behavior is carried out.

(c) Follow behavior: Some individuals in the fish swarm find food and share the location, and others follow and arrive at the food point to eat together. Assuming the current state of the artificial fish is X_i(t), the number of partners in its field of vision is Frd_Num, and the partner with the highest food concentration is X_b(t). If E_b(t)/Frd_Num > Δ·E_i(t), the partner with the highest food concentration at X_b(t) is not too crowded, and X_i(t) moves towards partner X_b(t) according to X_i(t+1) = X_i(t) + rounds() · Step · (X_b(t) − X_i(t)) / d_bi; otherwise the prey behavior is carried out.

(d) Moving behavior: When an individual fish X_i(t) finds no partner around it, it moves randomly to the next position within its field of vision according to X_i(t+1) = X_i(t) + rounds() · Step, which widens the range of the swarm and helps the further exploration for the global optimum.

(e) Bulletin board: The highest food concentration and the corresponding individual fish state are updated and preserved. During the iterative traversal, when an individual fish's food concentration is higher than the bulletin board record, the bulletin board is updated.

Chaotic search

Chaotic phenomena are irregular and unpredictable behaviors in nonlinear systems; they are ergodic and random. When chaotic search is introduced into the AFSA algorithm, its ergodicity makes the search range of the fish swarm cover the whole target region, and its randomness helps the search process escape from local extrema and tend to the global optimum, which improves the search efficiency of the AFSA algorithm and shortens the convergence time of the global optimization. Typical chaotic systems include the Logistic map, the Lorenz system and the Henon map.
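The sketch below (illustrative assumptions only; food() can be the food-concentration function sketched above) shows how the four behaviors and the chaotic random source rounds() could be realised:

```python
import numpy as np

class LogisticRand:
    """rounds(): chaotic numbers in (0,1) from y_{k+1} = mu * y_k * (1 - y_k), mu = 4."""
    def __init__(self, y0=0.345, mu=4.0):
        self.y, self.mu = y0, mu
    def __call__(self, size=None):
        if size is None:
            self.y = self.mu * self.y * (1.0 - self.y)
            return self.y
        return np.array([self() for _ in range(int(np.prod(size)))]).reshape(size)

rounds = LogisticRand()

def distance(a, b):
    return np.mean(np.abs(a - b))                          # d_PQ defined above

def neighbours(i, fish, visual):
    return [j for j in range(len(fish)) if j != i and distance(fish[i], fish[j]) < visual]

def move_towards(xi, xt, step):
    d = distance(xi, xt) + 1e-12
    return xi + rounds() * step * (xt - xi) / d

def prey(i, fish, food, visual, step):
    nb = neighbours(i, fish, visual)
    better = [j for j in nb if food(fish[j]) > food(fish[i])]
    if better:                                             # richer partner found: move towards the best one
        target = max(better, key=lambda j: food(fish[j]))
        return move_towards(fish[i], fish[target], step)
    if nb:                                                 # otherwise head for the cluster centre
        return move_towards(fish[i], fish[nb].mean(axis=0), step)
    return fish[i] + rounds(fish[i].shape) * step          # moving behavior: no partner in sight

def swarm(i, fish, food, visual, step, delta):
    nb = neighbours(i, fish, visual)
    if nb:
        centre = fish[nb].mean(axis=0)
        if food(centre) / len(nb) > delta * food(fish[i]): # centre is rich and not too crowded
            return move_towards(fish[i], centre, step)
    return prey(i, fish, food, visual, step)

def follow(i, fish, food, visual, step, delta):
    nb = neighbours(i, fish, visual)
    if nb:
        b = max(nb, key=lambda j: food(fish[j]))
        if food(fish[b]) / len(nb) > delta * food(fish[i]):  # best partner not too crowded
            return move_towards(fish[i], fish[b], step)
    return prey(i, fish, food, visual, step)
```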
In this paper, a one-dimensional logistic chaotic system is used as the source of the random motion required in the AFSA search process. It is defined as y_{k+1} = f(μ, y_k) = μ y_k (1 − y_k), where y_k ∈ (0, 1) is the value of y after k iterations and μ is the control parameter of the system. When μ = 4, the system is in a completely chaotic state, and y_k traverses the values in (0, 1) without repetition. The logistic chaotic system drives the activities of the artificial fish swarm: it generates all random numbers used in the initialization of the artificial fish and in the swarm behaviors. The following provisions are therefore made for the artificial fish swarm:

(a) The chaotic system generates the initial artificial fish. There are 214 initial connection weights and thresholds to be optimized in the CAFS-BPNNA parameter identification model of the permanent magnet synchronous motor. For the i-th artificial fish X_i(t), the chaotic variables y_k^i (k = 1, 2, ..., 214) are generated with the logistic system, and the initial value X_i(0) is set accordingly. If the range of the parameters to be identified is (a, b), the initial state of the artificial fish is X_i(0) = (x_1^i(0), x_2^i(0), ..., x_214^i(0)), where x_k^i(0) = a_i + (b_i − a_i) y_k^i. In the same way, AF_Num artificial fish are produced.

(b) The random number function rounds() used in the preying, swarming, following and moving behaviors of the artificial fish swarm is generated by the logistic chaotic system.

4 Process analysis of the PMSM motor parameter identification network algorithm based on CAFS-BPNNA

4.1 Network optimization algorithm for PMSM motor parameter identification based on CAFS-BPNNA

CAFS-BPNNA consists of two stages for PMSM motor parameter identification. In the early stage, the superior global optimization ability of the chaotic artificial fish swarm is used to approach the global optimal point quickly. In the later stage, the fast local convergence of the BPNN is used to reach the final global optimum. The specific steps are as follows (see Fig. 4).

Figure 4: Procedure of PMSM motor parameter identification based on CAFS-BPNNA

(1) Initialize BPNN: According to the PMSM motor parameter evaluation model, the BPNN topology is determined: the numbers of input, hidden and output layers and the numbers of their respective neurons.

(2) Initialize CAFS: According to the topology of the BPNN, the connection weights and thresholds are determined. The initial state values of the artificial fish are set: swarm size (AF_Num), individual fish state dimension (D), visual range (Visual), moving step (Step), crowding factor (Δ), maximum iteration number (imax) and target error (WMSEmax). The logistic chaotic system is used to determine the positions and initial states of the artificial fish. The initial position of each individual fish X_i(0) and the corresponding food concentration E_i(0) are calculated by loading the sample data.

(3) Update the bulletin board: The individual fish with the highest food concentration is selected by traversal and comparison, and the bulletin board is checked and updated.

(4) Behavior selection of AF: The fish swarm chooses and executes one of the four activities: preying, following, swarming and moving. The logistic chaotic system is used to move to the next candidate position.

(5) CAFS termination condition judgment: According to the bulletin board, the optimal food concentration and the number of iterations, it is judged whether the CAFS termination condition of the CAFS-BPNNA optimization is satisfied; if so, the process is transferred to the later BPNN optimization, otherwise it returns to step (3) to continue the fish swarm optimization.

(6) BPNN optimization: The process transfers to the BPNN iterative optimization, using the back-propagation weight updating method described above, until the final optimization conditions are satisfied and the globally optimal network is obtained.
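As a compact outline of steps (1)-(6), the following hypothetical sketch chains the two stages, handing over from CAFS to BPNN once the population aggregation degree σ (defined in Section 4.2 below) falls below a threshold; cafs_step and train_bpnn stand for the fish-swarm iteration and the back-propagation refinement sketched earlier, and all names here are assumptions:

```python
import numpy as np

def aggregation_degree(fitness):
    f = np.asarray(fitness, dtype=float)
    f_norm = np.max(np.abs(f)) + 1e-12                   # normalisation parameter of Eq. (7)
    return np.mean(((f - f.mean()) / f_norm) ** 2)

def cafs_bpnna(init_swarm, food, cafs_step, train_bpnn,
               sigma_th=1e-4, i_max=400, wmse_max=1e-3):
    fish = init_swarm()                                   # steps (1)-(2): chaotic initialisation
    board = max(fish, key=food)                           # step (3): bulletin board
    for it in range(i_max):
        fish = cafs_step(fish)                            # step (4): prey / swarm / follow / move
        best = max(fish, key=food)
        if food(best) > food(board):
            board = best                                  # step (3) again: update bulletin board
        fitness = [food(x) for x in fish]
        if aggregation_degree(fitness) < sigma_th:        # step (5): swarm has clustered
            break
    return train_bpnn(board, wmse_max)                    # step (6): BPNN precision optimisation
```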
4.2 Selection of the switching point between the two swarm intelligence algorithms in CAFS-BPNNA network optimization

In the identification model, the optimization of the BPNN network parameters is completed by the CAFS and BPNN algorithms in two stages. If the switching point is too late, the CAFS algorithm performs blind searching once it approaches the global extremum in the later stage, which reduces the convergence speed. Conversely, if the switching point is too early and the BPNN search starts too soon, the optimization converges prematurely to a local extremum, which reduces the convergence accuracy. Accurately choosing the switching time of the two algorithms is therefore very important for the whole optimization.

In the traditional CAFS algorithm, the termination conditions are mainly determined by the number of iterations and the target error, but in the optimization process of the CAFS-BPNNA fusion algorithm it is difficult to judge accurately from a single target error whether the fast-convergence stage of CAFS has reached its end. In the later stage of the CAFS algorithm, the individual fish cluster near the global extremum because of their preference behaviors, so the aggregation of the population increases markedly and the fitness value of each individual fish approaches the average fitness value of the system. For this reason, the aggregation degree defined by (7) is introduced to judge the clustering of the individual fish [9]:

  σ = (1/AF_Num) Σ_{j=1..AF_Num} ((f_j − f_avg)/f)²        (7)

where f_j is the fitness value of individual fish X_j(t), f_avg is the average fitness value of the population, and f is a normalization parameter. In the early stage of identification, the individual fish differ greatly; their distribution is loose, the clustering is low and σ is large. In the later stage of optimization, the fish swarm clusters more tightly and σ decreases. When σ falls below the threshold value, the aggregation of the fish swarm is very high; the fusion algorithm has entered the late stage of rapid CAFS convergence and can switch to BPNN for precision optimization.

Figure 5: Aggregation degree σ of the optimized iterative population based on CAFS-BPNNA

Fig. 5 shows the change of the aggregation degree σ during the iterations of CAFS-BPNNA. CAFS-BPNNA-base is the original iterative optimization curve, and CAFS-BPNNA-σ is the iterative curve that uses the population aggregation degree σ as the switching criterion. For the base curve, after σ has dropped below 0.0001 the CAFS fish swarm optimization wanders in a disordered state until the preset iteration number of 50 is reached, and only then switches to BPNN to search for the global extremum quickly. The σ-based curve adds the aggregation-degree test and switches to the BPNN optimization immediately once σ drops below 0.0001, reaching the extremum point after fast convergence. The comparison shows that incorporating the judgment of the population aggregation degree σ into the iterative optimization effectively avoids the disordered swimming in the later optimization stage of the fish swarm algorithm and improves the convergence speed of the fusion algorithm.

5 Result and analysis

5.1 Experimental setting of PMSM motor parameter identification

The PMSM speed control system based on closed-loop speed regulation is built on the motor control platform shown in Fig. 6. The nominal parameters of the PMSM motor are given in Tab. 3.

Figure 6: Block diagram of the PMSM speed control system based on CAFS-BPNNA

Table 3: Nominal parameters of the PMSM motor

  P = 2600 W            ω = 2000 r/min         Rs = 2.875 Ω
  ψf = 0.175 Wb         I = 4 A                T = 5 N·m
  Ld = 8.5 mH           J = 0.48e-5 kg·m²      U = 380 V
  ωmax = 2500 r/min     Lq = 8.5 mH            Pp = 4

The ranges of the parameters to be determined are Rs ∈ (0, 5), Ld ∈ (0, 0.015), Lq ∈ (0, 0.015) and ψf ∈ (0, 3). The motor adopts the SVPWM control strategy, and the pulse current (−2 A) injected into the d-axis at 0.1 s lasts 5 ms. Fig. 7 shows the speed tracking curve of the motor when the current pulse is injected at ω = 2000 r/min. The data show that the speed fluctuation of the motor after the instantaneous pulse input is less than 0.5%, so the influence on the system can be neglected.

Figure 7: PMSM motor speed ω under the pulse current id = −2 A

Data0 and Data1 are generated by sampling {ud, uq, id, iq, ω} before and after the pulse current injection, and 500 groups are sampled as the training and testing sets of CAFS-BPNNA.
5.2 Parameter settings of the identification algorithms

The parameters of the CAFS-BPNNA algorithm are as follows: the number of input neurons is n = 9; the number of hidden neurons is m = 15, with Tansig as the transfer function; the number of output neurons is l = 4, with the linear purelin as the transfer function. The training error is calculated by WMSE, and the connection weights are updated by the cumulative error method with adaptive inertial momentum. The fish swarm settings are: swarm size AF_Num = 30, individual fish state dimension D = 214, visual range Visual = 1, maximum step size Step = 0.5, crowding factor Δ = 14, maximum iteration number imax = 400, 400 groups of training samples, and a minimum allowable training error of 0.001. To compare the performance of PMSM parameter identification based on the CAFS-BPNNA algorithm, RLS and PSO are introduced for comparative identification. RLS parameter configuration: initial parameter estimate Θ̂(0), covariance matrix P0 = p0·I with p0 = 10^6, and a maximum of 400 iterations. PSO parameter configuration: learning factors C1 = C2 = 2, inertia weight W = 0.5, and a maximum of 400 iterations.

5.3 Training error analysis based on motor speed and torque

The speed and torque of the PMSM are the two main indexes that affect motor performance. Therefore, in the experiments the motor is set to different speed and torque conditions to compare the identification ability of the intelligent algorithms. Fig. 8 compares the parameter identification tracking waveforms and iteration counts of the three intelligent algorithms at a motor speed of ω = 2000 r/min and a load torque of 2 N·m.

Figure 8: PMSM motor parameter identification results (ω = 2000 r/min, T = 2 N·m)

Tab. 4 gives the identification errors of the three algorithms at the rated speed of 2000 r/min and the maximum speed of 2500 r/min, and at torques of 2 N·m and 3 N·m. Fig. 8 shows that RLS has the worst convergence performance; it usually needs about 150 iterations for all identified parameters to reach their convergence points. PSO converges better, but its convergence accuracy is lower than that of the other two algorithms. Compared with RLS and PSO, the proposed CAFS-BPNNA algorithm achieves both better convergence time and better accuracy. When the torque is fixed at 2 N·m and the speed is increased to the maximum speed, the total training errors of the three algorithms decrease; RLS decreases the most, while CAFS-BPNNA changes the least and keeps the highest identification accuracy. When the speed is fixed at 2000 r/min and the torque is increased by 1 N·m, the individual errors of PSO fluctuate: the errors of Rs and Ld become larger, the errors of Lq and ψf become smaller, and the total error increases. The ψf error of RLS and CAFS-BPNNA is basically constant, and the total training error decreases as the errors of Rs, Ld and Lq decrease; CAFS-BPNNA shows the smallest change and the highest accuracy.

In summary, compared with RLS and PSO, CAFS-BPNNA is superior to the two reference algorithms both in parameter identification under constant working conditions and in error fluctuation under changing working conditions, and it consistently shows good convergence and identification accuracy.
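The error columns of Tab. 4 below can be reproduced from the identified and nominal parameters; the last column appears to be the plain mean of the four relative errors (an assumption, consistent with the table up to rounding of the tabulated absolute errors):

```python
import numpy as np

NOMINAL = {"Rs": 2.875, "Ld": 8.5e-3, "Lq": 8.5e-3, "psi_f": 0.175}

def identification_errors(estimated):
    abs_err = {k: abs(estimated[k] - NOMINAL[k]) for k in NOMINAL}
    rel_err = {k: abs_err[k] / NOMINAL[k] for k in NOMINAL}
    total = np.mean(list(rel_err.values()))              # assumed definition of the total error
    return abs_err, rel_err, total

# Example: PSO at T = 2 N*m, omega = 2000 r/min (first row of Tab. 4)
est = {"Rs": 2.875 - 0.554, "Ld": 8.5e-3 - 1.185e-3,
       "Lq": 8.5e-3 - 1.113e-3, "psi_f": 0.175 - 0.009}
_, rel, total = identification_errors(est)
print({k: round(100 * v, 1) for k, v in rel.items()}, round(100 * total, 1))
# -> {'Rs': 19.3, 'Ld': 13.9, 'Lq': 13.1, 'psi_f': 5.1} 12.9  (small rounding differences from Tab. 4)
```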
Table 4: PMSM motor parameter identification errors (absolute error / relative error)

  Working condition            Algorithm     Rs/Ω          Ld/mH         Lq/mH         ψf/Wb         Total error
  T = 2 N·m, ω = 2000 r/min    PSO           0.554/19.3%   1.185/13.9%   1.113/13.1%   0.009/5.2%    12.8%
                               RLS           0.741/25.8%   1.343/15.8%   1.181/13.9%   0.078/44.8%   25.1%
                               CAFS-BPNNA    0.132/4.6%    0.365/4.3%    0.467/5.5%    0.007/4.5%    4.7%
  T = 2 N·m, ω = 2500 r/min    PSO           0.477/16.6%   0.552/6.5%    1.147/13.5%   0.012/7.1%    10.9%
                               RLS           0.293/10.2%   1.011/11.9%   0.629/7.4%    0.072/41.3%   17.7%
                               CAFS-BPNNA    0.129/4.5%    0.229/2.7%    0.382/4.5%    0.006/3.8%    3.8%
  T = 3 N·m, ω = 2000 r/min    PSO           0.684/23.8%   1.198/14.1%   0.824/9.7%    0.008/4.9%    13.1%
                               RLS           0.514/17.9%   1.037/12.2%   0.663/7.8%    0.081/46.8%   21.1%
                               CAFS-BPNNA    0.117/4.1%    0.365/4.3%    0.314/3.7%    0.007/4.4%    4.1%

6 Conclusion

For the identification of the PMSM motor parameters Rs, Ld, Lq and ψf, a full-rank identification model is derived from the PMSM voltage and flux linkage equations. The feasibility of identifying the motor parameters by d-axis current injection in the steady state and the sampling method of the sample data set are analyzed. A PMSM parameter identification method that optimizes the BPNN with CAFS is proposed. Compared with PSO and RLS, experiments show that the proposed CAFS-BPNNA algorithm identifies the motor parameters with high accuracy and short convergence time, and remains stable when the working conditions change. The specific advantages are as follows:

(1) The CAFS-BPNNA algorithm fully coordinates the local and global search strengths of BPNN and CAFS, which speeds up the convergence and improves the convergence of motor parameter identification.

(2) The application of chaotic search reduces the sensitivity of the parameter identification to initial values and enhances the robustness of the parameter settings in the learning process.

(3) The CAFS-BPNNA algorithm has good stability and maintains a good identification effect under different speed and torque conditions.

The CAFS-BPNNA algorithm is also versatile: it can be used not only for parameter identification of PMSM motors, but also for parameter identification and online tracking of other motors such as stepping motors, induction motors and precision motors. The computational load of the proposed CAFS-BPNNA algorithm exceeds the resources of the on-chip system. The system therefore samples online and periodically uploads the motor operating data; the parameters of the identification network are optimized and updated by off-line optimization and remote download, and are then deployed on the on-chip system. In future work, online optimization and updating of the network parameters will be studied further, on the basis of improving the performance of the on-chip system and optimizing the time and space complexity of the algorithm.

Acknowledgment

This work was financially supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Bibliography

[1] Babak, N.M.; Meibody-Tabar, F.; Sargos, F.M. (2004). Mechanical sensorless control of PMSM with online estimation of stator resistance, IEEE Transactions on Industry Applications, 40(2), 457-471, 2004.

[2] Boileau, T.; Leboeuf, N.; Nahid-Mobarakeh, B. et al. (2011). Online Identification of PMSM Parameters: Parameter Identifiability and Estimator Comparative Study, IEEE Transactions on Industry Applications, 47(4), 1944-1957, 2011.

[3] Bose, B.K. (2009). Power Electronics and Motor Drives Recent Progress and Perspective, IEEE Transactions on Industrial Electronics, 56(2), 581-588, 2009.
[4] Chen, Z.; Zhong, Y.; Li, J. (2011). Parameter identification of induction motors using Ant Colony Optimization, IEEE World Congress on Computational Intelligence, IEEE, 2011.

[5] Da, Y.; Shi, X.; Krishnamurthy, M. (2013). A New Approach to Fault Diagnostics for Permanent Magnet Synchronous Machines Using Electromagnetic Signature Analysis, IEEE Transactions on Power Electronics, 28(8), 4104-4112, 2013.

[6] Donoso, Y.; Montoya, G.A.; Solano, F. (2015). An Energy-Efficient and Routing Approach for Position Estimation using Kalman Filter Techniques in Mobile WSNs, International Journal of Computers Communications & Control, 10(4), 500-507, 2015.

[7] Gao, D.Q. (1998). On Structures of Supervised Linear Basis Function Feedforward Three-Layered Neural Networks, Chinese Journal of Computers, 21(1), 81-86, 1998.

[8] Hajisalem, V.; Babaie, S. (2018). A Hybrid Intrusion Detection System Based on ABC-AFS Algorithm for Misuse and Anomaly Detection, Computer Networks, 136, 37-50, 2018.

[9] He, S.; Wu, Q.H.; Saunders, J.R. (2009). Group Search Optimizer - An Optimization Algorithm Inspired by Animal Searching Behavior, IEEE Transactions on Evolutionary Computation, 13(5), 973-990, 2009.

[10] Kim, J.; Jeong, I.; Lee, K. (2014). Fluctuating Current Control Method for a PMSM Along Constant Torque Contours, IEEE Transactions on Power Electronics, 29(11), 6064-6073, 2014.

[11] Liu, K.; Zhang, Q.; Chen, J.; et al. (2011). Online Multiparameter Estimation of Nonsalient-Pole PM Synchronous Machines with Temperature Variation Tracking, IEEE Transactions on Industrial Electronics, 58(5), 1776-1788, 2011.

[12] Liu, K.; Zhu, Z.Q.; Stone, D.A. (2013). Parameter Estimation for Condition Monitoring of PMSM Stator Winding and Rotor Permanent Magnets, IEEE Transactions on Industrial Electronics, 60(12), 5902-5913, 2013.

[13] Liu, Q.; Hameyer, K. (2015). A fast online full parameter estimation of a PMSM with sinusoidal signal injection, Energy Conversion Congress & Exposition, IEEE, 2015.

[14] Liu, Z.H.; Wei, H.L.; Zhong, Q.C.; et al. (2017). Parameter Estimation for VSI-Fed PMSM based on a Dynamic PSO with Learning Strategies, IEEE Transactions on Power Electronics, 32(4), 3154-3165, 2017.

[15] Lu, K.; Vetuschi, M.; Rasmussen, P.O. et al. (2010). Determination of High-Frequency d- and q-axis Inductances for Surface-Mounted Permanent-Magnet Synchronous Machines, IEEE Transactions on Instrumentation & Measurement, 59(9), 2376-2382, 2010.

[16] Mirchandani, G.; Cao, W. (1989). On hidden nodes for neural nets, IEEE Transactions on Circuits and Systems, 36(5), 661-664, 1989.

[17] Omar, S.H.; Roberto, M.R.; Jose, R.M.; Hayde, P.B. (2015). Parameter Identification of PMSMs Using Experimental Measurements and a PSO Algorithm, IEEE Transactions on Instrumentation & Measurement, 64(8), 2146-2154, 2015.

[18] Pellegrino, G.; Vagati, A.; Guglielmi, P. et al. (2011). Performance Comparison Between Surface-Mounted and Interior PM Motor Drives for Electric Vehicle Application, IEEE Transactions on Industrial Electronics, 59(2), 803-811, 2011.

[19] Sengottuvelan, P.; Rasath, N. (2016). BAFSA: Breeding Artificial Fish Swarm Algorithm for Optimal Cluster Head Selection in Wireless Sensor Networks, Wireless Personal Communications, 94(4), 1-13, 2016.

[20] Shi, Y.; Sun, K.; Huang, L. et al. (2012).
Online Identification of Permanent Magnet Flux Based on Extended Kalman Filter for IPMSM Drive With Position Sensorless Control, IEEE Transactions on Industrial Electronics, 59(11), 4169-4178, 2012.

[21] Wei, C.; Xin, L.; Mei, C. (2009). Suboptimal Nonlinear Model Predictive Control Based on Genetic Algorithm, International Symposium on Intelligent Information Technology Application Workshops, IEEE, 2009.

[22] Xiao, X.; Chen, C.; Zhang, M. (2010). Dynamic Permanent Magnet Flux Estimation of Permanent Magnet Synchronous Machines, IEEE Transactions on Applied Superconductivity, 20(3), 1085-1088, 2010.

[23] Yan, H.; Wang, Y.R.; Shi, H.X.; Li, Q.; Zeng, Y.S.; Jaini, R. (2019). Solid-Liquid Flow of Axial Flow Pump in Loop Reactor and Operating Control with Single Invert, International Journal of Simulation Modelling, 18(3), 464-475, 2019.

[24] Zhang, D. (2017). High-speed Train Control System Big Data Analysis Based on Fuzzy RDF Model and Uncertain Reasoning, International Journal of Computers Communications & Control, 12(4), 577-591, 2017.

[25] Zhang, D.; Sui, J.; Gong, Y. (2017). Large scale software test data generation based on collective constraint and weighted combination method, Tehnicki Vjesnik, 24(4), 1041-1050, 2017.

[26] Zhang, H.P.; Ye, J.H.; Yang, X.P.; Muruve, N.W.; Wang, J.T. (2018). Modified Binary Particle Swarm Optimization Algorithm in Lot-Splitting Scheduling Involving Multiple Techniques, International Journal of Simulation Modelling, 17(3), 534-542, 2018.