DESIGN OF MULTI-LAYER NEURAL NETWORKS FOR BUTTERWORTH FILTER OPTIMIZATION

Assist. Prof. Dr. Hanan A. R. Akkar
Department of Electrical & Electronic Engineering, University of Technology
E-mail: hnn_aaa@yahoo.com

Abstract

In this paper a design of five multi-layer feed-forward Artificial Neural Networks (ANNs) is proposed for optimizing Butterworth filters. The first and second networks perform the ideal and the typical Butterworth Low Pass Filter (LPF). The third ANN performs a Band Pass Filter (BPF). The fourth network performs a multi-band pass filter; it consists of two layers, the first of six tansig neurons and the second of one purelin neuron. The fifth feed-forward network is designed to perform a High Pass Filter (HPF); it consists of three layers, the first of three tansig neurons, the second of three tansig neurons, and the third of one purelin neuron. The back-propagation training algorithm is used to train the proposed networks to a Mean Square Error (MSE) of $10^{-10}$. Simulation and test programs are implemented using MATLAB.

Keywords: Artificial Neural Networks, Digital Signal Processing, Filters.

Introduction

ANNs have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition (Stuart, 2007). In an artificial network, the neuron is a node or processing element which processes weighted inputs and produces outputs that may in turn serve as inputs to other nodes (Sivanandam, 2006). Digital filters are widely used in processing the digital signals of many diverse applications, including speech processing, data communications, image and video processing, sonar, radar, seismic and oil exploration, and consumer electronics (Madisettf, 1999). The design and realization of digital filters involve a blend of theory, applications, and technologies. For most applications, it is desirable to design frequency-selective filters which alter, or pass unchanged, different frequency components (Haykin, 1999). In this paper, feed-forward ANNs are designed to optimize the common ideal and typical digital filter types (low pass, band pass, high pass, and multi-band pass), whose sharp cut-off edges cannot be implemented directly. Such responses must be approximated by a realizable system: the sharp cut-off edges are replaced with transition bands in which the designed response changes smoothly from one band to the other.

Butterworth Polynomial Filter Characteristics

The Butterworth filter provides the best Taylor series approximation to the ideal LPF response at the analog frequencies 0 and $\infty$ for any order n. The Butterworth polynomials are polynomials of order n whose magnitude is given by (Stein, 2000):

$B_n(\omega/\omega_a) = \left[1 + (\omega/\omega_a)^{2n}\right]^{1/2}$    (1)

where $\omega = 2\pi f$, $\omega_a = 2\pi f_0$, and $f_0$ is the resonant frequency. The LPF response is formed by taking the reciprocal of these polynomials (Karu, 2001):

$\dfrac{A_V}{A_{VO}} = \dfrac{1}{\left[1 + (\omega/\omega_a)^{2n}\right]^{1/2}}$    (2)

where $A_V$ is the gain of the filter and $A_{VO}$ is the gain at the resonant frequency. As n increases, $A_V/A_{VO}$ becomes closer to one for $\omega < \omega_a$ and falls off more sharply for $\omega > \omega_a$. When $\omega = \omega_a$, $A_V/A_{VO} = 1/\sqrt{2}$ regardless of n.
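To make this behavior concrete, the following MATLAB sketch (not part of the original paper) evaluates equation (2) for a few orders n and plots it against the ideal LPF edge; the frequency grid, the cut-off $\omega_a = 1$ rad/sec, and the orders compared are illustrative assumptions.

```matlab
% Sketch: Butterworth magnitude of equation (2) versus the ideal LPF edge.
% Assumed values: wa = 1 rad/sec and the orders n = 2, 4, 8.
wa = 1;                              % resonant (cut-off) frequency, rad/sec
w  = linspace(0, 3, 301);            % analog frequency axis
figure; hold on;
for n = [2 4 8]
    Av_Avo = 1 ./ sqrt(1 + (w./wa).^(2*n));   % equation (2)
    plot(w, Av_Avo);
end
plot(w, double(w <= wa), 'k--');     % ideal LPF with a sharp cut-off edge
xlabel('\omega (rad/sec)'); ylabel('A_V / A_{VO}');
legend('n = 2', 'n = 4', 'n = 8', 'ideal LPF');
hold off;
```

Raising n steepens the transition band toward the ideal edge, while the value at $\omega = \omega_a$ stays fixed at $1/\sqrt{2} \approx 0.707$.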
Back-Propagation Training Algorithm

In this section, a trainable layered neural network employing the input data is presented. In layered network training, the error can be propagated into the hidden layers so that the output error information passes backward. This mechanism of backward error transmission is used to modify the synaptic weights of the internal and input layers. The back-propagation algorithm is used throughout this paper for the supervised training of multi-layer FFNNs (Kabir, 2005). Back-propagation is designed to minimize the Mean Square Error (MSE) between the actual output of a multi-layer FFNN and the desired output. The following steps summarize the back-propagation algorithm (Hsu, 2005):

Step 1: $\eta > 0$ and $E_{max}$ are chosen, and the weights W and V are initialized at small random values; W is (K x J) and V is (J x I).

Step 2: The training step starts here. An input is presented and the layer outputs are computed:
$y_j \leftarrow f(v_j^t x)$, for j = 1, 2, ..., J, where $v_j$ is the j-th row of V, and
$o_k \leftarrow f(w_k^t y)$, for k = 1, 2, ..., K, where $w_k$ is the k-th row of W.

Step 3: The error value is computed:
$E \leftarrow \frac{1}{2}(d_k - o_k)^2 + E$, for k = 1, 2, ..., K.

Step 4: The error signal vectors $\delta_o$ and $\delta_y$ of both layers are computed; $\delta_o$ is (K x 1) and $\delta_y$ is (J x 1). The error signal terms of the output layer are
$\delta_{ok} = \frac{1}{2}(d_k - o_k)(1 - o_k^2)$, for k = 1, 2, ..., K,
and the error signal terms of the hidden layer are
$\delta_{yj} = \frac{1}{2}(1 - y_j^2)\sum_{k=1}^{K} \delta_{ok}\, w_{kj}$, for j = 1, 2, ..., J.

Step 5: The output layer weights are adjusted:
$w_{kj} \leftarrow w_{kj} + \eta\,\delta_{ok}\, y_j$, for k = 1, 2, ..., K and j = 1, 2, ..., J.

Step 6: The hidden layer weights are adjusted:
$v_{ji} \leftarrow v_{ji} + \eta\,\delta_{yj}\, x_i$, for j = 1, 2, ..., J and i = 1, 2, ..., I.

Step 7: Repeat by going to Step 2 for the remaining patterns of the training set.

Step 8: The training cycle is completed. If $E < E_{max}$, training terminates; if $E > E_{max}$, then $E \leftarrow 0$ and a new training cycle is initiated by going to Step 2.
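As a concrete reading of Steps 1-8, the sketch below (not from the paper) trains a single-hidden-layer network with the bipolar sigmoid activation $f(net)$ used later in the paper. The layer sizes, learning rate, training set, and the omission of bias terms are all simplifying assumptions.

```matlab
% Sketch of back-propagation Steps 1-8 for one hidden layer (no bias terms).
% Assumed: layer sizes, learning rate eta, lambda, and the training set (x, d).
J = 5; K = 1;                          % hidden and output layer sizes (assumed)
eta = 1; Emax = 1e-10; lambda = 1;     % learning rate and error bound (assumed)
f = @(net) 2 ./ (1 + exp(-lambda*net)) - 1;   % bipolar sigmoid activation
x = linspace(0, 1, 20);                % scalar training inputs (assumed)
d = double(x <= 0.5);                  % ideal-LPF-like desired outputs (assumed)
V = 0.1*randn(J, 1); W = 0.1*randn(K, J);     % Step 1: small random weights
for cycle = 1:100000                   % training cycles
    E = 0;                             % Step 8: reset error for the new cycle
    for p = 1:numel(x)                 % Step 7: loop over training patterns
        y = f(V*x(p));                 % Step 2: hidden layer output
        o = f(W*y);                    % Step 2: output layer output
        E = E + 0.5*(d(p) - o)^2;      % Step 3: accumulate the error
        do = 0.5*(d(p) - o)*(1 - o^2);        % Step 4: output error signal
        dy = 0.5*(1 - y.^2).*(W' * do);       % Step 4: hidden error signals
        W = W + eta * do * y';         % Step 5: adjust output layer weights
        V = V + eta * dy * x(p);       % Step 6: adjust hidden layer weights
    end
    if E < Emax, break; end            % Step 8: training cycle completed
end
```

The inner loop presents the patterns one at a time and updates the weights after each presentation, which matches the per-pattern form of Steps 2-6.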
Description and Computer Simulation Results

The feed-forward back-propagation neural network which realizes the ideal LPF and the typical polynomial LPF is shown in Figure (1). The network is simulated using MATLAB and its output is plotted against the target in Figure (2), where the cut-off frequency is 0.1 rad/sec. The transpose of the weight vector between the input and the first hidden layer is

V = [-1.6970  1.6970  -1.6970  -1.7017  -0.5252  30.9583]^t,

and the bias weights b take the same values, [-1.6970 1.6970 -1.6970 -1.7017 -0.5252 30.9583]. The activation function of the tansig neurons is

$f(net) = \dfrac{2}{1 + \exp(-\lambda\, net)} - 1$.

W1 is the (5 x 6) trained weight matrix between the two hidden layers and W2 is the (1 x 5) trained output-layer weight vector [numeric values illegible in this copy].

The training performance is measured by the Mean Square Error (MSE), which equals $10^{-10}$, and the learning parameter used is 1. The network is trained for 165 epochs in 10.094 sec, as shown in Figure (3). For the typical LPF, the simulated FFNN is shown in Figure (4) at a cut-off frequency of 1.5 rad/sec; the network is trained for 1109 epochs to reach the performance goal of $10^{-10}$, with an elapsed time of 21.844 sec, as shown in Figure (5).

Another FFNN is designed to perform the ideal BPF. It consists of three layers: the first layer consists of six tansig neurons, the second of four tansig neurons, and the third of one purelin neuron, as shown in Figure (6). The simulated output of the BPF network is shown in Figure (7). The adaptation is done with trains, which updates the weights with the specified learning function. The network is trained for 285 epochs with an elapsed time of 30.39 sec, as shown in Figure (8).

The proposed FFNN design of the multi-band pass filter is shown in Figure (9). The network consists of two layers: the first layer consists of six tansig neurons and the second layer consists of one purelin neuron. The output of the simulated network is shown in Figure (10). The MSE of $10^{-10}$ is reached after 303 epochs with an elapsed time of 14.56 sec, as shown in Figure (11).

The last proposed FFNN design, for the HPF, is shown in Figure (12). The network consists of three layers: the first layer consists of three tansig neurons, the second layer of three tansig neurons, and the third layer of one purelin neuron. The output of the simulated network is shown in Figure (13). The MSE of $10^{-10}$ is reached after 3045 epochs with an elapsed time of 47.2 sec, as shown in Figure (14).

Table (1) summarizes the structure of the FFNN for the ideal LPF, typical LPF, ideal BPF, multi-band PF, and ideal HPF, together with the MSE, number of epochs, elapsed time, and cut-off frequency of each network. The cut-off frequency in Table (1) corresponds to the location of the sharp edge in the output response of each filter, as shown in Figures (2, 4, 7, 10, and 13).
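The paper reports that the simulations were run in MATLAB but does not list its code. A minimal sketch of how the 6-5-1 ideal-LPF network of Table (1) could be built and trained with the 2009-era Neural Network Toolbox is shown below; the frequency grid, the target definition, and the choice of trainlm as the training function are assumptions (the paper states only that back-propagation with learning parameter 1 was used, for which 'traingd' with net.trainParam.lr = 1 would be the literal match).

```matlab
% Sketch: creating and training the 6-5-1 ideal-LPF network of Table (1).
% The frequency grid, cut-off, and use of trainlm are assumptions.
p = linspace(0, 0.4, 100);            % frequency samples, rad/sec (assumed)
t = double(p <= 0.1);                 % ideal LPF target, cut-off 0.1 rad/sec
net = newff(minmax(p), [6 5 1], ...
            {'tansig', 'tansig', 'purelin'}, 'trainlm');
net.trainParam.goal   = 1e-10;        % MSE performance goal from the paper
net.trainParam.epochs = 5000;         % upper bound on training epochs
net = train(net, p, t);               % back-propagation training
a = sim(net, p);                      % simulated network output
plot(p, t, p, a, '--');               % output versus target, as in Figure (2)
xlabel('Frequency (rad/sec)'); ylabel('Magnitude');
```

The other networks of Table (1) follow the same pattern with the layer sizes, transfer functions, and target bands changed accordingly.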
Conclusions

In this paper, five feed-forward neural networks are proposed for Butterworth filter optimization. These networks learn the basic analog prototypes of different types of classical Butterworth filter and summarize their major characteristics. The networks are trained using the back-propagation algorithm, which minimizes the MSE between the actual output and the desired output. The use of Multi-layer Feed-Forward Neural Networks (MFFNNs) for the Butterworth filter shows a fast response in producing the estimated output signals. The estimated results are close to the actual values, as measured by the MSE obtained with the back-propagation training algorithm. This simplifies development and yields better performance and faster computation for this type of application. The proposed FFNNs are efficient and fast because a very low MSE ($10^{-10}$) is achieved, which enables the networks to respond accurately at any selected frequency.

References

Haykin, S., (1999), "Signals and Systems", John Wiley & Sons, Inc.
Hsu, D., (2005), "Competitive Learning with Floating Gate Circuits", IEEE Transactions on Circuits and Systems.
Kabir, A., (2005), "Implementation of Multi-layer Neural Network on FPGA", College of Technology, Indiana State University, March.
Karu, Z. Z., (2001), "Signals and Systems Made Ridiculously Simple", ZiZi Press, Cambridge, MA.
Madisettf, V. K., (1999), "Digital Signal Processing Handbook", Chapman & Hall / CRC Press LLC.
Sivanandam, S. N., (2006), "Introduction to ANNs", Vikas Publishing House PVT LTD.
Stein, J. Y., (2000), "Digital Signal Processing: A Computer Science Perspective", Wiley-Interscience, John Wiley & Sons, Inc.
Stuart, R., (2007), "Artificial Intelligence: A Modern Approach", 3rd Edition, Prentice Hall.

Table (1) Simulation results of the proposed design.

Type of filter | FFNN layers | MSE       | No. of epochs | Elapsed time (sec) | Cut-off frequency (rad/sec)
Ideal LPF      | 6-5-1       | $10^{-10}$ | 165          | 10.094             | 0.1
Typical LPF    | 6-5-1       | $10^{-10}$ | 1109         | 21.84              | 1.5
Ideal BPF      | 6-4-1       | $10^{-10}$ | 285          | 30.39              | 0.3-0.8
Multi-band PF  | 6-1         | $10^{-10}$ | 303          | 14.56              | 0.1-0.4, 0.5-0.8
Ideal HPF      | 3-3-1       | $10^{-10}$ | 3045         | 47.2               | 0.1

Figure (1) The first and second FFNN for the ideal and typical LPF.
Figure (2) The output of the ideal LPF FFNN (magnitude vs. frequency, rad/sec).
Figure (3) The MSE of the ideal LPF FFNN (performance 9.78591e-11, goal 1e-10, 165 epochs).
Figure (4) The output of the typical LPF FFNN (shape-preserving interpolant of data vs. target).
Figure (5) The MSE of the typical LPF FFNN (performance 9.90498e-11, goal 1e-10, 1109 epochs).
Figure (6) The FFNN for the BPF.
Figure (7) The output of the BPF FFNN (magnitude vs. frequency, rad/sec).
Figure (8) The MSE of the BPF FFNN (performance 3.29975e-11, goal 1e-10, 285 epochs).
Figure (9) The FFNN of the multi-band PF.
Figure (10) The output of the multi-band PF FFNN (magnitude vs. frequency, rad/sec).
Figure (11) The MSE of the multi-band PF FFNN (performance 3.987e-11, goal 1e-10, 303 epochs).
Figure (12) The FFNN for the HPF.
Figure (13) The output of the HPF FFNN (magnitude vs. frequency, rad/sec).
Figure (14) The MSE of the HPF FFNN (performance 9.99921e-11, goal 1e-10, 3045 epochs).