Al-Khwarizmi Engineering Journal, Vol. 13, No. 1, P.P. 62-69 (2017)

Proposed Hybrid Sparse Adaptive Algorithms for System Identification

Mahmood A. K. Abdulsattar*        Samer Hussein Ali**
*,**Department of Electrical Engineering / College of Engineering / University of Baghdad
*Email: mahmood.abdulsattar@googlemail.com
**Email: msc_eng_samerhussein@yahoo.com

(Received 29 August 2016; accepted 19 December 2016)
https://doi.org/10.22153/kej.2017.12.003

Abstract

For sparse system identification, recently suggested algorithms such as the ℓ0-norm Least Mean Square (ℓ0-LMS), Zero-Attracting LMS (ZA-LMS), Reweighted Zero-Attracting LMS (RZA-LMS), and p-norm LMS (p-LMS) modify the cost function of the conventional LMS algorithm by adding a sparsity constraint on the coefficients. The algorithms proposed here, named ℓ0-ZA-LMS, ℓ0-RZA-LMS, p-ZA-LMS, and p-RZA-LMS, are designed by merging two constraints from the previous algorithms to improve the convergence rate and the steady-state MSD for sparse systems. In this paper, a complete analysis of the theoretical operation of the proposed algorithms is carried out for a white Gaussian input sequence. The behavior of the mean square deviation (MSD) with respect to the algorithm parameters and the system sparsity is discussed. In addition, the relation between the proposed algorithms and the most recent algorithms is presented, along with the necessary conditions under which the proposed algorithms improve the convergence rate. Finally, simulation results are compared with the theoretical study and match closely over a wide range of parameters.

Keywords: Adaptive filter, ℓ0-LMS, zero-attracting, p-LMS, mean square deviation, sparse system identification.

1. Introduction

Adaptive filtering becomes a difficult problem when the involved impulse response is sparse. The Least Mean Square (LMS) algorithm is used in many applications, for example system identification, echo cancellation, and channel equalization, due to its easy implementation, good performance, and high robustness [1-3]. The impulse response of a sparse system contains many zero or near-zero coefficients and only a few large ones [4]. For such a system, the conventional LMS never takes advantage of the sparsity. In recent years, a number of LMS-based algorithms have been suggested to exploit this feature. The proportionate NLMS (PNLMS) algorithm and its improved variants assign an individual step size to each filter coefficient, proportional to its magnitude, to improve the convergence rate [5, 6].
Several other algorithms originate from the recently developed field of sparse signal processing [7-11]. These algorithms exploit the features of the unknown impulse response by adding sparsity constraints to the cost function of the gradient descent. It is essential to conduct a theoretical examination of the proposed algorithms. Numerical simulations prove that the proposed algorithms perform better than recent algorithms for sparse system identification, owing to both a lower steady-state MSD and an improved convergence rate.

1.1. Relation to Other Works

The mean square behavior has been analyzed for the LMS algorithm and a number of its variants, and the optimal algorithm parameters have been selected to characterize the theoretical performance. The LMS algorithm was proposed by Widrow for the first time in [12], and its performance was studied in [13]. Later, the mathematical framework of mean square analysis was established by Horowitz and Senne by examining the coefficient vector, yielding the closed-form expression of the MSD [14], which was simplified by Weinstein and Feuer [15]. A unified analysis was presented in [16] for a class of adaptive algorithms that behave as linear time-invariant processes driven by the instantaneous gradient vector, of which LMS is the simplest member. Likewise, the analysis of the Normalized LMS (NLMS) has attracted considerable attention [17, 18]. Even so, these methods, effective in their own settings, can no longer be applied directly to ℓ0-LMS because of its nonlinearity. The MSD analyses of ZA-LMS and p-LMS have been carried out in [19, 20].

1.2. Main Contribution

The contribution presented in this paper concerns the steady-state performance analysis. First, the stability condition on the step size is derived. After that, a rule for selecting the parameters that give the best steady-state performance is suggested.
Finally, the steady-state MSD achieved with the optimal parameters of the proposed algorithms is shown to improve on that of the traditional algorithm. Another contribution is the analysis of the instantaneous behavior, which characterizes the convergence rate of the LMS algorithm. Also, the convergence rates of the proposed algorithms are compared with that of standard LMS.

2. Background

2.1. Standard LMS Algorithm

Let $\mathbf{h} = [h_0, h_1, \ldots, h_{L-1}]^T$ denote the coefficient vector of the filter, e.g., the impulse response of an FIR filter; $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^T$ denotes the input signal vector, where $L$ is the filter length; $d(n)$ denotes the desired signal; $v(n)$ the observation noise; $y(n)$ the output signal; $\hat{\mathbf{h}}(n)$ the estimated coefficient vector of the adaptive filter at iteration $n$; and $e(n)$ the estimation error.

Fig. 1. Adaptive system identification (direct system modeling).

$$d(n) = \mathbf{x}^T(n)\,\mathbf{h} + v(n) \qquad (1)$$
$$y(n) = \mathbf{x}^T(n)\,\hat{\mathbf{h}}(n) \qquad (2)$$
$$e(n) = d(n) - y(n) \qquad (3)$$

The cost function of the traditional LMS, $\mathbb{C}(n)$, is expressed as
$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) \qquad (4)$$

The filter coefficient vector for LMS is updated by
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) - \mu \frac{\partial \mathbb{C}(n)}{\partial \hat{\mathbf{h}}(n)} = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) \qquad (5)$$

where $\mu$ is the adaptation step size [1].
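As a minimal sketch of how (1)-(5) fit together, the following NumPy loop identifies an unknown impulse response with standard LMS. The function name, signature, and conventions are ours, for illustration only, not from the paper:

```python
import numpy as np

def lms_identify(x, d, L, mu):
    """Standard LMS system identification, Eqs. (2), (3), (5)."""
    h_hat = np.zeros(L)                        # estimated coefficients, h_hat(0) = 0
    x_buf = np.zeros(L)                        # regressor [x(n), ..., x(n-L+1)]
    for n in range(len(x)):
        x_buf = np.concatenate(([x[n]], x_buf[:-1]))
        e = d[n] - x_buf @ h_hat               # estimation error, Eqs. (2)-(3)
        h_hat += mu * e * x_buf                # gradient-descent update, Eq. (5)
    return h_hat
```

The sparse variants discussed below differ from this loop only in the extra attraction terms subtracted at each update.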
3. ℓ0-LMS Algorithm

This algorithm was derived by inserting an ℓ0-norm constraint as a sparsity penalty into the cost function of the traditional LMS, in order to improve the convergence of the LMS algorithm for sparse system identification [4]. Using the common exponential approximation of the ℓ0-norm, the cost function is

$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{\ell_0} \sum_{i=0}^{L-1} \left(1 - e^{-\beta |\hat{h}_i(n)|}\right) \qquad (6)$$

The update of the adaptive filter coefficients is
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{\ell_0}\, s(\hat{\mathbf{h}}(n)) \qquad (7)$$
$$s(\hat{h}_i(n)) = \beta\, \mathrm{sgn}(\hat{h}_i(n))\, e^{-\beta |\hat{h}_i(n)|} \qquad (8)$$

where $\rho_{\ell_0} = \mu \gamma_{\ell_0}$ is a parameter used to balance the estimation error and the sparsity constraint, $\beta$ is a positive value that defines the area of zero attraction [4], and $\mathrm{sgn}(\cdot)$ is a component-wise sign function defined as
$$\mathrm{sgn}(x) = \begin{cases} x/|x|, & x \neq 0 \\ 0, & x = 0 \end{cases} \qquad (9)$$

By using the first-order Taylor series expansion of the exponential function, the computational complexity of (7) can be reduced as follows [4]:
$$e^{-\beta|x|} \approx \begin{cases} 1 - \beta|x|, & |x| \le 1/\beta \\ 0, & \text{elsewhere} \end{cases} \qquad (10)$$

Substituting (9) and (10) into (8), the function $s(\cdot)$ can be expressed explicitly as
$$s(x) = \begin{cases} -\beta^2 x - \beta, & -1/\beta \le x < 0 \\ -\beta^2 x + \beta, & 0 \le x < 1/\beta \\ 0, & \text{elsewhere} \end{cases} \qquad (11)$$

Fig. 2 describes the characteristic of the function $s(x)$, which produces the zero-attraction effect.

Fig. 2. Function s(x) for the zero-attraction effect.
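A minimal sketch of the ℓ0-LMS update, using the piecewise-linear approximation (11) of the attractor; the helper names and the single-step structure are ours, not the authors':

```python
import numpy as np

def zero_attractor(h_hat, beta):
    """Approximate attractor s(x) of Eq. (11), applied element-wise."""
    s = np.zeros_like(h_hat)
    near = np.abs(h_hat) < 1.0 / beta          # attraction acts only within |x| < 1/beta
    s[near] = -beta**2 * h_hat[near] + beta * np.sign(h_hat[near])
    return s                                   # s(0) = 0, consistent with sgn(0) = 0 in (9)

def l0_lms_step(h_hat, x_buf, d_n, mu, rho_l0, beta):
    """One l0-LMS iteration, Eq. (7)."""
    e = d_n - x_buf @ h_hat                    # estimation error, Eq. (3)
    return h_hat + mu * e * x_buf - rho_l0 * zero_attractor(h_hat, beta)
```

Calling `l0_lms_step` inside the buffer-shifting loop of the LMS sketch above reproduces the full algorithm; note the attraction is strongest for taps just off zero and vanishes for taps larger than 1/β.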
4. ZA-LMS and RZA-LMS Algorithms

In the zero attractor, a cost function $\mathbb{C}_{ZA}(n)$ is defined by combining the squared error with an ℓ1-norm penalty on the estimated coefficient vector as a sparsity constraint:
$$\mathbb{C}_{ZA}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{ZA} \|\hat{\mathbf{h}}(n)\|_1 \qquad (12)$$

The filter update of ZA-LMS is determined accordingly as
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{ZA}\, \mathrm{sgn}(\hat{\mathbf{h}}(n)) \qquad (13)$$

where $\rho_{ZA} = \mu \gamma_{ZA}$ is the factor used to control the strength of the sparsity penalty. In ZA-LMS, all taps are forced toward zero uniformly, so its performance degrades for non-sparse systems. RZA-LMS therefore uses individual zero attractors for the different filter taps, and its cost function is
$$\mathbb{C}_{RZA}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{RZA} \sum_{i=0}^{L-1} \log\!\left(1 + \varepsilon |\hat{h}_i(n)|\right) \qquad (14)$$

The filter update of RZA-LMS is defined as
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{RZA} \frac{\mathrm{sgn}(\hat{\mathbf{h}}(n))}{1 + \varepsilon |\hat{\mathbf{h}}(n)|} \qquad (15)$$

where $\rho_{RZA} = \mu \gamma_{RZA} \varepsilon$, and the parameter $\varepsilon$ controls the similarity between (15) and (7).

5. p-LMS Algorithm

The cost function of p-LMS, $\mathbb{C}_p(n)$, is defined by combining the squared error with an ℓp-norm penalty on the coefficient vector:
$$\mathbb{C}_p(n) = \tfrac{1}{2}\, e^2(n) + \gamma_p \|\hat{\mathbf{h}}(n)\|_p^p \qquad (16)$$

The update equation of p-LMS is
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_p \frac{p\, \mathrm{sgn}(\hat{\mathbf{h}}(n))}{\varepsilon_p + |\hat{\mathbf{h}}(n)|^{1-p}} \qquad (17)$$

The parameter $p$ affects the estimation bias as well as the strength of the sparsity correction. The parameter $\rho_p = \mu \gamma_p$ is used to balance the constraint term against the estimation squared error, and $\varepsilon_p$ is a small positive constant that keeps the denominator finite.
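To make the differences between these penalties concrete, the following element-wise helpers give the attraction terms subtracted in (13), (15), and (17). This is a sketch under our reading of the partly garbled (17); the `eps_p` guard and all names are our assumptions:

```python
import numpy as np

def za_attractor(h_hat):
    """Uniform zero attractor of ZA-LMS, the term scaled by rho_ZA in Eq. (13)."""
    return np.sign(h_hat)

def rza_attractor(h_hat, eps):
    """Reweighted attractor of RZA-LMS, Eq. (15): strong near zero, weak on large taps."""
    return np.sign(h_hat) / (1.0 + eps * np.abs(h_hat))

def p_attractor(h_hat, p, eps_p=1e-5):
    """p-norm attractor of p-LMS, Eq. (17); eps_p avoids division by zero at h = 0."""
    return p * np.sign(h_hat) / (eps_p + np.abs(h_hat) ** (1.0 - p))
```

Plotting these three functions over a range of tap values shows why RZA-LMS and p-LMS bias large taps less than ZA-LMS does.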
6. Proposed Algorithms

A. ℓ0-ZA-LMS and ℓ0-RZA-LMS Algorithms

The proposed cost functions of ℓ0-ZA-LMS and ℓ0-RZA-LMS are designed by merging the ℓ0-norm constraint with an ℓ1-norm constraint and/or a reweighted zero-attractor constraint on the filter coefficients in the LMS cost function, in order to improve its convergence rate. They are presented in (18) and (19):

$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{ZA} \|\hat{\mathbf{h}}(n)\|_1 + \gamma_{\ell_0} \sum_{i=0}^{L-1} \left(1 - e^{-\beta |\hat{h}_i(n)|}\right) \qquad (18)$$

$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{RZA} \sum_{i=0}^{L-1} \log\!\left(1 + \varepsilon |\hat{h}_i(n)|\right) + \gamma_{\ell_0} \sum_{i=0}^{L-1} \left(1 - e^{-\beta |\hat{h}_i(n)|}\right) \qquad (19)$$

The corresponding updates of the adaptive filter coefficients are

$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{ZA}\, \mathrm{sgn}(\hat{\mathbf{h}}(n)) - \rho_{\ell_0}\, \beta\, \mathrm{sgn}(\hat{\mathbf{h}}(n))\, e^{-\beta |\hat{\mathbf{h}}(n)|} \qquad (20)$$

$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{RZA} \frac{\mathrm{sgn}(\hat{\mathbf{h}}(n))}{1 + \varepsilon |\hat{\mathbf{h}}(n)|} - \rho_{\ell_0}\, \beta\, \mathrm{sgn}(\hat{\mathbf{h}}(n))\, e^{-\beta |\hat{\mathbf{h}}(n)|} \qquad (21)$$

B. p-ZA-LMS and p-RZA-LMS Algorithms

These proposed algorithms are designed by merging the p-norm constraint with an ℓ1-norm constraint and/or a reweighted zero-attractor constraint on the filter coefficients in the LMS cost function, to increase the convergence rate of LMS for sparse systems. They are presented in (22) and (23):

$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{ZA} \|\hat{\mathbf{h}}(n)\|_1 + \gamma_p \|\hat{\mathbf{h}}(n)\|_p^p \qquad (22)$$

$$\mathbb{C}(n) = \tfrac{1}{2}\, e^2(n) + \gamma_{RZA} \sum_{i=0}^{L-1} \log\!\left(1 + \varepsilon |\hat{h}_i(n)|\right) + \gamma_p \|\hat{\mathbf{h}}(n)\|_p^p \qquad (23)$$

The corresponding updates of the adaptive filter coefficients are

$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{ZA}\, \mathrm{sgn}(\hat{\mathbf{h}}(n)) - \rho_p \frac{p\, \mathrm{sgn}(\hat{\mathbf{h}}(n))}{\varepsilon_p + |\hat{\mathbf{h}}(n)|^{1-p}} \qquad (24)$$

$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\, \mathbf{x}(n) - \rho_{RZA} \frac{\mathrm{sgn}(\hat{\mathbf{h}}(n))}{1 + \varepsilon |\hat{\mathbf{h}}(n)|} - \rho_p \frac{p\, \mathrm{sgn}(\hat{\mathbf{h}}(n))}{\varepsilon_p + |\hat{\mathbf{h}}(n)|^{1-p}} \qquad (25)$$
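Each proposed update is therefore just the LMS step of (5) minus two attraction terms. The sketch below shows (20) and (25) as representatives, reusing the illustrative helpers from the previous sketches; it is our reading of the updates, not the authors' code:

```python
import numpy as np
# assumes za_attractor, rza_attractor, p_attractor from the sketch in Section 5

def l0_za_lms_step(h_hat, x_buf, d_n, mu, rho_za, rho_l0, beta):
    """One iteration of the proposed l0-ZA-LMS, Eq. (20)."""
    e = d_n - x_buf @ h_hat
    l0_term = beta * np.sign(h_hat) * np.exp(-beta * np.abs(h_hat))  # exact s() of Eq. (8)
    return (h_hat + mu * e * x_buf
            - rho_za * za_attractor(h_hat)         # l1 zero attractor, as in Eq. (13)
            - rho_l0 * l0_term)                    # l0 attractor, as in Eq. (7)

def p_rza_lms_step(h_hat, x_buf, d_n, mu, rho_rza, eps, rho_p, p):
    """One iteration of the proposed p-RZA-LMS, Eq. (25)."""
    e = d_n - x_buf @ h_hat
    return (h_hat + mu * e * x_buf
            - rho_rza * rza_attractor(h_hat, eps)  # reweighted attractor, as in Eq. (15)
            - rho_p * p_attractor(h_hat, p))       # p-norm attractor, as in Eq. (17)
```

The remaining two hybrids, ℓ0-RZA-LMS (21) and p-ZA-LMS (24), follow by swapping the attractor pairs.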
7. Simulation Results

We illustrate the performance of the proposed algorithms for system identification via computer simulation. The impulse response of the unknown system consists of 16 coefficients and is one of three systems: in the first, the 5th tap equals 1 and the others equal zero (a sparse system); in the second, the odd taps equal 1 and the others equal zero (a semi-sparse system); and in the third, all taps equal 1 (a non-sparse system). White Gaussian processes with variances 1 and 0.01 are used as the input signal and the observation noise, respectively.

The first experiment examines the convergence-rate performance on the sparse system when our methods are applied with different values of μ. The parameters of the algorithms are provided in Tables 1, 2, 3, and 4. The results, averaged over independent simulation runs, are shown in Figs. 3, 4, 5, and 6; they make it evident that the proposed algorithms converge more rapidly and produce a lower MSD than LMS when a large value of μ is used.

The second experiment tests the performance of the proposed algorithms under varying sparsity. The unknown system starts as the sparse system, becomes the semi-sparse system after 1500 iterations, and becomes the non-sparse system after 3000 iterations. The parameters are set as in Table 5. Fig. 7 and Fig. 8 show the average estimate of the mean square deviation (MSD). Both ℓ0-RZA-LMS and p-RZA-LMS achieve a better steady-state MSD and faster convergence than the other algorithms when the system is sparse (before the 1500th iteration). When the number of non-zero taps increases to 8 (between the 1500th and 3000th iterations), with the system semi-sparse, the performance of all algorithms deteriorates, while ℓ0-RZA-LMS and p-RZA-LMS maintain the best performance: the MSD of ℓ0-RZA-LMS stays lower than that of ℓ0-ZA-LMS, and the MSD of p-RZA-LMS stays lower than that of p-ZA-LMS. When the system is non-sparse (after 3000 iterations), ℓ0-ZA-LMS and p-RZA-LMS maintain the best performance for this filter, while the others still perform comparably to LMS.

The third experiment considers a system with 128 taps, 8 of them nonzero, as shown in Fig. 9. All filters run for 5000 iterations, and Table 6 presents the algorithm parameters; the average MSD is shown in Fig. 10 and Fig. 11. For this long sparse system, the convergence rate of all filters is better than that of LMS, and the MSDs of ℓ0-RZA-LMS and p-RZA-LMS are the lowest.

Table 1,
Parameters of the ℓ0-ZA-LMS algorithm.

              μ        ρ_ZA        β    ρ_ℓ0
LMS           0.025
ℓ0-ZA-LMS1    0.0156   1.6×10^-?   5    1.6×10^-?
ℓ0-ZA-LMS2    0.025    2.5×10^-?   5    2.5×10^-?
ℓ0-ZA-LMS3    0.0313   3×10^-?     5    3×10^-?

Table 2,
Parameters of the ℓ0-RZA-LMS algorithm.

              μ        ρ_ℓ0        β    ρ_RZA
LMS           0.025
ℓ0-RZA-LMS1   0.0156   8×10^-?     5    1.6×10^-?
ℓ0-RZA-LMS2   0.025    1.3×10^-?   5    2.5×10^-?
ℓ0-RZA-LMS3   0.0313   1.6×10^-?   5    3×10^-?

Table 3,
Parameters of the p-ZA-LMS algorithm.

              μ        ρ_p         p     ρ_ZA
LMS           0.025
p-ZA-LMS1     0.0156   1.6×10^-?   0.6   1.6×10^-?
p-ZA-LMS2     0.025    2.5×10^-?   0.6   2.5×10^-?
p-ZA-LMS3     0.0313   3×10^-?     0.6   3×10^-?

Table 4,
Parameters of the p-RZA-LMS algorithm.

              μ        ρ_p         p     ρ_RZA
LMS           0.025
p-RZA-LMS1    0.0156   2.4×10^-?   0.6   4×10^-?
p-RZA-LMS2    0.025    4×10^-?     0.6   6×10^-?
p-RZA-LMS3    0.0313   5×10^-?     0.6   8×10^-?

Table 5,
Parameters of the algorithms for the second experiment.

              μ       ρ_ℓ0        β    ρ_ZA        ρ_RZA       ε     ρ_p         p
LMS           0.025
ℓ0-LMS        0.025   8×10^-?     5
ZA-LMS        0.025                    1.6×10^-?
RZA-LMS       0.025                                2.5×10^-?   10
p-LMS         0.025                                                  4×10^-?     0.6
ℓ0-ZA-LMS     0.025   2.5×10^-?   5    2.5×10^-?
ℓ0-RZA-LMS    0.025   1.3×10^-?   5                2.5×10^-?   10
p-ZA-LMS      0.025                    2.5×10^-?                     2.5×10^-?   0.6
p-RZA-LMS     0.025                                6.3×10^-?   10    4×10^-?     0.6

Table 6,
Parameters of the algorithms for the third experiment.

              μ        ρ_ℓ0        β    ρ_ZA       ρ_RZA       ε     ρ_p         p
LMS           0.0078
ℓ0-LMS        0.0078   2.5×10^-?   5
ZA-LMS        0.0078                    6×10^-?
RZA-LMS       0.0078                               2.5×10^-?   10
p-LMS         0.0078                                                 8×10^-?     0.6
ℓ0-ZA-LMS     0.0078   8×10^-?     5    8×10^-?
ℓ0-RZA-LMS    0.0078   8×10^-?     5               2×10^-?     10
p-ZA-LMS      0.0078                    8×10^-?                      8×10^-?     0.6
p-RZA-LMS     0.0078                               6×10^-?     10    2.5×10^-?   0.6

Fig. 3. Learning curves of ℓ0-ZA-LMS with different μ, driven by a white signal.
Fig. 4. Learning curves of ℓ0-RZA-LMS with different μ, driven by a white signal.
Fig. 5. Learning curves of p-ZA-LMS with different μ, driven by a white signal.
Fig. 6. Learning curves of p-RZA-LMS with different μ, driven by a white signal.
Fig. 7. The performance of the different algorithms under varying sparsity, driven by a white signal.
Fig. 8. The performance of the different algorithms under varying sparsity, driven by a white signal.
Fig. 9. 128-order adaptive filter.
Fig. 10. The performance of 128-order adaptive filters, driven by a white input signal.
Fig. 11. The performance of 128-order adaptive filters, driven by a white input signal.
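The first experiment can be reproduced in outline as follows. Note that the exponents of the ρ values in Tables 1-6 are unreadable in our copy, so the 10^-4 magnitudes below are an assumption, as are the seed, the averaging over a single run, and all naming conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 16, 1500
h = np.zeros(L); h[4] = 1.0               # first system: 5th tap = 1, others zero
x = rng.normal(0.0, 1.0, N)               # white Gaussian input, variance 1
v = rng.normal(0.0, 0.1, N)               # observation noise, variance 0.01
d = np.convolve(x, h)[:N] + v             # desired signal, Eq. (1)

mu, rho_za, rho_l0, beta = 0.025, 2.5e-4, 2.5e-4, 5.0   # exponents assumed
h_lms = np.zeros(L); h_prop = np.zeros(L)
x_buf = np.zeros(L)
msd_lms = np.empty(N); msd_prop = np.empty(N)
for n in range(N):
    x_buf = np.concatenate(([x[n]], x_buf[:-1]))
    h_lms += mu * (d[n] - x_buf @ h_lms) * x_buf            # standard LMS, Eq. (5)
    e = d[n] - x_buf @ h_prop
    l0_term = beta * np.sign(h_prop) * np.exp(-beta * np.abs(h_prop))
    h_prop += (mu * e * x_buf - rho_za * np.sign(h_prop)    # proposed l0-ZA-LMS, Eq. (20)
               - rho_l0 * l0_term)
    msd_lms[n] = np.sum((h - h_lms) ** 2)                   # mean square deviation
    msd_prop[n] = np.sum((h - h_prop) ** 2)

print(10 * np.log10(msd_lms[-1]), 10 * np.log10(msd_prop[-1]))  # final MSD in dB
```

Averaging the two MSD traces over many independent runs gives learning curves of the kind plotted in Fig. 3.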
8. References

[1] B. Widrow and S. D. Stearns, "Adaptive Signal Processing." Prentice-Hall Signal Processing Series, 1985.
[2] S. Haykin, "Adaptive Filter Theory." Englewood Cliffs, NJ: Prentice-Hall, 1986.
[3] A. H. Sayed, "Adaptive Filters." IEEE, John Wiley & Sons, Inc., New York, 2008.
[4] Y. Gu, J. Jin, and S. Mei, "ℓ0-norm constraint LMS algorithm for sparse system identification," IEEE Signal Process. Lett., vol. 16, no. 9, pp. 774-777, Sep. 2009.
[5] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancellers," IEEE Trans. Speech Audio Process., vol. 8, no. 5, pp. 508-518, Sep. 2000.
[6] H. Deng and R. A. Dyba, "Partial update PNLMS algorithm for network echo cancellation," in Proc. Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Taiwan, pp. 1329-1332, Apr. 2009.
[7] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proc. Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Taiwan, pp. 3125-3128, Apr. 2009.
[8] C. Paleologu, J. Benesty, and S. Ciochina, "An improved proportionate NLMS algorithm based on the ℓ0 norm," in Proc. Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Dallas, TX, pp. 309-312, Mar. 2010.
[9] Y. Kopsinis, K. Slavakis, and S. Theodoridis, "Online sparse system identification and signal reconstruction using projections onto weighted ℓ1 balls," IEEE Trans. Signal Process., vol. 59, no. 3, pp. 905-930, Mar. 2011.
[10] G. Su, J. Jin, Y. Gu, and J. Wang, "Performance analysis of ℓ0-norm constraint least mean square algorithm," IEEE Trans. Signal Process., vol. 60, no. 5, May 2012.
[11] J. W. Yoo, J. W. Shin, and P. G. Park, "An improved NLMS algorithm in sparse systems against noisy input signals," IEEE Trans. Circuits Syst., vol. 62, no. 3, Mar. 2015.
[12] B. Widrow and M. E. Hoff, "Adaptive switching circuits," IRE WESCON Convention Rec., no. 4, pp. 96-104, 1960.
[13] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proc. IEEE, vol. 64, no. 8, pp. 1151-1162, Aug. 1976.
[14] L. Horowitz and K. Senne, "Performance advantage of complex LMS for controlling narrow-band adaptive arrays," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-29, no. 3, pp. 722-736, Jun. 1981.
[15] A. Feuer and E. Weinstein, "Convergence analysis of LMS filters with uncorrelated Gaussian data," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-33, no. 1, pp. 222-230, Feb. 1985.
[16] A. Ahlen, L. Lindbom, and M. Sternad, "Analysis of stability and performance of adaptation algorithms with time-invariant gains," IEEE Trans. Signal Process., vol. 52, no. 1, pp. 103-116, Jan. 2004.
[17] S. C. Chan and Y. Zhou, "Convergence behavior of NLMS algorithm for Gaussian inputs: Solutions using generalized Abelian integral functions and step size selection," J. Signal Process. Syst., vol. 3, pp. 255-265, 2010.
[18] D. T. M. Slock, "On the convergence behavior of the LMS and the normalized LMS algorithms," IEEE Trans. Signal Process., vol. 41, no. 9, pp. 2811-2825, Sep. 1993.
[19] K. Shi and P. Shi, "Convergence analysis of sparse LMS algorithms with ℓ1-norm penalty," Signal Process., vol. 90, no. 12, pp. 3289-3293, Dec. 2010.
[20] F. Y. Wu and F. Tong, "Non-uniform norm constraint LMS algorithm for sparse system identification," IEEE Commun. Lett., vol. 17, no. 2, Feb. 2013.