International Journal of Analysis and Applications
Volume 18, Number 6 (2020), 920-938
DOI: 10.28924/2291-8639-18-2020-920

SOME MULTISTEP ITERATIVE METHODS FOR NONLINEAR EQUATION USING QUADRATURE RULE

GUL SANA∗, MUHAMMAD ASLAM NOOR, KHALIDA INAYAT NOOR
Department of Mathematics, COMSATS University Islamabad, Park Road, Islamabad, Pakistan
∗Corresponding author: gulsana123@yahoo.com

Abstract. We introduce a sequence of third- and fourth-order iterative schemes for determining the roots of nonlinear equations by applying a quadrature formula together with a decomposition approach. We also examine the convergence of the suggested iterative methods under various constraints. Several numerical test examples are presented to exhibit the validity, efficiency and implementation of our algorithms.

1. Introduction

An extensive range of problems arising in diverse branches of the pure and applied sciences can be treated by means of nonlinear equations within the framework of novel and inventive techniques. Many authors have proposed, analyzed and modified a variety of numerical methods based on approaches such as Taylor series, decomposition methods, quadrature formulas, the modified homotopy perturbation method and the variational iteration method. A vast literature is available on the different methods used to solve nonlinear equations; see, for example, [1, 3-33] and the references therein. In the Adomian decomposition method, the solution is expressed as an infinite series that converges towards the exact solution. Chun [6] and Abbasbandy [1] constructed and investigated different higher-order iterative methods by applying the decomposition technique of Adomian [2]. Darvishi and Barati [9] also applied the Adomian decomposition

Received June 22nd, 2020; accepted July 21st, 2020; published September 3rd, 2020.
2010 Mathematics Subject Classification. 41A55.
Key words and phrases.
convergence analysis; quadrature formula; iterative method; decomposition technique.
©2020 Authors retain the copyrights of their papers, and all open access articles are distributed under the terms of the Creative Commons Attribution License.

technique to develop Newton-type methods that are cubically convergent for the solution of systems of nonlinear equations. Implementation of the Adomian decomposition technique requires the evaluation of higher-order derivatives, which is a major pitfall of this method. To overcome this drawback, several new techniques have been suggested and analyzed by many researchers. Daftardar-Gejji and Jafari [10] introduced a simple modification of the Adomian decomposition method that does not require evaluating the derivatives of the Adomian polynomials. Numerous authors have used the decomposition technique of [10] to derive higher-order iterative methods. Numerical computation of univariate integrals is one of the most notable problems in numerical analysis. Weerakoon and Fernando [32] improved the convergence of Newton's method using a quadrature rule. Later, Ozban [27] investigated new variants of Newton's method based on the harmonic mean and the mid-point rule. Frontini and Sormani [11] treated the p-dimensional case using a quadrature formula and developed new iterative methods for systems of nonlinear equations that outperform the classical methods. Noor [25] developed a fifth-order convergent iterative method using a Gaussian quadrature formula and investigated its efficacy compared with methods already existing in the literature. Ali et al. [3] introduced a family of iterative methods using a quadrature formula together with the fundamental theorem of calculus and the decomposition technique, and checked the validity and performance of these methods on two mathematical models.
Also, in [21, 29, 30], the decomposition technique of [10] is skillfully combined with a coupled system of equations to derive iterative methods of various orders. Motivated and inspired by the ongoing research activities in this direction, in this work we consider the well-known fixed point formulation in which the nonlinear equation f(u) = 0 is rewritten as u = g(u). In Section 2, we use the fundamental theorem of calculus to recast the functional equation u = g(u) as a coupled system of equations and apply the decomposition technique of [10] to introduce some new iterative methods. Section 3 contains the convergence analysis of the proposed methods. Some numerical examples are presented for a comparative study of the newly constructed methods with known third- and fourth-order convergent iterative algorithms.

2. Creation of iterative methods

This section develops some new multistep third- and fourth-order convergent iterative methods based on the mid-point rule, the Trapezoidal rule and the decomposition technique of [10].

2.1. Mid-Point Rule. Consider the nonlinear equation

f(u) = 0,   (2.1)

which is equivalent to

u = g(u).   (2.2)

Assume that α is a simple root of the nonlinear equation (2.1) and that γ is an initial guess sufficiently close to the root. Using the fundamental theorem of calculus and the mid-point quadrature formula, we have

u = g(γ) + (u − γ) g′((u + γ)/2).   (2.3)

Now, using the technique of He [16], the nonlinear equation (2.1) can be written as the equivalent coupled system of equations

u = g(γ) + (u − γ) g′((u + γ)/2) + H(u),
H(u) = g(u) − g(γ) − (u − γ) g′((u + γ)/2)
     = u (1 − g′((u + γ)/2)) + γ g′((u + γ)/2) − g(γ),   (2.4)

from which it follows that

u = H(u)/(1 − g′((u + γ)/2)) + (g(γ) − γ g′((u + γ)/2))/(1 − g′((u + γ)/2)) = c + M(u),   (2.5)

where

c = γ,   (2.6)

M(u) = H(u)/(1 − g′((u + γ)/2)) + (g(γ) − γ)/(1 − g′((u + γ)/2)).
(2.7)

It is clear that M(u) is a nonlinear operator. We now establish a sequence of higher-order iterative methods by implementing the decomposition technique presented by Daftardar-Gejji and Jafari [10]. In this technique, the solution of (2.1) is represented as an infinite series

u = Σ_{i=0}^∞ u_i.   (2.8)

The operator M can be decomposed as

M(u) = M(u_0) + Σ_{i=1}^∞ { M(Σ_{j=0}^i u_j) − M(Σ_{j=0}^{i−1} u_j) }.   (2.9)

Thus, from equations (2.5), (2.8) and (2.9), we have

Σ_{i=0}^∞ u_i = c + M(u_0) + Σ_{i=1}^∞ { M(Σ_{j=0}^i u_j) − M(Σ_{j=0}^{i−1} u_j) },   (2.10)

which generates the following iterative scheme:

u_0 = c,
u_1 = M(u_0),
u_2 = M(u_0 + u_1) − M(u_0),
...
u_{n+1} = M(Σ_{j=0}^n u_j) − M(Σ_{j=0}^{n−1} u_j),  n = 1, 2, ....   (2.11)

Consequently, it follows that

u_1 + u_2 + ... + u_{n+1} = M(u_0 + u_1 + ... + u_n),

and

u = c + Σ_{i=1}^∞ u_i.   (2.12)

It is noted that u is approximated by U_n = u_0 + u_1 + ... + u_n, with lim_{n→∞} U_n = u.

For n = 0,

u ≈ U_0 = u_0 = c = γ.   (2.13)

From (2.4), it is easily computed that H(u_0) = 0. Using (2.7), we get

u_1 = M(u_0) = H(u_0)/(1 − g′((u_0 + γ)/2)) + (g(γ) − γ)/(1 − g′((u_0 + γ)/2)) = (g(γ) − γ)/(1 − g′((u_0 + γ)/2)).

For n = 1,

u ≈ U_1 = u_0 + u_1 = γ + (g(γ) − γ)/(1 − g′((u_0 + γ)/2)).

Using (2.13), so that g′((u_0 + γ)/2) = g′(γ), we have

u = (g(γ) − γ g′(γ))/(1 − g′(γ)).   (2.14)

This fixed point formulation is used to suggest the following algorithm.

Algorithm 2A. For a given initial guess u_0, compute the approximate solution u_{n+1} by the iterative scheme

u_{n+1} = (g(u_n) − u_n g′(u_n))/(1 − g′(u_n)).   (2.15)

Kang et al. [28] developed this algorithm and proved that Algorithm 2A has quadratic convergence.

From (2.4) and (2.7), we have

H(u_0 + u_1) = g(u_0 + u_1) − g(γ) − (u_0 + u_1 − γ) g′((u_0 + u_1 + γ)/2).
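Before continuing the derivation, the one-step scheme (2.15) can be sketched in a few lines of code. This is a minimal illustration, not part of the paper: the function names and the tolerance are our own choices, and g and g′ are supplied as callables for the first test problem of Section 4, f(u) = u³ − 10 with g(u) = √(10/u).

```python
from math import sqrt

def algorithm_2a(g, dg, u, tol=1e-15, max_iter=100):
    """Fixed-point scheme (2.15): u_{n+1} = (g(u) - u*g'(u)) / (1 - g'(u))."""
    for _ in range(max_iter):
        u_next = (g(u) - u * dg(u)) / (1.0 - dg(u))
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# f(u) = u^3 - 10 rewritten as u = g(u) = sqrt(10/u)
g = lambda u: sqrt(10.0 / u)
dg = lambda u: -0.5 * sqrt(10.0) * u ** -1.5   # g'(u)
root = algorithm_2a(g, dg, 1.5)                # tends to 10^(1/3) = 2.15443...
```

Note that the scheme coincides with Newton's method applied to u − g(u) = 0, which is one way to see its quadratic convergence.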
Thus

u_1 + u_2 = M(u_0 + u_1)
= H(u_0 + u_1)/(1 − g′((u_0 + u_1 + γ)/2)) + (g(γ) − γ)/(1 − g′((u_0 + u_1 + γ)/2))
= [g(u_0 + u_1) − g(γ) − (u_0 + u_1 − γ) g′((u_0 + u_1 + γ)/2)]/(1 − g′((u_0 + u_1 + γ)/2)) + (g(γ) − γ)/(1 − g′((u_0 + u_1 + γ)/2))
= g(u_0 + u_1)/(1 − g′((u_0 + u_1 + γ)/2)) − (u_0 + u_1 − γ) g′((u_0 + u_1 + γ)/2)/(1 − g′((u_0 + u_1 + γ)/2)) − γ/(1 − g′((u_0 + u_1 + γ)/2)).

For n = 2,

u ≈ U_2 = u_0 + u_1 + u_2 = c + M(u_0 + u_1)
= γ − γ/(1 − g′((u_0 + u_1 + γ)/2)) + g(u_0 + u_1)/(1 − g′((u_0 + u_1 + γ)/2)) − (u_0 + u_1 − γ) g′((u_0 + u_1 + γ)/2)/(1 − g′((u_0 + u_1 + γ)/2))
= g(u_0 + u_1)/(1 − g′((u_0 + u_1 + γ)/2)) − (u_0 + u_1) g′((u_0 + u_1 + γ)/2)/(1 − g′((u_0 + u_1 + γ)/2)).

Taking u_0 + u_1 = v = (g(γ) − γ g′(γ))/(1 − g′(γ)), this becomes

u ≈ (g(v) − v g′((v + γ)/2))/(1 − g′((v + γ)/2)).

This relation yields the following two-step method for solving the nonlinear equation (2.1).

Algorithm 2B. For a given initial guess u_0, compute the approximate solution u_{n+1} by the iterative scheme

v_n = (g(u_n) − u_n g′(u_n))/(1 − g′(u_n)),   (2.16)
u_{n+1} = (g(v_n) − v_n g′((u_n + v_n)/2))/(1 − g′((u_n + v_n)/2)),  n = 0, 1, 2, ....   (2.17)

It is noted that

u_0 + u_1 + u_2 = w = g(u_0 + u_1)/(1 − g′((u_0 + u_1 + γ)/2)) − (u_0 + u_1) g′((u_0 + u_1 + γ)/2)/(1 − g′((u_0 + u_1 + γ)/2)).   (2.18)

From (2.4) and (2.7), we can write

H(u_0 + u_1 + u_2) = g(u_0 + u_1 + u_2) − g(γ) − (u_0 + u_1 + u_2 − γ) g′((u_0 + u_1 + u_2 + γ)/2),

and

u_1 + u_2 + u_3 = M(u_0 + u_1 + u_2)
= H(u_0 + u_1 + u_2)/(1 − g′((u_0 + u_1 + u_2 + γ)/2)) + (g(γ) − γ)/(1 − g′((u_0 + u_1 + u_2 + γ)/2))
= g(u_0 + u_1 + u_2)/(1 − g′((u_0 + u_1 + u_2 + γ)/2)) − (u_0 + u_1 + u_2 − γ) g′((u_0 + u_1 + u_2 + γ)/2)/(1 − g′((u_0 + u_1 + u_2 + γ)/2)) − γ/(1 − g′((u_0 + u_1 + u_2 + γ)/2)).

For n = 3,

u ≈ U_3 = u_0 + u_1 + u_2 + u_3 = c + M(u_0 + u_1 + u_2)
= γ − γ/(1 − g′((u_0 + u_1 + u_2 + γ)/2)) + g(u_0 + u_1 + u_2)/(1 − g′((u_0 + u_1 + u_2 + γ)/2)) − (u_0 + u_1 + u_2 − γ) g′((u_0 + u_1 + u_2 + γ)/2)/(1 − g′((u_0 + u_1 + u_2 + γ)/2)).

Using (2.18), we have

u ≈ γ + g(w)/(1 − g′((w + γ)/2)) − (w − γ) g′((w + γ)/2)/(1 − g′((w + γ)/2)) − γ/(1 − g′((w + γ)/2))
= (g(w) − w g′((w + γ)/2))/(1 − g′((w + γ)/2)).

Using this relation, we suggest the following three-step method for solving the nonlinear equation (2.1).
Algorithm 2C. For a given initial guess u_0, compute the approximate solution u_{n+1} by the iterative scheme

v_n = (g(u_n) − u_n g′(u_n))/(1 − g′(u_n)),
w_n = (g(v_n) − v_n g′((u_n + v_n)/2))/(1 − g′((u_n + v_n)/2)),   (2.19)
u_{n+1} = (g(w_n) − w_n g′((u_n + w_n)/2))/(1 − g′((u_n + w_n)/2)),  n = 0, 1, 2, ....   (2.20)

2.2. Trapezoidal rule. Again using the technique of He [16] and the fundamental theorem of calculus, now together with the Trapezoidal rule, we obtain

u = g(γ) + (u − γ)(g′(u) + g′(γ))/2 + H(u),
H(u) = g(u) − g(γ) − (u − γ)(g′(u) + g′(γ))/2
     = u [1 − (g′(u) + g′(γ))/2] + γ (g′(u) + g′(γ))/2 − g(γ),   (2.21)

from which it follows that

u = H(u)/(1 − (g′(u) + g′(γ))/2) + (g(γ) − γ (g′(u) + g′(γ))/2)/(1 − (g′(u) + g′(γ))/2) = c + M(u),   (2.22)

where

c = γ,   (2.23)
M(u) = H(u)/(1 − (g′(u) + g′(γ))/2) + (g(γ) − γ)/(1 − (g′(u) + g′(γ))/2).   (2.24)

Now applying the decomposition technique of Daftardar-Gejji and Jafari [10]: for n = 0,

u ≈ U_0 = u_0 = c = γ.   (2.25)

From (2.21), it is easily computed that H(u_0) = 0. Using (2.24), we get

u_1 = M(u_0) = H(u_0)/(1 − (g′(u_0) + g′(γ))/2) + (g(γ) − γ)/(1 − (g′(u_0) + g′(γ))/2) = (g(γ) − γ)/(1 − (g′(u_0) + g′(γ))/2).

For n = 1,

u ≈ U_1 = u_0 + u_1 = u_0 + M(u_0)   (2.26)
= γ + (g(γ) − γ)/(1 − (g′(u_0) + g′(γ))/2).

Using (2.26), with u_0 = γ so that (g′(u_0) + g′(γ))/2 = g′(γ), we have

u = (g(γ) − γ g′(γ))/(1 − g′(γ)).   (2.27)

This formulation again determines Algorithm 2A.

From (2.21) and (2.24), we have

H(u_0 + u_1) = g(u_0 + u_1) − g(γ) − (u_0 + u_1 − γ)(g′(u_0 + u_1) + g′(γ))/2,

and

u_1 + u_2 = M(u_0 + u_1)
= H(u_0 + u_1)/(1 − (g′(u_0 + u_1) + g′(γ))/2) + (g(γ) − γ)/(1 − (g′(u_0 + u_1) + g′(γ))/2)
= g(u_0 + u_1)/(1 − (g′(u_0 + u_1) + g′(γ))/2) − (u_0 + u_1 − γ)((g′(u_0 + u_1) + g′(γ))/2)/(1 − (g′(u_0 + u_1) + g′(γ))/2) − γ/(1 − (g′(u_0 + u_1) + g′(γ))/2).

For n = 2,

u ≈ U_2 = u_0 + u_1 + u_2 = c + M(u_0 + u_1)
= γ − γ/(1 − (g′(u_0 + u_1) + g′(γ))/2) + g(u_0 + u_1)/(1 − (g′(u_0 + u_1) + g′(γ))/2) − (u_0 + u_1 − γ)((g′(u_0 + u_1) + g′(γ))/2)/(1 − (g′(u_0 + u_1) + g′(γ))/2).
Taking u_0 + u_1 = v = (g(γ) − γ g′(γ))/(1 − g′(γ)), this becomes

u ≈ γ − γ/(1 − (g′(v) + g′(γ))/2) + g(v)/(1 − (g′(v) + g′(γ))/2) − (v − γ)((g′(v) + g′(γ))/2)/(1 − (g′(v) + g′(γ))/2)
= (g(v) − v (g′(v) + g′(γ))/2)/(1 − (g′(v) + g′(γ))/2).

This relation yields the following two-step method for solving the nonlinear equation (2.1).

Algorithm 2D. For a given initial guess u_0, compute the approximate solution u_{n+1} by the iterative scheme

v_n = (g(u_n) − u_n g′(u_n))/(1 − g′(u_n)),   (2.28)
u_{n+1} = (g(v_n) − v_n (g′(v_n) + g′(u_n))/2)/(1 − (g′(v_n) + g′(u_n))/2),  n = 0, 1, 2, ....   (2.29)

It is noted that

u_0 + u_1 + u_2 = w = g(u_0 + u_1)/(1 − (g′(u_0 + u_1) + g′(γ))/2) − (u_0 + u_1)((g′(u_0 + u_1) + g′(γ))/2)/(1 − (g′(u_0 + u_1) + g′(γ))/2).   (2.30)

From (2.21) and (2.24), we can write

H(u_0 + u_1 + u_2) = g(u_0 + u_1 + u_2) − g(γ) − (u_0 + u_1 + u_2 − γ)(g′(u_0 + u_1 + u_2) + g′(γ))/2,

and

u_1 + u_2 + u_3 = M(u_0 + u_1 + u_2)
= H(u_0 + u_1 + u_2)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2) + (g(γ) − γ)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2)
= g(u_0 + u_1 + u_2)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2) − (u_0 + u_1 + u_2 − γ)((g′(u_0 + u_1 + u_2) + g′(γ))/2)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2) − γ/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2).

For n = 3,

u ≈ U_3 = u_0 + u_1 + u_2 + u_3 = c + M(u_0 + u_1 + u_2)
= γ − γ/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2) + g(u_0 + u_1 + u_2)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2) − (u_0 + u_1 + u_2 − γ)((g′(u_0 + u_1 + u_2) + g′(γ))/2)/(1 − (g′(u_0 + u_1 + u_2) + g′(γ))/2).

Using (2.30), we obtain

u ≈ γ − γ/(1 − (g′(w) + g′(γ))/2) + g(w)/(1 − (g′(w) + g′(γ))/2) − (w − γ)((g′(w) + g′(γ))/2)/(1 − (g′(w) + g′(γ))/2)
= (g(w) − w (g′(w) + g′(γ))/2)/(1 − (g′(w) + g′(γ))/2).

This formulation yields the following three-step method for solving the nonlinear equation (2.1).

Algorithm 2E. For a given initial guess u_0, compute the approximate solution u_{n+1} by the iterative scheme
v_n = (g(u_n) − u_n g′(u_n))/(1 − g′(u_n)),
w_n = (g(v_n) − v_n (g′(v_n) + g′(u_n))/2)/(1 − (g′(v_n) + g′(u_n))/2),   (2.31)
u_{n+1} = (g(w_n) − w_n (g′(w_n) + g′(u_n))/2)/(1 − (g′(w_n) + g′(u_n))/2),  n = 0, 1, 2, ....   (2.32)

3. Convergence Analysis of proposed Iterative methods

This section presents the convergence analysis of the proposed Algorithms 2B, 2C, 2D and 2E; it is shown that these methods are third- and fourth-order convergent, respectively.

Theorem 3.1. Let I ⊂ R be an open interval and let f : I → R be a differentiable function. If β ∈ I is a root of f(u) = 0 and u_0 is sufficiently close to β, then the multistep methods defined by Algorithm 2B and Algorithm 2C have at least third and fourth order of convergence, respectively, and satisfy the error equations obtained below.

Proof. Let β be the root of the nonlinear equation f(u) = 0, or equivalently of u = g(u), and let e_n and e_{n+1} be the errors at the nth and (n+1)th iterations, respectively. Expanding g(u_n) and g′(u_n) in Taylor series about β (noting that g(β) = β), we have

g(u_n) = β + e_n g′(β) + (e_n²/2) g″(β) + (e_n³/6) g‴(β) + O(e_n⁴),
g′(u_n) = g′(β) + e_n g″(β) + (e_n²/2) g‴(β) + (e_n³/6) g⁽⁴⁾(β) + O(e_n⁴),

so that

g(u_n) − u_n g′(u_n) = β − β g′(β) − β g″(β) e_n − (1/2)(g″(β) + β g‴(β)) e_n² − (1/6)(2 g‴(β) + β g⁽⁴⁾(β)) e_n³ + O(e_n⁴),
1 − g′(u_n) = 1 − g′(β) − g″(β) e_n − (e_n²/2) g‴(β) − (e_n³/6) g⁽⁴⁾(β) + O(e_n⁴),

and hence

(g(u_n) − u_n g′(u_n))/(1 − g′(u_n)) = β + [g″(β)/(2(−1 + g′(β)))] e_n² − [(2g‴(β) − 2g‴(β)g′(β) + 3g″²(β))/(6(−1 + g′(β))²)] e_n³ + O(e_n⁴).

From (2.16), we obtain

v_n = β + [g″(β)/(2(−1 + g′(β)))] e_n² + [(−2g‴(β) + 2g‴(β)g′(β) − 3g″²(β))/(6(−1 + g′(β))²)] e_n³
+ [1/(12(−1 + g′(β))³)] (2g⁽⁴⁾(β) − 4g⁽⁴⁾(β)g′(β) + 2g⁽⁴⁾(β)g′²(β) + 7g″(β)g‴(β) − 7g″(β)g‴(β)g′(β) + 6g″³(β)) e_n⁴ + O(e_n⁵),

and

g(v_n) = β + [g′(β)g″(β)/(2(−1 + g′(β)))] e_n² + [g′(β)(−2g‴(β) + 2g‴(β)g′(β) − 3g″²(β))/(6(−1 + g′(β))²)] e_n³
+ [1/(24(−1 + g′(β))³)] (4g⁽⁴⁾(β)g′(β) − 8g⁽⁴⁾(β)g′²(β) + 4g⁽⁴⁾(β)g′³(β) − 14g″(β)g‴(β)g′²(β) + 14g″(β)g‴(β)g′(β) + 15g″³(β)g′(β) − 3g″³(β)) e_n⁴ + O(e_n⁵).
(3.1)

Expanding g′((v_n + u_n)/2) in a Taylor series about β, we have

g′((v_n + u_n)/2) = g′(β) + (g″(β)/2) e_n + [(2g″²(β) + g‴(β)g′(β) − g‴(β))/(8(−1 + g′(β)))] e_n²
+ [(−14g″(β)g‴(β) + 14g″(β)g‴(β)g′(β) + g⁽⁴⁾(β) − 12g″³(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β))/(48(−1 + g′(β))²)] e_n³
+ [1/(96(−1 + g′(β))³)] (11g⁽⁴⁾(β)g″(β) − 22g⁽⁴⁾(β)g′(β)g″(β) + 11g⁽⁴⁾(β)g′²(β)g″(β) + 37g″²(β)g‴(β) − 37g″²(β)g‴(β)g′(β) + 24g″⁴(β) + 8g‴²(β) − 16g‴²(β)g′(β) + 8g′²(β)g‴²(β)) e_n⁴ + O(e_n⁵).   (3.2)

Then

g(v_n) − v_n g′((v_n + u_n)/2) = β(1 − g′(β)) − (β g″(β)/2) e_n − [β(2g″²(β) − g‴(β) − g‴(β)g′(β))/(8(−1 + g′(β)))] e_n²
− [1/(48(−1 + g′(β))²)] (−14β g″(β)g‴(β) + 14β g′(β)g″(β)g‴(β) − 12β g″³(β) + β g⁽⁴⁾(β) − 2β g⁽⁴⁾(β)g′(β) + β g⁽⁴⁾(β)g′²(β) − 12g″²(β) + 12g′(β)g″²(β)) e_n³
+ [1/(96(−1 + g′(β))³)] (−44g″²(β)g‴(β)g′(β) + 22g‴(β)g″(β)g′²(β) + 8β g‴²(β) + 24g″³(β) − 24g′(β)g″³(β) + 22g‴(β)g″(β) + 11β g″(β)g⁽⁴⁾(β) + 37β g″²(β)g‴(β) − 16β g‴²(β)g′(β) + 8β g‴²(β)g′²(β) + 24β g″⁴(β) − 22β g′(β)g″(β)g⁽⁴⁾(β) + 11β g′²(β)g″(β)g⁽⁴⁾(β) − 37β g′(β)g‴(β)g″²(β)) e_n⁴ + O(e_n⁵),   (3.3)

and

1 − g′((v_n + u_n)/2) = 1 − g′(β) − (g″(β)/2) e_n − [(2g″²(β) + g‴(β)g′(β) − g‴(β))/(8(−1 + g′(β)))] e_n²
− [(−14g″(β)g‴(β) + 14g″(β)g‴(β)g′(β) + g⁽⁴⁾(β) − 12g″³(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β))/(48(−1 + g′(β))²)] e_n³
− [1/(96(−1 + g′(β))³)] (11g⁽⁴⁾(β)g″(β) − 22g⁽⁴⁾(β)g′(β)g″(β) + 11g⁽⁴⁾(β)g′²(β)g″(β) + 37g″²(β)g‴(β) − 37g″²(β)g‴(β)g′(β) + 24g″⁴(β) + 8g‴²(β) − 16g‴²(β)g′(β) + 8g′²(β)g‴²(β)) e_n⁴ + O(e_n⁵).   (3.4)

Dividing (3.3) by (3.4), from (2.19) we have

w_n = (g(v_n) − v_n g′((v_n + u_n)/2))/(1 − g′((v_n + u_n)/2))
= β + [g″²(β)/(4(−1 + g′(β))²)] e_n³ + [1/(48(−1 + g′(β))³)] g″(β)(11g‴(β)g′(β) − 11g‴(β) − 18g″²(β)) e_n⁴ + O(e_n⁵).

In particular, since by (2.17) the iterate u_{n+1} of Algorithm 2B coincides with w_n, this expansion shows that Algorithm 2B is at least third-order convergent, with error equation e_{n+1} = [g″²(β)/(4(−1 + g′(β))²)] e_n³ + O(e_n⁴).

Expanding g(w_n) in a Taylor series about β, we obtain

g(w_n) = β + [g′(β)g″²(β)/(4(−1 + g′(β))²)] e_n³ − [1/(48(−1 + g′(β))³)] g′(β)g″(β)(−11g‴(β)g′(β) + 11g‴(β) + 18g″²(β)) e_n⁴ + O(e_n⁵).
(3.5)

Expanding g′((w_n + u_n)/2) in a Taylor series about β, we get

g′((w_n + u_n)/2) = g′(β) + (g″(β)/2) e_n + (g‴(β)/8) e_n² + [1/(48(−1 + g′(β))²)] (6g″³(β) + g⁽⁴⁾(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β)) e_n³
− [1/(96(−1 + g′(β))³)] g″²(β)(−17g‴(β)g′(β) + 17g‴(β) + 18g″²(β)) e_n⁴ + O(e_n⁵),

and hence

1 − g′((w_n + u_n)/2) = 1 − g′(β) − (g″(β)/2) e_n − (g‴(β)/8) e_n² − [(6g″³(β) + g⁽⁴⁾(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β))/(48(−1 + g′(β))²)] e_n³
+ [g″²(β)(−17g‴(β)g′(β) + 17g‴(β) + 18g″²(β))/(96(−1 + g′(β))³)] e_n⁴ + O(e_n⁵).   (3.6)

Now using (2.20), dividing (3.5) by (3.6) and simplifying,

u_{n+1} = (g(w_n) − w_n g′((w_n + u_n)/2))/(1 − g′((w_n + u_n)/2)) = β + [g″³(β)/(8(−1 + g′(β))³)] e_n⁴ + O(e_n⁵),

so that

e_{n+1} = [g″³(β)/(8(−1 + g′(β))³)] e_n⁴ + O(e_n⁵).

This shows that Algorithm 2C has fourth-order convergence. □

Theorem 3.2. Let I ⊂ R be an open interval and let f : I → R be a differentiable function. If β ∈ I is a root of f(u) = 0 and u_0 is sufficiently close to β, then the multistep methods defined by Algorithm 2D and Algorithm 2E have at least third and fourth order of convergence, respectively, and satisfy the error equations obtained below.

Proof. From equation (3.1), we have

g(v_n) = β + [g′(β)g″(β)/(2(−1 + g′(β)))] e_n² + [g′(β)(−2g‴(β) + 2g‴(β)g′(β) − 3g″²(β))/(6(−1 + g′(β))²)] e_n³
+ [1/(24(−1 + g′(β))³)] (4g⁽⁴⁾(β)g′(β) − 8g⁽⁴⁾(β)g′²(β) + 4g⁽⁴⁾(β)g′³(β) − 14g″(β)g‴(β)g′²(β) + 14g″(β)g‴(β)g′(β) + 15g″³(β)g′(β) − 3g″³(β)) e_n⁴ + O(e_n⁵),

and

g′(v_n) = g′(β) + [g″²(β)/(2(−1 + g′(β)))] e_n² + [g″(β)(−2g‴(β) + 2g‴(β)g′(β) − 3g″²(β))/(6(−1 + g′(β))²)] e_n³
+ [1/(24(−1 + g′(β))³)] g″(β)(4g⁽⁴⁾(β) − 8g⁽⁴⁾(β)g′(β) + 4g⁽⁴⁾(β)g′²(β) + 11g″(β)g‴(β) − 11g″(β)g‴(β)g′(β) + 12g″³(β)) e_n⁴ + O(e_n⁵).
Expanding (g′(v_n) + g′(u_n))/2 in Taylor series about β, we have

(g′(v_n) + g′(u_n))/2 = g′(β) + (g″(β)/2) e_n + [(g″²(β) + g‴(β)g′(β) − g‴(β))/(4(−1 + g′(β)))] e_n²
+ [1/(12(−1 + g′(β))²)] (−2g″(β)g‴(β) + 2g″(β)g‴(β)g′(β) + g⁽⁴⁾(β) − 3g″³(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β)) e_n³
+ [1/(48(−1 + g′(β))³)] g″(β)(4g⁽⁴⁾(β) − 8g⁽⁴⁾(β)g′(β) + 4g⁽⁴⁾(β)g′²(β) + 11g″(β)g‴(β) − 11g″(β)g‴(β)g′(β) + 12g″³(β)) e_n⁴ + O(e_n⁵).

Then

g(v_n) − v_n (g′(v_n) + g′(u_n))/2 = β(1 − g′(β)) − (β g″(β)/2) e_n − [β(g″²(β) + g‴(β)g′(β) − g‴(β))/(4(−1 + g′(β)))] e_n²
− [1/(12(−1 + g′(β))²)] (−2β g″(β)g‴(β) + 2β g′(β)g″(β)g‴(β) + β g⁽⁴⁾(β) − 2β g⁽⁴⁾(β)g′(β) + β g⁽⁴⁾(β)g′²(β) − 3β g″³(β) − 3g″²(β) + 3g′(β)g″²(β)) e_n³
− [1/(48(−1 + g′(β))³)] g″(β)(4β g⁽⁴⁾(β) + 11β g‴(β)g″(β) − 8β g⁽⁴⁾(β)g′(β) + 4β g⁽⁴⁾(β)g′²(β) − 11β g′(β)g″(β)g‴(β) − 28g′(β)g‴(β) + 14g‴(β)g′²(β) + 12g″²(β) − 12g′(β)g″²(β) + 14g‴(β) + 12β g″³(β)) e_n⁴ + O(e_n⁵),   (3.7)

and

1 − (g′(v_n) + g′(u_n))/2 = 1 − g′(β) − (g″(β)/2) e_n − [(g″²(β) + g‴(β)g′(β) − g‴(β))/(4(−1 + g′(β)))] e_n²
− [(−2g″(β)g‴(β) + 2g″(β)g‴(β)g′(β) + g⁽⁴⁾(β) − 3g″³(β) − 2g⁽⁴⁾(β)g′(β) + g⁽⁴⁾(β)g′²(β))/(12(−1 + g′(β))²)] e_n³
− [1/(48(−1 + g′(β))³)] g″(β)(4g⁽⁴⁾(β) − 8g⁽⁴⁾(β)g′(β) + 4g⁽⁴⁾(β)g′²(β) + 11g″(β)g‴(β) − 11g″(β)g‴(β)g′(β) + 12g″³(β)) e_n⁴ + O(e_n⁵).   (3.8)

Substituting the values (3.7) and (3.8) in (2.31) and simplifying, we obtain

w_n = (g(v_n) − v_n (g′(v_n) + g′(u_n))/2)/(1 − (g′(v_n) + g′(u_n))/2)
= β + [g″²(β)/(4(−1 + g′(β))²)] e_n³ + [1/(24(−1 + g′(β))³)] g″(β)(7g‴(β)g′(β) − 7g‴(β) − 9g″²(β)) e_n⁴ + O(e_n⁵).

In particular, since by (2.29) the iterate u_{n+1} of Algorithm 2D coincides with w_n, Algorithm 2D is at least third-order convergent.

Expanding g(w_n) in a Taylor series about β, we obtain

g(w_n) = β + [g′(β)g″²(β)/(4(−1 + g′(β))²)] e_n³ + [1/(24(−1 + g′(β))³)] g′(β)g″(β)(7g‴(β)g′(β) − 7g‴(β) − 9g″²(β)) e_n⁴ + O(e_n⁵).
(3.9)

Expanding (g′(w_n) + g′(u_n))/2 in a Taylor series about β, we have

(g′(w_n) + g′(u_n))/2 = g′(β) + (g″(β)/2) e_n + (g‴(β)/4) e_n² + [1/(24(−1 + g′(β))²)] (2g⁽⁴⁾(β) − 4g⁽⁴⁾(β)g′(β) + 2g⁽⁴⁾(β)g′²(β) + 3g″³(β)) e_n³
+ [g″²(β)(7g‴(β)g′(β) − 7g‴(β) − 9g″²(β))/(48(−1 + g′(β))³)] e_n⁴ + O(e_n⁵).   (3.10)

Then

g(w_n) − w_n (g′(w_n) + g′(u_n))/2 = β(1 − g′(β)) − (β g″(β)/2) e_n − (β g‴(β)/4) e_n²
− [1/(24(−1 + g′(β))²)] β(2g⁽⁴⁾(β) − 4g⁽⁴⁾(β)g′(β) + 2g⁽⁴⁾(β)g′²(β) + 3g″³(β)) e_n³
− [1/(48(−1 + g′(β))³)] g″²(β)(7β g‴(β)g′(β) − 9β g″²(β) − 7β g‴(β) − 6g″(β) + 6g′(β)g″(β)) e_n⁴ + O(e_n⁵),   (3.11)

and

1 − (g′(w_n) + g′(u_n))/2 = 1 − g′(β) − (g″(β)/2) e_n − (g‴(β)/4) e_n² − [1/(24(−1 + g′(β))²)] (2g⁽⁴⁾(β) − 4g⁽⁴⁾(β)g′(β) + 2g⁽⁴⁾(β)g′²(β) + 3g″³(β)) e_n³
− [1/(48(−1 + g′(β))³)] g″²(β)(7g‴(β)g′(β) − 7g‴(β) − 9g″²(β)) e_n⁴ + O(e_n⁵).   (3.12)

Substituting the values from (3.11) and (3.12) in (2.32) and simplifying,

u_{n+1} = (g(w_n) − w_n (g′(w_n) + g′(u_n))/2)/(1 − (g′(w_n) + g′(u_n))/2) = β + [g″³(β)/(8(−1 + g′(β))³)] e_n⁴ + O(e_n⁵),

so that

e_{n+1} = [g″³(β)/(8(−1 + g′(β))³)] e_n⁴ + O(e_n⁵).

This shows that Algorithm 2E has fourth-order convergence. □

4. Numerical Results and Applications

This section elaborates the efficacy of the algorithms introduced in this paper with the support of examples. We compute an estimate of the simple root rather than the exact root, depending on the precision ε of the computer. We take ε = 10^{-15} and use the following stopping criteria:

(i) |u_{n+1} − u_n| < ε, and
(ii) |f(u_{n+1})| < ε.

We make a comparative study of the Newton method (NM), the Halley method (HM) [12, 13], the Abbasbandy method (AM) [1], the Chun method (CM) [6] and the Weerakoon-Fernando method (WF) [32] with Algorithm 2B (AG1), Algorithm 2C (AG2), Algorithm 2D (AG3) and Algorithm 2E (AG4). As the convergence criterion, it was required that the distance between two successive estimates of the zero be less than 10^{-15}.
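The multistep schemes compared above can be sketched compactly in code. The following is a minimal illustration, not the authors' implementation: function names and the tolerance are our own, g and g′ are supplied as callables, and Algorithms 2B and 2D are simply the schemes below stopped after the w step. The example solves f2(u) = cos u − u via g(u) = cos u with u0 = 1.7, as in Table 1.

```python
from math import cos, sin

def step_2a(g, dg, u):
    # one-step scheme (2.15), shared first step of all four algorithms
    return (g(u) - u * dg(u)) / (1.0 - dg(u))

def alg_2c(g, dg, u, tol=1e-15, max_iter=50):
    """Three-step mid-point scheme (2.19)-(2.20) (Algorithm 2C / AG2)."""
    for it in range(1, max_iter + 1):
        v = step_2a(g, dg, u)                                          # (2.16)
        w = (g(v) - v * dg((u + v) / 2)) / (1 - dg((u + v) / 2))       # (2.19)
        u_new = (g(w) - w * dg((u + w) / 2)) / (1 - dg((u + w) / 2))   # (2.20)
        if abs(u_new - u) < tol:
            return u_new, it
        u = u_new
    return u, max_iter

def alg_2e(g, dg, u, tol=1e-15, max_iter=50):
    """Three-step Trapezoidal scheme (2.31)-(2.32) (Algorithm 2E / AG4)."""
    for it in range(1, max_iter + 1):
        v = step_2a(g, dg, u)                                          # (2.28)
        m_vu = (dg(v) + dg(u)) / 2
        w = (g(v) - v * m_vu) / (1 - m_vu)                             # (2.31)
        m_wu = (dg(w) + dg(u)) / 2
        u_new = (g(w) - w * m_wu) / (1 - m_wu)                         # (2.32)
        if abs(u_new - u) < tol:
            return u_new, it
        u = u_new
    return u, max_iter

# f2(u) = cos(u) - u, rewritten as u = g(u) = cos(u), u0 = 1.7
g, dg = cos, lambda u: -sin(u)
root_c, it_c = alg_2c(g, dg, 1.7)
root_e, it_e = alg_2e(g, dg, 1.7)
```

Note that each pass requires only first-derivative evaluations of g, consistent with the remark in the Conclusion.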
In Table 1 we also display the number of iterations (IT), the approximate root u_n, the value f(u_n) and the distance δ = |u_n − u_{n−1}|. The examples are the same as in Chun [6]:

f1(u) = u³ − 10,  g(u) = √(10/u)
f2(u) = cos u − u,  g(u) = cos u
f3(u) = (u − 1)³ − 1,  g(u) = 1 + √(1/(u − 1))
f4(u) = e^{u² + 7u − 30} − 1,  g(u) = (30 − u²)/7
f5(u) = sin²u − u² + 1,  g(u) = sin u + 1/(sin u + u)
f6(u) = u² − e^u − 3u + 2,  g(u) = (u² − e^u + 2)/3

Table 1. Numerical examples.

f1(u) = u³ − 10,  g(u) = √(10/u),  u0 = 1.5
Methods  IT  u_n                  f(u_n)        δ = |u_n − u_{n−1}|
NM       6   2.1544346900318837   7.857615e-27  3.486728e-14
HM       6   2.1544346900318837   6.573851e-30  1.008516e-15
AM       4   2.1544346900318837   3.071853e-26  1.972938e-09
CM       9   2.1544346900318837   1.399647e-29  1.040560e-15
WF       4   2.1544346900318837   7.066989e-31  5.866631e-11
AG1      3   2.1544346900318837   8.936935e-27  3.625733e-09
AG2      3   2.1544346900318837   9.833722e-66  1.458067e-16
AG3      3   2.1544346900318837   4.856920e-26  6.374604e-09
AG4      3   2.1544346900318837   7.081450e-63  7.553158e-16

f2(u) = cos u − u,  g(u) = cos u,  u0 = 1.7
NM       4   0.7390851332151609   3.924473e-16  3.258805e-08
HM       5   0.7390851332151606   2.373589e-27  8.014391e-14
AM       4   0.7390851332151606   8.935133e-33  2.845706e-11
CM       5   0.7390851332151606   7.035845e-25  9.756878e-13
WF       3   0.7390851332151606   1.735664e-21  4.084953e-07
AG1      3   0.7390851332151606   5.258162e-26  8.637471e-09
AG2      3   0.7390851332151606   3.458981e-60  3.722340e-15
AG3      3   0.7390851332151606   1.117701e-27  2.392676e-09
AG4      3   0.7390851332151606   2.655872e-66  1.101871e-16

f3(u) = (u − 1)³ − 1,  g(u) = 1 + √(1/(u − 1)),  u0 = 3.5
NM       7   2   2.484117e-21  2.877567e-11
HM       8   2   8.422670e-24  1.675577e-12
AM       6   2   2.626229e-20  6.615927e-11
CM       5   2   7.505295e-35  2.657272e-12
WF       5   2   9.839450e-37  6.550899e-13
AG1      3   2   1.956948e-17  4.708259e-06
AG2      3   2   1.845148e-41  1.408551e-10
AG3      3   2   2.746096e-20  5.271145e-07
AG4      3   2   9.745968e-50  1.200801e-12

f4(u) = e^{u² + 7u − 30} − 1,  g(u) = (30 − u²)/7,  u0 = 3.5
NM       12  3   5.475744e-24  2.530687e-13
HM       Diverge
AM       7   3   4.775960e-18  2.353603e-07
CM       8   3   9.665693e-21  7.518279e-12
WF       8   3   5.290016e-24
1.916154e-09
AG1      4   3   1.938304e-114  2.931715e-38
AG2      3   3   2.118685e-21   2.446184e-05
AG3      3   3   3.811229e-37   1.704784e-12
AG4      3   3   4.174513e-90   1.629758e-22

f5(u) = sin²u − u² + 1,  g(u) = sin u + 1/(sin u + u),  u0 = −1
NM       6   1.4044916482153412   1.819126e-25  3.058088e-13
HM       6   1.4044916482153412   9.561689e-28  2.217103e-14
CM       6   1.4044916482153412   1.669602e-16  6.551037e-09
AM       4   1.4044916482153412   6.138132e-21  1.329320e-07
WF       4   1.4044916482153412   9.413532e-30  1.793023e-10
AG1      4   1.4044916482153412   2.097175e-88  9.878333e-30
AG2      3   1.4044916482153412   7.391875e-68  3.273071e-17
AG3      3   1.4044916482153412   1.895915e-31  9.551650e-11
AG4      3   1.4044916482153412   3.473304e-75  4.818950e-19

f6(u) = u² − e^u − 3u + 2,  g(u) = (u² − e^u + 2)/3,  u0 = 2
NM       5   0.2575302854398608   3.439576e-27  9.869210e-14
HM       8   0.2575302854398608   1.163256e-29  5.739414e-15
CM       4   0.2575302854398608   7.664653e-31  1.280280e-10
AM       5   0.2575302854398608   9.348485e-20  3.638191e-10
WF       4   0.2575302854398608   6.103947e-34  1.630507e-11
AG1      3   0.2575302854398608   4.442861e-18  5.125169e-06
AG2      3   0.2575302854398608   7.862110e-56  7.105574e-14
AG3      4   0.2575302854398608   6.398407e-50  6.398407e-50
AG4      3   0.2575302854398608   5.810512e-38  2.083378e-09

5. Conclusion

We have developed the Algorithms 2A, 2B, 2C, 2D and 2E by approximating the definite integral ∫_γ^u g′(t) dt = g(u) − g(γ) by means of a quadrature rule, writing the nonlinear equation as a coupled system of equations and using the decomposition method. We have carried out the convergence analysis of the newly suggested iterative schemes. With the help of test examples, a computational comparison has been made with well-known third- and fourth-order convergent iterative methods. Moreover, it is observed that the multistep algorithms suggested in this article require only first-derivative evaluations of the function, and our results can be considered as an alternative to the already known third- and fourth-order convergent iterative methods.
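The fourth-order claim of Theorem 3.1 can also be checked numerically by estimating the computational order of convergence (COC) from successive iterates. The following sketch is our own illustration (the paper does not define a COC): it runs Algorithm 2C on f1 using Python's decimal module for high precision, then applies the standard estimate ρ ≈ ln(δ_{n+1}/δ_n)/ln(δ_n/δ_{n−1}) with δ_n = |u_n − u_{n−1}|.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # enough digits to observe fourth-order error decay

TEN = Decimal(10)

def g(u):   # g(u) = sqrt(10/u) for f1(u) = u^3 - 10
    return (TEN / u).sqrt()

def dg(u):  # g'(u) = -sqrt(10) / (2 u^(3/2))
    return -TEN.sqrt() / (2 * u * u.sqrt())

def alg_2c_iterates(u, n=4):
    """Iterates of the three-step mid-point scheme (2.19)-(2.20)."""
    us = [u]
    for _ in range(n):
        v = (g(u) - u * dg(u)) / (1 - dg(u))
        w = (g(v) - v * dg((u + v) / 2)) / (1 - dg((u + v) / 2))
        u = (g(w) - w * dg((u + w) / 2)) / (1 - dg((u + w) / 2))
        us.append(u)
    return us

us = alg_2c_iterates(Decimal("1.5"))
d = [abs(us[i + 1] - us[i]) for i in range(len(us) - 1)]
# computational order of convergence from the last three differences
rho = (d[3].ln() - d[2].ln()) / (d[2].ln() - d[1].ln())
```

With these settings rho comes out close to 4, consistent with the error equation e_{n+1} = (g″³(β)/(8(−1 + g′(β))³)) e_n⁴ obtained in the proof.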
Acknowledgements

The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research and academic environments.

Conflicts of Interest: The author(s) declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003), 887-893.
[2] G. Adomian, Nonlinear stochastic systems and applications to physics, Kluwer Academic Publishers, Dordrecht, 1989.
[3] F. Ali, W. Aslam, A. Rafiq, Some new iterative techniques for the problems involving nonlinear equations, Int. J. Comput. Math. 16 (2019), 1-18.
[4] E. Babolian, J. Biazar, A.R. Vahidi, Solution of a system of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 150 (3) (2004), 847-854.
[5] F.A. Shah, M. Darus, I. Faisal, M.A. Shafiq, Decomposition technique and family of efficient schemes for nonlinear equations, Discrete Dyn. Nat. Soc. 2017 (2017), 3794357.
[6] C. Chun, Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl. 50 (2005), 1559-1568.
[7] C. Chun, Y. Ham, Some fourth-order modifications of Newton's method, Appl. Math. Comput. 197 (2) (2008), 654-658.
[8] A. Cordero, J.R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (1) (2007), 686-698.
[9] M.T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput. 187 (2007), 630-635.
[10] V. Daftardar-Gejji, H. Jafari, An iterative method for solving nonlinear functional equations, J. Math. Anal. Appl. 316 (2) (2006), 753-763.
[11] M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (3) (2004), 771-782.
[12] E.
Halley, A new, exact and easy method for finding the roots of equations generally, and that without any previous reduction, Philos. Trans. R. Soc. Lond. 18 (1694), 136-147.
[13] A. Melman, Geometry and convergence of Halley's method, SIAM Rev. 39 (4) (1997), 728-735.
[14] J.H. He, Variational iteration method: some new results and new interpretations, J. Comput. Appl. Math. 207 (1) (2007), 3-17.
[15] J.H. He, Homotopy perturbation technique, Comput. Methods Appl. Mech. Eng. 178 (3-4) (1999), 257-262.
[16] J.H. He, A new iteration method for solving algebraic equations, Appl. Math. Comput. 135 (2003), 81-84.
[17] H.H. Homeier, On Newton-type methods with cubic convergence, J. Comput. Appl. Math. 176 (2) (2005), 425-432.
[18] V.I. Hasanov, I.G. Ivanov, G. Nedzhibov, A new modification of Newton method, Appl. Math. Eng. 27 (2002), 278-286.
[19] M.A. Noor, K.I. Noor, S.T. Mohyud-Din, A. Shabbir, An iterative method with cubic convergence for nonlinear equations, Appl. Math. Comput. 183 (2006), 1249-1255.
[20] M.A. Noor, New iterative schemes for nonlinear equations, Appl. Math. Comput. 187 (2007), 937-943.
[21] M.A. Noor, K.I. Noor, E. Al-Said, M. Waseem, Some new iterative methods for nonlinear equations, Math. Probl. Eng. 2010 (2010), 198943.
[22] M.A. Noor, Some iterative methods for solving nonlinear equations using homotopy perturbation method, Int. J. Comput. Math. 87 (2010), 141-149.
[23] M.A. Noor, K.I. Noor, E. Al-Said, M. Waseem, Higher-order iterative algorithms for solving nonlinear equations, World Appl. Sci. J. 16 (2012), 1657-1663.
[24] M.A. Noor, M. Waseem, K.I. Noor, M.A. Ali, New iterative technique for solving nonlinear equations, Appl. Math. Comput. 265 (2015), 1115-1129.
[25] M.A. Noor, Fifth order convergent iterative method for solving nonlinear equation using quadrature formula, J. Math. Control Sci. Appl. 4 (1) (2018), 95-104.
[26] O. Ogbereyivwe, K.O.
Muka, On the efficiency of family of quadrature based methods for solving nonlinear equations, Glob. Sci. J. 6 (9) (2018), 149-159.
[27] A.Y. Ozban, Some new variants of Newton's method, Appl. Math. Lett. 17 (6) (2004), 677-682.
[28] S.M. Kang, A. Rafiq, Y.C. Kwun, A new second-order iteration method for solving nonlinear equations, Abstr. Appl. Anal. 2013 (2013), 48706.
[29] S.M. Kang, M. Saqib, M. Fahad, W. Nazeer, Two new third and fourth order algorithms for resolution of nonlinear scalar equations based on decomposition technique, Far East J. Math. Sci. 101 (3) (2017), 457-471.
[30] M. Saqib, M. Iqbal, Some multi-step iterative methods for solving nonlinear equations, Open J. Math. Sci. 1 (2017), 25-33.
[31] M. Saqib, M. Iqbal, S. Ali, T. Ismail, New fourth and fifth order iterative methods for solving nonlinear equations, Appl. Math. 6 (2015), 1220-1227.
[32] S. Weerakoon, T.G.I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (8) (2000), 87-93.
[33] M. Waseem, M.A. Noor, K.I. Noor, F.A. Shah, An efficient technique to solve nonlinear equations using multiplicative calculus, Turk. J. Math. 42 (2018), 679-691.