©2021 Ada Academica (https://adac.ee)
Eur. J. Math. Anal. 1 (2021) 68-85
doi: 10.28924/ada/ma.1.68

Unified Convergence Analysis of Two-Step Iterative Methods for Solving Equations

Ioannis K. Argyros
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
Correspondence: iargyros@cameron.edu

Abstract. In this paper we present a unified convergence analysis of two-step iterative methods for solving equations in the Banach space setting. Convergence of order four was shown earlier using Taylor expansions requiring the existence of the fifth derivative, which does not appear in these methods. Such hypotheses limit their use to functions that are at least five times differentiable, although the methods may converge. As far as we know, no semi-local convergence analysis has been given in this setting. Our goal is to extend the applicability of these methods in both the local and the semi-local convergence case, and in the more general setting of Banach space valued operators. Moreover, we use our idea of recurrent functions and conditions only on the first derivative and the divided differences that actually appear in the method. This idea can be used to extend other high-order multipoint and multistep methods. Numerical experiments testing the convergence criteria complement this study.

Received: 31 Aug 2021.

1. Introduction

We consider the problem of approximating a solution x∗ of the equation

    F(x) = 0,   (1.1)

where F : Ω ⊂ B → B_1 is a continuous operator acting between Banach spaces B and B_1 with Ω ≠ ∅. Since a closed form solution is not possible in general, iterative methods are used for solving (1.1). Many iterative methods have been studied for approximating x∗. In this paper we consider the methods defined, for n = 0, 1, 2, . . ., by

    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − A_n F'(x_n)^{-1} F(y_n),   (1.2)

where A_n = A(x_n, y_n), A : Ω × Ω → L(B, B_1), and A^{-1} ∈ L(B_1, B). Many methods are special cases of (1.2). For example:

Key words and phrases:
iterative methods; Banach space; convergence criterion; continuous functions.

Traub [35]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − F'(x_n)^{-1} F(y_n);   (1.3)

Newton [6]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − F'(y_n)^{-1} F(y_n);   (1.4)

Ostrowski [25]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − (2[x_n, y_n; F] − F'(x_n))^{-1} F(y_n);   (1.5)

Kung-Traub [35-37]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − [x_n, y_n; F]^{-1} F'(x_n) [x_n, y_n; F]^{-1} F(y_n);   (1.6)

Ostrowski-type [25]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − (2[x_n, y_n; F]^{-1} − F'(x_n)^{-1}) F(y_n);   (1.7)

Sharma-type [32]:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − P(x_n, y_n) F'(x_n)^{-1} F(y_n).   (1.8)

To obtain all these special cases choose, respectively,

    A_n = I,
    A_n = F'(y_n)^{-1} F'(x_n),
    A_n = (2[x_n, y_n; F] − F'(x_n))^{-1} F'(x_n),
    A_n = [x_n, y_n; F]^{-1} F'(x_n) [x_n, y_n; F]^{-1} F'(x_n),
    A_n = (2[x_n, y_n; F]^{-1} − F'(x_n)^{-1}) F'(x_n),
    A_n = P(x_n, y_n),

where [·, ·; F] : Ω × Ω → L(B, B_1) is a divided difference of order one and P : Ω × Ω → L(B, B_1) is a weight operator [32] (see also [15, 28, 40] and the references therein). These special methods were shown to be of order four using Taylor expansions and assumptions on the fifth derivative of F, which does not appear in these methods. Hence, the assumptions on the fifth derivative reduce the applicability of these methods [1-41]. For example, let B = B_1 = ℝ and Ω = [−0.5, 1.5]. Define λ on Ω by

    λ(t) = t^3 log t^2 + t^5 − t^4 if t ≠ 0, and λ(0) = 0.

Then, we get t∗ = 1, and

    λ'''(t) = 6 log t^2 + 60t^2 − 24t + 22.

Obviously λ'''(t) is not bounded on Ω, so the convergence of method (1.2) is not guaranteed by the previous analyses in [1-41].

In this paper we introduce a majorizing sequence and use our idea of recurrent functions to extend the applicability of method (1.2). Our analysis includes error bounds and results on the uniqueness of x∗ based on computable Lipschitz constants not given before in [1-41] or in other similar studies using Taylor series.
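To illustrate this point numerically, the following sketch (our own illustrative experiment, not part of the original presentation) applies method (1.2) with A_n = I, i.e. Traub's method (1.3), to the scalar function λ above. The iteration converges quickly to the solution t∗ = 1 even though λ''' is unbounded on Ω, so the Taylor-based analyses requiring five derivatives do not apply:

```python
import math

def lam(t):
    # λ(t) = t^3 log t^2 + t^5 - t^4, with λ(0) = 0
    return t**3 * math.log(t**2) + t**5 - t**4 if t != 0 else 0.0

def dlam(t):
    # λ'(t) = 3t^2 log t^2 + 2t^2 + 5t^4 - 4t^3, with λ'(0) = 0
    return 3 * t**2 * math.log(t**2) + 2 * t**2 + 5 * t**4 - 4 * t**3 if t != 0 else 0.0

def two_step(f, df, x, iters=10):
    # Method (1.2) with A_n = I (Traub's method (1.3)):
    # the derivative f'(x_n) is frozen and reused in both substeps.
    for _ in range(iters):
        fp = df(x)
        y = x - f(x) / fp
        x = y - f(y) / fp
    return x

x = two_step(lam, dlam, 1.2)
print(x)  # close to the solution t* = 1
```

Note that only the first derivative of λ is evaluated here, consistent with the conditions used in this paper.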
Our idea is very general, so it applies to other methods too. The rest of the paper is set up as follows: In Section 2 we present results on majorizing sequences. Sections 3 and 4 contain the semi-local and the local convergence analysis, respectively. Numerical experiments are presented in Section 5, and concluding remarks are given in Section 6.

2. Results on majorizing sequences

We recall the definition, followed by convergence results.

Definition 2.1. Let {w̄_n} be a sequence in a Banach space. Then, a nondecreasing scalar sequence {w_n} is called majorizing for {w̄_n} if

    ‖w̄_{n+1} − w̄_n‖ ≤ w_{n+1} − w_n for each n = 0, 1, 2, . . . .   (2.1)

The scalar sequence {w_n} is used instead to study the convergence of {w̄_n} [23-25]. Set M = [0, ∞). Let η > 0, and let P_0 : M → ℝ, P : M → ℝ, a : M × M × M → ℝ, ā : M × M × M → ℝ and b : M × M × M × M → ℝ be continuous and nondecreasing functions. Set a_n = a(n) and ξ_n = b(n) (these sequences are specified concretely in Section 3). Define scalar sequences {s_n}, {t_n} for each n = 0, 1, 2, . . . by

    t_0 = 0, s_0 = η,
    t_{n+1} = s_n + ᾱ_n (s_n − t_n),
    s_{n+1} = t_{n+1} + β_n (t_{n+1} − s_n),   (2.2)

where

    ᾱ_n = ā_n ∫_0^1 P̄((1 − θ)(s_n − t_n)) dθ and β_n = ξ_n / (1 − P_0(t_{n+1})),

with ā_n = ā if n = 0 and ā_n = a if n = 1, 2, . . ., and P̄ = P_0 if n = 0 and P̄ = P if n = 1, 2, . . . .

Next, we present results on the convergence of the sequences {s_n}, {t_n}.

Lemma 2.2. Suppose that there exists μ > 0 such that for each n = 0, 1, 2, . . .

    t_n ≤ μ   (2.3)

and

    P_0(μ) < 1.   (2.4)

Then, the sequences {s_n}, {t_n} converge to their unique least upper bound t∗ ∈ [η, μ], and t_n ≤ s_n ≤ t_{n+1}.

Proof. It follows from (2.2)-(2.4) that these sequences are nondecreasing and bounded from above by μ, and as such they converge to t∗. □

Lemma 2.3. If the function P_0 is increasing, then conditions (2.3) and (2.4) can be replaced by

    t_n ≤ P_0^{-1}(1).   (2.5)

Proof. Set μ = P_0^{-1}(1) in Lemma 2.2. □

Remark 2.4. Conditions (2.3)-(2.5) are very general and can be verified only in special cases. That is why we present stronger conditions that are easier to verify.
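To make the recursion (2.2) concrete, the sketch below computes {s_n}, {t_n} for the Lipschitz specialization treated later in Remark 3.3, where ᾱ_n and β_n reduce to α_n = L(s_n − t_n)/(2(1 − L_0 s_n)) and β_n = L(t_{n+1} − s_n)/(2(1 − L_0 t_{n+1})). The parameter values are illustrative only (not taken from the paper); the code checks numerically the interlacing t_n ≤ s_n ≤ t_{n+1} asserted in Lemma 2.2:

```python
def majorizing(L0, L, eta, iters=25):
    """Compute the majorizing sequences (2.2) under the Lipschitz
    specialization alpha_n = L(s_n - t_n)/(2(1 - L0*s_n)),
    beta_n = L(t_{n+1} - s_n)/(2(1 - L0*t_{n+1}))."""
    t, s = [0.0], [eta]
    for n in range(iters):
        alpha = L * (s[n] - t[n]) / (2 * (1 - L0 * s[n]))
        t.append(s[n] + alpha * (s[n] - t[n]))
        beta = L * (t[n + 1] - s[n]) / (2 * (1 - L0 * t[n + 1]))
        s.append(t[n + 1] + beta * (t[n + 1] - s[n]))
    return t, s

t, s = majorizing(L0=0.5, L=1.0, eta=0.1)
# interlacing t_n <= s_n <= t_{n+1}, as in Lemma 2.2
assert all(t[n] <= s[n] <= t[n + 1] for n in range(len(t) - 1))
print(t[-1])  # numerical limit t*, here well below mu = 1/L0 = 2
```

For these values the increments shrink rapidly, so both sequences converge to a common limit t∗, as the lemmas predict.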
Define functions f and g on the interval [0, 1) by

    f(t) = a(η/(1 − t), η/(1 − t), t^2 η) ∫_0^1 P((1 − θ)t^2 η) dθ − t

and

    g(t) = b(η/(1 − t), η/(1 − t), t^2 η, t^3 η) + t P_0(η/(1 − t)) − t.

Suppose that these functions have minimal zeros λ_f and λ_g in (0, 1), respectively. Set λ = min{λ_f, λ_g} and λ_0 = max{α_0, β_0}. Then, we can show the third result on majorizing sequences for method (1.2).

Lemma 2.5. Suppose that

    0 ≤ λ_0 ≤ λ.   (2.6)

Then, the sequences {s_n}, {t_n} are nondecreasing, bounded from above by t∗∗ = η/(1 − λ), and converge to t∗ ∈ [0, t∗∗]. Moreover, the following estimates hold for each n = 1, 2, . . .:

    0 ≤ s_n − t_n ≤ λ(t_n − s_{n−1}) ≤ λ^{2n} η,   (2.7)
    0 ≤ t_{n+1} − s_n ≤ λ(s_n − t_n) ≤ λ^{2n+1} η,   (2.8)
    0 ≤ s_n ≤ (1 − λ^{2n+1})/(1 − λ) η   (2.9)

and

    0 ≤ t_{n+1} ≤ (1 − λ^{2n+2})/(1 − λ) η.   (2.10)

Proof. Estimates (2.7)-(2.10) hold if

    0 ≤ α_m ≤ λ,   (2.11)
    0 ≤ β_m ≤ λ   (2.12)

and

    t_m ≤ s_m ≤ t_{m+1}   (2.13)

are true for m = 0, 1, 2, . . . . These estimates hold for m = 0 by (2.6). Suppose that (2.11)-(2.13) are true for m = 1, 2, . . . , n. By the induction hypotheses, (2.7) and (2.8), we have

    s_m ≤ t_m + λ^{2m} η ≤ s_{m−1} + λ^{2m−1} η + λ^{2m} η ≤ η + λη + · · · + λ^{2m} η = (1 − λ^{2m+1})/(1 − λ) η < η/(1 − λ) = t∗∗

and

    t_{m+1} ≤ s_m + λ^{2m+1} η ≤ t_m + λ^{2m} η + λ^{2m+1} η ≤ η + λη + · · · + λ^{2m+1} η = (1 − λ^{2m+2})/(1 − λ) η < η/(1 − λ) = t∗∗.

Therefore, by (2.13) and the induction hypotheses, the sequences {s_m} and {t_m} are nondecreasing. Then, (2.11) shall be true if

    a(t_m, s_m, s_m − t_m) ∫_0^1 P((1 − θ)(s_m − t_m)) dθ ≤ λ

or

    a((1 − λ^{2m})/(1 − λ) η, (1 − λ^{2m+1})/(1 − λ) η, λ^{2m} η) ∫_0^1 P((1 − θ)λ^{2m} η) dθ ≤ λ

or

    a(η/(1 − λ), η/(1 − λ), λ^2 η) ∫_0^1 P((1 − θ)λ^2 η) dθ ≤ λ

or f(λ) ≤ 0, which is true by the definition of λ_f and λ. Similarly, (2.12) shall be true if

    b((1 − λ^{2m})/(1 − λ) η, (1 − λ^{2m})/(1 − λ) η, λ^{2m} η, λ^{2m+1} η) + λ P_0((1 − λ^{2m+2})/(1 − λ) η) ≤ λ

or

    b(η/(1 − λ), η/(1 − λ), λ^2 η, λ^3 η) + λ P_0(η/(1 − λ)) ≤ λ

or g(λ) ≤ 0, which is also true by the definition of λ_g and λ. Hence, we conclude that (2.13) holds and lim_{m→∞} s_m = lim_{m→∞} t_m = t∗. □

3. Semi-local convergence

Let U(x_0, r) = {x ∈ B : ‖x − x_0‖ < r}, r > 0, and U[x_0, r] = {x ∈ B : ‖x − x_0‖ ≤ r}. We use some parameters and functions. Consider M = [0, ∞). Suppose that there exists a continuous and nondecreasing function P_0 : M → M such that the equation P_0(t) − 1 = 0 has a minimal zero s ∈ (0, ∞). Set M_0 = [0, s). Suppose the function P : M_0 → M is continuous and nondecreasing. The following conditions (C) are needed:

(C1) There exist x_0 ∈ Ω and η > 0 such that F'(x_0)^{-1} ∈ L(B_1, B) and ‖F'(x_0)^{-1}F(x_0)‖ ≤ η.

(C2) For each u ∈ Ω,

    ‖F'(x_0)^{-1}(F'(u) − F'(x_0))‖ ≤ P_0(‖u − x_0‖).

Set S_0 = U(x_0, s) ∩ Ω.

(C3) For each x, y ∈ S_0,

    ‖F'(x_0)^{-1}(F'(y) − F'(x))‖ ≤ P(‖y − x‖).

(C4) For each n = 0, 1, 2, . . .,

    ‖A_n F'(x_n)^{-1} F'(x_0)‖ ≤ a_n,
    ‖F'(x_0)^{-1}([y, x; F] − F'(x))‖ ≤ L_2 ‖y − x‖

and

    ‖F'(x_0)^{-1} H_n‖ ≤ ξ_n,

where H_n = ∫_0^1 (F'(y_n + θ(x_{n+1} − y_n)) − F'(x_n)A_n^{-1}) dθ.

(C5) The conditions of Lemma 2.2, Lemma 2.3 or Lemma 2.5 hold; and

(C6) U[x_0, t∗] ⊂ Ω.

Then, we can show the semi-local convergence of method (1.2) using the conditions (C) and the preceding notation.

Theorem 3.1. Under the conditions (C), the sequences {y_n}, {x_n} generated by method (1.2) are well defined in U[x_0, t∗], remain in U[x_0, t∗] for each n = 0, 1, 2, . . . and converge to a solution x∗ ∈ U[x_0, t∗] of the equation F(x) = 0. Moreover, the following error estimates hold for each n = 0, 1, 2, . . .:

    ‖x∗ − x_n‖ ≤ t∗ − t_n.

Proof. We shall show the items

    (P_m) ‖y_m − x_m‖ ≤ s_m − t_m,
    (Q_m) ‖x_{m+1} − y_m‖ ≤ t_{m+1} − s_m

using mathematical induction on the integer m. By the first substep of method (1.2) for n = 0 and (C1), we have

    ‖y_0 − x_0‖ = ‖F'(x_0)^{-1}F(x_0)‖ ≤ η = s_0 − t_0 = s_0 ≤ t∗,

so y_0 ∈ U[x_0, t∗] and (P_0) holds. We can write by the first substep of method (1.2) that

    F(y_0) = F(y_0) − F(x_0) − F'(x_0)(y_0 − x_0) = ∫_0^1 (F'(x_0 + θ(y_0 − x_0)) − F'(x_0))(y_0 − x_0) dθ,

leading by (C2) and (P_0) to

    ‖F'(x_0)^{-1}F(y_0)‖ ≤ ∫_0^1 P_0(θ‖y_0 − x_0‖) dθ ‖y_0 − x_0‖ ≤ ∫_0^1 P̄(θ(s_0 − t_0)) dθ (s_0 − t_0).   (3.1)

Let z ∈ U(x_0, t∗).
In view of (C2), we get

    ‖F'(x_0)^{-1}(F'(z) − F'(x_0))‖ ≤ P_0(‖z − x_0‖) ≤ P_0(t∗) < 1,   (3.2)

so

    ‖F'(z)^{-1}F'(x_0)‖ ≤ 1/(1 − P_0(‖z − x_0‖))   (3.3)

holds by the Banach lemma on invertible linear operators [24] and (3.2). Therefore, the iterate x_1 is well defined, and we can write in turn by (C3), (C4) and (3.3) (for z = x_0)

    ‖x_1 − y_0‖ = ‖A_0 F'(x_0)^{-1} F(y_0)‖
    ≤ ‖A_0 F'(x_0)^{-1} F'(x_0)‖ ‖∫_0^1 F'(x_0)^{-1}(F'(x_0 + θ(y_0 − x_0)) − F'(x_0)) dθ (y_0 − x_0)‖
    ≤ a_0 ∫_0^1 P̄((1 − θ)‖y_0 − x_0‖) dθ ‖y_0 − x_0‖ / (1 − P_0(‖x_0 − x_0‖))
    ≤ a_0 ∫_0^1 P̄((1 − θ)(s_0 − t_0)) dθ (s_0 − t_0) / (1 − P_0(0)) = t_1 − s_0,   (3.4)

showing (Q_0). Then, we have

    ‖x_1 − x_0‖ ≤ ‖x_1 − y_0‖ + ‖y_0 − x_0‖ ≤ t_1 − s_0 + s_0 − t_0 = t_1 ≤ t∗,

so x_1 ∈ U[x_0, t∗]. Moreover, we can write

    F(x_1) = F(x_1) − F(y_0) + F(y_0) = F(x_1) − F(y_0) − F'(x_0)A_0^{-1}(x_1 − y_0)
    = ∫_0^1 (F'(y_0 + θ(x_1 − y_0)) − F'(x_0)A_0^{-1}) dθ (x_1 − y_0) = H_0(x_1 − y_0),   (3.5)

since by the second substep of method (1.2) we have F(y_0) = −F'(x_0)A_0^{-1}(x_1 − y_0). By (C4), (3.4) and (3.5), we obtain

    ‖F'(x_0)^{-1}F(x_1)‖ ≤ ‖F'(x_0)^{-1}H_0‖ ‖x_1 − y_0‖ ≤ ξ_0(t_1 − s_0),   (3.6)

so

    ‖y_1 − x_1‖ ≤ ‖F'(x_1)^{-1}F'(x_0)‖ ‖F'(x_0)^{-1}F(x_1)‖ ≤ ξ_0(t_1 − s_0)/(1 − P_0(t_1)) = s_1 − t_1,   (3.7)

showing (P_m) for m = 1. Suppose (P_m), (Q_m) hold and y_m, x_{m+1} ∈ U[x_0, t∗]. Then, by repeating these computations with x_m, y_m, x_{m+1} replacing x_0, y_0, x_1, respectively, we complete the induction. Moreover, the sequence {x_m} is Cauchy in the Banach space B, so it converges to some x∗ ∈ U[x_0, t∗]. Finally, by letting m → ∞ in the estimate

    ‖F'(x_0)^{-1}F(x_{m+1})‖ ≤ ξ_m(t_{m+1} − s_m)   (3.8)

and using the continuity of F, we conclude F(x∗) = 0. □

Next, we present a result on the uniqueness of the solution x∗.

Proposition 3.2. Suppose:
(a) x∗ is a solution of the equation F(x) = 0;
(b) there exists s̃ ≥ t∗ such that

    ∫_0^1 P_0((1 − θ)s̃ + θt∗) dθ < 1.   (3.9)

Set S_1 = U[x_0, s̃] ∩ Ω. Then, the only solution of the equation F(x) = 0 in the region S_1 is x∗.

Proof. Set T = ∫_0^1 F'(x̃ + θ(x∗ − x̃)) dθ for some x̃ ∈ S_1 with F(x̃) = 0.
Using (C2) and (3.9), we get

    ‖F'(x_0)^{-1}(T − F'(x_0))‖ ≤ ∫_0^1 P_0(‖x̃ + θ(x∗ − x̃) − x_0‖) dθ
    ≤ ∫_0^1 P_0((1 − θ)‖x̃ − x_0‖ + θ‖x∗ − x_0‖) dθ
    ≤ ∫_0^1 P_0((1 − θ)s̃ + θt∗) dθ < 1,

leading to x̃ = x∗, where we used the identity T(x∗ − x̃) = F(x∗) − F(x̃) = 0 − 0 = 0 and the invertibility of T. □

Remark 3.3. Let us specialize the operators A_n to see how the sequences {s_n}, {t_n}, {a_n}, {ξ_n}, {α_n} and {β_n} are defined. Choose the case of Newton's method (1.4). Then, we have

    ‖A_n F'(x_n)^{-1} F'(x_0)‖ = ‖F'(y_n)^{-1} F'(x_0)‖ ≤ 1/(1 − P_0(‖y_n − x_0‖))

and

    ‖F'(x_0)^{-1} H_n‖ = ‖∫_0^1 F'(x_0)^{-1}(F'(y_n + θ(x_{n+1} − y_n)) − F'(x_n)A_n^{-1}) dθ‖
    = ‖∫_0^1 F'(x_0)^{-1}(F'(y_n + θ(x_{n+1} − y_n)) − F'(y_n)) dθ‖
    ≤ ∫_0^1 P̄(θ‖x_{n+1} − y_n‖) dθ ≤ ∫_0^1 P̄(θ(t_{n+1} − s_n)) dθ,

so we can choose

    a_n = 1/(1 − P_0(s_n))   (3.10)

and

    ξ_n = ∫_0^1 P̄(θ(t_{n+1} − s_n)) dθ.   (3.11)

In this case we can show another result on majorizing sequences, which is weaker than Lemma 2.5 for the interesting case P_0(t) = L_0 t and P(t) = Lt. We get in this special case that

    α_n = L(s_n − t_n)/(2(1 − L_0 s_n))   (3.12)

and

    β_n = L(t_{n+1} − s_n)/(2(1 − L_0 t_{n+1})).   (3.13)

Define sequences of functions {f_n^{(1)}}, {f_n^{(2)}} on the interval [0, 1) by

    f_n^{(1)}(t) = (L/2)t^{2n−1}η + L_0(1 + t + · · · + t^{2n})η − 1,
    f_n^{(2)}(t) = (L/2)t^{2n}η + L_0(1 + t + · · · + t^{2n+1})η − 1,

and the polynomial φ by

    φ(t) = L_0 t^3 + (L_0 + L/2)t^2 − L/2.

Notice that φ(0) = −L/2 < 0 and φ(1) = 2L_0 > 0. Denote by ρ the smallest zero of the polynomial φ in (0, 1), assured to exist by the intermediate value theorem.

Lemma 3.4. Suppose that

    λ_0 ≤ ρ < 1 − L_0 η.   (3.14)

Then, the conclusions of Lemma 2.5 hold for the sequences {s_n}, {t_n} with ρ replacing λ.

Proof. We must show this time that

    0 ≤ L(s_m − t_m)/(2(1 − L_0 s_m)) ≤ ρ,   (3.15)
    0 ≤ L(t_{m+1} − s_m)/(2(1 − L_0 t_{m+1})) ≤ ρ   (3.16)

and

    t_m ≤ s_m ≤ t_{m+1}.   (3.17)

These estimates hold for m = 0 by (3.14) and the definition of these sequences. Then, as in Lemma 2.5, we can show instead of (3.15) that

    (L/2)ρ^{2m}η + ρL_0(1 + ρ + · · · + ρ^{2m})η − 1 ≤ 0.
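Since φ(0) = −L/2 < 0 and φ(1) = 2L_0 > 0, and φ is increasing on (0, 1), the zero ρ can be computed by simple bisection. The sketch below uses illustrative values L_0 = L = 1 and η = 0.1 (not taken from the paper); for these values φ(1/2) = 0, so ρ = 1/2 and criterion (3.14) can be checked directly:

```python
def smallest_zero_phi(L0, L, tol=1e-12):
    # phi(t) = L0*t^3 + (L0 + L/2)*t^2 - L/2 satisfies
    # phi(0) < 0 < phi(1) and is increasing on (0, 1), so it has a
    # single sign change there and bisection finds its smallest zero.
    phi = lambda t: L0 * t**3 + (L0 + L / 2) * t**2 - L / 2
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rho = smallest_zero_phi(L0=1.0, L=1.0)
print(rho)  # approximately 0.5, since phi(1/2) = 0 for L0 = L = 1
# convergence criterion (3.14): lambda_0 <= rho < 1 - L0*eta
eta = 0.1
assert rho < 1 - 1.0 * eta
```

Together with (3.12) and (3.13), such a computed ρ makes the criterion of Lemma 3.4 directly verifiable from the data L_0, L and η.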
(3.18)

This estimate motivates us to define the recurrent functions f_m^{(1)} by

    f_m^{(1)}(t) = (L/2)t^{2m−1}η + L_0(1 + t + · · · + t^{2m})η − 1.   (3.19)

We shall find a relationship between the recurrent functions f_{m+1}^{(1)} and f_m^{(1)}. By the definition (3.19), we have in turn that

    f_{m+1}^{(1)}(t) = (L/2)t^{2m+1}η + L_0(1 + t + · · · + t^{2m+2})η − 1 − (L/2)t^{2m−1}η − L_0(1 + t + · · · + t^{2m})η + 1 + f_m^{(1)}(t)
    = f_m^{(1)}(t) + ((L/2)t^2 − L/2 + L_0(t^2 + t^3))t^{2m−1}η
    = f_m^{(1)}(t) + φ(t)t^{2m−1}η.   (3.20)

In particular, since φ(ρ) = 0, we have

    f_{m+1}^{(1)}(ρ) = f_m^{(1)}(ρ),   (3.21)

so evidently (3.18) holds if

    f_m^{(1)}(ρ) ≤ 0.   (3.22)

Define f_∞^{(1)}(t) = lim_{m→∞} f_m^{(1)}(t). Then, we have

    f_∞^{(1)}(t) = L_0 η/(1 − t) − 1.   (3.23)

Then, (3.22) holds if

    f_∞^{(1)}(ρ) ≤ 0,   (3.24)

which is true by (3.14). Similarly, (3.16) holds if

    (L/2)ρ^{2m+1}η + ρL_0(1 + ρ + · · · + ρ^{2m+1})η − ρ ≤ 0   (3.25)

or

    f_m^{(2)}(ρ) ≤ 0.   (3.26)

As in (3.20), we get in turn that

    f_{m+1}^{(2)}(t) = (L/2)t^{2m+2}η + L_0(1 + t + · · · + t^{2m+3})η − 1 − (L/2)t^{2m}η − L_0(1 + t + · · · + t^{2m+1})η + 1 + f_m^{(2)}(t)
    = f_m^{(2)}(t) + φ(t)t^{2m}η.   (3.27)

Define f_∞^{(2)}(t) = lim_{m→∞} f_m^{(2)}(t). Then, we get again f_∞^{(2)}(t) = f_∞^{(1)}(t), so

    f_∞^{(2)}(ρ) ≤ 0

can be shown instead of (3.26). But this is true by (3.14). The induction for the items (3.15)-(3.17) is completed. The rest of the proof follows as in Lemma 2.2. □

4. Local Convergence

We shall introduce real parameters and functions to be used in the convergence analysis. Set M = [0, ∞). Suppose that:

(i) the equation ψ_0(t) − 1 = 0 has a smallest zero R_0 ∈ M − {0}, where the function ψ_0 : M → M is continuous and nondecreasing. Set M_0 = [0, R_0);

(ii) the equation ψ_1(t) − 1 = 0 has a smallest zero R_1 ∈ M_0 − {0}, where the function ψ : M_0 → M is continuous and nondecreasing and ψ_1 : M_0 → M is defined by

    ψ_1(t) = ∫_0^1 ψ((1 − θ)t) dθ / (1 − ψ_0(t));

(iii) the equation ψ_0(ψ_1(t)t) − 1 = 0 has a smallest zero R̄_1 ∈ M_0 − {0}.
Set R̄_2 = min{R_0, R̄_1} and M_1 = [0, R̄_2);

(iv) the equation ψ_2(t) − 1 = 0 has a smallest zero R_2 ∈ M_1 − {0}, where

    ψ_2(t) = [ψ_1(ψ_1(t)t) + (ψ_0(t) + h(t, ψ_1(t)t)) ∫_0^1 ω(θψ_1(t)t) dθ / ((1 − ψ_0(t))(1 − ψ_0(ψ_1(t)t)))] ψ_1(t),

and ω : M_1 → M and h : M × M_1 → M are continuous and nondecreasing functions. We shall show that

    R = min{R_1, R_2}   (4.1)

is a convergence radius for method (1.2). Set M_2 = [0, R). These definitions imply that for each t ∈ M_2

    0 ≤ ψ_0(t) < 1,   (4.2)
    0 ≤ ψ_0(ψ_1(t)t) < 1   (4.3)

and

    0 ≤ ψ_i(t) < 1, i = 1, 2.   (4.4)

The conditions (H) shall be used, provided that x∗ is a simple solution of the equation F(x) = 0. Suppose:

(H1) For each x ∈ Ω,

    ‖F'(x∗)^{-1}(F'(x) − F'(x∗))‖ ≤ ψ_0(‖x − x∗‖).

Set Ω_0 = U(x∗, R_0) ∩ Ω.

(H2) For each x, y ∈ Ω_0,

    ‖F'(x∗)^{-1}(F'(y) − F'(x))‖ ≤ ψ(‖y − x‖),
    ‖F'(x∗)^{-1}F'(x)‖ ≤ ω(‖x − x∗‖)

and

    ‖F'(x∗)^{-1}(F'(x∗) − A(x, y))‖ ≤ h(‖x − x∗‖, ‖y − x∗‖).

(H3) U[x∗, R] ⊂ Ω.

Next, we show the local convergence of method (1.2) based on the preceding notation and the conditions (H).

Theorem 4.1. Under the conditions (H), further suppose that x_0 ∈ U(x∗, R) − {x∗}. Then, we conclude lim_{n→∞} x_n = x∗.

Proof. Let v ∈ U(x∗, R) − {x∗}. Using (4.1), (4.2) and (H1), we obtain in turn that

    ‖F'(x∗)^{-1}(F'(v) − F'(x∗))‖ ≤ ψ_0(‖v − x∗‖) ≤ ψ_0(R) < 1,

so

    ‖F'(v)^{-1}F'(x∗)‖ ≤ 1/(1 − ψ_0(‖v − x∗‖)).   (4.5)

In particular, the iterate y_0 is well defined for v = x_0 by the first substep of method (1.2), from which we can also write

    y_0 − x∗ = x_0 − x∗ − F'(x_0)^{-1}F(x_0)
    = (F'(x_0)^{-1}F'(x∗)) (∫_0^1 F'(x∗)^{-1}(F'(x∗ + θ(x_0 − x∗)) − F'(x_0)) dθ) (x_0 − x∗).   (4.6)

By (4.1), (4.4) (for i = 1), (4.5) (for v = x_0), (4.6) and (H2), we get in turn that

    ‖y_0 − x∗‖ ≤ ∫_0^1 ψ((1 − θ)‖x_0 − x∗‖) dθ ‖x_0 − x∗‖ / (1 − ψ_0(‖x_0 − x∗‖)) = ψ_1(‖x_0 − x∗‖)‖x_0 − x∗‖ ≤ ‖x_0 − x∗‖ < R,   (4.7)

so y_0 ∈ U(x∗, R). We also have that (4.5) holds for v = y_0, so the iterate x_1 is well defined, and we can write in turn that

    x_1 − x∗ = y_0 − x∗ − F'(y_0)^{-1}F(y_0) + (F'(y_0)^{-1} − A_0 F'(x_0)^{-1})F(y_0)
    = y_0 − x∗ − F'(y_0)^{-1}F(y_0) + F'(y_0)^{-1}(F'(x_0) − A_0)F'(x_0)^{-1}F(y_0).
(4.8)

In view of (4.1), (4.4) (for i = 2), (4.5) (for v = x_0, y_0), (4.7), (4.8) and (H2), we obtain in turn

    ‖x_1 − x∗‖ ≤ [ψ_1(ψ_1(‖x_0 − x∗‖)‖x_0 − x∗‖) + (ψ_0(‖x_0 − x∗‖) + h(‖x_0 − x∗‖, ‖y_0 − x∗‖)) ∫_0^1 ω(θ‖y_0 − x∗‖) dθ / ((1 − ψ_0(‖y_0 − x∗‖))(1 − ψ_0(‖x_0 − x∗‖)))] ‖y_0 − x∗‖
    ≤ ψ_2(‖x_0 − x∗‖)‖x_0 − x∗‖ ≤ ‖x_0 − x∗‖ < R,   (4.9)

so x_1 ∈ U(x∗, R). Simply switch x_0, y_0, x_1 with x_m, y_m, x_{m+1}, respectively, in the preceding calculations to get

    ‖y_m − x∗‖ ≤ ψ_1(‖x_m − x∗‖)‖x_m − x∗‖ ≤ ‖x_m − x∗‖ < R   (4.10)

and

    ‖x_{m+1} − x∗‖ ≤ ψ_2(‖x_m − x∗‖)‖x_m − x∗‖ ≤ ‖x_m − x∗‖.   (4.11)

Then, by the estimate

    ‖x_{m+1} − x∗‖ ≤ d‖x_m − x∗‖ < R,   (4.12)

where d = ψ_2(‖x_0 − x∗‖) ∈ [0, 1), we get lim_{m→∞} x_m = x∗ and x_{m+1} ∈ U(x∗, R). □

Next, we present a uniqueness result.

Proposition 4.2. Suppose:
(i) there exists a simple solution x∗ of the equation F(x) = 0;
(ii) there exists R∗ ≥ R such that

    ∫_0^1 ψ_0(θR∗) dθ < 1.   (4.13)

Set Ω_2 = Ω ∩ U[x∗, R∗]. Then, the only solution of the equation F(x) = 0 in the region Ω_2 is x∗.

Proof. Consider x̃ ∈ Ω_2 with F(x̃) = 0, and set T = ∫_0^1 F'(x∗ + θ(x̃ − x∗)) dθ. Then, using (H1) and (4.13), we get in turn that

    ‖F'(x∗)^{-1}(T − F'(x∗))‖ ≤ ∫_0^1 ψ_0(θ‖x̃ − x∗‖) dθ ≤ ∫_0^1 ψ_0(θR∗) dθ < 1,

so x̃ = x∗ follows from T^{-1} ∈ L(B_1, B) and T(x̃ − x∗) = F(x̃) − F(x∗) = 0 − 0 = 0. □

5. Numerical Experiments

We provide some examples in this section.

Example 5.1. Define the function q(t) = ξ_0 t + ξ_1 + ξ_2 sin ξ_3 t, x_0 = 0, where ξ_j, j = 0, 1, 2, 3, are parameters. Choose P_0(t) = L_0 t and P(t) = Lt. Notice that L_0 and L are the center-Lipschitz and Lipschitz constants, respectively. Then, from the graph of q(t), clearly for ξ_3 large and ξ_2 small the ratio L_0/L can be arbitrarily small; in particular, L_0/L → 0.

Example 5.2. Let B = B_1 = C[0, 1] and Ω = U[0, 1]. It is well known that the boundary value problem [16]

    ς(0) = 0, ς(1) = 1, ς'' = −ς^3 − σς^2

can be written as the Hammerstein-like nonlinear integral equation

    ς(s) = s + ∫_0^1 Q(s, t)(ς^3(t) + σς^2(t)) dt,

where σ is a parameter and Q(s, t) is the corresponding kernel. Then, define F : Ω → B_1 by

    [F(x)](s) = x(s) − s − ∫_0^1 Q(s, t)(x^3(t) + σx^2(t)) dt.
Choose ς_0(s) = s and Ω = U(ς_0, ρ_0). Then, clearly U(ς_0, ρ_0) ⊂ U(0, ρ_0 + 1), since ‖ς_0‖ = 1. Suppose 2σ < 5. Then, the conditions (C) are satisfied for

    L_0 = (2σ + 3ρ_0 + 6)/8, L = (σ + 6ρ_0 + 3)/4

and η = (1 + σ)/(5 − 2σ). Notice that L_0 < L.

In the last two examples we consider Traub's method (1.3), so we take A(x, y) = I and h(s, t) = 0.

Example 5.3. Consider the motion system

    G_1'(v_1) = e^{v_1}, G_2'(v_2) = (e − 1)v_2 + 1, G_3'(v_3) = 1

with G_1(0) = G_2(0) = G_3(0) = 0. Let G = (G_1, G_2, G_3). Let B = B_1 = ℝ^3, Ω = U[0, 1] and x∗ = (0, 0, 0)^T. Define the function G on Ω for v = (v_1, v_2, v_3)^T by

    G(v) = (e^{v_1} − 1, ((e − 1)/2)v_2^2 + v_2, v_3)^T.

Then, we get

    G'(v) = diag(e^{v_1}, (e − 1)v_2 + 1, 1),

so we can set ψ_0(t) = (e − 1)t, ψ(t) = e^{1/(e−1)}t and ω(t) = e^{1/(e−1)}; moreover, K = e is the Lipschitz constant on Ω, and the radius ρ_T is given in [29, 35]. Then, the radii are

    R_1 = 0.3827 = ρ_A = 2/(2(e − 1) + e^{1/(e−1)}), R_2 = 0.3061 = R, ρ_T = 2/(3K) = 0.2453.

Example 5.4. Consider B = B_1 = C[0, 1], Ω = U(0, 1) and Q : Ω → B_1 defined by

    Q(ς)(x) = ς(x) − 5 ∫_0^1 xθ ς(θ)^3 dθ.   (5.1)

We obtain

    Q'(ς)(ξ)(x) = ξ(x) − 15 ∫_0^1 xθ ς(θ)^2 ξ(θ) dθ for each ξ ∈ Ω.

Then, since x∗ = 0, we set ψ_0(t) = 7.5t, ψ(t) = 15t, ω(t) = 15 and K = 15. Then, the radii are

    R_1 = 0.0667 = ρ_A = 2/(2(7.5) + 15), R_2 = 0.0290 = R, ρ_T = 2/(3K) = 0.0444.

Notice that in the last two examples ρ_A is the radius given by us in [1-7], and it is the largest.

6. Conclusion

We have provided sufficient convergence criteria for the semi-local and local convergence of two-step methods. Upon specializing the parameters involved, we show that, although our majorizing sequence is more general than earlier ones, the convergence criteria are weaker (i.e., the utility of the methods is extended); the upper error estimates are more accurate (i.e., at least as few iterates are required to achieve a predecided error tolerance); and we obtain an at least as large ball containing the solution. These benefits are obtained without additional hypotheses.
According to our new technique, we locate a more accurate domain than before containing the iterates, resulting in more accurate (at least as small) Lipschitz constants. Our theoretical results are further justified using numerical experiments.

References

[1] I.K. Argyros, On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math. 169 (2004), 315-332. https://doi.org/10.1016/j.cam.2004.01.029
[2] I.K. Argyros, Computational Theory of Iterative Methods, Studies in Computational Mathematics 15, C.K. Chui and L. Wuytack (eds.), Elsevier, New York, 2007.
[3] I.K. Argyros, Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, 2008. https://doi.org/10.1007/978-0-387-72743-1
[4] I.K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton's method, J. Complex. 28 (2012), 364-387. https://doi.org/10.1016/j.jco.2011.12.003
[5] I.K. Argyros, S. Hilout, On an improved convergence analysis of Newton's method, Appl. Math. Comput. 225 (2013), 372-386. https://doi.org/10.1016/j.amc.2013.09.049
[6] I.K. Argyros, Á.A. Magreñán, Iterative Methods and Their Dynamics with Applications, CRC Press, New York, 2017.
[7] I.K. Argyros, Á.A. Magreñán, A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018. https://www.elsevier.com/books/a-contemporary-study-of-iterative-methods/magrenan/978-0-12-809214-9
[8] R. Behl, P. Maroju, E. Martinez, S. Singh, A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math. 51 (2020), 439-455. https://doi.org/10.1007/s13226-020-0409-5
[9] E. Cătinaş, The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp. 74 (2005), 291-301. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.96.1713&rep=rep1&type=pdf
[10] X. Chen, T. Yamamoto, Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optim. 10 (1989), 37-48. https://doi.org/10.1080/01630568908816289
[11] J.E. Dennis Jr., On Newton-like methods, Numer. Math. 11 (1968), 324-330. https://doi.org/10.1007/BF02166685
[12] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Philadelphia, 1996 (first published by Prentice-Hall, Englewood Cliffs, NJ, 1983). https://epubs.siam.org/doi/pdf/10.1137/1.9781611971200.fm
[13] P. Deuflhard, G. Heindl, Affine invariant convergence theorems for Newton's method and extensions to related methods, SIAM J. Numer. Anal. 16 (1979), 1-10. https://doi.org/10.1137/0716001
[14] P. Deuflhard, Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer Series in Computational Mathematics 35, Springer-Verlag, Berlin, 2004. https://www.springer.com/gp/book/9783540210993
[15] S. Erden, H. Budak, M.Z. Sarikaya, Fractional Ostrowski type inequalities for functions of bounded variation with two variables, Miskolc Math. Notes 21 (2020), 171-188. https://doi.org/10.18514/MMN.2020.3076
[16] J.A. Ezquerro, M.A. Hernandez, Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, 2018. https://www.springer.com/gp/book/9783319559759
[17] M. Grau-Sánchez, À. Grau, M. Noguera, Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 281 (2011), 2377-2385. https://doi.org/10.1016/j.amc.2011.08.011
[18] J.M. Gutiérrez, Á.A. Magreñán, N. Romero, On the semilocal convergence of Newton-Kantorovich method under center-Lipschitz conditions, Appl. Math. Comput. 221 (2013), 79-88. https://doi.org/10.1016/j.amc.2013.05.078
[19] M.A. Hernandez, N. Romero, On a characterization of some Newton-like methods of R-order at least three, J. Comput. Appl. Math. 183 (2005), 53-66. https://doi.org/10.1016/j.cam.2005.01.001
[20] L.V. Kantorovich, G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982.
[21] Á.A. Magreñán, I.K. Argyros, J.J. Rainer, J.A. Sicilia, Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem. 56 (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y
[22] Á.A. Magreñán, J.M. Gutiérrez, Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math. 275 (2015), 527-538. https://dl.acm.org/doi/abs/10.5555/2946148.2946231
[23] M.Z. Nashed, X. Chen, Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math. 66 (1993), 235-257. https://doi.org/10.1007/BF01385696
[24] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. https://www.elsevier.com/books/iterative-solution-of-nonlinear-equations-in-several-variables/ortega/978-0-12-528550-6
[25] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[26] F.A. Potra, V. Pták, Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics 103, Pitman (Advanced Publishing Program), Boston, MA, 1984. http://www.sciepub.com/reference/50811
[27] P.D. Proinov, General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complex. 25 (2009), 38-62. https://doi.org/10.1016/j.jco.2008.05.006
[28] M.A. Ragusa, Parabolic Herz spaces and their applications, Appl. Math. Lett. 25 (2012), 1270-1273. https://doi.org/10.1063/1.3498444
[29] W.C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations, Banach Center Publ. 3 (1978), 129-142. https://eudml.org/doc/208686
[30] S.M. Shakhno, O.P. Gnatyshyn, On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Comput. 161 (2005), 253-264. https://doi.org/10.1016/j.amc.2003.12.025
[31] S.M. Shakhno, R.P. Iakymchuk, H.P. Yarmola, Convergence analysis of a two step method for the nonlinear least squares problem with decomposition of operator, J. Numer. Appl. Math. 128 (2018), 82-95. https://hal.archives-ouvertes.fr/hal-01857847/document
[32] J.R. Sharma, R.K. Guha, R. Sharma, An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algorithms 62 (2013), 307-323. https://doi.org/10.1007/s11075-012-9585-7
[33] F. Soleymani, T. Lotfi, P. Bakhtiari, A multi-step class of iterative methods for nonlinear systems, Optim. Lett. 8 (2014), 1001-1015. https://doi.org/10.1007/s11590-013-0617-6
[34] J.F. Steffensen, Remarks on iteration, Skand. Aktuarietidskr. 16 (1933), 64-72. https://doi.org/10.1080/03461238.1933.10419209
[35] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964. https://doi.org/10.1017/S0008439500028125
[36] J.F. Traub, A.G. Werschulz, Complexity and Information, Lezioni Lincee [Lincei Lectures], Cambridge University Press, Cambridge, 1998.
[37] J.F. Traub, H. Woźniakowski, Path integration on a quantum computer, Quant. Inf. Process. 1 (2002), 356-388. https://arxiv.org/abs/quant-ph/0109113
[38] T. Yamamoto, A convergence theorem for Newton-like methods in Banach spaces, Numer. Math. 51 (1987), 545-557. https://eudml.org/doc/133212
[39] R. Verma, New Trends in Fractional Programming, Nova Science Publishers, New York, 2019.
[40] L. Xu, Y.-M. Chu, S. Rashid, A.A. El-Deeb, K.S. Nisar, On new unified bounds for a family of functions via fractional q-calculus theory, J. Funct. Spaces 2020 (2020), 4984612. https://doi.org/10.1155/2020/4984612
[41] P.P. Zabrejko, D.F. Nguen, The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates, Numer. Funct. Anal. Optim. 9 (1987), 671-684. https://doi.org/10.1080/01630568708816254