

























































©2023 Ada Academica · https://adac.ee · Eur. J. Math. Anal. 3 (2023) 19 · doi: 10.28924/ada/ma.3.19
Efficient Derivative-Free Class of Seventh Order Method for Non-differentiable Equations

Ioannis K. Argyros1,∗, Samundra Regmi2, Jinny Ann John3, Jayakumar Jayaraman3
1Department of Computing and Mathematical Sciences, Cameron University, Lawton, 73505, OK, USA

iargyros@cameron.edu
2Department of Mathematics, University of Houston, Houston, 77204, TX, USA

sregmi5@uh.edu
3Department of Mathematics, Puducherry Technological University, Pondicherry 605014, India

jinny3@pec.edu, jjayakumar@ptuniv.edu.in
∗Correspondence: iargyros@cameron.edu

Abstract. Many applications from a wide variety of disciplines in the natural sciences, and also in engineering, reduce to solving an equation or a system of equations in a suitably chosen abstract space. For most of these problems the solutions are found iteratively, because their analytic forms are difficult or impossible to obtain. This article develops an efficient, derivative-free, high-convergence-order iterative method. Two types of convergence, local and semi-local, are investigated under ϕ- and ψ-continuity conditions imposed only on the operators appearing in the method. The new technique can also be applied to other methods that use inverses of linear operators or matrices.

1. Introduction
In the areas of Applied Science and Technology, a great number of problems can be resolved by converting them into a nonlinear equation of the form

G(x) = 0 (1)
where G : B ⊂ U → U is Fréchet differentiable, U denotes a complete normed linear space, and B is a non-empty, open and convex set. Normally, the solutions of these nonlinear equations cannot be obtained in closed form. Therefore, the most frequently used solution techniques are iterative in nature. Newton's method is a well-known iterative method for handling nonlinear equations. Recently, with advances in Science and Mathematics, many new iterative methods of higher order have been discovered for handling nonlinear equations and are currently in use [1, 2, 4–8, 10–22]. The computation of derivatives of second and higher order is a great disadvantage for higher-order iterative schemes and is not suitable for practical application. Because of the computation of G′′, the

Received: 3 May 2023.
Key words and phrases. Steffensen-like methods; convergence; Banach space; divided difference.



cubically convergent classical schemes are not appropriate with respect to the cost of computation. We found that many such methods rely on Taylor series expansions to prove convergence results and require the existence of derivatives of order at least one greater than the order of the method [1, 2, 4, 10–19, 21, 22]. Here we consider, for example, a three-step two-parameter family of derivative-free methods with seventh order of convergence for solving systems of nonlinear equations, proposed in [18], which may be expressed in the following formulation: for x_0 ∈ B and each n = 0, 1, 2, . . .,

w_n = x_n + aG(x_n),  s_n = x_n − aG(x_n),  A_n = [w_n, s_n; G],

y_n = x_n − A_n^{-1} G(x_n),

z_n = y_n − A_n^{-1} G(y_n),

u_n = z_n + bG(z_n),  v_n = z_n − bG(z_n),  Q_n = [u_n, v_n; G],

x_{n+1} = z_n − (pI + A_n^{-1}Q_n(qI + A_n^{-1}Q_n(rI + d A_n^{-1}Q_n))) A_n^{-1} G(z_n),    (2)

where a, b, p, q, r, d ∈ R and [·, ·; G] : B × B → W(U), the space of bounded linear operators from U into U. The local convergence analysis of the method (2) is provided in [18] using the Taylor series expansion approach and conditions reaching the eighth derivative of the operator G. These derivatives do not appear in the method (2). The convergence order is shown to be seven provided that p = 17/4, q = −27/4, r = 19/4 and d = −5/4. The conditions on high-order derivatives restrict the applicability of the method (2) to equations for which at least G^{(8)} exists, although the method may converge anyway. Let us consider the toy example for B = [−1, 2] and G defined by

G(t) = t^4 log t + 5t^7 − 5t^6 for t ≠ 0, and G(0) = 0.

It follows from this definition that G(ξ) = G(1) = 0, but G^{(4)} is not bounded on B. Thus, the results in [18] cannot assure that lim_{n→∞} x_n = ξ = 1, although the method does converge to 1. Therefore, there is a need to weaken the conditions. In this article, we use only conditions on the operators appearing in the method (2). Therefore, the method can be utilized to solve non-differentiable equations. Furthermore, the results also demonstrate the isolation of the solution and provide error bounds in advance. This is what is new and what motivates our article: extending the applicability of such methods by taking advantage of weaker conditions. In addition, we also discuss the more interesting case of semi-local convergence. The aforementioned goals can be achieved in a similar way for other iterative methods [1, 2, 4, 10–17, 19, 21, 22]. Furthermore, our error bounds are more precise, and our convergence criteria apply even if the assumptions in the references above are violated. The remainder of the article is organized as follows: the local convergence analysis is provided in Section 2. Majorizing sequences are introduced and analyzed for the semi-local convergence analysis of the method (2) in Section 3. Results demonstrating the isolation of the solution are discussed in Section

https://doi.org/10.28924/ada/ma.3.19


4. Numerical experiments that use the convergence results from the previous sections are described in Section 5. The concluding remarks of Section 6 bring this article to an end.
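To make the scheme tangible before the analysis, here is a minimal Python sketch of the scalar form of method (2) applied to the toy function above. The starting point x_0 = 0.95 is an assumption chosen inside the region of local convergence; the scalar divided difference [w, s; G] = (G(w) − G(s))/(w − s) and the parameter values p = 17/4, q = −27/4, r = 19/4, d = −5/4 with a = b = 1 follow the description of the method:

```python
import math

def G(t):
    # Toy function from the introduction: root xi = 1, G^(4) unbounded on B = [-1, 2].
    return t**4 * math.log(t) + 5*t**7 - 5*t**6 if t != 0 else 0.0

def method2_scalar(G, x, a=1.0, b=1.0, p=17/4, q=-27/4, r=19/4, d=-5/4,
                   tol=1e-12, maxit=50):
    """Scalar instance of the derivative-free seventh-order method (2)."""
    for _ in range(maxit):
        gx = G(x)
        if abs(gx) < tol:
            break
        # A_n = [w_n, s_n; G] with w_n = x_n + a G(x_n), s_n = x_n - a G(x_n)
        A = (G(x + a*gx) - G(x - a*gx)) / (2*a*gx)
        y = x - gx / A
        z = y - G(y) / A
        gz = G(z)
        # Q_n = [u_n, v_n; G] with u_n = z_n + b G(z_n), v_n = z_n - b G(z_n)
        Q = (G(z + b*gz) - G(z - b*gz)) / (2*b*gz) if gz != 0 else A
        t = Q / A                      # scalar analogue of A_n^{-1} Q_n
        T = p + t*(q + t*(r + d*t))    # correction factor of the last sub-step
        x = z - T * gz / A
    return x

root = method2_scalar(G, 0.95)
```

No derivative of G is ever evaluated, yet the iteration settles on ξ = 1, in line with the discussion above.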

2. Convergence 1: Local
Let M = [0, +∞). The following conditions are used:
(C1) There exist continuous and non-decreasing functions (CNF) ϕ0 : M × M → M, δ1 : M → M, δ2 : M → M, a solution ξ ∈ B of the equation G(x) = 0 and a linear operator P with P^{-1} ∈ W(U) such that for each x ∈ B, w = x + aG(x) and s = x − aG(x),

‖P^{-1}([w, s; G] − P)‖ ≤ ϕ0(‖w − ξ‖, ‖s − ξ‖),

‖w − ξ‖ ≤ δ1(‖x − ξ‖)

and

‖s − ξ‖ ≤ δ2(‖x − ξ‖).

(C2) The equation ϕ0(δ1(t), δ2(t)) − 1 = 0 has a smallest positive solution, denoted by ρ0. Let M0 = [0, ρ0) and B0 = B ∩ S(ξ, ρ0).

(C3) There exist CNF ϕ : M0 × M0 × M0 → M, δ3 : M0 → M, δ4 : M0 → M, ϕ1 : M0 × M0 × M0 × M0 → M and ϕ2 : M0 → M such that for each x, z ∈ B0, u = z + bG(z), v = z − bG(z),

‖u − ξ‖ ≤ δ3(‖z − ξ‖),  ‖v − ξ‖ ≤ δ4(‖z − ξ‖),

‖P^{-1}([w, s; G] − [x, ξ; G])‖ ≤ ϕ(‖x − ξ‖, ‖w − ξ‖, ‖s − ξ‖),

‖P^{-1}([w, s; G] − [u, v; G])‖ ≤ ϕ1(‖w − ξ‖, ‖s − ξ‖, ‖u − ξ‖, ‖v − ξ‖)

and

‖P^{-1}([z, ξ; G] − P)‖ ≤ ϕ2(‖z − ξ‖).

(C4) The equations h_i(t) − 1 = 0, i = 1, 2, 3, have smallest solutions r_i ∈ M0 − {0}, respectively, where the functions h_i : M0 → M are defined by

h1(t) = ϕ(t, δ1(t), δ2(t)) / (1 − ϕ0(δ1(t), δ2(t))),

h2(t) = ϕ(h1(t)t, δ1(t), δ2(t)) h1(t) / (1 − ϕ0(δ1(t), δ2(t))),

ε(t) = ϕ1(δ1(t), δ2(t), δ3(h2(t)t), δ4(h2(t)t)) / (1 − ϕ0(δ1(t), δ2(t))),

λ(t) = |p + q + r + d − 1| + |q + 2r + 3d| ε(t) + |r + 3d| ε(t)^2 + |d| ε(t)^3,

h3(t) = [ ϕ(h2(t)t, δ1(t), δ2(t)) / (1 − ϕ0(δ1(t), δ2(t))) + λ(t)(1 + ϕ2(h2(t)t)) / (1 − ϕ0(δ1(t), δ2(t))) ] h2(t).

Set r = min{r_i}. Let M1 = [0, r). It follows from these definitions that for each t ∈ M1,

0 ≤ ϕ0(δ1(t), δ2(t)) < 1,

0 ≤ ε(t),

0 ≤ λ(t)

and

0 ≤ h_i(t) < 1.

Notice that for x_0 ∈ S(ξ, r) − {ξ}, the conditions (C1)-(C2) and (C4) imply

‖P^{-1}([w_0, s_0; G] − P)‖ ≤ ϕ0(‖w_0 − ξ‖, ‖s_0 − ξ‖) ≤ ϕ0(δ1(r), δ2(r)) < 1.

Thus, A_0^{-1} ∈ W(U) by the Banach lemma on invertible operators [3, 9, 10], and the first iterate y_0 is well-defined by the first sub-step of the method (2).
(C5) S[ξ, r] ⊂ B.
The motivation for the development of the functions h_i follows in turn from the estimates

‖A_n^{-1} P‖ ≤ 1 / (1 − ϕ0(‖w_n − ξ‖, ‖s_n − ξ‖)) ≤ 1 / (1 − ϕ0(δ1(‖x_n − ξ‖), δ2(‖x_n − ξ‖))),

y_n − ξ = A_n^{-1}(A_n − [x_n, ξ; G])(x_n − ξ),

‖y_n − ξ‖ ≤ ϕ(‖x_n − ξ‖, ‖w_n − ξ‖, ‖s_n − ξ‖) ‖x_n − ξ‖ / (1 − ϕ0(δ1(‖x_n − ξ‖), δ2(‖x_n − ξ‖)))
≤ h1(‖x_n − ξ‖) ‖x_n − ξ‖ ≤ ‖x_n − ξ‖ < r.

Similarly,

‖z_n − ξ‖ ≤ ϕ(‖y_n − ξ‖, ‖w_n − ξ‖, ‖s_n − ξ‖) ‖y_n − ξ‖ / (1 − ϕ0(δ1(‖x_n − ξ‖), δ2(‖x_n − ξ‖)))
≤ h2(‖x_n − ξ‖) ‖x_n − ξ‖ ≤ ‖x_n − ξ‖,

x_{n+1} − ξ = z_n − ξ − A_n^{-1} G(z_n) − [(p + q + r + d − 1)I + (q + 2r + 3d)(A_n^{-1}Q_n − I) + (r + 3d)(A_n^{-1}Q_n − I)^2 + d(A_n^{-1}Q_n − I)^3] A_n^{-1} G(z_n),

which can be shortened, for

D_n = A_n^{-1}(Q_n − A_n),

T_n = (p + q + r + d − 1)I + (q + 2r + 3d) D_n + (r + 3d) D_n^2 + d D_n^3.

Thus,

x_{n+1} − ξ = A_n^{-1}(A_n − [z_n, ξ; G])(z_n − ξ) − T_n A_n^{-1} G(z_n).
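The regrouping above relies on the polynomial identity pI + qB + rB^2 + dB^3 = (p + q + r + d)I + (q + 2r + 3d)D + (r + 3d)D^2 + dD^3 with D = B − I, where B stands for A_n^{-1}Q_n. A quick numerical sanity check of this identity (the matrix size and random seed are arbitrary choices):

```python
import numpy as np

p, q, r, d = 17/4, -27/4, 19/4, -5/4
rng = np.random.default_rng(0)

B = rng.standard_normal((4, 4))   # stands in for A_n^{-1} Q_n
I = np.eye(4)
D = B - I                         # D_n = A_n^{-1} Q_n - I

# Horner form appearing in the last sub-step of method (2)
lhs = p*I + B @ (q*I + B @ (r*I + d*B))
# Expanded form used in the error analysis
rhs = (p + q + r + d)*I + (q + 2*r + 3*d)*D + (r + 3*d)*(D @ D) + d*(D @ D @ D)
```

The two sides agree to machine precision for any square matrix B.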

But,

‖D_n‖ ≤ ‖A_n^{-1} P‖ ‖P^{-1}(Q_n − A_n)‖
≤ ϕ1(‖w_n − ξ‖, ‖s_n − ξ‖, ‖u_n − ξ‖, ‖v_n − ξ‖) / (1 − ϕ0(‖w_n − ξ‖, ‖s_n − ξ‖)) = ε_n,

‖T_n‖ ≤ |p + q + r + d − 1| + |q + 2r + 3d| ε_n + |r + 3d| ε_n^2 + |d| ε_n^3 = λ_n,

leading to

‖x_{n+1} − ξ‖ ≤ [ ϕ(‖z_n − ξ‖, ‖w_n − ξ‖, ‖s_n − ξ‖) / (1 − ϕ0(δ1(‖x_n − ξ‖), δ2(‖x_n − ξ‖))) + λ_n(1 + ϕ2(‖z_n − ξ‖)) / (1 − ϕ0(δ1(‖x_n − ξ‖), δ2(‖x_n − ξ‖))) ] ‖z_n − ξ‖
≤ h3(‖x_n − ξ‖) ‖x_n − ξ‖ < ‖x_n − ξ‖.

Hence, the iterates {x_n}, {y_n}, {z_n} ⊂ S(ξ, r), and there exists c = h3(‖x_0 − ξ‖) ∈ [0, 1) such that

‖x_{n+1} − ξ‖ ≤ c ‖x_n − ξ‖ < r,

from which it follows that lim_{n→∞} x_n = ξ. Therefore, we achieve the following local convergence result for the method (2).
THEOREM 2.1. Under the assumptions (C1)-(C5), {x_n} ⊂ S(ξ, r) and lim_{n→+∞} x_n = ξ, provided that x_0 ∈ S(ξ, r) − {ξ}.

REMARK 2.2. The functions δ_j, j = 1, 2, 3, 4, are left unspecified in Theorem 2.1. A possible choice for the first function δ1 is motivated by the estimate

w − ξ = x − ξ + aG(x) = (I + a[x, ξ; G])(x − ξ)

= (I + aPP^{-1}([x, ξ; G] − P + P))(x − ξ)

= [(I + aP) + aPP^{-1}([x, ξ; G] − P)](x − ξ),

‖w − ξ‖ ≤ [‖I + aP‖ + |a| ‖P‖ ϕ2(‖x − ξ‖)] ‖x − ξ‖,

using the last condition in (C3) with z = x. Thus, we can choose

δ1(t) = [‖I + aP‖ + |a| ‖P‖ ϕ2(t)] t.

Similarly, we can choose

δ2(t) = [‖I − aP‖ + |a| ‖P‖ ϕ2(t)] t,

δ3(t) = [‖I + bP‖ + |b| ‖P‖ ϕ2(h2(t)t)] h2(t) t

and

δ4(t) = [‖I − bP‖ + |b| ‖P‖ ϕ2(h2(t)t)] h2(t) t.

Two possible choices for the linear operator P are:
the differentiable option, P = G′(ξ), and
the non-differentiable option, P = [x_0, x_{-1}; G]. Other choices are possible [18].

3. Convergence 2: Semi-Local

The role of ξ and "ϕ" is replaced by x_0 and "ψ" as follows. Assume:
(H1) There exist CNF ψ0 : M × M → M, x_0 ∈ B, g1 : M → M, g2 : M → M and a linear operator P such that for x ∈ B,

w = x + aG(x),  s = x − aG(x),

‖w − x_0‖ ≤ g1(‖x − x_0‖),  ‖s − x_0‖ ≤ g2(‖x − x_0‖)

and

‖P^{-1}([w, s; G] − P)‖ ≤ ψ0(‖w − x_0‖, ‖s − x_0‖).

(H2) The equation ψ0(g1(t), g2(t)) − 1 = 0 has a smallest positive solution, denoted by ρ. Let M2 = [0, ρ) and B1 = B ∩ S(x_0, ρ). Notice that ‖P^{-1}([w_0, s_0; G] − P)‖ ≤ ψ0(g1(0), g2(0)) < 1. Thus, A_0^{-1} ∈ W(U), and the iterate y_0 is well-defined by the first sub-step of the method (2).
(H3) There exist CNF g3 : M2 → M, g4 : M2 → M and ψ1, ψ2 : M2 × M2 × M2 × M2 → M such that for each x, y, z ∈ B1, u = z + bG(z), v = z − bG(z),

‖u − x_0‖ ≤ g3(‖z − x_0‖),  ‖v − x_0‖ ≤ g4(‖z − x_0‖),

‖P^{-1}([y, x; G] − [w, s; G])‖ ≤ ψ1(‖x − x_0‖, ‖y − x_0‖, ‖w − x_0‖, ‖s − x_0‖)

and

‖P^{-1}([w, s; G] − [u, v; G])‖ ≤ ψ2(‖w − x_0‖, ‖s − x_0‖, ‖u − x_0‖, ‖v − x_0‖).

Define the real sequence {α_n} by α_0 = 0, β_0 ≥ ‖A_0^{-1} G(x_0)‖ and, for each n = 0, 1, 2, . . .,

γ_n = β_n + ψ1(α_n, β_n, g1(α_n), g2(α_n))(β_n − α_n) / (1 − ψ0(g1(α_n), g2(α_n))),

ε_{n,1} = ψ2(g1(α_n), g2(α_n), g3(γ_n), g4(γ_n)) / (1 − ψ0(g1(α_n), g2(α_n))),

λ_{n,1} = |p + q + r + d| + |q + 2r + 3d| ε_{n,1} + |r + 3d| ε_{n,1}^2 + |d| ε_{n,1}^3,

α_{n+1} = γ_n + λ_{n,1} ψ1(β_n, γ_n, g1(α_n), g2(α_n))(γ_n − β_n) / (1 − ψ0(g1(α_n), g2(α_n))),

δ_{n+1} = ψ1(α_n, α_{n+1}, g1(α_n), g2(α_n))(α_{n+1} − α_n) + (1 + ψ0(g1(α_n), g2(α_n)))(α_{n+1} − β_n),

β_{n+1} = α_{n+1} + δ_{n+1} / (1 − ψ0(g1(α_{n+1}), g2(α_{n+1}))).    (3)

A set of convergence conditions for the sequence {α_n} is given, for each n = 0, 1, 2, . . .:
(H4) ψ0(g1(α_n), g2(α_n)) < 1 and α_n ≤ α < ρ.
It follows from this condition and (3) that 0 ≤ α_n ≤ β_n ≤ γ_n ≤ α_{n+1}, and there exists α∗ ∈ [0, α] such that lim_{n→∞} α_n = α∗.

(H5) S[x_0, α∗] ⊂ B.
As in the local case, the motivation for the introduction of the sequence {α_n} follows in turn from the estimates:
z_n − y_n = −A_n^{-1} G(y_n), but

G(y_n) = G(y_n) − G(x_n) − A_n(y_n − x_n) = ([y_n, x_n; G] − A_n)(y_n − x_n),

so

‖z_n − y_n‖ ≤ ψ1(‖x_n − x_0‖, ‖y_n − x_0‖, ‖w_n − x_0‖, ‖s_n − x_0‖) ‖y_n − x_n‖ / (1 − ψ0(‖w_n − x_0‖, ‖s_n − x_0‖)) ≤ γ_n − β_n,

‖z_n − x_0‖ ≤ ‖z_n − y_n‖ + ‖y_n − x_0‖ ≤ γ_n − β_n + β_n − α_0 = γ_n < α∗,

x_{n+1} − z_n = −T_{n,1} A_n^{-1} G(z_n),

‖x_{n+1} − z_n‖ ≤ λ_{n,1} ψ1(‖y_n − x_0‖, ‖z_n − x_0‖, ‖w_n − x_0‖, ‖s_n − x_0‖) ‖z_n − y_n‖ / (1 − ψ0(‖w_n − x_0‖, ‖s_n − x_0‖)) ≤ α_{n+1} − γ_n,

since

T_{n,1} = (p + q + r + d)I + (q + 2r + 3d) D_n + (r + 3d) D_n^2 + d D_n^3,

‖D_n‖ ≤ ψ2(‖w_n − x_0‖, ‖s_n − x_0‖, ‖u_n − x_0‖, ‖v_n − x_0‖) / (1 − ψ0(‖w_n − x_0‖, ‖s_n − x_0‖)),

‖T_{n,1}‖ ≤ λ_{n,1}

and

‖x_{n+1} − x_0‖ ≤ ‖x_{n+1} − z_n‖ + ‖z_n − x_0‖ ≤ α_{n+1} − γ_n + γ_n − α_0 = α_{n+1} < α∗.

Also,

G(x_{n+1}) = G(x_{n+1}) − G(x_n) − A_n(y_n − x_n)

= G(x_{n+1}) − G(x_n) − A_n(x_{n+1} − x_n) + A_n(x_{n+1} − y_n),

‖P^{-1} G(x_{n+1})‖ ≤ ψ1(‖x_n − x_0‖, ‖x_{n+1} − x_0‖, ‖w_n − x_0‖, ‖s_n − x_0‖) ‖x_{n+1} − x_n‖
+ (1 + ψ0(‖w_n − x_0‖, ‖s_n − x_0‖)) ‖x_{n+1} − y_n‖ = δ̄_{n+1} ≤ δ_{n+1},

‖y_{n+1} − x_{n+1}‖ ≤ ‖A_{n+1}^{-1} P‖ ‖P^{-1} G(x_{n+1})‖
≤ δ̄_{n+1} / (1 − ψ0(‖w_{n+1} − x_0‖, ‖s_{n+1} − x_0‖)) ≤ β_{n+1} − α_{n+1}    (4)



and

‖y_{n+1} − x_0‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x_0‖ ≤ β_{n+1} − α_{n+1} + α_{n+1} − α_0 = β_{n+1} < α∗.

Therefore, the sequence {x_n} is Cauchy in the Banach space U. Hence, there exists ξ = lim_{n→∞} x_n, and by (4), G(ξ) = 0. Then, we achieve the following semi-local convergence result for the method (2).

THEOREM 3.1. Under the conditions (H1)-(H5), the sequence {x_n} converges to a solution ξ ∈ S[x_0, α∗] of the equation G(x) = 0.
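To see how the majorizing sequence (3) behaves, the sketch below iterates the recursion with made-up model functions (affine g_j, linear ψ_j, β_0 = 0.1); these are assumptions chosen only for illustration, not quantities derived from a concrete operator G. When (H4) holds, the α_n increase monotonically to the limit α∗:

```python
# Illustrative run of the majorizing recursion (3); the model functions below
# are assumptions chosen so that condition (H4) holds, not quantities derived
# from a concrete operator G.
p, q, r, d = 17/4, -27/4, 19/4, -5/4
c1, c2, c3, c4 = abs(p + q + r + d), abs(q + 2*r + 3*d), abs(r + 3*d), abs(d)

g1 = g2 = g3 = g4 = lambda t: t + 0.1             # e.g. |a| * ||G(x0)|| = 0.1
psi0 = lambda u, v: 0.2 * (u + v)
psi1 = psi2 = lambda t1, t2, t3, t4: 0.1 * (t1 + t2 + t3 + t4)

alpha, beta = 0.0, 0.1           # alpha_0 = 0, beta_0 >= ||A_0^{-1} G(x_0)||
seq = [alpha]
for n in range(30):
    den = 1 - psi0(g1(alpha), g2(alpha))
    assert den > 0                                # part of condition (H4)
    gamma = beta + psi1(alpha, beta, g1(alpha), g2(alpha)) * (beta - alpha) / den
    eps = psi2(g1(alpha), g2(alpha), g3(gamma), g4(gamma)) / den
    lam = c1 + c2*eps + c3*eps**2 + c4*eps**3
    alpha_new = gamma + lam * psi1(beta, gamma, g1(alpha), g2(alpha)) * (gamma - beta) / den
    delta = (psi1(alpha, alpha_new, g1(alpha), g2(alpha)) * (alpha_new - alpha)
             + (1 + psi0(g1(alpha), g2(alpha))) * (alpha_new - beta))
    beta = alpha_new + delta / (1 - psi0(g1(alpha_new), g2(alpha_new)))
    alpha = alpha_new
    seq.append(alpha)

alpha_star = seq[-1]             # numerical stand-in for the limit alpha*
```

With these particular constants the sequence settles quickly, and S[x_0, α∗] stays well inside the ball of radius ρ.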

REMARK 3.2. A possible choice for the functions g_j, j = 1, 2, 3, 4, follows as in the local case. We have, in turn,

w − x_0 = x − x_0 + a(G(x) − G(x_0) + G(x_0))

= [(I + aP) + aPP^{-1}([x, x_0; G] − P)](x − x_0) + aG(x_0),

leading to the choice

g1(t) = [‖I + aP‖ + |a| ‖P‖ ψ3(t)] t + |a| ‖G(x_0)‖,

provided that for some CNF ψ3 : M2 → M and each x ∈ B,

‖P^{-1}([x, x_0; G] − P)‖ ≤ ψ3(‖x − x_0‖).

Similarly, we define

g2(t) = [‖I − aP‖ + |a| ‖P‖ ψ3(t)] t + |a| ‖G(x_0)‖,

g3(t) = [‖I + bP‖ + |b| ‖P‖ ψ3(t)] t + |b| ‖G(x_0)‖

and

g4(t) = [‖I − bP‖ + |b| ‖P‖ ψ3(t)] t + |b| ‖G(x_0)‖.

The options for P are:
P = G′(x_0) or P = [x_0, x_{-1}; G].
Other options exist [10].

Other options exist [10].

4. Isolation of a Solution
We first present the uniqueness result for the local convergence case.

PROPOSITION 4.1. Assume:
There exists a solution v∗ ∈ S(ξ, ρ2) of the equation G(x) = 0 for some ρ2 > 0;
the last condition in (C3) holds in the ball S(ξ, ρ2), and there exists ρ3 ≥ ρ2 such that

ϕ2(ρ3) < 1.    (5)

Set B3 = B ∩ S[ξ, ρ3]. Then, ξ is the only solution of the equation G(x) = 0 in the set B3.

Proof. Let v∗ ≠ ξ. Then, the divided difference V = [ξ, v∗; G] is well-defined. Using the last condition in (C3) and (5), we obtain in turn that

‖P^{-1}(V − P)‖ ≤ ϕ2(‖v∗ − ξ‖) ≤ ϕ2(ρ3) < 1,

so V^{-1} ∈ W(U), and from the identity

v∗ − ξ = V^{-1}(G(v∗) − G(ξ)) = V^{-1}(0) = 0,

we deduce v∗ = ξ. □
PROPOSITION 4.2. Assume:
There exists a solution v∗ ∈ S(x_0, ρ4) of the equation G(x) = 0 for some ρ4 > 0;
the condition (H1) holds on the ball S(x_0, ρ4), and there exists ρ5 ≥ ρ4 such that

ψ0(ρ4, ρ5) < 1.    (6)

Set B4 = B ∩ S[x_0, ρ5]. Then, v∗ is the only solution of the equation G(x) = 0 in the set B4.

Proof. Let z∗ ∈ B4 with G(z∗) = 0 and z∗ ≠ v∗. Define the linear operator F = [v∗, z∗; G]. Then, by the condition (H1) and (6),

‖P^{-1}(F − P)‖ ≤ ψ0(‖v∗ − x_0‖, ‖z∗ − x_0‖) ≤ ψ0(ρ4, ρ5) < 1,

so F^{-1} ∈ W(U), and from v∗ − z∗ = F^{-1}(G(v∗) − G(z∗)) = 0, again v∗ = z∗. □
REMARK 4.3. (i) The limit point α∗ can be replaced by ρ in the condition (H5).

(ii) Under all the assumptions (H1)-(H5), let v∗ = ξ and ρ4 = α∗ in Proposition 4.2.
5. Experiments

EXAMPLE 5.1. Consider the system of differential equations

G′1(w1) = e^{w1},  G′2(w2) = (e − 1)w2 + 1,  G′3(w3) = 1,

subject to G1(0) = G2(0) = G3(0) = 0. Let G = (G1, G2, G3), U = R^3 and B = U[0, 1]. Then ξ = (0, 0, 0)^T is a root. The function G on B for w = (w1, w2, w3)^T is

G(w) = (e^{w1} − 1, ((e − 1)/2) w2^2 + w2, w3)^T.

This definition gives

G′(w) = diag(e^{w1}, (e − 1)w2 + 1, 1).
Thus, by the definition of G, it follows that G′(ξ) = I. Let P = G′(ξ) and [x, y; G] = ∫_0^1 G′(x + θ(y − x)) dθ. Then, for a = b = 1, the conditions (C1)-(C5) are validated by Remark 2.2, provided that

δ1(t) = (2 + (1/2)(e − 1)t) t,

δ2(t) = (1/2)(e − 1) t^2,

ϕ0(t1, t2) = (1/2)(e − 1)(t1 + t2),

δ3(t) = (2 + (1/2)(e − 1)h2(t)t) h2(t) t,

δ4(t) = (1/2)(e − 1) h2(t)^2 t^2,

ϕ(t1, t2, t3) = (1/2)(e − 1)(t1 + t2 + t3),

ϕ1(t1, t2, t3, t4) = (1/2)(e − 1)(t1 + t2 + t3 + t4)

and

ϕ2(t) = (1/2)(e − 1) t.

By solving, we get ρ0 = 0.426037 and hence M0 = [0,ρ0). The radii are obtained as r1 =
0.204146, r2 = 0.134409 and r3 = 0.126891. Therefore, by the definition r = min{ri}, we get the
radius of convergence, r = 0.126891.
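The values ρ0 and r1 can be reproduced numerically from the functions listed above by bisection (recovering r2 and r3 would additionally require coding h2 and h3, which is omitted here for brevity):

```python
import math

e = math.e
delta1 = lambda t: (2 + 0.5*(e - 1)*t) * t
delta2 = lambda t: 0.5*(e - 1) * t**2
phi0 = lambda t1, t2: 0.5*(e - 1) * (t1 + t2)
phi = lambda t1, t2, t3: 0.5*(e - 1) * (t1 + t2 + t3)

def smallest_root(f, lo, hi, tol=1e-12):
    # Bisection, assuming f(lo) < 0 < f(hi) and a single sign change on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# rho0: smallest positive solution of phi0(delta1(t), delta2(t)) = 1, condition (C2)
rho0 = smallest_root(lambda t: phi0(delta1(t), delta2(t)) - 1, 0.0, 1.0)

# r1: smallest positive solution of h1(t) = 1, condition (C4); the bracket stops
# short of rho0 to keep the denominator of h1 positive
h1 = lambda t: phi(t, delta1(t), delta2(t)) / (1 - phi0(delta1(t), delta2(t)))
r1 = smallest_root(lambda t: h1(t) - 1, 0.0, 0.99 * rho0)
```

Both computed values agree with the figures quoted above to the printed precision.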

REMARK 5.2. A non-differentiable nonlinear system is solved using the method (2), where the divided difference is defined by the 2 × 2 matrix given, for t̄ = (t1, t2) ∈ R × R, t̃ = (t3, t4) ∈ R × R and G = (G1, G2), by

[t̄, t̃; G]_{i,1} = (G_i(t3, t4) − G_i(t1, t4)) / (t3 − t1),  t3 ≠ t1,

and

[t̄, t̃; G]_{i,2} = (G_i(t1, t4) − G_i(t1, t2)) / (t4 − t2),  t4 ≠ t2.

Otherwise, the corresponding entry of [·, ·; G] is set to 0.

The actual example is given below.

EXAMPLE 5.3. Let us solve the nonlinear and non-differentiable system

3t1^2 t2 + t2^2 − 1 + |t1 − 1| = 0,
t1^4 + t1 t2^3 − 1 + |t2| = 0.

Then, we set G = (G1, G2), where

G1(t1, t2) = 3t1^2 t2 + t2^2 − 1 + |t1 − 1|,
G2(t1, t2) = t1^4 + t1 t2^3 − 1 + |t2|.

Choose the initial points (5, 5) and (1, 0). Then, using the aforementioned divided difference and the method (2), we obtain the solution ξ = (x1∗, x2∗) after three iterations, with

x1∗ = 0.894655074977661

and

x2∗ = 0.327826643198819.
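A sketch of this computation: the code below implements method (2) in R^2 with the divided-difference matrix of Remark 5.2. The starting point (0.9, 0.3) and the choice a = b = 1 are assumptions for illustration (the run from the initial points quoted above is not reproduced here); the iteration settles on the same solution:

```python
import numpy as np

def G(t):
    t1, t2 = t
    return np.array([3*t1**2*t2 + t2**2 - 1 + abs(t1 - 1),
                     t1**4 + t1*t2**3 - 1 + abs(t2)])

def divdiff(tb, tt):
    # 2x2 divided difference of Remark 5.2; kept simple, it assumes the
    # component-wise denominators are nonzero (true along this run).
    t1, t2 = tb
    t3, t4 = tt
    M = np.empty((2, 2))
    M[:, 0] = (G([t3, t4]) - G([t1, t4])) / (t3 - t1)
    M[:, 1] = (G([t1, t4]) - G([t1, t2])) / (t4 - t2)
    return M

def method2(x, a=1.0, b=1.0, p=17/4, q=-27/4, r=19/4, d=-5/4,
            tol=1e-12, maxit=25):
    I = np.eye(2)
    for _ in range(maxit):
        gx = G(x)
        if np.linalg.norm(gx) < tol:
            break
        Ainv = np.linalg.inv(divdiff(x + a*gx, x - a*gx))   # A_n^{-1}
        y = x - Ainv @ gx
        z = y - Ainv @ G(y)
        gz = G(z)
        if np.linalg.norm(gz) < tol:
            return z
        AQ = Ainv @ divdiff(z + b*gz, z - b*gz)             # A_n^{-1} Q_n
        T = p*I + AQ @ (q*I + AQ @ (r*I + d*AQ))
        x = z - T @ (Ainv @ gz)
    return x

sol = method2(np.array([0.9, 0.3]))
```

No derivative of G is evaluated, which matters here since the absolute-value terms make G non-differentiable along t1 = 1 and t2 = 0.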

EXAMPLE 5.4. We consider the system of 25 equations

∑_{j=1, j≠i}^{25} x_j − e^{−x_i} = 0,  1 ≤ i ≤ 25,

with initial point x_0 = (1.5, 1.5, . . . , 1.5)^T. Then, applying the method (2), we get the solution ξ = (0.04003162719010837 . . . , 0.04003162719010837 . . . , . . . , 0.04003162719010837 . . .)^T after 4 iterations.
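This system is symmetric: with all components equal to t, every equation reduces to the scalar equation 24t − e^{−t} = 0, so the reported fixed point can be checked with a scalar run of method (2). The damping a = b = 0.01 below is an assumption (the parameters used in the original experiment are not stated); with a = b = 1, the large initial residual (≈ 35.8 at t = 1.5) would place the auxiliary points w_n, s_n far from the root:

```python
import math

def g(t):
    # Diagonal reduction of the 25-equation system: x_i = t for all i.
    return 24*t - math.exp(-t)

def method2_scalar(g, x, a=0.01, b=0.01, p=17/4, q=-27/4, r=19/4, d=-5/4,
                   tol=1e-13, maxit=30):
    for _ in range(maxit):
        gx = g(x)
        if abs(gx) < tol:
            break
        A = (g(x + a*gx) - g(x - a*gx)) / (2*a*gx)   # A_n = [w_n, s_n; g]
        y = x - gx / A
        z = y - g(y) / A
        gz = g(z)
        Q = (g(z + b*gz) - g(z - b*gz)) / (2*b*gz) if gz != 0 else A
        t = Q / A
        x = z - (p + t*(q + t*(r + d*t))) * gz / A
    return x

t_star = method2_scalar(g, 1.5)
```

The computed t_star matches the quoted component 0.04003162719010837 . . . of the solution vector.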

6. Conclusion
A new procedure has been developed to demonstrate both the local and the semi-local convergence analysis of high-convergence-order methods, using only the operators that appear in the method. Previous works have proven convergence based on the existence of high-order derivatives that need not exist for the method to be applicable, which has limited their applicability. The present procedure also offers error bounds and uniqueness results that were not available before. Moreover, the procedure is general in the sense that it does not depend on the method itself. This is why it may be used in the same way to broaden the scope of other higher-order methods, such as single- and multi-step methods [1, 2, 4, 10–17, 19, 21, 22].

References
[1] A. Cordero, J.L. Hueso, E. Martínez, J.R. Torregrosa, A modified Newton-Jarratt's composition, Numer. Algorithms 55 (2010) 87-99. https://doi.org/10.1007/s11075-009-9359-z.
[2] A.M. Ostrowski, Solutions of Equations and Systems of Equations, Academic Press, New York (1960).
[3] F.A. Potra, V. Pták, Nondiscrete Induction and Iterative Processes, Pitman Publishing, Boston (1984).
[4] H. Ren, Q. Wu, W. Bi, A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput. 209 (2009) 206-210. https://doi.org/10.1016/j.amc.2008.12.039.
[5] I.K. Argyros, J.A. John, J. Jayaraman, On the semi-local convergence of a sixth order method in Banach space, J. Numer. Anal. Approx. Theory 51 (2022) 144-154. https://doi.org/10.33993/jnaat512-1284.
[6] I.K. Argyros, J.A. John, J. Jayaraman, S. Regmi, Extended local convergence for the Chebyshev method under the majorant condition, Asian Res. J. Math. 18 (2022) 102-109. https://doi.org/10.9734/arjom/2022/v18i12629.
[7] I.K. Argyros, S. Regmi, J.A. John, J. Jayaraman, Extended convergence for two sixth order methods under the same weak conditions, Foundations 3 (2023) 127-139. https://doi.org/10.3390/foundations3010012.
[8] J.A. John, J. Jayaraman, I.K. Argyros, Local convergence of an optimal method of order four for solving non-linear system, Int. J. Appl. Comput. Math. 8 (2022) 194. https://doi.org/10.1007/s40819-022-01404-3.
[9] J.F. Steffensen, Remarks on iteration, Scand. Actuar. J. 16 (1933) 64-72. https://doi.org/10.1080/03461238.1933.10419209.
[10] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey (1964).
[11] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York (1970). https://doi.org/10.1137/1.9780898719468.fm.
[12] J.R. Sharma, H. Arora, An efficient derivative free iterative method for solving systems of nonlinear equations, Appl. Anal. Discrete Math. 7 (2013) 390-403. https://doi.org/10.2298/AADM130725016S.
[13] J.R. Sharma, H. Arora, A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations, Numer. Algorithms 4 (2014) 917-933. https://doi.org/10.1007/s11075-014-9832-1.
[14] J.R. Sharma, H. Arora, Efficient derivative-free numerical methods for solving systems of nonlinear equations, Comput. Appl. Math. 35 (2016) 269-284. https://doi.org/10.1007/s40314-014-0193-0.
[15] J.R. Sharma, P. Gupta, Efficient family of Traub-Steffensen-type methods for solving systems of nonlinear equations, Adv. Numer. Anal. 2014 (2014) 152187. https://doi.org/10.1155/2014/152187.
[16] M. Grau-Sánchez, À. Grau, M. Noguera, Frozen divided difference scheme for solving systems of nonlinear equations, J. Comput. Appl. Math. 235 (2011) 1739-1743. https://doi.org/10.1016/j.cam.2010.09.019.
[17] M. Grau-Sánchez, M. Noguera, S. Amat, On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods, J. Comput. Appl. Math. 237 (2013) 363-372. https://doi.org/10.1016/j.cam.2012.06.005.
[18] M. Narang, S. Bhatia, V. Kanwar, New efficient derivative free family of seventh-order methods for solving systems of nonlinear equations, Numer. Algorithms 76 (2017) 283-307. https://doi.org/10.1007/s11075-016-0254-0.
[19] Q. Zheng, P. Zhao, F. Huang, A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs, Appl. Math. Comput. 217 (2011) 8196-8203. https://doi.org/10.1016/j.amc.2011.01.095.
[20] S. Regmi, I.K. Argyros, J.A. John, J. Jayaraman, Extended convergence of two multi-step iterative methods, Foundations 3 (2023) 140-153. https://doi.org/10.3390/foundations3010013.
[21] X. Wang, T. Zhang, A family of Steffensen type methods with seventh-order convergence, Numer. Algorithms 62 (2013) 429-444. https://doi.org/10.1007/s11075-012-9597-3.
[22] Z. Liu, Q. Zheng, P. Zhao, A variant of Steffensen's method of fourth-order convergence and its applications, Appl. Math. Comput. 216 (2010) 1978-1983. https://doi.org/10.1016/j.amc.2010.03.028.



