©2021 Ada Academica (https://adac.ee)
Eur. J. Math. Anal. 1 (2021) 106-132
doi: 10.28924/ada/ma.1.106

New Iterative Algorithm for Solving Constrained Convex Minimization Problem and Split Feasibility Problem

Austine Efut Ofem1,∗, Unwana Effiong Udofia2, Donatus Ikechi Igbokwe3

1 Department of Mathematics, University of Uyo, Uyo, Nigeria (ofemaustine@gmail.com)
2 Department of Mathematics and Statistics, Akwa Ibom State University, Ikot Akpaden, Mkpat Enin, Nigeria (unwanaudofia.aksu@yahoo.com)
3 Department of Mathematics, Michael Okpara University of Agriculture, Umudike, Nigeria (igbokwedi@yahoo.com)
∗ Correspondence: ofemaustine@gmail.com

Abstract. The purpose of this paper is to introduce a new iterative algorithm to approximate the fixed points of almost contraction mappings and generalized α-nonexpansive mappings. We show that the proposed algorithm converges weakly and strongly to the fixed points of both classes of mappings. Furthermore, we prove analytically that the new algorithm converges faster than one of the leading iterative algorithms in the literature for almost contraction mappings. Numerical examples are also provided to show that the new algorithm has a better rate of convergence than the S, Picard-S, Thakur and M iterative algorithms for almost contraction mappings and generalized α-nonexpansive mappings. In addition, we show that the proposed algorithm is stable with respect to T and data dependent for almost contraction mappings. Some applications of the main results are considered. The results in this article improve, generalize and extend several relevant results in the literature.

1. Introduction

Fixed point theory is concerned with the solution of the equation

T` = `, (1.1)

where T could be a nonlinear operator defined on a metric space.
Any ` that solves (1.1) is called a fixed point of T, and the collection of all such elements is denoted by F(T).

Received: 10 Sep 2021. Key words and phrases: stability; almost contraction map; generalized α-nonexpansive mapping; data dependence; iterative algorithm; constrained convex minimization problem; split feasibility problem.

Fixed point theory is an area of nonlinear analysis that has become very attractive and interesting, with a large number of applications in various fields of mathematics and other branches of science. It has remained not only a field with huge development, but also a very helpful means for solving various problems in different areas of mathematics. It is well known that fixed point theorems are used to prove the existence and uniqueness of solutions of various mathematical models, such as differential, integral and partial differential equations and variational inequality problems, representing phenomena arising in fields such as steady-state temperature distribution, chemical equations, neutron transport theory, economic theories, epidemics and fluid flow. Fixed point theory is also significant in computer science, image processing, artificial intelligence, decision making, population dynamics, operational research, industrial engineering, pattern recognition, medicine, group health underwriting, management and many other areas.

An existence theorem establishes sufficient conditions under which equation (1.1) has a solution, but does not necessarily show how to find that solution. Iterative approximation of fixed points, on the other hand, is concerned with the computation of sequences which converge to a solution of (1.1).
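This distinction can be made concrete with a small sketch (our own illustration, not part of the paper): Picard iteration x_{s+1} = T(x_s) applied to the contraction T = cos on the reals converges to its unique fixed point (the so-called Dottie number).

```python
import math

def picard(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{s+1} = T(x_s) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x_star = picard(math.cos, 1.0)
print(x_star)                                   # ~0.7390851332151607
print(abs(math.cos(x_star) - x_star) < 1e-10)   # True: x_star solves T(x) = x
```

Existence here is guaranteed by the Banach contraction principle (cos is a contraction on a neighborhood of its fixed point); the iteration is the constructive part.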
When the existence of a fixed point of an operator is guaranteed, obtaining a constructive technique for finding such a fixed point is also paramount.

In 2003, Berinde [6] introduced the concept of weak contraction mappings, also known as almost contraction mappings. He showed that the class of almost contraction mappings is more general than the class of Zamfirescu mappings [41], which includes contraction mappings, Kannan mappings [22] and Chatterjea mappings [10].

Throughout this paper, let Ω denote a Banach space and Λ a nonempty closed convex subset of Ω. Let R stand for the set of real numbers.

Definition 1.1. A mapping T : Λ → Λ is called an almost contraction if there exist a constant γ ∈ (0, 1) and some constant L ≥ 0 such that

‖T`−Tζ‖ ≤ γ‖`−ζ‖ + L‖`−T`‖, ∀`,ζ ∈ Λ. (1.2)

Definition 1.2. A mapping T : Λ → Λ is said to be Suzuki generalized nonexpansive if for all `,ζ ∈ Λ,

(1/2)‖`−T`‖ ≤ ‖`−ζ‖ =⇒ ‖T`−Tζ‖ ≤ ‖`−ζ‖.

Suzuki generalized nonexpansive mappings are also known as mappings satisfying condition (C). In [33], Suzuki showed that this class is more general than the class of nonexpansive mappings and obtained some fixed point and convergence theorems.

Definition 1.3. A mapping T : Λ → Λ is said to be α-nonexpansive if there exists α ∈ [0, 1) such that

‖T`−Tζ‖2 ≤ α‖T`−ζ‖2 + α‖`−Tζ‖2 + (1 − 2α)‖`−ζ‖2

for all `,ζ ∈ Λ.

The class of α-nonexpansive mappings was introduced in 2011 by Aoyama and Kohsaka [3] as a generalization of nonexpansive mappings, who also obtained some convergence results. It is worth noting that nonexpansive mappings are continuous on their domains, but Suzuki generalized nonexpansive mappings and α-nonexpansive mappings need not be continuous (see [33]). Clearly, every nonexpansive mapping is α-nonexpansive with α = 0, and every α-nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive.

Definition 1.4.
A mapping T : Λ → Λ is said to be generalized α-nonexpansive if there exists α ∈ [0, 1) such that

(1/2)‖`−T`‖ ≤ ‖`−ζ‖ implies ‖T`−Tζ‖ ≤ α‖T`−ζ‖ + α‖Tζ − `‖ + (1 − 2α)‖`−ζ‖

for all `,ζ ∈ Λ.

In [26], Pant and Shukla introduced this wider class of nonexpansive-type mappings in Banach spaces, known as generalized α-nonexpansive mappings, which contains the class of Suzuki generalized nonexpansive mappings.

It is well known that the case of contraction mappings is simple and exhibits most of the good behavior under the Picard iterative algorithm. But when we move to the case of nonexpansive mappings, the Picard iterative algorithm need not converge to a fixed point; indeed, the conclusion of the Banach contraction principle fails for nonexpansive mappings even if Λ is compact. As an example, one may consider a rotation of the unit circle in the plane R2.

This limitation of the Picard iterative algorithm gave many researchers in nonlinear analysis room to construct more efficient iterative algorithms for approximating the fixed points of nonexpansive mappings and of classes of mappings more general than the nonexpansive ones.

Some notable iterative algorithms in the existing literature are: Mann [24], Ishikawa [21], Noor [25], Agarwal et al. [2], Abbas and Nazir [1], SP [27], S* [20], CR [12], Normal-S [28], Picard-S [17], Thakur [36], Thakur New [37], M [39], M* [38], Garodia and Uddin [16] and Two-Step Mann [35], among many others.

In 2007, the S iterative algorithm was introduced by Agarwal et al. [2] as follows:

ψ0 ∈ Λ,
µs = (1 −βs)ψs + βsTψs,
ψs+1 = (1 −δs)Tψs + δsTµs, ∀s ≥ 1, (1.3)

where {δs} and {βs} are sequences in [0, 1].

In 2014, the Picard-S iterative algorithm was introduced by Gürsoy and Karakaya [17] as follows:

u0 ∈ Λ,
ϕs = (1 −βs)us + βsTus,
%s = (1 −δs)Tus + δsTϕs,
us+1 = T%s, ∀s ≥ 1, (1.4)

where {δs} and {βs} are sequences in [0, 1].
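For concreteness, one step of the Picard-S scheme (1.4) can be sketched as follows (our own illustration; the mapping T below is a hypothetical contraction, not an example from the paper):

```python
def picard_s_step(T, u, delta, beta):
    """One step of the Picard-S algorithm (1.4)."""
    phi = (1 - beta) * u + beta * T(u)         # inner Mann-type step
    rho = (1 - delta) * T(u) + delta * T(phi)  # S-type step
    return T(rho)                              # final Picard step

T = lambda x: 0.5 * x + 1.0  # contraction with constant 1/2 and fixed point 2
u = 10.0
for _ in range(20):
    u = picard_s_step(T, u, 0.7, 0.7)
print(abs(u - 2.0) < 1e-8)   # True: iterates approach the fixed point
```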
The authors showed, with the aid of an example, that the Picard-S iterative algorithm (1.4) converges at a rate faster than all of the Picard, Mann, Ishikawa, Noor, SP, CR, S, S*, Abbas and Nazir, Normal-S and Two-Step Mann iterative algorithms for contraction mappings.

In 2016, Thakur et al. [37] introduced the following three-step iterative algorithm:

ω0 ∈ Λ,
ρs = (1 −βs)ωs + βsTωs,
vs = T((1 −δs)ωs + δsρs),
ωs+1 = Tvs, ∀s ≥ 1, (1.5)

where {δs} and {βs} are sequences in [0, 1]. With the help of a numerical example, they proved that (1.5) is faster than the Picard, Mann, Ishikawa, Agarwal, Noor and Abbas iterative algorithms for Suzuki generalized nonexpansive mappings.

In 2018, Ullah and Arshad [39] introduced the M iterative algorithm as follows:

m0 ∈ Λ,
cs = (1 −δs)ms + δsTms,
ds = Tcs,
ms+1 = Tds, ∀s ≥ 1, (1.6)

where {δs} is a sequence in [0, 1]. They showed numerically that the M iterative algorithm (1.6) converges faster than the S iterative algorithm (1.3) and the Picard-S iterative algorithm (1.4) for Suzuki generalized nonexpansive mappings. They also noted that the speeds of convergence of the Picard-S iterative algorithm (1.4) and the Thakur iterative algorithm (1.5) are almost the same.

Motivated by the above results, in this paper we construct a new four-step iterative algorithm which outperforms the iterative algorithm (1.6) in terms of convergence rate for almost contraction mappings:

`0 ∈ Λ,
gs = (1 −βs)`s + βsT`s,
ws = (1 −δs)T`s + δsTgs,
ζs = Tws,
`s+1 = Tζs, ∀s ≥ 1, (1.7)

where {δs} and {βs} are sequences in [0, 1].

The purpose of this paper is to prove analytically that our new iterative algorithm converges faster than (1.6) for almost contraction mappings. In order to support the analytical proof, we use some new examples to show that the iterative algorithm (1.7) converges faster than (1.6) and a number of other leading iterative algorithms in the literature.
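A direct sketch of (1.7) next to the M scheme (1.6) (our own illustration with a hypothetical contraction; it mirrors the comparison proved in Section 3):

```python
def new_step(T, x, delta, beta):
    """One step of the proposed four-step algorithm (1.7)."""
    g = (1 - beta) * x + beta * T(x)
    w = (1 - delta) * T(x) + delta * T(g)
    zeta = T(w)
    return T(zeta)

def m_step(T, m, delta):
    """One step of the M algorithm (1.6)."""
    c = (1 - delta) * m + delta * T(m)
    return T(T(c))

T = lambda x: 0.5 * x + 1.0  # almost contraction (indeed a contraction), gamma = 1/2, fixed point z = 2
x = m = 10.0
for _ in range(10):
    x = new_step(T, x, 0.75, 0.75)
    m = m_step(T, m, 0.75)
print(abs(x - 2.0) < abs(m - 2.0))  # True: (1.7) is closer after the same number of steps
```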
We also prove weak and strong convergence of the new iterative algorithm (1.7) to fixed points of generalized α-nonexpansive mappings in uniformly convex Banach spaces. Furthermore, we show that the new iterative algorithm is T-stable and data dependent. Finally, we use the new iterative algorithm (1.7) to solve a constrained convex minimization problem and a split feasibility problem.

2. Preliminaries

The following definitions, propositions and lemmas will be useful in proving our main results.

Definition 2.1. A Banach space Ω is said to be uniformly convex if for each ε ∈ (0, 2] there exists δ > 0 such that for `,ζ ∈ Ω satisfying ‖`‖ ≤ 1, ‖ζ‖ ≤ 1 and ‖`−ζ‖ > ε, we have ‖(`+ζ)/2‖ < 1 −δ.

Definition 2.2. A Banach space Ω is said to satisfy Opial's condition if every sequence {`s} in Ω which converges weakly to ` ∈ Ω satisfies

lim sup_{s→∞} ‖`s − `‖ < lim sup_{s→∞} ‖`s −ζ‖, ∀ζ ∈ Ω with ζ ≠ `.

Definition 2.3. Let {`s} be a bounded sequence in Ω. For ` ∈ Λ ⊂ Ω, we put

r(`,{`s}) = lim sup_{s→∞} ‖`s − `‖.

The asymptotic radius of {`s} relative to Λ is defined by

r(Λ,{`s}) = inf{r(`,{`s}) : ` ∈ Λ}.

The asymptotic center of {`s} relative to Λ is given by

A(Λ,{`s}) = {` ∈ Λ : r(`,{`s}) = r(Λ,{`s})}.

In a uniformly convex Banach space, it is well known that A(Λ,{`s}) consists of exactly one point.

Definition 2.4. [5] Let {as} and {bs} be two sequences of real numbers that converge to a and b, respectively, and assume that the limit

k = lim_{s→∞} |as −a| / |bs −b|

exists. Then:
(R1) if k = 0, we say that {as} converges faster to a than {bs} does to b;
(R2) if 0 < k < ∞, we say that {as} and {bs} have the same rate of convergence.

Definition 2.5. [5] Let {ηs} and {φs} be two fixed point iteration processes that converge to the same point z, and suppose the error estimates

‖ηs −z‖ ≤ as, ∀s ≥ 1,
‖φs −z‖ ≤ bs, ∀s ≥ 1,

are available, where {as} and {bs} are two sequences of positive numbers converging to zero.
Then we say that {ηs} converges faster to z than {φs} does if {as} converges faster than {bs}.

Definition 2.6. [5] Let T, T̃ : Λ → Λ be two operators. We say that T̃ is an approximate operator of T if for some ε > 0 we have ‖T`− T̃`‖ ≤ ε for all ` ∈ Λ.

Definition 2.7. [18] Let {ys} be any sequence in Λ. An iteration process `s+1 = f(T, `s), which converges to a fixed point z, is said to be stable with respect to T if, for εs = ‖ys+1 − f(T, ys)‖, ∀s ∈ N, we have

lim_{s→∞} εs = 0 ⇔ lim_{s→∞} ys = z.

Definition 2.8. [31] A mapping T : Λ → Λ is said to satisfy condition (I) if there exists a nondecreasing function f : [0,∞) → [0,∞) with f(0) = 0 and f(r) > 0 for all r > 0 such that

‖`−T`‖ ≥ f(d(`, F(T))) for all ` ∈ Λ,

where d(`, F(T)) = inf_{z∈F(T)} ‖`−z‖.

Proposition 2.9. [26] Let Λ be a nonempty subset of a Banach space Ω and let T : Λ → Λ be any mapping. Then:
(i) if T is a Suzuki generalized nonexpansive mapping, then T is a generalized α-nonexpansive mapping;
(ii) every generalized α-nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive;
(iii) if T is a generalized α-nonexpansive mapping, then F(T) is closed; moreover, if Ω is strictly convex and Λ is convex, then F(T) is also convex;
(iv) if T is a generalized α-nonexpansive mapping, then the following inequality holds:

‖`−Tζ‖ ≤ ((3 + α)/(1 −α))‖`−T`‖ + ‖`−ζ‖, ∀ `,ζ ∈ Λ.

Lemma 2.10. [26] Let T be a self mapping on a subset Λ of a Banach space Ω which satisfies Opial's condition. Suppose T is a generalized α-nonexpansive mapping. If {`s} converges weakly to z and lim_{s→∞} ‖T`s − `s‖ = 0, then Tz = z. That is, I −T is demiclosed at zero.

Lemma 2.11. [33] Let T be a self mapping on a weakly compact convex subset Λ of a Banach space Ω with the Opial property. If T is a Suzuki generalized nonexpansive mapping, then T has a fixed point.

Lemma 2.12.
[40] Let {θs} and {λs} be nonnegative real sequences satisfying

θs+1 ≤ (1 −σs)θs + λs,

where σs ∈ (0, 1) for all s ∈ N, ∑_{s=0}^∞ σs = ∞ and lim_{s→∞} λs/σs = 0. Then lim_{s→∞} θs = 0.

Lemma 2.13. [32] Let {θs} be a nonnegative real sequence and suppose there exists an s0 ∈ N such that for all s ≥ s0,

θs+1 ≤ (1 −σs)θs + σsλs,

where σs ∈ (0, 1) for all s ∈ N, ∑_{s=0}^∞ σs = ∞ and λs ≥ 0 for all s ∈ N. Then

0 ≤ lim sup_{s→∞} θs ≤ lim sup_{s→∞} λs.

Lemma 2.14. [29] Suppose Ω is a uniformly convex Banach space and {ιs} is any sequence satisfying 0 < p ≤ ιs ≤ q < 1 for all s ≥ 1. Suppose {`s} and {ζs} are any sequences in Ω such that lim sup_{s→∞} ‖`s‖ ≤ x, lim sup_{s→∞} ‖ζs‖ ≤ x and lim_{s→∞} ‖ιs`s + (1 − ιs)ζs‖ = x hold for some x ≥ 0. Then lim_{s→∞} ‖`s −ζs‖ = 0.

3. Rate of Convergence

In this section, we prove that our new iterative algorithm (1.7) converges faster than the iterative algorithm (1.6) for almost contraction mappings.

Theorem 3.1. Let Ω be a Banach space and let Λ be a nonempty closed convex subset of Ω. Let T : Λ → Λ be a mapping satisfying (1.2) with F(T) ≠ ∅. Let {`s} be the iterative algorithm defined by (1.7) with sequences {δs}, {βs} in [0, 1] such that ∑_{s=0}^∞ δsβs = ∞. Then {`s} converges strongly to a unique fixed point of T.

Proof. Let z ∈ F(T). From (1.7), we get

‖gs −z‖ = ‖(1 −βs)`s + βsT`s −z‖
≤ (1 −βs)‖`s −z‖ + βs‖T`s −z‖
≤ (1 −βs)‖`s −z‖ + βsγ‖`s −z‖
= (1 − (1 −γ)βs)‖`s −z‖. (3.1)

Using (1.7) and (3.1), we have

‖ws −z‖ = ‖(1 −δs)T`s + δsTgs −z‖
≤ (1 −δs)‖T`s −z‖ + δs‖Tgs −z‖
≤ γ(1 −δs)‖`s −z‖ + γδs‖gs −z‖
≤ γ(1 −δs)‖`s −z‖ + γδs(1 − (1 −γ)βs)‖`s −z‖
= γ(1 − (1 −γ)δsβs)‖`s −z‖. (3.2)

From (1.7) and (3.2), we obtain

‖ζs −z‖ = ‖Tws −z‖ ≤ γ‖ws −z‖ ≤ γ2(1 − (1 −γ)δsβs)‖`s −z‖. (3.3)

Using (1.7) and (3.3), we have

‖`s+1 −z‖ = ‖Tζs −z‖ ≤ γ‖ζs −z‖ ≤ γ3(1 − (1 −γ)δsβs)‖`s −z‖.
(3.4)

From (3.4), we have the following inequalities:

‖`s+1 −z‖ ≤ γ3(1 − (1 −γ)δsβs)‖`s −z‖,
‖`s −z‖ ≤ γ3(1 − (1 −γ)δs−1βs−1)‖`s−1 −z‖,
...
‖`1 −z‖ ≤ γ3(1 − (1 −γ)δ0β0)‖`0 −z‖. (3.5)

From (3.5), we get

‖`s+1 −z‖ ≤ ‖`0 −z‖ γ^{3(s+1)} ∏_{t=0}^{s} (1 − (1 −γ)δtβt). (3.6)

Since γ ∈ (0, 1) and δt, βt ∈ [0, 1] for all t ∈ N, it follows that 1 − (1 −γ)δtβt ∈ (0, 1]. From classical analysis we know that 1 − x ≤ e^{−x} for all x ∈ [0, 1]; thus from (3.6) we have

‖`s+1 −z‖ ≤ γ^{3(s+1)} ‖`0 −z‖ e^{−(1−γ) ∑_{t=0}^{s} δtβt}. (3.7)

Taking limits on both sides of (3.7) and using ∑_{s=0}^∞ δsβs = ∞, we get lim_{s→∞} ‖`s −z‖ = 0. □

Theorem 3.2. Let Ω be a Banach space and let Λ be a nonempty closed convex subset of Ω. Let T : Λ → Λ be a mapping satisfying (1.2) with F(T) ≠ ∅. For given `0 = m0 ∈ Λ, let {`s} and {ms} be the iterative algorithms defined by (1.7) and (1.6), respectively, with real sequences {δs} and {βs} in [0, 1] such that 0 < δ ≤ δs ≤ 1 and 0 < β ≤ βs ≤ 1 for all s ∈ N and for some δ, β > 0. Then {`s} converges to z faster than {ms} does.

Proof. From (3.6) in Theorem 3.1, together with the assumptions δ ≤ δs and β ≤ βs for all s ∈ N, we have

‖`s+1 −z‖ ≤ ‖`0 −z‖ γ^{3(s+1)} ∏_{t=0}^{s} (1 − (1 −γ)δtβt) ≤ ‖`0 −z‖ γ^{3(s+1)} (1 − (1 −γ)δβ)^{s+1}. (3.8)

Similarly, from (1.6), we get

‖cs −z‖ = ‖(1 −δs)ms + δsTms −z‖
≤ (1 −δs)‖ms −z‖ + δs‖Tms −z‖
≤ (1 −δs)‖ms −z‖ + δsγ‖ms −z‖
= (1 − (1 −γ)δs)‖ms −z‖. (3.9)

Using (1.6) and (3.9), we get

‖ds −z‖ = ‖Tcs −z‖ ≤ γ‖cs −z‖ ≤ γ(1 − (1 −γ)δs)‖ms −z‖. (3.10)

Finally, from (1.6) and (3.10), we obtain

‖ms+1 −z‖ = ‖Tds −z‖ ≤ γ‖ds −z‖ ≤ γ2(1 − (1 −γ)δs)‖ms −z‖. (3.11)

From (3.11), we have the following inequalities:

‖ms+1 −z‖ ≤ γ2(1 − (1 −γ)δs)‖ms −z‖,
‖ms −z‖ ≤ γ2(1 − (1 −γ)δs−1)‖ms−1 −z‖,
...
‖m1 −z‖ ≤ γ2(1 − (1 −γ)δ0)‖m0 −z‖. (3.12)

From (3.12), we get

‖ms+1 −z‖ ≤ ‖m0 −z‖ γ^{2(s+1)} ∏_{t=0}^{s} (1 − (1 −γ)δt).
Since δ ≤ δs for all s ∈ N, we have

‖ms+1 −z‖ ≤ ‖m0 −z‖ γ^{2(s+1)} ∏_{t=0}^{s} (1 − (1 −γ)δt) ≤ ‖m0 −z‖ γ^{2(s+1)} (1 − (1 −γ)δ)^{s+1}.

Set

as = ‖`0 −z‖ γ^{3(s+1)} (1 − (1 −γ)δβ)^{s+1} and bs = ‖m0 −z‖ γ^{2(s+1)} (1 − (1 −γ)δ)^{s+1}. (3.13)

Hence,

as/bs = (‖`0 −z‖/‖m0 −z‖) [γ(1 − (1 −γ)δβ)/(1 − (1 −γ)δ)]^{s+1} → 0 as s → ∞,

since γ(1 − (1 −γ)δβ) < 1 − (1 −γ)δ (note that 1 − (1 −γ)δ ≥ γ and β > 0). This implies that our new iterative algorithm (1.7) converges faster to z than the M iterative algorithm (1.6). □

In order to support the analytical proof in Theorem 3.2 and demonstrate the advantage of the new iterative algorithm (1.7), we give the following example.

Example 3.3. Let Ω = R and Λ = [1, 50]. Let T : Λ → Λ be defined by T(`) = √(`² − 8` + 40). Obviously, 5 is the fixed point of T. Take δs = βs = 3/4 with initial value `1 = 50.

By writing the codes in MATLAB (R2015a) for Example 3.3, we obtain the comparison in Table 1 and Figure 1.

Table 1. Comparison of the convergence behaviour of our new iterative algorithm with the S, Picard-S, Thakur and M iterative algorithms.

Step | S           | Picard-S    | Thakur      | M           | New
1    | 50.00000000 | 50.00000000 | 50.00000000 | 50.00000000 | 50.00000000
2    | 44.16905011 | 40.46668490 | 40.46648707 | 39.77487312 | 36.79428091
3    | 38.40054569 | 31.13624438 | 31.13566491 | 29.79220887 | 24.07958149
4    | 32.71513008 | 22.15533283 | 22.15389446 | 20.25245189 | 12.59321471
5    | 27.14503094 | 13.88761070 | 13.88380778 | 11.71208997 | 5.60936561
6    | 21.74399379 | 7.46589475  | 7.45557218  | 6.06597569  | 5.00355869
7    | 16.60935306 | 5.14776230  | 5.14203305  | 5.02641919  | 5.00001569
8    | 11.93484164 | 5.00348330  | 5.00331403  | 5.00042732  | 5.00000000
9    | 8.12786414  | 5.00007676  | 5.00007301  | 5.00000684  | 5.00000000
10   | 5.84725921  | 5.00000169  | 5.00000161  | 5.00000011  | 5.00000000
11   | 5.12789697  | 5.00000004  | 5.00000004  | 5.00000000  | 5.00000000
12   | 5.01483168  | 5.00000000  | 5.00000000  | 5.00000000  | 5.00000000
13   | 5.00164168  | 5.00000000  | 5.00000000  | 5.00000000  | 5.00000000
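The "New" column of Table 1 can be reproduced with a few lines of code (a sketch of (1.7) applied to Example 3.3; the paper used MATLAB R2015a, Python is shown here for convenience):

```python
import math

def T(x):
    return math.sqrt(x * x - 8 * x + 40)  # Example 3.3; fixed point 5 on [1, 50]

def new_step(x, delta=0.75, beta=0.75):
    g = (1 - beta) * x + beta * T(x)
    w = (1 - delta) * T(x) + delta * T(g)
    return T(T(w))                        # zeta = T(w), next iterate = T(zeta)

x = 50.0
for step in range(2, 9):
    x = new_step(x)
    print(step, round(x, 8))              # compare with the "New" column of Table 1
```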
Figure 1. Graph corresponding to Table 1 (iterate values against iteration number s; plot omitted).

4. Convergence Results

In this section, we prove the weak and strong convergence of our new iterative algorithm (1.7) for generalized α-nonexpansive mappings in the framework of uniformly convex Banach spaces. First, we state and prove the following lemmas, which will be useful in obtaining our main results.

Lemma 4.1. Let Ω be a Banach space and Λ a nonempty closed convex subset of Ω. Let T : Λ → Λ be a generalized α-nonexpansive mapping with F(T) ≠ ∅. If {`s} is the iterative algorithm defined by (1.7), then lim_{s→∞} ‖`s −z‖ exists for all z ∈ F(T).

Proof. Let z ∈ F(T). By Proposition 2.9(ii), every generalized α-nonexpansive mapping with F(T) ≠ ∅ is quasi-nonexpansive. Then, from (1.7), we have

‖gs −z‖ = ‖(1 −βs)`s + βsT`s −z‖
≤ (1 −βs)‖`s −z‖ + βs‖T`s −z‖
≤ (1 −βs)‖`s −z‖ + βs‖`s −z‖
= ‖`s −z‖. (4.1)

Using (1.7) and (4.1), we obtain

‖ws −z‖ = ‖(1 −δs)T`s + δsTgs −z‖
≤ (1 −δs)‖T`s −z‖ + δs‖Tgs −z‖
≤ (1 −δs)‖`s −z‖ + δs‖gs −z‖
≤ ‖`s −z‖. (4.2)

Again, using (1.7) and (4.2), we get

‖ζs −z‖ = ‖Tws −z‖ ≤ ‖ws −z‖ ≤ ‖`s −z‖. (4.3)

Lastly, from (1.7) and (4.3), we have

‖`s+1 −z‖ = ‖Tζs −z‖ ≤ ‖ζs −z‖ ≤ ‖`s −z‖. (4.4)

This implies that {‖`s −z‖} is bounded and nonincreasing for all z ∈ F(T). Hence, lim_{s→∞} ‖`s −z‖ exists. □

Lemma 4.2. Let Ω be a uniformly convex Banach space and Λ a nonempty closed convex subset of Ω. Let T : Λ → Λ be a generalized α-nonexpansive mapping. Suppose {`s} is the iterative algorithm defined by (1.7). Then F(T) ≠ ∅ if and only if {`s} is bounded and lim_{s→∞} ‖T`s −`s‖ = 0.

Proof. Suppose F(T) ≠ ∅ and let z ∈ F(T). Then, by Lemma 4.1, lim_{s→∞} ‖`s −z‖ exists and {`s} is bounded. Put

lim_{s→∞} ‖`s −z‖ = x. (4.5)

From (4.1) and (4.5), we obtain

lim sup_{s→∞} ‖gs −z‖ ≤ lim sup_{s→∞} ‖`s −z‖ = x.
(4.6)

From Proposition 2.9(ii), every generalized α-nonexpansive mapping with F(T) ≠ ∅ is quasi-nonexpansive, so that

lim sup_{s→∞} ‖T`s −z‖ ≤ lim sup_{s→∞} ‖`s −z‖ = x. (4.7)

Again, using (1.7), we get

‖`s+1 −z‖ = ‖Tζs −z‖ ≤ ‖ζs −z‖ = ‖Tws −z‖ ≤ ‖ws −z‖
= ‖(1 −δs)T`s + δsTgs −z‖
≤ (1 −δs)‖T`s −z‖ + δs‖Tgs −z‖
≤ (1 −δs)‖`s −z‖ + δs‖gs −z‖
= ‖`s −z‖ − δs‖`s −z‖ + δs‖gs −z‖. (4.8)

From (4.8), we have

(‖`s+1 −z‖ − ‖`s −z‖)/δs ≤ ‖gs −z‖ − ‖`s −z‖. (4.9)

Since δs ∈ [0, 1], from (4.9) we have

‖`s+1 −z‖ − ‖`s −z‖ ≤ (‖`s+1 −z‖ − ‖`s −z‖)/δs ≤ ‖gs −z‖ − ‖`s −z‖,

which implies ‖`s+1 −z‖ ≤ ‖gs −z‖. Therefore, from (4.5), we obtain

x ≤ lim inf_{s→∞} ‖gs −z‖. (4.10)

From (4.6) and (4.10), we obtain

x = lim_{s→∞} ‖gs −z‖ = lim_{s→∞} ‖(1 −βs)`s + βsT`s −z‖ = lim_{s→∞} ‖βs(T`s −z) + (1 −βs)(`s −z)‖. (4.11)

From (4.5), (4.7), (4.11) and Lemma 2.14, we obtain

lim_{s→∞} ‖T`s − `s‖ = 0. (4.12)

Conversely, assume that {`s} is bounded and lim_{s→∞} ‖T`s −`s‖ = 0. Let z ∈ A(Λ,{`s}). By Definition 2.3 and Proposition 2.9(iv), we have

r(Tz,{`s}) = lim sup_{s→∞} ‖`s −Tz‖ ≤ lim sup_{s→∞} ( ((3 + α)/(1 −α))‖T`s − `s‖ + ‖`s −z‖ ) = lim sup_{s→∞} ‖`s −z‖ = r(z,{`s}). (4.13)

This implies that Tz ∈ A(Λ,{`s}). Since Ω is uniformly convex, A(Λ,{`s}) is a singleton; thus Tz = z. □

Theorem 4.3. Let Ω, Λ, T be as in Lemma 4.2. Suppose that Ω satisfies Opial's condition and F(T) ≠ ∅. Then the sequence {`s} defined by (1.7) converges weakly to a fixed point of T.

Proof. Let z ∈ F(T); then by Lemma 4.1, lim_{s→∞} ‖`s −z‖ exists. We now show that {`s} has a unique weak sequential limit in F(T). Let ` and ζ be weak limits of the subsequences {`sj} and {`sk} of {`s}, respectively. By Lemma 4.2, lim_{s→∞} ‖T`s −`s‖ = 0, and by Lemma 2.10, I −T is demiclosed at zero. It follows that (I −T)` = 0, i.e., ` = T`; similarly, Tζ = ζ.

Next, we show uniqueness.
Suppose ` ≠ ζ. Then, by Opial's property, we obtain

lim_{s→∞} ‖`s − `‖ = lim_{sj→∞} ‖`sj − `‖ < lim_{sj→∞} ‖`sj −ζ‖ = lim_{s→∞} ‖`s −ζ‖ = lim_{sk→∞} ‖`sk −ζ‖ < lim_{sk→∞} ‖`sk − `‖ = lim_{s→∞} ‖`s − `‖, (4.14)

which is a contradiction, so ` = ζ. Hence, {`s} converges weakly to a fixed point of T. □

Theorem 4.4. Let Ω, Λ, T be as in Lemma 4.2. Then the iterative algorithm {`s} defined by (1.7) converges strongly to a point of F(T) if and only if lim inf_{s→∞} d(`s, F(T)) = 0, where d(`s, F(T)) = inf{‖`s −z‖ : z ∈ F(T)}.

Proof. Necessity is obvious. Assume that lim inf_{s→∞} d(`s, F(T)) = 0. From Lemma 4.1, lim_{s→∞} ‖`s −z‖ exists for all z ∈ F(T); it follows that lim_{s→∞} d(`s, F(T)) exists. By hypothesis, lim inf_{s→∞} d(`s, F(T)) = 0, thus lim_{s→∞} d(`s, F(T)) = 0.

Next, we prove that {`s} is a Cauchy sequence in Λ. Since lim_{s→∞} d(`s, F(T)) = 0, given ε > 0 there exist s0 ∈ N and z ∈ F(T) such that ‖`s0 −z‖ < ε/2. Since {‖`s −z‖} is nonincreasing by Lemma 4.1, for all s, n ≥ s0 we have

‖`s −`n‖ ≤ ‖`s −z‖ + ‖`n −z‖ ≤ 2‖`s0 −z‖ < ε.

Hence {`s} is a Cauchy sequence in Λ. Since Λ is closed, there exists a point `∗ ∈ Λ such that lim_{s→∞} `s = `∗. Since lim_{s→∞} d(`s, F(T)) = 0, it follows that d(`∗, F(T)) = 0. Hence `∗ ∈ F(T), since F(T) is closed. □

Theorem 4.5. Let Ω, Λ, T be as in Lemma 4.2. If T satisfies condition (I), then the iterative algorithm {`s} defined by (1.7) converges strongly to a fixed point of T.

Proof. We have shown in Lemma 4.2 that

lim_{s→∞} ‖T`s −`s‖ = 0. (4.15)

Using condition (I) of Definition 2.8 and (4.15), we get

lim_{s→∞} f(d(`s, F(T))) ≤ lim_{s→∞} ‖T`s −`s‖ = 0, (4.16)

i.e., lim_{s→∞} f(d(`s, F(T))) = 0. Since f : [0,∞) → [0,∞) is a nondecreasing function with f(0) = 0 and f(r) > 0 for all r ∈ (0,∞), we have

lim_{s→∞} d(`s, F(T)) = 0. (4.17)

By Theorem 4.4, the sequence {`s} converges strongly to a point of F(T). □
5. Numerical Result

In this section, we provide an example of a generalized α-nonexpansive mapping which is not a Suzuki generalized nonexpansive mapping. With the aid of this example, we show that our new iterative algorithm (1.7) outperforms a number of iterative algorithms in the existing literature in terms of convergence.

Example 5.1. Let Λ = [0,∞) be endowed with the usual norm |·| and let T : Λ → Λ be defined by

T` = 0 if ` ∈ [0, 1/5), and T` = 3`/4 if ` ∈ [1/5, ∞). (5.1)

First, we show that T does not satisfy condition (C). To see this, let ` = 1/15 and ζ = 1/5. Then

(1/2)|`−T`| = 1/30 < 2/15 = |`−ζ|,

but

|T`−Tζ| = 3ζ/4 = 3/20 > 2/15 = |`−ζ|.

Hence T does not satisfy condition (C), which implies that T is not a Suzuki generalized nonexpansive mapping.

Now we show that T is a generalized α-nonexpansive mapping with α = 1/3. We consider the following cases:

Case (a): When `,ζ ∈ [0, 1/5), we have

(1/3)|T`−ζ| + (1/3)|`−Tζ| + (1/3)|`−ζ| ≥ 0 = |T`−Tζ|.

Case (b): When `,ζ ∈ [1/5, ∞), we obtain

(1/3)|T`−ζ| + (1/3)|`−Tζ| + (1/3)|`−ζ|
= (1/3)|3`/4 −ζ| + (1/3)|`− 3ζ/4| + (1/3)|`−ζ|
≥ (1/3)|(3`/4 −ζ) + (`− 3ζ/4)| + (1/3)|`−ζ|
= (7/12)|`−ζ| + (1/3)|`−ζ|
= (11/12)|`−ζ|
≥ (3/4)|`−ζ| = |T`−Tζ|.

Case (c): When ` ∈ [1/5, ∞) and ζ ∈ [0, 1/5), we get

(1/3)|T`−ζ| + (1/3)|`−Tζ| + (1/3)|`−ζ|
= (1/3)|3`/4 −ζ| + (1/3)|`| + (1/3)|`−ζ|
≥ (1/3)|3`/4 −ζ| + (1/3)|`−ζ|
≥ 7`/12 = |T`−Tζ|.

Hence, T is a generalized α-nonexpansive mapping with α = 1/3 and F(T) = {0}.

With the aid of MATLAB (R2015a), we obtain the comparison in Table 2 and Figure 2 for various iterative algorithms with control sequences δs = 0.65, βs = 0.8 and initial guess `1 = 50.

Table 2.
Comparison of the convergence behaviour of our new iterative algorithm with the S, Picard-S, Thakur and M iterative algorithms.

Step | S           | Picard-S    | Thakur      | M           | New
1    | 50.00000000 | 50.00000000 | 50.00000000 | 50.00000000 | 50.00000000
2    | 32.62500000 | 24.46875000 | 24.46875000 | 23.55468750 | 18.35156250
3    | 21.28781250 | 11.97439453 | 11.97439453 | 11.09646606 | 6.73559692
4    | 13.89029766 | 5.85996932  | 5.85996932  | 5.22747581  | 2.47217456
5    | 9.06341922  | 2.86772249  | 2.86772249  | 2.46263118  | 0.00000000
6    | 5.91388104  | 1.40339169  | 1.40339169  | 1.16013016  | 0.00000000
7    | 3.85880738  | 0.00000000  | 0.00000000  | 0.00000000  | 0.00000000
8    | 2.51787182  | 0.00000000  | 0.00000000  | 0.00000000  | 0.00000000
9    | 1.64291136  | 0.00000000  | 0.00000000  | 0.00000000  | 0.00000000

Figure 2. Graph corresponding to Table 2 (iterate values against iteration number s; plot omitted).

From Table 2 and Figure 2, it is clear that our new iterative algorithm (1.7) outperforms a number of existing iterative algorithms.

6. Stability Result

Our aim in this section is to show that our new iterative algorithm (1.7) is T-stable.

Theorem 6.1. Let Ω be a Banach space and Λ a nonempty closed convex subset of Ω. Let T be a mapping satisfying (1.2). Let {`s} be the iterative algorithm defined by (1.7) with sequences {δs}, {βs} in [0, 1] such that ∑_{s=0}^∞ δsβs = ∞. Then the iterative algorithm (1.7) is T-stable.

Proof. Let {ys} be an arbitrary sequence in Λ and suppose that the sequence generated by (1.7) is `s+1 = f(T, `s), converging to a unique point z, and that εs = ‖ys+1 − f(T, ys)‖. To prove that (1.7) is T-stable, we must show that lim_{s→∞} εs = 0 ⇔ lim_{s→∞} ys = z.

Let lim_{s→∞} εs = 0. Then from (1.7) we obtain

‖ys+1 −z‖ = ‖ys+1 − f(T, ys) + f(T, ys) −z‖
≤ ‖ys+1 − f(T, ys)‖ + ‖f(T, ys) −z‖
= εs + ‖T(T((1 −δs)Tys + δsT((1 −βs)ys + βsTys))) −z‖
≤ γ3(1 − (1 −γ)δsβs)‖ys −z‖ + εs.
(6.1)

For all s ≥ 1, put

θs = ‖ys −z‖, σs = (1 −γ)δsβs ∈ (0, 1), λs = εs.

Since lim_{s→∞} εs = 0, we have λs/σs = εs/((1 −γ)δsβs) → 0 as s → ∞. Thus all the conditions of Lemma 2.12 are fulfilled, and hence lim_{s→∞} ys = z.

Conversely, let lim_{s→∞} ys = z. Then we have

εs = ‖ys+1 − f(T, ys)‖ = ‖ys+1 −z + z − f(T, ys)‖
≤ ‖ys+1 −z‖ + ‖f(T, ys) −z‖
≤ ‖ys+1 −z‖ + γ3(1 − (1 −γ)δsβs)‖ys −z‖. (6.2)

From (6.2), it follows that lim_{s→∞} εs = 0. Hence, our new iterative algorithm (1.7) is stable with respect to T. □

7. Data Dependence Result

In this section, we obtain a data dependence result for a mapping T satisfying (1.2) by utilizing our new iterative algorithm (1.7).

Theorem 7.1. Let T̃ be an approximate operator of a mapping T satisfying (1.2). Let {`s} be the iterative sequence generated by (1.7) for T, and define an iterative algorithm as follows:

˜`0 ∈ Λ,
g̃s = (1 −βs)˜`s + βsT̃˜`s,
w̃s = (1 −δs)T̃˜`s + δsT̃g̃s,
ζ̃s = T̃w̃s,
˜`s+1 = T̃ζ̃s, ∀s ≥ 1, (7.1)

where {δs} and {βs} are sequences in [0, 1] satisfying the following conditions:
(i) 1/2 ≤ δsβs, ∀s ∈ N;
(ii) ∑_{s=0}^∞ δsβs = ∞.

If Tz = z and T̃z̃ = z̃ with lim_{s→∞} ˜`s = z̃, then

‖z − z̃‖ ≤ 7ε/(1 −γ),

where ε > 0 is the fixed number from Definition 2.6.

Proof. Using (1.7), (1.2) and (7.1), we have

‖`s+1 − ˜`s+1‖ = ‖Tζs − T̃ζ̃s‖
= ‖Tζs −Tζ̃s + Tζ̃s − T̃ζ̃s‖
≤ ‖Tζs −Tζ̃s‖ + ‖Tζ̃s − T̃ζ̃s‖
≤ γ‖ζs − ζ̃s‖ + L‖ζs −Tζs‖ + ε. (7.2)

From (1.7), (1.2) and (7.1), we have

‖ζs − ζ̃s‖ = ‖Tws − T̃w̃s‖
= ‖Tws −Tw̃s + Tw̃s − T̃w̃s‖
≤ ‖Tws −Tw̃s‖ + ‖Tw̃s − T̃w̃s‖
≤ γ‖ws − w̃s‖ + L‖ws −Tws‖ + ε. (7.3)

Putting (7.3) into (7.2), we have

‖`s+1 − ˜`s+1‖ ≤ γ2‖ws − w̃s‖ + γL‖ws −Tws‖ + γε + L‖ζs −Tζs‖ + ε. (7.4)
Again, using (1.7), (1.2) and (7.1), we get

‖ws − w̃s‖ ≤ (1 −δs)‖T`s − T̃˜`s‖ + δs‖Tgs − T̃g̃s‖
≤ (1 −δs){‖T`s −T˜`s‖ + ‖T˜`s − T̃˜`s‖} + δs{‖Tgs −Tg̃s‖ + ‖Tg̃s − T̃g̃s‖}
≤ (1 −δs){γ‖`s − ˜`s‖ + L‖`s −T`s‖ + ε} + δs{γ‖gs − g̃s‖ + L‖gs −Tgs‖ + ε}. (7.5)

Using (1.7), (1.2) and (7.1), we get

‖gs − g̃s‖ ≤ (1 −βs)‖`s − ˜`s‖ + βs‖T`s − T̃˜`s‖
≤ (1 −βs)‖`s − ˜`s‖ + βs{‖T`s −T˜`s‖ + ‖T˜`s − T̃˜`s‖}
≤ (1 −βs)‖`s − ˜`s‖ + βs{γ‖`s − ˜`s‖ + L‖`s −T`s‖ + ε}
= [1 − (1 −γ)βs]‖`s − ˜`s‖ + βsL‖`s −T`s‖ + βsε. (7.6)

Using (7.6) in (7.5), we have

‖ws − w̃s‖ ≤ (1 −δs){γ‖`s − ˜`s‖ + L‖`s −T`s‖ + ε} + δs{γ[1 − (1 −γ)βs]‖`s − ˜`s‖ + γβsL‖`s −T`s‖ + γβsε}
= γ[1 − (1 −γ)δsβs]‖`s − ˜`s‖ + (1 −δs)L‖`s −T`s‖ + (1 −δs)ε + γδsβsL‖`s −T`s‖ + γδsβsε. (7.7)

Substituting (7.7) into (7.4), we obtain

‖`s+1 − ˜`s+1‖ ≤ γ3[1 − (1 −γ)δsβs]‖`s − ˜`s‖ + γ2(1 −δs)L‖`s −T`s‖ + γ2(1 −δs)ε + γ3δsβsL‖`s −T`s‖ + γ3δsβsε + γL‖ws −Tws‖ + γε + L‖ζs −Tζs‖ + ε. (7.8)

Since γ, γ2, γ3 ∈ (0, 1) and δs, βs ∈ [0, 1], (7.8) becomes

‖`s+1 − ˜`s+1‖ ≤ [1 − (1 −γ)δsβs]‖`s − ˜`s‖ + L‖`s −T`s‖ + δsβsL‖`s −T`s‖ + L‖ws −Tws‖ + L‖ζs −Tζs‖ + δsβsε + 3ε. (7.9)

By our assumption (i) that 1/2 ≤ δsβs, we have

1 −δsβs ≤ δsβs ⇒ 1 = 1 −δsβs + δsβs ≤ δsβs + δsβs = 2δsβs.

This yields

‖`s+1 − ˜`s+1‖ ≤ [1 − (1 −γ)δsβs]‖`s − ˜`s‖ + 3δsβsL‖`s −T`s‖ + 2δsβsL‖ws −Tws‖ + 2δsβsL‖ζs −Tζs‖ + 7δsβsε
= (1 − (1 −γ)δsβs)‖`s − ˜`s‖ + δsβs(1 −γ) × {(3L‖`s −T`s‖ + 2L‖ws −Tws‖ + 2L‖ζs −Tζs‖ + 7ε)/(1 −γ)}. (7.10)

Set

θs = ‖`s − ˜`s‖, σs = (1 −γ)δsβs ∈ (0, 1), λs = (3L‖`s −T`s‖ + 2L‖ws −Tws‖ + 2L‖ζs −Tζs‖ + 7ε)/(1 −γ).

From Theorem 3.1, we know that lim_{s→∞} `s = z, and since Tz = z, it follows that

lim_{s→∞} ‖`s −T`s‖ = lim_{s→∞} ‖ws −Tws‖ = lim_{s→∞} ‖ζs −Tζs‖ = 0.

Using Lemma 2.13, we get

0 ≤ lim sup_{s→∞} ‖`s − ˜`s‖ ≤ 7ε/(1 −γ). (7.11)

Since by Theorem 3.1 we have lim_{s→∞} `s = z, and by hypothesis lim_{s→∞} ˜`s = z̃, it follows from (7.11) that

‖z − z̃‖ ≤ 7ε/(1 −γ).

This completes the proof.
8. Some Applications

In this section, we prove that the sequence generated by our new iterative algorithm (1.7) converges strongly to solutions of the constrained convex minimization problem and the split feasibility problem. We begin with the definitions of some operators that will be important in proving our main results.
Let H be a Hilbert space and let C be a nonempty, closed and convex subset of H.

Definition 8.1. Let T : C → C be a mapping. Then T is said to be:
(i) nonexpansive, if ‖Tℓ − Tζ‖ ≤ ‖ℓ − ζ‖ for all ℓ, ζ ∈ C;
(ii) Lipschitz continuous, if there exists L > 0 such that ‖Tℓ − Tζ‖ ≤ L‖ℓ − ζ‖ for all ℓ, ζ ∈ C;
(iii) monotone, if ⟨Tℓ − Tζ, ℓ − ζ⟩ ≥ 0 for all ℓ, ζ ∈ C; (8.1)
(iv) ϖ-strongly monotone, if there exists ϖ > 0 such that ⟨ℓ − ζ, Tℓ − Tζ⟩ ≥ ϖ‖ℓ − ζ‖² for all ℓ, ζ ∈ C. (8.2)

For any ℓ ∈ H, we define the map PC : H → C satisfying

‖ℓ − PCℓ‖ ≤ ‖ℓ − ζ‖ for all ζ ∈ C.

PC is called the metric projection of H onto C. It is well known that PC is nonexpansive.

8.1. Application to constrained convex minimization problem.
Consider the following constrained convex minimization problem:

minimize {f(ℓ) : ℓ ∈ C}, (8.3)

where f : C → R is a real-valued function. The minimization problem (8.3) is consistent if it has a solution. Throughout this paper, we shall use Γ to stand for the solution set of problem (8.3). It is worth noting that if f is (Fréchet) differentiable, the gradient-projection method (GPM) generates a sequence {ℓs} by using the recursive formula:

ℓ0 ∈ C,
ℓs+1 = PC(ℓs − λ∇f(ℓs)), for all s ≥ 1.
(8.4)

In a more general form, (8.4) can be written as:

ℓ0 ∈ C,
ℓs+1 = PC(ℓs − λs∇f(ℓs)), for all s ≥ 1, (8.5)

where λ and λs are positive real numbers. It is well known that if ∇f is ϖ-strongly monotone and L-Lipschitzian with ϖ, L > 0, then the operator

T = PC(I − λ∇f) (8.6)

is a contraction; thus the sequence {ℓs} in (8.4) converges in norm to the unique minimizer of (8.3). From [14, 30], we know that z ∈ C solves the minimization problem (8.3) if and only if z solves the following fixed point equation:

z = PC(I − λ∇f)z, (8.7)

where λ > 0 is any fixed positive number. The operator T = PC(I − λ∇f) is well known to be nonexpansive (see [14, 30] and the references therein). Several authors have considered different iterative algorithms for constrained convex minimization problems (see [4, 9, 13, 19, 34] and the references therein).
We now give our main result.

Theorem 8.2. Let C be a nonempty closed convex subset of a real Hilbert space H. Suppose that the minimization problem (8.3) is consistent and let Γ denote its solution set. Suppose that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {ℓs} be the sequence generated iteratively by

ℓ0 ∈ C,
gs = (1 − βs)ℓs + βsPC(I − λ∇f)ℓs,
ws = (1 − δs)PC(I − λ∇f)ℓs + δsPC(I − λ∇f)gs,
ζs = PC(I − λ∇f)ws,
ℓs+1 = PC(I − λ∇f)ζs, ∀s ≥ 1, (8.8)

where {δs}, {βs} are sequences in [0, 1] and λ ∈ (0, 2/L). Then the sequence {ℓs} converges strongly to a minimizer z of (8.3).

8.2. Application to split feasibility problem.
For modeling inverse problems which arise from phase retrieval and medical image reconstruction, Censor and Elfving [11] in 1994 first introduced the following split feasibility problem (SFP) in finite-dimensional Hilbert spaces. Let C and Q be nonempty closed convex subsets of the Hilbert spaces H1 and H2, respectively, and let A : H1 → H2 be a bounded linear operator. Then the SFP is formulated as: find z ∈ C such that Az ∈ Q.
(8.9)

The SFP has many applications; it can be used in many areas such as image restoration, computed tomography and radiation therapy treatment planning. Several iterative methods exist for solving split feasibility problems; see, for instance, [8, 15, 30]. In 2002, Byrne [8] applied the forward-backward method, a type of projection gradient method, to approximate the solution of (8.9). The so-called CQ-iterative procedure is defined as follows:

ℓs+1 = PC[I − γA∗(I − PQ)A]ℓs, ∀s ≥ 1, (8.10)

where γ ∈ (0, 2/‖A‖²), with ‖A‖² being the spectral radius of the operator A∗A, PC and PQ denote the projections onto the sets C and Q, respectively, and A∗ : H2 → H1 is the adjoint of A. We assume that the solution set Γ of the SFP (8.9) is nonempty; letting

Γ = {ℓ ∈ C : Aℓ ∈ Q} = C ∩ A⁻¹Q,

Γ is then a nonempty, closed and convex set.

Lemma 8.3. [15] Let T = PC[I − γA∗(I − PQ)A], where γ ∈ (0, 2/‖A‖²). Then T is a nonexpansive map.

Since by our assumption Γ ≠ ∅, it is clear that z ∈ C solves (8.9) if and only if it solves the fixed point equation

z = PC[I − γA∗(I − PQ)A]z, z ∈ C.

Thus, F(T) = Γ = C ∩ A⁻¹Q, i.e., the solution set Γ equals the set of fixed points of the map T. For a more detailed explanation, the reader may see [42, 43]. Now, to prove our main results in this part, we will consider the following scheme:

ℓ0 ∈ C,
gs = (1 − βs)ℓs + βsPC[I − γA∗(I − PQ)A]ℓs,
ws = (1 − δs)PC[I − γA∗(I − PQ)A]ℓs + δsPC[I − γA∗(I − PQ)A]gs,
ζs = PC[I − γA∗(I − PQ)A]ws,
ℓs+1 = PC[I − γA∗(I − PQ)A]ζs, (8.11)

for all s ≥ 1, where {δs}, {βs} are sequences in [0, 1] and γ ∈ (0, 2/‖A‖²).

Theorem 8.4. Let {ℓs} be the sequence iteratively generated by (8.11). Then {ℓs} converges weakly to an element of Γ.

Proof.
Since T = PC[I − γA∗(I − PQ)A] is a nonexpansive map and, by Proposition 2.9, every nonexpansive map is a generalized α-nonexpansive map with α = 0 (i.e., 0-nonexpansive), the conclusion follows from Theorem 4.3. □

Theorem 8.5. Let {ℓs} be the sequence generated by the iterative scheme (8.11). Then {ℓs} converges strongly to an element of Γ if and only if lim inf_{s→∞} d(ℓs, Γ) = 0.

Proof. Since T = PC[I − γA∗(I − PQ)A] is a nonexpansive map, the conclusion follows from Theorem 4.4. □

Theorem 8.6. If T = PC[I − γA∗(I − PQ)A] satisfies condition (I) and {ℓs} is the sequence iteratively defined by (8.11), then {ℓs} converges strongly to a point in Γ.

Proof. The result follows from Theorem 4.5. □

9. Conclusion

In this paper, we have shown numerically and analytically that our new iterative algorithm (1.7) has a better rate of convergence than the M iterative algorithm and some other well-known iterative algorithms in the literature for almost contraction mappings and generalized α-nonexpansive mappings. Also, it is shown that our new iterative algorithm (1.7) is T-stable and data dependent, which makes it reliable. As applications of our new iterative algorithm (1.7), it is used to find the solutions of the constrained convex minimization problem and the split feasibility problem. Owing to the fact that the class of generalized α-nonexpansive mappings considered in our paper is more general than the class of Suzuki generalized nonexpansive mappings considered by Ullah and Arshad [39] for the M iteration, our results generalize and improve the results in Ullah and Arshad [39] and several other related results existing in the literature.

References

[1] M. Abbas and T. Nazir, A new faster iteration process applied to constrained minimization and feasibility problems, Mat. Vesn. 66 (2014), 223–234.
[2] R. P. Agarwal, D. O'Regan and D. R.
Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal. 8 (2007), 61–79.
[3] K. Aoyama and F. Kohsaka, Fixed point theorem for α-nonexpansive mappings in Banach spaces, Nonlinear Anal. 74(13) (2011), 4387–4391.
[4] A. Bejenaru and M. Postolache, Partially projective algorithm for the split feasibility problem with visualization of the solution set, Symmetry, 12 (2020), 608.
[5] V. Berinde, Picard iteration converges faster than Mann iteration for a class of quasicontractive operators, Fixed Point Theory Appl. 2 (2004), 97–105.
[6] V. Berinde, On the approximation of fixed points of weak contractive mappings, Carpath. J. Math. 19 (2003), 7–22.
[7] A. Bielecki, Une remarque sur l'application de la méthode de Banach–Caccioppoli–Tikhonov dans la théorie de l'équation s = f(x, y, z, p, q), Bull. Pol. Acad. Sci. Math. 4 (1956), 265–357.
[8] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Problems, 18(2) (2002), 441–453.
[9] G. Cai and Y. Shehu, An iterative algorithm for fixed point problem and convex minimization problem with applications, Fixed Point Theory Appl. 2015 (2015), 7. https://doi.org/10.1186/s13663-014-0253-6.
[10] S. K. Chatterjea, Fixed point theorems, C. R. Acad. Bulg. Sci. 25 (1972), 727–730.
[11] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms, 8(2–4) (1994), 221–239.
[12] R. Chugh, V. Kumar and S. Kumar, Strong convergence of a new three step iterative scheme in Banach spaces, Amer. J. Comp. Math. 2 (2012), 345–357.
[13] Q. L. Dong, X. H. Li, D. Kitkuan, Y. J. Cho and P. Kumam, Some algorithms for classes of split feasibility problems involving paramonotone equilibria and convex optimization, J. Inequal. Appl. 2019 (2019), 77. https://doi.org/10.1186/s13660-019-2030-x.
[14] C. D. Enyi and M. E.
Soh, Modified gradient-projection algorithm for solving convex minimization problem in Hilbert spaces, IAENG International Journal of Applied Mathematics, 44 (2014), 3.
[15] M. Feng, L. Shi and R. Chen, A new three-step iterative algorithm for solving the split feasibility problem, U.P.B. Sci. Bull., Series A, 81 (2019), 93–102.
[16] C. Garodia and I. Uddin, A new fixed point algorithm for finding the solution of a delay differential equation, AIMS Math. 5 (2020), 3182–3200.
[17] F. Gursoy and V. Karakaya, A Picard–S hybrid type iteration method for solving a differential equation with retarded argument, (2014), arXiv:1403.2546v2.
[18] M. A. Harder, Fixed point theory and stability results for fixed point iteration procedures, PhD thesis, University of Missouri-Rolla, Missouri (2008).
[19] S. He and Z. Zhao, Strong convergence of a relaxed CQ algorithm for the split feasibility problem, J. Inequal. Appl. 2013 (2013), 197.
[20] I. Karahan and M. Ozdemir, A general iterative method for approximation of fixed points and their applications, Adv. Fixed Point Theory, 3 (2013), 510–526.
[21] S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc. 44 (1974), 147–150.
[22] R. Kannan, Some results on fixed points, Bull. Calcutta Math. Soc. 10 (1968), 71–76.
[23] K. Maleknejad and M. Hadizadeh, A new computational method for Volterra–Fredholm integral equations, Comput. Math. Appl. 37 (1999), 1–8.
[24] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc. 4 (1953), 506–510.
[25] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl. 251 (2000), 217–229.
[26] R. Pant and R. Shukla, Approximating fixed points of generalized α-nonexpansive mappings in Banach spaces, Numer. Funct. Anal. Optim. 38(2) (2017), 248–266.
[27] W. Phuengrattana and S.
Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math. 235 (2011), 3006–3014.
[28] D. R. Sahu and A. Petrusel, Strong convergence of iterative methods by strictly pseudocontractive mappings in Banach spaces, Nonlinear Anal. Theory Methods Appl. 74 (2011), 6012–6023.
[29] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, Bull. Aust. Math. Soc. 43 (1991), 153–159.
[30] Y. Shehu, O. S. Iyiola and C. D. Enyi, Iterative approximation of solutions for constrained convex minimization problem, Arab J. Math. 2 (2013), 393–402.
[31] H. F. Senter and W. G. Dotson, Approximating fixed points of nonexpansive mappings, Proc. Amer. Math. Soc. 44 (1974), 375–380.
[32] S. M. Soltuz and T. Grosan, Data dependence for Ishikawa iteration when dealing with contractive-like operators, Fixed Point Theory Appl. 2008 (2008), 242916.
[33] T. Suzuki, Fixed point theorems and convergence theorems for some generalized nonexpansive mappings, J. Math. Anal. Appl. 340 (2008), 1088–1095.
[34] J. Tang and S. Chang, Strong convergence theorem of two-step iterative algorithm for split feasibility problems, J. Inequal. Appl. 2014 (2014), 280. https://doi.org/10.1186/1029-242X-2014-280.
[35] S. Thianwan, Common fixed points of new iterations for two asymptotically nonexpansive nonself-mappings in a Banach space, J. Comput. Appl. Math. 224 (2009), 688–695.
[36] D. Thakur, B. S. Thakur and M. Postolache, A new iterative scheme for numerical reckoning fixed points of Suzuki's generalized nonexpansive mappings, Appl. Math. Comput. 275 (2016), 147–155.
[37] B. S. Thakur, D. Thakur and M. Postolache, A new iterative scheme for numerical reckoning fixed points of Suzuki's generalized nonexpansive mappings, Appl. Math. Comput. 275 (2016), 147–155.
[38] K. Ullah and M.
Arshad, New iteration process and numerical reckoning fixed points in Banach spaces, University Politehnica of Bucharest Scientific Bulletin, Series A, 79 (2017), 113–122.
[39] K. Ullah and M. Arshad, Numerical reckoning fixed points for Suzuki's generalized nonexpansive mappings via new iteration process, Filomat, 32 (2018), 187–196.
[40] X. Weng, Fixed point iteration for local strictly pseudocontractive mapping, Proc. Amer. Math. Soc. 113 (1991), 727–731.
[41] T. Zamfirescu, Fixed point theorems in metric spaces, Arch. Math. (Basel), 23 (1972), 292–298.
[42] H. K. Xu, A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem, Inverse Probl. 22(6) (2006), 2021–2034.
[43] H. K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Probl. 26 (2010), 105018, 17 pp.