CUBO A Mathematical Journal
Vol. 13, No. 03, (185–196). October 2011

Linear Convergence Analysis for General Proximal Point Algorithms Involving (H, η)−Monotonicity Frameworks

Ram U. Verma
Texas A&M University at Kingsville, Department of Mathematics, Kingsville, Texas 78363, USA.
email: verma99@msn.com

ABSTRACT

A general framework for the generalized proximal point algorithm, based on the notion of (H, η)−monotonicity, is developed. The linear convergence analysis for the generalized proximal point algorithm in the context of solving a class of nonlinear variational inclusions is then examined. The results obtained generalize and unify a wide range of problems concerning linear convergence of proximal point algorithms.

RESUMEN

[Translated from the Spanish.] A general framework for the generalized proximal point algorithm, based on the notion of (H, η)−monotonicity, is developed. The linear convergence analysis for the generalized proximal point algorithm is examined in the context of solving a class of nonlinear variational inclusions. The results obtained generalize and unify a wide range of problems in the context of achieving linear convergence of proximal point algorithms.

Keywords. General cocoerciveness, variational inclusions, maximal monotone mapping, (H, η)−monotone mapping, generalized proximal point algorithm, generalized resolvent operator.

Mathematics Subject Classification: 49J40, 47H10, 65B05.

1. Introduction

Based on recent advances on linear convergence for proximal point algorithms, we are concerned with developing a general framework for the generalized proximal point algorithm [7] based on the notion of (H, η)−monotonicity introduced and studied by Fang and Huang [9], and then achieving linear convergence in the context of solving a general variational inclusion problem.
As a result, we establish a significant generalization of linear convergence analysis based on proximal point algorithms and relaxed proximal point algorithms. The generalized version of the relaxed proximal point algorithm generalizes the proximal point algorithm of Rockafellar [25, 26], which in turn generalizes the algorithm of Martinet [18] for convex programming. It appears that a general class of problems of variational character, including minimization or maximization of functions, variational inequality problems, and minimax problems, can be unified into this form.

Let X be a real Hilbert space with the norm ‖·‖ and the inner product 〈·, ·〉. We consider the inclusion problem: find a solution to

0 ∈ M(x), (1.1)

where M : X → 2^X is a set-valued mapping on X.

In this communication, we first present the generalized version of the proximal point algorithm based on the notion of (H, η)−monotonicity, and then apply it to approximate a solution to a general class of nonlinear inclusion problems involving (H, η)−monotone mappings in a Hilbert space setting. Second, we explore the linear convergence analysis for the generalized proximal point algorithms for solving a class of nonlinear inclusions. Several results on generalized cocoercive and generalized resolvent mappings are also demonstrated. The results obtained here are significantly general in nature. For more details, we refer the reader to [1]–[38].

2. (H, η)− Monotonicity and Generalized Cocoerciveness

This section deals with some results based on basic properties of (H, η)−monotonicity, and other results involving (H, η)−monotonicity and generalized cocoerciveness.

Let X denote a real Hilbert space with the norm ‖·‖ and inner product 〈·, ·〉. Let M : X → 2^X be a multivalued mapping on X. We shall denote both the map M and its graph by M, that is, the set {(x, y) : y ∈ M(x)}. This is equivalent to stating that a mapping is any subset M of X × X, and M(x) = {y : (x, y) ∈ M}.
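As a small illustration of the graph viewpoint just described (not from the paper; the finite example and all names are ours), a set-valued mapping on finitely many points can be stored directly as its graph, a set of pairs, and the inclusion 0 ∈ M(x) can then be tested by membership:

```python
# Illustrative sketch: a set-valued mapping stored as its graph
# M = {(x, y)}, with M(x), its domain, and its inverse recovered
# from the graph exactly as in the text.

def image(M, x):
    """M(x) = {y : (x, y) in M}."""
    return {y for (u, y) in M if u == x}

def domain(M):
    """D(M) = {x : M(x) is nonempty}."""
    return {x for (x, _) in M}

def inverse(M):
    """M^{-1} = {(y, x) : (x, y) in M}."""
    return {(y, x) for (x, y) in M}

# Example: M(x) = {x - 1, x + 1} on the points x in {-1, 0, 1}.
M = {(x, x + s) for x in (-1, 0, 1) for s in (-1, 1)}

# Solutions of the inclusion 0 in M(x) are exactly the x with 0 in M(x).
zeros = {x for x in domain(M) if 0 in image(M, x)}
```

Here 0 ∈ M(x) holds precisely when x ± 1 = 0, so the inclusion has the two solutions x = −1 and x = 1; the algorithms of Section 3 are designed to approximate such zeros iteratively in the infinite-dimensional setting.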
If M is single-valued, we shall still use M(x) to represent the unique y such that (x, y) ∈ M rather than the singleton set {y}; the intended interpretation will be clear from the context. The domain of a map M is defined (as its projection onto the first argument) by

D(M) = {x ∈ X : ∃ y ∈ X : (x, y) ∈ M} = {x ∈ X : M(x) ≠ ∅}.

D(M) = X shall denote the full domain of M, and the range of M is defined by

R(M) = {y ∈ X : ∃ x ∈ X : (x, y) ∈ M}.

The inverse M^{-1} of M is {(y, x) : (x, y) ∈ M}. For a real number ρ and a mapping M, let ρM = {(x, ρy) : (x, y) ∈ M}. If L and M are any mappings, we define

L + M = {(x, y + z) : (x, y) ∈ L, (x, z) ∈ M}.

Definition 2.1. Let M : X → 2^X be a multivalued mapping on X. The map M is said to be:

(i) (r)−strongly monotone if there exists a positive constant r such that
〈u∗ − v∗, u − v〉 ≥ r‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

(ii) (1)−strongly monotone if
〈u∗ − v∗, u − v〉 ≥ ‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

(iii) (m)−relaxed monotone if there exists a positive constant m such that
〈u∗ − v∗, u − v〉 ≥ (−m)‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

(iv) (c)−cocoercive if there is a positive constant c such that
〈u∗ − v∗, u − v〉 ≥ c‖u∗ − v∗‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

Definition 2.2. A mapping M : X → 2^X is said to be maximal (m)−relaxed monotone if

(i) M is (m)−relaxed monotone,

(ii) for (u, u∗) ∈ X × X with 〈u∗ − v∗, u − v〉 ≥ (−m)‖u − v‖² ∀ (v, v∗) ∈ graph(M), we have u∗ ∈ M(u).

Definition 2.3. Let M : X → 2^X be a mapping on X. The map M is said to be:

(i) nonexpansive if
‖u∗ − v∗‖ ≤ ‖u − v‖ ∀ (u, u∗), (v, v∗) ∈ graph(M).

(ii) firmly nonexpansive if
‖u∗ − v∗‖² ≤ 〈u∗ − v∗, u − v〉 ∀ (u, u∗), (v, v∗) ∈ graph(M).

(iii) (c)−firmly nonexpansive if there exists a constant c > 0 such that
‖u∗ − v∗‖² ≤ ‖u − v‖² − c‖u − v − (u∗ − v∗)‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).
(iv) (c)−firmly nonexpansive in the inner-product sense if there exists a constant c > 0 such that
‖u∗ − v∗‖² ≤ c〈u∗ − v∗, u − v〉 ∀ (u, u∗), (v, v∗) ∈ graph(M).

Proposition 2.1. Let H : X → X be an (r, η)−strongly monotone mapping and let M : X → 2^X be an (H, η)−monotone mapping. Then the operator (H + ρM)^{-1} is single-valued.

Definition 2.4. Let H : X → X be an (r, η)−strongly monotone mapping and let M : X → 2^X be an (H, η)−monotone mapping. Then the generalized resolvent operator J^M_{ρ,H} : X → X is defined by

J^M_{ρ,H}(u) = (H + ρM)^{-1}(u).

Definition 2.5. Let H, T : X → X be two mappings. The map T is said to be:

(i) monotone with respect to H if
〈T(x) − T(y), H(x) − H(y)〉 ≥ 0 ∀ x, y ∈ X.

(ii) (r)−strongly monotone with respect to H if there exists a positive constant r such that
〈T(x) − T(y), H(x) − H(y)〉 ≥ r‖x − y‖² ∀ x, y ∈ X.

(iii) (γ, α)−relaxed cocoercive with respect to H if there exist positive constants γ and α such that
〈T(x) − T(y), H(x) − H(y)〉 ≥ −γ‖T(x) − T(y)‖² + α‖x − y‖² ∀ x, y ∈ X.

Definition 2.6. A map η : X × X → X is said to be:

(i) monotone if
〈x − y, η(x, y)〉 ≥ 0 ∀ x, y ∈ X.

(ii) (t)−strongly monotone if there exists a positive constant t such that
〈x − y, η(x, y)〉 ≥ t‖x − y‖² ∀ x, y ∈ X.

(iii) (τ)−Lipschitz continuous if there exists a positive constant τ such that
‖η(x, y)‖ ≤ τ‖x − y‖ ∀ x, y ∈ X.

Definition 2.7. Let M : X → 2^X be a multivalued mapping on X, and let η : X × X → X be another mapping. The map M is said to be:

(i) (η)−monotone if
〈u∗ − v∗, η(u, v)〉 ≥ 0 ∀ (u, u∗), (v, v∗) ∈ graph(M).

(ii) (r, η)−strongly monotone if there exists a positive constant r such that
〈u∗ − v∗, η(u, v)〉 ≥ r‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

(iii) (1, η)−strongly monotone if
〈u∗ − v∗, η(u, v)〉 ≥ ‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

(iv) (m, η)−relaxed monotone if there exists a positive constant m such that
〈u∗ − v∗, η(u, v)〉 ≥ (−m)‖u − v‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).
(v) (c, η)−cocoercive if there is a positive constant c such that
〈u∗ − v∗, η(u, v)〉 ≥ c‖u∗ − v∗‖² ∀ (u, u∗), (v, v∗) ∈ graph(M).

Definition 2.8. [9] Let H : X → X be (r)−strongly monotone. The map M : X → 2^X is said to be H−monotone if

(i) M is monotone,

(ii) R(H + ρM) = X for ρ > 0.

Definition 2.9. Let H : X → X be (r, η)−strongly monotone. The map M : X → 2^X is said to be (H, η)−monotone if

(i) M is (η)−monotone,

(ii) R(H + ρM) = X for ρ > 0.

Definition 2.10. A map M : X → 2^X is said to be maximal (η)−monotone if

(i) M is (η)−monotone,

(ii) R(I + ρM) = X for ρ > 0.

3. Generalized Proximal Point Algorithms

This section deals with the generalized proximal point algorithm and its application to the approximation solvability of the inclusion problem (1.1) based on (H, η)−monotonicity. Furthermore, some results connecting (H, η)−monotonicity and the corresponding generalized resolvent operator are established, which generalize the results on generalized cocoerciveness and H−monotonicity [9]; auxiliary results on (H, η)−monotonicity and general maximal monotonicity are obtained as well.

Lemma 3.1. [9] Let X be a real Hilbert space, let H : X → X be (r)−strongly monotone, and let M : X → 2^X be H−monotone. Then the generalized resolvent operator associated with M and defined by

J^M_{ρ,H}(u) = (H + ρM)^{-1}(u) ∀ u ∈ X

is (1/r)−Lipschitz continuous.

Theorem 3.1. Let X be a real Hilbert space, let H : X → X be (r, η)−strongly monotone, and let M : X → 2^X be (H, η)−monotone. Then the following statements are equivalent:

(i) An element u ∈ X is a solution to (1.1).

(ii) For u ∈ X, we have u = J^M_{ρ,H}(H(u)), where J^M_{ρ,H}(u) = (H + ρM)^{-1}(u).

Next, we introduce a generalization of the relaxed proximal point algorithm.

Algorithm 3.1.
Let H : X → X be a single-valued mapping, let M : X → 2^X be a set-valued (H, η)−monotone mapping on X with 0 ∈ range(M), and let the sequence {x^k} be generated by the iterative procedure

H(x^{k+1}) = (1 − α_k)H(x^k) + α_k y^k ∀ k ≥ 0, (3.1)

where y^k satisfies

‖y^k − H(J^M_{ρ_k,H}(H(x^k)))‖ ≤ δ_k‖y^k − H(x^k)‖,

with J^M_{ρ_k,H} = (H + ρ_k M)^{-1}, ∑_{k=0}^∞ δ_k < ∞, δ_k → 0, and {δ_k}, {α_k}, {ρ_k} ⊆ [0, ∞) scalar sequences.

Theorem 3.2. Let X be a real Hilbert space, let H : X → X be (r, η)−strongly monotone, and let M : X → 2^X be (H, η)−monotone. Let η : X × X → X be (τ)−Lipschitz continuous. For an arbitrarily chosen initial point x^0, suppose that the sequence {x^k} is generated by Algorithm 3.1. Suppose that H ∘ J^M_{ρ_k,H} is (λ, η)−cocoercive for λ > 1, that is, for all u, v ∈ X,

〈H(J^M_{ρ_k,H}(H(u))) − H(J^M_{ρ_k,H}(H(v))), η(H(u), H(v))〉 ≥ λ‖H(J^M_{ρ_k,H}(H(u))) − H(J^M_{ρ_k,H}(H(v)))‖². (3.2)

Then the sequence {x^k} converges linearly to a solution of (1.1) with convergence rate

√(1 − 2α[1 − (1 − α)τ/λ − ατ²/(2λ²) − α/2]) < 1,

for λ > 1, τ < λ, ∑_{k=0}^∞ δ_k < ∞, δ_k → 0, and α_k ≤ 1, where {δ_k}, {α_k}, {ρ_k} ⊆ (0, ∞) are scalar sequences and α = lim sup_{k→∞} α_k.

Proof. Suppose that x∗ is a zero of M. From Theorem 3.1, it follows that any solution to (1.1) is a fixed point of J^M_{ρ_k,H} ∘ H. For all k ≥ 0, we set

H(z^{k+1}) = (1 − α_k)H(x^k) + α_k H(J^M_{ρ_k,H}(H(x^k))).
Next, using the appropriate implications of (3.2), we find the estimate

‖H(z^{k+1}) − H(x∗)‖²
= ‖(1 − α_k)H(x^k) + α_k H(J^M_{ρ_k,H}(H(x^k))) − [(1 − α_k)H(x∗) + α_k H(J^M_{ρ_k,H}(H(x∗)))]‖²
= (1 − α_k)²‖H(x^k) − H(x∗)‖² + 2α_k(1 − α_k)〈H(x^k) − H(x∗), H(J^M_{ρ_k,H}(H(x^k))) − H(J^M_{ρ_k,H}(H(x∗)))〉 + α_k²‖H(J^M_{ρ_k,H}(H(x^k))) − H(J^M_{ρ_k,H}(H(x∗)))‖²
≤ (1 − α_k)²‖H(x^k) − H(x∗)‖² + 2α_k(1 − α_k)(τ/λ)‖H(x^k) − H(x∗)‖² + α_k²‖H(J^M_{ρ_k,H}(H(x^k))) − H(J^M_{ρ_k,H}(H(x∗)))‖²
≤ (1 − α_k)²‖H(x^k) − H(x∗)‖² + 2α_k(1 − α_k)(τ/λ)‖H(x^k) − H(x∗)‖² + α_k²(τ²/λ²)‖H(x^k) − H(x∗)‖²
= [1 − 2α_k[1 − (1 − α_k)τ/λ − α_kτ²/(2λ²) − α_k/2]]‖H(x^k) − H(x∗)‖²,

where τ < λ. It follows from the above inequality that

‖H(z^{k+1}) − H(x∗)‖ ≤ θ_k‖H(x^k) − H(x∗)‖, (3.3)

where

θ_k = √(1 − 2α_k[1 − (1 − α_k)τ/λ − α_kτ²/(2λ²) − α_k/2]).

Since H(x^{k+1}) = (1 − α_k)H(x^k) + α_k y^k, it follows that

H(x^{k+1}) − H(x^k) = α_k(y^k − H(x^k)).

On the other hand, we have

‖H(x^{k+1}) − H(z^{k+1})‖ = α_k‖y^k − H(J^M_{ρ_k,H}(H(x^k)))‖ ≤ α_k δ_k‖y^k − H(x^k)‖.

Finally, we estimate

‖H(x^{k+1}) − H(x∗)‖ ≤ ‖H(z^{k+1}) − H(x∗)‖ + ‖H(x^{k+1}) − H(z^{k+1})‖
≤ ‖H(z^{k+1}) − H(x∗)‖ + α_k δ_k‖y^k − H(x^k)‖
≤ ‖H(z^{k+1}) − H(x∗)‖ + δ_k‖H(x^{k+1}) − H(x^k)‖
≤ ‖H(z^{k+1}) − H(x∗)‖ + δ_k[‖H(x^{k+1}) − H(x∗)‖ + ‖H(x^k) − H(x∗)‖]
≤ θ_k‖H(x^k) − H(x∗)‖ + δ_k‖H(x^{k+1}) − H(x∗)‖ + δ_k‖H(x^k) − H(x∗)‖.

This implies that

‖H(x^{k+1}) − H(x∗)‖ ≤ ((θ_k + δ_k)/(1 − δ_k))‖H(x^k) − H(x∗)‖, (3.4)

where

lim sup_{k→∞} (θ_k + δ_k)/(1 − δ_k) = lim sup_{k→∞} θ_k = √(1 − 2α[1 − (1 − α)τ/λ − ατ²/(2λ²) − α/2]) < 1.

It follows that ‖H(x^k) − H(x∗)‖ → 0 as k → ∞. Since H is (r, η)−strongly monotone and η is (τ)−Lipschitz continuous, we have

‖H(x^k) − H(x∗)‖ ≥ (r/τ)‖x^k − x∗‖.

Therefore, we conclude that

(r/τ)‖x^k − x∗‖ ≤ ‖H(x^k) − H(x∗)‖ → 0,

that is, x^k → x∗. This completes the proof.

Received: September 2010. Revised: November 2010.

References

[1] R. P. Agarwal and R. U.
Verma, Inexact A−proximal point algorithm and applications to nonlinear variational inclusion problems, Journal of Optimization Theory and Applications 144 (3) (2010), 431–444.

[2] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.

[3] J. Douglas and H. H. Rachford, On the numerical solution of heat conduction problems in two and three space variables, Transactions of the American Mathematical Society 82 (1956), 421–439.

[4] J. Eckstein, Splitting methods for monotone operators with applications to parallel optimization, Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge, MA, 1989.

[5] J. Eckstein, Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming, Mathematics of Operations Research 18 (1993), 203–226.

[6] J. Eckstein, Approximate iterations in Bregman-function-based proximal algorithms, Mathematical Programming 83 (1998), 113–123.

[7] J. Eckstein and D. P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators, Mathematical Programming 55 (1992), 293–318.

[8] J. Eckstein and M. C. Ferris, Smooth methods of multipliers for complementarity problems, Mathematical Programming 86 (1999), 65–90.

[9] Y. P. Fang and N. J. Huang, H−monotone operators and system of variational inclusions, Communications on Applied Nonlinear Analysis 11 (1) (2004), 93–101.

[10] Y. P. Fang, N. J. Huang, and H. B. Thompson, A new system of variational inclusions with (H, η)−monotone operators, Computers and Mathematics with Applications 49 (2–3) (2005), 365–374.

[11] M. C. Ferris, Finite termination of the proximal point algorithm, Mathematical Programming 50 (1991), 359–366.

[12] M. M.
Jin, Perturbed algorithm and stability for strongly monotone nonlinear quasi-variational inclusions involving H−accretive operators, Mathematical Inequalities & Applications (in press).

[13] M. M. Jin, Iterative algorithm for a new system of nonlinear set-valued variational inclusions involving (H, η)−monotone mappings, Journal of Inequalities in Pure and Applied Mathematics (in press).

[14] H. Y. Lan, A class of nonlinear (A, η)−monotone operator inclusion problems with relaxed cocoercive mappings, Advances in Nonlinear Variational Inequalities 9 (2) (2006), 1–11.

[15] H. Y. Lan, J. H. Kim, and Y. J. Cho, On a new class of nonlinear A−monotone multivalued variational inclusions, Journal of Mathematical Analysis and Applications (in press).

[16] H. Y. Lan, New resolvent operator technique for a class of general nonlinear (A, η)−equations in Banach spaces, International Journal of Applied Mathematical Sciences (in press).

[17] Z. Liu, J. S. Ume, and S. M. Kang, Generalized nonlinear variational-like inequalities in reflexive Banach spaces, Journal of Optimization Theory and Applications 126 (1) (2002), 157–174.

[18] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française Inform. Rech. Opér. Sér. R-3 4 (1970), 154–158.

[19] A. Moudafi, Mixed equilibrium problems: sensitivity analysis and algorithmic aspect, Computers and Mathematics with Applications 44 (2002), 1099–1108.

[20] A. Moudafi, Proximal methods for a class of relaxed nonlinear variational inclusions, Advances in Nonlinear Variational Inequalities 9 (2) (2006), 59–64.

[21] J.-S. Pang, Complementarity problems, in: Handbook of Global Optimization (R. Horst and P. Pardalos, eds.), Kluwer Academic Publishers, Boston, MA, 1995, pp. 271–338.

[22] S. M. Robinson, Composition duality and maximal monotonicity, Mathematical Programming 85 (1999), 1–13.

[23] S. M.
Robinson, Linear convergence of epsilon-subgradient descent methods for a class of convex functions, Mathematical Programming 86 (1999), 41–50.

[24] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific Journal of Mathematics 33 (1970), 209–216.

[25] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM Journal on Control and Optimization 14 (1976), 877–898.

[26] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Mathematics of Operations Research 1 (1976), 97–116.

[27] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer-Verlag, Berlin, 1998.

[28] M. V. Solodov and B. F. Svaiter, An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions, Mathematics of Operations Research 25 (2) (2000), 214–230.

[29] P. Tossings, The perturbed proximal point algorithm and some of its applications, Applied Mathematics and Optimization 29 (1994), 125–159.

[30] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM Journal on Control and Optimization 29 (1991), 119–138.

[31] P. Tseng, Alternating projection-proximal methods for convex programming and variational inequalities, SIAM Journal on Optimization 7 (1997), 951–965.

[32] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM Journal on Control and Optimization 38 (2000), 431–446.

[33] R. U. Verma, New class of nonlinear A−monotone mixed variational inclusion problems and resolvent operator technique, Journal of Computational Analysis and Applications 8 (3) (2006), 275–285.

[34] R. U. Verma, Nonlinear A−monotone variational inclusion systems and the resolvent operator technique, Journal of Applied Functional Analysis 1 (1) (2006), 183–190.

[35] R. U.
Verma, A−monotonicity and its role in nonlinear variational inclusions, Journal of Optimization Theory and Applications 129 (3) (2006), 457–467.

[36] R. U. Verma, A fixed-point theorem involving Lipschitzian generalized pseudo-contractions, Proceedings of the Royal Irish Academy 97A (1) (1997), 83–86.

[37] R. U. Verma, New approach to the η−proximal point algorithm and nonlinear variational inclusion problems, Applied Mathematics and Computation (in press).

[38] E. Zeidler, Nonlinear Functional Analysis and its Applications II/B, Springer-Verlag, New York, 1990.