CUBO A Mathematical Journal
Vol. 10, No. 03, (1–12). October 2008

Proximal-Resolvent Methods for Mixed Variational Inequalities

Muhammad Aslam Noor and Khalida Inayat Noor
Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan
email: noormaslam@hotmail.com
email: khalidanoor@hotmail.com

ABSTRACT

It is well known that mixed variational inequalities are equivalent to a fixed point problem. We use this alternative equivalent formulation to suggest and analyze some new proximal resolvent methods for solving mixed variational inequalities. We also study the convergence of these new methods under some mild conditions. These new iterative methods include the projection, extragradient and proximal methods as special cases. The results obtained in this paper represent a refinement and improvement of previously known results.

Key words and phrases: Variational inequalities, resolvent method, fixed point, proximal methods, convergence.

Math. Subj. Class.: 49J40, 90C30.

1. Introduction

Variational inequalities, which were introduced and considered by Stampacchia [26] in 1964, have had a great impact and influence on the development of almost all branches of pure and applied sciences. It has been shown that variational inequalities provide a simple, unified, natural, novel and general framework for studying a wide class of problems arising in various branches of pure and applied sciences. The ideas and techniques of variational inequalities are being used in a variety of diverse fields and have proved to be innovative and productive; see [1-26] and the references therein. In recent years, variational inequalities have been extended and generalized in several directions. A useful and important generalization of variational inequalities is the mixed variational inequality, or variational inequality of the second kind, which contains a nonlinear term. Due to the presence of the nonlinear term, the projection method and its variant forms, including the Wiener-Hopf equations, cannot be extended to solve the mixed variational inequality. To overcome these drawbacks, some iterative methods have been developed and investigated for solving mixed variational inequalities using the auxiliary principle technique, the origin of which can be traced back to Lions and Stampacchia [10] and Glowinski, Lions and Tremolieres [7]. This technique has been used by several researchers to develop implicit and explicit methods for solving mixed variational inequalities and equilibrium problems; see [14-23] and the references therein. We would like to mention that, if the nonlinear term in the mixed variational inequality is a proper, convex and lower-semicontinuous function, then it has been shown [14] that the mixed variational inequality is equivalent to a fixed point problem.
This alternative equivalent formulation has been used to suggest and analyze several iterative methods for solving mixed variational inequalities. The convergence of these resolvent iterative methods requires that the underlying operator be strongly monotone and Lipschitz continuous. Secondly, it is very difficult to evaluate the resolvent of the operator. These facts have motivated modifications of the resolvent iterative method. Noor [16-20] used the technique of updating the solution to suggest and analyze some modified extraresolvent-type methods. The extraresolvent method overcomes this difficulty by using the technique of updating the solution, which modifies the resolvent method by performing an additional step and a resolvent evaluation at each iteration according to the double resolvent formula. It is worth mentioning that the convergence of the extraresolvent method requires that a solution exists and that the operator be monotone and Lipschitz continuous. When the operator is not Lipschitz continuous, or when the Lipschitz constant is not known, the extraresolvent method and its variant forms require an Armijo-like line search procedure to compute the step size, with a new resolvent evaluation needed for each trial, which leads to expensive computation. To overcome these drawbacks, many authors have suggested and proposed modified methods for solving mixed variational inequalities.

We also note that if the nonlinear term in the mixed variational inequality is the indicator function of a convex set in the Hilbert space, then the mixed variational inequality is equivalent to the classical variational inequality. He et al. [9] and Noor [19] have considered a class of modified proximal-extragradient methods for solving the classical variational inequalities, which uses a better step-size rule (inexactness criterion) and includes the proximal and the extragradient methods as special cases. They have shown that the convergence of this approximate proximal method requires either monotonicity or pseudomonotonicity. It has been shown [9] that these proximal-extragradient methods are numerically efficient and robust. It is worth mentioning that there are no such methods for solving mixed variational inequalities. Inspired and motivated by the research going on in this dynamic field, we suggest some new proximal-resolvent methods for solving mixed variational inequalities. We show that the convergence of our methods requires only pseudomonotonicity, which is a weaker condition than monotonicity. The results obtained in this paper include the results of He et al. [9] and Noor [19] as special cases and improve the convergence criteria of the methods of He et al. [9]. Our results can also be viewed as a significant extension and generalization of previously known methods for solving mixed variational inequalities and related optimization problems.

2. Formulation

Let K be a nonempty closed and convex set in a real Hilbert space H, whose inner product and norm are denoted by 〈·, ·〉 and ‖·‖ respectively. Let T : H −→ H be a nonlinear operator and let S be a nonexpansive operator. Let P_K be the projection of H onto the convex set K. Let ϕ : H −→ R ∪ {+∞} be a continuous function. It is well known that the subdifferential ∂ϕ(·) of a proper, convex and lower-semicontinuous function ϕ is a maximal monotone operator.
We consider the problem of finding u ∈ H such that

〈Tu, v − u〉 + ϕ(v) − ϕ(u) ≥ 0,  ∀v ∈ H,   (1)

which is known as the mixed variational inequality, or variational inequality of the second kind; see Glowinski, Lions and Tremolieres [7] and Lions and Stampacchia [10].

We note that, if the function ϕ in the mixed variational inequality is proper, convex and lower-semicontinuous, then problem (1) is equivalent to finding u ∈ H such that

0 ∈ Tu + ∂ϕ(u),   (2)

which is known as the problem of finding a zero of the sum of two (or more) monotone operators. Here ∂ϕ is the subdifferential of the function ϕ. It is well known that a large class of problems arising in industry, ecology, finance, economics, transportation, network analysis and optimization can be formulated and studied in the framework of (1) and (2); see [3-6, 15-24] and the references therein.

If ϕ is the indicator function of a closed convex set K in H, that is,

ϕ(v) = I_K(v) = { 0, if v ∈ K; +∞, otherwise },

then the mixed variational inequality (1) is equivalent to finding u ∈ K such that

〈Tu, v − u〉 ≥ 0,  ∀v ∈ K,   (3)

which is known as the classical variational inequality, introduced and studied by Stampacchia [26] in 1964. For the numerical methods, formulations and applications of the mixed variational inequalities, readers are advised to see [1-25] and the references therein.

We now recall some well known concepts and results.

Definition 2.1 [3]. For any maximal monotone operator T, the resolvent operator associated with T, for any ρ > 0, is defined as

J_T(u) = (I + ρT)^{-1}(u),  ∀u ∈ H.

It is well known that an operator T is maximal monotone if and only if its resolvent operator J_T is defined everywhere. The resolvent is single-valued and nonexpansive, that is,

‖J_T u − J_T v‖ ≤ ‖u − v‖,  ∀u, v ∈ H.

If ϕ(·) is a proper, convex and lower-semicontinuous function, then its subdifferential ∂ϕ(·) is a maximal monotone operator. In this case, we can define the resolvent operator

J_ϕ(u) = (I + ρ∂ϕ)^{-1}(u),  ∀u ∈ H,

associated with the subdifferential ∂ϕ(·). The resolvent operator J_ϕ has the following useful characterization; see [3, 20].

Lemma 2.1. For a given z ∈ H, u ∈ H satisfies the inequality

〈u − z, v − u〉 + ρϕ(v) − ρϕ(u) ≥ 0,  ∀v ∈ H,   (4)

if and only if u = J_ϕ(z), where J_ϕ = (I + ρ∂ϕ)^{-1} is the resolvent operator.

It is well known that the resolvent operator J_ϕ is nonexpansive, that is,

‖J_ϕ(u) − J_ϕ(v)‖ ≤ ‖u − v‖,  ∀u, v ∈ H.

Lemma 2.1 plays a very important and significant role in the analysis of mixed variational inequalities. If the proper, convex and lower-semicontinuous function ϕ is the indicator function of a closed convex set K in H, then J_ϕ ≡ P_K, the projection operator from H onto the closed convex set K. In this case, Lemma 2.1 reduces to the following well known result, which is known as the projection lemma.

Lemma 2.2. Let K be a closed convex set in H. Then, for a given z ∈ H, u ∈ K satisfies the inequality

〈u − z, v − u〉 ≥ 0,  ∀v ∈ K,

if and only if u = P_K z, where P_K is the projection of H onto the closed convex set K. It is also known that the projection operator P_K is nonexpansive. For the applications of Lemma 2.2, see [1-25].

Definition 2.2. For all u, v ∈ H, the operator T : H −→ H is said to be pseudomonotone with respect to the function ϕ if

〈Tu, v − u〉 + ϕ(v) − ϕ(u) ≥ 0  implies  〈Tv, v − u〉 + ϕ(v) − ϕ(u) ≥ 0.

Note that monotonicity implies pseudomonotonicity, but the converse is not true [5].
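To make the resolvent operator and Lemma 2.1 concrete, the following small Python check (ours, not part of the paper) takes ϕ(v) = λ‖v‖_1 on R^6, for which J_ϕ = (I + ρ∂ϕ)^{-1} is componentwise soft-thresholding. It verifies the characterization (4) on random test points, checks nonexpansiveness, and notes that for ϕ = I_K (a box here) the resolvent is simply the projection P_K, so that Lemma 2.1 reduces to Lemma 2.2. The choice of ϕ, the box, and the sampling are illustrative assumptions only.

```python
import numpy as np

rho, lam = 0.5, 1.0
rng = np.random.default_rng(0)

def J_phi(z):
    # Resolvent (I + rho*dphi)^{-1} for phi(v) = lam*||v||_1:
    # componentwise soft-thresholding with threshold rho*lam.
    return np.sign(z) * np.maximum(np.abs(z) - rho * lam, 0.0)

def phi(v):
    return lam * np.sum(np.abs(v))

# Characterization (4) of Lemma 2.1: for u = J_phi(z) and any v,
# <u - z, v - u> + rho*phi(v) - rho*phi(u) >= 0.
z = rng.normal(size=6)
u = J_phi(z)
worst = min((u - z) @ (v - u) + rho * phi(v) - rho * phi(u)
            for v in rng.normal(size=(1000, 6)))
print(worst >= -1e-10)   # True: (4) holds for all sampled v

# Nonexpansiveness of the resolvent.
a, b = rng.normal(size=6), rng.normal(size=6)
print(np.linalg.norm(J_phi(a) - J_phi(b)) <= np.linalg.norm(a - b))

# For phi = indicator of a closed convex set K (here the box [-1, 1]^6),
# the resolvent J_phi is the projection P_K of Lemma 2.2.
P_K = lambda w: np.clip(w, -1.0, 1.0)
print(P_K(z))
```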
3. Main results

In this section, we use the resolvent technique to suggest some iterative methods for solving the mixed variational inequalities. For this purpose, we need the following result, which can be proved by invoking Lemma 2.1.

Lemma 3.1. A function u ∈ H is a solution of the mixed variational inequality (1) if and only if u ∈ H satisfies the relation

u = J_ϕ[u − ρTu],   (5)

where ρ > 0 is a constant and J_ϕ(u) = (I + ρ∂ϕ)^{-1}(u) is the resolvent operator.

Lemma 3.1 implies that problems (1) and (5) are equivalent. This alternative formulation is very important from the numerical analysis point of view and has played a significant part in suggesting several numerical methods for solving variational inequalities and complementarity problems; see [1-7, 10-20]. We now define the residue vector by the relation

R(u) = u − J_ϕ[u − ρTu] = u − y,  where y = J_ϕ[u − ρTu].

Invoking Lemma 3.1, one can easily show that u ∈ H is a solution of (1) if and only if u ∈ H is a zero of the equation R(u) = 0.

For a positive constant α, we consider

u = u − αR(u) = u − α{u − J_ϕ[u − ρTu]},

which is another fixed point problem. We use this alternative fixed point formulation to suggest and analyze the following iterative method for solving the mixed variational inequality (1).

Algorithm 3.1. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

u_{n+1} = J_ϕ[u_n − γ_n R(u_{n+1})] = J_ϕ[u_n − γ_n{u_n − J_ϕ[u_n − ρT u_{n+1}]}],  n = 0, 1, 2, . . . ,

or equivalently

y_n = J_ϕ[u_n − ρT u_{n+1}],
u_{n+1} = J_ϕ[u_n − γ_n{u_n − y_n}],  n = 0, 1, 2, . . . ,

which can be considered as a proximal point method and appears to be a new one.

If ϕ is the indicator function of a closed convex set K, then J_ϕ ≡ P_K, the projection of H onto K. Consequently, Algorithm 3.1 collapses to the following algorithm for solving the classical variational inequalities (3).

Algorithm 3.2. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

u_{n+1} = P_K[u_n − γ_n R(u_{n+1})] = P_K[u_n − γ_n{u_n − P_K[u_n − ρT u_{n+1}]}],  n = 0, 1, 2, . . . ,

or equivalently

y_n = P_K[u_n − ρT u_{n+1}],
u_{n+1} = P_K[u_n − γ_n{u_n − y_n}],  n = 0, 1, 2, . . . ,

which can be considered as a proximal-extragradient method.

Note that for γ_n = 1, Algorithm 3.1 reduces to:

Algorithm 3.3. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

u_{n+1} = J_ϕ[u_n − ρT u_{n+1}],  n = 0, 1, 2, . . . ,

which is known as the proximal method and whose convergence requires only pseudomonotonicity; see Noor [20]. In recent years, proximal methods have been considered and studied extensively, and several conditions which are easy to implement have been studied; see [9, 17-20].

We now use the technique of updating the solution to rewrite the fixed-point formulation (5) as

y = J_ϕ[u − ρTu],   (6)
u = J_ϕ[y − ρTy],

which can be written as

u = J_ϕ[J_ϕ[u − ρTu] − ρT J_ϕ[u − ρTu]],

which is another fixed point formulation of the mixed variational inequality (1). Here we use this equivalent alternative formulation to suggest the following methods for solving the mixed variational inequality (1).

Algorithm 3.4. For a given u_0 ∈ H, find the approximate solution u_{n+1} by the iterative scheme:

y_n = J_ϕ[u_n − ρT u_{n+1}],
u_{n+1} = J_ϕ[y_n − ρT y_n],  n = 0, 1, 2, . . .

Algorithm 3.5. For a given u_0 ∈ H, find the approximate solution u_{n+1} by the iterative scheme:

u_{n+1} = J_ϕ[J_ϕ[u_n − ρT u_{n+1}] − ρT J_ϕ[u_n − ρT u_{n+1}]],  n = 0, 1, 2, . . .

Algorithms 3.4 and 3.5 are called two-step or predictor-corrector implicit iterative resolvent methods for solving the mixed variational inequality (1) and appear to be new ones.
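As a concrete illustration of how the implicit resolvent steps above can be carried out numerically, the following Python sketch (ours, not from the paper) runs Algorithm 3.3 on a small example with T(u) = Au + b for a positive definite A and ϕ(v) = λ‖v‖_1, whose resolvent is soft-thresholding. The inner fixed-point loop used to resolve the implicit step, the affine choice of T, and the parameter values are our own assumptions for illustration; the paper leaves the implicit step abstract.

```python
import numpy as np

def soft_threshold(z, t):
    # Resolvent J_phi(z) = (I + rho*dphi)^{-1}(z) for phi(v) = lam*||v||_1,
    # evaluated with threshold t = rho*lam (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_method(A, b, rho=0.1, lam=0.1, n_outer=200, n_inner=30):
    # Sketch of Algorithm 3.3: u_{n+1} = J_phi[u_n - rho*T(u_{n+1})], T(u) = A u + b.
    # The implicit step is approximated by a short inner Picard loop on
    # v |-> J_phi[u_n - rho*T(v)], which contracts when rho*||A|| < 1.
    T = lambda v: A @ v + b
    J = lambda z: soft_threshold(z, rho * lam)
    u = np.zeros(len(b))
    for _ in range(n_outer):
        v = u.copy()
        for _ in range(n_inner):
            v = J(u - rho * T(v))
        u = v
    return u

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = M @ M.T / 5.0 + np.eye(5)   # positive definite, hence T is monotone
b = rng.normal(size=5)
u = proximal_method(A, b)
# By Lemma 3.1, a solution satisfies R(u) = u - J_phi[u - rho*T(u)] = 0; check the residue.
print(np.linalg.norm(u - soft_threshold(u - 0.1 * (A @ u + b), 0.1 * 0.1)))
```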
If ϕ is the indicator function of a closed convex set K, then Algorithm 3.5 is equivalent to the following implicit projection iterative method for solving the classical variational inequalities (3), which is mainly due to Noor [16-18].

Algorithm 3.6. For a given u_0 ∈ H, find the approximate solution u_{n+1} by the iterative scheme:

u_{n+1} = P_K[P_K[u_n − ρT u_{n+1}] − ρT P_K[u_n − ρT u_{n+1}]],  n = 0, 1, 2, . . .

We now look at Algorithm 3.4 from a different angle. Consider y defined by (6) as an approximate solution of the mixed variational inequality (1) and define

w = J_ϕ[u − γ(u − y)],
z = u − ρTw.

We use this formulation to suggest the following iterative method.

Algorithm 3.7. For a given u_0 ∈ H, calculate the approximate solution u_{n+1} by the iterative scheme:

y_n = J_ϕ[u_n − ρT u_{n+1}],
w_n = J_ϕ[u_n − γ(u_n − y_n)],
u_{n+1} := z_n = u_n − ρT w_n,  n = 0, 1, 2, . . . ,

which is called the modified extraresolvent method and appears to be a new one. Note that for γ = 1, Algorithm 3.7 reduces to:

Algorithm 3.8. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

y_n = J_ϕ[u_n − ρT u_{n+1}],
u_{n+1} = u_n − ρT y_n,  n = 0, 1, 2, . . . ,

which is called the extraresolvent method for solving the mixed variational inequality (1).

For a positive constant α, consider u = u − α(u − z). Here the positive constant α can be viewed as a step length along the direction −(u − z). We use this fixed-point formulation to suggest the following iterative method.

Algorithm 3.9. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme:

y_n = J_ϕ[u_n − ρ_n T u_{n+1}],
w_n = J_ϕ[u_n − γ_n(u_n − y_n)],
z_n = J_ϕ[u_n − ρ_n T w_n],   (7)
u_{n+1} = u_n − α(u_n − z_n),  n = 0, 1, 2, . . . ,   (8)

where

α = (‖z_n − w_n‖^2 + ‖u_n − z_n‖^2 − Δ(w_n)) / (2‖u_n − z_n‖^2),   (9)

and

Δ(w_n) = ν{2〈w_n − z_n, w_n − u_n + ρ_n T w_n + ρ_n ϕ′(w_n)〉 − ‖w_n − z_n‖^2}
       ≤ ν(‖z_n − w_n‖^2 + ‖u_n − z_n‖^2),  ν < 1.   (10)

Here Δ(w_n) is known as the inexactness criterion, which determines the step size α, and ϕ′(·) is the differential of the convex function ϕ. For α = 1 and z_n = w_n, Algorithm 3.9 is exactly Algorithm 3.8. If y_n = w_n, then Algorithm 3.9 reduces to:

Algorithm 3.10. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme

y_n = J_ϕ[u_n − ρ_n T u_{n+1}],
w_n = J_ϕ[u_n − γ(u_n − y_n)],
u_{n+1} := z_n = u_n − α(u_n − w_n),  n = 0, 1, 2, . . . ,

where

α = (‖u_n − y_n‖^2 + ‖u_n − w_n‖^2 − Δ(y_n)) / (2‖u_n − w_n‖^2),

which is an approximate extraresolvent method for solving (1).

Remark 3.1. Algorithms 3.5-3.10 are called approximate proximal extraresolvent methods and are new ones. We would like to point out that if the nonlinear term ϕ in the mixed variational inequality (1) is the indicator function of a closed convex set K, then the resolvent J_ϕ = P_K is the projection operator of H onto K. Consequently, Algorithms 3.1-3.10 reduce to algorithms for the variational inequalities (3), which appear to be new ones. In a similar way, one can obtain several new and known algorithms as special cases of Algorithm 3.9. This shows that Algorithm 3.9 is more flexible and unifies several recently proposed (implicit or explicit) algorithms for solving the mixed variational inequalities.
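Before turning to the convergence analysis, the following Python sketch (ours) shows mechanically how the quantities in (7)-(10) can be evaluated for one outer step of Algorithm 3.9. It relies on assumptions we introduce only for illustration: T(u) = Au + b with A positive definite, ϕ(v) = (λ/2)‖v‖^2 (so J_ϕ(z) = z/(1 + ρλ) and ϕ′(v) = λv), and the implicit predictor y_n = J_ϕ[u_n − ρ_n T u_{n+1}] replaced by the explicit proxy y_n = J_ϕ[u_n − ρ_n T u_n]. It is not a validated implementation of the method.

```python
import numpy as np

def step_3_9(u, A, b, rho=0.1, lam=0.5, gamma=1.0, nu=0.9):
    """One outer step of Algorithm 3.9 for T(u) = A u + b and phi(v) = (lam/2)*||v||^2,
    so J_phi(z) = z/(1 + rho*lam) and phi'(v) = lam*v.  The implicit predictor is
    replaced by the explicit proxy y = J_phi[u - rho*T(u)] (a simplification for this sketch)."""
    T = lambda v: A @ v + b
    J = lambda z: z / (1.0 + rho * lam)
    y = J(u - rho * T(u))                       # explicit proxy for the predictor
    w = J(u - gamma * (u - y))
    z = J(u - rho * T(w))                       # step (7)
    # Inexactness term Delta(w_n) of (10) and the step size alpha of (9).
    delta = nu * (2.0 * (w - z) @ (w - u + rho * T(w) + rho * lam * w)
                  - np.dot(w - z, w - z))
    bound_ok = delta <= nu * (np.dot(z - w, z - w) + np.dot(u - z, u - z))
    alpha = (np.dot(z - w, z - w) + np.dot(u - z, u - z) - delta) / (2.0 * np.dot(u - z, u - z))
    u_next = u - alpha * (u - z)                # step (8)
    return u_next, alpha, bound_ok

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M @ M.T / 4.0 + np.eye(4)                   # positive definite, hence T is monotone
b = rng.normal(size=4)
u = np.zeros(4)
for k in range(5):
    u, alpha, ok = step_3_9(u, A, b)
    print(k, alpha, ok, np.linalg.norm(u))      # step size, inexactness check, iterate norm
```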
We now study the convergence of Algorithm 3.9. The analysis is in the spirit of He, Yang and Yuan [9] and Noor [19]. To convey the idea and for the sake of completeness, we include the details.

Theorem 3.1. Let the operator T be pseudomonotone. If u ∈ K is a solution of the mixed variational inequality (1) and u_{n+1} is the approximate solution obtained from Algorithm 3.9, then

‖u_{n+1}(α) − u‖^2 ≤ ‖u_n − u‖^2 − ((1 − ν)^2 / 4){‖z_n − w_n‖^2 + ‖u_n − z_n‖^2}.   (11)

Proof. Let u ∈ K be a solution of (1). Then

〈Tu, v − u〉 + ϕ(v) − ϕ(u) ≥ 0,  ∀v ∈ K,

implies that

〈Tv, v − u〉 + ϕ(v) − ϕ(u) ≥ 0,   (12)

since T is pseudomonotone. Taking v = w_n in (12), we have

〈T w_n, w_n − u〉 + ϕ(w_n) − ϕ(u) ≥ 0,

which can be written as

〈T w_n, z_n − u〉 ≥ 〈T w_n, z_n − w_n〉 + ϕ(u) − ϕ(w_n).   (13)

Taking z = u_n − ρ_n T w_n, u = z_n and v = u in (4), we have

〈u_n − ρ_n T w_n − z_n, u_n − u〉 + ρ_n ϕ(u) − ρ_n ϕ(z_n) ≥ 0,

from which we have

〈u_n − z_n, u_n − u〉 ≥ 〈u_n − u, ρ_n T w_n〉 + ρ_n ϕ(z_n) − ρ_n ϕ(u).   (14)

From (13) and (14), we have

〈u_n − z_n, z_n − w_n〉 ≥ 〈ρ_n T w_n, z_n − w_n〉 + ρ_n(ϕ(z_n) − ϕ(w_n)) ≥ ρ_n〈T w_n + ϕ′(w_n), z_n − w_n〉.   (15)

Consider

‖u_n − u‖^2 − ‖u_{n+1}(α) − u‖^2 = ‖u_n − u‖^2 − ‖u_n − α(u_n − z_n) − u‖^2
≥ ‖u_n − u‖^2 − ‖u_n − u − α(u_n − z_n)‖^2
= 2α〈u_n − u, u_n − z_n〉 − α^2‖u_n − z_n‖^2
= 2α‖u_n − z_n‖^2 + 2α〈z_n − u, u_n − z_n〉 − α^2‖u_n − z_n‖^2.   (16)

Combining (10), (15) and (16), we obtain

‖u_n − u‖^2 − ‖u_{n+1}(α) − u‖^2 ≥ α{‖z_n − w_n‖^2 + ‖u_n − z_n‖^2 − Δ(w_n)} − α^2‖u_n − z_n‖^2,   (17)

which is quadratic in α and has its maximum at

α* = (‖z_n − w_n‖^2 + ‖u_n − z_n‖^2 − Δ(w_n)) / (2‖u_n − z_n‖^2).   (18)

From (10), (17) and (18), we have the required result (11). □

Theorem 3.2. Let H be a finite-dimensional space. If u ∈ K is a solution of (1) and u_{n+1} is the approximate solution obtained from Algorithm 3.9, then lim_{n→∞} u_n = u.

Proof. Let u ∈ H be a solution of (1). From (11), it follows that the sequence {‖u − u_n‖} is nonincreasing and consequently {u_n} is bounded. Furthermore, we have

∑_{n=1}^{∞} ((1 − ν)^2 / 4){‖z_n − w_n‖^2 + ‖u_n − z_n‖^2} ≤ ‖u_0 − u‖^2,

which implies that

lim_{n→∞} ‖z_n − w_n‖ = 0,   (19)
lim_{n→∞} ‖u_n − z_n‖ = 0.   (20)

Thus we see that the sequences {w_n} and {z_n} are also bounded. Also, from (19) and (20), we have

‖R(w_n)‖ = ‖w_n − J_ϕ[w_n − ρT w_n]‖
= ‖w_n − z_n + z_n − J_ϕ[w_n − ρT w_n]‖
≤ ‖w_n − z_n‖ + ‖J_ϕ[u_n − ρT w_n] − J_ϕ[w_n − ρT w_n]‖
≤ ‖w_n − z_n‖ + ‖u_n − w_n‖ ≤ 2‖w_n − z_n‖ + ‖u_n − z_n‖ −→ 0, as n −→ ∞.

Thus

lim_{n→∞} R(w_n) = 0.   (21)

Let û be a cluster point of {w_n} and let the subsequence {w_{n_i}} converge to û. Since R(u) is a continuous function of u, it follows that

lim_{i→∞} R(w_{n_i}) = R(û) = 0,

which shows that û is a solution of the mixed variational inequality (1). From (19) and (20), we know that lim_{i→∞} u_{n_i} = û = lim_{i→∞} z_{n_i}. Hence from (11), we have

‖u_{n+1} − û‖^2 ≤ ‖u_n − û‖^2,  ∀n ≥ 0,

which shows that the sequence {u_n} converges to û, the required result. □
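The proof of Theorem 3.1 passes from (17) and (18) to (11) in a single sentence; for completeness, here is that step written out. This is our elaboration, stated under the bound Δ(w_n) ≤ ν(‖z_n − w_n‖^2 + ‖u_n − z_n‖^2) of (10) and with the step size chosen as α = α* of (18).

```latex
% Our elaboration of the step "From (10), (17) and (18), we have (11)".
% Write S_n = \|z_n - w_n\|^2 + \|u_n - z_n\|^2, so that (17) reads
%   \|u_n - u\|^2 - \|u_{n+1}(\alpha) - u\|^2
%     \ge g(\alpha) := \alpha\{S_n - \Delta(w_n)\} - \alpha^2 \|u_n - z_n\|^2 .
\begin{align*}
  g'(\alpha) = 0 \;&\Longrightarrow\;
    \alpha^{*} = \frac{S_n - \Delta(w_n)}{2\,\|u_n - z_n\|^{2}},\\
  g(\alpha^{*}) = \frac{\bigl(S_n - \Delta(w_n)\bigr)^{2}}{4\,\|u_n - z_n\|^{2}}
    \;&\ge\; \frac{(1-\nu)^{2}\, S_n^{2}}{4\,\|u_n - z_n\|^{2}}
    \;\ge\; \frac{(1-\nu)^{2}}{4}\, S_n ,
\end{align*}
% using \Delta(w_n) \le \nu S_n from (10) and \|u_n - z_n\|^2 \le S_n.
% Substituting this lower bound into the right-hand side of (17) gives (11).
```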
Conclusion. In this paper, we have suggested and analyzed some new proximal extraresolvent methods for pseudomonotone mixed variational inequalities and related optimization problems. The convergence of the new methods requires only the pseudomonotonicity of the operator, which is a weaker condition than monotonicity. It has been shown [9] that a special case of Algorithm 3.9 is numerically efficient and robust in solving traffic equilibrium problems. The results obtained are encouraging. The comparison of these new methods with other methods is an interesting open problem for further research.

Acknowledgement. We wish to express our deepest gratitude to Prof. Dr. Claudio Cuevas, Editor-in-Chief, Cubo Journal, for this invitation. We are also grateful to Dr. S. M. Junaid Zaidi, Rector, CIIT, for the excellent research facilities.

Received: January 2008. Revised: April 2008.

References

[1] A. Bnouhachem, A self-adaptive method for mixed quasi variational inequality, J. Math. Anal. Appl., 309 (2005), 136–150.
[2] A. Bnouhachem, M. Aslam Noor and T.M. Rassias, Three-step iterative algorithms for mixed variational inequalities, Appl. Math. Comput., 183 (2006), 436–446.
[3] H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, Holland, 1973.
[4] P. Daniele, F. Giannessi and A. Maugeri, Equilibrium Problems and Variational Models, Kluwer Academic Publishers, United Kingdom, 2003.
[5] F. Giannessi and A. Maugeri, Variational Inequalities and Network Equilibrium Problems, Plenum Press, New York, 1995.
[6] F. Giannessi, A. Maugeri and P.M. Pardalos, Equilibrium Problems, Nonsmooth Optimization and Variational Inequalities Problems, Kluwer Academic Publishers, Dordrecht, Holland, 2001.
[7] R. Glowinski, J.L. Lions and R. Tremolieres, Numerical Analysis of Variational Inequalities, North-Holland, Amsterdam, Holland, 1981.
[8] B.S. He and L.Z. Liao, Improvement of some projection methods for monotone nonlinear variational inequalities, J. Optim. Theory Appl., 112 (2002), 111–128.
[9] B.S. He, Z. Yang and X.M. Yuan, An approximate proximal-extragradient method for monotone variational inequalities, J. Math. Anal. Appl., 300 (2004), 362–374.
[10] J.L. Lions and G. Stampacchia, Variational inequalities, Commun. Pure Appl. Math., 20 (1967), 493–512.
[11] U. Mosco, Implicit variational methods and quasi variational inequalities, in: Nonlinear Operators and the Calculus of Variations, Lecture Notes in Mathematics, Vol. 543, Springer-Verlag, Berlin, Germany, 1976, 83–126.
[12] M. Aslam Noor, On Variational Inequalities, PhD Thesis, Brunel University, London, UK, 1975.
[13] M. Aslam Noor, Some recent advances in variational inequalities, Part I, basic concepts, New Zealand J. Math., 26 (1997), 53–80.
[14] M. Aslam Noor, Some recent advances in variational inequalities, Part II, other concepts, New Zealand J. Math., 26 (1997), 229–255.
[15] M. Aslam Noor, Some algorithms for general monotone mixed variational inequalities, Math. Comput. Modell., 29 (1999), 1–9.
[16] M. Aslam Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229.
[17] M. Aslam Noor, New extragradient-type methods for general variational inequalities, J. Math. Anal. Appl., 277 (2003), 379–395.
[18] M. Aslam Noor, Some developments in general variational inequalities, Appl. Math. Comput., 152 (2004), 199–277.
[19] M. Aslam Noor, Projection-proximal methods for general variational inequalities, J. Math. Anal. Appl., 318 (2006), 53–62.
[20] M. Aslam Noor, Fundamentals of mixed quasi variational inequalities, Int. J. Pure Appl. Math., 15 (2004), 137–258.
[21] M. Aslam Noor and A. Bnouhachem, Proximal extragradient methods for pseudomonotone variational inequalities, Tamkang J. Math., 37 (2006), 109–116.
[22] M. Aslam Noor, K. Inayat Noor and T.M. Rassias, Some aspects of variational inequalities, J. Comput. Appl. Math., 47 (1993), 285–312.
[23] M. Aslam Noor, K. Inayat Noor and T.M. Rassias, Set-valued resolvent equations and mixed variational inequalities, J. Math. Anal. Appl., 220 (1998), 741–759.
[24] M. Patriksson, Nonlinear Programming and Variational Inequalities: A Unified Approach, Kluwer Academic Publishers, Dordrecht, 1998.
[25] M.V. Solodov and B.F. Svaiter, Error bounds for proximal point subproblems and associated inexact proximal point algorithm, Math. Program., 88 (2000), 371–389.
[26] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris, 258 (1964), 4413–4416.