SQU Journal for Science, 17 (1) (2012) 80-83
© 2012 Sultan Qaboos University

A New Criterion for Optimality in Nonlinear Programming

L.M. Graña Drummond* and A.N. Iusem**

*Faculdade de Administração e Ciências Contábeis, Universidade Federal do Rio de Janeiro, Brazil, Email: bolsigeno@gmail.com. **Instituto de Matemática Pura e Aplicada (IMPA), Rio de Janeiro, Brazil, Email: iusp@impa.br.

ABSTRACT: We establish a sufficient condition for the existence of minimizers of real-valued convex functions on closed subsets of finite dimensional spaces. We compare this condition with other related results.

KEYWORDS: Nonlinear programming, Convexity, Recession cone.

ARABIC ABSTRACT (translated): A New Criterion for Optimality in Nonlinear Programming, by Luis Graña Drummond and Alfredo Iusem. This paper proposes a sufficient condition for the existence of minimum values of a real-valued convex function on closed subsets of finite-dimensional spaces. We compare this condition with other related results.

1. Introduction

Necessary and/or sufficient conditions for determining whether an optimization problem has an optimum have been studied for centuries. For instance, a very well-known classical result is the Bolzano-Weierstrass Theorem, which states that any continuous function attains its minimum value on a compact subset of its domain. Many other conditions have been proposed and, in general, these criteria are useful not just for theoretical purposes, but also from an algorithmic point of view. Here, for a real-valued convex objective and a closed constraint set, we propose a new condition for establishing the existence of optima.

First, let us introduce some notation which will be used in the sequel. If C ⊂ Rⁿ, then the recession cone of C, denoted by 0⁺C, is the set of directions contained in C, i.e., 0⁺C = {v ∈ Rⁿ │ C + tv ⊂ C for all t ≥ 0}. We denote by conv(C) the convex hull of C, i.e., the 'smallest' convex set that contains C, that is to say, the set of all convex combinations of elements of C.
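The polyhedral case gives a concrete handle on the recession cone: for a set of the form C = {x : Ax ≤ b} one has 0⁺C = {v : Av ≤ 0}, since moving from any point of C along such a direction v never violates a constraint. A minimal Python sketch of this check (the particular A, b and test directions are our own illustrative choices, not from the paper):

```python
# Hypothetical polyhedral example: C = {x in R^2 : A x <= b}, encoding
# the constraints x1 >= 0, x2 >= 0, x1 + x2 >= 1.
A = [(-1.0, 0.0),    # x1 >= 0
     (0.0, -1.0),    # x2 >= 0
     (-1.0, -1.0)]   # x1 + x2 >= 1
b = [0.0, 0.0, -1.0]

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
in_C = lambda x: all(dot(a, x) <= bi + 1e-12 for a, bi in zip(A, b))
in_rec_cone = lambda v: all(dot(a, v) <= 1e-12 for a in A)  # 0+C = {v : A v <= 0}

x0 = (1.0, 1.0)      # a point of C
v = (2.0, 3.0)       # a direction in the nonnegative orthant
w = (-1.0, 0.0)      # a direction that eventually leaves C

print(in_rec_cone(v))                                   # True
print(in_rec_cone(w))                                   # False
# C + t*v stays inside C for every t >= 0, sampled here on a grid:
print(all(in_C((x0[0] + t * v[0], x0[1] + t * v[1]))
          for t in range(0, 1000, 37)))                 # True
```

Here 0⁺C is the nonnegative orthant, while w = (−1, 0) is excluded because the constraint x1 ≥ 0 is eventually violated along it.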
The set cl(C) will stand for the closure of C. For a function f : Rⁿ → R and α ∈ R, the α-level set of f is given by Γ_α = {x ∈ Rⁿ │ f(x) ≤ α}. Finally, argmin_{x∈Ω} f(x) is the set of minimizers of f on Ω.

2. The optimality criterion

In this section we propose our optimality criterion: a set of conditions which ensures the existence of an optimum for a constrained nonlinear problem with convex objective function.

Theorem 2.1 Let f : Rⁿ → R be a convex function and Ω a closed subset of Rⁿ. Assume that
i. f is bounded from below on Ω,
ii. 0⁺cl(conv(Ω)) ∩ Γ_{f(0)} = {0}.
Then argmin_{x∈Ω} f(x) ≠ Ø.

Proof. By virtue of the Bolzano-Weierstrass Theorem, we can assume that Ω is unbounded. Let inf_{x∈Ω} f(x) = ν ∈ R and let {xᵏ} ⊂ Ω be a minimizing sequence, i.e., such that lim_{k→∞} f(xᵏ) = ν. If {xᵏ} has a bounded subsequence then, since Ω is closed and f is continuous (because it is a real-valued convex function), there exists x̄ ∈ Ω such that f(x̄) = ν and, therefore, x̄ ∈ argmin_{x∈Ω} f(x) and the proof is complete. So let us assume that {xᵏ} has no bounded subsequences. Refining the sequence if necessary, we can assume that, for some x̂ ∈ Rⁿ with ‖x̂‖ = 1,

lim_{k→∞} ‖xᵏ‖ = ∞  and  lim_{k→∞} xᵏ/‖xᵏ‖ = x̂.

We claim that x̂ ∈ 0⁺cl(conv(Ω)). Take x ∈ cl(conv(Ω)) and t ≥ 0. We have

x + t x̂ = lim_{k→∞} [(1 − t/‖xᵏ‖) x + (t/‖xᵏ‖) xᵏ].

Since {xᵏ} ⊂ Ω, the point x + t x̂ is the limit of convex combinations of elements of cl(conv(Ω)). Hence, x + t x̂ ∈ cl(conv(Ω)) and our claim is true. Let us now see that x̂ ∈ Γ_{f(0)}. Indeed,

f(x̂) = lim_{k→∞} f((1 − 1/‖xᵏ‖)·0 + (1/‖xᵏ‖) xᵏ) ≤ lim_{k→∞} [(1 − 1/‖xᵏ‖) f(0) + (1/‖xᵏ‖) f(xᵏ)] = f(0),

using the continuity of f in the first equality, its convexity in the inequality, and the facts that f(xᵏ) → ν ∈ R and 1/‖xᵏ‖ → 0 in the last equality.
Thus, x̂ ∈ Γ_{f(0)} and, therefore, x̂ ∈ 0⁺cl(conv(Ω)) ∩ Γ_{f(0)} = {0}, in contradiction with the fact that ‖x̂‖ = 1. So we conclude that argmin_{x∈Ω} f(x) ≠ Ø. □

3. Final remarks

Clearly, the novelty of Theorem 2.1 lies in hypothesis (ii), and thus it is worthwhile to discuss it further. We observe first that it implies that Ω ∩ Γ_{f(0)} is bounded. Indeed, since this set is contained in U = [cl(conv(Ω))] ∩ Γ_{f(0)}, it suffices to establish the boundedness of U. The set U is convex, being the intersection of the convex set cl(conv(Ω)) with the level set Γ_{f(0)}, which is convex by the convexity of f. If U were unbounded, then a well-known property of the recession cone (see Rockafellar, 1970) entails that there exists a nonzero u ∈ 0⁺U ⊂ 0⁺cl(conv(Ω)) ∩ 0⁺Γ_{f(0)}, using another well-known property of the recession cone in the inclusion. Since 0 belongs to Γ_{f(0)}, it follows that u = 0 + u belongs to Γ_{f(0)}, and hence to 0⁺cl(conv(Ω)) ∩ Γ_{f(0)}, contradicting (ii), and thus establishing the boundedness of Ω ∩ Γ_{f(0)}. Now, since Ω ∩ Γ_{f(0)} is clearly closed, if it were not only bounded but also nonempty, it would be compact, in which case, by the Bolzano-Weierstrass result, f would attain its minimum on Ω ∩ Γ_{f(0)}; but a minimizer of f on this set obviously also minimizes f on Ω, establishing the result of Theorem 2.1 in a direct way. In other words, under (ii), the result of Theorem 2.1 is rather immediate when Γ_{f(0)} intersects the feasible set Ω. The point here is that we do not assume that Γ_{f(0)} intersects Ω. In cases where Ω ∩ Γ_{f(0)} is empty, the above-mentioned boundedness conclusion becomes void, and in principle it says nothing about the existence of solutions of the optimization problem. For instance, taking n = 1, f(x) = max{0, x + 1} and Ω = [1, +∞), we have Γ_{f(0)} = (−∞, 0], so that Ω ∩ Γ_{f(0)} = Ø; but on the other hand 0⁺cl(conv(Ω)) = [0, +∞), and (ii) holds, as does the conclusion of Theorem 2.1.
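The one-dimensional example above is easy to check numerically. The following minimal Python sketch (the grid sampling of Ω is ours, purely illustrative) confirms that Ω ∩ Γ_{f(0)} is empty while the minimum of f on Ω is nevertheless attained, at x = 1:

```python
# The example from the text: n = 1, f(x) = max{0, x + 1}, Omega = [1, +inf).
f = lambda x: max(0.0, x + 1.0)

# f(0) = 1, so Gamma_{f(0)} = {x : f(x) <= 1} = (-inf, 0]; sample points:
assert f(0) == 1.0
assert f(-5.0) <= f(0) and f(-0.5) <= f(0) and f(0.5) > f(0)

# Omega ∩ Gamma_{f(0)} = Ø: every x >= 1 has f(x) = x + 1 >= 2 > 1 = f(0).
assert all(f(x) > f(0) for x in [1.0, 2.0, 10.0, 1e6])

# Hypothesis (ii) holds: [0, +inf) meets (-inf, 0] only at 0. And indeed
# f attains its minimum on Omega, at x = 1 (f is increasing there):
grid = [1.0 + k / 1000.0 for k in range(100_000)]   # sample of [1, 101)
best = min(grid, key=f)
print(best, f(best))   # 1.0 2.0
```

The grid is of course only a finite surrogate for Ω; the point of the check is that the minimizer sits at the boundary point x = 1, far from the level set Γ_{f(0)}.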
Secondly, we mention that Theorem 2.1 bears a certain resemblance to the following result, which appears as Theorem 1 in Graña Drummond et al. (2008):

Theorem 3.1 Take F : Rⁿ → Rᵐ and Ø ≠ Ω ⊂ Rⁿ. Fix w ∈ Rᵐ, w ≠ 0, and define H_w = {y ∈ Rᵐ │ ⟨w, y⟩ = 0}. Assume that
i) F is bounded from below on Ω (i.e., there exists z ∈ Rᵐ such that z_i ≤ F(x)_i for all x ∈ Ω and all i ∈ {1, ..., m}),
ii) 0⁺cl(conv(F(Ω))) ∩ H_w = {0},
iii) F(Ω) is closed.
Then argmin_{x∈Ω} ⟨w, F(x)⟩ ≠ Ø.

In fact, the proofs of Theorems 2.1 and 3.1 run along rather similar lines, but it is important to point out that, despite this resemblance (resulting basically from the similarity of assumption (ii) in both theorems), there is an essential difference, which makes them quite independent of each other. Indeed, in Theorem 2.1 the set which is equal to {0} according to assumption (ii) is a subset of the domain of f, namely Rⁿ, while in assumption (ii) of Theorem 3.1 such a set is a subset of the codomain of F, namely Rᵐ. This is also made clear by the closedness assumptions of the two theorems: in Theorem 2.1 we assume that Ω is closed, while in Theorem 3.1 it is F(Ω) that is assumed to be closed. We mention that if we look at Theorem 3.1 in the scalar case, namely m = 1, it becomes trivial. Taking, without loss of generality, w = 1, we get H_w = {0}, so that (ii) holds automatically, and the remaining assumptions just indicate that F(Ω) ⊂ R is closed and bounded below, so that it has a minimum, and hence F has a minimizer in Ω. On the other hand, in the case of Theorem 2.1 (for which we always have m = 1), assumption (ii) is not automatically satisfied, and we indeed need an additional property of f, namely its convexity, in order to establish that it attains its minimum on Ω (note that Theorem 3.1 does not require any convexity properties of F, and in fact not even its continuity; closedness of its image, in conjunction with assumption (ii), does the job).
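A toy numerical illustration of the scalarized problem in Theorem 3.1 may help fix ideas; the map F, the weight w and the grid sampling of Ω below are our own hypothetical choices, not taken from Graña Drummond et al. (2008):

```python
# Hypothetical instance: F : R -> R^2 with two quadratic objectives,
# weight w = (1/2, 1/2), and Omega sampled on a finite grid.
F = lambda x: (x * x, (x - 1.0) ** 2)
w = (0.5, 0.5)

# The scalarization of Theorem 3.1: minimize <w, F(x)> over Omega.
scalarized = lambda x: w[0] * F(x)[0] + w[1] * F(x)[1]

omega = [k / 100.0 for k in range(-200, 301)]   # grid on [-2, 3]
best = min(omega, key=scalarized)
print(best, scalarized(best))   # 0.5 0.25
```

With equal weights the scalarized objective is 0.5 x² + 0.5 (x − 1)², whose minimizer x = 0.5 compromises between the two components of F, which is the typical use of the weighting method in vector optimization.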
Finally, we mention the following related existence result, which appears as Theorem 4.3 in Iusem and Sosa (2003):

Theorem 3.2 Let Ω ⊂ Rⁿ be closed and convex, and f : Rⁿ → R ∪ {+∞} a proper, convex and lower semicontinuous function. If the following auxiliary problem

(AP): find x ∈ Rⁿ such that ‖x‖ = 1 and f(x + y) ≤ f(y) for all y ∈ Ω,

does not have solutions, then argmin_{x∈Ω} f(x) ≠ Ø.

We remark that the result of Theorem 3.2 might also be rephrased in terms of a recession cone, making it look more like Theorems 2.1 and 3.1, but it has an essential difference with regard to them: its validity is restricted to the case of convex optimization, meaning that both the objective f and the feasible set Ω must be convex, while Theorem 2.1 demands only closedness, but not convexity, of Ω.

4. Acknowledgment

The work of the second author (A.N. Iusem) was partially supported by CNPq grant no. 301280/86.

5. References

GRAÑA DRUMMOND, L.M., MACULAN, N. and SVAITER, B.F. 2008. On the choice of parameters for the weighting method in vector optimization. Mathematical Programming, 111: 201-216.

IUSEM, A.N. and SOSA, W. 2003. New existence results for equilibrium problems. Nonlinear Analysis, 52: 621-635.

ROCKAFELLAR, R.T. 1970. Convex Analysis. Princeton University Press, New Jersey.

Received 11 May 2011
Accepted 19 August 2011