Existence theorem and optimality conditions for a class of convex semi-infinite problems with noncompact index sets

The paper is devoted to the study of a special class of semi-infinite problems arising in nonlinear parametric optimization. These semi-infinite problems are convex and possess noncompact index sets. We present conditions that guarantee the existence of optimal solutions and prove a new optimality criterion. An example illustrating the obtained results is presented.


Introduction
Semi-infinite Programming (SIP) deals with optimization problems which have an infinite number of constraints. SIP has always been a topic of special interest due to its numerous theoretical and practical applications, such as robotics, classical engineering, optimal design, Chebyshev approximation, etc. (see [6,7,8] and the references therein). Nowadays, SIP models are efficiently used in dynamic processes, biomedical and chemical engineering, biology, tissue engineering, polymer reaction engineering, etc. (see [1,16], and others). A general SIP problem can be formulated as

min_{κ ∈ R^n} c(κ)  s.t.  f(κ, τ) ≤ 0 ∀τ ∈ T,

where κ ∈ R^n is the decision variable, τ is the constraint index, and T ⊂ R^p is an infinite index set. When, additionally, the index set T depends on the decision variable κ, one gets a generalized SIP problem (see [9]). We say that a SIP problem is continuous whenever the index set T is a compact Hausdorff topological space and the functions c(κ) and f(κ, τ) are continuous w.r.t. their variables. The compactness of the index set T ensures the existence of global maximizers of the so-called lower level problem: max_{τ ∈ T} f(κ, τ). The continuity of the functions defining the SIP problem is a natural condition, which makes it possible to apply the methods of continuous optimization. Usually, it is assumed that a SIP problem is continuous. However, it should be noted that there are classes of problems for which noncompact index sets are commonplace, and without the compactness of T and/or the continuity of the data functions, the standard theory does not apply directly.

Let the problem data be given. Define the following sets:

Y = {y = (y_j, j ∈ J) : ∑_{j∈J} g_j y_j = c, y_j ≥ 0, j ∈ J},
X = {x ∈ R^n : q_j^T x + ω_j = 0, j ∈ S*; q_j^T x + ω_j ≤ 0, j ∈ S},
K(j) = {t ∈ R^p : B_j t ≤ 0}, j ∈ J,

and suppose that
1) the (polyhedral) set Y is nonempty and bounded;
2) the set X is nonempty;
3) the following inequalities are satisfied.

Consider the optimization problem (1), where

Ω_0(x) := (1/2) x^T W_0 x + d_0^T x + r_0,  Ω_j(x) := (1/2) x^T W_j x + d_j^T x + r_j, j ∈ J.

Note that in (1), the functions Ω_j(x), j ∈ J ∪ {0}, are convex w.r.t. x ∈ R^n, and the functions Ψ_j(x, t), j ∈ J, are linear w.r.t. x ∈ R^n and, in general, nonconvex w.r.t. t ∈ K(j), j ∈ J.
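Before analyzing problem (1), the defining feature of a SIP constraint system, "f(κ, τ) ≤ 0 for all τ ∈ T", can be probed numerically by discretizing the index set. The sketch below uses a hypothetical cost c and constraint function f on T = [0, 1]; none of these data come from the paper.

```python
import numpy as np

# Hypothetical data illustrating a SIP constraint system (not the data of (1)):
# convex cost c and constraint function f with index set T = [0, 1].
def c(kappa):
    return kappa[0] ** 2 + kappa[1] ** 2                   # convex cost

def f(kappa, tau):
    return tau * kappa[0] + (1.0 - tau) * kappa[1] - 1.0   # one constraint per index tau

def sip_feasible(kappa, taus):
    """Check f(kappa, tau) <= 0 on a finite discretization `taus` of T."""
    return max(f(kappa, t) for t in taus) <= 0.0

taus = np.linspace(0.0, 1.0, 101)
print(sip_feasible(np.array([0.5, 0.5]), taus))   # True:  f == -0.5 for every tau
print(sip_feasible(np.array([2.0, 0.0]), taus))   # False: violated at tau = 1
```

Discretization only approximates the infinite constraint system; it is used here purely to build intuition, not as a solution method for (1).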
Problems of the form (1) arise in nonlinear parametric SIP when the differential properties of solutions are studied [10,12]. Therefore, it is important to study the properties of these optimization problems.
The main aims of this paper are as follows:
• to show that the optimization problem (1) is equivalent to a special SIP problem;
• to study conditions which guarantee the existence of optimal solutions of this special SIP problem (it should be mentioned that, at the moment, such conditions are not well studied in SIP);
• to formulate optimality conditions for the special SIP problem.

Semi-infinite formulation
Let y^(i) = (y_j^(i), j ∈ J), i ∈ I, be the vertices (extreme points) of the polyhedral set Y defined in Section 2. In what follows, we suppose that these vertices are known.
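For small instances, the assumption that the vertices y^(i) are known is not restrictive: the vertices of a set of the form Y = {y : Gy = c, y ≥ 0} are its basic feasible solutions and can be enumerated directly. A minimal sketch under the assumption that G has full row rank; the function name and data are illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def vertices(G, c, tol=1e-9):
    """Enumerate the vertices of Y = {y : G y = c, y >= 0} as basic feasible
    solutions. Illustrative sketch; assumes G (m x n) has full row rank."""
    m, n = G.shape
    verts = []
    for cols in itertools.combinations(range(n), m):
        B = G[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                          # columns do not form a basis
        yB = np.linalg.solve(B, c)
        if np.all(yB >= -tol):                # basic solution is feasible
            y = np.zeros(n)
            y[list(cols)] = yB
            if not any(np.allclose(y, v) for v in verts):
                verts.append(y)
    return verts

# Hypothetical data: Y = {y in R^2 : y1 + y2 = 1, y >= 0} has vertices (1,0), (0,1).
V = vertices(np.array([[1.0, 1.0]]), np.array([1.0]))
print(V)
```

The combinatorial enumeration is exponential in n and serves only to make the "known vertices" assumption concrete; dedicated vertex-enumeration algorithms would be used in practice.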
Let us show that equality (2) holds. Suppose, on the contrary, that (3) takes place, where y^0 ∈ Y. Since y^0 ∈ Y, there exist numbers λ_i ≥ 0, i ∈ I, ∑_{i∈I} λ_i = 1, such that y^0 = ∑_{i∈I} λ_i y^(i). Hence (y^0)^T f = ∑_{i∈I} λ_i (y^(i))^T f. From the last equality and (4), a contradiction follows. The resulting contradiction proves that (3) is false, and hence (2) holds. Taking into account equality (2), we conclude that the optimization problem (1) is equivalent to problem (5), with r_i := ∑_{j∈J} y_j^(i) r_j, i ∈ I. Then, problem (5) can be rewritten in the following equivalent form.

Taking into account that y_j^(i) ≥ 0, i ∈ I, j ∈ J, one can show that the last problem is equivalent to the following SIP problem (7). In problem (7), the decision variables form the vector ϕ = (x ∈ R^n, ρ_j ∈ R, j ∈ J, β ∈ R). Without loss of generality, we may impose certain normalization conditions on the data. Let us make some observations concerning problem (7).
• The constraints and the cost function of this problem are linear-quadratic w.r.t. the decision variables x ∈ R^n, ρ_j ∈ R, j ∈ J, and β. Hence, it is evident that this problem is convex, and each of its local optimal solutions is a global one.
• If problem (7) is consistent, then its linear constraints (infinite in number) and linear-quadratic constraints (finite in number) satisfy the Slater condition (i.e., these constraints are strictly satisfied for some vector of decision variables).
• In problem (7), the index sets K(j), j ∈ J, are not compact.
• For any feasible solution (x, ρ_j, j ∈ J, β), the following relations take place.

Let us define the function β(x) (see (11)). Note that for any feasible solution (x, ρ_j, j ∈ J, β) of problem (7), there exists a feasible solution

(x, ρ_j(x), j ∈ J, β(x)),    (12)

such that β(x) satisfies (11) and β(x) ≤ β. Hence, without loss of generality, in what follows, we consider only feasible solutions of the form (12).
As explained above, our interest in problems of the form (7) arose from the study of the properties of solutions to parametric SIP problems w.r.t. perturbations of parameters. However, it is worth mentioning that problem (7) is an interesting subject in itself. Similar SIP problems were considered, for example, in [3,5]. The references in these papers also point to other areas where such problems appear.
On the existence of optimal solutions of problem (7)

Sufficient conditions guaranteeing the existence of optimal solutions
The main result of this section is Theorem 1, which gives sufficient conditions for the solvability of the convex SIP problem (7).
Given problem (7), consider the set ∆X (see (13)).

Theorem 1. Suppose that
(A) there exists x ∈ X such that (14) holds;
(B) either the set ∆X \ {0} is empty, or the following implication takes place.
Then problem (7) has an optimal solution.
Proof. Let us first prove that, under condition (A), the feasible set X of problem (7) is nonempty. It follows from assumption 2) (see Section 2) and condition (A) that X ≠ ∅ and there exists x ∈ X satisfying (14). Given x ∈ X and j ∈ J, consider a Quadratic Programming (QP) problem of the lower level. Taking into account condition (A) and the results from [4], we conclude that this problem has an optimal solution. Consequently, the vector (x, ρ_j(x), j ∈ J, β(x)) (with β(x) defined in (11)) is a feasible solution of problem (7), and hence the feasible set X of this problem is nonempty. As noted in Section 3, for any (x, ρ_j, j ∈ J, β) ∈ X, there exists a feasible solution (x, ρ_j(x), j ∈ J, β(x)) such that β(x) ≤ β. Hence, without loss of generality, we can consider only such feasible solutions. Now let us prove that if, additionally, condition (B) is satisfied, then the convex problem (7) has an optimal solution. Consider any sequence of feasible solutions x_k ∈ X, k = 1, 2, ..., such that the corresponding sequence of cost function values of problem (7) decreases; hence inequalities (15) take place. Let us show that there exists a number M_0 > 0 such that ||x_k|| ≤ M_0, k = 1, 2, ...
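The attainment step above rests on the fact that a quadratic function bounded below on a polyhedron attains its minimum there (a Frank–Wolfe-type result, presumably the one invoked from [4]). A numerical illustration of one such lower-level QP over a polyhedral cone, with hypothetical data D, q, B that are not the paper's D_j, B_j:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical lower-level QP over a polyhedral cone:
#   min (1/2) t^T D t + q^T t   s.t.   B t <= 0.
# D is positive semi-definite and the objective is bounded below on the cone,
# so the minimum is attained (Frank-Wolfe-type attainment).
D = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -2.0])
B = np.array([[-1.0, 0.0], [0.0, -1.0]])   # B t <= 0 is here simply t >= 0

res = minimize(lambda t: 0.5 * t @ D @ t + q @ t,
               x0=np.zeros(2),
               jac=lambda t: D @ t + q,
               constraints=[{"type": "ineq", "fun": lambda t: -(B @ t),
                             "jac": lambda t: -B}],
               method="SLSQP")
rho = res.fun            # plays the role of an optimal value rho_j(x)
print(np.round(res.x, 4), round(rho, 4))   # minimizer (1, 1), value -2
```

With these data the unconstrained minimizer (1, 1) lies inside the cone, so the constrained and unconstrained optimal values coincide; this is only a sanity check of attainment, not of the paper's specific construction.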
Suppose, on the contrary, that such an M_0 does not exist; then, without loss of generality, we may assume that ||x_k|| → ∞ as k → ∞. It follows from the constraints of (7) that (16) holds. Divide inequalities (15) and (16) by ||x_k||^2 and pass to the limit as k → ∞. As a result, we obtain (17). Taking into account (8), (9), and the positive semi-definiteness of the matrices W_i, i ∈ I ∪ {0}, we conclude from (17) that the corresponding limit relations hold. Moreover, it is easy to show that the same is true for the remaining terms. For fixed j ∈ J and k ∈ N, consider problem (20). Since x_k ∈ X, this problem has an optimal solution, which we denote by t_j^k ∈ K(j), with ρ_j(x_k) being the optimal value of its cost function. Without loss of generality, we may assume that t_j^k is an optimal solution of minimal norm. For t_j^k, the first order (necessary) optimality conditions (21) take place. Below, with no loss of generality, we suppose that in (21), µ(k, j) ∈ M(k, j), where M(k, j) ⊂ R^{m_j} is the set of all vectors µ(k, j) satisfying (21). From (21), we obtain an equality; dividing both sides of it by ||x_k||^2 and passing to the limit as k → ∞, we arrive at the following alternative. There are two possible situations: I) σ > 0 and ∆t_j^T D_j ∆t_j = 0; II) σ = 0.
Suppose first that situation I) takes place. Hence, there exists a corresponding vector ∆t_j. Here and in what follows, b_{mj}^T denotes the m-th row of the matrix B_j. For sufficiently large k, it is evident that M_a^k(j) ⊂ M_a(j), and the relations in (21) apply. Here µ_m(k, j) denotes the m-th element of the vector µ(k, j) ∈ R^{m_j}. Based on these observations, it is easy to show that (µ(k, j))^T B_j ∆t_j = 0 for sufficiently large k. Taking this equality into account, let us multiply the first equality in (21) by ∆t_j^T. If we suppose that (c_j^T − x_k^T A_j)∆t_j < 0, then the cost function of problem (20) is not bounded from below on the feasible set. But this is impossible, since this problem admits an optimal solution. Therefore, the following inequality holds true: (c_j^T − x_k^T A_j)∆t_j ≥ 0. Let us show that for any j ∈ J, relations (25) hold. In fact, for any t ∈ K(j), τ ∈ ∆K(j), and θ ≥ 0, we have (θt + τ) ∈ K(j). Hence, for all θ ≥ 0, it holds that 0 ≤ (θt + τ)^T D_j (θt + τ) = θ^2 t^T D_j t + 2θ t^T D_j τ, since τ^T D_j τ = 0 for τ ∈ ∆K(j). Dividing this inequality by θ > 0 and letting θ → 0, we obtain t^T D_j τ ≥ 0, and relations (25) are proved.
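Relations (25) can be sanity-checked numerically in the special case where D_j is positive semi-definite on the whole space (the proof only needs nonnegativity of the quadratic form on K(j)): for PSD D, any τ with τ^T D τ = 0 satisfies D τ = 0, so t^T D τ = 0 ≥ 0 for every t. A small check with a hypothetical rank-one matrix standing in for some D_j:

```python
import numpy as np

# For a PSD matrix D, tau^T D tau = 0 forces D tau = 0,
# hence t^T D tau = 0 >= 0 for every t -- consistent with relations (25).
D = np.array([[1.0, 1.0], [1.0, 1.0]])   # PSD, rank one (hypothetical D_j)
tau = np.array([1.0, -1.0])              # tau^T D tau = 0
t = np.array([3.0, 2.0])                 # arbitrary direction
print(tau @ D @ tau)    # 0.0
print(D @ tau)          # [0. 0.]
print(t @ D @ tau)      # 0.0, i.e. t^T D tau >= 0 holds with equality
```

In the paper's setting D_j need only be nonnegative on the cone K(j), so (25) can hold with strict inequality; this check covers the simplest regime only.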
Since t_j^k ∈ K(j), ∆t_j^T D_j ∆t_j = 0, and ∆t_j ∈ ∆K(j), it follows from (25) that ∆t_j^T D_j t_j^k ≥ 0. The last inequality together with (23) and (24) implies (26). By construction, the vector t_j^k can be written in a special form. Taking this representation into account, we conclude from (26) that the corresponding equalities hold, and from the inequalities b_{mj}^T t_j^k ≤ 0, m = 1, ..., m_j, we get b_{mj}^T (∆t_j + ε_k) ≤ 0, m = 1, ..., m_j. Consequently, for m = 1, ..., m_j, a certain implication is valid. It is easy to check that α(k) → 0 as k → ∞. Consider the vector t̃_j^k := (α(k)∆t_j + ε_k)θ_k. Taking into account (26)-(28), we conclude that for k = 1, 2, ..., both vectors t̃_j^k and t_j^k are optimal solutions of problem (20). Note that for sufficiently large k, the inequality ||t̃_j^k|| < ||t_j^k|| takes place, which is impossible since t_j^k is the minimal norm optimal solution of problem (20). The obtained contradiction allows us to conclude that situation I) is impossible. Now suppose that situation II) takes place: σ = 0. Let us show that the corresponding limit relation holds. Hence, the vector µ(k, j) admits the representation µ(k, j) = (∆µ(j) + w(k, j))θ_k, where θ_k = ||µ(k, j)|| and w(k, j) → 0 as k → ∞. Relations (21) can be rewritten in the form (30). Let ∆µ_i(j) and w_i(k, j) denote the i-th components of the vectors ∆µ(j) and w(k, j), i = 1, ..., m_j. From (30), we conclude that, for any i ∈ {1, ..., m_j}, the inequality w_i(k, j) < 0 implies ∆µ_i(j) > 0.
It follows from (15) and (16) that a certain inequality holds. Dividing both sides of this inequality by ||x_k|| and passing to the limit as k → ∞, we get the inequality d_0^T ∆x* + d_i^T ∆x* ≤ 0, i ∈ I, which contradicts assumption (B) of the theorem. Hence, situation II) is impossible as well.
The obtained contradictions lead us to conclude that for any sequence of feasible solutions x_k ∈ X, k = 1, 2, ..., of problem (7) satisfying inequalities (15), there exists M_0 > 0 such that ||x_k|| ≤ M_0, k = 1, 2, .... This fact allows us to conclude that problem (7) admits an optimal solution. The theorem is proved. □

To conclude this section, we make the following remarks.
1. It can be shown that the feasible set X of problem (7) is nonempty if and only if condition (A) of Theorem 1 is satisfied; this, in turn, always happens when t_j^T D_j t_j > 0 for all t_j ∈ K(j) \ {0}, j ∈ J.
2. Note that condition (B) is considered to be satisfied if ∆X \ {0} = ∅. Hence, from (13), one can see that condition (B) holds true if the matrix W := ∑_{i ∈ I∪{0}} W_i is (strictly) positive definite on the set ∆X. This condition is satisfied if at least one of the matrices W_i, i ∈ I ∪ {0}, is positive definite on the set ∆X.
3. A condition which is a slight modification of condition (B) is a necessary condition for the boundedness from below of the cost function in problem (7).
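Remark 2 reduces condition (B) to positive definiteness of W = ∑ W_i on the set ∆X. When ∆X is (contained in) the null space of some matrix A, this can be tested by projecting W onto a null-space basis. A sketch with hypothetical data; the paper's ∆X is defined in (13) and need not have exactly this form.

```python
import numpy as np

# Hypothetical check: is W positive definite on Delta_X = null(A)?
# With Z an orthonormal basis of null(A), this is positive definiteness of Z^T W Z.
def positive_definite_on_nullspace(W, A, tol=1e-10):
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    Z = Vt[rank:].T                      # orthonormal basis of null(A)
    if Z.shape[1] == 0:                  # Delta_X = {0}: nothing to check
        return True
    return bool(np.all(np.linalg.eigvalsh(Z.T @ W @ Z) > tol))

A = np.array([[1.0, 0.0, 0.0]])          # null(A) = span(e2, e3)
print(positive_definite_on_nullspace(np.diag([0.0, 1.0, 1.0]), A))  # True
print(positive_definite_on_nullspace(np.diag([0.0, 0.0, 1.0]), A))  # False
```

The first matrix is singular on R^3 yet positive definite on null(A), which is exactly the situation remark 2 exploits: condition (B) may hold even when no W_i is positive definite globally.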

Relaxation of the sufficient condition
From the remarks made at the end of the previous subsection, one can conclude that the sufficient conditions formulated in Theorem 1 are also "almost" necessary for the existence of an optimal solution of the convex SIP problem (7). The differences between the necessary and sufficient conditions appear only in the case when there exists a vector ∆x ≠ 0 such that ∆x ∈ ∆X and (31) holds. The following, stronger, conjecture naturally arises from our considerations.

Conjecture. Problem (7) admits an optimal solution if conditions (A) and (31) (a relaxed condition (B)) are satisfied.
At the moment, we have no proof of this statement and, moreover, have certain doubts about its validity. The following example indicates that the proposed conjecture is possibly not true.

Example. Consider the linear SIP problem (32). Problem (7) has the form (32) under a special choice of its data, where O denotes the n × n null matrix.
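Before checking conditions (A1) and (B1) on the concrete example below, it is instructive to see how a convex problem with linear SIP constraints can fail to attain its infimum along an unbounded feasible sequence. The data here are a hypothetical toy analogue, not the data of (32) or of Example 5.101: minimize z_2 over z with the constraints 2t − t^2 z_1 − z_2 ≤ 0 for all t > 0, whose feasible set is {z_1 > 0, z_1 z_2 ≥ 1}.

```python
import numpy as np

# Toy analogue of non-attainment in a linear SIP (hypothetical data):
#   min z2   s.t.   2t - t^2 * z1 - z2 <= 0  for all t > 0.
# Feasibility of z_k = (k, 1/k) follows from 2t - t^2 k - 1/k = -k (t - 1/k)^2 <= 0.
# The cost z2 = 1/k tends to the infimum 0, which no feasible point attains.
def feasible(z, ts, tol=1e-12):
    """Check the SIP constraints on a finite grid `ts` of index values."""
    return all(2 * t - t ** 2 * z[0] - z[1] <= tol for t in ts)

ts = np.linspace(0.01, 10.0, 500)
for k in (1, 10, 100, 1000):
    z = np.array([float(k), 1.0 / k])
    print(k, feasible(z, ts), z[1])            # feasible, cost 1/k -> 0
print(feasible(np.array([1.0, 0.0]), ts))      # False: a point with cost 0 is infeasible
```

The feasible sequence escapes to infinity along the direction ∆z = (1, 0), which mirrors the recession-direction mechanism discussed for problem (33) below.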
Conditions (A1) and (B1) do not, in general, guarantee the existence of optimal solutions of problem (32), even if the index set T is compact and the cost function of this problem is bounded on the feasible set. Indeed, let us consider Example 5.101 from [2], which yields problem (33). Condition (A1) is satisfied here, since for any k ≥ 1 the vector z_k = (k, 1/k)^T is a feasible solution of this problem. Condition (B1) is fulfilled as well. Nevertheless, problem (33) does not have an optimal solution. Notice that there exists a vector ∆z = (1, 0)^T such that ∆z ∈ {∆z ∈ R^2 : ā(t)^T ∆z ≤ 0 ∀t ∈ T} and c̄^T ∆z = 0, and the vector z_k admits a corresponding representation.

Optimality conditions for problem (7)

The aim of this section is to formulate and prove optimality conditions for a given feasible solution of problem (7). As mentioned in the Introduction, the majority of papers on optimality conditions for SIP problems assume the compactness of the (infinite) index set of the (continual) constraints. In our study of problem (7), this assumption is not satisfied: as noticed above, the sets K(j), j ∈ J, of indices corresponding to the continual constraints are not compact.
In the few papers dedicated to the study of optimality conditions for SIP problems without the assumption of compactness of the index set (see, for example, [3,5,13] and references therein), it is supposed that the Farkas-Minkowski Constraint Qualification (CQ) is satisfied. In this paper, we do not impose such an assumption on problem (7) (see the theorem below).
In the literature, there is an approach, the so-called homogenization technique [15], which also makes it possible to bypass the difficulties caused by the noncompactness of the index set. Applying this approach to our problem, we obtain an equivalent SIP problem that has a compact index set but does not satisfy the Slater condition: if there exists j_0 ∈ J with ∆K(j_0) ≠ ∅, this new problem has immobile indices with infinite immobility orders [11]. Therefore, the necessary optimality conditions known from the SIP literature (see [2,11,14]) cannot be applied here either. Despite the above, in this section, for problem (7) with noncompact index sets, we prove an optimality criterion without any additional condition (CQ) on the constraints.

Let (x, ρ_j, j ∈ J, β) be a feasible solution of problem (7). As mentioned above, without loss of generality, we can consider that (x, ρ_j, j ∈ J, β) = (x, ρ_j(x), j ∈ J, β(x)), where ρ_j(x), j ∈ J, and β(x) are defined in (6) and (11). Denote by K_a(j, x^0) ⊂ K(j) and ∆K(j, x^0) ⊂ ∆K(j), j ∈ J, the index sets used in the theorem below.

Theorem 2. A feasible solution (x^0 ∈ R^n, ρ_j^0, j ∈ J, β^0) of the convex SIP problem (7) is optimal in this problem iff there exist vectors

t*_{kj} ∈ K_a(j, x^0), k = 1, ..., p_j; τ*_{kj} ∈ ∆K(j, x^0), k = 1, ..., l_j, j ∈ J,    (34)

and numbers such that the corresponding optimality relations are satisfied.

Note that here and in what follows, we suppose that if κ = 0, then the set {1, ..., κ} is empty and the corresponding sums vanish. Since the sets S, S*, I, and J consist of finite numbers of elements, to simplify calculations, without loss of generality, we present a proof of Theorem 2 for problem (7) with |I| = 1, |J| = 1, and X = R^n. In other words, we will prove the theorem for problem (7) in the form (37).

Theorem 3 (A particular case of Theorem 2). A feasible solution (x^0, ρ^0) ∈ R^{n+1} of problem (37) is optimal in this problem iff there exist vectors and numbers satisfying the corresponding relations.

Before proceeding with the proof, let us introduce some notation and carry out the necessary preparatory calculations.
Let (x^0, ρ^0 = ρ(x^0)) be a feasible solution of problem (37). Then the set K_a(x^0) is the set of optimal solutions of the QP problem min f(t, x^0) s.t. t ∈ K.
Note that, in general, this problem is nonconvex. It follows from [2] that the set K_a(x^0) can be represented in the form K_a(x^0) = ∪_{s∈S} K_s(x^0), where, for any s ∈ S, the set K_s(x^0) is a convex polyhedron and |S| < ∞. Hence, for any s ∈ S, there exist finite sets of (extreme) vectors t(s, i) ∈ K_s(x^0), i ∈ J(s), and rays τ(s, i) ∈ ∆K(x^0), i ∈ I(s), such that the corresponding representation holds. Moreover, one can show that an analogous property takes place for the recession set. Now, consider the set ∆K(x^0) defined in (38). One can prove that this set is a union of a finite number of bounded convex polyhedra ∆K_s(x^0), s ∈ ∆S: ∆K(x^0) = ∪_{s∈∆S} ∆K_s(x^0). Then, for any s ∈ ∆S, there exist finite sets of (extreme) vectors τ(s, i) ∈ ∆K(x^0), i ∈ ∆I(s), admitting a similar representation. From the above considerations, it follows that the relations in question are equivalent to corresponding finitely generated ones. Here we have taken into account that τ(s, i) ∈ ∆K(x^0) (see (42)). Now we can prove the theorem formulated above.

Conclusion
In this paper, we considered a special class of convex SIP problems with noncompact index sets arising in nonlinear parametric optimization. For the problems of this class, we formulated and proved an existence theorem and new optimality conditions. The results of the paper will be used in the study of differential properties of solutions to parametric SIP problems. Moreover, these results may serve as the basis of a new approach to the study of special classes of SIP problems, such as Copositive Programming, Semi-Infinite Polynomial Programming, and others, for which noncompactness of index sets is commonplace.