A Parametric Kernel Function Generating the Best Known Iteration Bound for Large-Update Methods for CQSDO

In this paper, we propose a large-update primal-dual interior point algorithm for convex quadratic semidefinite optimization (CQSDO) based on a new parametric kernel function. This kernel function is a parameterized version of the kernel function introduced by M.W. Zhang (Acta Mathematica Sinica 28: 2313-2328, 2012) for CQSDO. We show that the corresponding algorithm achieves the best known iteration bound for large-update methods, namely O(√n log n log(n/ε)), which improves the iteration bound obtained by Zhang for large-update methods. Finally, we present a few numerical results to show the efficiency of the proposed algorithm.


Introduction
Let S^n denote the space of real symmetric matrices of order n and S^n_+ the cone of symmetric positive semidefinite matrices. The standard primal form of convex quadratic semidefinite optimization (CQSDO) problems is

(P)  min { C • X + ½ X • Q(X) : A_i • X = b_i, i = 1, …, m, X ≽ 0 },

and its Lagrange dual problem is

(D)  max { b^T y − ½ X • Q(X) : ∑_{i=1}^m y_i A_i + Z − Q(X) = C, Z ≽ 0 },

where C, A_i ∈ S^n with the A_i linearly independent, b ∈ R^m, and Q : S^n → S^n is a self-adjoint linear operator on S^n, i.e., A • Q(B) = Q(A) • B for all A, B ∈ S^n. The notation X ≽ 0 means that X ∈ S^n_+, and the symbol A • B denotes the trace inner product on S^n, defined by A • B = Tr(AB) = ∑_{i,j=1}^n A_ij B_ij. The CQSDO problem has many important applications in engineering and science (see, e.g., [17]). It is also a generalization of semidefinite optimization (SDO) and a special case of the semidefinite linear complementarity problem (SDLCP) [13]. There are many solution approaches for CQSDO; among them, interior-point methods (IPMs) have gained much more attention than others [2,5]. Several IPMs designed for linear optimization (LO) have been successfully extended to CQSDO (e.g., [4,14,17,19,22,23,24]), due to their polynomial complexity and practical efficiency.
Kernel functions play an important role in the design of new primal-dual interior point algorithms. Recently, Peng et al. [15] presented primal-dual IPMs for LO and SDO based on self-regular barrier functions. Subsequently, Bai et al. [7,8,9] proposed a class of primal-dual IPMs for LO based on a variety of non-self-regular kernel functions and obtained the same favorable iteration bounds for large- and small-update methods as in [15]. Moreover, Bai and her co-authors extended the aforementioned results for LO to SDO [20] and CQSDO [19]. We note that similar algorithms have been successfully extended to convex quadratic optimization over symmetric cones (CQSCO) (see [10,21]). For some other related interior-point algorithms based on kernel functions we refer to [1,3,6,11,18,22,23,24,25]. A kernel function is a univariate strictly convex function which is defined for all positive real t and is minimal at t = 1, where the minimal value equals 0. In other words, ψ(t) is a kernel function when it is twice differentiable and satisfies the following conditions:

ψ(1) = ψ′(1) = 0,  ψ″(t) > 0 for all t > 0,  lim_{t→0+} ψ(t) = lim_{t→∞} ψ(t) = +∞.

The last condition indicates the barrier property of ψ(t). Furthermore, from the above properties, ψ(t) is completely determined by its second derivative:

ψ(t) = ∫_1^t ∫_1^x ψ″(y) dy dx.

This kernel function may be extended to a scaled barrier function Ψ, defined from S^n_{++} to R_+ by Ψ(V) := Tr(ψ(V)), where V is a symmetric positive definite matrix. Therefore the value of the barrier function can be considered as a measure for the closeness of X and (y, Z) to the µ-centers of (P) and (D). In the next section, we describe how any such barrier function defines a primal-dual interior-point method. The iteration bound for such a large-update algorithm is obtained by showing that each iteration decreases the barrier function by a sufficient amount. The best iteration bound obtained so far, namely O(√n log n log(n/ϵ)), was achieved by the parameterized barrier kernel functions listed in Table 1.
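The scaled barrier Ψ(V) = Tr(ψ(V)) is computed from the eigenvalues of V. A minimal Python sketch, using the classical kernel ψ(t) = (t² − 1)/2 − log t purely as an illustrative choice (it is not the parametric kernel (1) of this paper):

```python
import numpy as np

def psi(t):
    """Classical log-barrier kernel psi(t) = (t^2 - 1)/2 - log t (illustrative choice)."""
    return (t**2 - 1.0) / 2.0 - np.log(t)

def barrier(V):
    """Psi(V) = Tr(psi(V)) = sum of psi over the eigenvalues of the
    symmetric positive definite matrix V."""
    eigs = np.linalg.eigvalsh(V)
    return float(np.sum(psi(eigs)))

# psi is minimal at t = 1 with value 0, so Psi(I) = 0; Psi grows as the
# eigenvalues of V move away from 1 (toward 0 or infinity).
```

Since ψ(t) → ∞ both as t → 0+ and as t → ∞, the value of Ψ(V) blows up near the boundary of the feasible region, which is exactly the barrier property used in the analysis.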
[Table 1: parameterized kernel functions achieving this bound, with their parameters and references, e.g. [15,16].] Recently, Zhang [23] introduced a new kernel function in the design of primal-dual IPMs for solving CQSDO. He showed that the iteration bound for the corresponding large-update algorithm is O(√n (log n)² log(n/ϵ)). This bound is a factor log n worse than the above bound in Table 1.
In this paper, we introduce a new parameterized kernel function, given in (1), where q ≥ 1 is a parameter. Note that if q = 1, the parameterized kernel function in (1) reduces to Zhang's kernel function; therefore (1) is a parameterized version of it. With a suitable choice of the parameter q, namely q = O(log n), we show that the kernel function in (1) generates the best-known iteration bound for large-update methods, namely O(√n log n log(n/ϵ)). This improves the iteration bound for large-update methods obtained by Zhang [23]. Moreover, our analysis carries over directly to SDO problems. Finally, a few preliminary numerical results are reported, showing that among different values of the parameter q the best iteration counts are achieved with q = O(log n).
The paper is organized as follows. In Section 2, we recall some known facts about matrices and matrix functions which will be used in the analysis of the algorithm, and we describe the generic primal-dual interior point algorithm based on the new kernel function for CQSDO. In Section 3, we show that the kernel function in (1) satisfies the eligibility conditions that define the class of kernel functions considered in Bai et al. [7]. We then follow the general scheme for analyzing the generic algorithm, as presented in [7]. In Section 4, we obtain the iteration bound for large-update methods based on the new kernel function. Some numerical results are provided in Section 5. Finally, some concluding remarks follow in Section 6.
The following notation is used throughout the paper. S^n_{++} denotes the cone of n × n symmetric positive definite matrices. Furthermore, X ≽ 0 (X ≻ 0) means that X ∈ S^n_+ (X ∈ S^n_{++}). For any matrix X, λ_i(X), 1 ≤ i ≤ n, denote its eigenvalues. The trace of an n × n matrix X is denoted by Tr(X) = ∑_{i=1}^n X_ii, and ∥·∥ denotes the Frobenius norm. The symmetric positive definite square root of any X ∈ S^n_{++} is denoted by X^{1/2}. For f, g : R_+ → R_+, we write f(x) = Θ(g(x)) if k_1 g(x) ≤ f(x) ≤ k_2 g(x) for some positive constants k_1 and k_2. Finally, I and 0_{n×n} denote the identity and zero matrices of order n, respectively.

The generic primal-dual IPM algorithm for CQSDO
In this section, we review some known facts about matrices and matrix functions which will be used in the analysis of the algorithm, and then derive the new kernel-function-based Nesterov-Todd direction. Finally, we present the generic primal-dual algorithm for CQSDO.

Definition 2.1
Let X be a symmetric matrix and let X = Q_X^T diag(λ_1(X), …, λ_n(X)) Q_X be the eigenvalue decomposition of X, where λ_i(X), 1 ≤ i ≤ n, denote the eigenvalues of X and Q_X is orthogonal. If ψ(t) is any univariate function whose domain contains {λ_i(X), 1 ≤ i ≤ n}, then the matrix function ψ(X) is defined by

ψ(X) = Q_X^T diag(ψ(λ_1(X)), …, ψ(λ_n(X))) Q_X.

Definition 2.1 rests on the spectral decomposition theorem for symmetric matrices; its importance is that it enables us to extend any function ψ : R → R to a function from S^n to S^n [12]. Throughout the paper, without loss of generality, we assume that both (P) and (D) satisfy the interior point condition (IPC), i.e., there exists a strictly feasible triple (X⁰, y⁰, Z⁰) with X⁰ ≻ 0 and Z⁰ ≻ 0. In addition, the operator Q is monotone, i.e., X • Q(X) ≥ 0 for all X ∈ S^n. If the IPC holds, it is well known that finding optimal solutions of (P) and (D) is equivalent to solving the following system:

A_i • X = b_i, i = 1, …, m,  X ≽ 0,
∑_{i=1}^m y_i A_i + Z − Q(X) = C,  Z ≽ 0,   (2)
XZ = 0.

Stat., Optim. Inf. Comput. Vol. 8, December 2020 L. GUERRA AND M. ACHACHE
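Definition 2.1 translates directly into code: apply ψ to the eigenvalues and reassemble. A minimal sketch (the function name is ours):

```python
import numpy as np

def matrix_function(psi, X):
    """Apply a univariate function psi to a symmetric matrix X via its
    eigenvalue decomposition (Definition 2.1)."""
    lam, Q = np.linalg.eigh(X)   # columns of Q are orthonormal eigenvectors
    return Q @ np.diag(psi(lam)) @ Q.T

# Sanity check: for psi(t) = t^2 the matrix function must agree with the
# ordinary matrix product X @ X, since both act as squaring on eigenvalues.
```

This is the mechanism by which ψ(V) and ψ′(V) are formed from the scaled matrix V in the search-direction system below.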

The basic idea of primal-dual IPMs is to replace the third equation XZ = 0 in the system (2), the so-called complementarity condition for (P) and (D), by the parameterized equation XZ = µI (µ > 0). Thus we consider the system

A_i • X = b_i, i = 1, …, m,  X ≻ 0,
∑_{i=1}^m y_i A_i + Z − Q(X) = C,  Z ≻ 0,   (3)
XZ = µI.

Since the IPC holds and the A_i are linearly independent, the parameterized system (3) has a unique solution (X(µ), y(µ), Z(µ)) for any µ > 0, where X(µ) is the µ-center of (P) and (y(µ), Z(µ)) is the µ-center of (D). The set of µ-centers defines the central path of (P) and (D) (with µ running through the positive real numbers). If µ → 0, then the limit of the central path exists and, since the limit points satisfy the complementarity condition, the limit yields optimal solutions for (P) and (D) (see [14]). If (X(µ), y(µ), Z(µ)) is known for some positive µ, then we decrease µ to µ := (1 − θ)µ for some fixed θ ∈ (0, 1) and solve the following Newton system:

A_i • ∆X = 0, i = 1, …, m,
∑_{i=1}^m ∆y_i A_i + ∆Z − Q(∆X) = 0,   (4)
X∆Z + ∆XZ = µI − XZ,

to obtain the search directions (∆X, ∆y, ∆Z). Note that ∆Z is symmetric due to the second equation in (4), but ∆X may not be symmetric. Several methods have been proposed for symmetrizing the third equation in (4) so that the resulting new system has a unique symmetric solution.
In this paper, we use the Nesterov-Todd symmetrization scheme [3,4,15], which defines the so-called NT-direction. Let us define the matrix

P := X^{1/2} (X^{1/2} Z X^{1/2})^{−1/2} X^{1/2}

and D = P^{1/2}, where P^{1/2} denotes the symmetric square root of P. The matrix D can be used to scale X and Z to the same matrix V as follows:

V := (1/√µ) D^{−1} X D^{−1} = (1/√µ) D Z D.

Note that both matrices D and V are symmetric and positive definite. The NT-direction (D_X, ∆y, D_Z) is obtained from the system

Ā_i • D_X = 0, i = 1, …, m,
∑_{i=1}^m ∆y_i Ā_i + D_Z − Q̄(D_X) = 0,   (6)
D_X + D_Z = V^{−1} − V,

with the scaled quantities

D_X := (1/√µ) D^{−1} ∆X D^{−1},  D_Z := (1/√µ) D ∆Z D,  Ā_i := D A_i D,  Q̄(M) := D Q(D M D) D

(where ∆y is suitably rescaled). Since the A_i are linearly independent, so are the Ā_i; hence the system (6) has a unique solution (D_X, ∆y, D_Z) with D_X and D_Z symmetric. Furthermore, since Q is a self-adjoint and monotone linear operator, D_X • D_Z = D_X • Q̄(D_X) ≥ 0. The search direction described above is the classical NT-direction. Now, as mentioned in the introduction, the barrier function is defined for every given kernel function ψ(t) as follows:

Ψ(V) := Tr(ψ(V)) = ∑_{i=1}^n ψ(λ_i(V)).   (8)
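The NT scaling P = X^{1/2}(X^{1/2} Z X^{1/2})^{−1/2} X^{1/2}, D = P^{1/2} can be computed with dense eigendecompositions; the identity D^{−1} X D^{−1} = D Z D (equivalently, P Z P = X) is what makes the single scaled matrix V well defined. A sketch, with function names of our own choosing:

```python
import numpy as np

def sym_sqrt(M):
    """Symmetric positive definite square root M^(1/2) via eigendecomposition."""
    lam, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

def nt_scaling(X, Z, mu):
    """NT scaling: P = X^(1/2) (X^(1/2) Z X^(1/2))^(-1/2) X^(1/2), D = P^(1/2),
    and the scaled matrix V = (1/sqrt(mu)) D^(-1) X D^(-1) = (1/sqrt(mu)) D Z D."""
    Xh = sym_sqrt(X)
    inner = sym_sqrt(Xh @ Z @ Xh)
    P = Xh @ np.linalg.inv(inner) @ Xh
    D = sym_sqrt(P)
    Dinv = np.linalg.inv(D)
    V = (Dinv @ X @ Dinv) / np.sqrt(mu)
    return D, V
```

Because P Z P = X by construction, scaling X down by D and Z up by D lands on the same matrix V, which is then the single argument of the barrier Ψ and of ψ′ in the search-direction system.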

When we use the function ψ(·) and its first three derivatives ψ′(·), ψ″(·), and ψ‴(·) without any specification, they denote matrix functions if the argument is a matrix and univariate functions if the argument is in R.
Following [7,11,15,16], we now describe the new NT-direction for CQSDO. The kernel-function-based NT-direction for CQSDO is obtained simply by replacing the right-hand side V^{−1} − V of the third equation in (6) by −ψ′(V). Thus, we have the following system:

Ā_i • D_X = 0, i = 1, …, m,
∑_{i=1}^m ∆y_i Ā_i + D_Z − Q̄(D_X) = 0,   (9)
D_X + D_Z = −ψ′(V),

where ψ(t) is a given kernel function and ψ(V), ψ′(V) are the associated matrix functions. Note that if ψ(t) is the classical kernel function ψ(t) = (t² − 1)/2 − log t, then (9) coincides with the classical NT-direction in (6). Now, by taking a suitable default step size α ∈ (0, 1), these search directions yield a new triple (X + α∆X, y + α∆y, Z + α∆Z). We repeat the procedure until we find an iterate in a certain neighborhood of (X(µ), y(µ), Z(µ)). Then µ is again reduced by the factor (1 − θ) and we apply Newton's method targeting the new µ-centers, and so on. This process is repeated until µ is small enough, at which stage we have found an ϵ-solution of the problems (P) and (D).
We can now describe the algorithm in a more formal way. The generic form of the large-update primal-dual interior point algorithm for solving CQSDO is stated as follows.
Generic primal-dual algorithm for CQSDO
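The outer loop of the generic algorithm repeatedly reduces µ by the factor (1 − θ) until nµ < ϵ. The following small sketch (the function name is ours; the update schedule is the one described above) counts the resulting number of outer iterations:

```python
def outer_iterations(n, mu0, eps, theta):
    """Count outer iterations of the generic algorithm: starting from mu0,
    mu is multiplied by (1 - theta) until n*mu < eps."""
    mu, k = mu0, 0
    while n * mu >= eps:
        mu *= (1.0 - theta)
        k += 1
    return k

# For a large-update method (theta = Theta(1)) the count grows like
# (1/theta) * log(n * mu0 / eps); each outer iteration additionally costs
# a number of inner Newton steps bounded via the barrier Psi.
```

This separation, outer iterations bounded by (1/θ) log(nµ⁰/ϵ) and inner iterations bounded through the decrease of Ψ, is exactly how the total iteration bound of Step 7 of the scheme below is assembled.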
The analysis of the generic algorithm by using an eligible kernel function is based on the application of the following general scheme as presented in [7].
Step 1. Solve the equation −½ψ′(t) = s to get ρ(s), the inverse function of −½ψ′(t) for t ∈ (0, 1]; the default step size is taken as ᾱ = 1/ψ″(ρ(2δ)).
Step 2. Calculate the decrease of Ψ(V) in terms of δ = δ(V) for the default step size ᾱ from f(ᾱ) ≤ −δ²/ψ″(ρ(2δ)).
Step 3. Solve the equation ψ(t) = s to get ϱ(s), the inverse function of ψ(t) for t ≥ 1. If the equation is hard to solve, derive an upper bound for ϱ(s).
Step 4. Derive a lower bound for δ(V) in terms of Ψ(V) by using δ(V) ≥ ½ψ′(ϱ(Ψ(V))).
Step 5. Using the results of Steps 3 and 4, find positive constants γ and β, with γ ∈ (0, 1], such that f(ᾱ) ≤ −β Ψ(V)^{1−γ}.
Step 6. Calculate the uniform upper bound Ψ₀ for Ψ(V) from Ψ₀ ≤ n ψ(ϱ(τ/n)/√(1 − θ)).
Step 7. Derive an upper bound for the total number of iterations from (Ψ₀^γ/(βγ)) · (1/θ) log(n/ϵ).
Step 8. Set τ = O(n) and θ = Θ(1) so as to calculate an iteration bound for large-update IPMs, or set τ = O(1) and θ = Θ(1/√n) to get an iteration bound for small-update methods.
In the scheme, we use the barrier function Ψ(V) defined in (8) as a measure function; a norm-based proximity measure δ(V), defined by

δ(V) := ½ ∥ψ′(V)∥,

is also used in the analysis of the algorithm. Note that δ(V) = 0 ⇔ V = I ⇔ Ψ(V) = 0.
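With δ(V) = ½∥ψ′(V)∥, the norm-based proximity from [7], the measure is again a spectral computation. A sketch using the classical kernel ψ(t) = (t² − 1)/2 − log t as an illustrative stand-in (so ψ′(t) = t − 1/t), not the paper's parametric kernel:

```python
import numpy as np

def delta(V):
    """Norm-based proximity delta(V) = (1/2) * Frobenius norm of psi'(V),
    computed through the eigenvalues of V, with the illustrative classical
    kernel psi(t) = (t^2 - 1)/2 - log t, i.e. psi'(t) = t - 1/t."""
    eigs = np.linalg.eigvalsh(V)
    dpsi = eigs - 1.0 / eigs
    return 0.5 * float(np.sqrt(np.sum(dpsi**2)))

# delta(I) = 0, consistent with delta(V) = 0 <=> V = I <=> Psi(V) = 0.
```

Both Ψ(V) and δ(V) vanish exactly at the µ-center (where V = I) and grow as V moves away from it, which is what makes either usable as a closeness measure in the analysis.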

Iteration bound of the large-update algorithm
Throughout the scheme, we need in Steps 1 and 3 the inverse functions ρ(s) of −½ψ′(t) for t ≤ 1 and ϱ(s) of ψ(t) for t ≥ 1. Since we are unable to obtain explicit expressions for these inverse functions, we derive bounds for them by using the following two lemmas from [7], which we state without proofs. Using these two lemmas, we then derive bounds for ρ(s) and ϱ(s).
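When ψ(t) = s cannot be inverted in closed form, ϱ(s) can still be evaluated numerically, and bounds of the form ϱ(s) ≤ 1 + √(2s) (which hold whenever ψ(t) ≥ (t − 1)²/2 for t ≥ 1) can be checked. A sketch with the classical kernel as an illustration:

```python
import math

def psi(t):
    # Classical kernel psi(t) = (t^2 - 1)/2 - log t, used only as an example;
    # it satisfies psi''(t) >= 1, hence psi(t) >= (t - 1)^2 / 2 for t >= 1.
    return (t * t - 1.0) / 2.0 - math.log(t)

def varrho(s, hi=1e6, tol=1e-12):
    """Invert psi(t) = s for t >= 1 by bisection (psi is increasing there)."""
    lo = 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since s = ψ(ϱ(s)) ≥ (ϱ(s) − 1)²/2 under the stated growth condition, one gets ϱ(s) ≤ 1 + √(2s), the type of estimate used in Step 6 below.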

This leads to the required bound, where the last inequality is satisfied for all q ≥ 1.
Step 6. From Lemma 4.1, we have ϱ(τ/n) ≤ 1 + √(2τ/n). As a consequence, we obtain a uniform upper bound Ψ₀ for Ψ(V).
Step 7. By inequality (23), the number of inner iterations is bounded above in terms of Ψ₀. Substituting (24) in this inequality gives the bound (25) on the total number of iterations.
Step 8. For large-update methods, set τ = O(n) and θ = Θ(1). By choosing q = log(4/3 (1 + O(n))) = O(log n), the iteration bound in (25) becomes O(√n log n log(n/ϵ)), which is the currently best known complexity for such large-update methods.

Numerical results
In this section, we present some numerical results obtained under Matlab 8.1, with the implementation run on a Pentium 4, for solving some convex quadratic semidefinite optimization problems. Different values of the parameters q and θ are considered to show their influence on the number of iterations produced by the large-update primal-dual algorithm. The initial primal-dual point (X⁰, y⁰, Z⁰) with µ⁰ > 0 is chosen such that the pair is strictly feasible and the proximity satisfies Ψ(X⁰, Z⁰; µ⁰) ≤ τ. Experiments with the theoretical default step size α at each inner iteration guarantee the convergence of the method but yield unfavorable results, i.e., slow convergence of the corresponding algorithm. Therefore a practical step size is used instead, based on the following strategy: at each inner iteration we compute a maximum step size α_max such that X + ξα_max∆X ≻ 0 and Z + ξα_max∆Z ≻ 0, with α_max = min(α_X, α_Z) and ξ ∈ (0, 1), where α_X and α_Z are the largest primal and dual feasible step sizes, respectively. The kernel function used here is the parametric kernel function (1). Now, we test the algorithm on two problems of convex quadratic semidefinite optimization.
Problem 1.
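The practical step-size rule requires the largest α keeping X + α∆X positive definite. One standard way to compute it (our illustrative implementation, not necessarily the authors' Matlab code) uses the eigenvalues of X^{−1/2} ∆X X^{−1/2}:

```python
import numpy as np

def max_feasible_step(X, dX, cap=1.0):
    """Largest alpha in (0, cap] keeping X + alpha*dX positive definite.
    X must be symmetric positive definite; dX symmetric.
    X + alpha*dX > 0  <=>  I + alpha * X^(-1/2) dX X^(-1/2) > 0."""
    lam, Q = np.linalg.eigh(X)
    Xinv_half = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.T
    w = np.linalg.eigvalsh(Xinv_half @ dX @ Xinv_half)
    wmin = w.min()
    if wmin >= 0:
        return cap            # any step in (0, cap] preserves definiteness
    return min(cap, -1.0 / wmin)
```

Applying this to (X, ∆X) and (Z, ∆Z) gives α_X and α_Z; the algorithm then takes the damped step ξ · min(α_X, α_Z) with ξ ∈ (0, 1) so that the iterate stays strictly in the interior.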
Comment. Across the numerical results obtained by the algorithm with different values of q and θ, the minimal number of inner iterations is achieved with the value q = log(4/3 (1 + n)).

Conclusion
In this paper, we introduced a new kernel function which is a parameterized version of the kernel function introduced by Zhang, and showed that it yields the best known iteration bound for large-update methods for solving CQSDO. The work was inspired by general primal-dual interior point methods based on kernel functions and the general scheme for analyzing such methods. Finally, we presented some numerical results to show the efficiency of the algorithm and to consolidate our theoretical results regarding the effect of the parameter q.