A Full Nesterov-Todd Step Infeasible Interior-point Method for Symmetric Optimization in the Wider Neighborhood of the Central Path

In this paper, an improved Interior-Point Method (IPM) for solving symmetric optimization problems is presented. Symmetric optimization (SO) problems are linear optimization problems over symmetric cones. In particular, the method can be efficiently applied to an important instance of SO, the Controlled Tabular Adjustment (CTA) problem, a method used for Statistical Disclosure Limitation (SDL) of tabular data. The presented method is a full Nesterov-Todd step infeasible IPM for SO. The algorithm converges to an ε-approximate solution from any starting point, whether feasible or infeasible. Each iteration consists of a feasibility step and several centering steps. The main improvement of the method is that the iterates are obtained in a wider neighborhood of the central path than in similar algorithms of this type, while the best iteration bound currently known for infeasible short-step methods is still achieved.


Introduction
Interior-Point Methods (IPMs) are theoretically powerful and numerically efficient iterative methods based on Newton's method. However, unlike Newton's method, IPMs guarantee convergence to an ε-approximate solution of a problem in a polynomial number of iterations. It would be ambitious to claim such a result for every optimization problem; however, it does hold for quite a large class of optimization problems, including well-known and important problems such as linear optimization (LO), quadratic optimization (QO), semidefinite optimization (SDO), conic optimization problems, and many others. There is extensive literature on IPMs; the references [21,26], and the references therein, may serve as a good start.
IPMs have proven to be a good alternative to the classic simplex method, and they can efficiently solve LO problems of large size. Moreover, they can be applied to important optimization problems not previously accessible to simplex-type methods, such as conic optimization problems. The development of the IPM presented in this paper was motivated by the desire to provide a theoretical foundation for the efficient solution of the conic formulation of the Controlled Tabular Adjustment (CTA) problem [13].

A full Newton-step infeasible IPM for LO was first analyzed by Roos [20]. The method was generalized by Gu et al. [11] to SO by using the full Nesterov-Todd (NT) direction as a search direction. The obtained iteration bound coincides with the one derived for LO, with n replaced by r, the rank of the associated EJA, and matches the currently best-known iteration bounds for infeasible IPMs for SO.
In this paper, we present an infeasible full NT-step IPM for SO that is a generalization of the feasible IPM discussed in [25]. In particular, Lemma 2.3 and Lemma 2.5 from [25] were used to obtain convergence of the method in a wider neighborhood of the central path while still maintaining the best iteration bound known for these types of methods.
The outline of the paper is as follows. In Section 2, we briefly recall some important definitions and results on EJAs and symmetric cones that are needed in the paper. In addition, we give a brief description and formulation of the CTA problem. In Section 3, we briefly recall the framework of the full NT-step feasible IPM for SO with its improved convergence and complexity analysis. The full NT-step infeasible IPM for SO, with its convergence and complexity analysis, is presented in Section 4. Finally, some concluding remarks follow in Section 5.

Euclidean Jordan Algebras and Symmetric Cones
In this section, we recall some important definitions and results on EJAs and associated symmetric cones that will be used in the rest of the paper.
A comprehensive treatment of EJAs and SCs can be found in the monograph [7] and in [1,8,9,11,22,24] as it relates to optimization.

Definition 1
Let (V, ⟨·, ·⟩) be an n-dimensional inner product space over R and • : (x, y) → x • y be a bilinear map from V × V to V. Then (V, •, ⟨·, ·⟩) (denoted by V) is an EJA if it satisfies the following conditions: (i) x • y = y • x for all x, y ∈ V (commutativity); (ii) x • (x² • y) = x² • (x • y) for all x, y ∈ V, where x² = x • x (Jordan's axiom); (iii) ⟨x • y, z⟩ = ⟨x, y • z⟩ for all x, y, z ∈ V (compatibility with the inner product). The operation x • y is called the Jordan product of x and y. Moreover, we always assume that there exists an identity element e ∈ V such that e • x = x • e = x for all x ∈ V.
For any element x ∈ V, the Lyapunov transformation L(x) : V → V is given by L(x)y := x • y. Furthermore, we define the quadratic representation of x in V as P(x) := 2L(x)² − L(x²). In what follows, we list some basic facts about symmetric cones. Let V be a finite-dimensional real Euclidean space. A nonempty subset K of V is a cone if x ∈ K and λ ≥ 0 imply λx ∈ K. A cone K is a convex cone iff it is a cone and a convex set. The dual cone of a cone K is defined as the set K* = {y ∈ V : ⟨x, y⟩ ≥ 0 for all x ∈ K}. It is straightforward to see that K* is a closed convex cone.
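These operators can be made concrete in the familiar EJA of real symmetric matrices, where the Jordan product is x • y = (xy + yx)/2. The sketch below (our illustration, not part of the original development) verifies numerically that the quadratic representation P(x) = 2L(x)² − L(x²), with L(x)y = x • y, reduces in this algebra to the matrix congruence P(x)y = xyx.

```python
import numpy as np

def jordan(x, y):
    # Jordan product on the EJA of n x n real symmetric matrices
    return (x @ y + y @ x) / 2

def lyapunov(x, y):
    # Lyapunov transformation applied to y: L(x)y = x . y
    return jordan(x, y)

def quad_rep(x, y):
    # quadratic representation P(x) = 2 L(x)^2 - L(x^2) applied to y
    return 2 * lyapunov(x, lyapunov(x, y)) - lyapunov(x @ x, y)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)); x = (a + a.T) / 2
b = rng.standard_normal((4, 4)); y = (b + b.T) / 2

# In this algebra, P(x)y coincides with the matrix congruence x y x
print(np.allclose(quad_rep(x, y), x @ y @ x))  # True
```

Expanding 2x • (x • y) − x² • y with the matrix Jordan product confirms the identity symbolically as well: the cross terms combine to exactly xyx.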
In what follows, we consider a convex, pointed cone K. The interior of K is denoted by intK.
A cone K is a SC if it is homogeneous and self-dual. A cone K is homogeneous if the automorphism group Aut(K) of K acts transitively on the interior intK of the cone K, that is, for all x, y ∈ intK there exists g ∈ Aut(K) such that g(x) = y. The automorphism group is defined as Aut(K) = {g ∈ GL(V) : g(K) = K}. Let us consider an EJA V and the corresponding cone of squares K(V) = {x² : x ∈ V}.

It can be shown that K(V) is a SC (see, e.g., [7,23]). This is the form of SC that will be used throughout the rest of the paper. The importance of SCs for optimization lies in the fact that common and frequently used cones in optimization, such as the non-negative orthant, the SOC (ice-cream cone), and the semidefinite cone, whose definitions are listed below, are all instances of SCs.
1. The linear cone or non-negative orthant: K = R^n_+ := {x ∈ R^n : x_i ≥ 0, i = 1, . . . , n}.
2. The positive semidefinite cone: K = S^n_+ := {X ∈ S^n : X ≽ 0}, where ≽ means that X is a positive semidefinite matrix and S^n is the set of symmetric n × n matrices.
3. The quadratic cone or SOC: K = L^n := {x = (x_1, x̄) ∈ R × R^{n−1} : x_1 ≥ ∥x̄∥}.
In what follows, we define the important concept of the rank of an EJA and describe two important decompositions: the spectral decomposition of an element of an EJA, and the Peirce decomposition of an EJA.
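As a small numerical illustration of the three cones just listed (function names and tolerances are ours, not from the paper), membership can be checked as follows:

```python
import numpy as np

def in_nonneg_orthant(x, tol=1e-12):
    # linear cone: every component non-negative
    return bool(np.all(x >= -tol))

def in_psd_cone(X, tol=1e-10):
    # positive semidefinite cone: symmetric X with non-negative spectrum
    return bool(np.min(np.linalg.eigvalsh(X)) >= -tol)

def in_soc(x, tol=1e-12):
    # second-order (ice cream) cone: x_1 >= norm of the remaining block
    return bool(x[0] + tol >= np.linalg.norm(x[1:]))

print(in_nonneg_orthant(np.array([1.0, 0.0, 2.0])))     # True
print(in_psd_cone(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(in_soc(np.array([1.0, 0.8, 0.8])))                # False: ||(0.8, 0.8)|| > 1
```

Each test mirrors the defining inequality of the corresponding cone; the PSD check uses the eigenvalue characterization X ≽ 0 ⟺ λ_min(X) ≥ 0.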
For any x ∈ V, let r be the smallest integer such that the set {e, x, . . . , x^r} is linearly dependent. Then r is the degree of x, denoted deg(x). Clearly, the degree of x is bounded by the dimension of the vector space V. Furthermore, there exists a polynomial p ≠ 0 such that p(x) = 0. If this polynomial is monic (has leading coefficient one) and is of minimal degree, then it is called the minimal polynomial of the element x. The rank of V, denoted by rank(V), is the largest deg(x) over all elements x ∈ V. An element x ∈ V is called regular if its degree equals the rank of V. In the sequel, V denotes an EJA with rank(V) = r, unless stated otherwise.
For a regular element x ∈ V, since {e, x, x², . . . , x^r} is linearly dependent, there are real numbers a_1(x), . . . , a_r(x) such that the minimal polynomial of x is given by f(λ; x) = λ^r − a_1(x)λ^{r−1} + a_2(x)λ^{r−2} − · · · + (−1)^r a_r(x). Hence f(x; x) = 0. The polynomial f(λ; x) is called the characteristic polynomial of the regular element x. Hence, for regular elements, the minimal and characteristic polynomials coincide; however, for elements that are not regular, that may not be the case. Additionally, it can be proved that, as the regular element x varies, a_1(x), . . . , a_r(x) are polynomials in x (Proposition II.2.1 in [7]). The coefficient a_1(x) is called the trace of x, denoted tr(x), and the coefficient a_r(x) is called the determinant of x, denoted det(x). An element c ∈ V is said to be an idempotent if c² = c. Two idempotents c_1 and c_2 are said to be orthogonal if c_1 • c_2 = 0. Moreover, an idempotent is primitive if it is non-zero and cannot be written as the sum of two (necessarily orthogonal) non-zero idempotents. We say that {c_1, . . . , c_r} is a complete system of orthogonal primitive idempotents, or a Jordan frame, if each c_i is a primitive idempotent, c_i • c_j = 0 for i ≠ j, and ∑_{i=1}^r c_i = e. The Löwner partial ordering "≽_K" of V defined by a cone K is given by x ≽_K s iff x − s ∈ K. The following theorem describes the spectral decomposition of elements of an EJA V, which plays an important role in the analysis of IPMs for SO and other optimization problems.
Theorem 1 (Theorem III.1.2 in [7]) Let x ∈ V. Then there exist a Jordan frame {c_1, . . . , c_r} and real numbers λ_1(x), . . . , λ_r(x) such that x = ∑_{i=1}^r λ_i(x) c_i. The numbers λ_i(x) (with their multiplicities) are the eigenvalues of x. Furthermore, the trace and the determinant of x are given by tr(x) = ∑_{i=1}^r λ_i(x) and det(x) = ∏_{i=1}^r λ_i(x). The theorem below provides another important decomposition, the Peirce decomposition, of the space V.
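In the symmetric-matrix algebra, the Jordan frame of Theorem 1 consists of the rank-one projectors onto an orthonormal eigenbasis. The following sketch (our illustration) confirms the decomposition and the trace and determinant identities numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))
x = (a + a.T) / 2                      # element of the EJA of 3x3 symmetric matrices
lam, Q = np.linalg.eigh(x)
frame = [np.outer(Q[:, i], Q[:, i]) for i in range(3)]  # Jordan frame c_i

print(np.allclose(x, sum(l * c for l, c in zip(lam, frame))))  # x = sum lam_i c_i
print(np.allclose(sum(frame), np.eye(3)))                      # sum c_i = e
print(np.isclose(np.trace(x), lam.sum()))                      # tr(x) = sum lam_i
print(np.isclose(np.linalg.det(x), lam.prod()))                # det(x) = prod lam_i
```

Each c_i here is idempotent (c_i² = c_i) and the projectors onto distinct eigenvectors are mutually orthogonal, so {c_1, c_2, c_3} is indeed a Jordan frame.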
Theorem 2 (Theorem IV.2.1 in [7]) Let {c_1, . . . , c_r} be a Jordan frame in V. Then the space V is the orthogonal direct sum of the spaces V_{ij} (i ≤ j), i.e., V = ⊕_{i≤j} V_{ij}, where V_{ii} := {x ∈ V : x • c_i = x} = R c_i and V_{ij} := {x ∈ V : x • c_i = x/2 = x • c_j} for i < j. Thus, the Peirce decomposition of x ∈ V with respect to the Jordan frame {c_1, . . . , c_r} is given by x = ∑_{i=1}^r x_i c_i + ∑_{i<j} x_{ij}, with x_i ∈ R and x_{ij} ∈ V_{ij}. As a consequence of Theorem 2, we have the following corollary.
Corollary 1 (Lemma 12 in [22]) Let x ∈ V and let its spectral decomposition with respect to the Jordan frame {c_1, . . . , c_r} be given by (5). Then the following statements hold.
(i) The matrices L(x) and P(x) commute and thus share a common system of eigenvectors; in fact, the c_i, 1 ≤ i ≤ r, are among their common eigenvectors. (ii) The eigenvalues of L(x) have the form (λ_i + λ_j)/2, 1 ≤ i ≤ j ≤ r. As already indicated, for any x, s ∈ V, the trace inner product is given by ⟨x, s⟩ := tr(x • s). Thus, tr(x) = ⟨x, e⟩. Hence, it is easy to verify that tr(x + s) = tr(x) + tr(s) and x ≼_K s ⇒ tr(x) ≤ tr(s).
The Frobenius norm induced by this trace inner product is defined by ∥x∥_F := √⟨x, x⟩ = √tr(x²). It follows from Theorem 1 that ∥x∥_F = √(∑_{i=1}^r λ_i²(x)). One can easily verify that ∥e∥_F = √r. Furthermore, we have |λ_i(x)| ≤ ∥x∥_F for each i. In the following lemmas, we recall several important inequalities used later in the paper.
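Continuing the symmetric-matrix illustration (our sketch, not from the paper), the trace inner product and the induced Frobenius norm can be checked against the eigenvalue formula of Theorem 1:

```python
import numpy as np

def jordan(x, y):
    # Jordan product on symmetric matrices
    return (x @ y + y @ x) / 2

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3)); x = (a + a.T) / 2
b = rng.standard_normal((3, 3)); s = (b + b.T) / 2

inner = np.trace(jordan(x, s))          # trace inner product <x, s> = tr(x . s)
fro = np.sqrt(np.trace(jordan(x, x)))   # induced norm ||x||_F
lam = np.linalg.eigvalsh(x)

print(np.isclose(fro, np.sqrt(np.sum(lam ** 2))))  # ||x||_F via the eigenvalues
print(np.isclose(inner, np.sum(x * s)))            # agrees with the entrywise form
print(np.isclose(np.trace(np.eye(3)), 3.0))        # ||e||_F^2 = tr(e) = r
```

The second check works because tr((xs + sx)/2) = tr(xs), which for symmetric matrices equals the entrywise sum ∑_{ij} x_{ij}s_{ij}.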
Lemma 1 (Lemma 2.13 in [11]) Let x, s ∈ V and ⟨x, s⟩ = 0. Then Lemma 2 (Lemma 2.16 in [11]) Let x, s ∈ V. Then The next lemma provides an important inequality connecting the eigenvalues of x • s with the sum of the Frobenius norms of x and s.
Thus, the following corollary follows immediately from Lemma 3.

Corollary 2
Lemma 4 (Lemma 2.5 in [25]) Let u, v ∈ V with ⟨u, v⟩ = 0, and suppose ∥u + v∥_F = 2a with a < 1. Then Lemmas 3 and 4 are crucial in developing the improved complexity analysis of the algorithm presented in Section 3.

Controlled Tabular Adjustment Problem
In this subsection, we provide the formulation of the CTA problem as an important example of a conic problem to which the IPM developed in this paper can be efficiently applied.
The following CTA formulation is given in [13] and several other papers. Given the following set of parameters: (i) the vector a = (a_1, . . . , a_n)^T of cell values, which satisfies a linear system Aa = b, where A ∈ R^{m×n} is an m × n matrix and b ∈ R^m is an m-vector; the system usually describes the fact that the sum of the elements in each row and column should remain unchanged, i.e., constant; (ii) a lower and an upper bound for each cell, l_{a_i} ≤ a_i ≤ u_{a_i} for i ∈ N, which are considered known by any attacker. A CTA problem is the problem of finding values z_i, i ∈ N, such that z_i, i ∈ S, are safe values and the weighted distance between the released values z_i and the original values a_i, denoted ∥z − a∥_{l(w)}, is minimized, which leads to solving the optimization problem (14). As indicated in assumption (iv) above, safe values are the values that satisfy (15). By introducing a vector of binary variables y ∈ {0, 1}^s, the constraint (15) can be written in the form (16), where M ≫ 0 is a large positive number. Replacing the last constraint in the CTA model (14) with (16) leads to a mixed-integer convex optimization problem (MIOP), which is, in general, a difficult problem to solve; however, it provides solutions with high data utility [3]. The alternative approach is to fix the binary variables up front, which leads to a CTA that is a continuous convex optimization problem, because all binary variables are replaced with the values 0 or 1. The continuous CTA is easier to solve; however, the obtained solution may have a lower data utility, because the optimal solution of the continuous CTA is either a feasible or an infeasible solution of the corresponding MIOP, depending on the values that were assigned to the binary variables. Strategies for avoiding a wrong assignment of binary variables that may result in the MIOP being infeasible are discussed in [4,5].
In what follows, we consider a continuous CTA where the binary variables in the MIOP are fixed to certain values 0 or 1, and the vector z is replaced by the vector of cell deviations x = z − a. Then, the CTA (14) reduces to the convex optimization problem (17), where the upper and lower bounds for x_i, i ∈ N, are defined by l_{x_i} = l_{a_i} − a_i and u_{x_i} = u_{a_i} − a_i. The two most commonly used norms in problem (17) are the ℓ1 and ℓ2 norms. For the ℓ2-norm, problem (17) reduces to the ℓ2-CTA model, which is a standard QO problem that can be efficiently solved using IPMs or other methods. For the ℓ1-norm, problem (17) reduces to the ℓ1-CTA model (21). The ℓ1-CTA model (21) is a convex optimization problem; however, the objective function is not differentiable at x = 0. Since most algorithms, including IPMs, require differentiability of the objective function, problem (21) needs to be reformulated.
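The ℓ2-CTA model can be illustrated on a tiny invented instance (the table, weights, and protection level below are hypothetical, and we use a generic solver rather than the IPM of this paper): minimize the weighted squared deviations subject to Ax = 0 and box bounds, with the sensitive cell forced away from its original value.

```python
import numpy as np
from scipy.optimize import minimize

# Deviations x = z - a for a hypothetical 2x2 table; A x = 0 preserves
# both row sums and the first column sum (all data below are invented).
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])
w = np.ones(4)                              # cell weights
bounds = [(2.0, 5.0)] + [(-5.0, 5.0)] * 3   # cell 0 sensitive: deviation >= 2

res = minimize(lambda x: np.sum(w * x ** 2),
               x0=np.array([2.0, -2.0, -2.0, 2.0]),
               jac=lambda x: 2 * w * x,
               bounds=bounds,
               constraints={"type": "eq", "fun": lambda x: A @ x})
print(res.success, np.round(res.x, 4))      # minimal-l2 safe deviations
```

Here the marginal constraints force x_2 = x_3 = −x_1 and x_4 = x_1, so the optimum puts the sensitive deviation at its protection level, x = (2, −2, −2, 2).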
The standard reformulation is the transformation of model (21) into an LO model. The drawback of this LO reformulation is that the number of variables and inequality constraints doubles. In [13], an alternative SOC reformulation of ℓ1-CTA is proposed in which the dimension of the problem does not increase as much. It is based on the fact that the absolute value has an obvious SOC representation, since the epigraph of the absolute value function, {(x, t) : |x| ≤ t}, is exactly the two-dimensional SOC. A SOC formulation of the ℓ1-CTA (21) is given below. This conic formulation of the continuous CTA problem is an important example of a conic problem to which the IPM developed and analyzed in the rest of this paper can be efficiently applied.

A Brief Outline of the Full NT-Step Feasible IPM
In this section, a brief outline of the feasible algorithm presented in [25] is given. Let (V, •) be an n-dimensional EJA with rank r, equipped with the standard inner product ⟨x, s⟩ = tr(x • s), and let K be the corresponding symmetric cone. Moreover, we always assume that there exists an identity element e ∈ V such that e • x = x for all x ∈ V. Additional facts regarding EJAs and SCs are listed in Section 2 and the references cited there.
We consider the LO problem over symmetric cones, or shortly, the SO problem, given in the standard form
(SOP) min ⟨c, x⟩ s.t. Ax = b, x ∈ K,
and its dual problem
(SOD) max b^T y s.t. A^*y + s = c, s ∈ K,
where A is a linear operator from V to R^m, b ∈ R^m, and c and the rows of A belong to V. Without loss of generality, we assume that the rows of A are linearly independent. Additionally, without loss of generality, we can assume that both (SOP) and (SOD) satisfy the interior-point condition (IPC) [22], i.e., there exists (x⁰, y⁰, s⁰) such that Ax⁰ = b, x⁰ ∈ intK, A^*y⁰ + s⁰ = c, s⁰ ∈ intK. The perturbed Karush-Kuhn-Tucker (KKT) conditions for (SOP) and (SOD) are given by
Ax = b, x ∈ K, A^*y + s = c, s ∈ K, x • s = µe, (25)
with µ > 0. The parameterized system (25) has a unique solution (x(µ), y(µ), s(µ)) for each µ > 0. The set of µ-centers forms a homotopy path, with µ running through all positive real numbers, which is called the central path. If µ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition x • s = 0, it naturally yields an optimal solution for (SOP) and (SOD) (see, e.g., [8,22]).
IPMs follow the central path approximately and find an approximate solution of the underlying problems (SOP) and (SOD) as µ gradually decreases to zero. Just as in the case of SDO, linearizing the third equation in (25) may not lead to a unique element in V. Thus, it is necessary to symmetrize that equation before linearizing it. To overcome this difficulty, the third equation of the system (25) is replaced by the following equivalent scaled equation (Lemma 28 in [22]): P(u)x • P(u)⁻¹s = µe, where u is a scaling point from the interior of the cone K (i.e., intK). Applying Newton's method, we obtain the system
A∆x = 0, A^*∆y + ∆s = 0, P(u)x • P(u)⁻¹∆s + P(u)⁻¹s • P(u)∆x = µe − P(u)x • P(u)⁻¹s. (26)
The appropriate choices of u that lead to unique search directions from the above system are called the commutative class of search directions (see, e.g., [22]). In this paper, we consider the so-called NT-scaling scheme; the resulting direction is called the NT search direction. This scaling scheme was first proposed by Nesterov and Todd [17,18] for self-scaled cones and then adapted by Faybusovich [8,9] for symmetric cones.
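For the semidefinite cone, the NT scaling point can be made explicit: it is the unique w ∈ intK with P(w)s = x, which for symmetric matrices reads wsw = x, and it admits the well-known closed form used below. The sketch is our illustration only:

```python
import numpy as np

def sym_sqrt(m):
    # symmetric square root of a symmetric positive definite matrix
    lam, Q = np.linalg.eigh(m)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

rng = np.random.default_rng(3)
def random_pd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)   # diagonally shifted Gram matrix: PD

x, s = random_pd(3), random_pd(3)
xh = sym_sqrt(x)
# closed form: w = x^{1/2} (x^{1/2} s x^{1/2})^{-1/2} x^{1/2}
w = xh @ np.linalg.inv(sym_sqrt(xh @ s @ xh)) @ xh

print(np.allclose(w @ s @ w, x))  # True: P(w)s = x, i.e. w s w = x
```

The verification is a one-line algebraic identity: substituting the closed form into wsw collapses the inner factor x^{1/2}sx^{1/2} against its inverse square roots, leaving x.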

Moreover, let w denote the NT-scaling point of x and s, i.e., the unique point w ∈ intK such that P(w)s = x, and define the variance vector
v := P(w)^{−1/2}x/√µ = P(w)^{1/2}s/√µ, (27)
and the scaled search directions
d_x := P(w)^{−1/2}∆x/√µ, d_s := P(w)^{1/2}∆s/√µ. (28)
With these, the system (26) is further simplified to
Ā d_x = 0, Ā^*(∆y/√µ) + d_s = 0, d_x + d_s = v^{−1} − v, (29)
where Ā := A P(w)^{1/2}. From the first two equations of the system (29), one can easily verify that the scaled search directions d_x and d_s are orthogonal with respect to the trace inner product, i.e., ⟨d_x, d_s⟩ = 0. This implies that ∆x and ∆s are also orthogonal, i.e., ⟨∆x, ∆s⟩ = 0. As a consequence, we have the important property that, after a full NT-step, the duality gap assumes the same value as at the µ-centers, namely rµ.
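These identities are easy to verify numerically in the LO special case, where K is the non-negative orthant, P(w) acts componentwise with w = √(x/s), and v = √(xs/µ). The orthogonal split of v⁻¹ − v into a null-space part d_x and a row-space part d_s below is our own illustration of the scaled system:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2, 5
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)
s = rng.uniform(0.5, 2.0, n)
mu = x @ s / n
v = np.sqrt(x * s / mu)                # variance vector (LO case)
w = np.sqrt(x / s)                     # NT scaling point (componentwise)

Abar = A * w                           # scaled constraint matrix
rhs = 1 / v - v
# d_x in null(Abar), d_s in row(Abar): orthogonal decomposition of rhs
proj = Abar.T @ np.linalg.solve(Abar @ Abar.T, Abar @ rhs)
d_s, d_x = proj, rhs - proj

x_new = x + np.sqrt(mu) * w * d_x      # full NT step
s_new = s + np.sqrt(mu) / w * d_s

print(np.isclose(d_x @ d_s, 0.0))         # scaled directions are orthogonal
print(np.isclose(x_new @ s_new, n * mu))  # duality gap after the step is r*mu
```

The last check mirrors the argument in the text: ⟨x⁺, s⁺⟩ = µ(∥v∥² + ⟨v, v⁻¹ − v⟩ + ⟨d_x, d_s⟩) = µ·r once the orthogonality and the third equation of (29) are used.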

Lemma 6 (Lemma 3.4 in [11])
After a full NT-step, the duality gap is given by ⟨x⁺, s⁺⟩ = rµ. To measure the distance of an iterate from the corresponding µ-center, a norm-based proximity measure δ(x, s; µ) is introduced: δ(x, s; µ) := δ(v) := (1/2)∥v − v^{−1}∥_F. One can easily verify that δ(v) = 0 ⟺ v = e ⟺ x • s = µe, which implies that the value of δ(v) can indeed be considered a measure of the distance between the given iterate and the corresponding µ-center. It is crucial to investigate the effect of a full NT-step to the target point (x(µ), y(µ), s(µ)) on the proximity measure δ(x, s; µ). For this purpose, Wang et al. [25] established a sharper quadratic convergence result than the one mentioned in [11]. Their derivation is based on a generalization of Theorem II.52 in [21] for LO. This leads to a wider quadratic convergence neighborhood of the central path for the algorithm than the one used in [11].
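In the LO case the variance vector is v = √(xs/µ) componentwise, so δ can be computed directly. The toy values below (ours) show δ vanishing exactly at the µ-center and growing away from it:

```python
import numpy as np

def delta(x, s, mu):
    # proximity measure delta(x, s; mu) = 0.5 * ||v - v^{-1}||, LO case
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1 / v)

x = np.array([1.0, 2.0, 0.5])
s = np.array([3.0, 1.5, 6.0])          # x * s = 3e, so (x, s) is the 3-center
print(delta(x, s, 3.0))                # 0.0: iterate sits exactly at the mu-center
print(delta(x, s, 1.0))                # approx 1.0: positive away from the mu-center
```

Since x • s = 3e here, the pair lies on the central path, and δ(x, s; µ) = 0 precisely for µ = 3.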

Corollary 4
The following theorem provides an upper bound for the total number of the iterations produced by the full-NT step feasible IPM.
Thus, the feasible algorithm is well defined, globally convergent, and achieves quadratic convergence of the full NT-steps in the wider neighborhood, while still maintaining the best iteration bound known for these types of methods, namely O(√r log(rµ⁰/ε)).

Full NT-Step Infeasible IPM
It is a well-known fact that finding a strictly feasible starting point may be difficult. Thus, an infeasible IPM that does not require a feasible starting point may be a good alternative. First, a brief outline of the infeasible algorithm is presented. Next, we concentrate on the convergence and complexity analysis of the algorithm. The method is similar to the IPM presented in [11], however, with a wider neighborhood and larger steps, which impacts the convergence and complexity analysis. Allowing larger steps at each iteration, while still maintaining the best known iteration bound for these types of methods and retaining quadratic local convergence of the proximity measure at each iteration, are additional advantages of the method presented in this paper.

An Outline of the Full NT-Step Infeasible IPM
In what follows, we assume that the SO problem has an optimal solution (x*, s*) with vanishing duality gap, i.e., ⟨x*, s*⟩ = 0. Furthermore, we choose the starting point x⁰ = s⁰ = ζe, y⁰ = 0, and µ⁰ = ζ², where ζ > 0 is a constant such that x* + s* ≼_K ζe. For ν ∈ (0, 1], the perturbed KKT system
b − Ax = ν(b − Ax⁰), x ∈ K, c − A^*y − s = ν(c − A^*y⁰ − s⁰), s ∈ K, x • s = µe, (36)
has a unique solution (x(µ, ν), y(µ, ν), s(µ, ν)) for every µ > 0, which is called a µ-center of the perturbed problems (SOP_ν) and (SOD_ν). Hence, the central paths of (SOP_ν) and (SOD_ν) exist. The main idea of the infeasible algorithm is to simultaneously improve feasibility by reducing ν and optimality by reducing µ, while keeping the iterates in a certain neighborhood of the central paths of (SOP_ν) and (SOD_ν).
The outline of one iteration of the infeasible algorithm is as follows. Suppose that for some ν ∈ (0, 1] we have an iterate (x, y, s) satisfying the feasibility condition, i.e., the first two equations of the system (36) for µ = νµ⁰, and such that tr(x • s) = rµ and δ(x, s; µ) ≤ τ. Thus, we start with an iterate in the τ-neighborhood of the central path of (SOP_ν) and (SOD_ν) that targets the µ-center on that central path. The goal is to obtain a new iterate (x⁺, y⁺, s⁺) in the τ-neighborhood of the central path of the new pair of problems (SOP_{ν⁺}) and (SOD_{ν⁺}), where both ν and µ are reduced by a barrier parameter θ ∈ (0, 1), i.e., ν⁺ = (1 − θ)ν and µ⁺ = (1 − θ)µ = ν⁺µ⁰. Hence, (x⁺, y⁺, s⁺) should satisfy the first two equations of the system (36), with ν replaced by ν⁺ and µ by µ⁺, and be such that tr(x⁺ • s⁺) = rµ⁺ and δ(x⁺, s⁺; µ⁺) ≤ τ. The calculation of the new iterate is achieved in two phases: a feasibility phase, where one feasibility step is taken, and a centering phase, where a few centering steps are performed. The feasibility step serves to get an iterate (x^f, y^f, s^f) that is strictly feasible for (SOP_{ν⁺}) and (SOD_{ν⁺}) and belongs to the quadratic convergence neighborhood with respect to the µ⁺-center of (SOP_{ν⁺}) and (SOD_{ν⁺}). However, (x^f, y^f, s^f) may not be in the τ-neighborhood of the µ⁺-center; therefore, several centering steps may need to be performed to get inside the τ-neighborhood.
Note that after each iteration the residuals and the duality gap are reduced by the factor (1 − θ). The algorithm stops when we obtain an iterate for which the norm of the residuals and the duality gap do not exceed the accuracy parameter ε. This iterate is called an ε-approximate optimal solution for (SOP ) and (SOD).
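The loop structure above can be made concrete in the LO special case (K the non-negative orthant), where the Newton systems are small linear solves. The sketch below uses invented toy data and θ = 1/(4n); it is a schematic illustration of the feasibility-step/centering-step pattern, not the SO algorithm itself. By construction, the residuals shrink by exactly the factor (1 − θ) per main iteration, and the duality gap equals nµ after the centering steps.

```python
import numpy as np

def newton_step(A, x, s, rp, rd, rc):
    # solve: A dx = rp;  A^T dy + ds = rd;  s*dx + x*ds = rc  (LO case)
    M = (A * (x / s)) @ A.T
    dy = np.linalg.solve(M, rp - A @ ((rc - x * rd) / s))
    ds = rd - A.T @ dy
    dx = (rc - x * ds) / s
    return dx, dy, ds

# toy LP data (invented): min c^T x, A x = b, x >= 0
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
c = np.array([1.0, 2.0, 1.0])
n, zeta = 3, 10.0

x = np.full(n, zeta); s = np.full(n, zeta); y = np.zeros(2)
mu, nu, theta = zeta ** 2, 1.0, 1 / (4 * n)
rp0, rd0 = b - A @ x, c - A.T @ y - s   # initial residuals

for _ in range(60):                     # main iterations
    mu_next = (1 - theta) * mu
    # feasibility step: restore a theta-fraction of feasibility, target mu+
    dx, dy, ds = newton_step(A, x, s, theta * nu * rp0, theta * nu * rd0,
                             mu_next - x * s)
    x, y, s = x + dx, y + dy, s + ds
    nu, mu = (1 - theta) * nu, mu_next
    for _ in range(4):                  # centering steps toward the mu-center
        dx, dy, ds = newton_step(A, x, s, np.zeros(2), np.zeros(n), mu - x * s)
        x, y, s = x + dx, y + dy, s + ds

print(np.isclose(np.linalg.norm(b - A @ x), nu * np.linalg.norm(rp0)))  # residuals
print(np.isclose(x @ s, n * mu))                                        # gap = n*mu
```

The two printed checks are exact algebraic identities of the scheme: the first two Newton equations shrink the residuals linearly, and the centering directions are orthogonal, so each centering step restores the gap to exactly nµ.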
The feasibility step is obtained by taking a full step with the NT-search directions (∆^f x, ∆^f y, ∆^f s) calculated from the following Newton system:
A∆^f x = θν r_p⁰, A^*∆^f y + ∆^f s = θν r_d⁰, P(u)x • P(u)⁻¹∆^f s + P(u)⁻¹s • P(u)∆^f x = (1 − θ)µe − P(u)x • P(u)⁻¹s, (38)
where r_p⁰ = b − Ax⁰ and r_d⁰ = c − A^*y⁰ − s⁰ are the initial residuals. One may easily verify that (x^f, y^f, s^f) satisfies the first two equations of the system (36), with ν replaced by ν⁺ and µ by µ⁺. The third equation indicates that the µ⁺-center of (SOP_{ν⁺}) and (SOD_{ν⁺}) is targeted. Targeting the µ⁺-center rather than the µ-center contributes to the efficiency of the algorithm. The system (38) defines the feasibility step uniquely, since the coefficient matrix of the resulting system is exactly the same as in the feasible case.
Similarly to the feasible case, given the variance vector v defined by (27) and appropriately scaled search directions d^f_x and d^f_s, the system (38) is reduced to a scaled form analogous to (29). Hence, x^f = x + ∆^f x = √µ P(w)^{1/2}(v + d^f_x) and s^f = s + ∆^f s = √µ P(w)^{−1/2}(v + d^f_s). Since it will be shown that (x^f, y^f, s^f) is strictly feasible and, moreover, in the quadratic convergence neighborhood of the µ⁺-center of (SOP_{ν⁺}) and (SOD_{ν⁺}), it is possible to take a few centering steps to get the new iterate into the desired τ-neighborhood of the µ⁺-center. The centering steps are obtained by taking full steps with NT-search directions calculated from the Newton system that is the same as in the feasible case, (26), or in the scaled form, (29).
The above outline is summarized in Fig. 1, which describes a generic full NT-step infeasible IPM.

Analysis of the Full-NT Step Infeasible IPM
The analysis of the infeasible algorithm is more involved than in the feasible case. The main reason for this is that the scaled search directions d^f_x and d^f_s are not (necessarily) orthogonal with respect to the trace inner product. We omit most parts of the analysis that are unchanged from the one presented in [11] and emphasize the parts where there are differences. It is shown that the feasibility steps can be taken in the wider quadratic convergence neighborhood of the central path developed in the feasible case. Feasibility Step. The lemma below provides a sufficient condition for the strict feasibility of the feasibility step (x^f, y^f, s^f).

Lemma 8 (Lemma 4.2 in [11]) The feasibility step
Thus, the feasibility of (x^f, y^f, s^f) highly depends on the eigenvalues of the vector d^f_x • d^f_s. In order to measure the distance from (x^f, y^f, s^f) to the µ⁺-center, we need an upper bound on the proximity measure δ(x^f, s^f; µ⁺), which for simplicity is also denoted δ(v^f), where v^f is the variance vector defined by (27) for x^f, s^f, and µ⁺.
The following lemma provides an upper bound for δ(v f ).

Lemma 9 (Lemma 4.4 in [11])
The following lemma gives an important relationship between the infinity norm of the vector of eigenvalues of d_x • d_s and the Frobenius norms of d_x and d_s.

Lemma 10
One has

Proof
We have the following derivation: The first inequality follows from the definitions of the infinity norm for vectors and the Frobenius norm for an element of V. The second inequality follows from Lemma 2.16 in [11], the third inequality follows from the triangle inequality for the Frobenius norm, and the last inequality follows from Lemma 2.12 in [11]. This completes the proof of the lemma. Substitution of the two inequalities of Lemma 10 into the inequality of Lemma 9 yields an upper bound on δ(v^f). Thus, the task of finding an upper bound on δ(v^f) reduces to finding an upper bound on ∥d^f_x∥²_F + ∥d^f_s∥²_F. After a careful and somewhat involved analysis, details of which are omitted and can be found in [11], an upper bound is derived in terms of δ := δ(v) and ρ(δ) := δ + √(1 + δ²). Thus, the upper bound essentially depends on the barrier parameter θ and the proximity measure δ of the old iterate (x, y, s), which is a desired result, since we want to connect the new proximity measure with the old one.
In what follows, we want to choose θ, 0 < θ < 1, as large as possible and such that (x^f, y^f, s^f) lies in the quadratic convergence neighborhood with respect to the µ⁺-center of the perturbed problems (SOP_{ν⁺}) and (SOD_{ν⁺}). As shown in the feasible case, this neighborhood can be extended to δ(v^f) ≤ 1/2^{1/4}, which leads to the inequality (45). Substituting (43) into the inequality (45), we obtain the admissible values of the parameters θ and τ. Lemma 8 then implies that, with this choice of θ and τ, (x^f, y^f, s^f) is indeed strictly feasible. Centering Steps. After the feasibility step, we perform centering steps in order to get an iterate (x⁺, y⁺, s⁺) that is in the τ-neighborhood of the µ⁺-center, i.e., satisfies tr(x⁺ • s⁺) = rµ⁺ and δ(x⁺, s⁺; µ⁺) ≤ τ. Using Corollary 3, the required number of centering steps can easily be obtained. Indeed, since (x^f, y^f, s^f) is in the quadratic convergence neighborhood of the µ⁺-center, i.e., δ = δ(x^f, s^f; µ⁺) ≤ 1/2^{1/4}, after k centering steps we will have an iterate (x⁺, y⁺, s⁺) that is still feasible for (SOP_{ν⁺}) and (SOD_{ν⁺}) and satisfies δ(x⁺, s⁺; µ⁺) ≤ (1/2^{1/4})^{2^k}. Thus, δ(x⁺, s⁺; µ⁺) ≤ τ will be obtained after at most log₂(log τ / log(1/2^{1/4})) centering steps. Substituting τ = 1/16 into the above expression leads to k ≥ log₂ 16 = 4. Hence, at most four centering steps are needed to get the iterate (x⁺, y⁺, s⁺) that satisfies δ(x⁺, s⁺; µ⁺) ≤ τ, i.e., the iterate that is in the τ-neighborhood of the µ⁺-center again. Iteration Bound. To summarize, each main iteration consists of at most five inner iterations: one feasibility step and at most four centering steps. In each main iteration, both the duality gap and the norms of the residual vectors are reduced by the factor (1 − θ).
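The count of four centering steps follows from the quadratic decay δ_k ≤ (2^{−1/4})^{2^k} = 2^{−2^k/4}: reaching τ = 1/16 = 2^{−4} requires 2^k ≥ 16. The trivial check below uses integer comparisons to sidestep floating-point rounding:

```python
# delta after k full centering steps: delta_k <= 2 ** (-(2 ** k) / 4).
# Requiring delta_k <= tau = 1/16 = 2 ** (-4) means 2 ** k >= 16, i.e. k >= 4.
k = 0
while 2 ** k < 16:
    k += 1
print(k)  # 4: at most four centering steps are needed

# float sanity check of the same bound
assert (2 ** -0.25) ** (2 ** k) <= 1 / 16 + 1e-12
```

The same arithmetic with the three centering steps of [11] corresponds to the smaller neighborhood used there, which is how the constant in Remark 2 grows from 16 to 20.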
Hence, using tr(x⁰ • s⁰) = rζ², the total number of main iterations is bounded above by (1/θ) log(max{rζ², ∥r_p⁰∥_F, ∥r_d⁰∥_F}/ε). Since θ = 1/(4r) and at most five inner iterations per main iteration are needed, the main result can be stated in the following theorem.
In conclusion, the infeasible algorithm in Fig. 1 is well defined, globally convergent, and achieves quadratic convergence of the full NT-feasibility steps in the wider neighborhood of the central path, while still maintaining the best iteration bound known for these types of methods, namely O(r log(max{rζ², ∥r_p⁰∥_F, ∥r_d⁰∥_F}/ε)).

Remark 1
Similarly to LO, the iteration bound in Theorem 6 is derived under the assumption that there exists an optimal solution pair (x*, y*, s*) of (SOP) and (SOD) with vanishing duality gap and satisfying x* + s* ≼_K ζe. If, at some main iteration during the course of the algorithm, the proximity measure δ after the feasibility step exceeds 1/2^{1/4}, then the above assumption does not hold. It may happen that the value of ζ has been chosen too small; in this case, one might run the algorithm once more with a larger value of ζ. If this does not help, then eventually one should conclude that (SOP) and/or (SOD) have no optimal solutions at all, or that they have optimal solutions with a positive duality gap.

Remark 2
In [11], the number of centering steps per main iteration is three. In our paper, the price to pay for expanding the quadratic convergence neighborhood of the central path is a possible additional centering step, which slightly increases the constant in the upper bound on the total number of inner iterations from 16 to 20; however, this does not change the order of magnitude of the required number of iterations, which still matches the best-known iteration bound for the infeasible algorithms mentioned above. It is also worth mentioning that, in practice, all four centering steps may not always be needed; very often only one or two suffice.

Concluding remarks
In this paper, an infeasible version of the full NT-step IPM for SO in a wider neighborhood of the central path than the one in [11] is presented, and its convergence analysis is given. The wider quadratic convergence neighborhood of the central path, characterized by 1/2^{1/4}, is carried over from the feasible case and applied to the feasibility steps of the infeasible algorithm, resulting in larger steps. However, despite the full NT-steps being taken in the wider neighborhood of the central path, the best complexity known for infeasible algorithms is still maintained.
Future research is planned in two directions. The first direction is implementation and numerical testing of the method on a set of conic CTA problems as well as other conic problems. The second direction is theoretical and involves the generalization of this IPM to other optimization problems such as Linear Complementarity Problems over symmetric cones.