A new smoothing method for nonlinear complementarity problems involving P0-functions

In this paper, we present a family of smoothing methods to solve nonlinear complementarity problems (NCPs) involving P0-functions. Several regularization or approximation techniques already exist, such as the Fischer-Burmeister approach, interior-point methods (IPMs), or smoothing methods. All the corresponding methods solve a sequence of nonlinear systems of equations and depend on parameters that are difficult to drive to zero. The main novelty of our approach is to consider the smoothing parameters as variables that converge to zero by themselves. We do not need any complicated updating strategy and thus obtain nonparametric algorithms. We prove some global and local convergence results and present several numerical experiments, comparisons, and applications that show the efficiency of our approach.


Introduction
The nonlinear complementarity problem (NCP) consists in finding x ∈ R^n satisfying
x ≥ 0, F(x) ≥ 0 and x^T F(x) = 0, (1)
where F : R^n → R^n. When F is linear, problem (1) reduces to a linear complementarity problem (LCP). NCPs arise in many practical applications; for example, the Karush-Kuhn-Tucker (KKT) systems of mathematical programming problems, economic equilibria, and engineering design problems can be formulated as NCPs (see, for instance, [11,20,29]). Different concepts have been developed to study and solve these problems: reformulation as a system of nonlinear equations or as a minimization problem (see [12,15,23,27,30,31,32]). Recently, there has been strong interest in equation reformulation methods for solving NCPs. One of the most effective methods is to transform the NCP into a semi-smooth equation (via NCP functions) and to solve it using semi-smooth Newton methods. The most well-known NCP functions are the Fischer-Burmeister function introduced in [13] and the min function studied by Kanzow, Yamashita and Fukushima [22]. Another well-known class of algorithms corresponds to smoothing methods. The main idea of smoothing approaches is to approximate or regularize the NCP to obtain smooth equations depending on some parameter (see, for example, [7,24,25,33,39]).

In this paper, we present a smoothing approximation scheme to solve (1). We replace (1) by a sequence of smoothed systems of the form
G_r(x, F(x)) := (G_r(x_i, F_i(x)))_{i=1,...,n} := rψ^{-1}(ψ(x/r) + ψ(F(x)/r)) = 0. (3)
All the functions and parameters involved in (3) will be made explicit later. Depending on the context, several functions (θ, ψ, G_r, ...) are applied to reals or to vectors. When applied to vectors, we consider that they act component-wise.
The novelty of our approach is that we do not need any complicated strategy to update the regularization parameter r, since we consider it as a new variable. To solve the smoothed system of equations, we use a standard Newton-like method. Without requiring a strict complementarity assumption at the solution of equation (1), we prove that the proposed algorithm is well defined and globally and superlinearly convergent. At the end of the paper, we present numerical results to demonstrate the effectiveness of the algorithm. This paper is organized as follows: some definitions are introduced in Section 2. We present our approximation and formulation in Section 3. In Section 4, we discuss our approach and scheme to solve (1). The convergence properties of the algorithm are given in Section 5. Section 6 is devoted to the numerical results, with a comparison of our method with other approaches. Finally, we conclude the paper.

Preliminaries and problem setting
Consider the NCP, which is to find a solution of the system
x ≥ 0, F(x) ≥ 0 and x^T F(x) = 0, (4)
where F : R^n −→ R^n is a continuous function satisfying some additional assumptions to be specified later. From (4), we obtain the equivalent component-wise formulation
x ≥ 0, F(x) ≥ 0 and x_i F_i(x) = 0, i = 1, 2, ..., n.
Or, equivalently, x.F(x) = 0, where "." stands for the Hadamard product. This provides an explanation for the term "complementarity": for all i = 1, 2, ..., n, x_i and F_i(x) are complementary in the sense that if one of them is positive, then the other must be zero. A particular and important class of NCPs is the LCP class defined below.

Definition 2.1
When F is an affine function, F(x) = Mx + q with M ∈ R^{n×n} and q ∈ R^n, the corresponding NCP is called an LCP. So the problem is to find x ∈ R^n such that x ≥ 0, Mx + q ≥ 0 and x^T(Mx + q) = 0.
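As a concrete sanity check, the following snippet verifies the three defining conditions on a small illustrative LCP; the matrix M, the vector q, and the candidate solution are our own choices, not taken from the paper.

```python
import numpy as np

# Illustrative LCP: find x >= 0 with w = M x + q >= 0 and x^T w = 0.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # a P-matrix (all principal minors positive)
q = np.array([-3.0, -3.0])

x = np.array([1.0, 1.0])          # candidate solution
w = M @ x + q                     # complementary variable, here w = (0, 0)

assert np.all(x >= 0) and np.all(w >= 0)
assert abs(x @ w) < 1e-12                  # complementarity x^T w = 0
assert np.all(np.minimum(x, w) == 0)       # min-reformulation residual vanishes
print("x =", x, "w =", w)
```

The last assertion illustrates the equivalent componentwise statement min(x_i, w_i) = 0 used later in the smoothing construction.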
To solve NCPs, there are essentially three different classes of methods: equation-based methods (smoothing), merit functions, and projection-type methods. Our goal in this paper is to present new and very simple smoothing and approximation schemes to solve NCPs and to produce efficient numerical methods. In our approach, we do not need any complicated strategy to update the smoothing parameter since we consider it as a new variable. First, let us introduce the usual assumptions on F and the ones that will be used in this paper. A well-known and studied situation corresponds to monotone functions F, and several methods and algorithms have been developed in this case. Almost all solution methods consider at least the following important and standard condition on the mapping F (monotonicity): we recall that F : R^n → R^n is said to be monotone if, for any (x, y) ∈ R^n × R^n,
(x − y)^T (F(x) − F(y)) ≥ 0.
In this work, we will consider a weaker assumption on F, namely that F is a P0-function, to prove the convergence of our approach. We recall the following definitions of P0- and P-functions. We say that F : R^n → R^n is a P0-function (respectively, a P-function) if, for all x, y ∈ R^n with x ≠ y, there exists an index i0 such that
x_{i0} ≠ y_{i0} and (x_{i0} − y_{i0})[F_{i0}(x) − F_{i0}(y)] ≥ 0 (respectively, > 0).
It is important to notice that the index i0 can depend on x and y. A matrix is called a P0-matrix (resp. P-matrix) if all its principal minors are nonnegative (resp. positive). Note that F is a P0-function if and only if ∇F(x) is a P0-matrix for all x ∈ R^n. If ∇F(x) is a P-matrix for all x ∈ R^n, then F is a P-function. However, the converse is not necessarily true.

Stat., Optim. Inf. Comput. Vol. 10, September 2022. E-H. OSMANI, M. HADDOU, N. BENSALEM AND L. ABDALLAH

Smoothing approximation functions
In this section, we present a new smoothing function for the NCP. A function ϕ : R^2 → R is called an NCP function if it satisfies
ϕ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.
An example of such a function is ϕ_min : R^2 → R, ϕ_min(a, b) = min{a, b}.
Then, ϕ_min is an NCP function. Problem (1) is then equivalent to the following system of nonlinear equations:
H(x) := min(x, F(x)) = 0. (6)
This system is clearly nonsmooth; classical Newton-like methods cannot be used to solve it. To overcome this difficulty, there exist several semi-smooth approaches, but these techniques may present difficulties to converge. An efficient alternative is to approximate (6) by a smooth system. The following subsection introduces some smoothing functions and establishes different properties that will be useful for our study.

θ-smoothing
In this section, we elaborate on how such a regularized function can actually be built up from the function (6). Our smoothing technique is based on the continuous approximation of a more elementary object, namely the step function. The step function is understood here to be the function ℑ : R+ → {0, 1} defined by ℑ(t) = 1 if t > 0 and ℑ(0) = 0. As an indicator of positive arguments t > 0 over R+, the step function ℑ "discriminates" the argument t = 0 by assigning a zero value to it. The price to be paid for this sharp detection is the discontinuity of ℑ at t = 0. We wish to have a regularization of ℑ, that is, a family of functions Ĩ(·, r) such that
• Ĩ(·, r) is a smooth function of t ≥ 0, for all r > 0;
• Ĩ is continuous with respect to r, in some functional sense;
• lim_{r↓0} Ĩ(·, r) = ℑ(·), in some functional sense. (8)
To obtain such a family, we follow the methodology developed by Haddou and his coauthors [17,2], the key ingredient of which is a smoothing function. This notion turned out to be a versatile tool in a wide variety of pure and applied mathematical problems [3,18,19,34]. We begin with a "father" function, from which all other regularized functions will be generated: a θ-smoothing function is a continuous, nondecreasing function θ : R → (−∞, 1), concave on R+, satisfying
θ(0) = 0 and lim_{t→+∞} θ(t) = 1. (9)
The two most common examples of smoothing functions are:
1. the rational function θ1 : R → (−∞, 1) defined by θ1(t) = t/(t + 1) for t ≥ 0; (10)
2. the exponential function θ2 : R → (−∞, 1) defined by θ2(t) = 1 − e^{−t} for t ≥ 0. (11)
A more general "recipe" to build such functions is to consider a nonincreasing probability density function f : R+ → R+ and then take the corresponding cumulative distribution function on R+, i.e., θ(t) = ∫_0^t f(s) ds for t ≥ 0; we complete the definition of θ on R− by θ(t) = t to get a continuous, nondecreasing function. The nonincreasing assumption on f gives the concavity of θ. Once a favorite θ-smoothing function has been selected, the next step is to dilate or compress it in order to produce a family of regularized functions for the step function ℑ.
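The two father functions can be checked numerically. The closed forms below, θ1(t) = t/(t+1) and θ2(t) = 1 − exp(−t) on R+ completed by θ(t) = t on R−, are our reading of the two examples, consistent with ψ(t) = e^{−t} used later for θ2.

```python
import numpy as np

def theta1(t):
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, t / (np.abs(t) + 1.0), t)   # t/(t+1) on R+, t on R-

def theta2(t):
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, 1.0 - np.exp(-np.maximum(t, 0.0)), t)

grid = np.linspace(0.0, 30.0, 300)
for theta in (theta1, theta2):
    v = theta(grid)
    assert theta(0.0) == 0.0             # theta(0) = 0
    assert np.all(np.diff(v) >= 0.0)     # nondecreasing on R+
    assert np.all(v <= 1.0)              # bounded above by 1
print(float(theta1(999.0)), float(theta2(30.0)))  # both values approach 1
```

The assertions verify the defining properties (9) on a sample grid.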
Definition 3.2 (θ-smoothing family). Let θ be a θ-smoothing function. The family of functions {θ_r : θ_r(t) = θ(t/r), r > 0} is said to be the θ-smoothing family associated with θ.
Obviously, θ_r is a smooth function of t ≥ 0 for all r > 0. It is also continuous with respect to r at each fixed t ≥ 0. From the defining properties (9), it can be readily shown that
lim_{r↓0} θ_r(t) = ℑ(t) for all t ≥ 0. (14)
In other words, ℑ is the limit of θ_r in the sense of pointwise convergence. Thus, {Ĩ(·, r) = θ_r, r > 0} is a good family of regularized functions in the sense of (8). Associated with the two examples (10)-(11) are:
1. the rational family θ1_r : R → (−∞, 1) defined by θ1_r(t) = t/(t + r) for t ≥ 0; (15)
2. the exponential family θ2_r : R → (−∞, 1) defined by θ2_r(t) = 1 − e^{−t/r} for t ≥ 0. (16)
Figure 1 displays the two families (15)-(16) for a few values of the parameter r. We can see that the smaller r is, the steeper the slope at t = 0 and the closer the function is to ℑ.

3.1.1. θ-smoothing of a complementarity condition

A θ-smoothing function paves the way for a smooth approximation of a complementarity condition. Let (x, z) ∈ R^2 be two scalars such that
x ≥ 0, z ≥ 0 and xz = 0, (17)
that is, x and z satisfy a complementarity condition. In the (x, z)-plane, the set of points obeying (17) is the union of the two semi-axes {x ≥ 0, z = 0} and {x = 0, z ≥ 0}. Visually, the nonsmoothness of (17) is manifested by the "kink" at the corner (x, z) = (0, 0). It is also clear that the corresponding set is nonconvex. We consider two possible smooth approximations of (17), depending on how it is rewritten in terms of the step function ℑ.
Lemma 3.1 [17] Assuming x ≥ 0 and z ≥ 0, we have the equivalence
xz = 0 ⟺ ℑ(x) + ℑ(z) ≤ 1. (18)
The equivalence (18) suggests that we impose, for r > 0,
θ_r(x) + θ_r(z) = 1 (19)
as a smooth approximation of (17). Replacing ℑ by θ_r in (18) is logical. Replacing "≤" by "=" when passing from (18) to (19) may seem a bold move, but it is motivated by the fact that we want an equality to be mounted into the system of equations. Sometimes an additional assumption (strict complementarity, x + z > 0) is made to get such equations.
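The smoothed equation (19) can be probed on scalar pairs. The closed form θ1_r(t) = t/(t + r) on R+ is our assumption, consistent with the family displayed in Figure 1.

```python
# theta_r(t) = theta(t/r); for theta1 this gives theta1_r(t) = t/(t + r) on R+.
def theta1_r(t, r):
    return t / (t + r)              # valid for t >= 0, r > 0

# On a complementary pair (x, z) = (2, 0), the sum tends to 1 as r -> 0:
for r in (1.0, 1e-2, 1e-4, 1e-6):
    print(r, theta1_r(2.0, r) + theta1_r(0.0, r))

# On a strictly positive pair (2, 3), the sum tends to 2, so the smoothed
# equation theta_r(x) + theta_r(z) = 1 forbids x_i > 0 and z_i > 0 jointly.
assert abs(theta1_r(2.0, 1e-6) + theta1_r(0.0, 1e-6) - 1.0) < 1e-5
assert theta1_r(2.0, 1e-6) + theta1_r(3.0, 1e-6) > 1.9
```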

A new smoothing function using θ-function
Our aim is to propose a large class of θ-functions for which the problems
x^(r) ≥ 0, F(x^(r)) ≥ 0 and θ_r(x^(r)) + θ_r(F(x^(r))) = 1, (20)
are well-posed and any limit point of (x^(r)), when r goes to 0, is a solution of the NCP. In the multidimensional case, the equation just above has to be interpreted as a system of n equations. Note that the relation (20) is symmetric in x and F(x). Thus, our problem can be seen as a fixed point problem for the function F_{r,θ}(x) obtained by rewriting equation (20). By symmetry of the equation (20), we also have the analogous relations with the roles of x and F(x) exchanged, but we shall not go in that direction. As in [18], we propose another way to approximate a solution of the NCP, as follows. Let ψ_r(t) = 1 − θ_r(t); with ψ = 1 − θ, the relation (20) is equivalent to the three following equalities:
ψ_r(x^(r)) + ψ_r(F(x^(r))) = 1, ψ(x^(r)/r) + ψ(F(x^(r))/r) = 1, rψ^{-1}(ψ(x^(r)/r) + ψ(F(x^(r))/r)) = 0.
For the sequel, we set for any x, y ∈ R^n and any r > 0,
G_r(x, y) := rψ^{-1}(ψ(x/r) + ψ(y/r)), (21)
where ψ : R → ]0, +∞[. First, we characterize the solutions (x, y) of G_r(x, y) = 0 when ψ satisfies some conditions independent of F. Let 0 < a < 1. We say that ψ satisfies condition (H_a) if there exists s_a > 0 such that
ψ(s) ≤ (1/2) ψ(as) for all s ≥ s_a,
or, equivalently, θ(s) ≥ (1 + θ(as))/2 for all s ≥ s_a. The condition (H_a) imposes that the decay of ψ(s) is under some uniform control for large s or, in terms of θ, that θ(s) should grow quickly enough, with some uniformity, for large s. Since ψ and θ are monotone, it is interesting to take a as large as possible in the condition (H_a), since (H_a) =⇒ (H_b) for b < a. Note that we can never take a = 1, because θ ≤ 1, unless θ is constant and equal to one for large s. But in some cases, a can be chosen arbitrarily close to 1; see for instance θ2.
One can obtain by simple calculations that:
1. For θ1, we have ψ(t) = 1/(1 + t), and the condition (H_a) is satisfied only for 0 < a < 1/2, with s_a = 1/(1 − 2a).
2. For θ2, we have ψ(t) = e^{−t}, and the condition (H_a) is satisfied for any 0 < a < 1, with s_a = ln(2)/(1 − a).
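The threshold for θ2 can be verified numerically: ψ(s) ≤ (1/2)ψ(as) with ψ(t) = e^{−t} holds exactly for s ≥ ln(2)/(1 − a).

```python
import numpy as np

# Check of condition (H_a) for psi(t) = e^{-t} (the theta2 case).
def holds(a, s):
    # small relative slack absorbs floating-point rounding at the threshold
    return np.exp(-s) <= 0.5 * np.exp(-a * s) * (1.0 + 1e-12)

for a in (0.3, 0.9, 0.99):
    s_a = np.log(2.0) / (1.0 - a)
    assert holds(a, s_a + 1e-9) and holds(a, s_a + 100.0)
    assert not holds(a, s_a - 1e-3)     # fails just below the threshold
print("condition (H_a) verified for psi(t) = exp(-t)")
```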
From now on, all the results use the function ψ; obviously, everything can easily be transposed to θ. We summarize here some important facts proved in [18]. The following lemma compares the function G_r defined in (21) to the min function and will be useful for the rest of our analysis.

Lemma 3.2
If ψ : R → ]0, +∞[ is an invertible nonincreasing function, then for any (s, t) ∈ R^2 and any r > 0,
G_r(s, t) ≤ min(s, t).
The next theorem shows how the condition (H_a) gives information about the behavior of G_r.
Theorem 3.1
If ψ satisfies the condition (H_a) for some a ∈ ]0, 1[, then for all s, t ∈ R,
G_r(s, t) = 0 =⇒ s > 0, t > 0 and min(s, t) ≤ s_a r.
For both the θ1_r and θ2_r examples, the assertion of Theorem 3.1 is satisfied. Indeed, direct computations lead to:
1. For s > 0 and t > 0 such that 1/s + 1/t ≤ 1/r, we have the explicit expression
G^1_r(s, t) = (st − r^2)/(s + t + 2r).
Note that the denominator is not zero when s, t are nonnegative, even when s = t = 0. In addition, when r goes to 0, G^1_r(s, t) converges to st/(s + t).
2. For any s, t ∈ R, we have the explicit expression
G^2_r(s, t) = −r ln(e^{−s/r} + e^{−t/r}).
Passing to the limit as r goes to 0, we conclude that lim_{r↘0} G^2_r(s, t) = min(s, t). Figure 2 illustrates the behaviour of G^2_r(x, −x).
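The two explicit expressions can be evaluated directly. The closed forms below follow our derivation from G_r(s,t) = rψ^{-1}(ψ(s/r) + ψ(t/r)) with ψ(u) = 1/(1+u) and ψ(u) = e^{−u}, respectively; G2 is coded in a numerically stable form.

```python
import numpy as np

def G1(s, t, r):                       # theta1 case: psi(u) = 1/(1+u)
    return (s * t - r * r) / (s + t + 2.0 * r)

def G2(s, t, r):                       # theta2 case: psi(u) = e^{-u}
    # stable rewriting of -r*log(exp(-s/r) + exp(-t/r))
    m, M = min(s, t), max(s, t)
    return m - r * np.log1p(np.exp(-(M - m) / r))

s, t = 2.0, 3.0
for r in (1e-1, 1e-3, 1e-6):
    assert G1(s, t, r) <= min(s, t)                     # both lie below the min
    assert G2(s, t, r) <= min(s, t)
    assert min(s, t) - G2(s, t, r) <= r * np.log(2.0) + 1e-15
print(G1(s, t, 1e-6), G2(s, t, 1e-6))   # G2 -> min(s, t) = 2; G1 -> st/(s+t)
```

Note that G2 converges to min(s, t), while G1 converges to st/(s + t); both vanish in the limit exactly when min(s, t) = 0, which is what the complementarity reformulation requires.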

Now, we focus on the case where ψ satisfies (H_a) for all a ∈ ]0, 1[ and state a stronger result (see [18]).

An approximate formulation
In this section, we present two new reformulations of the complementarity problem (4), corresponding to two approximation schemes.
3.3.1. Approximation of the NCP using the θ1_r- and θ2_r-functions

Using the θ_r-function, we regularize each complementarity constraint by considering, for each component, the smoothed equation θ_r(x_i) + θ_r(z_i) = 1, with z = F(x). This approximation yields the following formulation: we consider the family {H^r_θ(·), r > 0}, where H^r_θ is a regularized function of H defined in (6). (25)

Lemma 3.3
Let H^r_θ1(x, z) and H^r_θ2(x, z) be defined by (25), for θ1_r and θ2_r respectively. Then, for any (x, z) ∈ R^{2n}_+, the Jacobian involves two diagonal matrices D_a(x, z) and Q_k(x, z) = diag{l_1(x, z), ..., l_n(x, z)}. In order to study the nonsingularity of the Jacobian matrix of H^r_θ1(x, z) (resp. H^r_θ2(x, z)), we first state a basic but essential lemma.

Lemma 3.4
Let M ∈ R n×n be a P 0 -matrix. Then any matrix in the following form is nonsingular: where N s ∈ R n×n is a positive (negative) diagonal matrix, and N t ∈ R n×n is a nonnegative (non-positive) diagonal matrix.
Proof
Let v ∈ R^n be a vector in the kernel of such a matrix. One checks component-wise, using the sign conditions on N_s and N_t and the P0-property of M, that v = 0, so the matrix is nonsingular.

By Lemma 3.4, we can obtain a property of ∇H^r_θ1 and ∇H^r_θ2 when F is a P0-function.

Theorem 3.3
Let F be a P0-function. Then, for any r > 0 and any (x, z) ∈ R^{2n}_+, the Jacobian matrix ∇H^r_θ1(x, z) (resp. ∇H^r_θ2(x, z)) is nonsingular.

Proof
For all r > 0, it follows from Lemma 3.3 that the diagonal matrix D_a(x, z) (resp. Q_k(x, z)) is nonnegative; since F is a P0-function, ∇F(x) is a P0-matrix, and the nonsingularity of ∇H^r_θ1(x, z) (resp. ∇H^r_θ2(x, z)) then follows from Lemma 3.4.
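A quick numerical sanity check in the spirit of Lemma 3.4, whose matrix form is truncated above: a closely related standard fact is that N_s + N_t M is nonsingular whenever M is a P0-matrix, N_s a positive diagonal matrix and N_t a nonnegative diagonal matrix. The concrete M, N_s, N_t below are our own illustrative choices.

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [0.0,  0.0]])        # P0-matrix: principal minors 1, 0, det(M) = 0
Ns = np.diag([2.0, 3.0])           # positive diagonal
Nt = np.diag([1.0, 0.0])           # nonnegative diagonal (one zero entry)

A = Ns + Nt @ M                    # = [[3, -2], [0, 3]]
assert abs(np.linalg.det(A)) > 1e-12   # nonsingular although M is singular
print("det =", np.linalg.det(A))
```

Note that M itself is singular here; the positive diagonal N_s is what rescues invertibility, exactly the mechanism exploited in the proof of Theorem 3.3.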

New approach for solving nonlinear complementarity problems
In this section, we present the idea of our algorithms; we take inspiration from the well-known interior-point methods (IPMs) usually used in nonlinear programming. Even though we do not have any objective function to minimize, the regularization idea behind IPMs can be used to tackle the NCP. One can replace the original nonsmooth problem by a sequence of regularized problems
F(x) − z = 0, x.z = re, x ≥ 0, z ≥ 0, (26)
where r ≥ 0 is the smoothing parameter and e ∈ R^n is the vector whose components are all equal to 1. The Jacobian matrix of H_r with respect to X = (x, z) does not depend on r and can be denoted by ∇H_0(X), where Z (resp. X) denotes the diagonal matrix of z (resp. x).
The main difficulty in this approach is to drive r to 0. In [36], the authors propose a new technique where r is considered as a new variable.

When the parameter becomes a variable
In the system (26), the status of the parameter r is very distinct from that of the variable X. While X is computed "automatically" by a Newton iteration, r has to be updated "manually" in an ad hoc manner. Our goal is to find a strategy that decreases r during the iterations and ensures the nonnegativity of the variables. However, such a strategy must be adjusted whenever the model or its parameters are changed. To avoid this trouble, we consider r as an unknown of the system instead of a parameter, as in [36]. We feel that it would be judicious to incorporate the parameter r into the variables. Let us, therefore, consider the enlarged vector of unknowns X = (X, r) and then consider a system of 2n + 1 equations in X. To this end, let us remind ourselves that our ultimate goal is to solve H^0_θ1(X) = 0 (we restrict our choice of θ-function to θ_r(t) = θ1_r(t)), together with the inequalities x ≥ 0, z ≥ 0. Thus, it is natural to first consider the system obtained by appending the equation r = 0 to H^r_θ1(X) = 0. (31)
This construction turns out to be too naive. Indeed, if we start from some r_0 and solve the smooth system (31) by the smooth Newton method, since the last equation is linear, we end up with r_1 = 0 at the first iteration. Once the boundary of the interior region is reached, we are "stuck" there.
To prevent r from rushing to zero in just one iteration, we could set the last equation to
r^2 = 0. (32)
At this stage, system (32) is not yet fully adequate. Indeed, the last equation is decoupled from the others. Everything happens as if r follows a prefixed sequence, generated by the Newton iterates of the scalar equation r^2 = 0, regardless of X. It is desirable to couple r and X in a tighter way. In this respect, we advocate the last equation
r^2 + (1/2)‖x^-‖^2 + (1/2)‖z^-‖^2 = 0, (33)
where x^- is the vector of components x^-_i = min(x_i, 0), and similarly for z^-. This choice has the benefit of taking into account the nonnegativity conditions x ≥ 0 and z ≥ 0. Indeed, the last equation of (33) implies that, as long as r ≥ 0, we are ascertained that x^- = z^- = 0. This amounts to saying that x ≥ 0 and z ≥ 0. Should a component of x or z become negative during the iterations, this equation would contribute to "penalize" it. Since r is now considered a variable, and the scalar function t ↦ (1/2)|min(t, 0)|^2 is differentiable with derivative min(t, 0), the Jacobian matrix of H_θ can be computed explicitly; its last row is ((x^-)^T, (z^-)^T, 2r). If H_θ(X) = 0 where X ∈ R^{2n}_+ × R_+, we obtain r = 0 and x^- = z^- = 0. Hence, in this case, ∇_X H_θ(X) becomes singular, since det(∇_X H_θ(X)) = 0. To fix this issue, we add a small enough positive parameter ε to the last equation. We get
εr + r^2 + (1/2)‖x^-‖^2 + (1/2)‖z^-‖^2 = 0.
Hence, we define the following system:
H_θ(X) := (H^r_θ1(X); εr + r^2 + (1/2)‖x^-‖^2 + (1/2)‖z^-‖^2) = 0. (36)

Lemma 4.1
Let X ∈ Ξ̄ (the closure of the interior region Ξ), let r ∈ R and X = [X; r]^T. Then, det ∇H_θ(X) = (ε + 2r) det ∇H^r_θ1(X).
If r > − ε 2 , the two Jacobian matrices are singular or nonsingular at the same time.

Proof
Thanks to the assumption X ∈Ξ, we have x ≥ 0 and z ≥ 0, so that x − = z − = 0. Expanding the determinant of ∇H θ (X) with respect to the last row yields the desired result.
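The determinant identity of Lemma 4.1 can be checked numerically. In the sketch below, the θ1 case is taken in our reading (smoothed complementarity x.z = r^2 e, appended scalar equation εr + r^2 + (1/2)‖x^-‖^2 + (1/2)‖z^-‖^2); the linear map F and all numbers are illustrative assumptions.

```python
import numpy as np

def build(F_jac, x, z, r, eps):
    """Jacobian of H^r w.r.t. (x, z), and full (2n+1)x(2n+1) Jacobian of H_theta."""
    n = len(x)
    xm, zm = np.minimum(x, 0.0), np.minimum(z, 0.0)
    Jr = np.block([[F_jac, -np.eye(n)],
                   [np.diag(z), np.diag(x)]])     # d(F(x)-z), d(x*z - r^2 e)
    J = np.zeros((2*n + 1, 2*n + 1))
    J[:2*n, :2*n] = Jr
    J[n:2*n, -1] = -2.0 * r                       # d(x*z - r^2 e)/dr
    J[-1, :n], J[-1, n:2*n], J[-1, -1] = xm, zm, eps + 2.0*r
    return Jr, J

M = np.array([[2.0, 1.0], [1.0, 2.0]])            # Jacobian of an affine F
x, z = np.array([1.5, 0.7]), np.array([0.3, 1.1]) # interior point: x, z > 0
r, eps = 0.25, 1e-8
Jr, J = build(M, x, z, r, eps)

# For x, z >= 0 the last row is (0, ..., 0, eps + 2r); expanding the
# determinant along it reproduces Lemma 4.1's identity:
assert np.isclose(np.linalg.det(J), (eps + 2.0*r) * np.linalg.det(Jr))
print(np.linalg.det(J), (eps + 2.0*r) * np.linalg.det(Jr))
```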

Convergence
In this section, we propose a generic algorithm to solve NCPs and prove some convergence results. From now on, the enlarged equation (36) is selected as the reference system in the design of our new algorithm. The idea is simply to apply the standard Newton method to the smooth system (36). To enforce a global convergence behavior, we also recommend using a line search such as the Armijo backtracking technique. Our algorithm reads:
1. Choose an interior starting point X_0 = (x_0, z_0, r_0) and set k = 0.
2. If ‖H_θ(X_k)‖ is small enough, stop.
3. Compute the Newton direction d_k by solving ∇H_θ(X_k) d_k = −H_θ(X_k).
4. Compute a step length ζ_k by Armijo backtracking on the merit function.
5. Set X_{k+1} = X_k + ζ_k d_k and k ← k + 1. Go to step 2.
The merit function used in the line search is Θ(X) = (1/2)‖H_θ(X)‖^2. A detailed description of the nonparametric method is given in Algorithm 1. A few comments are in order:
• The initial point X_0 = (X_0, r_0) must be an interior point, namely X_0 > 0, and the initial parameter r_0 = ⟨x_0, z_0⟩/n has the correct order of magnitude.
• The Newton increment d_k is obtained by solving the linear system of step 3, provided that the Jacobian matrix is invertible; the increment for the parameter r is then the last component of d_k.
• There is no need to truncate the Newton direction d_k to preserve positivity of x_{k+1} and z_{k+1}, since nonnegativity is "guaranteed" at convergence. However, if we want all the iterates to be nonnegative, we need to carry out additional damping after step 4 (Armijo's line search).
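The loop above can be sketched as follows in the θ1 case, where (in our reading) the smoothed complementarity reduces to x.z = r^2 e and the appended scalar equation is εr + r^2 + (1/2)‖x^-‖^2 + (1/2)‖z^-‖^2 = 0. The test problem F(x) = Mx + q, the tolerances and ε are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def algorithm1(F, JF, x0, eps=1e-10, tol=1e-9, itmax=100):
    n = len(x0)
    z0 = F(x0)
    X = np.concatenate([x0, z0, [max(x0 @ z0 / n, 1e-2)]])  # r0 = <x0,z0>/n

    def H(X):
        x, z, r = X[:n], X[n:2*n], X[-1]
        xm, zm = np.minimum(x, 0.0), np.minimum(z, 0.0)
        return np.concatenate([
            F(x) - z,                                   # feasibility z = F(x)
            x * z - r**2,                               # smoothed complementarity
            [eps*r + r**2 + 0.5*(xm @ xm) + 0.5*(zm @ zm)],  # drives r -> 0
        ])

    def J(X):
        x, z, r = X[:n], X[n:2*n], X[-1]
        xm, zm = np.minimum(x, 0.0), np.minimum(z, 0.0)
        Jm = np.zeros((2*n + 1, 2*n + 1))
        Jm[:n, :n], Jm[:n, n:2*n] = JF(x), -np.eye(n)
        Jm[n:2*n, :n], Jm[n:2*n, n:2*n] = np.diag(z), np.diag(x)
        Jm[n:2*n, -1] = -2.0 * r
        Jm[-1, :n], Jm[-1, n:2*n], Jm[-1, -1] = xm, zm, eps + 2.0*r
        return Jm

    for _ in range(itmax):
        HX = H(X)
        if np.linalg.norm(HX) < tol:
            break
        d = np.linalg.solve(J(X), -HX)                  # step 3: Newton direction
        zeta, theta0 = 1.0, HX @ HX
        while (np.linalg.norm(H(X + zeta*d))**2 > (1 - 1e-4*zeta)*theta0
               and zeta > 1e-12):
            zeta *= 0.5                                 # step 4: Armijo backtracking
        X = X + zeta*d                                  # step 5: update
    return X[:n], X[n:2*n], X[-1]

# Illustrative monotone LCP: F(x) = M x + q, with solution x* = (1, 1).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
x, z, r = algorithm1(lambda x: M @ x + q, lambda x: M, np.array([2.0, 2.0]))
print(x, r)   # x close to (1, 1), r close to 0
```

Note how r is never updated "manually": its decrease is produced by the Newton step on the last equation, which behaves like r_{k+1} ≈ r_k/2 as long as the iterates stay nonnegative.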

Proposition 5.1
Let F be a continuously differentiable P0-function. Then, step 3 in Algorithm 1 is well-defined.

Proof
We know that for all k ≥ 0, r_k > 0, X_k > 0 and ε > 0. By Theorem 3.3, ∇H^{r_k}_θ1(X_k) is nonsingular; by Lemma 4.1, so is ∇H_θ(X_k), and step 3 of Algorithm 1 is well-defined.

Global convergence analysis
Definition 5.1 (Regular zero). Let X * ∈ R 2n+1 be a zero of H θ , that is, H θ (X * ) = 0. If the Jacobian matrix ∇ X H θ (X * ) is nonsingular, X * is said to be a regular zero of H θ .
The main interest of Algorithm 1 lies in the prospect of global convergence, as envisioned by the theory that we develop now. This global convergence theory is primarily based on the regularity of zeros (Definition 5.1). We reproduce a concise result that can be found in the book of Bonnans [5], in view of its importance for our algorithm. We will prove the global convergence of Algorithm 1. First, we show that every d ∈ ∆(X) is a descent direction of Θ at X.
Lemma 5.1 (see [37]) If X is not a solution of the NCP, i.e. Θ(X) > 0, then every d ∈ ∆(X) satisfies the descent condition for X, i.e., ∇Θ(X)^T d < 0.
Using the preceding results, we can prove the following global convergence theorem.

Theorem 5.1
Every limit point X* = (X*, r*) of a sequence {X_k} generated by Algorithm 1 corresponds to a solution of the NCP.

Proof
Since the sequence {Θ(X_k)} is nonnegative and decreases monotonically, it converges to some Θ* ≥ 0. We assume Θ* > 0. Let X* be an accumulation point of {X_k} and {X_k}_{k∈K} a subsequence converging to X*. Taking a further subsequence if necessary, we can assume without loss of generality that lim_{k→∞} d_k = d*, because ∆ is uniformly compact near X* and closed at X* (see [37]). Furthermore, by the closedness of ∆, we have
d* ∈ ∆(X*). (39)
It is obvious that {ζ_k τ ∇Θ(X_k)^T d_k} converges to 0. In order to prove ∇Θ(X_k)^T d_k → 0, we show that {ζ_k} is bounded away from 0. Suppose that there exists a subsequence such that ζ_k → 0. By the line search rule, we have
Θ(X_k + σ_k d_k) − Θ(X_k) > τ σ_k ∇Θ(X_k)^T d_k, (40)
where σ_k = ζ_k ρ^{j_k}. Since σ_k → 0, dividing both sides of (40) by σ_k and taking the limit yields
∇Θ(X*)^T d* ≥ τ ∇Θ(X*)^T d*. (41)
Since Θ(X*) = Θ* > 0 by assumption, it follows from (39) and Lemma 5.1 that ∇Θ(X*)^T d* < 0. Since τ < 1, this contradicts (41). This implies that {ζ_k} is bounded away from 0, and hence ∇Θ(X_k)^T d_k → 0. It then follows from (39) and Lemma 5.1 that Θ(X*) = 0. This is contradictory to Θ* > 0. Therefore, we must have Θ(X_k) → 0, which implies that any accumulation point X* of {X_k} satisfies Θ(X*) = 0 and hence is a solution of the NCP. The proof is complete.
Below is a result about the Jacobian matrix of H θ (X), when r goes to 0.

Lemma 5.2
Suppose that X * = (x * , z * ) is a solution of NCP, then we have the following equality

Proof
By considering the two possible situations (x*_i ≥ z*_i) and (x*_i < z*_i), very simple calculations give the result: since ∇_x G and ∇_z G are bounded, the proof is complete by passing to the limit.

Lemma 5.3
Let X* = (x*, z*) be a solution of the NCP satisfying the strict complementarity condition (i.e. x* + z* > 0).
The Jacobian matrix of H_θ is:
1. The derivative of G^1_r(x, z) with respect to x is:
2. The derivative of G^1_r(x, z) with respect to z is: as below, the only two cases to consider are:
3. The derivative of G^1_r(x, z) with respect to r is: when r goes to 0, the only two cases to consider are:
Finally, since X* = (x*, z*) is a solution of the NCP, we have x* ≥ 0 and z* ≥ 0.
We now present two situations where we can conclude about the nonsingularity of lim_{r→0} ∇_X H_θ(X*, r).

Lemma 5.4
Suppose that X* = (x*, z*) is a solution of the NCP. We have two possibilities when computing the determinant of ∇H_θ at (x*, z*).
• If X* = (x*, z*) does not satisfy the strict complementarity condition, then by Lemma 5.6, for all I ⊂ {1, ..., n} we have: from Lemma 5.2 we have: is a positive diagonal matrix. From Lemma 3.4, we take: where N_s is a positive diagonal matrix and N_t is a nonnegative diagonal matrix. Therefore, the matrix lim_{r→0} ∇_X H_θ(X*, r) exists and is invertible if ∇F(x*) − I_{n×n} is a P0-matrix.
In the following, we focus our attention on the superlinear convergence rate of Algorithm 1.
Theorem 5.2

1. (Local analysis) Let X* be a regular zero of H_θ. If X_0 is close enough to X*, then ζ_k = 1 for all k, and X_k converges to X* superlinearly (and we recover the standard Newton method).
2. (Limit point) Let X* be a limit point of the sequence {X_k}. If ∇H_θ(X*) is invertible, then X* is a regular zero of H_θ. If X* is a regular zero of H_θ, then ζ_k = 1 for k big enough and X_k converges to X* superlinearly.

Proof
We apply Theorem 5.1 and Lemma 5.4 under some condition on F.
The next lemma measures the "additional coercivity" effect of the smoothing.

Lemma 5.5
Assume F is a P 0 -function, then 1. H θ is a P-function.
2. If H θ (X, 0) exists then it is a P 0 -function.

Proof
(1) Let X, Y be two distinct vectors of R^{2n}. Since F is a P0-function, there exists an index i ∈ {1, ..., 2n} such that X_i ≠ Y_i. Without loss of generality, we can suppose that X_i > Y_i. Since ψ and ψ^{-1} are decreasing, we obtain consecutively that for any r > 0, (X_i − Y_i)[H_{θ,i}(X) − H_{θ,i}(Y)] > 0. Hence, H_θ is a P-function.
We now deal with the case where X_i = Y_i for all i < 2n + 1. For i = 2n + 1, we can suppose that X_{2n+1} > Y_{2n+1}, and the same inequality holds for the last component. Hence, H_θ is a P-function.
Now we would like to compare the asymptotic behavior of the Jacobian matrix of our method with that of the Jacobian matrix of the IPM when r goes to 0; we first need a lemma that will be used to prove our main result.

Lemma 5.6
We consider the following system:
where Z = diag(z) and X = diag(x). Assume that Z and X are strictly complementary (i.e. Z + X > 0_{n×n}). Then J is singular if and only if T is singular, where ϕ(·) is defined in Lemma 5.3; here ϕ operates component-wise on Z and X.

Proof
By the strict complementarity hypothesis, we rearrange the rows and the columns of J and T as follows, where X_2 > 0 and Z_1 > 0. The determinants of the two matrices J_σ and T_σ are equal up to factors involving a certain matrix C, and these factors are nonzero; we can then conclude that J and T are invertible, or singular, at the same time.

Theorem 5.3
Suppose that X* = (x*, z*) is a solution of the NCP which satisfies strict complementarity, and that ∇H_0(X*), defined by (28) (the Jacobian matrix of the interior-point method), is invertible. Then lim_{r→0} ∇_X H_θ(X*, r) is invertible; i.e., the two Jacobian matrices are singular or nonsingular at the same time.

Proof
In view of Lemma 5.6, and since X* = (x*, z*) is a solution of the NCP, we have x* ≥ 0 and z* ≥ 0, so that the limiting Jacobian takes the form of the matrix T of Lemma 5.6, where ϕ(·) is defined in Lemma 5.3, Z* = diag(z*) and X* = diag(x*). From Lemma 5.6, we conclude that if ∇_X H_0(X*) is invertible, then lim_{r→0} ∇_X H_θ(X*, r) is invertible. This means that if the IPM converges, our method converges.
Hypothesis H0 (F is a P0-function) ensures that our method is well defined, and Theorem 5.3 shows that the domain of convergence of our method is at least as large as that of the IPM.

Numerical experiments and applications
In this section, we present some numerical experiments for the two smoothing functions. Our aim is to verify the theoretical assertions for these two "extreme" cases. First, we study eight test problems with various sizes and characteristics. Then, we compare our method with other approaches that have been suggested recently in [6,14] on some randomly generated problems. We also present numerical results for two concrete examples. All the codes are written in MATLAB 2020R and run under Windows 10 on a PC with an 8th-gen Intel i5 and 16.00 GB of RAM. We take the precision ε = 10^{-9} as the termination criterion.

Example 6.1
We consider eight test problems (that can be found in [9,16,21,26,38,40]) with various sizes and characteristics. In some cases, F is monotone or strongly monotone, whereas others can have a non-connected solution set, in which case F is at most a P0-function. A precise description of each test problem is given in the appendix. In the table, Size stands for the number of variables, Iter corresponds to the total number of Jacobian evaluations, and Opt. and Feas. correspond to the optimality and feasibility measures. The results clearly show that our methods are efficient. We also remark that the second smoothing function is much more efficient and powerful than the first one. In the next table, we compare the results of the θ2-smoothing approach to three state-of-the-art methods, namely Fischer-Burmeister (FB-Alg) [6], Newton min (Min-Alg), and the projection method (PM-Alg) [10]. We make a comparison among Algorithm 1, FB-Alg, Min-Alg, and PM-Alg by implementing these algorithms on the same benchmark test problems available in the literature. Since we cannot compare the iteration counts, we only present the optimality measures and CPU times (s). The results clearly show that our methods are efficient, competitive, and superior to the Fischer-Burmeister method and the projection method.

Example 6.2
This example is described in [16,40]. The corresponding function F(x) is of the form: where the matrices A, B and D are randomly generated as follows: any entry of the square n × n matrix A and of the n × n skew-symmetric matrix B is uniformly generated from ]−5, 5[, and any entry of the diagonal matrix D is uniformly generated from ]0, 3[. The vector q is uniformly generated from ]−500, 0[. The matrix AA^T + B + D is positive definite and the function F is strongly monotone. We used the M-files proposed in [40] to generate A, B, D and q. In this example, we compare our methods already described in Sections 3 and 4, named Algorithm 1 (θ1) and Algorithm 1 (θ2), to some other methods (the Newton min method (Min-Alg), Fischer-Burmeister's method [6] (FB-Alg), and the classical interior-point method [14] (IPM-Alg)). In order to complete this experiment, we propose the performance profiles, developed by E. D. Dolan and J. J. Moré [8], as a tool for the comparative analysis of these methods. We set n_s = 5 as the number of methods and we have chosen n_p = 100 problems to be tested. We are interested in the comparison of the computation time and the number of iterations. Figure 3 shows the performance profiles of the five methods where the performance measure is execution time. It is clear that our method with the θ2-function stands out (it admits the highest probability value). In fact, in the interval [0, 1], our method is able to solve 99% of the problems, while the other methods do not reach 20% and require more time. We also notice that IPM-Alg is the slowest compared to the others. However, for t > 4, the three algorithms FB-Alg, Min-Alg, and Algorithm 1 with the θ1-function confirm their robustness. Figure 3 also indicates that, with respect to computation time, with the same initial points and under the same stopping criterion, our method with the θ2-function (resp. θ1-function) is the fastest method, followed respectively by Min-Alg, FB-Alg, and IPM-Alg.

In Figure 4, we illustrate the performance profiles of the five methods considering the number of iterations as the performance measure. We notice that our method with the θ2-function is the winner (it admits the highest probability value), followed by our method with the θ1-function, Min-Alg, and FB-Alg. We also note that IPM-Alg needs more iterations to solve the problems. The performance of our method with the θ1-function becomes interesting beyond t = 3.

Example 6.3 (Geochemical models [35])
This problem comes from geochemistry. We introduce a model with two salts. The main idea is that we need to reformulate the general problem as a problem of the form G(x) = 0. We show the numerical results obtained by applying several iterative methods. Let T, K be constant vectors which have a meaning in chemistry. We define the problem as follows: let x = (x_1, x_2, x_3) and p = (p_1, p_2); we want to solve equation (47). Using our approach with θ_r = θ1_r (resp. θ_r = θ2_r), we reformulate (47), considering x_4 = p_1 and x_5 = p_2. Since our focus is on the effect of different smoothing approaches in solving (47), we also replace the complementarity constraint by the min function and by Fischer-Burmeister's function [6]; the corresponding two algorithms are referred to as Min-Alg and FB-Alg, respectively. We make a comparison among Algorithm 1, Min-Alg, FB-Alg, and IPM-Alg (the classical interior-point method [14]) by implementing these algorithms to solve problem (47). Table 3 and Figure 5 show the results with the initial point x_0 = (3, 1, 4, 5, 6)^T, T = (2, 6)^T, and K = (37.5837, 7.6208)^T.
Table 3. Comparison of Algorithm 1 (θ1) and Algorithm 1 (θ2) with Min-Alg, FB-Alg, and IPM-Alg.

Conclusion
In this paper, we have presented a new smoothing approach for solving the nonlinear complementarity problem. For such an approach, some useful properties have been analyzed, which were employed to develop a well-defined and efficient Jacobian Newton algorithm for solving the nonlinear complementarity problem with a P0-function. We have established a global convergence result and a superlinear convergence rate for the developed algorithm. Numerical experiments demonstrate the efficiency of our study.

P6 has a non-degenerate solution x* = (0.2727, 2.0909, 0, 0.54545, 0.4545, 0, 0)^T.

5. A complete description of P7 and P8 can be found in [40,16]. These two examples correspond to the Nash-Cournot test problem with N = 5 and N = 10. Let x ∈ R^N, Q = Σ_{i=1}^N x_i, and define the functions C_i(x_i) and p(Q) as follows: The NCP function is given by