A Geometrical Approach of the Levinson Algorithm for Block Toeplitz Matrices

In this paper we obtain a version of the Levinson algorithm for block Toeplitz matrices in an infinite dimensional setting by means of a geometrical approach. With this methodology we obtain a sequence of operators in the Levinson recurrences whose norms represent, in geometric terms, angles between subspaces. Additionally, under this geometric framework a block LU decomposition of a block Toeplitz matrix is obtained.


Introduction
The Levinson algorithm (cf. [20]) is a very important result and has been widely used in prediction and signal processing theory (cf. [5], [13], [21], [22], [26] and [27]). In the infinite dimensional setting, the problem of linear prediction has applications in the context of continuous time processes and of large collections of time series (cf. [3] and [4]). A first matrix version of the Levinson algorithm was obtained by Whittle (cf. [33]) and later by Wiggins and Robinson (cf. [34]). Matrix versions of this algorithm are also discussed in [10], [11], [12], [15] and [28]. An extension to the infinite dimensional setting can be found in [16].
The Levinson recurrences are related to orthogonal polynomials on the unit circle (cf. [11] and [17]). These recurrences are also related to the moment problem, from which developments arise in function theory, in the spectral representation of operators and in statistics. In these recurrences an explicit sequence of parameters appears that can be identified with the partial autocorrelation coefficients (cf. [3], [7], [10] and [28]), the reflection coefficients in geophysics (cf. [8] and [16]) and the Schur parameters in analytic function theory (cf. [1], [31] and [32]). In operator theory they are known as choice sequences. These coefficients are very important in the theory of scalar stochastic processes since they characterize the autocorrelation coefficients of the process. An important application of this characterization was the development of a new spectral estimation technique known as the Burg maximum entropy method (cf. [6] and [28]). An extension of this technique to multivariate processes was studied in [29]. In [23] this concept is generalized to the Krein entropy. The partial autocorrelation coefficients are also related to dilation matrices of a stochastic process, which led [14] to propose a new vision of stochastic processes through the geometry induced by dilations.
The Levinson algorithm is also used in the estimation of the coefficients of an autoregressive linear filter. In this case we need to solve the system of linear equations T_p X_p = Y_p, where T_p is a Toeplitz matrix. For
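In the scalar case such a Toeplitz system can be solved in O(p^2) operations. The sketch below is our own minimal Python illustration (the function name and the sample autocovariances are assumptions, not from the paper) of the classical Levinson-Durbin recursion for the Yule-Walker form of the system:

```python
import numpy as np

def levinson_durbin(r):
    """Solve the Yule-Walker system T a = (r_1, ..., r_p)^t, where
    T[i, j] = r[|i - j|] is the symmetric Toeplitz matrix built from
    the autocovariances r_0, ..., r_{p-1}.  Returns the solution a and
    the sequence of reflection coefficients, in O(p^2) operations."""
    p = len(r) - 1
    a = np.zeros(p)
    k = np.zeros(p)              # reflection coefficients
    e = r[0]                     # prediction-error variance
    for m in range(p):
        acc = r[m + 1] - a[:m] @ r[1:m + 1][::-1]
        k[m] = acc / e
        # order-recursive update of the coefficient vector
        a[:m] = a[:m] - k[m] * a[:m][::-1]
        a[m] = k[m]
        e *= 1.0 - k[m] ** 2
    return a, k
```

For a strictly positive definite sequence every reflection coefficient has modulus less than one, which keeps the recursion well defined.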

Preliminaries
First we introduce the notation that we will use in this work. Denote by N and Z the sets of natural numbers and integers, respectively. We will use the symbols R and C to denote the sets of real and complex numbers, respectively, and D will denote the open unit disk in the complex plane, that is, D := {z ∈ C : |z| < 1}. The unit circle, the boundary of D, will be denoted by T. Define e_k by e_k(ζ) := ζ^k, ζ ∈ T, k ∈ Z. Denote, as usual, the set of all bounded linear operators acting on the Hilbert space H by L(H). By 1 we indicate either the scalar unit or the identity operator, depending on the context.
Next we explicitly explain how to build the Hilbert space H_p, the subspaces D_p and R_p and a surjective isometry V_p : D_p → R_p. The defect spaces N_p and M_p are also computed explicitly. For this, we need the sequence of bounded linear operators {R_k}_{k=0}^p to satisfy certain conditions. Given a sequence of bounded linear operators 1 = R_0, R_1, · · · , R_p acting on the separable Hilbert space G, define R_{−k} = R_k^* for k = 1, · · · , p, and say that the sequence is strictly positive definite if, and only if,

∑_{i,j=0}^{p} ⟨R_{i−j} h_j, h_i⟩ > 0,   (1)

for every not identically null sequence {h_k}_{k=0}^p ⊂ G. Now, for k = 0, 1, · · · , p, denote by T_k the bounded operator T_k : G^{k+1} → G^{k+1} given by the block Toeplitz matrix T_k = (R_{i−j})_{i,j=0}^k. Therefore (1) is equivalent to the bounded operator T_p being strictly positive. Note that, if the bounded operator T_p is strictly positive, then the bounded operators T_k, k = 0, 1, · · · , p − 1, are also strictly positive. In the following, γ^k = {γ^k_{ij}}_{i,j=0,1,··· ,k} stands for the inverse of T_k. The bounded operators γ^k, k = 0, 1, · · · , p, are strictly positive because the bounded operators T_k, k = 0, 1, · · · , p, are strictly positive. Hence, the bounded operators γ^k_{00} and γ^k_{kk}, k = 0, 1, · · · , p, are strictly positive, since they are compressions of the operators γ^k to suitable subspaces of G^{k+1}.
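As a finite dimensional illustration of this setup, assuming G = C^q so that each R_k is a q × q matrix, one can assemble the block Toeplitz matrix T_p and test its strict positivity numerically; the helper names and the sample data below are our own:

```python
import numpy as np

def block_toeplitz(R):
    """Assemble T_p = (R_{i-j})_{i,j=0}^p from the list R = [R_0, ..., R_p],
    each R_k a q x q complex matrix, using R_{-k} = R_k^* (the adjoint)."""
    p, q = len(R) - 1, R[0].shape[0]
    T = np.zeros(((p + 1) * q, (p + 1) * q), dtype=complex)
    for i in range(p + 1):
        for j in range(p + 1):
            blk = R[i - j] if i >= j else R[j - i].conj().T
            T[i * q:(i + 1) * q, j * q:(j + 1) * q] = blk
    return T

def is_strictly_positive(T, tol=1e-10):
    """Finite dimensional analogue of condition (1): T is Hermitian and
    its smallest eigenvalue is strictly positive."""
    return np.min(np.linalg.eigvalsh(T)) > tol
```

With R_0 = 1 and R_1 of norm smaller than one, the assembled T_1 is strictly positive, mirroring the operator condition.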
Suppose from now on that the sequence of bounded operators {R_k}_{k=−p}^p satisfies (1).
Define E_p := {f = ∑_{k=0}^p e_k a_k : a_k ∈ G, k = 0, · · · , p} as the set of all analytic trigonometric polynomials of degree less than or equal to p, on T, with values in the Hilbert space G.
Define the inner product on E_p by

⟨f, g⟩_p := ∑_{i,j=0}^{p} ⟨R_{i−j} a_j, b_i⟩_G, for f = ∑_{j=0}^p e_j a_j and g = ∑_{i=0}^p e_i b_i in E_p.

The space (E_p, ⟨·, ·⟩_p) is a Hilbert space. Indeed, we obtain this result from the fact that the operator I_p is bounded and invertible.
Let L²_G be defined as usual. Clearly, L²_G is a Hilbert space with the standard inner product. For k ∈ Z, let G_k be the subspace of L²_G of the functions of the form e_k a (a ∈ G). It can be seen in [30] that G_i is orthogonal to G_j if i ≠ j. Moreover, L²_G = ⊕_{k∈Z} G_k, and the operator Γ_p is defined by (3). Clearly Γ_p is well defined. The next result shows that V_p is a surjective isometry and that the operator Γ_p is bicontinuous.
To prove (c), we consider the operator J_p. Let {α_k}_{k=1}^∞ be a complete orthonormal system of the Hilbert space G. From (b) we obtain the classical normal equations, that is, for j ∈ N, Thus, from (2),

Lemma 1
Let Γ_p be the bicontinuous operator defined by (3). The following statements are true: There exists an isometry J such that From (3) we obtain the result. Statements (b) and (c) are easy to prove. Statement (d) is a direct consequence of the above proposition with J := I_{p−1,p} and of the preceding results. The main result of this section is a lower and upper triangular factorization of the matrix T_p^{−1}. To obtain such factorizations we need complete orthonormal systems of the defect spaces M_p and N_p.
For this, we consider the operators L_k. Clearly these operators are invertible, with inverses given explicitly. From these operators we define the normalized operator trigonometric polynomials M_p(e^{it}) and N_p(e^{it}). From now on, we write M_p and N_p instead of M_p(e^{it}) and N_p(e^{it}), respectively. These operators can be obtained from the classical equations (cf. [2], [16], [28] and [35]) using (3), the classical normal equations ((5) for N_p) and the formula above. The following result shows that from these operators one can obtain complete orthonormal systems of the defect spaces of the isometry V_p.

Lemma 2
Let {α_k}_{k=1}^∞ be a complete orthonormal system of the separable Hilbert space G. Then, m^k_p = M_p α_k and n^k_p = N_p α_k, k = 1, 2, . . .
are complete orthonormal systems of the defect spaces M p and N p respectively.
Proof: The result is a direct consequence of the preceding proposition and (4). Using a proof similar to the one given in [24], we can show that the zeros of n^k_p and m^k_p, k = 1, 2, . . ., lie in the open unit disk D and in the exterior of the closed unit disk, respectively.

Lemma 3
Let M_p and N_p be the operator trigonometric polynomials defined in (6). Then the following properties hold: (a) M_p and N_p are isometric isomorphisms.
(b) For all j, k = 1, · · · , p and every x, y ∈ G: ⟨N_j x, N_k y⟩_p = δ_{jk} ⟨x, y⟩ and ⟨e_{p−k} M_k x, e_{p−j} M_j y⟩_p = δ_{jk} ⟨x, y⟩.
Proof: Statement (a) is a direct consequence of the previous lemma. Statement (b) follows from the orthogonal decompositions. Write M_p = e_0 M_{p,0} + · · · + e_p M_{p,p} and N_p = e_0 N_{p,0} + · · · + e_p N_{p,p}, and define the matrices α_p and β_p in terms of these block coefficients. Since N_{j,j}, j = 0, 1, · · · , p, are strictly positive, β_1 is invertible, and for i = 2, · · · , p we can prove recursively that β_i is invertible. Analogously, a similar result can be obtained for α_p. The following result shows a lower and an upper triangular factorization of T_p^{−1}. Proposition 2 Let α_p and β_p be the matrices defined above. Then, Proof: Let f = (f_0, · · · , f_p)^t and g = (g_0, · · · , g_p)^t. From statement (b) of the previous lemma, we have that the product of β_p and β_p^* is a lower triangular factorization of T_p^{−1}. The upper triangular factorization of T_p^{−1} can be obtained from the fact that From now on, to simplify the notation, we write γ_{ij} instead of γ^k_{ij} for i, j = 0, 1, · · · , k; i, j ≠ 0. We will see in the next section that the Levinson recurrences provide an efficient algorithm to obtain these factorizations.
From the previous proposition we obtain the result.
Using the other representation of T_k and γ^k and the previous proposition, we establish the formula for γ^k_{00}.
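Numerically, the lower and upper triangular factorizations of the inverse can be illustrated with Cholesky decompositions. The sketch below is our own: a generic strictly positive matrix stands in for T_p, and the triangular factors are produced by NumPy rather than by the operators α_p and β_p of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
T = A @ A.T + 4.0 * np.eye(4)     # generic strictly positive stand-in for T_p
Tinv = np.linalg.inv(T)

# lower triangular factorization: Tinv = L L^t with L lower triangular
L = np.linalg.cholesky(Tinv)
# upper triangular factorization: reverse rows/columns, factor, reverse back
U = np.linalg.cholesky(Tinv[::-1, ::-1])[::-1, ::-1]

assert np.allclose(L @ L.T, Tinv)
assert np.allclose(U @ U.T, Tinv)
assert np.allclose(np.triu(U), U)   # U is indeed upper triangular
```

The reversal trick works because conjugating by the reversal permutation turns a lower triangular factor into an upper triangular one without changing the factored matrix.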

The Levinson Recurrences in an infinite dimensional setting
In this section we extend the Levinson recurrences to the infinite dimensional setting and derive an interesting formula for these recurrences using the generators of the defect spaces of an isometry. In this way we obtain a set of parameters whose norms can be interpreted in terms of angles between subspaces. Similar results have been obtained in the finite dimensional setting (cf. [24] and [25]).

Proposition 3 Denote by ∠(V_p N_{p−1}, M_{p−1}) the angle between the subspaces V_p N_{p−1} and M_{p−1}, let {α_k}_{k=1}^∞ be a complete orthonormal system of the separable Hilbert space G and let Λ_p : G → G be the map defined by (8). Then this map verifies the following properties: Proof: In order to prove statement (a), we use Lemma 2 and part (b) of Proposition 1 to obtain, which proves the result. Part (b) holds under the hypothesis that {α_k}_{k=1}^∞ is a complete orthonormal system of G. From the definition of the angle between two subspaces and the previous results, we have,

Therefore, ∥Λ_p∥ = cos ∠(V_p N_{p−1}, M_{p−1}). The recursion for Λ_p is a direct consequence of (10). Now, we need to define the following operators to simplify the notation in the main result of this work. From Proposition 3 these operators can be computed from M_{p−1}, N_{p−1} and the sequence R_1, · · · , R_p.
Theorem 1 (Levinson algorithm) Let M_p and N_p be the operator trigonometric polynomials defined by (6) and Λ_p as in (8). Then, for every p ∈ N, the following recurrences hold, Proof: First, let {α_k}_{k=1}^∞ be a complete orthonormal system of the separable Hilbert space G and note that e_p α_j = V_p(e_{p−1} α_j), with e_{p−1} α_j ∈ D_p. Second, from the orthogonal decompositions, substituting (13) into (12), we obtain, Again, using the orthogonal decomposition E_p = D_p ⊕ N_p, P^{E_p}_{N_p}(e_p α_j) = e_p α_j − P^{E_p}_{D_p}(e_p α_j). This operator is called the forward innovation operator. Now, we can rewrite (14) as, Finally, from Proposition 4 we obtain the first recurrence of (11), For the other recursion we need the orthogonal decomposition R_p = R_{p−1} ⊕ V_p N_{p−1}. Thus, Now, from the orthogonal decomposition E_p = R_p ⊕ M_p we have P^{E_p}_{M_p}(e_0 α_j) = e_0 α_j − P^{E_p}_{R_p}(e_0 α_j). This operator is called the backward innovation operator. Thus, we can rewrite (15) as From Proposition 1 and Lemma 2, we have P^{E_p}_{M_p}(e_0 α_j) = M_p (γ^p_{00})^{−1/2} α_j. Therefore, It remains to obtain P^{E_p}_{V_p N_{p−1}}(e_0 α_j). Note that from Proposition 1, Lemmas 1 and 2 and (8), Substituting this expression into (16), we conclude that which proves the main result of this work.
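In the scalar case the recurrences of Theorem 1 reduce to the classical order update of the monic forward predictor polynomial, with its reversal playing the role of the backward polynomial and the reflection coefficient playing the role of Λ_p. A minimal sketch of this scalar reduction (function name and test data are our own assumptions):

```python
import numpy as np

def levinson_predictors(r):
    """Scalar sketch of the order recursion: A plays the role of the
    forward operator polynomial and its reversal that of the backward
    one; k is the scalar analogue of the parameter Lambda_p.  The input
    r holds the autocovariances r_0, ..., r_p."""
    A = np.array([1.0])            # monic forward predictor, order 0
    E = r[0]                       # prediction-error variance
    ks = []                        # reflection coefficients
    for m in range(1, len(r)):
        delta = A @ r[m:m - len(A):-1]          # sum_j A_j r_{m-j}
        k = delta / E
        # order update: new A = [A, 0] - k * [0, reverse(A)]
        A = np.concatenate([A, [0.0]]) - k * np.concatenate([[0.0], A[::-1]])
        E *= 1.0 - k * k
        ks.append(k)
    return A, ks, E
```

At every order the returned polynomial satisfies the normal equations: applying the Toeplitz matrix to A annihilates all entries except the first, which equals the prediction-error variance E.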

Conclusions and Discussions
In this work we have obtained a new version of the Levinson algorithm in an infinite dimensional setting using a geometrical approach. This result, together with the lower and upper triangular factorizations of the inverse of the block Toeplitz matrix T_p, allows us to solve efficiently systems of equations T_p X_p = Y_p, where X_p, Y_p ∈ G^{p+1}, by recursively computing the solutions of the systems of increasing size T_k X_k = Y_k. Additionally, the parameters obtained can be interpreted in terms of angles between subspaces. This generalizes the result obtained in the finite dimensional case, where these parameters can be identified with the partial autocorrelation coefficients. Indeed, if we assume that 1 = R_0, . . . , R_p are the scalar autocorrelation coefficients of a second order stationary stochastic process X = {X_k}_{k∈Z}, this sequence is strictly positive definite. From [25] we know that the map X_{−j} → e_j, j ∈ Z, establishes a unitary automorphism between H_X = Span{X_j : j ∈ Z} (the closure of the space generated by the process) and L²(f dt), where f is the spectral density of the process. Now, let l, k ∈ N, k > l, and let H_{l,k} := Span{X_{−n}}_{n=l}^k be subspaces of H_X. We define the innovations by ε_p = X_{−p} − P_{H_{1,p−1}} X_{−p} and ε*_p = X_0 − P_{H_{1,p−1}} X_0, and they verify These parameters are called partial autocorrelation coefficients and are obtained from the Levinson algorithm. When the dimension is q we have a similar result; more details can be found in [24]. In this paper we show that the parameters Λ_p have the same interpretation: ∥Λ_p∥ = cos ∠(V_p N_{p−1}, M_{p−1}). We have obtained from Corollary 1 that these parameters can be computed recursively and that there exists a one to one correspondence between these parameters and the sequence of operators {R_k}_{k=1}^p. Finally, the geometrical technique obtained in this work could be useful for solving extension problems in statistics and for prediction problems in an infinite dimensional setting.
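The scalar identification above can be checked numerically: with hypothetical autocovariances of our own choosing, the correlation of the innovations ε_p and ε*_p, that is the partial autocorrelation and the cosine of the angle between them, can be computed by explicit projection onto the intermediate lags:

```python
import numpy as np

r = np.array([4.0, 2.0, 1.5])    # hypothetical autocovariances r_0, r_1, r_2
p = 2
T = np.array([[r[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])

# project X_0 and X_{-p} onto span{X_{-1}, ..., X_{-(p-1)}} and correlate
# the residuals (the innovations eps*_p and eps_p)
mid = slice(1, p)
C = T[mid, mid]
c0 = T[mid, 0]                                  # cov(middle lags, X_0)
cp = T[mid, p]                                  # cov(middle lags, X_{-p})
v0 = T[0, 0] - c0 @ np.linalg.solve(C, c0)      # Var eps*_p
vp = T[p, p] - cp @ np.linalg.solve(C, cp)      # Var eps_p
cross = T[0, p] - c0 @ np.linalg.solve(C, cp)   # Cov(eps*_p, eps_p)
pacf = cross / np.sqrt(v0 * vp)                 # cosine of the angle
# for these data: cross = 0.5 and v0 = vp = 3, so pacf = 1/6
```

The same value is produced by the reflection coefficient of the Levinson recursion at order p, which is the scalar counterpart of the identity ∥Λ_p∥ = cos ∠(V_p N_{p−1}, M_{p−1}).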