Minimaxity and Limits of Risks Ratios of Shrinkage Estimators of a Multivariate Normal Mean in the Bayesian Case

In this article, we consider two forms of shrinkage estimators of the mean $\theta$ of a multivariate normal distribution $X\sim N_{p}\left(\theta, \sigma^{2}I_{p}\right)$, where $\sigma^{2}$ is unknown. We take the prior law $\theta \sim N_{p}\left(\upsilon, \tau^{2}I_{p}\right)$ and construct a Modified Bayes estimator $\delta_{B}^{\ast}$ and an Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$. We study the minimaxity of these estimators and the limits of their risk ratios, relative to the maximum likelihood estimator $X$, when $n$ and $p$ tend to infinity.


Introduction
The problem of estimating the mean vector $\theta$ of a multivariate normal distribution $N_{p}(\theta, \sigma^{2}I_{p})$ in $\mathbb{R}^{p}$ has seen many developments since the papers [14,8,12]. In these works one estimates the mean $\theta$ by shrinkage estimators deduced from the empirical mean estimator, which are better in quadratic loss than the empirical mean estimator. Stein [14] showed that the maximum likelihood estimator of the mean $\theta$ is inadmissible when the dimension of the parameter space exceeds two. James and Stein [8] provided a constructive shrinkage estimator, $\delta^{JS}=\left(1-\frac{(p-2)S^{2}}{(n+2)\|X\|^{2}}\right)X$, which dominates the empirical mean estimator $\delta^{0}=X$ when the dimension of the parameter space satisfies $p\geq 3$. Baranchik [1] proposed the positive-part James-Stein estimator $\delta^{JS+}=\max\left(0,\,1-\frac{(p-2)S^{2}}{(n+2)\|X\|^{2}}\right)X$, an estimator dominating the James-Stein estimator.
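The dominance of $\delta^{JS}$ over $\delta^{0}=X$ can be made concrete with a small Monte Carlo sketch (an illustration added here, not part of the original paper; the values of $p$, $n$, $\theta$ and $\sigma^{2}$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 20
sigma2 = 1.0
theta = np.full(p, 0.5)      # arbitrary true mean
reps = 20_000

# X ~ N_p(theta, sigma^2 I_p), S^2 ~ sigma^2 chi^2_n, independent
x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))
s2 = sigma2 * rng.chisquare(n, size=reps)

# James-Stein shrinkage toward the origin, with sigma^2 estimated by S^2
shrink = 1.0 - (p - 2) * s2 / ((n + 2) * np.sum(x**2, axis=1))
delta_js = shrink[:, None] * x

risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((delta_js - theta) ** 2, axis=1))
print(risk_mle, risk_js)   # risk_js is well below risk_mle ≈ p*sigma2 = 10
```

For $\theta$ close to the shrinkage target the improvement is substantial; as $\|\theta\|$ grows the two risks become comparable, but $\delta^{JS}$ never does worse.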
When the dimension $p$ is infinite, Casella and Hwang [5] studied the case where $\sigma^{2}$ is known ($\sigma^{2}=1$) and showed that if the ratio $\|\theta\|^{2}/p$ tends to a constant $c>0$, then the risk ratios of the James-Stein estimator $\delta^{JS}$ and of the positive-part James-Stein estimator $\delta^{JS+}$, relative to the maximum likelihood estimator $X$, tend to the constant $c/(1+c)$. Benmansour and Hamdaoui [2] took the same model as Casella and Hwang [5], with the parameter $\sigma^{2}$ unknown, and established the same results. Hamdaoui and Benmansour [7] considered the model where $\sigma^{2}$ is unknown and estimated by $S^{2}$ ($S^{2}\sim\sigma^{2}\chi_{n}^{2}$). They studied the following class of shrinkage estimators: $\delta_{\phi}=\delta^{JS}+\ell\left(S^{2}\phi(S^{2},\|X\|^{2})/\|X\|^{2}\right)X$. The authors showed that, when the sample size $n$ and the dimension of the parameter space $p$ tend to infinity, the risk ratios of the estimators $\delta_{\phi}$ have the lower bound $B_{m}=c/(1+c)$, and that if the shrinkage function $\phi$ satisfies some conditions, the risk ratio $R(\delta_{\phi},\theta)/R(X,\theta)$ attains this lower bound $B_{m}$; in particular, so do the risk ratios $R(\delta^{JS},\theta)/R(X,\theta)$ and $R(\delta^{JS+},\theta)/R(X,\theta)$. In Hamdaoui et al. [10], the authors studied the limits of the risk ratios of two forms of shrinkage estimators. The first was introduced by Benmansour and Mourid [3] and satisfies conditions different from those given in Hamdaoui and Benmansour [7]. The second is the polynomial form of shrinkage estimator introduced by Li and Kio [11]. Hamdaoui and Mezouar [9] studied a general class of shrinkage estimators $\delta_{\phi}$ and obtained the same results as in Hamdaoui and Benmansour [7], under different conditions on the shrinkage function $\phi$.
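The limit $c/(1+c)$ can be checked numerically. The following sketch (an illustration added here, not taken from the cited works) fixes $\|\theta\|^{2}/p=c$ and estimates the risk ratio of $\delta^{JS}$ for large $n$ and $p$:

```python
import numpy as np

rng = np.random.default_rng(1)

def js_risk_ratio(p, n, c, sigma2=1.0, reps=2000):
    # Choose theta so that ||theta||^2 / p = c, the regime of Casella and Hwang
    theta = np.full(p, np.sqrt(c * sigma2))
    x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))
    s2 = sigma2 * rng.chisquare(n, size=reps)
    shrink = 1.0 - (p - 2) * s2 / ((n + 2) * np.sum(x**2, axis=1))
    loss_js = np.sum((shrink[:, None] * x - theta) ** 2, axis=1)
    loss_mle = np.sum((x - theta) ** 2, axis=1)
    return loss_js.sum() / loss_mle.sum()

c = 2.0
ratio = js_risk_ratio(p=500, n=500, c=c)
print(ratio, c / (1 + c))   # ratio approaches c/(1+c) = 2/3
```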
When the dimension p is finite, many authors studied the minimaxity of shrinkage estimators of a multivariate normal mean, see for example [4,13,16].
In this paper, we extend our previous works to the Bayesian case. We adopt the model $X\sim N_{p}(\theta,\sigma^{2}I_{p})$ and, independently of the observations $X$, we observe $S^{2}\sim\sigma^{2}\chi_{n}^{2}$, an estimator of $\sigma^{2}$. We consider the prior distribution $\theta\sim N_{p}(\upsilon,\tau^{2}I_{p})$, where the hyperparameter $\upsilon$ is known and the hyperparameter $\tau^{2}$ is known or unknown. Note that $R(X,\theta)=p\sigma^{2}$ is the quadratic risk of the maximum likelihood estimator. It is well known that the maximum likelihood estimator $X$ is minimax, so any estimator dominating it is also minimax. Our goal is to estimate the mean $\theta$ by a Modified Bayes estimator $\delta_{B}^{\ast}$ when the hyperparameter $\tau^{2}$ is known, and by an Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$ when the hyperparameter $\tau^{2}$ is unknown. The paper is organized as follows. In Section 1, we give preliminaries containing some results used in the next sections. In Section 2, we give the main results of this paper. First, we take the prior law $\theta\sim N_{p}(\upsilon,\tau^{2}I_{p})$, where the hyperparameters $\upsilon$ and $\tau^{2}$ are known, and we construct a Modified Bayes estimator $\delta_{B}^{\ast}$. When $n$ and $p$ are fixed, we show that the estimator $\delta_{B}^{\ast}$ is minimax. We then study the behaviour of the risk ratio of this estimator, relative to the maximum likelihood estimator $X$, when $n$ and $p$ tend simultaneously to infinity, without assuming any order relation or functional relation between $n$ and $p$. In the second part of this section, we take the prior distribution of $\theta$ where the hyperparameter $\upsilon$ is known and the hyperparameter $\tau^{2}$ is unknown, and we construct an Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$ of the mean $\theta$, following the same steps as in the first part. In the third part of this section, we illustrate the results of the paper graphically. Finally, we give an Appendix containing technical lemmas used in the proofs of our results.

Preliminaries
We recall that if $X$ is a multivariate Gaussian random vector $N_{p}(\theta,\sigma^{2}I_{p})$, then $\|X\|^{2}/\sigma^{2}\sim\chi_{p}^{2}(\lambda)$, where $\chi_{p}^{2}(\lambda)$ denotes the non-central chi-square distribution with $p$ degrees of freedom and non-centrality parameter $\lambda=\|\theta\|^{2}/\sigma^{2}$. The following definition is used to compute the expectation of functions of a random variable with a non-central chi-square law.
Definition 1
For any measurable function $f$ such that the expectations below exist,
$$E\left[f\left(\chi_{p}^{2}(\lambda)\right)\right]=E_{K}\left[E\left[f\left(\chi_{p+2K}^{2}\right)\right]\right],\qquad K\sim P\!\left(\tfrac{\lambda}{2}\right),$$
$P\!\left(\tfrac{\lambda}{2}\right)$ being the Poisson distribution of parameter $\frac{\lambda}{2}$ and $\chi_{p+2k}^{2}$ the central chi-square distribution with $p+2k$ degrees of freedom.
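This Poisson-mixture representation is easy to verify by simulation. The sketch below (an illustration added here, with arbitrary $p$ and $\lambda$) compares direct sampling of $\chi_{p}^{2}(\lambda)$ with the mixture $\chi_{p+2K}^{2}$, $K\sim P(\lambda/2)$; both sample means estimate $E[\chi_{p}^{2}(\lambda)]=p+\lambda$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, lam = 5, 3.0          # degrees of freedom and non-centrality lambda
N = 200_000

# Direct sampling of the non-central chi-square chi^2_p(lambda)
direct = rng.noncentral_chisquare(p, lam, size=N)

# Poisson mixture of central chi-squares: K ~ P(lambda/2), then chi^2_{p+2K}
K = rng.poisson(lam / 2.0, size=N)
mixture = rng.chisquare(p + 2 * K)

print(direct.mean(), mixture.mean())   # both ≈ p + lam = 8
```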
We recall the following lemma, given by Fourdrinier et al. [6], which we will use often in what follows.

Lemma 1 (Fourdrinier et al. [6])
If $X\sim N_{p}(\theta,\sigma^{2}I_{p})$, then for any measurable function $f$ such that the expectations below exist,
$$E\left[f\left(\|X\|^{2}\right)\right]=E_{K}\left[E\left[f\left(\sigma^{2}\chi_{p+2K}^{2}\right)\right]\right],\qquad K\sim P\!\left(\tfrac{\|\theta\|^{2}}{2\sigma^{2}}\right),$$
$P\!\left(\tfrac{\|\theta\|^{2}}{2\sigma^{2}}\right)$ being the Poisson distribution of parameter $\frac{\|\theta\|^{2}}{2\sigma^{2}}$. Now, we recall some known results on Bayes estimators.
Consider the model $X\sim N_{p}(\theta,\sigma^{2}I_{p})$, where $\sigma^{2}$ is known, with the prior law $\theta\sim N_{p}(\nu,\tau^{2}I_{p})$, where the hyperparameters $\nu$ and $\tau^{2}$ are known. From Lindley et al. [12], we have $\theta\,|\,X\sim N_{p}\!\left(X-\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}(X-\nu),\,\frac{\sigma^{2}\tau^{2}}{\sigma^{2}+\tau^{2}}I_{p}\right)$. Then, the Bayes estimator of $\theta$ is
$$\delta_{B}(X)=X-\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}(X-\nu).\qquad(1)$$
We deduce that its Bayes risk is $R(\delta_{B})=\frac{p\sigma^{2}\tau^{2}}{\sigma^{2}+\tau^{2}}$.
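As a sanity check (added here, not in the original), the Bayes risk $p\sigma^{2}\tau^{2}/(\sigma^{2}+\tau^{2})$ of the estimator (1) can be recovered by simulating from the joint law of $(\theta,X)$; the parameter values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
p, sigma2, tau2 = 4, 1.0, 3.0
nu = np.zeros(p)
reps = 100_000

# theta ~ N_p(nu, tau^2 I), then X | theta ~ N_p(theta, sigma^2 I)
theta = nu + rng.normal(scale=np.sqrt(tau2), size=(reps, p))
x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))

# Bayes estimator (1): shrink X toward the prior mean nu
delta_b = x - sigma2 / (sigma2 + tau2) * (x - nu)
bayes_risk = np.mean(np.sum((delta_b - theta) ** 2, axis=1))
print(bayes_risk)   # ≈ p*sigma2*tau2/(sigma2+tau2) = 3
```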

Main results
In this section we are interested in studying the minimaxity, bounds and limits of the risk ratios of a Modified Bayes estimator and an Empirical Modified Bayes estimator, relative to the maximum likelihood estimator $X$.
To prove our main results we will need the following lemmas.

Proof. a) From Definition 1, we have, $P\!\left(\tfrac{\lambda}{2}\right)$ being the Poisson distribution of parameter $\frac{\lambda}{2}$,
using conditional expectation and the fact that the covariance of an increasing function and a decreasing function is non-positive, we obtain the stated bound; the penultimate equality follows from Definition 1. Thus the claim holds. In the same way, we get b).

Lemma 3
For any c > 0, we have

Proof
On the one hand, from Jensen's inequality, we have the first bound. On the other hand, from Lemma 5 (Appendix), we have the second; inequality (4) follows from Jensen's inequality. The proof of formula (3) is as follows: from Jensen's inequality we have the first inequality. On the other hand, equality (6) follows from Lemma 5 and inequality (7) follows from Lemma 2. Hence, using formula (2), we obtain the result.

A Modified Bayes Estimator
We consider the model $X\sim N_{p}(\theta,\sigma^{2}I_{p})$, where $\sigma^{2}$ is unknown and estimated by the statistic $S^{2}\sim\sigma^{2}\chi_{n}^{2}$, and $\theta$ has the prior distribution $\theta\sim N_{p}(\nu,\tau^{2}I_{p})$, where the hyperparameters $\nu$ and $\tau^{2}$ are known.

Proposition 1
The statistic $\frac{S^{2}}{S^{2}+n\tau^{2}}$ is an asymptotically unbiased estimator of the ratio $\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}$.

Proof. The last equality comes from Lemma 5 of the Appendix. From (2) of Lemma 3, we have the stated asymptotic unbiasedness. If we replace in formula (1) the ratio $\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}$ by its estimator $\frac{S^{2}}{S^{2}+n\tau^{2}}$, we obtain the Modified Bayes estimator
$$\delta_{B}^{\ast}=X-\frac{S^{2}}{S^{2}+n\tau^{2}}(X-\nu).$$
The following theorem gives an explicit formula for the risk of the Modified Bayes estimator $\delta_{B}^{\ast}$, together with lower and upper bounds for the risk ratio of $\delta_{B}^{\ast}$ relative to the maximum likelihood estimator $X$.
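A Monte Carlo sketch (an illustration added here, assuming the plug-in form $\delta_{B}^{\ast}=X-\frac{S^{2}}{S^{2}+n\tau^{2}}(X-\nu)$ and arbitrary parameter values) confirms that the Bayes risk of $\delta_{B}^{\ast}$ stays below the minimax risk $p\sigma^{2}$:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 6, 20
sigma2, tau2 = 1.0, 2.0
nu = np.zeros(p)
reps = 100_000

# Joint sampling: theta from the prior, then X and S^2 independently
theta = nu + rng.normal(scale=np.sqrt(tau2), size=(reps, p))
x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))
s2 = sigma2 * rng.chisquare(n, size=reps)

w = s2 / (s2 + n * tau2)            # plug-in estimate of sigma^2/(sigma^2+tau^2)
delta = x - w[:, None] * (x - nu)
bayes_risk = np.mean(np.sum((delta - theta) ** 2, axis=1))
print(bayes_risk, p * sigma2)       # bayes_risk stays below p*sigma2 = 6
```

For these values the estimated Bayes risk is close to the known-variance Bayes risk $p\sigma^{2}\tau^{2}/(\sigma^{2}+\tau^{2})=4$, with a small inflation due to estimating $\sigma^{2}$ by $S^{2}$.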

From the independence of the variables $X$ and $S^{2}$, we have the stated decomposition; the penultimate equality comes from Lemma 4 (Appendix), and the last equality follows from Lemma 5 (Appendix). Thus

By using formulas (2) and (3) of Lemma 3, we obtain
Proof. a) From the previous theorem, we have the stated bound. The study of the variations of the real function $h(x)=(x+2)$, shows that $h(n+2)\leq 0$ for any $n\geq 5$.
b) Follows immediately from ii) of Theorem 1.

An Empirical Modified Bayes Estimator
We consider the model $X\sim N_{p}(\theta,\sigma^{2}I_{p})$, where $\sigma^{2}$ is unknown and estimated by the statistic $S^{2}\sim\sigma^{2}\chi_{n}^{2}$, and $\theta$ has the prior distribution $\theta\sim N_{p}(\nu,\tau^{2}I_{p})$, where the hyperparameter $\nu$ is known and the hyperparameter $\tau^{2}$ is unknown.

Proposition 2
The statistic $\frac{p-2}{n+2}\,\frac{S^{2}}{\|X-\nu\|^{2}}$ is an asymptotically unbiased estimator of the ratio $\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}$.

As the marginal distribution of $X$ is $X\sim N_{p}\left(\nu,(\sigma^{2}+\tau^{2})I_{p}\right)$, using Definition 1 we obtain the stated limit. If we replace in formula (1) the ratio $\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}$ by its estimator $\frac{p-2}{n+2}\,\frac{S^{2}}{\|X-\nu\|^{2}}$, we obtain the Empirical Modified Bayes estimator
$$\delta_{EB}^{\ast}=X-\frac{p-2}{n+2}\,\frac{S^{2}}{\|X-\nu\|^{2}}(X-\nu).\qquad(10)$$
The following theorem gives an explicit formula for the risk of the Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$.
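Both claims can be checked by simulation. The sketch below (an illustration added here, with arbitrary parameter values) estimates the expectation in Proposition 2, whose finite-$n$ value $\frac{n}{n+2}\,\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}$ follows from $E[1/\chi_{p}^{2}]=1/(p-2)$ applied to the marginal law of $X$, and verifies that the Bayes risk of the estimator (10) is below $p\sigma^{2}$:

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 10, 50
sigma2, tau2 = 1.0, 2.0
nu = np.zeros(p)
reps = 200_000

theta = nu + rng.normal(scale=np.sqrt(tau2), size=(reps, p))
x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))
s2 = sigma2 * rng.chisquare(n, size=reps)
d2 = np.sum((x - nu) ** 2, axis=1)

# Proposition 2: (p-2)/(n+2) * S^2/||X-nu||^2 estimates sigma^2/(sigma^2+tau^2)
w = (p - 2) / (n + 2) * s2 / d2
est = np.mean(w)

# Bayes risk of the Empirical Modified Bayes estimator (10)
delta = x - w[:, None] * (x - nu)
bayes_risk = np.mean(np.sum((delta - theta) ** 2, axis=1))
print(est, bayes_risk)   # est ≈ (n/(n+2)) * 1/3; bayes_risk < p*sigma2 = 10
```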

Theorem 3
The quadratic risk of the Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$ given in (10) is as stated. Proof. Using Definition 1 and then Lemma 4 (Appendix), we obtain the decomposition; the last equality follows from the corresponding expectation formula. Hence the result. The proof of the following corollary is immediate from Theorem 3. Now, we consider the class of shrinkage estimators defined by
$$\delta_{EB}^{\ast c}=X-c\,\frac{S^{2}}{\|X-\nu\|^{2}}(X-\nu),\qquad(11)$$
where $c$ is a real constant that may depend on $n$ and $p$.

Proposition 3
The quadratic risk of the estimator δ * c EB given in (11) is

Proof
The proof is analogous to that of Theorem 3, so we only give a brief sketch. Using the same techniques as in the proof of Theorem 3, we obtain

Theorem 5
Let $\delta_{EB}^{\ast c}$ be the estimator given in (11). Then: a) If $p\geq 3$, a sufficient condition for the estimator $\delta_{EB}^{\ast c}$ to be minimax is as stated. b) The optimal value of $c$, for which the risk of $\delta_{EB}^{\ast c}$ is minimal, is $c=\frac{p-2}{n+2}$; thus the estimator $\delta_{EB}^{\ast}$ is the best in the class of shrinkage estimators $\delta_{EB}^{\ast c}$. Proof. a) Using Proposition 3, a sufficient condition for the estimator $\delta_{EB}^{\ast c}$ to be minimax follows. b) By the convexity of the risk function $R\left(\delta_{EB}^{\ast c};\nu,\tau^{2},\sigma^{2}\right)$ as a function of $c$, the optimal value of $c$ minimizing the risk is $c=\frac{p-2}{n+2}$; thus $\delta_{EB}^{\ast}$ is the best estimator in the class $\delta_{EB}^{\ast c}$.
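Claim b) can be illustrated numerically (a sketch added here, with arbitrary illustrative parameters). Since the risk of $\delta_{EB}^{\ast c}$ is quadratic in $c$, say $R(c)=A-2Bc+Cc^{2}$, its minimizer $c^{\ast}=B/C$ can be estimated by Monte Carlo and compared with $\frac{p-2}{n+2}$:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 10, 50
sigma2, tau2 = 1.0, 2.0
nu = np.zeros(p)
reps = 200_000

theta = nu + rng.normal(scale=np.sqrt(tau2), size=(reps, p))
x = theta + rng.normal(scale=np.sqrt(sigma2), size=(reps, p))
s2 = sigma2 * rng.chisquare(n, size=reps)
d2 = np.sum((x - nu) ** 2, axis=1)
cross = np.sum((x - theta) * (x - nu), axis=1)

# Loss of delta_c = X - c*S^2/||X-nu||^2 (X-nu) is quadratic in c:
# ||X-theta||^2 - 2c * S^2*cross/d2 + c^2 * S^4/d2, minimized at c* = B/C
B = np.mean(s2 * cross / d2)
C = np.mean(s2**2 / d2)
c_hat = B / C
print(c_hat, (p - 2) / (n + 2))   # both ≈ 0.154
```

A pleasant feature of this check is that $B/C=\frac{p-2}{n+2}$ does not depend on $\tau^{2}$ or $\sigma^{2}$, matching the theorem.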

Simulation
First, we illustrate graphically the risk ratio of the Modified Bayes estimator $\delta_{B}^{\ast}$ to the maximum likelihood estimator $X$, as a function of $\lambda=\tau^{2}/\sigma^{2}$, for various values of $n$. In Fig. 1 and Fig. 2, we note that the risk ratio of the Modified Bayes estimator $\delta_{B}^{\ast}$ to the maximum likelihood estimator $X$ is less than 1; thus the Modified Bayes estimator is minimax for $n=5$ and $n=8$.
Secondly, we illustrate graphically the risk difference $\Delta R=R\left(\delta_{B}^{\ast};\nu,\tau^{2},\sigma^{2}\right)-R(X)$ between the Modified Bayes estimator $\delta_{B}^{\ast}$ and the maximum likelihood estimator $X$, as a function of $x=\tau^{2}$ and $y=\sigma^{2}$, for various values of $n$. In Fig. 3 and Fig. 4, we note that this risk difference is negative; thus the Modified Bayes estimator is minimax for $n=22$ and $n=28$.
Finally, we illustrate the graphs of the upper bound given by formula (3.8) for the risk difference $\Delta R=R\left(\delta_{B}^{\ast};\nu,\tau^{2},\sigma^{2}\right)-R(X)$ divided by the risk of the maximum likelihood estimator $R(X)$, as a function of $y=\tau^{2}/\sigma^{2}$, for various values of $n$. In Fig. 5 and Fig. 6, we note that this risk difference is negative; thus the Modified Bayes estimator is minimax for large values of $n$, for example $n=100$ and $n=1000$.

Conclusion
In this work, we studied shrinkage estimators of the mean of a multivariate normal distribution in the Bayesian case. We considered the model $X\sim N_{p}(\theta,\sigma^{2}I_{p})$, where $\sigma^{2}$ is unknown, and we took the prior distribution $\theta\sim N_{p}(\upsilon,\tau^{2}I_{p})$, where the hyperparameter $\upsilon$ is known and the hyperparameter $\tau^{2}$ is known or unknown. We constructed a Modified Bayes estimator $\delta_{B}^{\ast}$ when the hyperparameter $\tau^{2}$ is known, and an Empirical Modified Bayes estimator $\delta_{EB}^{\ast}$ when the hyperparameter $\tau^{2}$ is unknown. We showed that the estimators $\delta_{B}^{\ast}$ and $\delta_{EB}^{\ast}$ are minimax when $n$ and $p$ are finite. When $n$ and $p$ tend simultaneously to infinity, the results agree with those obtained in our previous published papers. An extension of this work would be to study the minimaxity and the limits of risk ratios when the model and the prior law both have a spherically symmetric distribution.

Lemma 5
For any real function $h$ such that the expectation in question exists, we have