A Splitting-based Iterative Method for Sparse Reconstruction

In this paper, we study an ℓ1-norm regularized minimization method for sparse solution recovery in compressed sensing and X-ray CT image reconstruction. In the proposed method, an alternating minimization algorithm is employed to solve the resulting ℓ1-norm regularized minimization problem. Under suitable conditions, the proposed algorithm is shown to be globally convergent. Numerical results indicate that the presented method is effective and promising.


Introduction
In this paper, we mainly focus on the following ℓ1-norm regularized minimization problem:

min_x (1/2)∥Ax − b∥_2^2 + λ∥x∥_1,    (1)

where A ∈ R^{m×n} (m ≪ n) is a linear operator, b ∈ R^m is an observation, and the parameter λ > 0 trades off the ℓ2 data-fidelity term against the ℓ1 regularization term. Problem (1) is well known as the unconstrained basis pursuit denoising problem.
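As a concrete point of reference, the objective of problem (1) can be evaluated in a few lines. The following sketch uses NumPy; the function name `bpdn_objective` and the random test data are illustrative, not part of the paper.

```python
import numpy as np

def bpdn_objective(A, b, x, lam):
    """Objective of the unconstrained basis pursuit denoising problem (1):
    0.5 * ||A x - b||_2^2 + lam * ||x||_1."""
    residual = A @ x - b
    return 0.5 * (residual @ residual) + lam * np.abs(x).sum()

# Tiny illustration with a random underdetermined system (m << n).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))
b = rng.standard_normal(5)
print(bpdn_objective(A, b, np.zeros(20), lam=0.1))  # equals 0.5*||b||^2 at x = 0
```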
Up to now, numerous approaches have been proposed and analyzed for solving problem (1) [7,12,23,25]. Figueiredo et al. presented the GPSR method [12], in which the ℓ1 problem (1) is reformulated as a box-constrained quadratic program and minimized by the Barzilai-Borwein gradient projection method [1] with an efficient nonmonotone line search. The iterative shrinkage-thresholding algorithm (ISTA) is one of the most widely studied methods for solving problem (1); it was first proposed for wavelet-based image deconvolution [13,18] and later applied to many other fields [10,11,21,22]. Using operator splitting, Hale, Yin and Zhang derived the iterative shrinkage-thresholding fixed point continuation algorithm (FPC) [16,17]. Another similar method is the sparse reconstruction algorithm SpaRSA, proposed by Wright, Nowak and Figueiredo in [24]. Recently, Beck and Teboulle proposed the FISTA algorithm with an O(1/k^2) convergence rate [4]. In [25], Yin, Osher, Goldfarb and Darbon proposed a Bregman iterative regularization method for solving problem (1); a linearized Bregman method was proposed and analyzed subsequently in [7,8,26]. Using the spectral gradient projection method with an efficient Euclidean projection onto the ℓ1-norm ball, Friedlander and Van den Berg [27] proposed the SPGL1 method for problem (1). Becker, Bobin and Candès [3] proposed the NESTA algorithm, based on Nesterov's smoothing technique, to solve problem (1). Moreover, alternating direction algorithms for the ℓ1-norm regularized minimization problem were proposed, derived from either the primal or the dual form of the ℓ1-norm problem [28].
In this paper, based on the variable splitting scheme [14,20], we start from the constrained formulation

min_x ∥x∥_1  subject to  ∥Ax − b∥_2 ≤ ε,    (2)

where the parameter ε > 0 models the standard deviation of the additive noise; because of the noise, (2) is in general a strict inequality constrained problem. For an appropriate choice of λ, problem (2) is equivalent to (1), and when ε = 0 it reduces to the basis pursuit problem. Splitting x into a pair of variables (x, u), the constrained problem can be transformed into the following unconstrained problem:

min_{x,u} J(x, u) = ∥u∥_1 + (λ1/2)∥x − u∥_2^2 + (λ2/2)∥Ax − b∥_2^2,    (3)

where λ1 and λ2 are positive regularization parameters. In fact, this variable splitting method has been used in image restoration [15], where the authors proposed a fast TV minimization method for image restoration.
For problem (3), if u is fixed, the minimization over x is a smooth quadratic optimization problem; if x is fixed, the minimization over u has a closed-form solution that can be computed explicitly by the shrinkage (soft-thresholding) operator.
The rest of this paper is organized as follows. In Section 2, an alternating minimization algorithm is developed to solve the proposed ℓ1-norm minimization problem (3), and its global convergence is established under suitable conditions. In Section 3, numerical examples are presented to demonstrate the effectiveness of the proposed model. Finally, conclusions are drawn in the last section.

Optimization algorithm
In this section, we present a variable splitting method (VSM) to solve problem (3). Starting from an initial guess u^0, this method yields a sequence {(x^k, u^k)} by alternating minimization:

x^{k+1} = argmin_x J(x, u^k),    (4a)
u^{k+1} = argmin_u J(x^{k+1}, u).    (4b)

Denoting the solution operators of the two subproblems by S_A and S_ℓ1, respectively, we can then express the iteration as the fixed-point scheme

u^{k+1} = S_ℓ1(S_A(u^k)) =: T(u^k).    (5)

Minimization with respect to x
The subproblem (4a) is a quadratic convex optimization problem, which is equivalent to solving the linear system

(A^T A + (λ1/λ2) I) x = A^T b + (λ1/λ2) u^k,

where A^T A + (λ1/λ2) I is a symmetric positive definite matrix. In this paper, the conjugate gradient method is utilized to solve (4a). Given the current iterate x_j, the residual r_j = A^T b + (λ1/λ2) u^k − (A^T A + (λ1/λ2) I) x_j, and the search direction d_j, the iterative scheme is

x_{j+1} = x_j + α_j d_j,  d_{j+1} = r_{j+1} + β_j d_j,  β_j = (r_{j+1}^T r_{j+1}) / (r_j^T r_j).

Remark: the optimal step size can be determined by α_j = (r_j^T r_j) / (d_j^T (A^T A + (λ1/λ2) I) d_j).
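A minimal sketch of this conjugate gradient solve in NumPy follows; the function name `solve_x_subproblem` and the stopping tolerance are illustrative assumptions, and the matrix A^T A is never formed explicitly.

```python
import numpy as np

def solve_x_subproblem(A, b, u, lam1, lam2, tol=1e-10, max_iter=500):
    """Conjugate gradient for subproblem (4a):
    (A^T A + (lam1/lam2) I) x = A^T b + (lam1/lam2) u."""
    mu = lam1 / lam2
    matvec = lambda v: A.T @ (A @ v) + mu * v   # apply the SPD matrix
    c = A.T @ b + mu * u
    x = np.zeros_like(c)
    r = c - matvec(x)                           # initial residual
    d = r.copy()                                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Md = matvec(d)
        alpha = rs / (d @ Md)                   # optimal step size along d
        x += alpha * d
        r -= alpha * Md
        rs_new = r @ r
        d = r + (rs_new / rs) * d               # conjugate search direction
        rs = rs_new
    return x
```

Since the system matrix has eigenvalues bounded below by λ1/λ2, the CG iteration is well defined for any λ1, λ2 > 0.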

Minimization with respect to u
Since u is separable in the subproblem (4b), each of its components can be obtained independently by the shrinkage operator in [9], i.e.,

u_i^{k+1} = sign(x_i^{k+1}) · max{ |x_i^{k+1}| − 1/λ1, 0 },  i = 1, …, n,

where u_i^{k+1} is the ith component of u^{k+1}.
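The componentwise shrinkage (soft-thresholding) step can be sketched as follows; the threshold value 1/λ1 corresponds to the penalty form of (3) assumed above, and the sample input is illustrative.

```python
import numpy as np

def shrink(x, tau):
    """Componentwise soft-thresholding: sign(x_i) * max(|x_i| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Components with |x_i| <= tau are set to zero; the rest move toward 0 by tau.
print(shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), tau=1.0))
```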

Convergence analysis
This section is devoted to analyzing the convergence of the proposed algorithm for (3). First, let us give some definitions.

Definition 1
An operator P is called nonexpansive if ∥P(u) − P(v)∥_2 ≤ ∥u − v∥_2 for any u, v. If there exist a nonexpansive operator A and κ ∈ (0, 1) such that P = (1 − κ)I + κA, then P is called κ-averaged nonexpansive.
In the following, we recall Lemma 2.4 of [6].

Lemma 1
Let φ be convex and lower semicontinuous and β > 0. Suppose u is defined by

u = argmin_w { φ(w) + (β/2)∥w − x∥_2^2 },

and define the operator S by u = S(x) for each x. Then S is 1/2-averaged nonexpansive. It is known that ∥u∥_1 is a convex and lower semicontinuous function; by Lemma 1 (with φ = ∥·∥_1 and β = λ1), we can derive that S_ℓ1 is 1/2-averaged nonexpansive.
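Being 1/2-averaged is equivalent to firm nonexpansiveness, ∥S(x) − S(y)∥_2^2 ≤ ⟨x − y, S(x) − S(y)⟩. This property of the shrinkage operator can be checked numerically; the sketch below uses an arbitrary threshold 0.5 and random test vectors, both illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
shrink = lambda x, tau: np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Firm nonexpansiveness (equivalent to being 1/2-averaged):
# ||S(x) - S(y)||^2 <= <x - y, S(x) - S(y)> for all x, y.
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    dS = shrink(x, 0.5) - shrink(y, 0.5)
    ok &= (dS @ dS) <= (x - y) @ dS + 1e-12
print(bool(ok))
```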

Lemma 2
The operator T defined in (5) is nonexpansive.

Proof: Since A^T A + (λ1/λ2) I is symmetric positive definite and its eigenvalues are not smaller than λ1/λ2, for any u, v we have

∥S_A(u) − S_A(v)∥_2 = (λ1/λ2) ∥(A^T A + (λ1/λ2) I)^{-1}(u − v)∥_2 ≤ ∥u − v∥_2.

Therefore S_A is nonexpansive, and since S_ℓ1 is also nonexpansive, the composition T = S_ℓ1 ∘ S_A is nonexpansive. The result follows. □
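The nonexpansiveness of T can be observed numerically on a small random instance. In this sketch the problem sizes and parameters λ1, λ2 are arbitrary illustrative choices, and the u-update uses the threshold 1/λ1 from the penalty form of (3) assumed above.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, lam1, lam2 = 10, 30, 0.5, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
mu = lam1 / lam2
M = A.T @ A + mu * np.eye(n)                 # symmetric positive definite

def T(u):
    x = np.linalg.solve(M, A.T @ b + mu * u)                       # S_A(u)
    return np.sign(x) * np.maximum(np.abs(x) - 1.0 / lam1, 0.0)    # S_l1(x)

# ||T(u) - T(v)|| <= ||u - v|| on randomly sampled pairs.
for _ in range(200):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    assert np.linalg.norm(T(u) - T(v)) <= np.linalg.norm(u - v) + 1e-10
print("nonexpansive on all sampled pairs")
```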

Definition 2
Let C be a closed convex set in a Banach space X. A mapping F : C → C is asymptotically regular if, for any x ∈ C, lim_{k→∞} ∥F^{k+1}(x) − F^k(x)∥ = 0.

Lemma 3
Let {u^k} be generated by (5); then Σ_{k=1}^∞ ∥u^{k+1} − u^k∥_2^2 < ∞ and lim_{k→∞} ∥u^{k+1} − u^k∥_2 = 0.

Proof: Consider the Taylor expansion of J(x, u^k) in the first variable about x^{k+1}. Since x^{k+1} minimizes the quadratic J(·, u^k), the gradient ∇_x J(x^{k+1}, u^k) vanishes and

J(x^k, u^k) − J(x^{k+1}, u^k) = (1/2)(x^k − x^{k+1})^T (λ2 A^T A + λ1 I)(x^k − x^{k+1}).

Since λ2 A^T A + λ1 I is symmetric positive definite and its eigenvalues are not smaller than λ1, we obtain

J(x^k, u^k) − J(x^{k+1}, u^k) ≥ (λ1/2)∥x^k − x^{k+1}∥_2^2.

Since u^{k+1} = argmin_u J(x^{k+1}, u), we have J(x^{k+1}, u^{k+1}) ≤ J(x^{k+1}, u^k); therefore

J(x^k, u^k) − J(x^{k+1}, u^{k+1}) ≥ (λ1/2)∥x^k − x^{k+1}∥_2^2.

Since S_ℓ1 is nonexpansive by Lemma 1 and u^{k+1} = S_ℓ1(x^{k+1}), u^k = S_ℓ1(x^k) for k ≥ 1, we have ∥u^{k+1} − u^k∥_2 ≤ ∥x^{k+1} − x^k∥_2. Summing over k and using the fact that J is bounded below by zero, we obtain

Σ_{k=1}^∞ ∥u^{k+1} − u^k∥_2^2 ≤ (2/λ1) J(x^1, u^1) < ∞,

and it follows that lim_{k→∞} ∥u^{k+1} − u^k∥_2 = 0. □

Taking Lemma 3 into account, we can prove that the operator T is asymptotically regular.
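The descent property underlying Lemma 3 can be checked numerically: along the alternating iteration, J is nonincreasing and the step lengths ∥u^{k+1} − u^k∥ shrink. The sizes, parameters, and the threshold 1/λ1 in this sketch are illustrative assumptions consistent with the penalty form of (3) used above.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, lam1, lam2 = 10, 30, 0.5, 1.0
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
mu = lam1 / lam2
M = A.T @ A + mu * np.eye(n)

# Objective J(x, u) of problem (3).
J = lambda x, u: (np.abs(u).sum() + 0.5 * lam1 * ((x - u) @ (x - u))
                  + 0.5 * lam2 * np.linalg.norm(A @ x - b) ** 2)

u = rng.standard_normal(n)
vals, steps = [], []
for _ in range(100):
    x = np.linalg.solve(M, A.T @ b + mu * u)                       # (4a)
    u_new = np.sign(x) * np.maximum(np.abs(x) - 1.0 / lam1, 0.0)   # (4b)
    vals.append(J(x, u_new))
    steps.append(np.linalg.norm(u_new - u))
    u = u_new

assert all(v2 <= v1 + 1e-10 for v1, v2 in zip(vals, vals[1:]))  # J nonincreasing
assert steps[-1] < steps[0]                                     # steps shrink
print(f"final step length: {steps[-1]:.2e}")
```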

Lemma 4
For any initial guess u^0, suppose {u^k} is generated by (5); then T is asymptotically regular, i.e., lim_{k→∞} ∥T^{k+1}(u^0) − T^k(u^0)∥_2 = lim_{k→∞} ∥u^{k+1} − u^k∥_2 = 0. To prove that the set of minimizers of J(x, u) is nonempty, we shall show the coerciveness of J.

Lemma 5
The function J(x, u) defined in (3) is coercive.

Proof: Note that J(x, u) = ∥u∥_1 + (λ1/2)∥x − u∥_2^2 + (λ2/2)∥Ax − b∥_2^2 ≥ 0. Suppose ∥(x, u)∥_2 → ∞. If ∥u∥_2 → ∞, then ∥u∥_1 → ∞ and hence J(x, u) → ∞. Otherwise ∥u∥_2 remains bounded while ∥x∥_2 → ∞, so ∥x − u∥_2 → ∞ and again J(x, u) → ∞. Therefore J is coercive. □

Lemma 6
The set of fixed points of T is nonempty.
Proof: Since the objective function J is coercive and convex, the set of minimizers of J is nonempty. Assume that (x*, u*) is a minimizer of J(x, u). The first-order optimality conditions read

∂J/∂x (x*, u*) = 0  and  0 ∈ ∂_u J(x*, u*),

which give x* = S_A(u*) and u* = S_ℓ1(x*). Hence u* = S_ℓ1(S_A(u*)) = T(u*), and u* is a fixed point of T. The result follows. □

We remark that the objective function J in (3) is strictly convex when A has full column rank; in that case, a fixed point of T is also the global minimizer of J. According to Opial's theorem [19], the sequence {u^k} converges to a fixed point of T, i.e., a minimizer of J.

Theorem 1
For any initial guess u^0 ∈ R^n, the sequence {u^k} generated by (5) converges to a stationary point of J.

Numerical experiments
In this section, we test the performance of the variable splitting method for the ℓ 1 -norm regularized minimization problem with application to computed tomography (CT) image reconstruction and compressed sensing.
All the programs were coded in MATLAB R2007a and run on a personal computer with an Intel Core 2 Duo CPU at 2.5 GHz and 2GB of memory.

Compressed sensing problem
In this subsection, the VSM algorithm is applied to the compressed sensing problem and compared with the FISTA method. In the numerical tests, the original sparse signal xs and the matrix A are randomly generated, and the observation b is generated according to b = A*xs + sigma*randn(m,1), where sigma is the standard deviation of the additive Gaussian noise. Let k denote the number of nonzero components in xs. The stopping criterion is based on the relative change of the iterates {x^k} generated by the test algorithms. We measure the quality of a recovered signal by its mean square error (MSE), MSE = (1/n)∥x − x*∥_2^2, where x is the signal restored by a given algorithm, x* is the original signal, and n is the length of the signal. The values of the objective function (log10(fun)) and the MSE are given in Table 1 and Table 2, respectively. Figure 1 and Figure 3 show the signals restored by the FISTA and VSM algorithms, respectively. Figure 2 and Figure 4 show the objective function value and the MSE versus the iteration number, respectively. The results indicate that the VSM algorithm outperforms the FISTA algorithm.
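To make the experimental protocol concrete, the following Python sketch mirrors the setup above (random A, sparse xs, noisy observation b) and runs the alternating iteration (4a)-(4b). The problem sizes, the penalty parameters lam1, lam2, the iteration count, and the threshold 1/λ1 are illustrative assumptions, not the values used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k, sigma = 120, 512, 10, 0.01
A = rng.standard_normal((m, n)) / np.sqrt(m)       # unit-norm columns (approx.)
xs = np.zeros(n)
xs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
b = A @ xs + sigma * rng.standard_normal(m)        # noisy observation

lam1, lam2 = 20.0, 100.0                           # illustrative parameters
mu = lam1 / lam2
M = A.T @ A + mu * np.eye(n)

u = np.zeros(n)
for _ in range(300):
    x = np.linalg.solve(M, A.T @ b + mu * u)                       # (4a)
    u = np.sign(x) * np.maximum(np.abs(x) - 1.0 / lam1, 0.0)       # (4b)

mse = np.mean((u - xs) ** 2)
print(f"MSE = {mse:.2e}")
```

For a large problem one would replace the direct solve by the conjugate gradient method of Section 2; the direct solve is used here only to keep the sketch short.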

Sparse-view X-ray CT image reconstruction
To validate and evaluate the performance of the VSM method, a modified digital Shepp-Logan phantom (Figure 5) was used in the experiments. In this subsection, total variation regularization is used in (3) for sparse-view X-ray CT image reconstruction, and subproblem (4b) is solved by Chambolle's algorithm [5]. For the CT projection simulation, we chose a geometry representative of a monoenergetic fan-beam CT scanner. The geometry was modeled with 672 bins on a 1-D detector for 2-D image reconstruction, and projection data with 25 views were simulated at equal angular increments over 360° around the phantom. The imaging parameters of the CT scanner were as follows: (1) the distance from the detector array to the X-ray source was 1040 mm; (2) the distance from the rotation center to the X-ray source was 570 mm; (3) the spacing of the detector bins. The results from the 25-view projection data in the noise-free and noisy cases are shown in Figure 6. Serious noise-induced streak artifacts can be observed in the images reconstructed by the FBP method with the ramp filter, whereas the VSM algorithm yields accurate reconstructions from sparse-view projection data.

Conclusion
In this paper, we presented a splitting-based iterative method for sparse solution recovery in compressed sensing and X-ray CT image reconstruction.The numerical experiment results show that the proposed method is effective and promising.

Acknowledgement
This research was supported by the National Natural Science Foundation of China (No. 61262026), the Natural Science Foundation of Jiangxi Province (20132BAB201026), the Science and Technology Program of the Jiangxi Education Committee (LDJH12088), and the Graduate Student Innovation Project of Jiangxi Province (YC2014-S411).

Figure 6. Images reconstructed by the FBP and VSM algorithms from 25-view projections.