
Kaiqun Hu* and Xin Feng*

An Improvement Algorithm for the Image Compression Imaging

Abstract: Lines and textures are natural properties of the surfaces of natural objects, and images of such surfaces can be sparsely represented in suitable frames such as wavelets, curvelets and wave atoms. Based on the observation that the curvelet frame is good at expressing line features while wave atoms are good at representing texture features, we propose a model with weighted sparsity constraints over the two frames. Further, a multi-step fast iterative algorithm for solving the model is proposed based on the split Bregman method. By introducing auxiliary variables and the Bregman distance, the original problem is transformed into the iterative solution of two simple sub-problems, which greatly reduces the computational complexity. Experiments on standard images show that the split Bregman iterative algorithm in the hybrid domain outperforms the traditional wavelet and curvelet frameworks in both timeliness and recovery accuracy, which demonstrates the validity of the model and algorithm presented in this paper.

Keywords: Curvelets, Split Bregman Iteration, Sparse Representation, Wave Atoms

1. Introduction

Image compressive sensing (CS) is a major breakthrough in the image processing field in recent years. It is an imaging theory based on sparse representation and optimization theory proposed by Donoho [1]. One of the main applications of compressive sensing is compressive imaging [2]. Its core idea is to incorporate the sparse prior of the image into the imaging process and to reconstruct the image by proper optimization from random projections taken far below the Nyquist sampling rate. Thus, it can decrease the cost and complexity of the image acquisition system and reduce image storage space and transmission costs. Sparse representation of the image is a key issue in compressive imaging, and only an appropriate representation can guarantee the quality of image reconstruction. The sparser the representation, the fewer random projections need to be measured and the faster the reconstruction.

The wavelet transform is not an optimal basis for images; it shows clear limitations when dealing with two-dimensional structures. In recent years, a series of multi-scale geometric analysis (MGA) methods have been proposed [3]. At present, there are two major categories of MGA: MGA that adapts to the image content, and MGA that is independent of the image content. For instance, wedgelets, directionlets, bandlets and grouplets [4-9] all belong to adaptive MGA; their basis functions change with the content of the image and are suitable for image compression. The non-adaptive methods mainly include ridgelets, curvelets, contourlets, shearlets, and so on.

The most important step in CS is the nonlinear recovery of the image from its encoded measurements. Among the available algorithms, iterative shrinkage algorithms are commonly used. Their main advantages are that (1) they are robust and easy to implement, and (2) current MGA transforms can easily be incorporated into the iterative shrinkage-thresholding (IST) framework. Therefore, iterative shrinkage algorithms are the main solutions to this nonlinear inverse problem.

In this paper, we combine the curvelet frame and the wave atoms frame to establish a weighted sparsity constraint model over the two frames. Then, based on the split Bregman iteration method [10], we propose a multi-step fast iterative algorithm to solve the model. By introducing intermediate variables and the Bregman distance, the original problem is transformed into the iterative solution of two simple sub-problems, which greatly reduces the computational complexity. The main algorithm steps are as follows:

1) First, input the image A;

2) Establish a sparse model based on the mixture of curvelets and wave atoms;

3) Transform the image into a mixed sparse domain and obtain a sparse representation coefficient;

4) Observe the sparse representation coefficients with a pseudo-random Fourier matrix to obtain the measurements;

5) Recover the image from the observed coefficients by split Bregman iteration;

6) Output the reconstructed image.

The block diagram of the main algorithm is shown in Fig. 1.

Fig. 1. The block diagram of the main algorithm.
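
For orientation, a minimal NumPy sketch of steps 1)-6) is given below. It is our own illustration, not the authors' implementation (the experiments in this paper were carried out in MATLAB): `split_bregman_recover` is a hypothetical placeholder for the recovery method developed in Section 3, and the pseudo-random Fourier measurement is represented by a boolean sampling mask.

```python
import numpy as np

def cs_imaging_pipeline(image, mask, split_bregman_recover, n_iter=20):
    """Sketch of steps 1)-6) under the measurement model f = Phi*u of Eq. (6).

    `mask` marks the sampled Fourier coefficients (the pseudo-random Fourier
    measurement); `split_bregman_recover` is a placeholder for the hybrid-domain
    recovery routine of Section 3.
    """
    # Steps 2)-4): measure the image with a partial (pseudo-random) Fourier
    # operator; Eq. (7) views the same measurement in the coefficient domain.
    measurements = np.where(mask, np.fft.fft2(image, norm="ortho"), 0)

    # Step 5): recover the image by split Bregman iteration in the mixed
    # curvelet / wave atom domain.
    recovered = split_bregman_recover(measurements, mask, n_iter=n_iter)

    # Step 6): return the reconstructed image.
    return recovered
```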

2. Curvelets and Wave Atoms Transform

2.1 Curvelets Transform

Compared with the wavelet transform, the curvelet transform is mainly determined by three parameters: the scale [TeX:] $$2^{-j}, \ j \in N_{0};$$ the equally spaced rotation angle [TeX:] $$\theta_{j, l}=2 \pi l \cdot 2^{-[j / 2]}, \ 0 \leq l \leq 2^{[j / 2]}-1;$$ and the position [TeX:] $$x_{k}^{(j, l)}=R_{\theta_{j, l}}^{-1}\left(k_{1} 2^{-j}, k_{2} 2^{-[j / 2]}\right)^{T},\left(k_{1}, k_{2}\right) \in Z^{2},$$ where [TeX:] $$R_{\theta_{j, l}}$$ is the rotation matrix of angle [TeX:] $$\theta_{j, l}.$$ Thus, the curvelet function is defined as:

(1)
[TeX:] $$\Psi_{j, l, k}(x)=\Psi_{j}\left(R_{\theta_{j, l}}\left(x-x_{k}^{(j, l)}\right)\right), \quad x=\left(x_{1}, x_{2}\right) \in R^{2}$$

Here, [TeX:] $$\Psi_{j}$$ is a smooth function with wedge-shaped compact support in the Fourier domain. A smooth objective function [TeX:] $$f \in L_{2}\left(R^{2}\right)$$ is represented as:

(2)
[TeX:] $$f=\sum_{j, l, k}\left\langle f, \Psi_{j, l, k}\right\rangle \Psi_{j, l, k}$$

where [TeX:] $$\left\langle f, \Psi_{j, l, k}\right\rangle$$ denotes the [TeX:] $$L_{2}$$ inner product and is called the curvelet coefficient of the function f. The spatial-domain form of a curvelet is shown in Fig. 2(a).

Fig. 2. (a) Curvelets and (b) wave atoms in the space domain.
2.2 Wave Atoms Transform

Demanet and Ying [11] proposed the wave atom transform in 2007, which can be regarded as a variant of the two-dimensional wavelet packet transform. Its oscillation period and support length obey a parabolic scaling relation. Compared with wavelets, Gabor atoms and curvelets, wave atoms provide an optimal sparse representation of oscillatory functions (which can be considered a simple texture model); their spatial-domain form is shown in Fig. 2(b).

Following the literature [11], consider first the one-dimensional wavelet packet functions [TeX:] $$\psi_{m, n}^{j}(x)$$, [TeX:] $$j \geq 0, m \geq 0, n \in N.$$ The center frequency of such a function is located at [TeX:] $$\pm \omega_{j, m}=\pm \pi 2^{j} m,$$ with [TeX:] $$c_{1} 2^{j} \leq m \leq c_{2} 2^{j}$$ for constants [TeX:] $$c_{1}, c_{2}>0.$$ The spatial center of [TeX:] $$\psi_{m, n}^{j}(x) \text { is } x_{j, n}=2^{-j} n.$$

Introducing the scale index, the basis functions can be defined as:

(3)
[TeX:] $$\psi_{m, n}^{j}(x)=\psi_{m}^{j}\left(x-2^{-j} n\right)=2^{j / 2} \psi_{m}^{0}\left(2^{j} x-n\right)$$

The resulting basis functions [TeX:] $$\left\{\psi_{m, n}^{j}(x)\right\}$$ differ essentially from wavelet packets in that they have uniformly bounded localization properties in both the spatial and frequency domains, which plays a major role in the nature and structure of wave atoms.

The two-dimensional wave packet functions are expressed as tensor products:

(4)
[TeX:] $$\varphi_{\mu}^{+}\left(x_{1}, x_{2}\right)=\psi_{m_{1}}^{j}\left(x_{1}-2^{-j} n_{1}\right) \psi_{m_{2}}^{j}\left(x_{2}-2^{-j} n_{2}\right)$$

(5)
[TeX:] $$\varphi_{\mu}^{-}\left(x_{1}, x_{2}\right)=H \psi_{m_{1}}^{j}\left(x_{1}-2^{-j} n_{1}\right) H \psi_{m_{2}}^{j}\left(x_{2}-2^{-j} n_{2}\right)$$

where H is the Hilbert transform, [TeX:] $$\mu=(j, m, n)=\left(j,\left(m_{1}, m_{2}\right),\left(n_{1}, n_{2}\right)\right)$$ corresponds to a point [TeX:] $$\left(x_{\mu}, \omega_{\mu}\right)$$ in phase space, with [TeX:] $$x_{\mu}=2^{-j} n, \ \omega_{\mu}=\pi 2^{j} m.$$

[TeX:] $$\varphi_{\mu}^{+}\left(x_{1}, x_{2}\right) \text { and } \varphi_{\mu}^{-}\left(x_{1}, x_{2}\right)$$ form a pair of orthonormal wave atom bases. Their combinations [TeX:] $$\varphi_{\mu}^{(1)}=\left(\varphi_{\mu}^{+}+\varphi_{\mu}^{-}\right) / 2 \text { and } \varphi_{\mu}^{(2)}=\left(\varphi_{\mu}^{+}-\varphi_{\mu}^{-}\right) / 2$$ constitute a wave atom tight frame with a redundancy of two.

3. The Algorithm of This Paper

3.1 The Establishment of Sparse Framework in Hybrid Domain
3.1.1 The establishment and expansion of the basic framework

Compressed sensing reconstructs a signal from incomplete measurements of the form:

(6)
[TeX:] $$f=\Phi u+\varepsilon$$

Here, [TeX:] $$\Phi \in R^{k \times N}(k<N)$$ is the CS measurement matrix. Since [TeX:] $$\Phi$$ has far fewer rows than columns, the problem is seriously ill-posed: recovering the signal [TeX:] $$u \in R^{N}$$ from the observation [TeX:] $$f \in R^{k}$$ is a linear underdetermined problem.

Assume that the signal has a sparse representation in a deterministic framework; that is, there is a transform matrix [TeX:] $$\Psi \in R^{M \times N}$$ (M = N for a basis, M > N for a frame) such that [TeX:] $$\Psi u$$ contains only a small number of significant entries. In addition, if the measurement matrix [TeX:] $$\Phi$$ is incoherent with the transform matrix [TeX:] $$\Psi$$ (it is generally assumed that [TeX:] $$\Phi$$ satisfies the restricted isometry property, RIP), the signal can be reconstructed with high precision from the incomplete measurements.

Commonly used measurement matrices [TeX:] $$\Phi$$ include sparse binary random matrices, randomly undersampled Fourier matrices, Toeplitz matrices and pseudo-orthogonal measurement matrices. Pseudo-orthogonal measurement matrices, for which [TeX:] $$\Phi \Phi^{T}=I_{k}$$ with [TeX:] $$I_{k}$$ the identity matrix of size k, have been shown to yield faster convergence in nonlinear decoding schemes.
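
As a concrete illustration (ours, not from the paper), selecting k rows of a unitary DFT yields one such pseudo-orthogonal measurement: the resulting [TeX:] $$\Phi$$ satisfies [TeX:] $$\Phi \Phi^{T}=I_{k}$$ (with the transpose understood as the conjugate transpose). A minimal NumPy sketch:

```python
import numpy as np

def partial_fourier_measure(u, sample_idx):
    """Apply Phi: keep k rows of the orthonormal DFT of the signal u."""
    U = np.fft.fft(u, norm="ortho")       # unitary DFT, so its rows are orthonormal
    return U[sample_idx]                  # f = Phi u (complex measurements)

def partial_fourier_adjoint(f, sample_idx, N):
    """Apply Phi^T (here the conjugate transpose): zero-fill and inverse DFT."""
    U = np.zeros(N, dtype=complex)
    U[sample_idx] = f
    return np.fft.ifft(U, norm="ortho")

# Phi Phi^T = I_k: measuring the adjoint of any f returns f itself.
rng = np.random.default_rng(0)
N, k = 256, 77
idx = rng.choice(N, size=k, replace=False)
f = rng.standard_normal(k) + 1j * rng.standard_normal(k)
np.testing.assert_allclose(
    partial_fourier_measure(partial_fourier_adjoint(f, idx, N), idx), f, atol=1e-12)
```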

Applying the transform to the signal casts the CS problem into the coefficient domain:

(7)
[TeX:] $$f=\Phi \Psi^{-1} \Psi u+\varepsilon=\tilde{\Phi} \vartheta+\varepsilon$$

where [TeX:] $$\vartheta=\Psi u, \ \tilde{\Phi}=\Phi \Psi^{-1}, \ \Psi$$ denotes a forward transform (such as curvelets, contourlets or wave atoms), and [TeX:] $$\Psi^{-1}$$ denotes its inverse. The recovery problem can then be written in the constrained form:

(8)
[TeX:] $$B P_{\zeta}: \arg \min _{u}\left\{\|\boldsymbol{\Psi} u\|_{1}:\|f-\Phi u\| \leq \zeta\right\}$$

The positive parameter [TeX:] $$\zeta$$ is the estimated noise level. The corresponding unconstrained problem can be expressed as:

(9)
[TeX:] $$Q P_{\lambda}: \arg \min _{u}\left\{\frac{1}{2}\|f-\Phi u\|_{2}^{2}+\lambda\|\Psi u\|_{1}\right\}$$

By quadratic programming arguments, this is also related to the constrained least-squares (LASSO) form:

(10)
[TeX:] $$L S_{\xi}: \arg \min \left\{\frac{1}{2}\|f-\Phi u\|_{2}^{2}:\|\Psi u\|_{1} < \xi\right\}$$

The three problems [TeX:] $$B P_{\zeta}, \ Q P_{\lambda}, \ L S_{\xi}$$ are closely related when [TeX:] $$\zeta, \lambda, \xi$$ are chosen appropriately. Known as decomposition models [12], they determine the u whose frame coefficients have the smallest [TeX:] $$l_{1}$$ norm.

The above problems can also be posed in the coefficient domain: replace u by the frame coefficients [TeX:] $$\vartheta$$ and [TeX:] $$\Phi$$ by [TeX:] $$\Phi \Psi^{-1}.$$ Then:

(11)
[TeX:] $$Q P_{\lambda}: \underset{\vartheta}{\arg \min }\left\{\frac{1}{2}\left\|f-\Phi \Psi^{-1} \vartheta\right\|_{2}^{2}+\lambda\|\vartheta\|_{1}\right\}$$

This is a synthesis model [12]; it solves directly for the coefficients with minimum [TeX:] $$l_{1}$$ norm.

In general, the choice of model is determined by the application. If [TeX:] $$\Psi$$ is an orthogonal transform, the two models are equivalent; this paper focuses on the decomposition-based model.

3.1.2 The establishment of a hybrid framework model of compressed sensing

An image is usually composed of different components, such as smooth structure and texture. Therefore, local parts of the image can be represented with a high degree of sparsity by a suitable transform (such as a wavelet transform). In recent years, continued research on multi-scale analysis has produced many effective methods for representing two-dimensional data. Among them, curvelets, contourlets and wave atoms are non-adaptive function frames with strong anisotropy and direction selectivity. They behave differently on smooth regions and textures, so prior knowledge of the signal u is required to select a transform frame. In this paper, two such frames are combined into a hybrid framework model by means of weighted sparsity constraints.

First, we assume that there are two different matrices [TeX:] $$\Psi_{c} \text { and } \Psi_{\omega}$$ that form orthonormal bases or Parseval frames: [TeX:] $$\Psi_{c} \in R^{N_{1} \times N}, \ \Psi_{\omega} \in R^{N_{2} \times N}, \ \text { with } \Psi_{c}^{T} \Psi_{c}=\Psi_{\omega}^{T} \Psi_{\omega}=I_{N}, \ N_{1} \geq N, N_{2} \geq N,$$ where [TeX:] $$I_{N}$$ is the identity matrix of size N and [TeX:] $$\Psi^{T}$$ is the transpose of [TeX:] $$\Psi.$$

There are two main models in the coefficient domain: a denoising model and a recombination model. When the measurement contains apparent noise (denoising model), the reconstruction is posed as the generalized optimization problem:

(12)
[TeX:] $$\arg \min _{u}\left\{J(u)+\frac{1}{2}\|f-\Phi u\|_{2}^{2}\right\}$$

(13)
[TeX:] $$J(u)=\left\|\wedge_{c} \Psi_{c} u\right\|_{1}+\left\|\wedge_{\omega} \Psi_{\omega} u\right\|_{1}$$

[TeX:] $$\wedge_{c} \text { and } \wedge_{\omega}$$ are diagonal weighting matrices. While for the wavelet transform the weighted [TeX:] $$l_{1}$$ norm of the scale-wise wavelet coefficients is equivalent to a Besov norm of the signal u, there is no exact equivalent description for the other frameworks (curvelets or wave atoms).
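
As a small illustration of the weighted penalty in formula (13), the sketch below evaluates J(u) for two generic analysis operators; `curvelet_analysis` and `wave_atom_analysis` are hypothetical placeholders for [TeX:] $$\Psi_{c} \text { and } \Psi_{\omega},$$ and the diagonal weights are passed as vectors (our own illustration, not code from the paper):

```python
import numpy as np

def hybrid_penalty(u, curvelet_analysis, wave_atom_analysis, lambda_c, lambda_w):
    """J(u) = ||Lambda_c Psi_c u||_1 + ||Lambda_w Psi_w u||_1 (formula (13)).

    `lambda_c` and `lambda_w` hold the diagonals of the weighting matrices.
    """
    theta_c = curvelet_analysis(u)       # Psi_c u
    theta_w = wave_atom_analysis(u)      # Psi_w u
    return (np.sum(np.abs(lambda_c * theta_c)) +
            np.sum(np.abs(lambda_w * theta_w)))
```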

The corresponding problem in the coefficient domain, equivalent to formula (12), is:

(14)
[TeX:] $$\underset{u, \vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{\left\|\wedge_{c} \vartheta_{c}\right\|_{1}+\left\|\wedge_{\omega} \vartheta_{\omega}\right\|_{1}+\frac{1}{2}\|f-\Phi u\|_{2}^{2}\right\}$$

where [TeX:] $$\vartheta_{c}=\Psi_{c} u, \ \vartheta_{\omega}=\Psi_{\omega} u.$$

Assuming that the noise in the measurement can be ignored (recombination model), the optimization problem becomes:

(15)
[TeX:] $$\arg \min _{u}\{J(u)\} \quad \text { s.t. } \quad \Phi u=f$$

where [TeX:] $$J(u)$$ is as above. The corresponding problem in the coefficient domain is:

(16)
[TeX:] $$\underset{\vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{\left\|\wedge_{c} \vartheta_{c}\right\|_{1}+\left\|\wedge_{\omega} \vartheta_{\omega}\right\|_{1}\right\}$$

where [TeX:] $$u \in R^{N}, f=\Phi u, \vartheta_{c}=\Psi_{c} u, \vartheta_{\omega}=\Psi_{\omega} u.$$

In summary, the two model problems of the hybrid framework are given by formulas (14) and (16), corresponding to the denoising model and the recombination model, respectively. Since signal acquisition inevitably suffers from noise interference, the later experiments in this paper focus on the denoising model.

3.2 Interactive Split Bregman Method to Restore the Image
3.2.1 Split Bregman iteration method

The Bregman iterative method mainly aims to solve the following constrained optimization problem:

(17)
[TeX:] $$\min _{u} J(u) \quad \text { s.t. } \quad Q(u, f)=0$$

where [TeX:] $$J(u)$$ and [TeX:] $$Q(u, f)$$ are convex functions and [TeX:] $$Q(u, f)$$ is differentiable. The Bregman distance is defined as:

(18)
[TeX:] $$D_{J}^{p}(u, v)=J(u)-J(v)-\langle p, u-v\rangle$$

Here, [TeX:] $$p \in \partial J(v)$$ is a subgradient of J at v. Since [TeX:] $$J(\cdot)$$ is convex, [TeX:] $$D_{J}^{p}(u, v) \geq 0,$$ and [TeX:] $$\langle\cdot, \cdot\rangle$$ denotes the inner product of the Hilbert space.
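
For intuition, consider a simple worked example (ours, added for illustration): taking [TeX:] $$J(v)=\frac{1}{2}\|v\|_{2}^{2},$$ the subdifferential is [TeX:] $$\partial J(v)=\{v\},$$ so with p = v the Bregman distance becomes [TeX:] $$D_{J}^{p}(u, v)=\frac{1}{2}\|u\|_{2}^{2}-\frac{1}{2}\|v\|_{2}^{2}-\langle v, u-v\rangle=\frac{1}{2}\|u-v\|_{2}^{2} \geq 0,$$ i.e., the squared Euclidean distance.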

The literature [13] first proposed an interactive split Bregman iteration algorithm. The goal of the split Bregman method is to extend the range of Bregman iteration and linearized Bregman iteration [12] so that a broader class of problems can be solved effectively. The basic idea is to introduce auxiliary variables to replace the difficult parts of the original functional, such as inseparable, non-smooth or nonlinear terms. This simplifies the solution of the problem; equality constraints between the auxiliary variables and the replaced parts ensure that the new problem is equivalent to the original one, so that it can be solved by Bregman iteration.

3.2.2 Split Bregman iteration restoration algorithm for the denoising model

In this paper, the split Bregman iteration algorithm is used to solve formula (14) from the previous section. To solve the constrained optimization problem (14), it must first be transformed into an unconstrained optimization problem, so the Bregman distance is introduced. Formula (14) can be relaxed into a sequence of unconstrained problems of the form:

(19)
[TeX:] $$\underset{u, \vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{E\left(u, \vartheta_{c}, \vartheta_{\omega}\right)+\frac{\mu_{k}}{2}\left(\left\|\Psi_{c} u-\vartheta_{c}\right\|_{2}^{2}+\left\|\Psi_{\omega} u-\vartheta_{\omega}\right\|_{2}^{2}\right)\right\}$$

where,

(20)
[TeX:] $$E\left(u, \vartheta_{c}, \vartheta_{\omega}\right):=\left\|\wedge_{c} \vartheta_{c}\right\|_{1}+\left\|\wedge_{\omega} \vartheta_{\omega}\right\|_{1}+\frac{1}{2}\|f-\Phi u\|_{2}^{2}$$

and [TeX:] $$\mu_{1}<\mu_{2}<\dots$$ is an increasing sequence. Instead of this continuation approach, this paper adopts the method of Goldstein and Osher [10], which applies the Bregman distance to the unconstrained problem, giving:

(21)
[TeX:] $$D_{E}^{p}\left(\left(u, \vartheta_{c}, \vartheta_{\omega}\right),\left(u^{k}, \vartheta_{c}^{k}, \vartheta_{\omega}^{k}\right)\right):=E\left(u, \vartheta_{c}, \vartheta_{\omega}\right)-E\left(u^{k}, \vartheta_{c}^{k}, \vartheta_{\omega}^{k}\right)-\left\langle p_{u}^{k}, u-u^{k}\right\rangle-\left\langle p_{c}^{k}, \vartheta_{c}-\vartheta_{c}^{k}\right\rangle-\left\langle p_{\omega}^{k}, \vartheta_{\omega}-\vartheta_{\omega}^{k}\right\rangle$$

where [TeX:] $$\left(p_{u}^{k}, p_{c}^{k}, p_{\omega}^{k}\right)$$ is a subgradient of E at [TeX:] $$\left(u^{k}, \vartheta_{c}^{k}, \vartheta_{\omega}^{k}\right).$$ The iteration then reads:

(22)
[TeX:] $$\left(u^{k+1}, \vartheta_{c}^{k+1}, \vartheta_{\omega}^{k+1}\right)=\underset{u, \vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{D_{E}^{p}\left(\left(u, \vartheta_{c}, \vartheta_{\omega}\right),\left(u^{k}, \vartheta_{c}^{k}, \vartheta_{\omega}^{k}\right)\right)+\frac{\mu}{2}\left(\left\|\Psi_{c} u-\vartheta_{c}\right\|_{2}^{2}+\left\|\Psi_{\omega} u-\vartheta_{\omega}\right\|_{2}^{2}\right)\right\} \\ =\underset{u, \vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{E\left(u, \vartheta_{c}, \vartheta_{\omega}\right)-\left\langle p_{u}^{k}, u\right\rangle-\left\langle p_{c}^{k}, \vartheta_{c}\right\rangle-\left\langle p_{\omega}^{k}, \vartheta_{\omega}\right\rangle+\frac{\mu}{2}\left(\left\|\Psi_{c} u-\vartheta_{c}\right\|_{2}^{2}+\left\|\Psi_{\omega} u-\vartheta_{\omega}\right\|_{2}^{2}\right)\right\}$$

Here, [TeX:] $$\mu>0$$ is a fixed parameter. The optimality condition is:

(23)
[TeX:] $$0 \in \partial\left[E\left(u^{k+1}, \vartheta_{c}^{k+1}, \vartheta_{\omega}^{k+1}\right)-\left\langle p_{u}^{k}, u^{k+1}\right\rangle-\left\langle p_{c}^{k}, \vartheta_{c}^{k+1}\right\rangle-\left\langle p_{\omega}^{k}, \vartheta_{\omega}^{k+1}\right\rangle+\frac{\mu}{2}\left(\left\|\Psi_{c} u^{k+1}-\vartheta_{c}^{k+1}\right\|_{2}^{2}+\left\|\Psi_{\omega} u^{k+1}-\vartheta_{\omega}^{k+1}\right\|_{2}^{2}\right)\right]$$

Differentiating, and using [TeX:] $$\left(p_{u}^{k+1}, p_{c}^{k+1}, p_{\omega}^{k+1}\right) \in \partial E\left(u^{k+1}, \vartheta_{c}^{k+1}, \vartheta_{\omega}^{k+1}\right),$$ we obtain the recursion:

(24)
[TeX:] $$\left\{\begin{array}{l} p_{u}^{k+1}=p_{u}^{k}-\mu\left(\Psi_{c}^{T}\left(\Psi_{c} u^{k+1}-\vartheta_{c}^{k+1}\right)+\Psi_{\omega}^{T}\left(\Psi_{\omega} u^{k+1}-\vartheta_{\omega}^{k+1}\right)\right) \\ p_{c}^{k+1}=p_{c}^{k}-\mu\left(\vartheta_{c}^{k+1}-\Psi_{c} u^{k+1}\right) \\ p_{\omega}^{k+1}=p_{\omega}^{k}-\mu\left(\vartheta_{\omega}^{k+1}-\Psi_{\omega} u^{k+1}\right) \end{array}\right.$$

In accordance with the simplification in the literature [8], the split Bregman iteration can be obtained from formulas (23) and (24):

(25)
[TeX:] $$\begin{cases} \left(u^{k+1}, \vartheta_{c}^{k+1}, \vartheta_{\omega}^{k+1}\right)=\underset{u, \vartheta_{c}, \vartheta_{\omega}}{\arg \min }\left\{E\left(u, \vartheta_{c}, \vartheta_{\omega}\right)+\frac{\mu}{2}\left\|\vartheta_{c}-\Psi_{c} u-b_{c}^{k}\right\|_{2}^{2}+\frac{\mu}{2}\left\|\vartheta_{\omega}-\Psi_{\omega} u-b_{\omega}^{k}\right\|_{2}^{2}\right\}, \\ b_{c}^{k+1}=b_{c}^{k}+\Psi_{c} u^{k+1}-\vartheta_{c}^{k+1}, \\ b_{\omega}^{k+1}=b_{\omega}^{k}+\Psi_{\omega} u^{k+1}-\vartheta_{\omega}^{k+1}, \end{cases}$$

The iteration can start from any initial vector [TeX:] $$\left(u^{0}, \vartheta_{c}^{0}, \vartheta_{\omega}^{0}\right), \text { with }\left(b_{c}^{0}, b_{\omega}^{0}\right)=(0,0).$$

To solve the minimization problem in formula (25), the interactive split Bregman algorithm is applied, and the following scheme is finally obtained:

(26)
[TeX:] $$\begin{cases} u^{k+1}=\underset{u}{\arg \min }\left\{\frac{1}{2}\|f-\Phi u\|_{2}^{2}+\frac{\mu}{2}\left\|\vartheta_{c}^{k}-\Psi_{c} u-b_{c}^{k}\right\|_{2}^{2}+\frac{\mu}{2}\left\|\vartheta_{\omega}^{k}-\Psi_{\omega} u-b_{\omega}^{k}\right\|_{2}^{2}\right\}, \\ \vartheta_{c}^{k+1}=\underset{\vartheta_{c}}{\arg \min }\left\{\left\|\wedge_{c} \vartheta_{c}\right\|_{1}+\frac{\mu}{2}\left\|\vartheta_{c}-\Psi_{c} u^{k+1}-b_{c}^{k}\right\|_{2}^{2}\right\}, \\ \vartheta_{\omega}^{k+1}=\underset{\vartheta_{\omega}}{\arg \min }\left\{\left\|\wedge_{\omega} \vartheta_{\omega}\right\|_{1}+\frac{\mu}{2}\left\|\vartheta_{\omega}-\Psi_{\omega} u^{k+1}-b_{\omega}^{k}\right\|_{2}^{2}\right\}, \\ b_{c}^{k+1}=b_{c}^{k}+\Psi_{c} u^{k+1}-\vartheta_{c}^{k+1}, \\ b_{\omega}^{k+1}=b_{\omega}^{k}+\Psi_{\omega} u^{k+1}-\vartheta_{\omega}^{k+1}, \end{cases}$$

Therefore, the interactive split Bregman algorithm for the hybrid framework is summarized by the five updates in formula (26), iterated until convergence.
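
As an illustration only, the following NumPy sketch implements these updates under simplifying assumptions of our own (it is not the authors' MATLAB implementation): the measurement Φ is the partial Fourier operator described in Section 4.1, so the u-update has a closed form in the Fourier domain; `psi_c`/`psi_c_t` and `psi_w`/`psi_w_t` are hypothetical placeholder analysis/synthesis operators for the curvelet and wave atom frames (assumed Parseval); and the coefficient updates reduce to componentwise soft-thresholding, as in formulas (27)-(28).

```python
import numpy as np

def soft_threshold(x, t):
    """Shrinkage operator sign(x)*max(|x|-t, 0) (real-valued coefficients assumed)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_hybrid(f, mask, psi_c, psi_c_t, psi_w, psi_w_t,
                         lam_c=0.02, lam_w=0.02, mu=0.1, n_iter=20):
    """Sketch of the updates in formula (26).

    f    : zero-filled grid of measured Fourier samples (Phi u = mask * FFT(u)),
    mask : boolean sampling pattern,
    psi_*: placeholder frame analysis/synthesis operators with Psi^T Psi = I.
    """
    u = np.fft.ifft2(f, norm="ortho").real            # zero-filled starting image
    theta_c, b_c = psi_c(u), np.zeros_like(psi_c(u))
    theta_w, b_w = psi_w(u), np.zeros_like(psi_w(u))

    for _ in range(n_iter):
        # u-update: (Phi^T Phi + 2*mu*I) u = Phi^T f + mu*(Psi_c^T(.) + Psi_w^T(.)),
        # solved exactly in the Fourier domain because Phi^T Phi is diagonal there.
        rhs = (np.fft.ifft2(mask * f, norm="ortho").real
               + mu * (psi_c_t(theta_c - b_c) + psi_w_t(theta_w - b_w)))
        u = np.fft.ifft2(np.fft.fft2(rhs, norm="ortho") / (mask + 2 * mu),
                         norm="ortho").real

        # theta-updates: componentwise shrinkage, formulas (27)-(28).
        theta_c = soft_threshold(psi_c(u) + b_c, lam_c / mu)
        theta_w = soft_threshold(psi_w(u) + b_w, lam_w / mu)

        # Bregman variable updates.
        b_c = b_c + psi_c(u) - theta_c
        b_w = b_w + psi_w(u) - theta_w

    return u
```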

The above algorithm is significantly simpler than the algorithm in the literature [10] (image recovery): the first sub-problem (the u-update) does not require solving a system of linear equations, since u can be computed directly, and the second and third sub-problems (the coefficient updates) are performed in the coefficient domain. In vector form, the coefficient updates can be written compactly as:

(27)
[TeX:] $$\vartheta_{c}^{k+1}=\Gamma_{\mu^{-1}\left|\wedge_{c}\right|}\left(\Psi_{c} u^{k+1}+b_{c}^{k}\right)$$

(28)
[TeX:] $$\vartheta_{\omega}^{k+1}=\Gamma_{\mu^{-1}\left|\wedge_{\omega}\right|}\left(\Psi_{\omega} u^{k+1}+b_{\omega}^{k}\right)$$

where [TeX:] $$\left|\wedge_{c}\right|:=\operatorname{diag}\left(\left|\lambda_{c, l}\right|_{l=0}^{N_{1}-1}\right), \ \left|\wedge_{\omega}\right|:=\operatorname{diag}\left(\left|\lambda_{\omega, l}\right|_{l=0}^{N_{2}-1}\right).$$ If [TeX:] $$\Psi_{c}^{T} \Psi_{c}=\Psi_{\omega}^{T} \Psi_{\omega}=I_{N},$$ then defining [TeX:] $$u_{c}^{k}:=\Psi_{c}^{T} \vartheta_{c}^{k}, \ u_{\omega}^{k}:=\Psi_{\omega}^{T} \vartheta_{\omega}^{k}, \ \tilde{b}_{c}^{k}:=\Psi_{c}^{T} b_{c}^{k}, \ \tilde{b}_{\omega}^{k}:=\Psi_{\omega}^{T} b_{\omega}^{k},$$ the algorithm can be carried out in the space domain.

The coefficient updates then consist of a frame contraction process (curvelet and wave atom shrinkage), where the contraction operators for curvelets and wave atoms are:

(29)
[TeX:] $$S_{\mu^{-1}\left|\wedge_{c}\right|}\left(u^{k+1}+\tilde{b}_{c}^{k}\right)=\Psi_{c}^{T} \Gamma_{\mu^{-1}\left|\wedge_{c}\right|}\left(\Psi_{c}\left(u^{k+1}+\tilde{b}_{c}^{k}\right)\right)$$

(30)
[TeX:] $$S_{\mu^{-1}\left|\wedge_{\omega}\right|}\left(u^{k+1}+\tilde{b}_{\omega}^{k}\right)=\Psi_{\omega}^{T} \Gamma_{\mu^{-1}\left|\wedge_{\omega}\right|}\left(\Psi_{\omega}\left(u^{k+1}+\tilde{b}_{\omega}^{k}\right)\right)$$

where [TeX:] $$\Gamma_{\mu^{-1}\left|\wedge_{c}\right|} \text { and } \Gamma_{\mu^{-1}\left|\wedge_{\omega}\right|}$$ are defined in formulas (27) and (28).
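
In other words, the space-domain contraction amounts to: analyze with the frame, soft-threshold the coefficients, and synthesize back. A minimal sketch (ours, with hypothetical `analysis`/`synthesis` callables standing in for Ψ and Ψ^T):

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise shrinkage Gamma_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def frame_shrink(x, analysis, synthesis, weights, mu):
    """Space-domain contraction S of formulas (29)-(30):
    S(x) = Psi^T Gamma_{mu^{-1}|Lambda|}(Psi x)."""
    coeffs = analysis(x)                              # Psi x
    coeffs = soft_threshold(coeffs, np.abs(weights) / mu)
    return synthesis(coeffs)                          # Psi^T (...)
```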

4. Numerical Experiments

4.1 Sample Matrix

According to the algorithm steps given above, a test system was implemented in MATLAB. In this paper, CS imaging is performed with a 30% sampled pseudo-random Fourier matrix based on polynomial variable-density random sampling. It follows a probability density function that samples densely in the low-frequency components and sparsely in the high-frequency components, and is therefore suitable for sampling in the multi-scale domain.
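
A minimal sketch (our own, for illustration) of generating such a variable-density sampling mask, in which the probability of keeping a Fourier coefficient decays polynomially with its distance from the center of the spectrum:

```python
import numpy as np

def variable_density_mask(n, sample_ratio=0.3, power=4, seed=0):
    """Boolean Fourier-domain sampling mask with polynomial variable density."""
    rng = np.random.default_rng(seed)
    # Normalized distance of each coefficient from the center of the spectrum.
    fy, fx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
    r = np.sqrt(fx ** 2 + fy ** 2) / np.sqrt(2)
    pdf = (1 - r) ** power                       # dense at low, sparse at high frequencies
    pdf *= sample_ratio * n * n / pdf.sum()      # scale toward the requested ratio
    mask = rng.random((n, n)) < np.clip(pdf, 0, 1)
    return np.fft.ifftshift(mask)                # put DC at index (0, 0) for FFT use

mask = variable_density_mask(256)
print("sampling ratio:", mask.mean())            # roughly the requested 30%
```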

Fig. 3(a) shows the Fourier domain sampling pattern for a 30% measurement, where white dots indicate sample points. Fig. 3(b) represents its two-dimensional probability density function.

Fig. 3. Pseudo-random Fourier domain undersampling (CS sampling matrix): (a) sampling pattern in the Fourier domain and (b) two-dimensional probability density function.
4.2 Experimental Results

In this paper, the standard 256×256 Lena image is used to test the performance of the proposed curvelets-wave atoms split Bregman method. The comparison methods are the iterative wavelet threshold method and the iterative curvelet threshold method [14]. The parameters are set as follows: the discrete curvelet frame is the first frame and the wave atom frame is the second frame. In the proposed reconstruction model, the regularization parameter follows the increasing schedule [TeX:] $$\mu=\mu_{0}\left(1+k / N_{\text {iter }}\right)^{4}$$ with [TeX:] $$\mu_{0}=0.1, \text { where } N_{\text {iter }}=20$$ is the total number of iterations and k is the current iteration number. In addition, the decreasing threshold [TeX:] $$\sigma=0.02 \frac{1}{\mu}$$ is used for both the curvelet threshold and the wave atom threshold. In the absence of prior information for image restoration, the weighting matrices are set to [TeX:] $$\wedge_{c}=0.02 I_{N_{1}}$$ and [TeX:] $$\wedge_{\omega}=0.02 I_{N_{2}}.$$ In the experiment, the wavelet method uses the DB6 wavelet with a 4-level decomposition. All threshold iteration methods use the soft threshold function.
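
For reference, the parameter schedule described above can be written as the following short snippet (assuming the schedule is evaluated once per outer iteration):

```python
# Regularization schedule of Section 4.2: mu_k = mu_0 * (1 + k / N_iter)^4,
# with decreasing threshold sigma_k = 0.02 / mu_k applied to both frames.
mu_0, n_iter = 0.1, 20
mu = [mu_0 * (1 + k / n_iter) ** 4 for k in range(1, n_iter + 1)]
sigma = [0.02 / m for m in mu]   # decreases as mu increases
```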

As shown in Fig. 4, Fig. 4(a) is the original Lena image and Fig. 4(b) is the direct zero-filled restoration [TeX:] $$u=\Phi^{T} f;$$ Fig. 4(c) and 4(d) show the image recovered by the iterative wavelet threshold method and its error with respect to the original image; Fig. 4(e) and 4(f) show the image recovered by the iterative curvelet threshold method and its error; Fig. 4(g) and 4(h) show the image recovered by the proposed algorithm and its error. It can be seen that the image recovered by the proposed algorithm has the highest signal-to-noise ratio (SNR), the smallest error, and strong robustness to noise.

Fig. 4. Various imaging methods and the corresponding error images: (a) original image; (b) direct image restoration (SNR=32.98 dB); (c, d) result (SNR=41.21 dB) and error of iterative wavelet threshold restoration; (e, f) result (SNR=40.88 dB) and error of iterative curvelet threshold restoration; and (g, h) result (SNR=45.06 dB) and error of the restoration proposed in this paper.

Fig. 5. Comparison of performance curves (SNR versus number of iterations). The blue solid line is the method in this paper, the red dash-dot line is the iterative curvelet threshold method, and the blue dotted line is the iterative wavelet threshold method.

Fig. 5 shows how the SNR changes as the number of iterations increases. The convergence of the proposed method is clearly faster than that of the other two methods (the blue solid line is the method in this paper; its slope shows the faster convergence), and its SNR is significantly higher, since it combines the advantages of the two multi-scale analyses, curvelets and wave atoms. In the Pareto curve [15], the vertical axis is the residual [TeX:] $$\left\|f-\Phi u^{k}\right\|_{2}^{2}$$ and the horizontal axis is the norm [TeX:] $$\left\|u^{k}\right\|_{1}$$ (the blue solid line is the method in this paper, the red dash-dot line is the iterative curvelet threshold method, the blue dotted line is the iterative wavelet threshold method, and each marker is one iteration). It can be seen that the curvelets-wave atoms split Bregman method proposed in this paper is more effective than the iterative wavelet threshold and iterative curvelet threshold methods in both SNR and convergence speed.

5. Conclusion

In this paper, the complementary strengths of the curvelet and wave atom frames are combined into a hybrid sparse representation model with weighted sparsity constraints for compressive imaging. Based on the split Bregman method, a multi-step fast iterative algorithm is derived that splits the original problem into simple sub-problems by introducing auxiliary variables and the Bregman distance, which greatly reduces the computational complexity. Experiments on the standard Lena image with 30% pseudo-random Fourier measurements show that the proposed hybrid-domain split Bregman algorithm achieves higher SNR and faster convergence than the iterative wavelet threshold and iterative curvelet threshold methods, which demonstrates the validity of the proposed model and algorithm.

Acknowledgement

This paper is supported by the National Natural Science Foundation of China (No. 31501229), the Science and Technology Research Program of Chongqing Education Commission of China (No. KJ1500635, KJQN201900821), the Chongqing Natural Science Foundation for Fundamental Science and Frontier Technologies (No. cstc2018jcyjAX0483 and cstc2015jcyjA60001), the Open Fund of the Key Laboratory of Manufacturing Equipment Mechanism Design and Control of Chongqing (No. KFJJ2019076), and the Initial Scientific Research Fund for Young Teachers of Chongqing Technology and Business University (No. 2013-56-06).

Biography

Kaiqun Hu
https://orcid.org/0000-0002-5590-2584

She received her Ph.D. degree from China Agriculture University in 2011. Now she is a lecturer in Chongqing Technology and Business University. Her main research direction is the technology of computer control and image processing.

Biography

Xin Feng
https://orcid.org/0000-0001-8793-3775

He received his Ph.D. degree from the School of Computer Science and Engineering, Lanzhou University of Technology, in 2012. Since March 2013, he has been teaching at Chongqing Technology and Business University.

References

  • 1 D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006. doi: 10.1109/TIT.2006.871582
  • 2 M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.
  • 3 H. Rauhut, K. Schnass, P. Vandergheynst, "Compressed sensing and redundant dictionaries," IEEE Transactions on Information Theory, vol. 54, no. 5, pp. 2210-2219, 2008. doi: 10.1109/TIT.2008.920190
  • 4 Q. S. Lian, S. Z. Chen, "Sparse image representation using the analytic contourlet transform and its application on compressed sensing," Acta Electronica Sinica, vol. 38, no. 6, pp. 1293-1298, 2010.
  • 5 V. Kiani, A. Harati, A. V. Mazloum, "Iterative wedgelet transform: an efficient algorithm for computing wedgelet representation and approximation of images," Journal of Visual Communication and Image Representation, vol. 34, pp. 65-77, 2016. doi: 10.1016/j.jvcir.2015.10.009
  • 6 J. Liu, Y. Wang, K. Su, W. He, "Image denoising with multidirectional shrinkage in directionlet domain," Signal Processing, vol. 125, pp. 64-78, 2016. doi: 10.1016/j.sigpro.2016.01.013
  • 7 Y. Lu, Q. Gao, D. Sun, Y. Xia, D. Zhang, "SAR speckle reduction using Laplace mixture model and spatial mutual information in the directionlet domain," Neurocomputing, vol. 173, pp. 633-644, 2016. doi: 10.1016/j.neucom.2015.08.010
  • 8 C. Dossal, E. Le Pennec, S. Mallat, "Bandlet image estimation with model selection," Signal Processing, vol. 91, no. 12, pp. 2743-2753, 2011. doi: 10.1016/j.sigpro.2011.01.013
  • 9 P. Amorim, T. Moraes, D. Fazanaro, J. Silva, H. Pedrini, "Electroencephalogram signal classification based on shearlet and contourlet transforms," Expert Systems with Applications, vol. 67, pp. 140-147, 2017. doi: 10.1016/j.eswa.2016.09.037
  • 10 T. Goldstein, S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323-343, 2009. doi: 10.1137/080725891
  • 11 L. Demanet, L. Ying, "Wave atoms and sparsity of oscillatory patterns," Applied and Computational Harmonic Analysis, vol. 23, no. 3, pp. 368-387, 2007. doi: 10.1016/j.acha.2007.03.003
  • 12 S. Osher, Y. Mao, B. Dong, W. Yin, "Fast linearized Bregman iteration for compressive sensing and sparse denoising," Communications in Mathematical Sciences, vol. 8, no. 1, pp. 93-111, 2010.
  • 13 S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, "An iterative regularization method for total variation-based image restoration," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460-489, 2005. doi: 10.1137/040605412
  • 14 J. Ma, "Improved iterative curvelet thresholding for compressed sensing and measurement," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 1, pp. 126-136, 2010. doi: 10.1109/TIM.2010.2049221
  • 15 E. van den Berg, M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890-912, 2009. doi: 10.1137/080714488