# Stagewise Weak Orthogonal Matching Pursuit Algorithm Based on Adaptive Weak Threshold and Arithmetic Mean

Liquan Zhao* and Ke Ma*

## Abstract

In the stagewise arithmetic orthogonal matching pursuit (SAOMP) algorithm, the weak threshold used in sparsity estimation is determined by the maximum number of iterations. Different maximum iteration counts correspond to different thresholds and affect the performance of the algorithm. To solve this problem, we propose an improved variable weak threshold based on the SAOMP algorithm. Our proposed algorithm uses the residual error to control the weak threshold: as the residual decreases, the threshold continuously increases, so that the number of atoms in the atomic set comes closer to the real sparsity, making it possible to improve the reconstruction accuracy. In addition, we improved the generalized Jaccard coefficient in order to replace the inner product method used in the SAOMP algorithm. Our proposed algorithm uses the covariance to replace the joint expectation of the two variables in the generalized Jaccard coefficient. The improved generalized Jaccard coefficient yields a more accurate calculation of the correlation between the measurement matrix and the residual. In addition, the resulting residual is more accurate, which reduces the possibility of selecting wrong atoms. We demonstrate using simulations that the proposed algorithm produces better reconstruction results for both one-dimensional signals and two-dimensional image signals.

**Keywords:** Compressed Sensing, Computed Correlation, Reconstruction Algorithm, Weak Threshold

## 1. Introduction

The Nyquist sampling theorem is the most commonly used method in the field of signal processing. However, technological developments have demanded ever-increasing bandwidths, rendering the Nyquist sampling theorem increasingly inadequate in the design of signal processing hardware. Compressed sensing (CS) [1] has been proposed in an effort to solve this problem. The sampling rate required by CS is much smaller than that required by the Nyquist sampling theorem, and CS can reconstruct the original signal without distortion. Therefore, researchers have increasingly turned to CS in their experiments, and CS now plays a key role in medical imaging, radar imaging, and smart glasses [2-5].

The reconstruction algorithm is the most critical step in CS theory, as it directly determines whether CS can be applied in practice. Among reconstruction algorithms, the greedy algorithm is the most widely used due to its simple structure. The orthogonal matching pursuit (OMP) algorithm [6] is the basic greedy algorithm. It selects the atom of the measurement matrix most correlated with the residual at each iteration. Selecting only one atom per iteration increases the running time of the algorithm. To solve this problem, Liao et al. [7] proposed the regularized orthogonal matching pursuit (ROMP) algorithm, which chooses atoms via a regularization strategy and thus has a shorter running time. The generalized orthogonal matching pursuit (gOMP) algorithm [8] has a similar characteristic: it selects the S (S>1) most relevant atoms per iteration. However, if these three algorithms select atoms that contain no signal information, they cannot delete the wrong atoms, which degrades the quality of the reconstructed signal. Other researchers have solved this problem by applying a backtracking strategy [9,10] to the greedy algorithms: the backtracking strategy tests the atoms a second time, rearranging the atoms in the set so that atoms that do not satisfy the requirements are eliminated.
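As a point of reference, the basic OMP loop described above can be sketched as follows. This is a minimal illustration under our own naming (`omp` and its parameters are not from the cited papers): one atom is selected per iteration by maximum correlation, followed by a least-squares refit.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-6):
    """Orthogonal matching pursuit: pick the atom most correlated
    with the residual at each iteration, then refit by least squares."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(K):
        # correlation of every atom with the current residual
        corr = np.abs(Phi.T @ residual)
        support.append(int(np.argmax(corr)))
        # least-squares fit on the selected atoms only
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat
```

Note that `K` must be known in advance, which is exactly the limitation that SWOMP and SAOMP address.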

However, all of these greedy algorithms can only be used for signal reconstruction when the signal sparsity is known. To increase the practicality of the greedy algorithm, Lee [11] proposed the stagewise weak orthogonal matching pursuit (SWOMP) algorithm, which uses a weak threshold to grow the estimated sparsity at each iteration so that it approaches the real sparsity. SWOMP can therefore recover signals even when the sparsity is unknown. However, it relies heavily on the weak threshold, which must be fixed; different weak thresholds lead to markedly different reconstruction results. In 2018, Zhang and Sun [12] proposed the stagewise arithmetic orthogonal matching pursuit (SAOMP) algorithm, in which the weak threshold differs in each iteration. The aim of SAOMP is to reduce the dependence on the weak threshold and thus improve reconstruction accuracy.

In this paper, we propose a modified algorithm based on the SAOMP algorithm. Our algorithm also uses a varying weak threshold, and the value of residual error is used to control the weak threshold. When the measurement matrix is matched to the residual error, the proposed algorithm does not use the traditional inner product to match the most relevant atom; rather, it modifies the inner product formula to improve the correlation between the measurement matrix and the residual error. In addition, a backtracking method is included in order to compute the estimate signal a second time and thus improve the reconstruction performance.

## 2. Compressed Sensing Model

The CS measurement model is shown in (1). CS uses the measurement matrix [TeX:] $$\Phi$$ (an [TeX:] $$M \times N$$ matrix with M<N) to reduce the dimensions of the N-dimensional signal s; compared with the original signal, the reduced-dimensional signal y still contains all the original information. y is then used to reconstruct the original signal:

##### (1)

[TeX:] $$y=\Phi s$$

CS can only process sparse signals, which contain only K non-zero values. However, many signals are not sparse, and sparse signals are needed to perform CS. In order to make the signal sparse, we use the sparse basis matrix [TeX:] $$\Psi=\left\{\varphi_{1}, \varphi_{2}, \ldots, \varphi_{N}\right\}$$ to sparsify the original signal, as shown in (2):

##### (2)

[TeX:] $$s=\Psi x$$

where x is the representation of s in the [TeX:] $$\Psi$$ domain, and [TeX:] $$\Psi$$ is an [TeX:] $$N \times N$$ matrix. If x has only K non-zero values, then the signal can be compressed. Combining (1) and (2),

##### (3)

[TeX:] $$y=\Phi \Psi x=A x$$

In CS, [TeX:] $$\Phi$$ and [TeX:] $$\Psi$$ must be incoherent. [TeX:] $$A=\Phi \Psi$$ is the sensing matrix, and the original signal s can be reconstructed from y. Because the reconstruction process must recover N unknowns from only M equations (M<N), it is an NP-hard problem. Researchers often use greedy algorithms based on [TeX:] $$l_{0}$$-norm minimization to solve this problem, as can be seen in (4):

##### (4)

[TeX:] $$\min \|x\|_{0} \text { s.t. } y=A x$$

In (4), the greedy algorithm uses the least squares method to find the optimal solution, and then calculates the estimated signal value.
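As a quick sanity check of the measurement model described above, the sketch below (a toy setup of our own, using the identity matrix as the sparse basis so the signal is already sparse) shows the dimension reduction from N to M:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 128, 10                 # signal length, measurements, sparsity

Psi = np.eye(N)                        # sparse basis (identity: signal already sparse)
x = np.zeros(N)                        # K-sparse coefficient vector
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
A = Phi @ Psi                          # sensing matrix A = Phi * Psi
y = A @ x                              # compressed measurement, length M < N

print(y.shape)                         # (128,)
```

The reconstruction problem is then to recover the K-sparse x from the shorter vector y, which is what the greedy algorithms below approximate.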

## 3. SAOMP Algorithm

The greedy algorithm selects the atoms of the measurement matrix [TeX:] $$\Phi$$ that are most correlated with the residual error r. These atoms usually contain all the original signal information, so they can be used to calculate the reconstructed signal. However, SWOMP uses a fixed threshold to select atoms, as shown in (5), so it relies heavily on the fixed threshold value. With a fixed threshold, the algorithm may still select many atoms in the later stages of the iteration, and the atomic set may contain some irrelevant atoms, which affects the reconstruction result.

##### (5)

[TeX:] $$J_{k}=\operatorname{find}\left(u_{k} \geq T h \times \max \left(\left|\left\langle\Phi^{T}, r_{k-1}\right\rangle\right|\right)\right)$$

Here, [TeX:] $$J_{k}$$ represents the index number found in each iteration, [TeX:] $$u_{k}$$ is the correlation vector of the candidate atoms, [TeX:] $$\Phi$$ is the measurement matrix, r is the residual vector, and Th is the fixed threshold. To solve this problem, Zhang and Sun [12] proposed the SAOMP algorithm. This algorithm introduces a new weak threshold strategy in the index set selection, whereby the threshold can vary. This threshold strategy is shown in (6) and (7). The threshold increases after each iteration, so there is less dependency on the threshold. As the threshold increases, the number of atoms that can be selected is reduced, making the estimated sparsity more accurate. Accordingly, the reconstruction performance can be improved.

##### (6)

[TeX:] $$J_{k}=\operatorname{find}\left(u_{k} \geq T h_{k} \times \max \left(\left|\left\langle\Phi^{T}, r_{k-1}\right\rangle\right|\right)\right)$$

Here, S is the maximum number of iterations, [TeX:] $$T h_{k}$$ is the weak threshold parameter, and [TeX:] $$T h_{0}$$ is the initial weak threshold. At each iteration, the SAOMP algorithm uses the inner product to compute the correlation between the measurement matrix and the residual error, as can be seen in (8). The SAOMP algorithm then selects the index by threshold [TeX:] $$T h_{k}$$, as shown in (9).
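The stagewise weak selection rule of (6) keeps every atom whose correlation reaches a fraction Th_k of the largest correlation. A minimal sketch (the function name and toy values below are ours):

```python
import numpy as np

def select_atoms(Phi, residual, Th_k):
    """Stagewise weak selection (Eq. (6)): keep every atom whose
    correlation with the residual reaches Th_k times the maximum."""
    u = np.abs(Phi.T @ residual)          # correlation of each atom with the residual
    return np.flatnonzero(u >= Th_k * u.max())
```

For example, with correlations (1.0, 0.0, 0.5) and Th_k = 0.4, atoms 0 and 2 are selected in a single stage, which is why stagewise methods converge in fewer iterations than OMP.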

Then, the support set is updated using (10) and the estimated signal is calculated using (11):

where [TeX:] $$\Lambda_{k}$$ is the support set, [TeX:] $$b_{k}$$ is the estimated signal obtained using the least squares method, and y is the measured vector. The SAOMP algorithm applies a backtracking approach, given by (12), to make the reconstruction result more accurate. It uses the same threshold selection strategy to filter the signal estimates and improve their accuracy.

##### (12)

[TeX:] $$\Lambda_{k-1}=\operatorname{find}\left(\left|b_{k}\right| \geq T h_{k} \times \max \left|b_{k}\right|\right)$$

Then, it updates the residual error using (13) and calculates threshold [TeX:] $$T h_{k}$$ using (14).

Then, if the 2-norm of residual error [TeX:] $$\left\|r_{k}\right\|_{2}$$ satisfies the stopping condition, output [TeX:] $$b_{k}$$ is obtained. Algorithm 1 provides more detailed steps of the SAOMP algorithm.

## 4. Proposed Method

In the SAOMP method, the threshold used in sparsity estimation depends on the maximum number of iterations. A large maximum iteration count leads to slower threshold growth. This makes the sparsity estimate larger, which affects the reconstruction accuracy and undermines the success of reconstructing the original signal. To overcome this problem, we improved the variable threshold strategy to update the threshold as follows. If

then

where r is the residual error, [TeX:] $$\|\cdot\|_{2}$$ is the 2-norm, [TeX:] $$\eta_{1}$$ is a set parameter, Th is the threshold used in sparsity estimation, k is the current iteration number, and a is a constant ranging from 0.01 to 0.1. If [TeX:] $$T h_{k}$$ is larger than 1, we set [TeX:] $$T h_{k}=1$$. As the iteration number increases, the weak threshold Th becomes larger, which makes the estimated sparsity grow more slowly and approach the true sparsity. When the residual error is smaller than [TeX:] $$\eta_{1}$$, the algorithm is close to convergence; if the threshold is still less than 1 and we continue to use the threshold in (16), the sparsity may be overestimated. Therefore, we suppose that if

then

where [TeX:] $$\eta_{2}$$ is another set parameter. If the residual error is in the range specified in (17), the maximum threshold of 1 is used as the threshold. This helps avoid adding low-correlation atoms to the atomic set. When the residual error is smaller than [TeX:] $$\eta_{2}$$, the algorithm stops iterating.
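The residual-controlled rules above can be summarized in code. Since the exact growth law of (16) is not reproduced here, the linear form `Th0 + a * k` below is only an assumed placeholder; the cap at 1, the switch to the maximum threshold when the residual falls below η1, and the stop below η2 follow the text:

```python
def update_threshold(Th0, a, k, r_norm, eta1, eta2):
    """Adaptive weak threshold controlled by the residual norm.
    The growth law Th0 + a*k is a hypothetical stand-in for Eq. (16);
    the eta-ranges and the cap at 1 follow the rules in the text."""
    if r_norm < eta2:
        return None                    # convergence reached: stop iterating
    if r_norm <= eta1:
        return 1.0                     # near convergence: use the maximum threshold
    return min(Th0 + a * k, 1.0)       # growing threshold, never above 1
```

As the residual shrinks toward η1, the threshold is forced to its maximum, so only the most correlated atoms can still enter the atomic set.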

Based on the threshold strategy, we can use the variable threshold strategy to select atoms to extend the support set. It is expressed as follows

##### (19)

[TeX:] $$J_{k}=\operatorname{find}\left(u_{k} \geq T h_{k} \times \max \left(u_{k}\right)\right)$$

where [TeX:] $$J_{k}$$ is the index number found in each iteration, [TeX:] $$T h_{k}$$ is the threshold, and [TeX:] $$u_{k}$$ (shown in (20)) is the correlation vector [13] between the measurement matrix and residual error.

For the atomic correlation, the SAOMP algorithm uses the inner product [14], which is shown in (21):

##### (21)

[TeX:] $$\langle x, y\rangle=\frac{x \cdot y}{\|x\|\|y\|}=\frac{\sum_{i=1}^{n} x_{i} y_{i}}{\sqrt{\sum_{i=1}^{n} x_{i}^{2} \sum_{i=1}^{n} y_{i}^{2}}}$$

where x and y are two vectors, [TeX:] $$x=\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}$$ and [TeX:] $$y=\left\{y_{1}, y_{2}, \ldots, y_{n}\right\}$$. As can be seen in (21), the denominator is the geometric mean of the two vectors' squared norms. However, the geometric mean may not preserve the original state of each vector, and some information might be lost as a result. This affects the correlation vector and, in turn, the accuracy of the reconstructed signal. To solve this problem, we used the arithmetic mean to replace the geometric mean, as shown in (22); the arithmetic mean preserves more information when computing the correlation [15]. In order to reduce the influence of outliers, we also used the covariance to replace the joint expectation of the two variables [16]. Therefore, the correlation between the measurement matrix and residual error r is computed as follows:

##### (22)

[TeX:] $$g\left\langle\Phi_{i}, r_{k-1}\right\rangle=\frac{\left(\sum_{j=1}^{N} \Phi_{i, j} \times r_{k-1, j}-\frac{1}{N} \sum_{j=1}^{N} \Phi_{i, j} \times \sum_{j=1}^{N} r_{k-1, j}\right)}{\left(\left(\sum_{j=1}^{N} \Phi_{i, j}^{2}-\frac{1}{N}\left(\sum_{j=1}^{N} \Phi_{i, j}\right)^{2}\right)+\left(\sum_{j=1}^{N} r_{k-1, j}^{2}-\frac{1}{N}\left(\sum_{j=1}^{N} r_{k-1, j}\right)^{2}\right)\right)}$$

where [TeX:] $$\Phi=\left[\Phi_{1}, \Phi_{2}, \cdots, \Phi_{N}\right]^{T}, \Phi_{i}=\left[\Phi_{i, 1}, \Phi_{i, 2}, \cdots, \Phi_{i, M}\right],$$ N is the length of the original signal, and M is the number of measurements. In order to make atom selection more accurate and ensure that the reconstructed signal more closely resembles the original signal, we used the Jaccard index method to magnify the difference between [TeX:] $$\Phi_{i}$$ and [TeX:] $$r_{k-1}$$ using (23) [17]. Therefore, we improved the expression for computing the correlation between the measurement matrix and residual error r:

##### (23)

[TeX:] $$g\left\langle\Phi_{i}, r_{k-1}\right\rangle=\frac{\left(\sum_{j=1}^{N} \Phi_{i, j} \times r_{k-1, j}-\frac{1}{N} \sum_{j=1}^{N} \Phi_{i, j} \times \sum_{j=1}^{N} r_{k-1, j}\right)}{\left(\left(\sum_{j=1}^{N} \Phi_{i, j}^{2}-\frac{1}{N}\left(\sum_{j=1}^{N} \Phi_{i, j}\right)^{2}\right)+\left(\sum_{j=1}^{N} r_{k-1, j}^{2}-\frac{1}{N}\left(\sum_{j=1}^{N} r_{k-1, j}\right)^{2}\right)\right)-\left(\sum_{j=1}^{N} \Phi_{i, j} \times r_{k-1, j}-\frac{1}{N} \sum_{j=1}^{N} \Phi_{i, j} \times \sum_{j=1}^{N} r_{k-1, j}\right)}$$

Based on (23), we can use (24) and (25) to search the index of atoms and thus collect more information pertaining to the signal:

##### (25)

[TeX:] $$J_{k}=\operatorname{find}\left(u_{k} \geq T h_{k} \times \max \left(u_{k}\right)\right)$$

After obtaining the value of [TeX:] $$J_{k}$$, the support index set is updated:

where [TeX:] $$\Lambda_{k}$$ is the support index set. The atoms indexed by the support set are then used to compute the estimated signal [TeX:] $$b_{k}$$, as shown in (27),

where [TeX:] $$\Phi_{\Lambda_{k}}^{\dagger}$$ is the pseudo inverse of [TeX:] $$\Phi_{\Lambda_{k}}$$, and y is the measurement vector.
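The two correlation measures of (21) and (23) can be compared directly in code. This is a sketch under our own naming; the variance-style terms of (23) are written in the standard form Σv² − (Σv)²/N:

```python
import numpy as np

def inner_product_corr(x, y):
    """Normalized inner product of Eq. (21): cosine similarity, whose
    denominator is the geometric mean of the two squared norms."""
    return np.dot(x, y) / np.sqrt(np.sum(x**2) * np.sum(y**2))

def jaccard_corr(phi_i, r):
    """Improved generalized Jaccard coefficient of Eq. (23):
    a covariance-style numerator over an arithmetic-mean denominator,
    minus the numerator itself to magnify differences."""
    n = len(phi_i)
    num = np.dot(phi_i, r) - phi_i.sum() * r.sum() / n
    var_phi = np.sum(phi_i**2) - phi_i.sum()**2 / n
    var_r = np.sum(r**2) - r.sum()**2 / n
    return num / (var_phi + var_r - num)
```

For identical (non-constant) vectors both measures return 1; for dissimilar vectors, the Jaccard form penalizes the mismatch more strongly, which is the intended magnification of the difference between the atom and the residual.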

In order to further reduce errors in reconstruction, we used a backtracking method to re-estimate the signal, as shown in (28), (29), and (30). We first sorted the estimated signal [TeX:] $$b_{k}$$ in descending order and selected the indices of the L largest values (where L is the length of the support index set). Secondly, we used the selected indices to index the measurement matrix, constructing a new measurement matrix with which to reconstruct the original signal.

where [TeX:] $$\Lambda_{k}$$ is the current support index set, F is the index set that corresponds to the L largest values of [TeX:] $$b_{k}$$, [TeX:] $$\Phi_{F}^{\dagger}$$ is the pseudo inverse of [TeX:] $$\Phi_{F}$$, and y is the measurement vector. Finally, we computed the residual error [TeX:] $$r_{k}$$ with (31).
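The backtracking step of (28)-(30) can be sketched as follows (our own toy naming; in the algorithm, `b` is the least-squares estimate from (27)):

```python
import numpy as np

def backtrack(Phi, y, support, b):
    """Backtracking re-check: keep the L = |support| largest entries of
    the current estimate b, then refit by least squares on those columns."""
    L = len(support)
    # indices of the L largest magnitudes of the current estimate
    F = np.argsort(np.abs(b))[::-1][:L]
    coef, *_ = np.linalg.lstsq(Phi[:, F], y, rcond=None)
    residual = y - Phi[:, F] @ coef        # Eq. (31)-style residual update
    return F, coef, residual
```

Because the refit is restricted to the L strongest entries, atoms that slipped in with small coefficients are discarded before the next iteration.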

If [TeX:] $$\left\|r_{k}\right\|_{2} \leq \eta_{2}$$ or the number of iterations is larger than the maximum iterations, the algorithm stops the iteration. Algorithm 2 details the steps of the proposed method:

## 5. Results and Discussion

##### 5.1 Parameter Selection

The initial value of the weak threshold [TeX:] $$T h_{0}$$ and parameter a in (16) affect the reconstruction performance. In order to determine the optimal parameters, we used a Gaussian signal of length N=256 as the source signal, measurement value M=128, and sparsity level [TeX:] $$K \in(20,70)$$. The stop iteration parameters are [TeX:] $$\eta_{1}=10^{-2}$$ and [TeX:] $$\eta_{2}=10^{-6}$$, the maximum number of iterations is 20, and parameter a is 0.05. Fig. 1 shows the reconstruction probability for different initial thresholds [TeX:] $$T h_{0}$$ and sparsity levels. As shown in Fig. 1, the reconstruction probability decreases as the sparsity level increases, and when the initial threshold is 0.7, the algorithm has the best reconstruction probability irrespective of the sparsity value. Fig. 2 shows the reconstruction probability with different measurements and initial thresholds. It can be seen that the reconstruction probability increases with an increasing number of measurements and that the algorithm again has the best reconstruction probability when the initial threshold is 0.7. Based on the results shown in Figs. 1 and 2, we selected 0.7 as the initial threshold in the following experiments.

Figs. 3 and 4 show the reconstruction performance for different settings of parameter a after updating the weak threshold. Fig. 3 shows the reconstruction probability for different values of a and different sparsity levels with a measurement value of 128. Fig. 4 shows the reconstruction probability for different values of a and different measurement values with a sparsity level of 30. As shown in Fig. 3, our proposed method has the highest reconstruction probability for the different sparsity levels when a=0.03. In Fig. 4, the best reconstruction probability is also obtained when a=0.03. Based on the results shown in Figs. 3 and 4, we set [TeX:] $$T h_{0}=0.7$$ and a=0.03 in the following simulations.

##### 5.2 One-Dimensional Signal Reconstruction

This simulation was used to test the one-dimensional signal reconstruction performance of the SWOMP algorithm, the SAOMP algorithm, and our proposed method. The simulation employed a Gaussian signal of N=256 as the original signal. The maximum number of iterations was 20. As described in [12], the SAOMP algorithm performs best in signal reconstruction when the weak threshold is 0.5. Therefore, we selected 0.5 as the weak threshold and used the stop iteration parameter [TeX:] $$\eta=10^{-6}$$ for the SAOMP method. In our proposed method, the weak threshold is 0.7, the stop iteration parameters are [TeX:] $$\eta_{1}=10^{-2}, \eta_{2}=10^{-6}$$, and parameter a is 0.03. In the SWOMP algorithm, the weak threshold is 0.5 and the stop iteration parameter is [TeX:] $$\eta=10^{-6}$$. Fig. 5 compares the sparsity values estimated by the different algorithms with the real sparsity values. It can be seen in Fig. 5 that the estimated values of all algorithms converge to the real sparsity when the real sparsity equals 20. When the real sparsity is larger than 20 and smaller than 30, the estimated sparsity values of the SWOMP and SAOMP algorithms diverge from the real sparsity, whereas the estimated sparsity of our proposed method still equals the real sparsity. As the sparsity level increases, our proposed method's estimates remain visibly closer to the real sparsity value than those of the SAOMP and SWOMP algorithms. This shows that our proposed algorithm selects fewer wrong atoms.

Fig. 6 shows the reconstruction probability with different measurements for the three algorithms. The sparsity level is K=30, and the measurement value is [TeX:] $$M \in(70,125)$$; the other parameters are given at the start of this section. It can be seen that when the number of measurements is 105, the reconstruction probability is 100% for our proposed method, 94.54% for SAOMP, and 52.87% for SWOMP. This means that our proposed method requires fewer measurements than the SAOMP and SWOMP algorithms. As illustrated in Fig. 6, our proposed method attains a higher reconstruction probability than the SAOMP and SWOMP algorithms.

Fig. 7 shows the reconstruction probability with measurement M=128 for the different algorithms. In this case, we selected sparsity values [TeX:] $$K \in(20,70)$$; the other experimental conditions were the same as in the previous simulation. When the sparsity level is less than 30, the reconstruction probability of all three algorithms is 100%. As the sparsity level increases, the reconstruction probability decreases for all algorithms, but our proposed method maintains the highest reconstruction probability among the three. When K=55, the reconstruction probability of our proposed algorithm reaches 80%, while those of the other two algorithms are below 40%. As shown in Figs. 6 and 7, our proposed algorithm attains a higher reconstruction probability than the SAOMP and SWOMP algorithms at different measurements and sparsity levels. Therefore, when processing one-dimensional signals, our proposed method has better reconstruction performance and can still maintain high-probability reconstruction under more complicated conditions.

##### 5.3 Two-Dimensional Image Reconstruction

This simulation used two-dimensional image signals to test the reconstruction performance of SWOMP, SAOMP, and our proposed algorithm. The images were labeled Lena, Brain, and Camera. All of them are grayscale images, and the sampling rate is 0.6. Our proposed method uses [TeX:] $$T h_{0}=0.7, a=0.03$$, and residual error thresholds [TeX:] $$\eta_{1}=10^{-2}, \eta_{2}=10^{-6}$$. In the other two algorithms, the weak threshold is 0.5 and the stop iteration parameter is [TeX:] $$\eta=10^{-6}$$. To evaluate the reconstruction result, we used the peak signal-to-noise ratio (PSNR), expressed by (32) and (33); a high PSNR value indicates high image quality:

##### (32)

[TeX:] $$M S E=\frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1}|\hat{x}(i, j)-x(i, j)|^{2}$$

In (32), M and N represent the size of the image (in this simulation, 256×256), x(i,j) represents the original image, and [TeX:] $$\hat{x}(i, j)$$ represents the reconstructed image. In (33), MSE is the mean squared error. The number of gray levels is [TeX:] $$256=2^{8}$$, so n is 8. The larger the PSNR value, the better the image quality. Figs. 8-10 illustrate the differences between the original images and the images reconstructed by the three algorithms.
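Equations (32) and (33) translate directly into code (the function name is ours; the peak value is 2^n − 1 = 255 for 8-bit images):

```python
import numpy as np

def psnr(x, x_hat, n_bits=8):
    """PSNR of Eqs. (32)-(33): mean squared error over the image,
    with peak value 2^n - 1 (255 for 8-bit grayscale)."""
    mse = np.mean(np.abs(x_hat - x) ** 2)
    peak = 2 ** n_bits - 1
    return 10 * np.log10(peak ** 2 / mse)
```

For instance, a reconstruction that is off by one gray level everywhere gives MSE = 1 and hence PSNR = 20·log10(255) ≈ 48.13 dB.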

As can be seen in Figs. 8–10, all three algorithms completed image reconstruction, and the reconstruction effect is difficult to distinguish, so the PSNR value was used to detect the reconstruction effect of the three algorithms. In Fig. 8, the PSNR is 30.0335, 28.8417, and 26.5546 dB for our proposed method, SAOMP, and SWOMP algorithms, respectively. In Fig. 9, the PSNR is 24.3275, 22.9024, and 19.0101 dB for our proposed method, SAOMP, and SWOMP algorithms, respectively. In Fig. 10, the PSNR is 27.4959, 25.8449, and 24.8360 dB for our proposed method, SAOMP, and SWOMP algorithms, respectively. It can be clearly seen that the PSNR value of the proposed algorithm is higher than those of the other two algorithms. The PSNR value of the proposed algorithm is about 3.5 dB higher than that of the SWOMP algorithm on average, and it is 5 dB higher than the SWOMP algorithm in the Brain image. Compared with the SAOMP algorithm, the PSNR value of our proposed algorithm is about 2 dB higher. Thus, this simulation proves that our proposed algorithm has better reconstruction performance compared to the SWOMP and SAOMP algorithms.

Fig. 11 shows the average PSNR values of the three algorithms for the Lena image at different sampling rates. The sampling rate ranges from 0.3 to 0.8; the higher the sampling rate, the higher the PSNR value. The other experimental conditions are the same as in the previous simulation. Fig. 11 shows that the PSNR value of our proposed algorithm is higher than those of the other two algorithms at all sampling rates, so our proposed algorithm has better practicability.

Fig. 12 shows the reconstruction times of the three algorithms at different sampling rates. Because SAOMP and our proposed algorithm use the adaptive threshold strategy, their iteration counts are higher than that of the SWOMP algorithm, so their running times are longer. Further, our proposed algorithm uses the improved generalized Jaccard coefficient, and so it has the longest reconstruction time. However, since our proposed algorithm selects atoms more accurately, it also converges quickly. Compared with the SAOMP algorithm, the reconstruction time is not significantly worse and remains within an acceptable range. These simulations prove that our proposed algorithm has good reconstruction performance for two-dimensional images.

## 6. Conclusion

In this paper, we propose an improved adaptive threshold strategy based on the SAOMP algorithm. The threshold is controlled by the residual error, so the error in sparsity estimation is reduced. Secondly, we improved the generalized Jaccard coefficient used to calculate the atom correlation by computing the covariance of the vectors, reducing the error caused by the selection of low-correlation atoms. Compared with SAOMP and SWOMP, our proposed algorithm achieves better reconstruction accuracy for one-dimensional Gaussian signals and two-dimensional image signals, provided a suitable threshold is selected. Although our proposed method has a longer running time, the difference between it and SAOMP is not large and remains acceptable.

In the future, we intend to improve the reconstruction time of the algorithm without compromising its reconstruction accuracy.

## References

1. X. Chen, Y. Zhang, and R. Qi, "Block sparse signals recovery algorithm for distributed compressed sensing reconstruction," *Journal of Information Processing Systems*, vol. 15, no. 2, pp. 410-421, 2019.
2. Y. Liu, R. Song, Y. Wang, and L. Bai, "Design of real-time communication system for portable smart glasses based on Raspberry PI," *Journal of Northeast Electric Power University*, vol. 39, no. 4, pp. 81-85, 2019.
3. G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, et al., "DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction," *IEEE Transactions on Medical Imaging*, vol. 37, no. 6, pp. 1310-1321, 2018.
4. C. Liu, R. Song, Y. Hou, and Y. Zhang, "Design and implementation of high voltage transmission equipment auxiliary management system," *Journal of Northeast Electric Power University*, vol. 39, no. 4, pp. 86-90, 2019.
5. B. Li, F. Liu, C. Zhou, Y. Lv, and J. Hu, "Phase error correction for approximated observation-based compressed sensing radar imaging," *Sensors*, vol. 17, no. 3, 2017.
6. R. Qi, Y. Zhang, and H. Li, "Block sparse signals recovery via block backtracking based matching pursuit method," *Journal of Information Processing Systems*, vol. 13, no. 2, pp. 360-369, 2017.
7. Y. Liao, X. Zhou, X. Shen, and G. Hong, "A channel estimation method based on improved regularized orthogonal matching pursuit for MIMO-OFDM systems," *Acta Electronica Sinica*, vol. 45, no. 12, pp. 2848-2854, 2017.
8. D. Park, "Improved sufficient condition for performance guarantee in generalized orthogonal matching pursuit," *IEEE Signal Processing Letters*, vol. 24, no. 9, pp. 1308-1312, 2017.
9. F. Huang, J. Tao, Y. Xiang, and P. Liu, "Parallel compressive sampling matching pursuit algorithm for compressed sensing signal reconstruction with OpenCL," *Journal of Systems Architecture*, vol. 27, pp. 51-60, 2017.
10. P. Goyal and B. Singh, "Subspace pursuit for sparse signal reconstruction in wireless sensor networks," *Procedia Computer Science*, vol. 125, pp. 228-233, 2018.
11. D. Lee, "MIMO OFDM channel estimation via block stagewise orthogonal matching pursuit," *IEEE Communications Letters*, vol. 20, no. 10, pp. 2115-2118, 2016.
12. Y. Zhang and G. Sun, "Stagewise arithmetic orthogonal matching pursuit," *International Journal of Wireless Information Networks*, vol. 25, no. 2, pp. 221-228, 2018.
13. X. Zhang, H. Du, B. Qiu, and S. Chen, "Fast sparsity adaptive multipath matching pursuit for compressed sensing problems," *Journal of Electronic Imaging*, vol. 26, no. 3, 2017.
14. X. Zhang, W. Dong, M. Tang, J. Guo, and J. Liang, "gOMP reconstruction algorithm based on generalized Jaccard coefficient for compressed sensing," *Journal of Shandong University (Natural Science)*, vol. 52, no. 11, pp. 23-28, 2017.
15. Y. Mu, X. Liu, and L. Wang, "A Pearson's correlation coefficient based decision tree and its parallel implementation," *Information Sciences*, vol. 435, pp. 40-58, 2018.
16. L. Zhang, D. Liang, D. Zhang, X. Gao, and X. Ma, "Study of spectral reflectance reconstruction based on an algorithm for improved orthogonal matching pursuit," *Journal of the Optical Society of Korea*, vol. 20, no. 4, pp. 515-523, 2016.
17. H. Yasin, M. M. Yasin, and F. M. Yasin, "Automated multiple related documents summarization via Jaccard's coefficient," *International Journal of Computer Applications*, vol. 13, no. 3, pp. 12-15, 2011.