A Windowed-Total-Variation Regularization Constraint Model for Blind Image Restoration

Ganghua Liu, Wei Tian, Yushun Luo, Juncheng Zou*, and Shu Tang

Abstract: Blind restoration of motion-blurred images remains an active research topic, and the key to blind restoration is accurate blur kernel (BK) estimation. To achieve high-quality blind image restoration, this paper presents a novel windowed-total-variation method. The proposed method is based on the spatial scale of image edges rather than their amplitude, so it can extract the edges that are useful for accurate BK estimation and then recover high-quality clear images. Extensive experiments demonstrate the superiority of the proposed method.

Keywords: Edge Amplitude, Image Restoration, Kernel, Spatial Scale, Windowed-Total-Variation

1. Introduction

Blind image restoration aims to estimate the blur kernel (BK) k from a blurred image y and then restore the clear image x. The process is modeled as:

(1)
$$y=k * x+n$$

where * denotes the convolution operation and n is additive noise.
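To make the degradation model concrete, here is a minimal sketch that synthesizes a blurred observation from a clear image. The horizontal motion kernel and noise level are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_observation(x, k, noise_sigma=0.01, seed=0):
    """Simulate y = k * x + n for a clear image x and blur kernel k."""
    y = fftconvolve(x, k, mode="same")  # convolution k * x
    rng = np.random.default_rng(seed)
    return y + noise_sigma * rng.standard_normal(x.shape)  # additive noise n

# Illustrative 15-pixel horizontal motion kernel (entries sum to 1).
k = np.zeros((15, 15))
k[7, :] = 1.0 / 15.0
```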

The accurate estimation of the BK is key to successful blind image restoration, and in recent years researchers have proposed many blind restoration methods. Shan et al. [1] proposed a piecewise function to estimate the BK in 2008. Almeida and Almeida [2] proposed an edge extraction filter to estimate the motion BK in 2010. In 2011, Krishnan et al. [3] combined the L1 norm and the L2 norm to estimate the BK. In 2013, Xu et al. [4] used the L0 norm to extract large-scale edges in an image and then used those edges to estimate the BK. In 2015, Ma et al. [5] used a set of sparse priors to extract significant edge structures for BK estimation. In 2016, Zuo et al. [6] proposed an Lp norm to achieve accurate BK estimation. Also in 2016, Pan et al. [7] first introduced the dark channel prior (DCP) into blind image deblurring, achieving both accurate BK estimation and high-quality image restoration. In 2017, Pan et al. [8] introduced the L0 norm in both the gradient domain and the spatial domain. In 2019, Guo and Ma [9] proposed a local image patch prior and used the extracted edges to guide BK estimation, which yields better restoration results. In 2020, Chen et al. [10] combined the inherent structural correlation and spatial sparsity of images and used hyper-Laplacian priors to achieve blind image deblurring. In 2019, Chen et al. [11] observed that blurring reduces the magnitude of the image gradient and proposed a blind restoration method based on the local maximum gradient (LMG). In 2020, Lim et al. [12] fused an L0-norm and L2-norm texture-aware prior to process remotely sensed blurred images; this method restores textured image regions well. In 2020, Cai et al. [13] proposed DBCPeNet, a neural-network-based approach. In 2020, Wu et al. [14] proposed a network for video deblurring. In 2020, Zhang et al. [15] used two GANs to learn deblurring: the first GAN learned how to blur clear images and then guided the second GAN to learn how to convert blurred images into clear images. In 2020, Li et al. [16] found that, for images obtained by learning-based methods, the peak signal-to-noise ratio (PSNR) cannot accurately reflect image quality, so they proposed a new image quality evaluation method. In 2021, Ren et al. [17] presented a spatially aware deblurring technique. In 2021, Ren et al. [18] found that the residuals caused by blur and other image degradations are spatially dependent and complex in distribution, and therefore trained on a set of blurred and ground-truth image pairs to parameterize and learn the regularization term of the restored image. Hu et al. [19] proposed a network for single-image reconstruction. Lu et al. [20] proposed an unsupervised deblurring method.

Here, a windowed-total-variation regularization constraint that achieves accurate estimation of the BK is proposed. Unlike existing methods, the windowed-total-variation regularization constraint is derived from the spatial scale of image edges rather than their magnitude, so it achieves more accurate BK estimation and higher-quality image restoration.

2. The Proposed Windowed-Total-Variation Constraint

In [21], a relative total variation model (RTVM) was proposed for structure extraction. Our method is based on the RTVM and is shown in formula (2):

(2)
$$r(i)=\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}+\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}$$

The differences between formula (2) and the RTVM in [21] are: the weight in [21] is removed, and squares are used instead of absolute values. N(i) denotes an image block centered on pixel i, $\nabla_{h} x$ and $\nabla_{v} x$ are the discrete first-order difference operations in the horizontal and vertical directions, respectively, and $\varepsilon$ is a small positive number.
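As a concrete illustration, the following sketch computes the map r(i) of formula (2) using box-filter window sums; the window size `win` and `eps` are illustrative parameters (the paper ties the window width to the BK width, as discussed next).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_tv_measure(x, win, eps=1e-3):
    """Compute r(i) from Eq. (2): ratio of the windowed sum of squared
    gradients to the squared windowed sum of gradients, per direction."""
    gh = np.diff(x, axis=1, append=x[:, -1:])  # horizontal forward difference
    gv = np.diff(x, axis=0, append=x[-1:, :])  # vertical forward difference
    n = win * win
    # uniform_filter gives local means; multiply by window area to get sums
    sum_gh2 = uniform_filter(gh * gh, win) * n
    sum_gh = uniform_filter(gh, win) * n
    sum_gv2 = uniform_filter(gv * gv, win) * n
    sum_gv = uniform_filter(gv, win) * n
    return sum_gh2 / (sum_gh ** 2 + eps) + sum_gv2 / (sum_gv ** 2 + eps)
```

Intuitively, texture gradients cancel inside the window, making the denominator small and r(i) large, while an edge at a spatial scale larger than the window keeps its gradients consistent, making r(i) small; minimization therefore favors large-scale edges.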

It can be seen from formula (2) that, if the width of N(i) is set to be the same as that of the BK, then minimizing formula (2) extracts the edges whose spatial scale is larger than the BK, regardless of their magnitude. Therefore, to estimate the BK accurately, this paper presents a windowed-total-variation constraint model, shown in formula (3):

(3)
$$\min _{x, k}\|x * k-y\|_{2}^{2}+\lambda_{u} R(x)+\lambda_{k}\|k\|_{p}$$

where $R(x)=\sum_{i} r(i)$, and $\|\cdot\|_{p}$ and $\|\cdot\|_{2}$ denote the Lp norm and the L2 norm, respectively; $\lambda_{u}$ and $\lambda_{k}$ are regularization parameters. From Eq. (3) we can see that R(x) is a windowed-total-variation regularization constraint term that extracts only the useful large-scale edges, while $\|k\|_{p}$ is a sparsity constraint term that preserves the sparsity of the BK. In Section 3, we discuss how to solve the proposed windowed-total-variation regularization constraint model in detail.

3. The Solution of the Proposed Windowed-Total-Variation Method

In this section, we adopt an alternating iterative algorithm that solves model (3) by splitting it into an x sub-problem and a k sub-problem.
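At a high level, the alternating scheme can be sketched as below. The helper functions (`gradient_weights`, `update_x`, `update_k`, `soft_threshold`, `project_kernel`) correspond to the solutions derived in Sections 3.1 and 3.2 and are sketched there; the iteration counts are illustrative, not the paper's exact schedule.

```python
import numpy as np

def blind_deblur(y, k_init, lam_u, lam_k, beta, win, n_outer=20, n_inner=5):
    """Alternating minimization of model (3): solve the x sub-problem with
    k fixed, then the k sub-problem with x fixed."""
    x, k = y.copy(), k_init.copy()
    b_k = k.copy()
    for _ in range(n_outer):
        # x sub-problem (Section 3.1): refresh weight maps, then FFT solve
        gh = np.diff(x, axis=1, append=x[:, -1:])
        gv = np.diff(x, axis=0, append=x[-1:, :])
        Wh, Wv = gradient_weights(gh, win), gradient_weights(gv, win)
        x = update_x(y, k, Wh, Wv, lam_u)
        # k sub-problem (Section 3.2): alternate k and b_k, then project
        for _ in range(n_inner):
            k = update_k(x, y, b_k, beta, k.shape[0])
            b_k = soft_threshold(k, lam_k, beta)
        k = project_kernel(k)
    return x, k
```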

3.1 Solving x Sub-problem

For the solution of the x sub-problem, we fix k, and formula (3) is transformed into:

(4)
$$\min _{x}\|x * k-y\|_{2}^{2}+\lambda_{u} R(x)$$

Obviously, the key to solving formula (4) is handling the regularization term R(x). First, we reorganize the elements in the horizontal term $\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}$ as follows:

(5)
$$\sum_{i} \frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}=\sum_{j} \sum_{i \in N(j)} \frac{\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j^{\prime} \in N(i)}\left(\nabla_{h} x\right)(j^{\prime})\right)^{2}+\varepsilon}=\sum_{j} w_{h}(j)\left(\left(\nabla_{h} x\right)(j)\right)^{2}$$

In the same way, for the vertical term $\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}$, we can get:

(6)
$$\sum_{i} \frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}=\sum_{j} \sum_{i \in N(j)} \frac{\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j^{\prime} \in N(i)}\left(\nabla_{v} x\right)(j^{\prime})\right)^{2}+\varepsilon}=\sum_{j} w_{v}(j)\left(\left(\nabla_{v} x\right)(j)\right)^{2}$$
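The reorganized sums in Eqs. (5) and (6) reduce to two nested box filters, which the following sketch computes for one gradient field; `win` and `eps` are illustrative parameters, and the same routine yields both $w_h$ (from the horizontal gradients) and $w_v$ (from the vertical gradients).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_weights(g, win, eps=1e-3):
    """Weight maps w_h / w_v of Eqs. (5)-(6): for each pixel j, accumulate
    1 / ((windowed gradient sum)^2 + eps) over every window N(i) containing j."""
    n = win * win
    win_sum = uniform_filter(g, win) * n  # sum of g over N(i), for each i
    inv = 1.0 / (win_sum ** 2 + eps)      # per-window denominator term
    return uniform_filter(inv, win) * n   # sum over all i such that j is in N(i)
```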

Substituting Eqs. (5) and (6) into Eq. (4), we can get formula (7):

(7)
$$\min _{x}\|x * k-y\|_{2}^{2}+\lambda_{u} \sum_{j}\left(W_{h}(j)\left(\left(\nabla_{h} x\right)(j)\right)^{2}+W_{v}(j)\left(\left(\nabla_{v} x\right)(j)\right)^{2}\right)$$

where $W_{h}(j)=w_{h}(j)$, $W_{v}(j)=w_{v}(j)$, and ∘ denotes element-wise multiplication. Therefore, x can be solved using the fast Fourier transform (FFT):

(8)
$$x=F^{-1}\left(\frac{\overline{F(k)} \circ F(y)}{\overline{F(k)} \circ F(k)+\lambda_{u}\left(W_{h} \circ \overline{F\left(\nabla_{h}\right)} \circ F\left(\nabla_{h}\right)+W_{v} \circ \overline{F\left(\nabla_{v}\right)} \circ F\left(\nabla_{v}\right)\right)}\right)$$
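A sketch of the x-update of Eq. (8) follows. It assumes the weight maps Wh and Wv are held fixed during the solve (so the system stays diagonal in the Fourier domain) and uses a standard psf2otf-style padding; these are our reading of the derivation, not code from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(psf, shape):
    """Zero-pad a small filter to the image size and circularly shift its
    center to the origin, so fft2 yields its transfer function."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        otf = np.roll(otf, -(s // 2), axis=ax)
    return fft2(otf)

def update_x(y, k, Wh, Wv, lam_u):
    """x-update of Eq. (8) with fixed weight maps Wh, Wv."""
    Fk = psf2otf(k, y.shape)
    Fdh = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal difference
    Fdv = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical difference
    num = np.conj(Fk) * fft2(y)
    den = (np.abs(Fk) ** 2
           + lam_u * (Wh * np.abs(Fdh) ** 2 + Wv * np.abs(Fdv) ** 2))
    return np.real(ifft2(num / den))
```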

3.2 Solving k Sub-problem

For the solution of the k sub-problem, we fix x and solve for k in the gradient domain; formula (3) is thus transformed into:

(9)
$$\min _{k}\left\|\left(\nabla_{h} x, \nabla_{v} x\right) * k-\left(\nabla_{h} y, \nabla_{v} y\right)\right\|_{2}^{2}+\lambda_{k}\|k\|_{p}$$

Given the sparsity of BKs, we set p=1. To solve the k sub-problem efficiently, we introduce an auxiliary variable $b_{k}$ and get:

(10)
$$\min _{k, b_{k}}\|\nabla x * k-\nabla y\|_{2}^{2}+\lambda_{k}\left\|b_{k}\right\|_{1}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$

where $b_{k}$ is the auxiliary variable and $\beta$ is a penalty parameter. Similarly, we can solve Eq. (10) by splitting it into two sub-problems, $b_{k}$ and k, respectively:

Fixing $b_{k}$, we can solve k by:

(11)
$$\min _{k}\left\|\left(\nabla_{h} x, \nabla_{v} x\right) * k-\left(\nabla_{h} y, \nabla_{v} y\right)\right\|_{2}^{2}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$

The same FFT approach as in Eq. (8) can then be used directly to solve k:

(12)
$$k=F^{-1}\left(\frac{\overline{F\left(\nabla_{h} x\right)} \circ F\left(\nabla_{h} y\right)+\overline{F\left(\nabla_{v} x\right)} \circ F\left(\nabla_{v} y\right)+\beta F\left(b_{k}\right)}{\overline{F\left(\nabla_{h} x\right)} \circ F\left(\nabla_{h} x\right)+\overline{F\left(\nabla_{v} x\right)} \circ F\left(\nabla_{v} x\right)+\beta}\right)$$
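The k-update of Eq. (12) can be sketched as follows; zero-padding $b_k$ to the image size before the FFT and cropping the result back to the kernel support are implementation assumptions on our part.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def update_k(x, y, b_k, beta, ksize):
    """k-update of Eq. (12), solved in the gradient domain via FFT."""
    dh_x = np.diff(x, axis=1, append=x[:, -1:])
    dv_x = np.diff(x, axis=0, append=x[-1:, :])
    dh_y = np.diff(y, axis=1, append=y[:, -1:])
    dv_y = np.diff(y, axis=0, append=y[-1:, :])
    Fxh, Fxv = fft2(dh_x), fft2(dv_x)
    bk_pad = np.zeros_like(y)
    bk_pad[:ksize, :ksize] = b_k       # zero-pad b_k to the image size
    num = (np.conj(Fxh) * fft2(dh_y) + np.conj(Fxv) * fft2(dv_y)
           + beta * fft2(bk_pad))
    den = np.abs(Fxh) ** 2 + np.abs(Fxv) ** 2 + beta
    return np.real(ifft2(num / den))[:ksize, :ksize]  # crop to kernel support
```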

Fixing k, we can get $b_{k}$ by:

(13)
$$\min _{b_{k}} \lambda_{k}\left\|b_{k}\right\|_{1}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$

Using the method in [6], Eq. (13) can be solved by:

(14)
$$b_{k}=\operatorname{sign}(k) \cdot \max \left(|k|-\frac{\lambda_{k}}{2 \beta}, 0\right)$$
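A minimal sketch of this shrinkage step; the threshold $\lambda_{k}/(2\beta)$ follows from the quadratic penalty $\beta\|b_k - k\|_2^2$ as written in Eq. (10).

```python
import numpy as np

def soft_threshold(k, lam_k, beta):
    """b_k-update of Eqs. (13)-(14): element-wise soft-thresholding of k."""
    t = lam_k / (2.0 * beta)
    return np.sign(k) * np.maximum(np.abs(k) - t, 0.0)
```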

Finally, to get a meaningful BK, we use the following constraint on k:

(15)
$$k(p, q)=\begin{cases}k(p, q) & k(p, q)>0 \\ 0 & k(p, q) \leq 0\end{cases}, \qquad \sum_{(p, q) \in D} k(p, q)=1$$
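The constraint of Eq. (15) amounts to a simple projection: clip negative entries, then renormalize so the kernel sums to one. A minimal sketch:

```python
import numpy as np

def project_kernel(k):
    """Projection of Eq. (15): keep only positive entries, then normalize."""
    k = np.maximum(k, 0.0)
    s = k.sum()
    return k / s if s > 0 else k
```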

4. Experimental Results

We compare our method with the methods of [3,7,8,11] to verify its superiority (Table 1).

Table 1.

Average PSNR and mean SSIM of all methods on all 704 artificially blurred images (NSP = normalized sparsity prior; DCP = dark channel prior)

Blur kernel   Metric          NSP [3]   DCP [7]    L0 prior [8]   LMG [11]   Proposed
1             Avg PSNR (dB)   19.619    20.642     19.704         19.26      21.087
1             Mean SSIM       0.6899    0.7294     0.6996         0.6746     0.7322
2             Avg PSNR (dB)   16.962    18.1369    17.845         18.096     18.432
2             Mean SSIM       0.5041    0.6717     0.6626         0.6716     0.6745
3             Avg PSNR (dB)   18.213    19.611     19.006         19.646     19.978
3             Mean SSIM       0.6724    0.7796     0.7354         0.7727     0.7896
4             Avg PSNR (dB)   15.861    16.316     17.575         17.762     18.334
4             Mean SSIM       0.4368    0.4446     0.5505         0.5371     0.5896
5             Avg PSNR (dB)   18.393    18.257     17.983         18.659     18.682
5             Mean SSIM       0.6257    0.6127     0.6058         0.6446     0.6517
6             Avg PSNR (dB)   16.703    18.213     16.548         16.516     18.325
6             Mean SSIM       0.4833    0.5807     0.5108         0.5218     0.5863
7             Avg PSNR (dB)   19.157    17.955     18.414         18.145     19.265
7             Mean SSIM       0.6254    0.5653     0.5922         0.5857     0.6313
8             Avg PSNR (dB)   16.069    18.025     17.707         18.333     19.851
8             Mean SSIM       0.3949    0.5935     0.5753         0.5960     0.6021

4.1 Artificial Image Dataset Experiments

In the artificial image dataset experiments, we used three different datasets [22-24], comprising 704 artificially blurred images in total. Fig. 1 and Table 2 show the deconvolution error ratios (DERs) of all methods on all 704 artificially blurred images.

Fig. 1.

Cumulative histogram of the deconvolution error ratios of all methods on all 704 artificially blurred images.

Table 2.

Statistical percentage (%) of deconvolution error ratios

DER    NSP [3]   DCP [7]   L0 prior [8]   LMG [11]   Proposed
<1.5   1.99      33.24     16.62          62.22      80.25
<2     3.55      67.76     46.02          87.22      95.30
<3     9.66      88.21     73.58          97.30      99.28
<4     18.89     94.74     84.09          98.86      100
<5     29.55     96.88     87.36          99.29      100
<6     40.77     97.87     90.77          90.77      100

Fig. 2.

Artificially blurred image experiments (campsite): (a) original clear image, (b) corresponding blurred image, (c)–(f) results of the methods of [3], [7], [8], and [11], and (g) result of the proposed method.

Fig. 3.

Artificially blurred image experiments (terrace view): (a) original clear image, (b) corresponding blurred image, (c)–(f) results of the methods of [3], [7], [8], and [11], and (g) result of the proposed method.

The BKs estimated by the methods of [3,7,8,11] and their final deblurred images both show obvious defects of varying degrees: kernel defects such as divergence, discontinuity, and smearing eventually lead to ringing, over-smoothing, and noise in the final deblurred images. By contrast, the proposed method not only estimates the most accurate BK (with well-preserved continuity and sparsity), but its restored images also have sharper edge details and suppress these blemishes very well (Figs. 2 and 3).

4.2 Experiments on Real Images

Next, we conduct experiments on real blurred images (Fig. 4).

Fig. 4.

Real blurred image experiments (license plate): (a) the real blurry image, (b) the restored result and its magnified area for the method of [3], (c)–(e) results of [7], [8], and [11], and (f) our method's result.

Fig. 5 shows that, among the estimated BKs, those of [7] and [8] have obvious expansion and tailing defects, the BK estimated by [3] collapses to a single point, and the BK estimated by [11] has obvious discontinuities. In the final restored images obtained by the methods of [3,7,8], obvious trailing-shadow defects can be seen at the eye highlight, along with color dispersion and blurring. Although the restoration result of [11] has no obvious highlight smearing, it is still blurred to some degree. In contrast, the method proposed in this paper estimates a more accurate BK and obtains the highest-quality restored image; see the corresponding magnified areas in Fig. 5(b)–5(f).

In Fig. 6, the BK estimated by [3] is seriously distorted and contains many flaws, the BK estimated by [7] has low resolution, and the BKs estimated by [8] and [11] show blur and discontinuity, respectively. Moreover, the restored image of [3] is still blurry, the restored image of [7] shows color diffusion in the details, and the restored images of [8] and [11] suffer from blurring, with ripple-like artifacts in richly textured areas; see the corresponding magnified areas in Fig. 6(b)–6(f).

Fig. 5.

Real blurred image experiments (face picture): (a) the real blurry image, (b) the restored result, estimated BK, and its magnified area for the method of [3], (c)–(e) results of [7], [8], and [11], and (f) our method's result.

Fig. 6.

Real blurred image experiments (dolls): (a) the real blurry image, (b) the restored result, estimated BK, and its magnified area for the method of [3], (c)–(e) results of [7], [8], and [11], and (f) our method's result.

5. Conclusion

We propose a windowed-total-variation regularization constraint model for blind image deblurring. The proposed method relies on the spatial scale of image edges rather than their amplitude to extract the edges useful for accurate BK estimation and high-quality image restoration. Extensive experiments demonstrate the superiority of our method.

Acknowledgement

The authors thank D. Krishnan, J. S. Pan, and L. Chen for providing their codes.

Biography

Ganghua Liu
https://orcid.org/0000-0001-8490-9323

He received an MBA from the School of Business Administration, Chongqing, China, in 2011. He is currently a senior engineer at State Grid Chongqing Electric Power Company. His research interests include signal processing, image processing, and intelligent rear service.

Biography

Wei Tian
https://orcid.org/0000-0002-0782-9781

He received a B.S. degree in electrical engineering from Sichuan University, Chengdu, Sichuan, China, in 1998. He is currently an engineer at State Grid Chongqing Electric Power Company. His research interests include signal processing, image processing, and intelligent rear service.

Biography

Yushun Luo
https://orcid.org/0000-0002-3387-3452

He received an M.E. degree in electrical engineering from Chongqing University, Chongqing, China, in 2006. He is currently a senior economist at State Grid Chongqing Electric Power Company. His research interests include signal processing, image processing, and computer vision.

Biography

Juncheng Zou
https://orcid.org/0000-0002-2155-1608

He received an M.E. degree in electrical engineering from Chongqing University, Chongqing, China, in 2020. He is currently an engineer at State Grid Chongqing Electric Power Company. His research interests include signal processing, image processing, and intelligent rear service.

Biography

Shu Tang
https://orcid.org/0000-0001-7517-7992

He received an M.E. degree in computer science from Chongqing University of Posts and Telecommunications, Chongqing, China, in 2007, and a Ph.D. degree from Chongqing University, China, in 2013. He is currently an associate professor in the College of Computer Science and Technology at Chongqing University of Posts and Telecommunications, China. His research interests include signal processing, image processing, and computer vision.

References

  • 1 Q. Shan, J. Jia, and A. Agarwala, "High-quality motion deblurring from a single image," ACM Transactions on Graphics, vol. 27, no. 3, pp. 1-10, 2008.
  • 2 M. S. Almeida and L. B. Almeida, "Blind and semi-blind deblurring of natural images," IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 36-52, 2010.
  • 3 D. Krishnan, T. Tay, and R. Fergus, "Blind deconvolution using a normalized sparsity measure," in Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, 2011, pp. 233-240.
  • 4 L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, 2013, pp. 1107-1114.
  • 5 Z. Ma, R. Liao, X. Tao, L. Xu, J. Jia, and E. Wu, "Handling motion blur in multi-frame super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, 2015, pp. 5224-5232.
  • 6 W. Zuo, D. Ren, D. Zhang, S. Gu, and L. Zhang, "Learning iteration-wise generalized shrinkage-thresholding operators for blind deconvolution," IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1751-1764, 2016.
  • 7 J. Pan, D. Sun, H. Pfister, and M. H. Yang, "Blind image deblurring using dark channel prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, 2016, pp. 1628-1636.
  • 8 J. Pan, Z. Hu, Z. Su, and M. H. Yang, "L0-regularized intensity and gradient prior for deblurring text images and beyond," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 2, pp. 342-355, 2017.
  • 9 Y. Guo and H. Ma, "Image blind deblurring using an adaptive patch prior," Tsinghua Science and Technology, vol. 24, no. 2, pp. 238-248, 2019.
  • 10 X. Chen, R. Yang, C. Guo, S. Ge, Z. Wu, and X. Liu, "Hyper-Laplacian regularized non-local low-rank prior for blind image deblurring," IEEE Access, vol. 8, pp. 136917-136929, 2020.
  • 11 L. Chen, F. Fang, T. Wang, and G. Zhang, "Blind image deblurring with local maximum gradient prior," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, 2019, pp. 1742-1750.
  • 12 H. Lim, S. Yu, K. Park, D. Seo, and J. Paik, "Texture-aware deblurring for remote sensing images using L0-based deblurring and L2-based fusion," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 3094-3108, 2020.
  • 13 J. Cai, W. Zuo, and L. Zhang, "Dark and bright channel prior embedded network for dynamic scene deblurring," IEEE Transactions on Image Processing, vol. 29, pp. 6885-6897, 2020.
  • 14 J. Wu, X. Yu, D. Liu, M. Chandraker, and Z. Wang, "DAVID: dual-attentional video deblurring," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, 2020, pp. 2376-2385.
  • 15 K. Zhang, W. Luo, Y. Zhong, L. Ma, B. Stenger, W. Liu, and H. Li, "Deblurring by realistic blurring," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, 2020, pp. 2734-2743.
  • 16 A. Li, J. Li, Q. Lin, C. Ma, and B. Yan, "Deep image quality assessment driven single image deblurring," in Proceedings of 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 2020, pp. 1-6.
  • 17 W. Ren, J. Zhang, J. Pan, S. Liu, J. Ren, J. Du, X. Cao, and M. H. Yang, "Deblurring dynamic scenes via spatially varying recurrent neural networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
  • 18 D. Ren, W. Zuo, D. Zhang, L. Zhang, and M. H. Yang, "Simultaneous fidelity and regularization learning for image restoration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 284-299, 2021.
  • 19 Y. Hu, J. Li, Y. Huang, and X. Gao, "Channel-wise and spatial feature modulation network for single image super-resolution," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 11, pp. 3911-3927, 2020.
  • 20 B. Lu, J. C. Chen, and R. Chellappa, "UID-GAN: unsupervised image deblurring via disentangled representations," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 2, no. 1, pp. 26-39, 2020.
  • 21 L. Xu, Q. Yan, Y. Xia, and J. Jia, "Structure extraction from texture via relative total variation," ACM Transactions on Graphics, vol. 31, no. 6, 2012.
  • 22 A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, "Understanding and evaluating blind deconvolution algorithms," in Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 1964-1971.
  • 23 R. Kohler, M. Hirsch, B. Mohler, B. Scholkopf, and S. Harmeling, "Recording and playback of camera shake: benchmarking blind deconvolution with a real-world database," in Computer Vision – ECCV 2012. Heidelberg, Germany: Springer, 2012, pp. 27-40.
  • 24 L. Sun, S. Cho, J. Wang, and J. Hays, "Edge-based blur kernel estimation using patch priors," in Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, 2013, pp. 1-8.