

Chen Yong*, Meiyong Huang, Huanlin Liu, Jinliang Zhang, and Kaixin Shao

GAN-Based Local Lightness-Aware Enhancement Network for Underexposed Images

Abstract: Uneven lighting in real-world scenes causes visual degradation in underexposed regions. If these regions receive insufficient consideration during enhancement, the results suffer from over-/under-exposure, loss of details, and color distortion. Confronting such challenges, this paper proposes an unsupervised low-light image enhancement network guided by unpaired low-/normal-light images. The key components of our network are a super-resolution module (SRM), a GAN-based low-light image enhancement network (LLIEN), and a denoising-scaling module (DSM). The SRM improves the resolution of the low-light input images before illumination enhancement. This design preserves texture details more effectively by operating in high-resolution space. Subsequently, the local lightness attention module in LLIEN distinguishes unevenly illuminated areas and emphasizes low-light areas, ensuring spatially consistent illumination for locally underexposed images. Multiple discriminators, i.e., a global discriminator, a local region discriminator, and a color discriminator, then assess the results from different perspectives to avoid over-/under-exposure and color distortion, guiding the network to generate images in line with human aesthetic perception. Finally, the DSM removes noise and produces high-quality enhanced images. Both qualitative and quantitative experiments demonstrate that our approach achieves favorable results, indicating its superior capacity for illumination and texture detail restoration.

Keywords: GAN , Local Lightness Attention Module , Local Lightness-Aware , Low-Light Image Enhancement , Multiple Discriminators

1. Introduction

Unevenly illuminated low-light images suffer from low visibility in some local regions. To alleviate this problem, researchers have developed numerous promising approaches that tackle the low-light image enhancement task effectively. They can be roughly divided into physical means, histogram equalization (HE)-based, Retinex-based, deep learning-based, and adversarial learning-based methods. One characteristic of underexposed images is a low signal-to-noise ratio (SNR), meaning that noise is highly intensive and dominates the image signal [1]. Some physical means attempt to acquire sufficient light for the camera, such as extending the exposure time or increasing the ISO sensitivity. However, the former may introduce blur when the camera shakes or the object moves, and the latter may introduce intensive noise at higher ISO settings, degrading image quality.

HE-based methods have the advantage of real-time execution, which derives from simply stretching the dynamic range of images by evenly rearranging pixels. Brightness preserving dynamic histogram equalization (BPDHE) [2] is a global histogram equalization method that performs well in preserving the lightness order of the input image. However, it fails to recover the details of dark regions because of gray-level merging. HE-based methods generally make global adjustments that consider local dark areas insufficiently, resulting in over-/under-exposure. Besides, this kind of method fails to tackle noise efficiently.

Retinex-based methods, which assume that an image is an integration of illumination and reflectance, are built on Retinex theory. The key idea is to estimate and remove the influence of illumination, such as the average intensity of light. KinD++ [3] follows a divide-and-conquer principle: not only does it brighten dark regions, but it also removes hidden degradation artifacts such as noise and color distortion. However, for Retinex-based methods, as there are no clear definitions of ground-truth illumination and reflectance, the decomposition of an image becomes difficult.

Recently, the development of deep learning-based methods has significantly boosted the performance of image restoration tasks by learning the underlying signal features of input images. Lore et al. [4] proposed a stacked deep auto-encoder named low-light net (LLNet) to learn joint denoising and lightness enhancement; it was the first deep learning method applied to the low-light enhancement field. Lv et al. [5] give attention to underexposed regions through generated attention maps to avoid overexposing local areas. The deep stacked Laplacian restorer (DSLR) [6] adopts a decomposition-based scheme that separately recovers the global illumination and local details from the original input. It leverages valuable properties of the Laplacian pyramid, based on strong connections of higher-order residuals in a multi-scale structure, at both the image and feature levels. It is worth noting that most deep learning-based methods must be supported by large-scale paired low-/normal-light datasets. However, as it is impractical for low-/normal-light image pairs to be captured concurrently, collecting large-scale paired datasets with diversified content is challenging.

Generative adversarial networks (GANs) learn the mapping between two domains through adversarial learning and show excellent performance on domain transfer tasks. For example, the method in [7] performs style transfer well without paired images. Unpaired datasets contain images from two domains that need not depict the same scene but must present the essential characteristics of their domain, such as dark or bright. As images with low and normal illumination belong to the low- and normal-illumination domains, researchers tend to adopt the idea of domain transfer and apply a GAN to low-light image enhancement using unpaired datasets. This line of work overcomes the lack of large-scale paired datasets and demonstrates the remarkable advantage of GANs: they can be trained with unpaired data.

Nevertheless, the task of low-light image enhancement remains challenging. (1) Previous literature pays insufficient attention to local dark regions, which introduces over-/under-exposure artifacts during the enhancement procedure. (2) The enhancement procedure smooths out local structural details and distorts color, making the images inconsistent with human perceptual preference [8], which is not determined by any single aspect. However, previous literature focuses on only a single problem, such as illumination improvement, detail recovery, or noise removal. We claim that for images with fluctuating illumination distributions, the enhancement task needs to consider several aspects simultaneously: enhancing brightness in global and local areas, restoring local structural details, controlling color deviations, removing undesirable noise, and so on.

To satisfy the above goals and generate high-quality images, we propose a GAN-based local lightness-aware enhancement network, inspired by [9], that enhances locally underexposed images under the guidance of unpaired low-/normal-light images. Specifically, our architecture includes three components: a super-resolution module (SRM) for detail preservation, a low-light image enhancement network (LLIEN) for lightening up global and local dark regions, and a denoising-scaling module (DSM) for noise suppression. In the first stage, the SRM improves the resolution of the input low-light images and generates fine-grained high-resolution versions. This design enables the subsequent enhancement to be accomplished in high-resolution space, so texture information is retained because the details of local dark regions have been amplified by super-resolution. In the second stage, to enhance local illumination and obtain images with uniformly distributed illumination, we integrate a local lightness attention module into LLIEN, which guides the generator to adaptively emphasize local low-light areas. Then, we introduce multiple discriminators to evaluate the enhanced images from different perspectives, driving the network to generate images that match human visual preferences. Specifically, the global discriminator evaluates the global lightness enhancement. The local region discriminator distinguishes whether local areas are lightened up to realistic normal-light ones, which helps to improve the lightness of local dark areas. The color discriminator evaluates the naturalness of the restored color, which is essential in controlling color bias. In the last stage, the DSM removes the noise amplified in the SRM and LLIEN and down-samples the result to the scale of the original low-light input. Compared with other algorithms, our approach considers multiple aspects, i.e., illumination improvement, detail recovery, and noise removal, to conform to human visual preference, instead of addressing a single task. Therefore, our results are more realistic and achieve higher aesthetic quality. Both qualitative and quantitative experiments indicate that our approach achieves considerable enhancements. To sum up, the main contributions of this work are as follows:

We propose a super-resolution strategy specifically designed to perform enhancement in high-resolution space, enabling the network to retain content and texture details in local areas suffering from low visibility.

We design a local lightness attention module to distinguish underexposed regions from well-illuminated regions, enabling the network to pay more attention to local dark regions and protecting the whole image from over-/under-exposure artifacts.

We introduce multiple discriminators, which assess the enhanced images from the perspectives of global illumination distribution, local area exposure, and color distortion, driving the network to generate images that conform to human perceptual preferences.

The organization of the rest of the paper is as follows. Section 2 briefly presents the SRM and then introduces the LLIEN, where the local lightness attention module is introduced first. After that, multiple discriminators and DSM are presented successively. Section 3 provides performance analysis, including an ablation study and a comparison with other algorithms. Finally, concluding remarks are provided in Section 4.

2. Proposed Approach

2.1 Architecture Overview

The primary purpose of our method is to light up both local dark regions and the whole low-light image while recovering texture details, avoiding over-/under-exposure in local regions, and controlling color deviation. As illustrated in Fig. 1, our model consists of three main components: the SRM, the LLIEN (which includes a local lightness attention module and multiple discriminators), and the DSM. The SRM improves the resolution of the input low-light images and then feeds the fine-grained low-light images into LLIEN. This strategy helps to avoid detail loss during lightness enhancement in LLIEN. In LLIEN, the local lightness attention module generates an attention map that distinguishes dark and bright areas. Under the guidance of the attention map, the generator of LLIEN pays more attention to lightening up local dark regions rather than bright ones, avoiding over-/under-exposure artifacts. Afterward, benefitting from carefully designed loss functions, multiple discriminators evaluate the generated images from different perspectives, helping LLIEN to enhance global and local brightness and control color deviation. Finally, the DSM removes the noise and down-samples the clean image to the size of the original low-light input.

Fig. 1. Illustration of the proposed model. SRM generates a high-resolution version of low-light images, LLIEN lightens up local dark regions and the whole images, and DSM suppresses noise and generates clear, enhanced images.

2.2 Super-Resolution Module

Local structural details are usually smoothed out during the low-light image enhancement procedure [10]. Confronting this challenge, we design an SRM. Although bilinear interpolation is a practical approach to super-resolution, it introduces blur into underexposed areas. To deal with this issue, we employ a classical method, EDSR [11]. The SRM consists of 32 identical residual blocks, which are crucial for detail recovery, together with several convolutional and up-sampling layers. Each residual block contains two convolutional layers with a ReLU activation in between, and no batch normalization layers. The original input low-light images are first fed into the SRM. Features extracted by the residual blocks are then fused with the input features and subsequently up-sampled to twice the original size. Constant scaling layers with a scaling factor of 0.1 are placed at the end of each residual block for stable training.
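
The following is a minimal PyTorch sketch of such an EDSR-style SRM, consistent with the description above (32 residual blocks without batch normalization, 0.1 residual scaling, a global skip connection, and 2x up-sampling). The channel width of 64 and the PixelShuffle up-sampler follow common EDSR practice [11] and are assumptions here, not settings reported in this paper.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv, no batch norm,
    with a constant residual scaling factor for stable training."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.body(x) * self.res_scale

class SRM(nn.Module):
    """Sketch of the super-resolution module: 32 residual blocks,
    a global skip connection, and a 2x PixelShuffle up-sampler."""
    def __init__(self, channels=64, n_blocks=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(
            *[ResBlock(channels) for _ in range(n_blocks)],
            nn.Conv2d(channels, channels, 3, padding=1))
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                # 2x spatial up-sampling
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)          # fuse residual features with input features
        return self.up(feat)                   # twice the original size
```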

2.3 Low-light Image Enhancement Network
2.3.1 Local lightness attention module

To prevent local regions from over-/under-exposure artifacts, we introduce a local lightness attention module that generates an attention map guiding LLIEN to pay more attention to local dark regions. The local lightness attention module consists of channel attention and spatial attention. Channel attention is vital in deciding which feature maps are more meaningful than others. Spatial attention concentrates on the informative parts of a specific feature map at the pixel level. The structure diagram of the local lightness attention module is illustrated in Fig. 2. Specifically, inspired by [12,13], the module first performs a channel-wise global average pooling operation to aggregate the spatial information and obtain a squeezed vector. It then generates a weighted vector through two fully-connected (fc) layers, a ReLU (rectified linear unit) function, and a sigmoid function. Next, we obtain the channel-wise attention map by multiplying the weighted vector with the input feature. In short, channel attention can be expressed as follows:

Fig. 2. Structure diagram of the local lightness attention module, which is a combination of channel attention and spatial attention.

(1)
$$A=\operatorname{AvgPool}(I)$$

(2)
$$F=\sigma\left(W_2 \delta\left(W_1 A\right)\right)$$

where $I$ denotes the input feature maps of low-light images, $W_1$ and $W_2$ denote the two fc layers, $\delta$ refers to the ReLU function, and $\sigma$ refers to the sigmoid function.

To highlight the informative regions, a global average pooling and a global max pooling operation are applied to the feature maps along the channel axis. Each squeezes the number of channels to one and transforms the initial feature maps from $W \times H \times C$ to $W \times H \times 1$, where $W$ denotes width and $H$ denotes height. These two feature maps are concatenated to generate an efficient feature descriptor. Then convolution layers and a sigmoid function are applied to the concatenated feature descriptor to obtain the spatial attention map. In short, spatial attention can be expressed as follows:

(3)
$$\text{Output}=\sigma\left(\operatorname{conv}\left(\left[\operatorname{AvgPool}(F) ; \operatorname{MaxPool}(F)\right]\right)\right)$$
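
A compact PyTorch sketch of the module described by Eqs. (1)-(3) is given below. The reduction ratio of 16 in the fc layers and the 7x7 convolution kernel are borrowed from [12,13] as assumptions; the paper does not report these hyperparameters.

```python
import torch
import torch.nn as nn

class LocalLightnessAttention(nn.Module):
    """Sketch of the channel + spatial attention of Eqs. (1)-(3)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: GAP -> fc -> ReLU -> fc -> sigmoid (Eqs. (1)-(2))
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: channel-wise avg/max maps -> conv -> sigmoid (Eq. (3))
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        a = x.mean(dim=(2, 3))              # Eq. (1): A = AvgPool(I)
        w = self.fc(a).view(b, c, 1, 1)     # Eq. (2): F = sigma(W2 delta(W1 A))
        f = x * w                           # channel-wise attention map
        avg = f.mean(dim=1, keepdim=True)   # W x H x 1 average map
        mx = f.amax(dim=1, keepdim=True)    # W x H x 1 max map
        s = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return f * s                        # Eq. (3) applied to F
```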

2.3.2 Multiple discriminators

Global discriminator

The global discriminator is dedicated to distinguishing enhanced images from real normal-light images by judging whether they satisfy the distribution of real normal-light images. It assists the network in improving the holistic illumination of low-light images at the image level, generating globally enlightened images.

Local region discriminator

An image-level global discriminator is not enough to enhance local dark areas. Inspired by [9], we add a local region discriminator so that local dark areas are fully considered in addition to global lightness enhancement. Specifically, we evenly crop the generated and real normal-light images into sub-images for every generated image. The number of sub-images is preset to 4 to reduce the computational cost. The local region discriminator evaluates whether each sub-image looks like a realistic, normally illuminated image, ensuring that over-/under-exposure artifacts are avoided in all local bright/dark regions.
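
As a minimal sketch, the even cropping could be implemented as a 2x2 quadrant split; the exact crop layout is not specified in the paper, so the quadrant split below is an assumption.

```python
import torch

def crop_quadrants(img: torch.Tensor):
    """Evenly crop a batch of images (B, C, H, W) into four sub-images
    for the local region discriminator (assumed 2x2 quadrant layout)."""
    _, _, h, w = img.shape
    hh, hw = h // 2, w // 2
    return [img[..., :hh, :hw], img[..., :hh, hw:],   # top-left, top-right
            img[..., hh:, :hw], img[..., hh:, hw:]]   # bottom-left, bottom-right
```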

Color discriminator

We use an image assessment network pre-trained on the Aesthetic Visual Analysis (AVA) dataset to evaluate the aesthetic quality of the enhanced results. As it is difficult to assess an image with no reference, we adopt a relativistic classifier that evaluates paired inputs composed of synthetic and ground-truth images. The classifier outputs a binary number indicating whether the enhanced image has higher (1) or lower (0) aesthetic quality than the ground-truth image. This strategy drives LLIEN to generate images with more realistic colors than the ground truth.

In conclusion, the multiple discriminators evaluate the enhanced result from three aspects, aiming to restore global brightness and fine details while avoiding over-/under-exposure and color cast.

2.3.3 Loss functions

Adversarial loss: We adopt the original LSGAN (least squares generative adversarial networks) [14] loss as our adversarial loss to learn the mapping between underexposed and target normal-light images.

(5)
$$L_{adv}=E_{x \sim P_N}\left[\log D_N(x)\right]+E_{x \sim P_L}\left[\log \left(1-D_N\left(G_{L \rightarrow N}(x)\right)\right)\right].$$

Color loss: We introduce color loss to enforce the generated images to satisfy the color distribution of the normal-light images,

(6)
$$L_{\text{color}}=\hat{y}\left(-\log \left(\Omega\left(I_{enh}-I_G\right)\right)\right)+(1-\hat{y})\left(-\log \left(1-\Omega\left(I_G-I_{enh}\right)\right)\right)$$

where $\hat{y}$ indicates the ground-truth binary number, $I_{enh}$ and $I_G$ represent the enhanced result and the ground-truth image, respectively, and $\Omega$ is the aesthetic network.
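
Transcribing Eq. (6) directly, a hedged PyTorch sketch could look as follows; `omega` stands for the pre-trained aesthetic network, and the small epsilon for numerical stability is an addition of ours, not part of the paper's formulation.

```python
import torch

def color_loss(omega, i_enh, i_gt, y_hat, eps=1e-8):
    """Binary cross-entropy form of Eq. (6). omega is the relativistic
    aesthetic classifier; y_hat is the ground-truth binary label
    (1: the enhanced image should rank higher than the ground truth)."""
    p = omega(i_enh - i_gt)   # probability that i_enh ranks higher
    q = omega(i_gt - i_enh)   # probability for the reversed pair
    return -(y_hat * torch.log(p + eps)
             + (1 - y_hat) * torch.log(1 - q + eps)).mean()
```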

Perceptual preserving loss: We use the perceptual preserving loss [9], computed from a pre-trained VGG network, to model the feature-space distance and preserve image content features.

(7)
$$L_p=\frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}\left(\Phi_{i,j}\left(I^L\right)-\Phi_{i,j}\left(G\left(I^L\right)\right)\right)^2$$

$I^L$ stands for the input low-light image, and $G(I^L)$ denotes the enhanced result of the generator. $\Phi_{i,j}$ represents the feature map extracted from the pre-trained VGG-16 network, where $i$ indexes the $i$-th max pooling layer and $j$ the $j$-th convolutional layer after the $i$-th max pooling layer. $W_{i,j}$ and $H_{i,j}$ are the width and height of the extracted feature maps. We set $i$ to 4 and $j$ to 1.
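
A sketch of this loss with torchvision follows, under the assumption that the ($i=4$, $j=1$) feature corresponds to the conv5_1 output of VGG-16, i.e., index 24 of torchvision's `vgg16().features`; verify the layer index against your own VGG variant.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Truncate VGG-16 after conv5_1, the first convolution following the
# 4th max-pooling layer (i = 4, j = 1); index 24 is an assumption based
# on torchvision's layer ordering.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:25].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(low, enhanced):
    """Eq. (7): mean squared distance between the VGG features of the
    low-light input and of the generator output."""
    return F.mse_loss(vgg(low), vgg(enhanced))
```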

Reconstruction loss: The reconstruction loss constrains the $L_1$ distance between the generated images and high-quality normal-light images, helping to drive the network to generate more realistic images.

(8)
$$L_{rec}=E_{x \sim P_N}\left[\left|x-G_{L \rightarrow N}(x)\right|_1\right].$$

The overall loss function for training our architecture is shown below:

(9)
$$L_{all}=\omega_1 L_{adv}+\omega_2 L_{\text{color}}+\omega_3 L_p+\omega_4 L_{rec}.$$
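
In code, Eq. (9) is a simple weighted sum; the paper does not report the weight values $\omega_1,\ldots,\omega_4$, so the defaults below are placeholders only.

```python
def total_loss(loss_adv, loss_color, loss_p, loss_rec,
               w=(1.0, 1.0, 1.0, 1.0)):
    """Eq. (9): weighted sum of the four training losses.
    The weights are placeholders, not the authors' settings."""
    return w[0] * loss_adv + w[1] * loss_color + w[2] * loss_p + w[3] * loss_rec
```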

2.4 Denoising-Scaling Module

To remove the noise amplified [15] during processing in the modules above, we propose the DSM. In detail, we adopt CBDNet [16], an efficient denoising approach. CBDNet contains two subnetworks, $\text{CNN}_E$ and $\text{CNN}_D$, for estimating noise level maps and performing non-blind denoising, respectively. We first feed the enhanced images produced by LLIEN into $\text{CNN}_E$ to obtain an estimated noise level map, then take both the enhanced image and the noise level map as the input of $\text{CNN}_D$. In $\text{CNN}_D$, residual learning is adopted to generate the final noise-removed enhanced images. Next, because the enhancement was performed in the high-resolution space produced by the SRM, we down-sample the result to half its scale. Ultimately, we obtain the final enhanced images, a clear and bright version of the input low-light images.
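
The data flow through the DSM might be sketched as follows; feeding $\text{CNN}_D$ the image concatenated with its noise level map follows CBDNet [16], while the bilinear down-sampling choice is an assumption of ours.

```python
import torch
import torch.nn.functional as F

def denoise_and_scale(cnn_e, cnn_d, enhanced):
    """DSM flow: CNN_E estimates a noise level map, CNN_D denoises via
    residual learning, and the result is down-sampled by half to undo
    the 2x up-sampling performed in the SRM."""
    noise_map = cnn_e(enhanced)                              # estimated noise level map
    residual = cnn_d(torch.cat([enhanced, noise_map], dim=1))
    clean = enhanced + residual                              # residual learning
    return F.interpolate(clean, scale_factor=0.5,
                         mode='bilinear', align_corners=False)
```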

3. Performance Analysis

3.1 Datasets and Implementation Details

Thanks to the fact that GAN-based networks can be trained with unpaired low-/normal-light images, we trained our model on a large-scale unpaired training set. The low-light images are collected from the Exclusively Dark dataset [17], and the normal-light images are collected from the public datasets of [18] and [19]. We investigate the performance of the proposed architecture on classic public datasets. All experiments are conducted with the PyTorch framework on GTX 1080Ti GPUs. The training details and structural models of EnlightenGAN [9] help to build our proposed framework. We fix the initial learning rate at 1e-4 for the first 100 epochs and then exponentially decrease it to 0. We apply the Adam optimizer and set the batch size to 16.
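
The optimizer setup could be reproduced roughly as below; `generator` is a stand-in for the trainable network, and the decay rate is a placeholder, since the paper states only that the learning rate decreases exponentially to 0 after epoch 100.

```python
import torch

# Adam at a fixed 1e-4 for the first 100 epochs, exponential decay afterward.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    # gamma = 0.95 is a placeholder decay rate, not the authors' value
    lr_lambda=lambda epoch: 1.0 if epoch < 100 else 0.95 ** (epoch - 100))
```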

3.2 Ablation Study

To investigate the effectiveness of the proposed method, we perform an ablation study. As shown in Fig. 3, the super-resolution strategy on the input image helps preserve details in local dark regions, which suggests the critical role of the SRM in generating high-quality images. Compared with the variant without the SRM, the contrast and illumination of the local dark regions are improved to a great extent. Additionally, texture details and color are recovered vividly with the SRM.

Fig. 3. Ablation study for investigating the contribution of the super-resolution module (SRM). Panels from left to right show the input underexposed images, the results without SRM, and the results with SRM.

3.3 Comparison with State-of-the-Arts

In this section, we present comparisons of our architecture with recent competing approaches through qualitative analysis, quantitative analysis, and a user study.

3.3.1 Qualitative analysis

We compare the visual quality of the images enhanced by our approach with that of four other low-light enhancement methods: DSLR [6], LLNet [4], RRDNet [20], and KinD++ [3]. Fig. 4 shows representative qualitative results for visual comparison.

Fig. 4. A visual comparison of our approach with five competitive methods.

In the first example, our proposed framework successfully reconstructs texture details while avoiding overexposure in local regions, whereas DSLR [6] and LLNet [4] introduce overexposure and KinD++ [3] introduces unnatural artifacts. In the second example, our framework enhances brightness to a large extent, while the other methods cannot enhance lightness sufficiently. In the third example, our method enhances the image with natural color consistent with human visual preference, while the other approaches introduce color cast. From the observations in Fig. 4, the key advantages of our approach are as follows: (1) it preserves details well in dark regions and generates high-resolution images; (2) it can lighten both the holistic image and underexposed local regions while avoiding over-/under-exposure artifacts.

3.3.2 Quantitative analysis

As our unsupervised method does not need ground-truth images during training, we evaluate the enhanced results of the proposed network and other approaches [3,4,6,20,22] using the natural image quality evaluator (NIQE) [21], a no-reference image quality assessment metric. A lower NIQE indicates better visual quality. As reported in Table 1, where bold font indicates the best performance in each test, our method achieves the lowest NIQE value on four of the publicly available test sets, indicating that our enhanced images are of high aesthetic quality.

Table 1. Quantitative comparison between six architectures (NIQE; lower is better)

Method        LIME    MEF     NPE     DICM    VV      All
DSLR [6]      20.23   17.97   18.42   16.70   15.03   17.67
LLNet [4]     16.91   16.52   17.32   17.32   16.45   16.90
RRDNet [20]   14.86   14.85   15.39   16.15   11.71   14.59
KinD++ [3]    15.90   16.18   14.44   16.32   12.36   15.04
DRBN [22]     18.24   16.69   16.92   16.83   14.52   16.64
Ours          13.44   16.63   12.70   10.50   14.16   13.44

The bold font indicates the best performance in each test.

3.3.3 User study

This section investigates the effects of six competing methods through a user study. We invited thirty volunteers to rank the quality of enhanced images manually selected from classical test sets, considering the following aspects: over-/under-exposure of local areas, color bias, and detail recovery. Fig. 5 shows the vote distribution from Rank 1 to Rank 6, indicating that our approach achieves the best visual quality.

Fig. 5. Quantitative result of the rating distribution for six different algorithms.

4. Conclusion

This paper proposes a GAN-based local lightness-aware enhancement network for underexposed images, which restores lightness and details in local dark areas while achieving global lightness enhancement and color recovery. The key components are the SRM, the GAN-based LLIEN, and the DSM. We apply a super-resolution strategy to low-light images, enabling the subsequent enhancement to be accomplished in high-resolution space and thereby preserving texture details in local dark areas. Next, we present a local lightness attention module that pays more attention to local dark regions. Benefiting from multiple discriminators, the LLIEN comprehensively discriminates the generated images from the perspectives of global lightness, local lightness, and color. Specifically, LLIEN enhances the global and local lightness and controls color deviation while avoiding over-/under-exposure artifacts in the absence of paired datasets, guiding the network to generate images that conform to human visual preference. Finally, the DSM suppresses noise and produces high-quality enhanced images. Both qualitative and quantitative experiments indicate the effectiveness and generalization ability of our method.

Biography

Yong Chen
https://orcid.org/0000-0002-0649-0763

He received his Ph.D. degree in mechanical engineering from Chongqing University in 2003. He is a candidate for academic and technical leadership in Chongqing Control Theory and Control Engineering discipline. He is currently a professor at the School of Automation at Chongqing University of Posts and Telecommunications, Chongqing, China. His current research interests include image processing, pattern recognition, and intelligent optimizing controls.

Biography

Meiyong Huang
https://orcid.org/0000-0002-0351-6462

She received a B.S. degree in electronic and information engineering from Chongqing University of Posts and Telecommunications in 2019. She is currently a postgraduate student at the School of Automation at Chongqing University of Posts and Telecommunications, Chongqing, China. Her research interests include image enhancement and signal processing.

Biography

Huanlin Liu
https://orcid.org/0000-0001-5558-6385

She received her M.S. degree from the Chongqing University of Posts and Telecommunications in 2000 and her Ph.D. degree from Chongqing University in 2008. She is currently a professor at the School of Communication and Information Engineering at Chongqing University of Posts and Telecommunications, Chongqing, China. Her research interests include all-optical network research, all-optical switch structure and scheduling algorithm research, and information acquisition and processing.

Biography

Jinliang Zhang
https://orcid.org/0000-0001-6742-6864

He received his B.S. degree from the Hefei University of Technology. He is currently a postgraduate student majoring in instrument science and technology at Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include image enhancement, computer vision, and signal processing.

Biography

Kaixin Shao
https://orcid.org/0000-0001-8821-2742

He received his B.S. degree in mechanical design, manufacturing, and automation from Southwest Petroleum University in Chengdu in 2021. He is currently a postgraduate student at the School of Automation at Chongqing University of Posts and Telecommunications, Chongqing, China. His research interest is image processing.

References

1. Z. Wang, X. Huang, and F. Huang, "A new image enhancement algorithm based on bidirectional diffusion," Journal of Information Processing Systems, vol. 16, no. 1, pp. 49-60, 2020.
2. H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 53, no. 4, pp. 1752-1758, 2007.
3. Y. Zhang, X. Guo, J. Ma, W. Liu, and J. Zhang, "Beyond brightening low-light images," International Journal of Computer Vision, vol. 129, pp. 1013-1037, 2021.
4. K. G. Lore, A. Akintayo, and S. Sarkar, "LLNet: a deep autoencoder approach to natural low-light image enhancement," Pattern Recognition, vol. 61, pp. 650-662, 2017.
5. F. Lv, Y. Li, and F. Lu, "Attention guided low-light image enhancement with a large scale low-light simulation dataset," International Journal of Computer Vision, vol. 129, pp. 2175-2193, 2021.
6. S. Lim and W. Kim, "DSLR: deep stacked Laplacian restorer for low-light image enhancement," IEEE Transactions on Multimedia, vol. 23, pp. 4272-4284, 2021.
7. L. Zhao, Y. Zhang, and Y. Cui, "A multi-scale U-shaped attention network-based GAN method for single image dehazing," Human-centric Computing and Information Sciences, vol. 11, article no. 38, pp. 562-578, 2021. https://doi.org/10.22967/HCIS.2021.11.038
8. X. Gao, W. Lu, L. Zha, Z. Hui, T. Qi, and J. Jiang, "Quality elevation technique for UHD video and its VLSI solution," Journal of Chongqing University of Posts and Telecommunications: Natural Science Edition, vol. 32, no. 5, pp. 681-697, 2020.
9. Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, "EnlightenGAN: deep light enhancement without paired supervision," IEEE Transactions on Image Processing, vol. 30, pp. 2340-2349, 2021.
10. X. Xu, H. Liu, Y. Li, and Y. Zhou, "Image deblurring with blur kernel estimation in RGB channels," Journal of Chongqing University of Posts and Telecommunications: Natural Science Edition, vol. 30, no. 2, pp. 216-221, 2018.
11. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, "Enhanced deep residual networks for single image super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, 2017, pp. 1132-1140.
12. J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 7132-7141.
13. S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, "CBAM: convolutional block attention module," in Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 3-19.
14. X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, "Least squares generative adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 2017, pp. 2813-2821.
15. M. Zhang and J. Yang, "A new referenceless image quality index to evaluate denoising performance of SAR images," Journal of Chongqing University of Posts and Telecommunications: Natural Science Edition, vol. 30, no. 4, pp. 530-536, 2018.
16. S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, "Toward convolutional blind denoising of real photographs," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, 2019, pp. 1712-1722.
17. Y. P. Loh and C. S. Chan, "Getting to know low-light images with the exclusively dark dataset," Computer Vision and Image Understanding, vol. 178, pp. 30-42, 2019.
18. C. Wei, W. Wang, W. Yang, and J. Liu, "Deep Retinex decomposition for low-light enhancement," 2018 (Online). Available: https://arxiv.org/abs/1808.04560.
19. D. T. Dang-Nguyen, C. Pasquini, V. Conotter, and G. Boato, "RAISE: a raw images dataset for digital image forensics," in Proceedings of the 6th ACM Multimedia Systems Conference, Portland, OR, 2015, pp. 219-224.
20. A. Zhu, L. Zhang, Y. Shen, Y. Ma, S. Zhao, and Y. Zhou, "Zero-shot restoration of underexposed images via robust Retinex decomposition," in Proceedings of 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 2020, pp. 1-6.
21. A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a 'completely blind' image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, 2013.
22. W. Yang, S. Wang, Y. Fang, Y. Wang, and J. Liu, "From fidelity to perceptual quality: a semi-supervised approach for low-light image enhancement," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, 2020, pp. 3060-3069.