## Soroor Malekmohammadi Faradounbeh* and SeongKi Kim**

Method | Equation | Parameters |
---|---|---|
MSE | [TeX:] $$\frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N}(x(i, j)-y(i, j))^{2}$$ | [TeX:] $$x(i, j), y(i, j):$$ original and denoised images; [TeX:] $$i, j:$$ pixel position in an M × N image |
SSIM | [TeX:] $$\frac{\left(2 \mu_{x} \mu_{y}+c_{1}\right)\left(2 \sigma_{x y}+c_{2}\right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+c_{1}\right)\left(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}\right)}$$ | [TeX:] $$\mu_{x}, \mu_{y}:$$ mean values of x and y; [TeX:] $$\sigma_{x}^{2}, \sigma_{y}^{2}:$$ variances of x and y; [TeX:] $$\sigma_{x y}:$$ covariance of x and y; [TeX:] $$c_{1}, c_{2}:$$ two constants that stabilize the division when the denominator is weak |
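As an illustration, the two metrics above can be computed directly from their formulas. The sketch below is a minimal, hedged example (not the paper's evaluation code): it treats each image as a flat list of pixel values in [0, 1] and evaluates SSIM globally over the whole image, whereas practical implementations such as Wang et al.'s average the index over local windows.

```python
# Minimal sketch of the MSE and SSIM formulas from the table above.
# x, y: flat lists of pixel values in [0, 1]. The global SSIM here is a
# simplification; production SSIM is computed over sliding local windows.

def mse(x, y):
    """Mean squared error: (1/MN) * sum over pixels of (x - y)^2."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM from means, variances, and covariance of x, y."""
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov_xy = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1 = (0.01 * data_range) ** 2  # common choices for the stabilizers
    c2 = (0.03 * data_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For identical images, MSE is 0 and SSIM is exactly 1; any deviation between the denoised output and the reference lowers SSIM toward 0 and raises MSE.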

Because each scene (test set) has different conditions, we provide the evaluation results on SPONZA in Table 2 and with the PBRT renderer in Table 3. In Table 2, SPP (samples per pixel) is the number of light rays traced through each pixel; a more converged image is obtained with more SPPs.

A-SVGF and the neural bilateral grid denoiser [56] achieve better results than LBF and SVGF. However, the KPCN method with 4 SPPs, with a few modifications, generates a much better result than the A-SVGF method because it benefits from deep learning and neural networks. Table 3 compares the methods' performance with the PBRT renderer.

In this comparison, SBMCD, a sample-based method, shows better results with 4 SPPs than the two pixel-based methods (LBF and deep Monte Carlo rendering [23]). Based on these results, better quality at low SPPs can be expected from further improving sample-based methods.

Table 2. Evaluation results on the SPONZA scene

Filter methods | SSIM | R-MSE | Rel-MSE | SPP |
---|---|---|---|---|
KPCN | - | - | ~0.006 | 4 |
SVGF | 0.4315 | 0.0753 | - | 1 |
LBF | 0.7770 | 0.0320 | - | 1 |
A-SVGF | 0.9227 | 0.0227 | - | 1 |
Neural bilateral grid denoiser (autoencoder-based) | 0.9270 | - | - | 1 |

Table 3. Evaluation results with the PBRT renderer

Filter methods | DSSIM (1−SSIM) | R-MSE | SPP |
---|---|---|---|
SBMCD | 0.0685 | 0.0482 | 4 |
LBF | 0.0869 | 1.5814 | 4 |
Deep Monte Carlo rendering (machine learning-based) | 0.1294 | 1.0867 | 4 |

Fig. 12 shows some results of the methods on the datasets, captured while moving the camera. All compared methods preserve salient structures and provide acceptable, noise-free results.

Some examples of results from pixel-based methods, such as KPCN and LBF, and from the sample-based method (SBMCD) are shown in Fig. 12. All of these methods produce noise-free results for different inputs; as can be seen in the figure, the SBMCD method is very accurate, and its results are close to the ground-truth image.

Table 4. Elapsed time of the compared methods at different SPPs

Filter methods | Type | 4 SPP | 8 SPP | 16 SPP | 32 SPP | 64 SPP | 128 SPP |
---|---|---|---|---|---|---|---|
LBF | Pixel-based, machine learning | 10.4 | - | - | - | - | - |
KPCN | Pixel-based | 14.6 | - | - | - | - | - |
SBMCD | Sample-based | 6.0 | 10.1 | 18.9 | 35.9 | 67.0 | 156.5 |

The elapsed time of the pixel-based methods is constant because they must visit every pixel and compute its average regardless of the pixel's status, whereas the sample-based method's time increases linearly with the number of samples. However, machine learning acts as an optimizer here and takes less time than the other approaches.
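This scaling behavior can be sketched with two hypothetical filter loops (illustrative only, not the actual LBF/KPCN/SBMCD implementations): a pixel-based filter visits each of the W × H pixels once, so its cost is independent of SPP, while a sample-based filter must visit every one of the W × H × SPP samples.

```python
# Hedged sketch (hypothetical filters, not the papers' implementations):
# pixel-based filtering costs O(W*H) regardless of SPP, because the input
# is already one averaged value per pixel; sample-based filtering costs
# O(W*H*SPP), because it touches every individual sample.

def pixel_based_filter(image):
    """image: rows of already-averaged pixel values; one visit per pixel."""
    return [[px for px in row] for row in image]  # identity pass for illustration

def sample_based_filter(samples):
    """samples: rows of per-pixel sample lists; one visit per sample,
    here simply averaged down to a single pixel value."""
    return [[sum(s) / len(s) for s in row] for row in samples]
```

Doubling the SPP doubles the input that `sample_based_filter` must traverse, which matches the roughly linear growth of SBMCD's timings in Table 4.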

Other methods such as A-SVGF and SVGF can be executed in real time with the benefit of machine learning, and the neural bilateral grid denoiser, as an autoencoder approach, can also remove noise in real time. The elapsed time of the real-time denoising algorithms is shown in Table 5. The scenes are animated with different camera flythroughs and were rendered at 60 FPS (frames per second). To check whether these approaches are applicable to interactive scenarios, the time was measured by breaking down the frame time. Common filters cannot handle this kind of animation because of the global effect on shading, including temporal blur and stability issues.

Recently, there have been numerous studies on denoising algorithms. Rendering fully converged, noise-free images is often too expensive, and much effort has been made to improve the image quality such renderers produce, especially to obtain high-quality results with fewer samples, which is critical for high-performance, realistic rendering. In this study, we compared several of these denoising algorithms to show their potential and the balance between quality and performance. As previously mentioned, the LBF, KPCN, SVGF, A-SVGF, and ReSTIR algorithms introduced for denoising are pixel-based, and LBF, KPCN, and A-SVGF take advantage of neural networks to improve their performance. Moreover, except for ReSTIR, the remaining algorithms follow a similar process and include post-processing steps. In addition to comparing and introducing pixel-based denoising methods (the most common approach), other methods such as the sample-based SBMCD are also discussed. We also tried to show the potential of machine learning methods (deep Monte Carlo rendering) and neural networks (neural bilateral grid denoiser) for improving the final quality.

This paper has limitations, however. Noise removal for augmented reality (AR) applications has not been explored in this study; this field offers researchers a range of opportunities and research topics. Furthermore, other artificial intelligence methods, neural networks, and fuzzy logic can be used to remove noise from the resulting images. In the future, we plan to investigate these methods.

She received her B.S. degree in Software Engineering from Islamic Azad University, Mobarakeh Branch, in 2013, and her M.S. degree in Artificial Intelligence from Islamic Azad University, Najafabad Branch, in 2017. Since September 2020, she has been a Ph.D. candidate in the Department of CSE at Keimyung University. Her current research interests include graphics algorithms, intelligent computing, virtual/augmented reality, game algorithms, machine learning, and neural networks.

He received his Ph.D. degree in CSE from Seoul National University in 2009. He researched GPUs and GPGPU computing at Samsung Electronics from 2009 to 2014, and then at Ewha Womans University, Sangmyung University, and Keimyung University from 2014 to 2020. Since March 2020, he has been an assistant professor at Sangmyung University. His current research interests include graphics algorithms, algorithm optimization with GPUs, and game/virtual/augmented reality. He is a member of the ACM and the IEEE.

- 1 J. T. Kajiya, "The rendering equation," in *Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques*, Dallas, TX, 1986, pp. 143-150.
- 2 E. Veach, "Robust Monte Carlo methods for light transport simulation," Ph.D. dissertation, Stanford University, Stanford, CA, 1997.
- 3 H. W. Jensen, "Global illumination using photon maps," in *Rendering Techniques '96*. Vienna, Austria: Springer, 1996, pp. 21-30.
- 4 Z. Zeng, L. Wang, B. B. Wang, C. M. Kang, and Y. N. Xu, "Denoising stochastic progressive photon mapping renderings using a multi-residual network," *Journal of Computer Science and Technology*, vol. 35, pp. 506-521, 2020.
- 5 K. Vardis, "Efficient illumination algorithms for global illumination in interactive and real-time rendering," Ph.D. dissertation, Athens University of Economics and Business, Greece, 2016.
- 6 T. Chen, J. Shi, J. Yang, and G. Li, "Enhancing network cluster synchronization capability based on artificial immune algorithm," *Human-centric Computing and Information Sciences*, vol. 9, no. 3, 2019. doi: 10.1186/s13673-019-0164-y
- 7 P. Dutre, K. Bala, and P. Bekaert, *Advanced Global Illumination*. Boca Raton, FL: AK Peters/CRC Press, 2006.
- 8 M. Mara, M. McGuire, B. Bitterli, and W. Jarosz, "An efficient denoising algorithm for global illumination," in *Proceedings of High Performance Graphics*, Los Angeles, CA, 2017.
- 9 Y. Zhang, H. Wang, and X. Fan, "Algorithm for detection of fire smoke in a video based on wavelet energy slope fitting," *Journal of Information Processing Systems*, vol. 16, no. 3, pp. 557-571, 2020.
- 10 R. Wan, L. Ding, N. Xiong, W. Shu, and L. Yang, "Dynamic dual threshold cooperative spectrum sensing for cognitive radio under noise power uncertainty," *Human-centric Computing and Information Sciences*, vol. 9, no. 22, 2019. doi: 10.1186/s13673-019-0181-x
- 11 A. Buades, B. Coll, and J. M. Morel, "A non-local algorithm for image denoising," in *Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)*, San Diego, CA, 2005, pp. 60-65.
- 12 J. Lv and X. Luo, "Image denoising via fast and fuzzy non-local means algorithm," *Journal of Information Processing Systems*, vol. 15, no. 5, pp. 1108-1118, 2019.
- 13 K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," *IEEE Transactions on Image Processing*, vol. 16, no. 8, pp. 2080-2095, 2007. doi: 10.1109/TIP.2007.901238
- 14 L. Fan, F. Zhang, H. Fan, and C. Zhang, "Brief review of image denoising techniques," *Visual Computing for Industry, Biomedicine, and Art*, vol. 2, no. 7, 2019. doi: 10.1186/s42492-019-0016-7
- 15 N. K. Kalantari, S. Bako, and P. Sen, "A machine learning approach for filtering Monte Carlo noise," *ACM Transactions on Graphics*, vol. 34, no. 4, 2015. doi: 10.1145/2766977
- 16 N. K. Kalantari, "Utilizing machine learning for filtering general Monte Carlo noise," Ph.D. dissertation, University of California, Santa Barbara, CA, 2015.
- 17 S. Bako, T. Vogels, B. McWilliams, M. Meyer, J. Novak, A. Harvill, P. Sen, T. Derose, and F. Rousselle, "Kernel-predicting convolutional networks for denoising Monte Carlo renderings," *ACM Transactions on Graphics*, vol. 36, no. 4, 2017. doi: 10.1145/3072959.3073708
- 18 C. Schied, A. Kaplanyan, C. Wyman, A. Patney, C. R. A. Chaitanya, J. Burgess, S. Liu, C. Dachsbacher, A. Lefohn, and M. Salvi, "Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination," in *Proceedings of High Performance Graphics*, Los Angeles, CA, 2017, pp. 1-12.
- 19 C. Schied, C. Peters, and C. Dachsbacher, "Gradient estimation for real-time adaptive temporal filtering," *Proceedings of the ACM on Computer Graphics and Interactive Techniques*, vol. 1, no. 2, 2018. doi: 10.1145/3233301
- 20 Y. Liu, C. Zheng, Q. Zheng, and H. Yuan, "Removing Monte Carlo noise using a Sobel operator and a guided image filter," *The Visual Computer*, vol. 34, no. 4, pp. 589-601, 2018. doi: 10.1007/s00371-017-1363-z
- 21 M. Gharbi, T. M. Li, M. Aittala, J. Lehtinen, and F. Durand, "Sample-based Monte Carlo denoising using a kernel-splatting network," *ACM Transactions on Graphics*, vol. 38, no. 4, 2019. doi: 10.1145/3306346.3322954
- 22 B. Bitterli, F. Rousselle, B. Moon, J. A. Iglesias-Guitian, D. Adler, K. Mitchell, W. Jarosz, and J. Novak, "Nonlinearly weighted first-order regression for denoising Monte Carlo renderings," *Computer Graphics Forum*, vol. 35, no. 4, pp. 107-117, 2016.
- 23 D. Vicini, D. Adler, J. Novak, F. Rousselle, and B. Burley, "Denoising deep Monte Carlo renderings," *Computer Graphics Forum*, vol. 38, no. 1, pp. 316-327, 2019. doi: 10.1111/cgf.13533
- 24 H. Park, B. Moon, S. Kim, and S. E. Yoon, "P-RPF: pixel-based random parameter filtering for Monte Carlo rendering," in *Proceedings of 2013 International Conference on Computer-Aided Design and Computer Graphics*, Guangzhou, China, 2013, pp. 123-130.
- 25 Z. Wang, X. Huang, and F. Huang, "A new image enhancement algorithm based on bidirectional diffusion," *Journal of Information Processing Systems*, vol. 16, no. 1, pp. 49-60, 2020.
- 26 J. Xie, L. Xu, and E. Chen, "Image denoising and inpainting with deep neural networks," *Advances in Neural Information Processing Systems*, vol. 25, pp. 341-349, 2012.
- 27 B. Bayar and M. C. Stamm, "A deep learning approach to universal image manipulation detection using a new convolutional layer," in *Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security*, Vigo, Spain, 2016, pp. 5-10.
- 28 K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising," *IEEE Transactions on Image Processing*, vol. 26, no. 7, pp. 3142-3155, 2017. doi: 10.1109/TIP.2017.2662206
- 29 K. M. Wong and T. T. Wong, "Deep residual learning for denoising Monte Carlo renderings," *Computational Visual Media*, vol. 5, no. 3, pp. 239-255, 2019.
- 30 X. Yang, D. Wang, W. Hu, L. J. Zhao, B. C. Yin, Q. Zhang, X. P. Wei, and H. Fu, "DEMC: a deep dual-encoder network for denoising Monte Carlo rendering," *Journal of Computer Science and Technology*, vol. 34, no. 5, pp. 1123-1135, 2019.
- 31 M. Kettunen, E. Harkonen, and J. Lehtinen, "Deep convolutional reconstruction for gradient-domain rendering," *ACM Transactions on Graphics*, vol. 38, no. 4, 2019. doi: 10.1145/3306346.3323038
- 32 J. Talbot, D. Cline, and P. K. Egbert, "Importance resampling for global illumination," in *Proceedings of the Eurographics Symposium on Rendering Techniques*, Konstanz, Germany, 2005, pp. 139-146.
- 33 B. Bitterli, C. Wyman, M. Pharr, P. Shirley, A. Lefohn, and W. Jarosz, "Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting," *ACM Transactions on Graphics*, vol. 39, no. 4, 2020. doi: 10.1145/3386569.3392481
- 34 M. Pharr, W. Jakob, and G. Humphreys, *Physically Based Rendering: From Theory to Implementation*, 3rd ed. Cambridge, MA: Morgan Kaufmann, 2014.
- 35 P. Gallinari, Y. Lecun, S. Thiria, and F. F. Soulie, "Distributed associative memories: a comparison," in *Proceedings of COGNITIVA*, Paris, France, 1987.
- 36 D. G. Mixon and S. Villar, 2018 (Online). Available: https://arxiv.org/abs/1803.09319
- 37 N. K. Kalantari and P. Sen, "Removing the noise in Monte Carlo rendering with general image denoising algorithms," *Computer Graphics Forum*, vol. 32, no. 2, pp. 93-102, 2013. doi: 10.1111/cgf.12029
- 38 T. Vogels, F. Rousselle, B. McWilliams, G. Rothlin, A. Harvill, D. Adler, M. Meyer, and J. Novak, "Denoising with kernel prediction and asymmetric loss functions," *ACM Transactions on Graphics*, vol. 37, no. 4, 2018.
- 39 H. Dahlberg, D. Adler, and J. Newlin, "Machine-learning denoising in feature film production," in *Proceedings of ACM SIGGRAPH 2019 Talks*, Los Angeles, CA, 2019.
- 40 S. Laine, T. Karras, J. Lehtinen, and T. Aila, "High-quality self-supervised deep image denoising," *Advances in Neural Information Processing Systems*, vol. 32, pp. 6970-6980, 2019.
- 41 C. R. A. Chaitanya, A. S. Kaplanyan, C. Schied, M. Salvi, A. Lefohn, D. Nowrouzezahrai, and T. Aila, "Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder," *ACM Transactions on Graphics*, vol. 36, no. 4, 2017. doi: 10.1145/3072959.3073601
- 42 A. Keller, L. Fascione, M. Fajardo, I. Georgiev, P. Christensen, J. Hanika, C. Eisenacher, and G. Nichols, "The path tracing revolution in the movie industry," in *Proceedings of ACM SIGGRAPH 2015 Courses*, Los Angeles, CA, 2015.
- 43 A. Alsaiari, R. Rustagi, M. M. Thomas, and A. G. Forbes, "Image denoising using a generative adversarial network," in *Proceedings of 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)*, Kahului, HI, 2019, pp. 126-132.
- 44 S. Agarwal, A. Agarwal, and M. Deshmukh, "Denoising images with varying noises using autoencoders," in *Computer Vision and Image Processing*. Singapore: Springer, 2019, pp. 3-14.
- 45 H. Dai and L. Shao, "PointAE: point auto-encoder for 3D statistical shape and texture modelling," in *Proceedings of the IEEE/CVF International Conference on Computer Vision*, Seoul, Korea, 2019, pp. 5410-5419.
- 46 K. P. Kumar, "Fuzzy-based machine learning algorithm for intelligent systems," in *Data Management, Analytics and Innovation*. Singapore: Springer, 2019, pp. 321-339.
- 47 N. Chauhan and B. J. Choi, "Denoising approaches using fuzzy logic and convolutional autoencoders for human brain MRI image," *International Journal of Fuzzy Logic and Intelligent Systems*, vol. 19, no. 3, pp. 135-139, 2019.
- 48 B. Costa and J. Jain, "Fuzzy deep stack of autoencoders for dealing with data uncertainty," in *Proceedings of 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)*, New Orleans, LA, 2019, pp. 1-6.
- 49 J. Lehtinen, T. Aila, J. Chen, S. Laine, and F. Durand, "Temporal light field reconstruction for rendering distribution effects," in *Proceedings of ACM SIGGRAPH 2011 Papers*, Vancouver, Canada, 2011, pp. 1-12.
- 50 J. Lehtinen, T. Aila, S. Laine, and F. Durand, "Reconstructing the indirect light field for global illumination," *ACM Transactions on Graphics*, vol. 31, no. 4, 2012.
- 51 T. Hachisuka, W. Jarosz, R. P. Weistroffer, K. Dale, G. Humphreys, M. Zwicker, and H. W. Jensen, "Multidimensional adaptive sampling and reconstruction for ray tracing," in *Proceedings of ACM SIGGRAPH 2008 Papers*, Los Angeles, CA, 2008, pp. 1-10.
- 52 P. Sen and S. Darabi, "On filtering the noise from the random parameters in Monte Carlo rendering," *ACM Transactions on Graphics*, vol. 31, no. 3, 2012.
- 53 P. Bauszat, M. Eisemann, S. John, and M. Magnor, "Sample-based manifold filtering for interactive global illumination and depth of field," *Computer Graphics Forum*, vol. 34, no. 1, pp. 265-276, 2015.
- 54 Q. Zhang, Y. Li, F. Al-Turjman, X. Zhou, and X. Yang, "Transient ischemic attack analysis through non-contact approaches," *Human-centric Computing and Information Sciences*, vol. 10, no. 16, 2020. doi: 10.1186/s13673-020-00223-z
- 55 B. Cantrell and N. Yates, *Modeling the Environment: Techniques and Tools for the 3D Illustration of Dynamic Landscapes*. Hoboken, NJ: John Wiley & Sons, 2012.
- 56 X. Meng, Q. Zheng, A. Varshney, G. Singh, and M. Zwicker, "Real-time Monte Carlo denoising with the neural bilateral grid," in *Proceedings of the 31st Eurographics Symposium on Rendering (EGSR): Digital Library Only Track*, London, UK, 2020, pp. 13-24.
- 57 Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," *IEEE Transactions on Image Processing*, vol. 13, no. 4, pp. 600-612, 2004. doi: 10.1109/TIP.2003.819861
- 58 F. Rousselle, C. Knaus, and M. Zwicker, "Adaptive sampling and reconstruction using greedy error minimization," *ACM Transactions on Graphics*, vol. 30, no. 6, pp. 1-12, 2011. doi: 10.1145/2070781.2024193