Content-Based Image Retrieval Using Multi-Resolution Multi-Direction Filtering-Based CLBP Texture Features and Color Autocorrelogram Features

Hee-Hyung Bu* , Nam-Chul Kim** , Byoung-Ju Yun** and Sung-Ho Kim*

Abstract

We propose a content-based image retrieval system that uses a combination of the completed local binary pattern (CLBP) and the color autocorrelogram. CLBP features are extracted on a multi-resolution multi-direction filtered domain of the value component, and color autocorrelogram features are extracted in the two dimensions of the hue and saturation components. Experimental results reveal that the proposed method yields significant improvement over the methods that use only partial features employed in the proposed method. It is also superior to the conventional CLBP, the color autocorrelogram using R, G, and B components, and the multichannel decoded local binary pattern, one of the latest methods.

Keywords: Autocorrelogram , Content-Based Image Retrieval , MRMD CLBP , Multi-Resolution Multi-Direction Filter

1. Introduction

Recently, content-based image retrieval (CBIR) systems have been developed by global IT companies. The Google search engine supports CBIR, but it is weak with rotation- and scale-variant images; it cannot even retrieve images with complex rotations. Bixby on the Samsung Galaxy S8 also supports CBIR for pictures, but when retrieving images with complex rotations on a cellphone, the retrieved images are seldom similar.

The existing methods usually extract features related to texture and color information because these are considered essential cues of the human visual system for object recognition. Research on texture has usually been conducted in the frequency domain. Because the spatial frequency domain of an image represents the rate of change of pixel values, the variation of edges can be determined. In particular, because high and low frequencies can be separated, high-frequency components, which include most edge variations, are employed in many research areas. Representative frequency-domain methods include the Gabor transform [1], wavelet transform [2], and Fourier transform [3].

Research on color includes the color histogram [4] and the color autocorrelogram [5]. The color histogram is popular because it is based on statistics: it does not consider local relations but is measured over the global area, and so it has the advantages of rotation and scale invariance. It has been adopted in many studies because of its simplicity and wide applicability.

As a method that employs color distance, the color autocorrelogram adds distance information to the color histogram. Recently, the major goal of image retrieval studies has been rotation and scale invariance for rotated and scaled variants of images. Examples of such methods are rotation- and scale-invariant Gabor features for texture image retrieval proposed by Han and Ma [6]; color autocorrelogram and block difference of inverse probabilities-block variation of local correlation coefficient in the wavelet domain proposed by Chun et al. [7]; a texture feature extraction method for rotation- and scale-invariant image retrieval proposed by Rahman et al. [8]; rotation-invariant textural feature extraction for image retrieval using eigenvalue analysis of intensity gradients and multi-resolution analysis proposed by Gupta et al. [9]; rotation-invariant texture retrieval considering the scale dependence of Gabor wavelets proposed by Li et al. [10]; and CBIR using combined color and texture features extracted by multi-resolution multi-direction (MRMD) filtering proposed by Bu et al. [11]. However, among the latest retrieval methods, some are not rotation-invariant yet show good performance on databases composed of photographs taken by photographers on the ground. One of them is the retrieval method using the multichannel decoded local binary pattern (LBP) proposed by Dubey et al. [12].

In image retrieval, a number of aspects need to be considered. However, in this paper, we consider two: (1) features should contain little redundant information and (2) the dimension of the feature vector should not be very large. The color features employed in this paper are extracted from autocorrelograms [5] using the distance of colors in the chrominance space of the hue (H) and saturation (S) components. The texture features are extracted from the completed local binary pattern (CLBP) based on MRMD filtering [11] in the luminance space of the value (V) component. The MRMD filters allow easy extraction of rotation-invariant features. CLBP [13] is generalized from the LBP proposed by Ojala et al. [14]; it yields more texture information than LBP. Employing the HSV color space is more efficient for image retrieval than the RGB color space because texture information is contained in the V component of luminance, and color information is contained in the H and S components of chrominance.
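As a rough illustration of the color feature, a 2D hue-saturation autocorrelogram can be sketched as follows. This is a minimal sketch under assumptions: the quantization levels (`n_h`, `n_s`), the distance set, and the sampling of only horizontal and vertical neighbors are illustrative choices, not the exact configuration of [5,15].

```python
import numpy as np

def hs_autocorrelogram(h_idx, s_idx, n_h=8, n_s=8, distances=(1,)):
    """Sketch of a 2D hue-saturation color autocorrelogram: for each
    quantized (H, S) color and distance d, estimate the probability that
    a pixel at offset d has the same color.  Quantization levels and the
    distance set are assumptions, not the paper's exact values."""
    color = h_idx * n_s + s_idx                 # joint HS color index per pixel
    feats = []
    for d in distances:
        same = np.zeros(n_h * n_s)
        total = np.zeros(n_h * n_s)
        # compare with vertically (axis 0) and horizontally (axis 1)
        # shifted copies at offset d (a simplification of the full
        # chessboard-distance neighborhood)
        for axis in (0, 1):
            a = color
            b = np.roll(color, d, axis=axis)
            np.add.at(total, a.ravel(), 1)      # count pixels of each color
            np.add.at(same, a.ravel(), (a == b).ravel())  # same-color pairs
        feats.append(same / np.maximum(total, 1))
    return np.concatenate(feats)
```

With the default 8×8 quantization and a single distance this yields a 64-dimensional vector; the paper's 64-dimensional autocorrelogram may use a different quantization and distance set.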

This paper combines the CLBP texture features based on MRMD filtering and the color autocorrelogram features. As such, there is an advantage of high retrieval performance, while the feature dimension is not too large. Moreover, the amount of redundant information is small because the HSV color space is used to separate the luminance of the V component for the texture features from the chrominance of the H and S components for the color features. Furthermore, the proposed CLBP is scale-based, unlike the conventional CLBP, which is distance-based.

The proposed method is explained in Section 2. Experiment and results are discussed in Section 3. Finally, the conclusion is presented in Section 4.

2. The Proposed Texture and Color Feature Extraction

In this paper, a CBIR system using CLBP based on MRMD filtering and color autocorrelogram is proposed.

The CLBP based on MRMD filtering is described in Section 2.1. The color autocorrelogram is the same as the method proposed in our previous paper [15]. Fig. 1 shows the block diagram of the proposed image retrieval system.

Fig. 1.
Block diagram of the proposed CBIR system.
2.1 Texture Feature Extraction Using MRMD Filtering-Based CLBP
2.1.1 CLBP_S feature extraction in MRMD filtering

Texture features are extracted using CLBP based on MRMD filtering of the $V$ component. CLBP_S corresponds to the rotation-invariant uniform LBP (RULBP) [14]. The CLBP_S feature extraction on the MRMD filtered domain includes four steps as follows:

Step 1: Converts the RGB query image $I$ to an HSV image to obtain the $V$ component image $I_V$.

Step 2: Conducts MRMD high-pass filtering [11] with $N$ directions at a resolution $r$ on the image $I_V$ to get the filtered images $y_{r,\theta}$. The resolution level is one of $r \in \{1, 2, \ldots, M\}$, where $M$ is the number of resolution levels, and the directions are expressed as $\theta = (2 \cdot \pi \cdot n)/N$, where $n \in \{0, 1, 2, \ldots, N-1\}$ and $N$ is the number of directions.

Step 3: Creates a binary pattern from the signs of the pixel values of the filtered images. The outcome is the LBP of the image $I_V$. The LBP based on MRMD filtering can be expressed as follows:

(1)
$$LBP_{N, 2^{r-1}}(p)=\sum_{n=0}^{N-1} s\left(y_{r, \theta_{n}}(p)\right) \cdot 2^{n}$$

where $s(x)=\begin{cases}1, & x \geq 0\\ 0, & x<0\end{cases}$, $\theta_{n}$ refers to the $n$-th direction, and $p$ is the pixel position.

Step 4: Normalizes the RULBP histogram. The RULBP histogram has $N+2$ bins per resolution level, where $N$ is the total number of directions. The RULBP on the MRMD filtered domain is expressed as follows:

(2)
$$RULBP_{N, 2^{r-1}}(p)=\begin{cases}\sum_{n=0}^{N-1} s\left(y_{r, \theta_{n}}(p)\right), & \text{if } U\left(LBP_{N, 2^{r-1}}(p)\right) \leq 2\\ N+1, & \text{otherwise}\end{cases}$$

where $U\left(LBP_{N, 2^{r-1}}(p)\right)=\sum_{n=0}^{N-1}\left|s\left(y_{r, \theta_{n}}(p)\right)-s\left(y_{r, \theta_{n-1}}(p)\right)\right|$ with $\theta_{-1}=\theta_{N-1}$, which refers to the number of bit changes in the LBP.

The normalized RULBP histogram is expressed as in the following:

(3)
$$H_{r}(i)=\frac{1}{|P|} \sum_{p \in P} \delta\left(RULBP_{N, 2^{r-1}}(p)-i\right)$$

where $i \in\{0, 1, 2, \ldots, N, N+1\}$, $|P|$ stands for the size of $P$, i.e., the size of the image, and $\delta$ refers to the Kronecker delta.

As an outcome, the extracted total feature dimension is M × (N + 2).
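The steps above can be sketched as follows. This is a minimal illustration under an assumption: since the exact MRMD filters are defined in [11], the high-pass response at resolution $r$ and direction $\theta_n$ is modeled here simply as the difference between the pixel at radius $2^{r-1}$ along $\theta_n$ and the center pixel.

```python
import numpy as np

def mrmd_highpass(iv, M=4, N=8):
    """Assumed MRMD high-pass sketch: directional difference between a
    neighbor at radius 2**(r-1) and the center pixel (borders wrap)."""
    y = {}
    for r in range(1, M + 1):
        radius = 2 ** (r - 1)                   # radius grows with resolution level
        y[r] = []
        for n in range(N):
            theta = 2 * np.pi * n / N           # theta_n = 2*pi*n / N
            dx = int(round(radius * np.cos(theta)))   # nearest-pixel offset
            dy = int(round(-radius * np.sin(theta)))
            shifted = np.roll(np.roll(iv, dy, axis=0), dx, axis=1)
            y[r].append(shifted.astype(np.float64) - iv)  # neighbor - center
    return y

def rulbp_histogram(y_r):
    """CLBP_S: rotation-invariant uniform LBP histogram (Eqs. (1)-(3))
    from the N directional responses at one resolution, stacked (N, H, W)."""
    N = y_r.shape[0]
    s = (y_r >= 0).astype(np.int64)             # s(x) = 1 if x >= 0 else 0
    # U: number of bit changes around the circular direction sequence
    u = np.abs(s - np.roll(s, 1, axis=0)).sum(axis=0)
    code = np.where(u <= 2, s.sum(axis=0), N + 1)   # uniform -> bit count, else N+1
    hist = np.bincount(code.ravel(), minlength=N + 2).astype(np.float64)
    return hist / code.size                     # normalize by image size |P|
```

Concatenating the $M$ per-resolution histograms gives the $M \times (N+2)$-dimensional CLBP_S feature; a real implementation would pad or interpolate at borders rather than wrap.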

2.1.2 CLBP_M feature extraction in MRMD filtering

CLBP_M stands for the RULBP of magnitude images. The CLBP_M feature extraction on the MRMD filtered domain includes four steps as follows:

Step 1–2: Steps 1 and 2 are the same as in the RULBP procedure.

Step 3: Evaluates the average $\mu_{r}$ of the absolute values at the same pixel position over the directions at a resolution $r$ in the filtered images. Then, compares each absolute value with the average $\mu_{r}$. The result is the CLBP_M of the image $I_V$. The CLBP_M (MLBP) on the MRMD filtered domain is expressed as follows:

(4)
$$MLBP_{N, 2^{r-1}}(p)=\sum_{n=0}^{N-1} t\left(\left|y_{r, \theta_{n}}(p)\right|, \mu_{r}(p)\right) \cdot 2^{n}$$

(5)
$$\mu_{r}(p)=\operatorname{mean}_{\theta_{n} \in \Theta}\left[\left|y_{r, \theta_{n}}(p)\right|\right], \quad t(x, c)=\begin{cases}1, & x \geq c\\ 0, & x<c\end{cases}$$

Step 4: Creates and normalizes the RUMLBP histogram with CLBP_M. The RUMLBP histogram has $N+2$ bins per resolution level, where $N$ is the total number of directions. The RUMLBP on the MRMD filtered domain is expressed as follows:

(6)
$$RUMLBP_{N, 2^{r-1}}(p)=\begin{cases}\sum_{n=0}^{N-1} t\left(\left|y_{r, \theta_{n}}(p)\right|, \mu_{r}(p)\right), & \text{if } U\left(MLBP_{N, 2^{r-1}}(p)\right) \leq 2\\ N+1, & \text{otherwise}\end{cases}$$

The normalized RUMLBP histogram with CLBP_M is the same as in Eq. (3).

As an outcome, the extracted total feature dimension is M × (N + 2).
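Steps 3 and 4 above can be sketched as follows, reusing the convention of an array holding the $N$ directional responses at one resolution; the function name is illustrative.

```python
import numpy as np

def rumlbp_histogram(y_r):
    """CLBP_M: rotation-invariant uniform LBP of magnitudes (Eqs. (4)-(6))
    from the N directional responses at one resolution, stacked (N, H, W)."""
    N = y_r.shape[0]
    mag = np.abs(y_r)
    mu = mag.mean(axis=0)                       # mu_r(p): mean magnitude over directions
    t = (mag >= mu).astype(np.int64)            # t(x, c) = 1 if x >= c else 0
    # uniformity over the circular direction sequence, as for CLBP_S
    u = np.abs(t - np.roll(t, 1, axis=0)).sum(axis=0)
    code = np.where(u <= 2, t.sum(axis=0), N + 1)
    hist = np.bincount(code.ravel(), minlength=N + 2).astype(np.float64)
    return hist / code.size                     # normalize by image size |P|
```

As with CLBP_S, concatenating the $M$ per-resolution histograms gives the $M \times (N+2)$-dimensional CLBP_M feature.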

2.1.3 CLBP_C feature extraction in MRMD filtering

CLBP_C [13] is a feature related to the center pixel, but in this paper, CLBP_C is related to the value averaged over all directions instead of the center. The CLBP_C operator gives a histogram as the result of comparing each value averaged over all directions with the global average. Two bins are used per resolution level; thus, the total feature dimension is $2M$. The CLBP_C feature extraction on the MRMD filtered domain includes four steps as follows:

Step 1–2: Steps 1 and 2 are the same as the RULBP procedure.

Step 3: Creates the average image $I_{\mu, r}$ by computing, at each pixel position, the average over directions of the absolute values of the filtered images at a resolution $r$. Then, evaluates $\mu\left(I_{\mu, r}\right)$, the global average of the image $I_{\mu, r}$, and compares each value of $I_{\mu, r}$ with the global average $\mu\left(I_{\mu, r}\right)$. The CLBP_C on the MRMD filtered domain is expressed as follows:

(7)
$$CLBP\_C_{N, 2^{r-1}}(p)=t\left(I_{\mu, r}(p), \mu\left(I_{\mu, r}\right)\right), \quad t(x, c)=\begin{cases}1, & x \geq c\\ 0, & x<c\end{cases}$$

(8)
$$I_{\mu, r}(p)=\operatorname{mean}_{\theta \in \Theta}\left[\left|y_{r, \theta}(p)\right|\right]$$

(9)
$$\mu\left(I_{\mu, r}\right)=\operatorname{mean}_{p \in P}\left[I_{\mu, r}(p)\right]$$

Step 4: Creates and normalizes the CLBP_C histogram, which is expressed as follows:

(10)
$$H_{r}(i)=\frac{1}{|P|} \sum_{p \in P} \delta\left(CLBP\_C_{N, 2^{r-1}}(p)-i\right)$$

where $i \in\{0, 1\}$, $|P|$ stands for the size of $P$, i.e., the size of the image, and $\delta$ stands for the Kronecker delta.

As an outcome, the extracted total feature dimension is $2M$.
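Steps 3 and 4 above can be sketched as follows, again assuming the $N$ directional responses at one resolution are stacked in one array; the function name is illustrative.

```python
import numpy as np

def clbp_c_histogram(y_r):
    """CLBP_C: two-bin histogram comparing the direction-averaged
    magnitude image with its global mean (Eqs. (7)-(10))."""
    i_mu = np.abs(y_r).mean(axis=0)             # Eq. (8): I_mu,r(p)
    g = i_mu.mean()                             # Eq. (9): global average
    code = (i_mu >= g).astype(np.int64)         # Eq. (7): t(I_mu,r(p), mu(I_mu,r))
    hist = np.bincount(code.ravel(), minlength=2).astype(np.float64)
    return hist / code.size                     # Eq. (10): normalize by |P|
```

Concatenating the $M$ per-resolution two-bin histograms gives the $2M$-dimensional CLBP_C feature.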

3. Experiment and Results

The experiments are conducted in two groups on 6 databases: Corel [16] and VisTex [17]; Corel_MR and VisTex_MR with scale-variant images; and Corel_MD and VisTex_MD with rotation-variant images. The first group compares the methods that use only partial features employed in the proposed method against the entire proposed method. The second group compares the CLBP method and the color autocorrelogram method that use R, G, and B components with the proposed method that uses H, S, and V components.

The similarity for the comparison is measured by the Mahalanobis distance [18] with a diagonal covariance, where each feature component is normalized by its standard deviation. The performance of image retrieval is evaluated in terms of precision and recall [19]. Precision is computed as the percentage of relevant images among the retrieved images for a query image. Recall is computed as the percentage of relevant images retrieved over the total relevant images for a query image. In this experiment, the proposed method has 152 dimensions: CLBP_S (40), CLBP_M (40), CLBP_C (8), and color autocorrelogram (64), as shown in Table 1.
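The distance and evaluation measures above can be sketched as follows; `sigma` would hold the per-component standard deviations estimated over the whole database (hypothetical names).

```python
import numpy as np

def normalized_distance(q, x, sigma):
    """Mahalanobis distance with a diagonal covariance: each feature
    component is normalized by its standard deviation over the database."""
    return np.sqrt((((q - x) / sigma) ** 2).sum())

def precision_recall(retrieved, relevant):
    """Precision: relevant retrieved / retrieved;
    recall: relevant retrieved / total relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```

Ranking the database by this distance and sweeping the number of retrieved images traces out the precision-versus-recall curves reported in Figs. 2 and 3.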

Table 1.
Color spaces and dimensions of retrieval methods used in the experiments

Fig. 2 shows the precision versus recall for comparing the partial features of the proposed method with the proposed method, for the 6 databases. Fig. 3 shows the precision versus recall for comparing the methods using R, G, and B components with the proposed methods, for the 6 databases.

Fig. 2.
The precision versus recall for comparing the separate methods employed in the proposed method with the proposed method for 6 databases: (a) Corel, (b) VisTex, (c) Corel_MR, (d) VisTex_MR, (e) Corel_MD, and (f) VisTex_MD.
Fig. 3.
The precision versus recall for comparing the existing CLBP method and color autocorrelogram method using R, G and B components with the proposed method for 6 databases: (a) Corel, (b) VisTex, (c) Corel_MR, (d) VisTex_MR, (e) Corel_MD, and (f) VisTex_MD.

The average gains of the proposed method over the methods of partial features are also investigated. In the first experiment, the average gains are 26.5% and 14.17% in Corel and 31.15% and 20.97% in VisTex; 24.75% and 12.56% in Corel_MR and 33.4% and 21.91% in VisTex_MR; 24.4% and 12.11% in Corel_MD and 35.45% and 23.13% in VisTex_MD, respectively.

In the second experiment, the average gains of the proposed method over the methods using R, G and B components are 22.96% and 12.88% in Corel and 9.3% and 6.96% in VisTex; 18.25% and 9.45% in Corel_MR and 11.01% and 7.51% in VisTex_MR; and 18.15% and 9.44% in Corel_MD and 15.16% and 10.42% in VisTex_MD, respectively. As a result, the proposed method is superior to the methods using partial features employed in the proposed method and the CLBP and color autocorrelogram methods using R, G, and B components.

Additionally, we compare the retrieval performance of the proposed method to that of the multichannel decoded LBP on Corel-1K database under the same condition of [12]. The proposed method shows the precision of 78.3%, which is 3.1% higher than that of the latter (74.93%).

4. Conclusion

In this paper, a combined method of CLBP based on MRMD filtering and color autocorrelogram is proposed. CLBP features are extracted in an MRMD filtered domain of the V component. Color autocorrelogram features are extracted in the two dimensions of the H and S components. In the experiments, the proposed method is compared with the separate methods employed in the proposed method, the CLBP method and the color autocorrelogram method using R, G, and B components, and the multichannel decoded LBP method. As a result, the proposed method outperforms these conventional methods. Our future research will include inventing a scale-invariant feature extraction method efficient for various scale-variant images.

Acknowledgement

This study was supported by the BK21 Plus project (SW Human Resource Development Program for Supporting Smart Life) funded by the Ministry of Education, School of Computer Science and Engineering, Kyungpook National University, Korea (No. 21A20131600005).

Biography

Hee-Hyung Bu
https://orcid.org/0000-0003-1637-6523

She received B.S., M.S., and Ph.D. degrees in Computer Engineering from Mokpo National University (Jeonnam, Korea), Chonnam National University (Gwangju, Korea), and Kyungpook National University (Daegu, Korea) in 2004, 2006, and 2013, respectively. Since September 2019, she has been with the School of Computer Science & Engineering at Kyungpook National University, Daegu, Korea, where she is currently an invited professor. Her research interests include image retrieval, video compression, and image processing.

Biography

Nam-Chul Kim
https://orcid.org/0000-0001-8880-0958

He received B.S. degree in Electronic Engineering from Seoul National University, in 1978, and M.S. and Ph.D. degrees in Electrical Engineering from the Korea Advanced Institute of Science and Technology, Seoul, Korea, in 1980 and 1984, respectively. Since March 1984, he has been with the School of Electronics Engineering at Kyungpook National University, Daegu, Korea, where he is currently a full professor. During 1991–1992, he was a visiting scholar in the Department of Electrical and Computer Engineering, Syracuse University, Syracuse, NY, USA. His research interests are image processing and computer vision, biomedical image processing, and image and video coding.

Biography

Byoung-Ju Yun
https://orcid.org/0000-0002-9898-2262

He received the Ph.D. degree in electrical engineering and computer science from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2002. From 1996 to May 2003, he was with SK Hynix Semiconductor Inc., where he worked as a senior engineer. From June 2003 to February 2005, he was with the Center for Next Generation Information Technology, Kyungpook National University, where he worked as an assistant professor. Since March 2005, he has been with the School of Electronics Engineering, where he works as an invited professor. His current research interests include image processing, color consistency, multimedia communication systems, HDR color image enhancement, biomedical image processing, and HCI.

Biography

Sung-Ho Kim
https://orcid.org/0000-0001-8569-6825

He received his B.S. degree in Electronics from Kyungpook National University, Korea in 1981, and his M.S. and Ph.D. degrees in Computer Science from the Korea Advanced Institute of Science and Technology, Korea in 1983 and 1994, respectively. He has been a faculty member of the School of Computer Science & Engineering at Kyungpook National University since 1986. His research interests include real-time image processing and telecommunication, multi-media systems, etc.

References

  • 1 Z. Tang, M. Ling, H. Yao, Z. Qian, X. Zhang, J. Zhang, and S. Xu, "Robust image hashing via random Gabor filtering and DWT," Computers, Materials & Continua, vol. 55, no. 2, pp. 331-344, 2018.
  • 2 L. Chen, H. C. Chen, Z. Li, and Y. Wu, "A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging," Human-centric Computing and Information Sciences, vol. 7, article no. 35, 2017.
  • 3 S. Akbarov and M. Mehdiyev, "The interface stress field in the elastic system consisting of the hollow cylinder and surrounding elastic medium under 3D non-axisymmetric forced vibration," Computers, Materials & Continua, vol. 54, no. 1, pp. 61-81, 2018.
  • 4 E. Hadjidemetriou, M. D. Grossberg, and S. K. Nayar, "Multiresolution histograms and their use for texture classification," in Proceedings of the 3rd International Workshop on Texture Analysis and Synthesis, Nice, France, 2003.
  • 5 J. Huang, S. R. Kumar, M. Mitra, W. J. Zhu, and R. Zabih, "Image indexing using color correlograms," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997, pp. 762-768.
  • 6 J. Han and K. K. Ma, "Rotation-invariant and scale-invariant Gabor features for texture image retrieval," Image and Vision Computing, vol. 25, no. 9, pp. 1474-1481, 2007. doi: 10.1016/j.imavis.2006.12.015
  • 7 Y. D. Chun, N. C. Kim, and I. H. Jang, "Content-based image retrieval using multiresolution color and texture features," IEEE Transactions on Multimedia, vol. 10, no. 6, pp. 1073-1084, 2008. doi: 10.1109/TMM.2008.2001357
  • 8 M. H. Rahman, M. R. Pickering, M. R. Frater, and D. Kerr, "Texture feature extraction method for scale and rotation invariant image retrieval," Electronics Letters, vol. 48, no. 11, pp. 626-627, 2012.
  • 9 R. D. Gupta, J. K. Dash, and M. Sudipta, "Rotation invariant textural feature extraction for image retrieval using eigen value analysis of intensity gradients and multi-resolution analysis," Pattern Recognition, vol. 46, no. 12, pp. 3256-3267, 2013. doi: 10.1016/j.patcog.2013.05.026
  • 10 C. Li, G. Duan, and F. Zhong, "Rotation invariant texture retrieval considering the scale dependence of Gabor wavelet," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2344-2354, 2015. doi: 10.1109/TIP.2015.2422575
  • 11 H. H. Bu, N. C. Kim, C. J. Moon, and J. H. Kim, "Content-based image retrieval using combined color and texture features extracted by multi-resolution multi-direction filtering," Journal of Information Processing Systems, vol. 13, no. 3, pp. 464-475, 2017. doi: 10.3745/JIPS.02.0060
  • 12 S. R. Dubey, S. K. Singh, and R. K. Singh, "Multichannel decoded local binary patterns for content-based image retrieval," IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4018-4032, 2016. doi: 10.1109/TIP.2016.2577887
  • 13 Z. Guo, L. Zhang, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657-1663, 2010. doi: 10.1109/TIP.2010.2044957
  • 14 T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002. doi: 10.1109/TPAMI.2002.1017623
  • 15 H. H. Bu, N. C. Kim, K. W. Park, and S. H. Kim, "Content-based image retrieval using combined texture and color features based on multi-resolution multi-direction filtering and color autocorrelogram," Journal of Ambient Intelligence and Humanized Computing, 2019. doi: 10.1007/s12652-019-01466-0
  • 16 Y. D. Chun, S. Y. Seo, and N. C. Kim, "Image retrieval using BDIP and BVLC moments," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 9, pp. 951-957, 2003. doi: 10.1109/TCSVT.2003.816507
  • 17 R. Pickard, C. Graszyk, S. Mann, J. Wachman, L. Pickard, and L. Campbell, 1995 (Online). Available: https://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
  • 18 W. Y. Ma and B. S. Manjunath, "A comparison of wavelet transform features for texture image annotation," in Proceedings of the International Conference on Image Processing, Washington, DC, 1995, pp. 256-259.
  • 19 D. Comaniciu, P. Meer, K. Xu, and D. Tyler, "Retrieval performance improvement through low rank corrections," in Proceedings of the IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL), Fort Collins, CO, 1999, pp. 50-54.

Table 1.

Color spaces and dimensions of retrieval methods used in the experiments

Method                        | Color space | Dimension
CLBP                          | RGB         | 186
Color autocorrelogram         | RGB         | 216 (6×6×6)
CLBP + Color autocorrelogram  | RGB         | 250 (186, 64)
RULBP (CLBP_S)                | V           | 40
CLBP_M                        | V           | 40
CLBP_C                        | V           | 8
Color autocorrelogram         | HS          | 64
Proposed                      | HSV         | 152 (88, 64)