
Shape Description and Retrieval Using Included-Angular Ternary Pattern

Guoqing Xu*, Ke Xiao**, and Chen Li**

Abstract: Shape description is a fundamental issue in content-based image retrieval (CBIR), and a number of shape description methods have been reported in the literature. For shape description, both global information and local contour variations play important roles. In this paper a new included-angular ternary pattern (IATP) based shape descriptor is proposed for shape image retrieval. For each point on the shape contour, the IATP is derived from its neighbor points, and it has good properties for shape description: it is intrinsically invariant to rotation, translation and scaling. To enhance the description capability, a multiscale IATP histogram is presented to describe both local and global shape information, and it is combined with an included-angular histogram for efficient shape retrieval. In the matching stage, cosine distance is used to measure the similarity of shape features. Image retrieval experiments are conducted on the standard MPEG-7 shape database and the Swedish leaf database, and the retrieval performance of the proposed method is compared with other shape descriptors using the standard evaluation method. The experimental results indicate that the proposed method reaches higher precision at the same recall value than the compared description methods.

Keywords: Image Retrieval , Included-Angular Ternary Pattern , Multiscale , Shape Description

1. Introduction

With the wide application of image acquisition equipment, the number of images has increased rapidly. For large image databases, how to retrieve desirable images efficiently becomes an important issue. Content-based image retrieval (CBIR) is more advantageous than traditional text-based image retrieval in terms of retrieval efficiency and accuracy, especially for large image databases. Therefore, CBIR has received much attention from researchers in the fields of information retrieval and computer vision [1]. In CBIR, low-level visual features such as color, texture and shape are automatically extracted to represent images. As one of the primary visual contents of images, shape has good discrimination power and is an important cue for indexing images. Moreover, shape features represent objects at a higher level than color or texture [2], and play an important role in a diverse range of applications.

Since shape information is so important, how to effectively represent and extract the shape features of objects has always been a hot topic in computer vision and image understanding. Many studies focus on shape feature description and matching in computer vision [1-3]. Various shape descriptors have been proposed in these studies and significant progress has been made in shape feature description. Existing shape descriptors can be roughly divided into two categories: contour-based methods and region-based methods [2,3]. In most cases the contour is more important than the inner region of a shape, and humans can distinguish shapes easily from contours alone. Contour-based methods are also easy to implement, so contour-based shape description and matching techniques are quite popular. Representative methods include curvature scale space (CSS), Fourier descriptor (FD), shape context (SC), inner-distance shape context (IDSC), contour points distribution histogram (CPDH), contour flexibility, and so on [4].

In the CSS method, the zero-crossing points of the curvature function defined on the shape contour are located at different scales. These zero-crossing points form a CSS map, and the maxima of the CSS map contours are used for shape matching. FD is a classical method that represents a two-dimensional (2D) contour using a one-dimensional (1D) shape signature; the descriptor is formed from the coefficients obtained by applying the Fourier transform to the shape signature [5]. However, the choice of shape signature has an important effect on FD's retrieval performance. The SC method uses a 2D histogram to represent the relative spatial distribution of landmark points around contour points, and these histograms are matched using a statistical measure. IDSC is an extended version of SC suitable for complicated shapes; dynamic programming is used to match IDSC [6]. CPDH describes shapes using the distribution of points on the object contour under polar coordinates [7]. The contour flexibility method represents the deformable potential at each point along a contour [4]. Several new shape descriptors make good use of the geometric features of contour points and achieve good retrieval performance, such as the farthest distance function [8], arch-height function [3], triangle-area representation [9], angular function [10] and height function [11]. It is worth noting that the retrieval performance of some methods can be further improved by incorporating a multiscale framework [12], and the multiscale-arch-height descriptor is one of the representatives [3].

Angular features derived from shape contour points have good properties: they are intrinsically invariant to rotation, translation and scaling, and are easy to extend to multiscale description. Hence angle is a promising feature for shape matching, and it is used in references [1,6,10,13] for shape description and matching. The angular pattern (AP) and binary angular pattern (BAP) are proposed in [10] for shape image retrieval. In this method, the AP is calculated for each contour point (the value of AP ranges from 0 to 2π). Then for each point on the contour, denoted by pi, symmetric neighbor points are located, and the APs at these points are compared with that at pi for feature coding. BAP is good at describing convex strands and concave strands of a contour by feature coding. In fact, line segments also exist on local shape contours, not just convex strands and concave strands. However, contour points on both line segments and concave strands are coded with 0 in BAP, so line segments cannot be distinguished from concave strands. As a result, important discriminative information is lost in BAP. An example is shown in Fig. 1.

The shape contour in Fig. 1 belongs to the bone class of the MPEG-7 shape database. Contour points pi and pj are on a line segment and a concave strand, respectively. They play important roles in the description and identification of bone samples. But the 4-bit BAPs at these two points are both [0 0 0 0] according to Hu et al. [10], so the two points cannot be effectively distinguished using BAP. In order to describe shape effectively using angle information, the included-angular ternary pattern (IATP) is proposed for shape retrieval in this paper. The main contributions of our work can be summarized as follows.

(1) IATP is proposed to describe contour points, and it can clearly distinguish line segments, convex strands and concave strands on shape contours.

(2) IATP is extended in a multiscale framework to form a rich descriptor, which yields better shape retrieval results.

2. Included-Angular Ternary Pattern

For a given shape, its contour is first detected and sampled anticlockwise at equal intervals. The number of sampled points is denoted by N. A shape contour can therefore be represented by an ordered set of points, [TeX:] $$C=\left\{p_{i}=\left(x_{i}, y_{i}\right), i=1,2, \ldots, N\right\},$$ where xi and yi are the coordinates of point pi. Two definitions are given below.

DEFINITION 1. The included angle [TeX:] $$\theta_{i}$$ at [TeX:] $$p_{i}$$ is defined as follows: along the forward and backward directions of the contour, two neighbor points [TeX:] $$p_{i+d} \text { and } p_{i-d}$$ are located on the contour, satisfying the constraint that the number of sampled points between pi and either neighbor point is d, where d is a positive integer. Points [TeX:] $$p_{i+d} \text { and } p_{i}$$ are connected to obtain the segment [TeX:] $$p_{i+d} p_{i},$$ and the same operation is applied to [TeX:] $$p_{i-d} \text { and } p_{i}.$$ The two segments form the included angle [TeX:] $$\theta_{i}$$ at point pi, where [TeX:] $$\theta_{i} \in[0, \pi].$$ It is calculated as follows:

(1)
[TeX:] $$d_{i 1}=\sqrt{\left(x_{i}-x_{i+d}\right)^{2}+\left(y_{i}-y_{i+d}\right)^{2}}$$

(2)
[TeX:] $$d_{i 2}=\sqrt{\left(x_{i}-x_{i-d}\right)^{2}+\left(y_{i}-y_{i-d}\right)^{2}}$$

(3)
[TeX:] $$d_{i}=\sqrt{\left(x_{i+d}-x_{i-d}\right)^{2}+\left(y_{i+d}-y_{i-d}\right)^{2}}$$

(4)
[TeX:] $$\theta_{i}=\cos ^{-1}\left(\frac{d_{i 1}^{2}+d_{i 2}^{2}-d_{i}^{2}}{2 d_{i 1} \times d_{i 2}}\right)$$

It can be seen that the included angle captures geometric information of contour points. The included angle is clearly invariant to rotation and translation, and it is also scale invariant: if the contour is scaled, the lengths [TeX:] $$d_{i 1}, d_{i 2} \text { and } d_{i}$$ change in the same proportion as the contour, which has no effect on [TeX:] $$\theta_{i},$$ as can be seen from formula (4).
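As an illustration, the following minimal sketch (in Python with NumPy; the function name included_angle and the array layout are our own choices, not part of the original formulation) shows how formulas (1)-(4) can be computed for a closed contour stored as an N×2 array of sampled points.

import numpy as np

def included_angle(contour, i, d):
    """Included angle theta_i at contour point p_i, following Eqs. (1)-(4).

    contour: (N, 2) array of sampled contour points (x, y), ordered anticlockwise.
    i: index of the point p_i.
    d: number of sampled points between p_i and each neighbor.
    """
    n = len(contour)
    p_i = contour[i]
    p_fwd = contour[(i + d) % n]   # p_{i+d}, wrapping around the closed contour
    p_bwd = contour[(i - d) % n]   # p_{i-d}

    d_i1 = np.linalg.norm(p_i - p_fwd)    # Eq. (1)
    d_i2 = np.linalg.norm(p_i - p_bwd)    # Eq. (2)
    d_i = np.linalg.norm(p_fwd - p_bwd)   # Eq. (3)

    # Eq. (4): law of cosines; clip guards against rounding slightly outside [-1, 1]
    cos_theta = (d_i1**2 + d_i2**2 - d_i**2) / (2.0 * d_i1 * d_i2)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))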

DEFINITION 2. For each contour C, the IATP at pi is defined as follows: the included angles at points [TeX:] $$p_{i+d}$$ and [TeX:] $$p_{i-d}$$ are calculated according to Definition 1 and denoted by [TeX:] $$\theta_{i+d}, \theta_{i-d}$$ respectively. Then [TeX:] $$\theta_{i+d}$$ and [TeX:] $$\theta_{i-d}$$ are compared with [TeX:] $$\theta_{i}$$ for included-angular coding. The coding rules contain three cases, corresponding to convex strands, line segments and concave strands, respectively. If the included angle at a neighbor point is larger than that at pi and the difference exceeds a certain threshold, it is coded with 1. If the included angle at a neighbor point is smaller than that at pi and the absolute difference exceeds the threshold, it is coded with -1. Otherwise, it is coded with 0. Taking the comparison between [TeX:] $$\theta_{i+d} \text { and } \theta_{i}$$ as an example, the included-angular coding rule for [TeX:] $$\theta_{i+d}$$ at pi is as follows:

(5)
[TeX:] $$\mathrm{TIA}_{l}=\left\{\begin{array}{cl}{1} & {\theta_{i+d}-\theta_{i}>\theta_{\mathrm{th}}} \\ {0} & {-\theta_{\mathrm{th}} \leq \theta_{i+d}-\theta_{i} \leq \theta_{\mathrm{th}}} \\ {-1} & {\theta_{i+d}-\theta_{i}<-\theta_{\mathrm{th}}}\end{array}\right.$$

where [TeX:] $$\mathrm{TIA}_{l}$$ is the IATP at [TeX:] $$p_{i} \text { for } p_{i+d}, \text { and } \theta_{\mathrm{th}}$$ is a positive threshold. The IATP at [TeX:] $$p_{i} \text { for } p_{i-d}$$ is calculated in the same way and is denoted by [TeX:] $$\mathrm{TIA}_{r}.$$ The IATP at pi is then coded as [TeX:] $$\left[\mathrm{TIA}_{l}, \mathrm{TIA}_{r}\right].$$
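A minimal sketch of the coding rule in Eq. (5) and of the resulting two-element IATP of Definition 2 might look as follows (Python; it reuses the included_angle helper sketched above, and the names ternary_code and iatp_at_point are ours).

def ternary_code(theta_neighbor, theta_i, theta_th):
    """Ternary included-angular code of Eq. (5): returns 1, 0 or -1."""
    diff = theta_neighbor - theta_i
    if diff > theta_th:
        return 1
    elif diff < -theta_th:
        return -1
    return 0

def iatp_at_point(contour, i, d, theta_th):
    """IATP [TIA_l, TIA_r] at point p_i for a single scale d (Definition 2)."""
    n = len(contour)
    theta_i = included_angle(contour, i, d)
    tia_l = ternary_code(included_angle(contour, (i + d) % n, d), theta_i, theta_th)
    tia_r = ternary_code(included_angle(contour, (i - d) % n, d), theta_i, theta_th)
    return tia_l, tia_r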

Take points pi and pj on the contour in Fig. 1 as an example. The IATP at pi for its neighbor points marked with ο is [0 0], while the IATP at pj for its neighbor points marked with ο is [1 1]. The IATP values differ at the two points because they lie on a line segment and a concave strand, respectively. Hence IATP is a powerful feature for shape description. The IATP is calculated for each point on contour C, and the code vector is formed as

(6)
[TeX:] $$\mathrm{TIA}=\left[\mathrm{TIA}_{l 1}, \mathrm{TIA}_{r 1} ; \ldots ; \mathrm{TIA}_{l N}, \mathrm{TIA}_{r N}\right]$$

where [TeX:] $$\mathrm{TIA}_{l i}$$ is the IATP at pi for neighbor point [TeX:] $$p_{i-d}$$, and [TeX:] $$\mathrm{TIA}_{r i}$$ is the IATP at pi for neighbor point [TeX:] $$p_{i+d}$$; their values belong to {-1, 0, 1}. TIA is the IATP for contour C. For each point there are 9 possible patterns: [1 1], [1 0], [1 -1], [0 1], [0 0], [0 -1], [-1 1], [-1 0], and [-1 -1]. These patterns are too few for a large shape dataset, because they do not provide sufficiently discriminative information. Therefore, IATP is extended in a multiscale framework to enhance its discriminating power.

From the construction of IATP it can be seen that the parameter d has an important impact on the discriminative power of IATP. When d is small, IATP captures local information of contour points, while with a large d it captures global information. Hence we extend IATP in a multiscale framework according to parameter d, to describe the contour from coarse to fine.

We can obtain multiscale IATP codes with different values of d. For example, when d is set to [TeX:] $$d_{1} \text { and } d_{2}$$ [TeX:] $$\left(d_{1}<d_{2}\right),$$ four neighbor points are found for pi, and a 4-bit IATP is obtained after comparing their included angles with that at pi; the total number of IATP patterns is then 81. Similarly, 6-bit, 8-bit and 10-bit IATPs can be constructed, with [TeX:] $$3^{6}, 3^{8}, 3^{10}$$ patterns respectively, so sufficient discriminative information can be provided by these patterns. However, because [TeX:] $$3^{8}=6561$$ is much larger than the number of contour points, such an IATP would be a high-dimensional vector that is not suitable for shape description. Hence, only 4-bit and 6-bit IATPs are studied here.

To retrieve shapes efficiently using IATP, an IATP histogram is extracted for each shape contour. The histogram for the 4-bit IATP is obtained according to the following steps:

Step 1. For each contour, the number of contour points N and the parameters [TeX:] $$d_{1}, d_{2}$$ are set first. Then for each point pi the four neighbor points [TeX:] $$\left(p_{i-d_{2}}, p_{i-d_{1}}, p_{i+d_{1}}, p_{i+d_{2}}\right)$$ are found and their included angles are computed for comparison, giving the 4-bit IATP at pi, [TeX:] $$\left[\mathrm{TIA}_{i 1}, \mathrm{TIA}_{i 2}, \mathrm{TIA}_{i 3}, \mathrm{TIA}_{i 4}\right].$$

Step 2. Determine the histogram index of the 4-bit IATP. [TeX:] $$\left[\mathrm{TIA}_{\mathrm{i1}}, \mathrm{TIA}_{\mathrm{i} 2}, \mathrm{TIA}_{\mathrm{i3}}, \mathrm{TIA}_{\mathrm{i4}}\right]$$ belongs to the [TeX:] $$k^{\mathrm{th}}$$ bin of histogram [TeX:] $$H(1 \leq k \leq 81),$$ where k is computed as below:

(7)
[TeX:] $$\text {index}_{i}=3^{0} \cdot \mathrm{TIA}_{i 1}+3^{1} \cdot \mathrm{TIA}_{i 2}+3^{2} \cdot \mathrm{TIA}_{i 3}+3^{3} \cdot \mathrm{TIA}_{i 4}$$

(8)
[TeX:] $$k=i n d e x_{i}+41$$

[TeX:] $$\text {index}_{i}$$ is the index of the 4-bit IATP at pi, with a value ranging from -40 to 40. Since the histogram bin index ranges from 1 to 81, [TeX:] $$\text {index}_{i}$$ is mapped to the corresponding histogram bin by Eq. (8).

Step 3. Construct each bin of histogram H. For each [TeX:] $$\text {index}_{i},$$ the [TeX:] $$k^{\text {th }}$$ bin of histogram H is updated as below:

(9)
[TeX:] $$H(k)=H(k)+1$$
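Steps 1-3 can be sketched as below (Python; it reuses included_angle and ternary_code from the earlier sketches). The ordering of the four ternary codes and the convention of comparing each neighbor's included angle at its own scale are our assumptions; any consistent ordering yields an equivalent 81-bin histogram up to a permutation of bins.

def iatp_histogram_4bit(contour, d1, d2, theta_th):
    """81-bin histogram of the 4-bit multiscale IATP (Steps 1-3, Eqs. (7)-(9))."""
    n = len(contour)
    hist = np.zeros(81, dtype=int)
    for i in range(n):
        codes = []
        for d in (d1, d2):
            # Included angle at p_i for this scale, compared with both neighbors
            theta_i = included_angle(contour, i, d)
            codes.append(ternary_code(included_angle(contour, (i + d) % n, d), theta_i, theta_th))
            codes.append(ternary_code(included_angle(contour, (i - d) % n, d), theta_i, theta_th))
        # Eq. (7): base-3 weighting of the four ternary codes -> index in [-40, 40]
        index_i = sum(c * 3**b for b, c in enumerate(codes))
        k = index_i + 41      # Eq. (8): map to bin number 1..81
        hist[k - 1] += 1      # Eq. (9), using a 0-based array position
    return hist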

Fig. 2.

Example of 4-bit IATP for bone shapes: (c) and (d) are bone shape samples, and (a) and (b) are their corresponding histograms, respectively.

Figs. 2 and 3 show histograms of the 4-bit IATP for shapes belonging to the bone and butterfly classes, respectively. The shape in Fig. 2(c) is the same as that in Fig. 1. In Fig. 2(a), 2(b), 3(a) and 3(b), the horizontal axis shows the histogram bin indices and the vertical axis represents the number of contour points falling into each bin.

To retrieve shape images, the distance between each pair of IATP histograms is calculated. Given the IATP histograms [TeX:] $$H_{A} \text { and } H_{B}$$ for shapes A and B, respectively, their distance dist(A,B) measured by the cosine distance is

(10)
[TeX:] $$\operatorname{dist}(A, B)=\frac{\sum_{i=1}^{D} H_{A}(i) \times H_{B}(i)}{\left\|H_{A}\right\| \times\left\|H_{B}\right\|}$$

where D is the number of bins in [TeX:] $$H_{A} \text { and } H_{B},$$ and [TeX:] $$\|\cdot\|$$ represents the magnitude of a vector. dist(A,B) ranges from 0 to 1.
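A direct implementation of Eq. (10) could be the following (Python/NumPy; the function name is ours). As written, Eq. (10) is the cosine of the angle between the two histograms, so retrieval is assumed to rank candidates by decreasing value of this measure.

def cosine_measure(h_a, h_b):
    """Value of Eq. (10) between two IATP histograms; larger means more similar."""
    h_a = np.asarray(h_a, dtype=float)
    h_b = np.asarray(h_b, dtype=float)
    return float(np.dot(h_a, h_b) / (np.linalg.norm(h_a) * np.linalg.norm(h_b)))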

Fig. 3.

Example of 4-bit IATP for butterfly shapes: (c) and (d) are butterfly shape samples, and (a) and (b) are their corresponding histograms, respectively.

3. Shape Retrieval Experimental Testing

To evaluate the effectiveness of the proposed IATP, retrieval tests are conducted on the MPEG-7 Set B contour shape database and the Swedish leaf database, and the retrieval performance is compared with that of other methods.

3.1 Shape Databases

The MPEG-7 Set B shape database contains 70 shape categories with 20 similar shapes in each category; representative images are shown in Fig. 4. It is a commonly used database for testing similarity-based retrieval. The Swedish leaf database consists of isolated leaves from 15 different Swedish tree categories, with 75 leaves per category. This database is known for its high inter-class similarity and large intra-class differences. Samples from the 15 leaf categories are shown in Fig. 5. In our retrieval experiments, the number of sampled points for each shape contour is 200.

Fig. 4.

Representative images in MPEG-7 Set B.

Fig. 5.

Representative images in the Swedish leaf database.

3.2 Evaluation Measure

To evaluate the retrieval performance of the proposed IATP, the retrieval results are measured using precision-recall curves. Given a query shape, the total number of retrieved shape images is [TeX:] $$m_{1},$$ the number of retrieved shapes that belong to the same class as the query shape is c, and the number of shapes in the database that belong to the same class as the query shape is [TeX:] $$m_{2}.$$ The precision and recall for this query shape are calculated as

(11)
[TeX:] $$P=c / m_{1}, R=c / m_{2}$$

[TeX:] $$m_{2}$$ is 20 for MPEG-7 Set B and 75 for the Swedish leaf database. Each shape in the database is taken as a query, and for each database the final precision is the average over all queries at each recall level.
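For a single query, Eq. (11) can be computed as in the following sketch (Python; the argument names and the ranked-list representation of the retrieval result are our assumptions).

def precision_recall(ranked_labels, query_label, m1, m2):
    """Precision and recall of Eq. (11) for one query.

    ranked_labels: class labels of the retrieved shapes, best match first.
    query_label:   class label of the query shape.
    m1:            number of retrieved shapes considered.
    m2:            number of shapes of the query's class in the database
                   (20 for MPEG-7 Set B, 75 for the Swedish leaf database).
    """
    c = sum(1 for lbl in ranked_labels[:m1] if lbl == query_label)
    return c / m1, c / m2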

The proposed method is compared with other notable commonly used or recently proposed shape descriptors on both databases. The compared descriptors include the farthest point distance (FPD), the angular radius function (ARF), BAP, and the sequency-ordered complex Hadamard transform based descriptor (SCHD) [14].

FPD is used for comparison because it has better retrieval performance than the Zernike moment method and CSS on the MPEG-7 database. ARF is chosen because both our method and ARF are based on angular information. BAP was recently proposed in [10] and our method is inspired by it. In the shape matching stage, the sequential forward selection (SFS) scheme is used for BAP, which achieves better retrieval performance than IDSC and CSS. To facilitate the performance comparison, the same scale parameters were used for both BAP and IATP.

3.3 Results and Discussion

As indicated in Section 2, the parameter d has a crucial effect on the performance of the IATP histogram: IATP represents local contour information with a small d and captures global contour features with a large d. We therefore test the proposed method with different values of d. Considering that a good shape descriptor should be compact and that IATP is extended in a multiscale framework, shape retrieval experiments are first conducted using single-scale IATP with d=1–16 on the MPEG-7 database. The average precision-recall results on the MPEG-7 database are shown in Table 1.

Table 1.

Precision (%) for IATP with d=1–16
Recall (%)  10  20  30  40  50  60  70  80  90  100
d=1 30.2 14.0 10.9 9.2 8.1 7.1 6.3 5.7 5.0 3.2
d=2 30.9 14.7 11.2 9.7 8.7 7.7 6.9 6.0 5.2 3.3
d=3 33.8 17.5 13.9 11.3 9.7 8.5 7.4 6.3 5.3 3.3
d=4 35.1 19.6 15.4 13.5 11.8 10.4 9.1 7.8 6.2 3.9
d=5 35.7 19.7 15.9 14.0 12.3 11.0 9.8 8.3 6.5 4.4
d=6 37.2 20.2 15.6 13.3 11.5 9.9 8.5 7.3 6.0 3.9
d=7 36.8 21.3 16.6 13.9 11.8 10.1 8.3 7.1 5.9 3.8
d=8 36.9 20.2 15.5 13.2 11.1 9.6 8.0 6.7 5.4 4.0
d=9 36.3 19.5 15.4 13.3 11.2 9.4 7.8 6.4 5.1 3.8
d=10 36.0 19.8 15.8 13.6 11.4 9.1 7.1 5.9 4.8 3.3
d=11 36.2 19.0 14.3 11.7 9.7 7.7 6.2 5.4 4.2 2.8
d=12 35.6 18.8 13.8 11.6 9.6 8.1 6.3 5.4 4.1 2.9
d=13 34.8 17.9 13.3 11.3 9.4 7.8 6.3 5.4 3.9 3.0
d=14 35.3 17.7 13.3 11.1 9.5 8.1 6.6 5.7 4.4 3.3
d=15 33.9 17.7 13.2 10.8 9.1 8.2 7.1 6.1 5.1 3.5
d=16 33.1 17.3 13.0 10.8 9.1 7.8 6.7 5.8 4.9 4.0

As can be seen in Table 1, IATP with d=5 achieves better retrieval performance than the other settings in most cases. However, this performance is still not good enough, because there are not enough patterns to distinguish so many shape samples. Hence IATP in the multiscale framework is tested next.

As indicated in Section 2, in the multiscale framework the dimension of the 8-bit IATP is too high to be suitable for describing shape features, so only 4-bit and 6-bit IATPs are tested. Each shape in the MPEG-7 database is taken as a query, and parameter d is set to 5 as indicated by Table 1. The average precision-recall results of multiscale IATP on the MPEG-7 database are shown in Table 2.

Comparing Table 2 with Table 1, it can be seen that the performance of multiscale IATP is much better than that of single-scale IATP, and the 6-bit IATP performs better than the 4-bit IATP.

Table 2.

Precision (%) for multiscale IATP
Recall (%)  10  20  30  40  50  60  70  80  90  100
4-bit 56.9 32.6 24.7 20.2 17.2 14.7 12.5 10.4 8.0 4.7
6-bit 66.5 40.6 31.1 25.4 21.0 18.1 15.5 12.8 10.4 6.3

However, the performance of the 6-bit IATP alone is still not satisfactory for shape retrieval, because IATP only captures the relative magnitudes of adjacent included angles and discards their more precise values. To make effective use of the included angle for better retrieval performance, an included-angular histogram is constructed and integrated with the multiscale IATP. The included-angular histogram has 24 bins, considering that the included angle ranges over [0, π], and its construction is the same as that of the IATP histogram. Fig. 6 shows the precision-recall plots of the proposed method and the compared methods on the MPEG-7 database, and Fig. 7 shows the corresponding plots on the Swedish leaf database. The precision-recall data of SCHD in Fig. 6 come from Wang et al. [14], and those of FPD and ARF come from Xu et al. [15].
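One possible construction of the 24-bin included-angular histogram and its integration with the multiscale IATP histogram is sketched below (Python/NumPy, reusing the helpers above). Uniform binning over [0, π] and simple concatenation of the two histograms are our assumptions, since the exact fusion scheme is not detailed here.

def included_angle_histogram(contour, d, n_bins=24):
    """24-bin histogram of included angles over [0, pi] (assumed uniform binning)."""
    angles = np.array([included_angle(contour, i, d) for i in range(len(contour))])
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    return hist

def combined_descriptor(contour, d1, d2, theta_th):
    """One possible integration: concatenate the multiscale IATP histogram with the
    included-angular histogram (the fusion scheme and the scale d1 are assumptions)."""
    return np.concatenate([iatp_histogram_4bit(contour, d1, d2, theta_th),
                           included_angle_histogram(contour, d1)])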

Fig. 6 demonstrates that among the competing methods, the proposed method performs slightly better than BAP, FPD, and SCHD on the MPEG-7 Set B database, and all of them exceed ARF. Fig. 7 shows that the proposed method reaches the highest precision at the same recall values compared with the other methods on the Swedish leaf database. These retrieval results demonstrate that the integrated IATP captures shape features well and achieves good performance.

Fig. 6.

Precision-recall comparisons on MPEG-7 database.

Fig. 7.

Precision-recall comparisons on Swedish leaf database.

4. Conclusion

This paper proposed an IATP-based shape descriptor for efficient image retrieval. Shape image retrieval experiments were conducted on the MPEG-7 Set B database and the Swedish leaf database to test the key parameters of IATP. Good shape retrieval performance was achieved by the multiscale IATP integrated with the included-angular histogram.

In the future, we are interested in further improving the performance of the proposed shape descriptor. Since the proposed method works with angular features derived from shape contour points, it is also suitable for describing partially observed object shapes, so it is possible to match shapes in a part-to-part way with IATP. Moreover, the multiscale method can be optimized using feature selection algorithms or granular computing to obtain higher shape retrieval performance.

Also, the proposed shape descriptor can be applied in several applications. For example, plant leaf recognition is very important for agricultural informatics and automatic plant recognition systems, and the leaf retrieval experiment on the Swedish leaf database demonstrated the potential of the proposed shape descriptor for plant recognition. Next, it is possible to implement shape-based plant leaf recognition.

Acknowledgement

This work was partially supported by the National Natural Science Foundation of China (No. 61503005), the National Key Research and Development Program (No. 2017YFB0802305), and the Universities Key Scientific Research Project of Henan (No. 19A520029, 17A520046, and 19B520017).

Biography

Guoqing Xu
https://orcid.org/0000-0002-0405-7691

He received the Ph.D. in Control Science and Control Engineering from the University of Science and Technology Beijing, China, in 2014. He is a lecturer in Nanyang Institute of Technology. His research interests include content-based image retrieval, automatic image annotation, machine learning, and pattern recognition.

Biography

Ke Xiao
https://orcid.org/0000-0002-8654-1339

He received his Ph.D. in circuits and systems from Beijing University of Posts and Telecommunications, Beijing, China, in 2008. He has been an associate professor at North China University of Technology, China, since 2012. He has long been engaged in research, development, and teaching in wireless communications, the Internet of Things, and embedded systems.

Biography

Chen Li
https://orcid.org/0000-0001-5983-5895

She received her Ph.D. in Control Science and Control Engineering from the University of Science and Technology Beijing, China, in 2013. She has been an associate professor at North China University of Technology, China, since 2017. She has long been engaged in research, development, and teaching in image processing, pattern recognition, and information hiding.

References

  • 1 D. Jiang, J. Kim, "Texture image retrieval using DTCWT-SVD and local binary pattern features," Journal of Information Processing Systems, vol. 13, no. 6, pp. 1628-1639, 2017.doi:[[[10.3745/JIPS.02.0077]]]
  • 2 D. Hu, W. Huang, J. Yang, Z. Zhu, "Common base triangle area representation method for shape retrieval," Acta Electronica Sinica, vol. 44, no. 5, pp. 1247-1253, 2016.doi:[[[10.3969/j.issn.0372-2112.2016.05.034]]]
  • 3 B. Wang, D. Brown, Y. Gao, J. La Salle, "MARCH: multiscale-arch-height description for mobile retrieval of leaf images," Information Sciences, vol. 302, pp. 132-148, 2015.doi:[[[10.1016/j.ins.2014.07.028]]]
  • 4 C. Xu, J. Liu, X. Tang, "2D shape matching by contour flexibility," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 180-186, 2009.doi:[[[10.1109/TPAMI.2008.199]]]
  • 5 X. Shu, L. Pan, X. J. Wu, "Multi-scale contour flexibility shape signature for Fourier descriptor," Journal of Visual Communication and Image Representation, vol. 26, pp. 161-167, 2015.doi:[[[10.1016/j.jvcir.2014.11.007]]]
  • 6 H. Ling, D. W. Jacobs, "Shape classification using the inner-distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 2, pp. 286-299, 2007.doi:[[[10.1109/TPAMI.2007.41]]]
  • 7 X. Shu, X. J. Wu, "A novel contour descriptor for 2D shape matching and its application to image retrieval," Image and Vision Computing, vol. 29, no. 4, pp. 286-294, 2011.doi:[[[10.1016/j.imavis.2010.11.001]]]
  • 8 A. El-ghazal, O. Basir, S. Belkasim, "Farthest point distance: a new shape signature for Fourier descriptors," Signal Processing: Image Communication, vol. 24, no. 7, pp. 572-586, 2009.doi:[[[10.1016/j.image.2009.04.001]]]
  • 9 N. Alajlan, I. El Rube, M. S. Kamel, G. Freeman, "Shape retrieval using triangle-area representation and dynamic space warping," Pattern Recognition, vol. 40, no. 7, pp. 1911-1920, 2007.doi:[[[10.1016/j.patcog.2006.12.005]]]
  • 10 R. X. Hu, W. Jia, H. Ling, Y. Zhao, J. Gui, "Angular pattern and binary angular pattern for shape retrieval," IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1118-1127, 2014.doi:[[[10.1109/TIP.2013.2286330]]]
  • 11 J. Wang, X. Bai, X. You, W. Liu, L. J. Latecki, "Shape matching and classification using height functions," Pattern Recognition Letters, vol. 33, no. 2, pp. 134-143, 2012.doi:[[[10.1016/j.patrec.2011.09.042]]]
  • 12 S. Guo, J. Zhao, X. Li, "Research on shape representation based on statistical features of centroid-contour distance," Journal of Electronics & Information Technology, vol. 37, no. 6, pp. 1365-1371, 2015.doi:[[[10.11999/JEIT140960]]]
  • 13 Y. Yang, D. Zheng, M. Han, "A shape matching method using spatial features of multi-scaled contours," Acta Automatica Sinica, vol. 41, no. 8, pp. 1405-1411, 2015.doi:[[[10.16383/j.aas.2015.c140896]]]
  • 14 B. Wang, J. Wu, H. Shu, L. Luo, "Shape description using sequency-ordered complex Hadamard transform," Optics Communications, vol. 284, no. 12, pp. 2726-2729, 2011.doi:[[[10.1016/j.optcom.2011.01.061]]]
  • 15 G. Xu, Z. Mu, Y. Xu, "Shape retrieval using multi-level included angle functions-based Fourier descriptor," Journal of Southeast University, vol. 30, no. 1, pp. 22-26, 2014.doi:[[[10.3969/j.issn.1003-7985.2014.01.005]]]