Gait Recognition Algorithm Based on Feature Fusion of GEI Dynamic Region and Gabor Wavelets

Jun Huang* , Xiuhui Wang** and Jun Wang**

Abstract

Abstract: The paper proposes a novel gait recognition algorithm based on feature fusion of the gait energy image (GEI) dynamic region and Gabor wavelets, which consists of four steps. First, gait contour images are extracted through object detection, binarization and morphological processing. Second, GEI features at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then the averaging method is adopted to fuse features of the GEI dynamic region with Gabor wavelet features on the feature layer, and the feature space dimension is reduced by an improved kernel principal component analysis (KPCA). Finally, the fused feature vectors are input into a multi-class support vector machine (SVM) to classify and recognize gaits. The primary contributions of the paper are: a novel gait recognition algorithm based on feature fusion of GEI and Gabor features is proposed; an improved KPCA method is used to reduce the feature matrix dimension; and an SVM is employed to identify the gait sequences. The experimental results show that the proposed algorithm yields a correct classification rate of over 90%, which indicates that the method distinguishes different human gaits better and achieves higher recognition accuracy than other existing algorithms.

Keywords: Gait Recognition , Feature Fusion , Gabor Wavelets , GEI , KPCA

1. Introduction

A gait recognition system usually includes motion detection, gait cycle detection, feature extraction and pattern recognition, among which feature extraction and pattern recognition are the most important basic operations. As a result, to improve the recognition performance of such systems, researchers have paid particular attention to these two operations and have proposed various methods. Currently, gait recognition is based on gait datasets such as the CASIA gait dataset and the USF gait dataset, which contain several gait sequences of selected subjects; a subject is the person whose gait data are sampled, and a gait sequence refers to a video of a subject. Gait feature extraction methods can be classified into model-based and model-free approaches [ 1 ]. In the model-based approaches, gait is represented by a structural model that describes the shape of human body parts or by a motion model that describes the motion of each body part. Although model-based methods have many advantages, it is hard to extract the implicit model from the gait sequences, so their performance is limited and their computational complexity is relatively high [ 2 ]. The other category is model-free approaches [ 3 ], which typically analyze the motion that subjects make during walking to extract gait features for recognition. Compared to the model-based methods, the model-free methods demonstrate better performance with lower computational complexity on most gait databases.

The average silhouette over one gait cycle, known as GEI, is widely used in recent model-free gait recognition algorithms because of its simplicity and effectiveness [ 4 ]. In the GEI approach, real and synthetic gait templates are generated to overcome the limited number of training templates and to improve the accuracy of gait recognition. Compared to other methods, the averaging operation makes GEI less sensitive to segmentation errors [ 4 ]. Since GEI is significantly affected by silhouette quality, the method in [ 5 ] improved the silhouette quality and recognition accuracy by using standard gait models as prior knowledge. Zhang et al. [ 6 ] proposed an active energy image (AEI) plus two-dimensional locality preserving projection (2DLPP) method, which first extracts active regions by calculating the difference of two adjacent silhouette images and then constructs an AEI by accumulating these active regions. The AEI+GEI method [ 7 ] makes the best of the complementarity of dynamic and static information and combines them for gait recognition.

In gait recognition, since linear methods fail to perform well under nonlinear factors such as illumination, view and walking speed, better solutions can be achieved by nonlinear methods such as kernel-based methods. Principal component analysis (PCA) is a classical method for dimensionality reduction and feature extraction. KPCA is a nonlinear generalization of PCA, in which the kernel trick is first employed to project the input image data into a high-dimensional feature space F, and then standard PCA is performed in F [ 8 ]. Yang and Qiu [ 9 ] integrated KPCA and SVM to improve the classification of gait patterns and increase the recognition rate, and Fazli et al. [ 10 ] utilized linear discriminant analysis (LDA) for feature reduction and SVM as an optimal discriminant classification technique. Inspired by sparse representation (SR), Qiao et al. [ 11 ] proposed a sparsity preserving projection (SPP) method for face recognition, and Wang et al. [ 12 ] proposed a kernel sparsity preserving projection (KSPP) method for gait recognition. A spatio-temporal gait recognition method based on the Radon transform was proposed in [ 13 ], in which the gait contour images are decomposed into two temporal templates and these two templates are subjected to the Radon transform for feature extraction. As for matrix-based subspace analysis, two-dimensional PCA (2DPCA) was proposed [ 14 ], which can achieve better recognition performance than PCA when the number of samples is small.

Due to the robustness of Gabor features against local distortions, Gabor wavelets have been successfully applied to gait recognition. Huang et al. [ 15 ] proposed a method for recognizing human identity by gait features based on Gabor wavelets and modified gait energy images, and Abdullah and El-Alfy [ 16 ] proposed a statistical gait recognition approach based on the analysis of overlapping Gabor-based regions. In order to overcome the high dimensionality of traditional Gabor features, a gait recognition method based on integrated Gabor features was proposed by Shao and Wang [ 17 ], in which the active-region Gabor feature images are integrated in a multi-scale and multi-angle way by means of mean fusion and differential binary encoding. Meanwhile, a method based on Gabor wavelets and (2D)2PCA (two-directional 2DPCA) was also proposed in [ 18 ] to reduce the high feature dimension.

In this paper, we present a novel gait recognition algorithm based on fusing the features of the GEI dynamic region with Gabor features. The gait features are first extracted from the GEI using Gabor wavelets. Then an improved KPCA method is adopted to reduce the feature matrix dimension. Finally, an SVM is employed to identify the gait sequences. Experimental results show that the proposed algorithm can greatly reduce the feature matrix dimension and improve recognition accuracy.

The rest of the paper is organized as follows. In Section 2, we present a short description of image preprocessing and cycle detection. Section 3 describes the proposed method based on feature fusion of the GEI dynamic region and Gabor wavelets. In Section 4, the experimental methods and results are discussed. Finally, conclusions and future work are briefly presented in Section 5.

2. Preprocessing and Cycle Detection

The results of image preprocessing directly affect gait feature extraction in gait recognition. Due to illumination, occlusion and other external factors, problems such as loss of information, image shadows, and improper binarization thresholds may occur. To solve these problems, it is necessary to preprocess the gait images. The paper follows these steps to extract the body silhouette.

Step 1 (Background reconstruction): Since the scene is almost static over the whole video sequence and the background corresponds to low-frequency information, the average of the pixels across the image sequence can be used to estimate the static background.
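The averaging in Step 1 can be sketched as follows (a minimal illustration; the function name and array layout are assumptions, not from the paper):

```python
import numpy as np

def estimate_background(frames):
    """Estimate a static background as the per-pixel mean over the sequence.

    frames: (N, H, W) grayscale frame stack -> (H, W) float background.
    """
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)
```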

Step 2 (Moving object detection): Background subtraction is used to detect the moving object in the image sequence, and the max-entropy threshold method is used for binarization. After the background is modeled, the body silhouette is extracted by background subtraction: the current gait frame is subtracted from the modeled background frame to obtain a difference image, which is then compared with a preset threshold. Pixels whose difference exceeds the threshold are taken as the body silhouette and the rest as background. After subtraction, the image is binarized to obtain the initial gait silhouette.
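A minimal sketch of Step 2, using a fixed threshold for illustration (the paper selects the threshold with the max-entropy method, which is omitted here):

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Binarize |frame - background| against a threshold.

    Returns a uint8 mask: 1 = foreground (body), 0 = background.
    The threshold value is illustrative only.
    """
    diff = np.abs(np.asarray(frame, dtype=np.float64)
                  - np.asarray(background, dtype=np.float64))
    return (diff > threshold).astype(np.uint8)
```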

Step 3 (Morphology post-processing): After binarization, the erosion and dilation operators of mathematical morphology are first used to remove noise and eliminate small cavities in the image, and then hole filling and connected-domain analysis are used to separate out the complete object.
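Step 3 can be sketched with standard morphological operators (a simplified stand-in for the paper's procedure; the 3×3 structuring element and largest-component rule are assumptions):

```python
import numpy as np
from scipy import ndimage

def clean_silhouette(mask):
    """Morphological opening (erosion then dilation) to remove speckle noise,
    closing to fill small cavities, then keep only the largest connected
    component as the body silhouette."""
    se = np.ones((3, 3))
    mask = ndimage.binary_opening(mask, structure=se)
    mask = ndimage.binary_closing(mask, structure=se)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(np.uint8)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(np.uint8)
```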

Step 4 (Normalization of the binary image): To reduce computational complexity, remove redundant information and eliminate the inconsistency of body silhouette size caused by changes of camera focal length, the paper normalizes the object silhouette images by scaling them to a uniform size.

For cycle detection, gait is periodic data: the width and height of the silhouette change periodically across a gait sequence. Several existing approaches can accurately detect the gait cycle. Our algorithm uses the approach in [ 19 ], which counts the foreground pixels in the lower half of the silhouettes over the whole sequence and smooths this signal with a median filter. The number of frames in one gait cycle is then calculated from the filtered signal.
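The cycle detector can be sketched as below. This is an interpretation of the approach in [19], not its exact implementation: the period is read off here from the first off-zero peak of the autocorrelation of the filtered lower-half pixel counts.

```python
import numpy as np

def gait_cycle_length(silhouettes, kernel=3):
    """Estimate the number of frames in one gait cycle.

    Counts foreground pixels in the lower half of each binary silhouette,
    median-filters the resulting signal, and returns the lag of the first
    local maximum of its autocorrelation as the cycle length.
    """
    sig = np.array([s[s.shape[0] // 2:].sum() for s in silhouettes],
                   dtype=np.float64)
    pad = kernel // 2
    padded = np.pad(sig, pad, mode='edge')
    sig = np.array([np.median(padded[i:i + kernel]) for i in range(len(sig))])
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode='full')[len(sig) - 1:]
    for lag in range(1, len(ac) - 1):
        if ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag
    return len(sig)
```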

3. The Proposed Algorithm

3.1 Features of the Body Dynamic Parts in GEI

GEI is the cumulative energy image of a complete, time-normalized gait cycle. The gray value of each pixel reflects the frequency with which the body occupies that pixel over the cycle. Given the sequence of gait silhouette images, GEI is defined as follows:

(1)
[TeX:] $$G ( x , y ) = \frac { 1 } { N } \sum _ { t = 1 } ^ { N } B _ { t } ( x , y )$$

where N is the number of frames in a gait cycle, t is the frame index, and x, y are the two-dimensional plane coordinates of the image. Fig. 1 shows GEIs at the 90° view within one gait cycle.
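Eq. (1) translates directly into code (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Eq. (1): G(x, y) = (1/N) * sum_t B_t(x, y), the per-pixel average
    of the N binary silhouettes of one gait cycle."""
    B = np.asarray(silhouettes, dtype=np.float64)  # (N, H, W), values in {0, 1}
    return B.mean(axis=0)
```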

Fig. 1.
GEI of three conditions at the 90° view: (a) normal, (b) bag, and (c) clothing.

However, from Fig. 1 we can see that the outermost outlines of the human body change when walking while carrying a backpack or other objects. Since the top part of the body changes very little while the lower part (such as the legs) changes obviously across the normal, bag-carrying and clothing conditions, the paper extracts the dynamic parts of the body as gait features. First, according to the anatomical model, in which the height of the pelvis is about 48% of the total body height, we divide the leg area out of the GEI. Then, according to the spacing between the feet, the leg area is further divided to obtain the dynamic region of the body. The size of the selected dynamic region in this paper is about 48×39 pixels. Fig. 2 shows that extracting the dynamic region of the body can effectively eliminate the impact of the bag-carrying and clothing conditions on gait recognition.
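The crop described above can be sketched as follows. This is only the vertical split at the 48% pelvis line; the further horizontal trimming by foot spacing is omitted, and the exact boundary handling is an assumption:

```python
import numpy as np

def dynamic_region(gei):
    """Keep the leg area of a height-normalized GEI: everything below the
    pelvis, taken at roughly 48% of body height per the anatomical model."""
    h = gei.shape[0]
    return gei[int(round(0.48 * h)):, :]
```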

Fig. 2.
Extraction of GEI dynamic region.
3.2 Improved Orientation Feature Extraction Based on Gabor Wavelets

The characteristics of Gabor wavelets (filters), especially their frequency and orientation representations, are similar to those of the human visual system, and they are particularly appropriate for human perceptual representation and discrimination. Gabor-filter-based features, directly extracted from gray-level images, have been successfully and widely applied in gait recognition. However, the dimension of the Gabor feature vector is very high when multiple scales and orientations are adopted. For example, if the size of an image is 64×64 and 3 scales and 8 orientations are selected, the dimension of the Gabor feature vector reaches 98,304 (64×64×3×8). It is difficult to compute with such high-dimensional feature vectors [ 20 ]. Therefore, in this paper, we propose an improved scheme for feature extraction from gait images. The improved method uses the 2D Gabor filter of [ 21 ], which is defined as:

(2)
[TeX:] $$g ( z ) = \exp \left\{ - \frac { 1 } { 2 } \left( \frac { \mu ^ { 2 } } { \sigma ^ { 2 }_{x} } + \frac { v ^ { 2 } } { \sigma ^ { 2 }_{y} } \right) \right\} \cos ( 2 \pi f \mu )$$

(3)
[TeX:] $$\mu = x \cos \theta + y \sin \theta , \quad \nu = - x \sin \theta + y \cos \theta$$

where f in (2) and θ in (3) represent the frequency and orientation of the filter, respectively, and σx and σy are the Gaussian envelope constants. The orientation feature information of the image can be derived by varying the parameter θ. The Gabor wavelet characteristic value of a gait image I is:

(4)
[TeX:] $$Gabor ( z ) = ( g * I ) ( z ) = \iint g ( z - z ^ { \prime } ) I ( z ^ { \prime } ) \, d x ^ { \prime } d y ^ { \prime }$$

where * denotes convolution, I(z) is the gray value at z = (x, y) in the gait image, g(z) is the Gabor filter coefficient for the given angle parameter, and Gabor(z) is the Gabor feature of the gait image after filtering. The amplitude characteristic G of the image at z = (x, y) is:

(5)
[TeX:] $$G ( z ) = | \operatorname { Gabor } ( z ) |$$
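The filter of Eqs. (2)-(3) can be built on a discrete grid as follows (a sketch; the kernel size and the choice of a grid centered at the origin are assumptions):

```python
import numpy as np

def gabor_kernel(size, f, theta, sigma_x, sigma_y):
    """Eqs. (2)-(3): real 2D Gabor filter with frequency f and orientation
    theta, sampled on a size x size grid centered at the origin."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    mu = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    nu = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-0.5 * (mu**2 / sigma_x**2 + nu**2 / sigma_y**2))
    return env * np.cos(2 * np.pi * f * mu)
```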

Our experimental database is the publicly available CASIA gait database provided by the Institute of Automation, Chinese Academy of Sciences [ 22 ], which is a large multi-view gait dataset containing 124 subjects. Therefore, extracting more feature details at multiple view angles is crucial for recognition. The Gabor filter can extract orientation features well and give these features good discrimination ability through an appropriate selection of orientation parameters. Since the same gait features differ only subtly across view angles, the paper extracts only orientation features of the GEI dynamic region and does not need to construct a Gabor filter bank with 3 scales and 8 orientations. Instead, the paper defines a Gabor filter with one scale and multiple orientations and selects the angle parameters based on the angle between the pedestrian and the camera. In feature selection, the filtered image Gabor(z) is expanded by column to form the orientation feature vector, which serves as the Gabor wavelet feature of the GEI dynamic region.
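The single-scale, multi-orientation extraction of Section 3.2 can be sketched as below: filter the GEI dynamic region at each orientation, take magnitudes per Eq. (5), and concatenate the column-wise expansions. The frequency, envelope and kernel-size parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_orientation_features(region, thetas, f=0.1, sigma=4.0, ksize=15):
    """Single-scale Gabor features of a GEI dynamic region at the given
    orientations; returns one concatenated feature vector."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    feats = []
    for theta in thetas:
        mu = x * np.cos(theta) + y * np.sin(theta)
        nu = -x * np.sin(theta) + y * np.cos(theta)
        g = np.exp(-0.5 * (mu**2 + nu**2) / sigma**2) * np.cos(2 * np.pi * f * mu)
        response = fftconvolve(region, g, mode='same')   # Eq. (4)
        feats.append(np.abs(response).flatten(order='F'))  # Eq. (5), by column
    return np.concatenate(feats)
```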

3.3 Feature Fusion by Averaging Method

The fusion features in this paper are selected to make better use of the local information captured by the Gabor wavelet, such as spatial frequency (scale), spatial location and orientation, with one scale and multiple orientations. Compared with the traditional method of multiple scales and multiple orientations, our method can greatly reduce the feature dimension, data redundancy and computational complexity. The first feature selected for fusion is the GEI dynamic region, which clearly represents the frequency and speed of change of the various body parts during body movement; the second is the multi-orientation Gabor features, which capture gait features at different views.

Tax et al. [ 23 ] concluded that combining by averaging is superior to combining by multiplying. To obtain a better recognition rate, the paper adopts the averaging method to fuse features of the GEI dynamic region with Gabor wavelet features on the feature layer. The proposed algorithm first obtains the gait silhouette images from the gait sequence, calculates the GEI and separates out the dynamic region. Second, the dynamic region is expanded by column into a one-dimensional vector as feature 1, and the Gabor wavelet features of the dynamic region are expanded by column into a one-dimensional vector as feature 2. Finally, according to the averaging rule, the weighted sum of feature 1 and feature 2 is taken as the final feature, which has the advantages of both dynamic and static features and compensates for the deficiency of any single feature.
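The averaging rule reduces to a weighted sum of two equal-length vectors. A minimal sketch, assuming equal weights (the paper does not state the exact weights, and the two vectors must have matching lengths, e.g. by using one Gabor orientation or tiling feature 1):

```python
import numpy as np

def fuse_features(f1, f2, w1=0.5, w2=0.5):
    """Feature-layer fusion by the averaging rule: weighted sum of the
    column-expanded GEI dynamic-region vector (f1) and the Gabor feature
    vector (f2). Weights here are illustrative."""
    f1 = np.asarray(f1, dtype=np.float64).ravel()
    f2 = np.asarray(f2, dtype=np.float64).ravel()
    assert f1.shape == f2.shape, "fused features must share a length"
    return w1 * f1 + w2 * f2
```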

3.4 Dimension Reductions Using the Improved KPCA

Dimension reduction is an important step for improving the time complexity of the overall framework. To improve the feature extraction capability of KPCA, the paper designs a new kernel function K(x, y) = mK1(x, y) + nK2(x, y), a combination of a Gaussian kernel function K1(x, y) and a polynomial kernel function K2(x, y), where m and n represent the contributions of the individual kernel functions to the fused kernel. Gait recognition is a complicated process: not only the whole body silhouette features but also local gait features have to be involved in parameter selection. Therefore, the paper combines the Gaussian kernel function, which has good local learning ability, with the polynomial kernel function, which has better global generalization ability. The main steps of the improved KPCA algorithm are described in Algorithm 1.
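A sketch of KPCA with the fused kernel follows. The kernel weights and parameters (gamma, degree, coef0) are illustrative assumptions; the centering and eigendecomposition are the standard KPCA steps rather than the paper's exact Algorithm 1:

```python
import numpy as np

def combined_kernel(X, Y, m=0.5, n=0.5, gamma=0.01, degree=2, coef0=1.0):
    """K(x, y) = m*K1(x, y) + n*K2(x, y): weighted sum of a Gaussian kernel
    K1 and a polynomial kernel K2 (parameter values are illustrative)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K1 = np.exp(-gamma * sq)
    K2 = (X @ Y.T + coef0) ** degree
    return m * K1 + n * K2

def kpca_fit_transform(X, n_components):
    """KPCA with the fused kernel: double-center the Gram matrix in feature
    space, eigendecompose, and project onto the leading components."""
    K = combined_kernel(X, X)
    N = K.shape[0]
    one = np.full((N, N), 1.0 / N)
    Kc = K - one @ K - K @ one + one @ K @ one   # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas
```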

Algorithm 1.
Improved KPCA algorithm
[The pseudocode listing of Algorithm 1 is provided as an image in the original publication.]

3.5 Gait Recognition Using SVM

SVM is a powerful machine learning technique based on statistical learning theory. The traditional SVM is a two-class classifier. There are two approaches to solving the n-class problem with SVMs: the one-against-one approach and the one-against-rest approach [ 24 ]. Since the dimension of the selected feature subset is relatively small, the paper adopts the one-against-one approach. The steps to construct the gait classifier are as follows:

Fig. 3.
SVM three classification diagram.
Fig. 4.
Some examples from CASIA dataset A (a), dataset B (b) and dataset C (c).

Step 1: Suppose there are m classes of human gait to be classified, labeled S1, S2, …, Sm. During training, since an SVM classifier is constructed between every pair of gait classes, m classes of gait samples require m(m–1)/2 SVM classifiers fi (i = 1, 2, …, m(m–1)/2).

Step 2: During testing, when classifying an unknown gait sample S, each pairwise classifier casts a vote, and the class that receives the most votes is taken as the class of the unknown gait.

Fig. 3 illustrates the three-class case. "1 vs. 2" denotes the SVM constructed from the samples of classes 1 and 2; the labels on its two sides are its two possible outputs. When both the "1 vs. 2" and "1 vs. 3" classifiers output class 1, the sample is classified as class 1; when the three classifiers disagree, each classifier's output adds one vote to the corresponding class and the totals are compared. In Fig. 3, class 1 receives the most votes, so the sample to be classified is assigned to class 1.
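The voting in Steps 1-2 can be sketched as follows. The `classifiers` interface is hypothetical, standing in for trained pairwise LIBSVM models:

```python
def one_vs_one_predict(sample, classifiers):
    """Majority voting over the m(m-1)/2 pairwise SVMs.

    classifiers: dict mapping a class pair (i, j) to a decision function
    that returns either i or j for the sample. The class with the most
    votes wins (ties broken arbitrarily in this sketch).
    """
    votes = {}
    for (i, j), decide in classifiers.items():
        winner = decide(sample)
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```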

4. Experiments and Analysis

In this section, the effectiveness of the proposed algorithm is evaluated by experiments on the publicly available CASIA gait database [ 22 ] (Fig. 4). In CASIA, there are 10 sequences per subject: 6 sequences of normal walking (normal), 2 of walking with a bag (bag) and 2 of walking in a coat (clothing). The experiments use the LIBSVM tool, a simple, easy-to-use, fast and efficient SVM pattern recognition package. When using LIBSVM, we set the SVM type to C_SVC and the kernel function to the radial basis function. The recognition rate of the algorithm is evaluated by cross-validation. To compare the performance of different gait recognition algorithms, the correct classification rate (CCR) is used as the evaluation index [ 25 ].

In the first experiment, the algorithm is tested on the sequences of 124 subjects from 11 views. For each of the 124 subjects, at every angle we randomly select three of the normal gait sequences of CASIA as the training set and the remaining three as the testing set. The experimental results are shown in Table 1. In Table 2, we use the same gait database of 124 subjects at 90° and compare the recognition rates of single-feature methods with the feature fusion method by cross-validation.

Table 1.
Recognition rate of 9 different algorithms
Table 2.
Recognition rate of single and fusion feature algorithms

Another experiment validates the robustness of the algorithm under the clothing and bag-carrying conditions. This experiment selects the first three normal gait sequences, the first bag-carrying sequence and the first coat-wearing sequence at 90° as the training set; the testing set is the remaining sequences of the normal, clothing and bag groups. Table 3 shows the recognition rates of six other algorithms and our proposed algorithm.

Table 3.
Comparison of different algorithms at three conditions

In the first experiment, the average recognition rates of eight other algorithms are compared with our proposed algorithm at different view angles under the normal condition. As seen from Table 1, the mean recognition rates of KPCA and PCA are almost equally poor. In the KSPP and PCA+SPP algorithms, since KSPP improves the neighborhood of the three kinds of gait, and SPP preserves the local information of the original data well during dimension reduction while preserving the maximum sparsity of the coefficient matrix, the poor recognition rates of KPCA and PCA at some angles (such as 18°, 36°, and 144°) are improved. When AEI is selected as the gait feature, the average recognition rate of AEI+2DLPP is inferior to our proposed algorithm, which shows that extracting the dynamic region of the GEI as the gait feature is effective. The result of the fusion of AEI and GEI in Table 1 is greatly superior to AEI alone, which shows that feature fusion can improve gait recognition performance. Furthermore, Table 1 shows that our proposed algorithm achieves more than 90% recognition rate at 10 of the 11 view angles, which indicates that the method based on fusing GEI dynamic region features with Gabor features discriminates different human gaits better and achieves a higher recognition rate than the other algorithms on this small-sample gait database. As seen from Table 2, the recognition rates of the feature fusion algorithms are about 10% higher than those of the single-feature methods, and the proposed algorithm is the best of all the feature fusion algorithms. In addition, the data in Tables 1 and 2 show that the average recognition rates based on the SVM classifier are 94.35% and 93.29%, which illustrates the validity of SVM in classifying gait sequences.

From the second experiment, we can see that the average recognition rate of the proposed algorithm is better than the other methods even under the clothing and bag-carrying conditions, which shows that the proposed algorithm can eliminate the influence of clothing and bag carrying and has good robustness. In addition, although the classification rates of PCA+SPP, KSPP and the proposed method are very similar, the former two methods both use linear programming techniques in the sparse reconstruction process, so their training time is much longer than that of the proposed method.

5. Conclusion

In this paper, a novel gait recognition algorithm based on Gabor wavelets and an improved KPCA is proposed to extract gait features. Compared to existing gait recognition methods, the proposed algorithm is demonstrated to have lower complexity, less training time, robustness and higher classification rates. The GEI used by the algorithm preserves important information such as walking frequency, contour and phase, while the Gabor features capture the key gait features at different views. Meanwhile, the improved KPCA algorithm significantly reduces the feature space dimension to save training time, and the one-against-one SVM classifies gait sequences efficiently. Experimental results show that the proposed method achieves higher recognition accuracy with less computational time than eight other existing approaches. The robustness of the proposed algorithm is demonstrated by testing on various viewing angles and walking conditions.

One limitation of the proposed algorithm is that the improved KPCA fuses Gaussian and polynomial kernel functions, which may make the gait recognition rate inferior to common KPCA methods at some view angles. We will study kernel functions that are more robust across view angles in future work.

Biography

Jun Huang
https://orcid.org/0000-0003-0947-1936

He received his B.S. and M.S. degrees in computer applications from Hunan University and Chang Jiang University in 1982 and 1992, respectively. He is currently an associate professor of computer engineering at the College of Modern Science and Technology, China Jiliang University. His research interests include network communication, digital image processing and multimedia processing.

Biography

Xiuhui Wang
https://orcid.org/0000-0003-1773-9760

He received his Ph.D. in information and computing science from Zhejiang University in 2007. He is currently an associate professor of computer engineering at the College of Information Engineering, China Jiliang University. His research interests focus on computer graphics, computer vision, and computer networks.

Biography

Jun Wang
https://orcid.org/0000-0002-1569-723X

He received his bachelor's degree from South China Agricultural University, China, in 2014. He is currently a master's student at the College of Information Engineering, China Jiliang University. His research focuses on pattern recognition and computer vision.

References

  • 1 N. V. Boulgouris, D. Hatzinakos, K. N. Plataniotis, "Gait recognition: a challenging signal processing technology for biometric identification," IEEE Signal Processing Magazine, 2005, vol. 22, no. 6, pp. 78-90. doi:[[[10.1109/msp.2005.1550191]]]
  • 2 I. Bouchrika, M. S. Nixon, "Model-based feature extraction for gait analysis and recognition," in Computer Vision/Computer Graphics Collaboration Techniques and Applications. Heidelberg: Springer, 2007, pp. 150-160. doi:[[[10.1007/978-3-540-71457-6_14]]]
  • 3 C. Wang, J. Zhang, L. Wang, J. Pu, X. Yuan, "Human identification using temporal information preserving gait template," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, vol. 34, no. 11, pp. 2164-2176. doi:[[[10.1109/TPAMI.2011.260]]]
  • 4 J. Han, B. Bhanu, "Individual recognition using gait energy image," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, vol. 28, no. 2, pp. 316-322. doi:[[[10.1109/TPAMI.2006.38]]]
  • 5 Y. Makihara, T. Tanoue, D. Muramatsu, Y. Yagi, S. Mori, Y. Utsumi, M. Iwamura, K. Kise, "Individuality-preserving silhouette extraction for gait recognition," IPSJ Transactions on Computer Vision and Applications, 2015, vol. 7, pp. 74-78. doi:[[[10.2197/ipsjtcva.7.74]]]
  • 6 E. Zhang, Y. Zhao, W. Xiong, "Active energy image plus 2DLPP for gait recognition," Signal Processing, 2010, vol. 90, no. 7, pp. 2295-2302. doi:[[[10.1016/j.sigpro.2010.01.024]]]
  • 7 Y. Li, K. Li, "Gait recognition based on dual view and multiple feature information fusion," CAAI Transactions on Intelligent Systems, 2013, vol. 8, no. 1, pp. 74-79. custom:[[[-]]]
  • 8 J. Wu, J. Wang, L. Liu, "Kernel-based method for automated walking patterns recognition using kinematics data," in Advances in Natural Computation. Heidelberg: Springer, pp. 560-569. doi:[[[10.1007/11881223_69]]]
  • 9 Q. Yang, K. Qiu, "Gait recognition based on active energy image and parameter-adaptive kernel PCA," in Proceedings of the 2011 6th IEEE Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 2011, pp. 156-159. doi:[[[10.1109/ITAIC.2011.6030174]]]
  • 10 S. Fazli, H. Askarifar, M. J. Tavassoli, "Gait recognition using SVM and LDA," in Proceedings of the International Conference on Advances in Computing, Control, and Telecommunication Technologies, Jakarta, Indonesia, 2011, pp. 106-109. custom:[[[https://www.researchgate.net/publication/252069015_Gait_Recognition_using_SVM_and_LDA]]]
  • 11 L. Qiao, S. Chen, X. Tan, "Sparsity preserving projections with applications to face recognition," Pattern Recognition, 2010, vol. 43, no. 1, pp. 331-341. doi:[[[10.1016/j.patcog.2009.05.005]]]
  • 12 K. Wang, T. Yan, Z. Lu, M. Tang, "Kernel sparsity preserving projections and its application to gait recognition," Journal of Image and Graphics, 2013, vol. 18, no. 3, pp. 257-263. custom:[[[-]]]
  • 13 R. Atta, S. Shaheen, M. Ghanbari, "Human identification based on temporal lifting using 5/3 wavelet filters and radon transform," Pattern Recognition, 2017, vol. 69, pp. 213-224. doi:[[[10.1016/j.patcog.2017.04.015]]]
  • 14 J. Yang, D. Zhang, A. F. Frangi, J. Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, vol. 26, no. 1, pp. 131-137. doi:[[[10.1109/tpami.2004.1261097]]]
  • 15 D. Y. Huang, T. W. Lin, W. C. Hu, C. H. Cheng, "Gait recognition based on Gabor wavelets and modified gait energy image for human identification," Journal of Electronic Imaging, 2013, vol. 22, no. 4, article no. 043039. doi:[[[10.1117/1.JEI.22.4.043039]]]
  • 16 B. A. Abdullah, E. S. M. El-Alfy, "Statistical Gabor-based gait recognition using region-level analysis," in Proceedings of the 2015 IEEE European Modelling Symposium (EMS), Madrid, Spain, 2015, pp. 137-141. doi:[[[10.1109/EMS.2015.30]]]
  • 17 H. Shao, Y. Wang, "Gait recognition method based on integrated Gabor feature," Journal of Electronic Measurement and Instrumentation, 2017, vol. 31, no. 4, pp. 573-579. doi:[[[10.13382/j.jemi.2017.04.012]]]
  • 18 X. Wang, J. Wang, K. Yan, "Gait recognition based on Gabor wavelets and (2D)2PCA," Multimedia Tools and Applications, 2018, vol. 77, no. 10, pp. 12545-12561. doi:[[[10.1007/s11042-017-4903-7]]]
  • 19 S. Sarkar, P. J. Phillips, Z. Liu, I. R. Vega, P. Grother, K. W. Bowyer, "The humanID gait challenge problem: data sets, performance, and analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, vol. 27, no. 2, pp. 162-177. doi:[[[10.1109/TPAMI.2005.39]]]
  • 20 D. H. Liu, K. M. Lam, L. S. Shen, "Optimal sampling of Gabor features for face recognition," Pattern Recognition Letters, 2004, vol. 25, no. 2, pp. 267-276. doi:[[[10.1016/j.patrec.2003.10.007]]]
  • 21 T. S. Lee, "Image representation using 2D Gabor wavelets," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, vol. 18, no. 10, pp. 959-971. doi:[[[10.1109/34.541406]]]
  • 22 Center for Biometrics and Security Research, 2005 (Online). Available: http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp
  • 23 D. M. Tax, M. van Breukelen, R. P. Duin, J. Kittler, "Combining multiple classifiers by averaging or by multiplying?," Pattern Recognition, 2000, vol. 33, no. 9, pp. 1475-1485. doi:[[[10.1016/s0031-3203(99)00138-7]]]
  • 24 K. Yan, Z. Ji, W. Shen, "Online fault detection methods for chillers combining extended Kalman filter and recursive one-class SVM," Neurocomputing, 2017, vol. 228, pp. 205-212. doi:[[[10.1016/j.neucom.2016.09.076]]]
  • 25 D. K. Vishwakarma, K. Singh, "Human activity recognition based on spatial distribution of gradients at sublevels of average energy silhouette images," IEEE Transactions on Cognitive and Developmental Systems, 2017, vol. 9, no. 4, pp. 316-327. doi:[[[10.1109/TCDS.2016.2577044]]]
  • 26 P. Liu, "Gait recognition method based on Poisson distribution on Gabor wavelet," Computer Engineering and Applications, 2015, vol. 51(Suppl), pp. 1-5. custom:[[[-]]]
  • 27 Y. Ji, H. Zhao, X. Zhang, "A feature fusion based gait recognition algorithm," Journal of Electrical and Electronic Education, 2009, vol. 3, no. 5, pp. 67-70. custom:[[[-]]]
  • 28 H. Chen, Z. Cao, "Front-view gait recognition based on the fusion of static and dynamic features," Opto-Electronic Engineering, 2013, vol. 41, pp. 83-88. doi:[[[10.3969/j.issn.1003-501X.2013.11.014]]]

Table 1.

Recognition rate (%) of 9 different algorithms

Algorithm / Angle: 0° 18° 36° 54° 72° 90° 108° 126° 144° 162° 180° Mean
PCA [ 4 ] 96.24 67.47 61.56 79.30 94.09 93.28 92.74 89.52 83.33 93.55 96.24 86.12
KPCA [ 6 ] 95.97 67.74 60.75 78.76 94.35 92.20 93.01 88.98 82.80 93.28 96.24 85.83
KSPP [ 12 ] 95.16 79.84 70.70 84.68 96.24 96.77 94.09 94.35 92.74 93.55 95.16 90.30
PCA+SPP [ 11 ] 93.55 77.15 68.28 81.99 95.43 95.16 92.47 93.01 93.28 94.35 94.35 89.00
2DPCA [ 14 ] 79.50 79.17 81.50 80.83 81.96 83.63 83.15 80.67 81.02 79.11 78.21 80.80
2DLPP [ 6 ] 72.35 71.47 78.72 77.34 80.51 86.26 84.31 83.79 83.11 78.79 76.99 79.14
Radon+2DPCA [ 14 ] 77.33 78.84 82.17 82.53 81.97 83.17 82.71 81.69 82.11 80.53 76.99 80.91
AEI+GEI [ 7 ] 93.00 92.50 91.15 95.83 90.00 95.28 90.00 85.83 90.83 94.17 96.67 93.29
KPCA+SVM (proposed algorithm) 94.35 95.70 95.16 88.71 96.77 94.35 94.55 94.62 94.89 98.92 97.58 94.35

Table 2.

Recognition rate of single and fusion feature algorithms
Algorithm Recognition rate (%)
Based on GEI feature [ 5 ] 85.83
Based on Poisson equation and Gabor [ 26 ] 86.22
Fusion of gait silhouette feature and angle feature [ 27 ] 93.75
Based on gait cycle feature and unified Hu’s feature [ 28 ] 91.11
Based on angle feature of lower limb joints and unified Hu’s feature [ 28 ] 90.67
Fusion of GEI and Gabor (proposed algorithm) 94.35

Table 3.

Comparison of different algorithms at three conditions
Algorithm / Test set: Normal Bag Coat All test sets
PCA [ 4 ] 93.28 86.29 88.71 90.81
LDA [ 10 ] 89.77 85.04 83.28 90.32
KPCA [ 8 ] 92.20 85.48 88.71 85.83
KSPP [ 12 ] 96.77 87.90 90.32 90.55
PCA+SPP [ 11 ] 95.16 84.68 89.52 91.94
2DPCA [ 14 ] 83.63 84.13 92.07 91.25
Gabor+KPCA (proposed algorithm) 94.35 85.48 95.97 93.29