
Sumana Kundu* and Goutam Sarker*

A Multi-Level Integrator with Programming Based Boosting for Person Authentication Using Different Biometrics

Abstract: A multiple classification system based on a new boosting technique is proposed, utilizing different biometric traits, namely color face, iris and eye, fingerprints of the right and left hands, handwriting, palm-print, gait (silhouettes) and wrist-vein, for person authentication. The images of the different biometric traits were taken from standard databases such as FEI, UTIRIS, CASIA, IAM and CIE. The system comprises three different super-classifiers which individually perform person identification. The individual classifiers corresponding to each super-classifier in turn identify different biometric features, and their conclusions are integrated together in their respective super-classifiers. The decisions from the individual super-classifiers are integrated together through a mega-super-classifier to reach the final conclusion using programming based boosting. The mega-super-classifier system, combining different super-classifiers in a compact form, is more reliable than a single classifier or even a single super-classifier system. The system has been evaluated with the accuracy, precision, recall and F-score metrics through the holdout method and a confusion matrix for each of the single classifiers, the super-classifiers and finally the mega-super-classifier. The different performance evaluations are appreciable, and the learning and recognition times are fairly reasonable, thereby making the system efficient and effective.

Keywords: Accuracy, Back Propagation Learning, Biometrics, HBC, F-score, Malsburg Learning, Mega-Super-Classifier, MOCA, Multiple Classification System, OCA, Person Identification, Precision, Recall, RBFN, SOM, Super-Classifier

1. Introduction

A trustworthy and positive recognition of a human being can be achieved using biometric information. The utilization of biometrics is more secure and makes access control more easily implementable. Biometric systems which incorporate only one biometric have limitations such as lack of individuality, spoof attacks, high error rates, non-universality and noise in the sensed data. To overcome some of these limitations, several multimodal biometric systems have already been developed for person identification. In multimodal systems, different biometrics can be identified by a single system or by separate systems independently, and their conclusions can be combined together.

Conventional multimodal systems are built using a few traditional fusion techniques involving two or three biometric features for person recognition. Two or three biometrics may be inadequate to build highly secure systems, as for these systems it is difficult to prevent spoofing. Also, in an accidental situation one or a couple of biometric attributes of a person may be harmed or lost; in such circumstances the traditional multimodal systems built on a few biometric attributes may not be suitable for that person's recognition. Thus we have designed and developed a system which uses many biometric traits for person authentication; this system is appropriate for such concerns, as all the required biometric features are unlikely to be damaged, and the intact features can compensate for the damaged ones. This multimodal system with various biometric traits also prevents spoofing, since it would be troublesome for a masquerader to spoof numerous biometric features of a genuine user simultaneously. Another benefit of utilizing multi-modality is that it can deal with the problem of data distortion. If one of the biometric samples is not acceptable due to bad quality, the other biometrics can compensate for it. For example, if a palm print has been scarred and the scanner is not able to accept the distorted palm print, having another biometric such as handwriting can compensate for this.

So, here we present a multi-level multiple classification system where each classifier operates on different input features for efficient person identification. There are nine individual classifiers, for color-face, color-iris, color-eye, right and left hand fingerprint, handwriting, palm-print, gait (silhouettes) and wrist-vein identification. The ANN models adopted to learn the different biometric patterns are the radial basis function network (RBFN) with the optimal clustering algorithm (OCA) for training the basis units and back propagation (BP) learning for classification, the RBFN with the modified optimal clustering algorithm (MOCA) and BP learning, the RBFN with self-organizing mapping (SOM) and BP learning, the RBFN with heuristic based clustering (HBC) and BP learning, and a combination of Malsburg learning and a back propagation network (BPN). Finally, three super-classifiers and a mega-super-classifier provide the appropriate identification of the person depending on programming based boosting logic, incorporating the results of the single classifiers/super-classifiers.

The contributions of the current research to the state-of-the-art are: (1) A new idea of programming based boosting is presented for the different super-classifications as well as the mega-super-classification with different biometric features for authentic person recognition. Hence, a better result or conclusion can be achieved than with traditional single or multimodal classifiers with one or two biometrics, or with multimodal systems using the traditional bagging or boosting methods. (2) A modified RBFN with different clustering algorithms (e.g., OCA, MOCA, HBC, SOM) for basis function computation in the basis (hidden) layer. A modified BPN at the output layer of the RBFN is used to handle complex pattern identification (e.g., of different biometric traits like face, iris, eye, fingerprint, handwriting, palm-print, gait and wrist-vein). (3) To optimize the input units in the input layer and construct an optimal RBFN, a mode based compression technique is used during the preprocessing of the variety of input features. (4) A variety of biometrics, i.e., different types of biometrics, are used in the different classifiers. All these biometrics are of different expressions/qualities/instances as well as of different poses/angles/left and right hands or eyes.

2. Related Researches

Many researchers have already developed several unimodal and multimodal biometric based systems. A face identification system using Coiflet wavelets, PCA and a neural network was described in [1]. In another facial identification system [2], the training facial images were learned incrementally; Gabor features and Zernike moments were applied for feature extraction in this system. An off-angle iris identification system was developed in [3]. Here, a combination of the LSEF and GC techniques was utilized for iris segmentation, and the NeuWave network was used for feature extraction. Another iris identification system [4] was developed based on the imaginary coefficients of the Morlet wavelet transform. In an EBFNN based fingerprint recognition application [5], feature extraction was done by 6-layer WT decomposition on binary fingerprint images. The extracted features were taken as input to the EBFNN to train the classifier and perform fingerprint identification. A PCA symmetric sub-space model of a neural network algorithm (SSA) for fingerprint identification was presented in [6]. A feed forward neural network based handwriting identification system which utilizes scanned handwriting images was described in [7]. A writer recognition application which applies RBF in offline mode was described in [8]. In a palm-print recognition system [9], a feature extraction algorithm was defined based on 2D-DCT statistical features, which exploited the local spatial variations in a palm-print image. A palm print identification system [10] was developed utilizing extracted ridge features such as the orientation field and region mask; minutiae extraction and cascade filtering were done for matching. In a gait recognition system [11], at training time the gait dynamics underlying different individuals' gaits from different view angles were locally approximated by RBFNs; in the recognition phase, a test gait pattern was compared with the set of estimators. In a BGM algorithm based hand-vein authentication system [12], vein extraction was done using a maximum curvature algorithm. Near-infrared images of dorsal hand veins were used in another hand-vein identification system [13], and matching was performed on key points extracted from the dorsal hand-vein images by the scale-invariant feature transform.

A multimodal system was presented in [14], where score level fusion was performed on face and fingerprints; fingerprint recognition was done by minutiae matching and Gabor filters, and face recognition with PCA. In another multimodal system [15], the face image was represented by an ALFLP feature vector and the gait image by an AHL feature vector; these two feature vectors were integrated at the feature level. A feature-level fusion based recognition technique using fingerprint and iris was developed in [16], utilizing a traditional RBF neural network. Here iris feature extraction was done by the block sum method and fingerprint feature extraction by the Haar wavelet method. Iris and face features were merged and applied to a modified PUM in [17] for person identification. A multimodal identification system was proposed in [18], which utilizes the Contourlet transform to analyze palm print and palm vein features. In this algorithm, the local minutiae and a global feature were obtained from the palm print and palm vein images and stored as a compact code. After ROI extraction from the input images, the 2D image spectrum was divided into fine sub-components utilizing an iterated directional filter bank structure. Then the Euclidean distance algorithm was applied for feature matching. Palm-print and finger knuckle print (FKP) were used to build another multimodal system [19]. Here the local convex direction map was extracted from the FKP image. Then, the local features of the enhanced FKP were extracted using the scale invariant feature transform (SIFT), speeded up robust features (SURF) and a frequency feature. For the same person, the matching scores of the two biometrics were merged for the final identification. A multimodal biometric system was developed [20] by combining iris, face and voice at the match score level utilizing the simple sum rule. In this system, the matching scores for iris, face and voice were generated by computing the Hamming distance and the Euclidean distance, respectively, for template matching.

The above-mentioned multimodal systems are constructed depending on a few conventional fusion techniques and use two or three biometric features for recognition, which may not be sufficient to prevent spoof attacks. Some of the above-mentioned unimodal systems also give low accuracy due to poor quality images. So, to overcome these problems, we present a multi-classifier based system which can deal with 9 different biometric features. The accuracy of the system is also quite high, with low identification time.

3. System Overview and Approach

3.1. Preprocessing of Different Biometric Features

Nine different biometric features, namely color-face, color-iris, color-eye, right and left hand fingerprints, handwriting, palm-print, gait and wrist-vein, were used in the different classifiers individually. The advantage of keeping color patterns is that the classifiers take the color of the patterns as an additional feature. Thus the classifiers can distinguish very similar patterns depending on color. For example, in the case of color-face patterns, the classifier is able to recognize very similar faces as different faces depending on skin color. All the biometric patterns of the training and test datasets must be preprocessed before learning and identification.

There were 8 different preprocessing steps, applicable across the 8 different biometric features other than the iris pattern, although not all 8 steps were required for every one of these 8 biometric features. Each step is described below for the different biometric patterns.

(i) RGB to gray scale image conversion: This step was applicable only to handwriting patterns. Here all the handwriting patterns were converted from RGB to gray-scale.

(ii) Removal of noise: The patterns of the training and test datasets may be noisy/blurred. In this step, noise was removed from the different noisy biometric patterns.

(iii) De-blurring the patterns: In this process, the Lucy-Richardson method was used for the color-face, color-eye, palm-print and wrist-vein patterns, and the Wiener filter was utilized for the handwriting and right and left hand fingerprint patterns, to de-blur the blurred patterns and obtain sharp ones (a deconvolution sketch is given after this list of preprocessing steps).

(iv) Background elimination: Next, the backgrounds of the face, right and left hand fingerprint, and handwriting patterns were removed. A Gaussian model was utilized to remove the background of the facial patterns: modeling the color at each pixel with a sum of two Gaussians results in a simple way of separating the background.

(v) Image compression: The patterns were compressed by replacing each block of pixels with the mode value of the pixel intensities, where the mode is the intensity value that occurs most frequently in the block. The steps for image compression are as follows (a code sketch is given after this list of preprocessing steps):

- Compute the number of pixels of the pattern.

- Calculate the block size in such a way that the compressed pattern is of a particular size, say x×y pixels.

- In each block, calculate the mode value of the pixel intensities.

- Replace each block by its corresponding mode intensity to obtain the compressed pattern of size x×y pixels.

(vi) Image normalization: In this step, all the patterns of a particular biometric feature are normalized to equal, lower dimensions.

(vii) Conversion of gray-scale patterns into binary patterns: Right and left hand fingerprint, handwriting and palm-print patterns were converted into corresponding binary patterns.

(viii) Conversion of RGB/binary patterns into 1D matrices: In this last step, all the different biometric patterns, such as color-face, color-iris, color-eye, wrist-vein, right and left hand fingerprint, handwriting, palm-print and gait patterns, were converted into corresponding 1D matrix files. These sets were the input to the clustering algorithms of the corresponding individual classifiers.
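Below, as referenced in steps (iii) and (v), are two minimal Python sketches of these preprocessing operations. The first illustrates the de-blurring step using the scikit-image implementations of the two named deconvolution methods; the point-spread function (a 5×5 uniform kernel), the iteration count and the balance parameter are illustrative assumptions, since the paper does not specify them.

```python
import numpy as np
from skimage import color, data, restoration

image = color.rgb2gray(data.astronaut())   # stand-in for a biometric pattern
psf = np.ones((5, 5)) / 25                 # assumed 5x5 uniform blur kernel

# Lucy-Richardson deconvolution (face, eye, palm-print, wrist-vein patterns)
deblurred_lr = restoration.richardson_lucy(image, psf, 30)

# Wiener deconvolution (handwriting and fingerprint patterns); the third
# argument trades noise suppression against sharpness.
deblurred_w = restoration.wiener(image, psf, 0.1)
```

The second sketches the mode based block compression of step (v), assuming a gray-scale pattern stored as a 2D NumPy array whose dimensions are divisible by the target size x×y; the function name is our own.

```python
import numpy as np

def mode_compress(pattern, x, y):
    """Compress `pattern` to x-by-y pixels, one mode intensity per block."""
    bh, bw = pattern.shape[0] // x, pattern.shape[1] // y   # block size
    out = np.empty((x, y), dtype=pattern.dtype)
    for i in range(x):
        for j in range(y):
            block = pattern[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            values, counts = np.unique(block, return_counts=True)
            out[i, j] = values[np.argmax(counts)]  # most frequent intensity
    return out

compressed = mode_compress(np.random.randint(0, 256, (120, 120)), 30, 30)
```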

Preprocessing to extract Color-Iris patterns

The necessary steps to extract the color-iris patterns from color-eye patterns are given below:

(i) Compression of eye images: At the first step, all color-eye patterns were compressed.

(ii) Iris boundary localization: For the detection of the iris boundary, the radial-suppression edge detection algorithm [21], which is similar to the Canny edge detection algorithm, was utilized. A non-separable wavelet transform was used in this algorithm to extract the wavelet transform modulus of the iris image, and then radial non-maxima suppression was applied to keep the annular edges and concurrently eliminate the radial edges. The final binary edge map was then deduced by eliminating the isolated edges using edge thresholding.

Then, for the detection of the final iris boundaries, the circular Hough transform [22] was utilized, which also deduces their radii and centers. The Hough transform [22] is defined, as in (1), over the circular boundary for a set of recovered edge points (xj, yj), with j = 1, ..., n.

(1)
$$H(x_c, y_c, r) = \sum_{j=1}^{n} h(x_j, y_j, x_c, y_c, r)$$

where,

(2)
$$h(x_j, y_j, x_c, y_c, r) = \begin{cases} 1, & \text{if } g(x_j, y_j, x_c, y_c, r) = 0 \\ 0, & \text{otherwise} \end{cases}$$

with

(3)
$$g(x_j, y_j, x_c, y_c, r) = (x_j - x_c)^2 + (y_j - y_c)^2 - r^2$$

For each edge point (xj, yj), g(xj, yj, xc, yc, r) = 0 for every parameter triplet (xc, yc, r) that implies a circle through that point. The triplet maximizing H corresponds to the largest number of edge points and represents the contour of interest.
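As an illustration, the following is a minimal accumulator-based sketch of Eqs. (1)-(3): every edge point votes for all circle parameters (xc, yc, r) whose circle passes through it, and the triplet collecting the most votes is returned. The edge points, search ranges and tolerance below are illustrative assumptions; in practice the edge points come from the radial-suppression edge map.

```python
import numpy as np

def hough_circle(edge_points, xc_range, yc_range, r_range, tol=1.0):
    H = np.zeros((len(xc_range), len(yc_range), len(r_range)), dtype=int)
    for (xj, yj) in edge_points:
        for a, xc in enumerate(xc_range):
            for b, yc in enumerate(yc_range):
                for c, r in enumerate(r_range):
                    g = (xj - xc) ** 2 + (yj - yc) ** 2 - r ** 2  # Eq. (3)
                    if abs(g) <= tol:       # h(...) = 1 when g is ~0, Eq. (2)
                        H[a, b, c] += 1     # accumulate votes, Eq. (1)
    a, b, c = np.unravel_index(np.argmax(H), H.shape)
    return xc_range[a], yc_range[b], r_range[c]

# Twenty synthetic edge points on a circle of center (50, 50) and radius 20:
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = [(50 + 20 * np.cos(t), 50 + 20 * np.sin(t)) for t in theta]
print(hough_circle(pts, range(45, 56), range(45, 56), range(15, 26)))  # (50, 50, 20)
```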

(iii) Extraction of the iris: Then, the excess portions other than the iris, for example the eyelids, eyelashes and eyebrows, were eliminated to extract the iris.

(iv) Conversion of RGB images into 1D matrices: Finally, the RGB-iris patterns were converted into 1D matrix files. This set was the input to the SOM network of the SOM based modified RBFN classifier.

3.2. Theoretical Approach of the Present System

Five different classifiers, namely OCA based RBFN, modified OCA based RBFN, SOM based RBFN, a combination of Malsburg learning and BPN, and HBC based RBFN, were used to build this multi-classification system. Among these 5 classifiers, the first 4 were used twice in this multimodal system, so that overall 9 different biometric features were identified by 9 classifiers. Every biometric feature was trained and tested with the 5 different classifiers, and for each biometric the classifier giving the best performance in terms of accuracy was finally selected for this system. The system comprises three super-classifiers. In the first super-classifier, color-face, color-iris and color-eye patterns were identified separately using modified OCA based RBFN, SOM based RBFN and the Malsburg learning and BPN combination, respectively, and then super-classifier1 concludes the person's identification based on the programming based boosting method. In the second super-classifier, OCA based RBFN, HBC based RBFN and a Malsburg learning and BPN combination performed identification of right and left hand fingerprints and handwriting, respectively, and super-classifier2 combines the conclusions of these three different classifiers depending on programming based boosting logic and concludes the decision. Similarly, in the third super-classifier, palm-print, gait (silhouettes) and wrist-vein patterns were identified by modified OCA based RBFN, SOM based RBFN and OCA based RBFN, respectively, and super-classifier3 concludes the person's identification using the logic of the programming based boosting method. Finally, the mega-super-classifier integrates the decisions of these three super-classifiers, again based on the programming based boosting method, to conclude the final identification/authentication of the person (Fig. 1).

For the above-mentioned single classifiers, in the case of the modified RBFNs, the networks [23-27] individually consist of three layers: (1) an input layer for different biometric pattern representation, (2) a hidden (clustering) layer comprising 'basis units', and (3) an output (classification) layer. The outputs of the different clustering algorithms (mean 'μ', standard deviation 'σ' and the corresponding approximated normal distribution output functions) are applied in the 'basis units' of the corresponding RBFN. Thus, for the above-mentioned single classifiers, OCA, MOCA, SOM and HBC constitute the first phase of learning for the corresponding classifier, and the optimal weights are obtained using BP learning, which is the second phase of learning.
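As an illustration, here is a minimal sketch of the forward pass of such a three-layer modified RBFN, assuming Gaussian basis units parameterized by the cluster means μ and standard deviations σ obtained in the first learning phase; the output-layer weights W would be fitted by BP learning in the second phase. All names and sizes are illustrative.

```python
import numpy as np

def rbfn_forward(x, mu, sigma, W):
    """x: (d,) pattern; mu: (k, d) cluster means; sigma: (k,) spreads; W: (k, c)."""
    d2 = np.sum((mu - x) ** 2, axis=1)       # squared distances to the centers
    phi = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian basis unit activations
    scores = phi @ W                         # output (classification) layer
    return scores / scores.sum()             # normalized activations

rng = np.random.default_rng(0)
mu, sigma = rng.normal(size=(6, 64)), np.full(6, 2.0)  # 6 clusters, 64-dim input
W = rng.random((6, 4))                                  # 4 person classes
print(rbfn_forward(rng.normal(size=64), mu, sigma, W))
```

The normalized activations are read as the probability of belongingness to each class, as described in Section 3.2.6.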

In this multiple classification system, the time complexities of the 5 individual single classifier algorithms are as follows. The time complexity of the modified RBFN including OCA and BP learning is O(n*(k*d+p)), that of the modified RBFN including MOCA and BP learning is O(n*(m*k+p)), that of the modified RBFN including HBC and BP learning is O(n*(n+p)), that of the Malsburg learning and back propagation network combination is O(n*(n+p)), and that of the modified RBFN including SOM and BP learning is O(n*p+K²). Here, "n" is the total number of patterns in the pattern set, "k" is the total number of clusters formed, "d" is the dimensionality of each pattern, "p" is the total number of iterations to optimize the weights, "m" is the total number of iterations until the optimal solution is reached and "K" is the total number of map units.

3.2.1 Classifiers of super-classifier1

In the first classifier of super-classifier1, MOCA [27,28] was applied to form groups of the input face pattern set, which were used as input to the RBFN. It forms clusters of the various expressions of the faces of each person and view (person-view). Then BP learning of the RBFN classifies the "person-view" into "person". Here the total number of input nodes of the RBFN was equivalent to the total number of training face patterns, the output nodes were set to the number of classes, and the hidden units were equivalent to the total number of clusters formed by MOCA.

Fig. 1. Block diagram of the present system for testing identification.

In the second classifier of super-classifier1, a self-organizing network [23] with Kohonen's learning was utilized to create clusters of the preprocessed input iris pattern set, which were used as input to the RBFN. The SOM forms a two-dimensional feature map (for this system, a 15×15 feature map) from which the number of clusters can be evaluated directly. It creates clusters of the various expressions of the eyes (irises) of every person and of the two eyes (left and right). The BP learning of the RBFN combines the irises of the two eyes (left and right) of each person into "person iris".
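A minimal sketch of this clustering stage is given below, using the third-party MiniSom package as a stand-in implementation (an assumption; the paper does not name a library). A 15×15 map is trained on the preprocessed 1D iris vectors and each pattern is assigned to its winning map unit (cluster); the data shape is illustrative.

```python
import numpy as np
from minisom import MiniSom

iris_vectors = np.random.rand(160, 900)      # stand-in: 160 patterns, 900-dim

som = MiniSom(15, 15, input_len=900, sigma=1.5, learning_rate=0.5)
som.random_weights_init(iris_vectors)
som.train_random(iris_vectors, num_iteration=5000)

clusters = [som.winner(v) for v in iris_vectors]  # (row, col) of winning unit
```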

In the third classifier of super-classifier1, a competitive network (Malsburg learning) [29] was applied to form clusters of the preprocessed input eye data set. It forms groups of the various expressions of each person's two eyes (left and right). Then the BPN [29,30] classifier was used, which classifies the two eyes (left and right) of each person into "person eye". From the output layer of BP learning, the optimal weights can be obtained. Here the total number of input nodes of the BPN was equivalent to the total number of training eye patterns, the hidden layer nodes were set to the total number of clusters formed by the Malsburg learning network, and the output nodes were set to the number of classes.

3.2.2 Classifiers of super-classifier2

In the first classifier of super-classifier2, OCA [24,25,31-33] was used to form clusters of the preprocessed input right-hand fingerprint set (thumb, second, third and fourth finger, as per the standard CASIA version 5 dataset). These clustering outputs were applied as input to the RBFN. Here OCA formed groups of the various qualities of fingerprints of each person and of individual fingers (person-finger). The BP learning of the RBFN combines the different fingers' fingerprints of a person into "person fingerprint".

In the second classifier of super-classifier2, the HBC [26] algorithm was used to form groups of the preprocessed input left-hand fingerprint set (thumb, second, third and fourth finger, as per the standard CASIA version 5 dataset). As in the previous classifier, these clustering outputs were applied as input to the RBFN. Here HBC also formed groups of the various qualities of fingerprints of each person and of individual fingers (person-finger). Then the BP learning of the RBFN classifier was used to classify the different fingers' fingerprints of a person into "person fingerprint", as in the previous classifier.

In the third classifier of super-classifier2, again the Malsburg learning and BPN combination was utilized for handwriting identification. Here, Malsburg learning network formed groups of various qualities of handwritings (name and surname) of each person. Then the BPN was utilized which classifies the “person name-person surname” into “person name”.

3.2.3 Classifiers of super-classifier3

In the first classifier of super-classifier3, again MOCA was applied to form groups of the preprocessed input palm-print (left and right hand) set which were used as input to the RBFN. Then BP Learning of RBFN combines the palm prints of two hands (left and right) of each person into “person-palm print”.

In the second classifier of super-classifier3, SOM based RBFN was utilized for gait recognition. Silhouettes were taken here as the pattern set. We divided the continuous gait sequences into a few time slots and took alternate slots of gait patterns to train the classifier; the remaining instances were taken to test the classifier. Here the SOM network was used to form groups of gait patterns of the same time slots for each person. Finally, BP learning of the RBFN classifies the different time slots of gait patterns of each person as "person-gait".

In the third classifier of super-classifier3, OCA based RBFN was utilized for wrist-vein identification. Here OCA was applied to form groups of the input wrist-vein (left and right hand) set, which were used as input to the RBFN. Finally, BP learning of the RBFN classifies the wrist-veins of the two hands (left and right) of each person into "person-wrist vein".

3.2.4 Programming based boosting

The present multi-classifier used programming based boosting [34,35] in the super-classifiers and the mega-super-classifier, i.e., the super-classifiers and the mega-super-classifier concluded the final person identification depending on the programming based boosting method, incorporating the conclusions of three distinct classifiers or super-classifiers. In this method, the weight of the vote of each classifier or super-classifier is pre-assigned or customized in advance. The weights of the separate links from the distinct classifiers or super-classifiers into the integrator are 'programmed'. These weights are measurements in terms of the normalized accuracy of the distinct classifiers or super-classifiers.
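A minimal sketch of this integrator follows, under the description above and in Section 3.2.6: the link weights are the normalized accuracies of the members, the class with the largest weighted vote wins, and the reported probability is the minimum graded probability among the members (the safest acceptable probability). Function and variable names are our own; the example accuracies are those of the three members of super-classifier1 in Table 5.

```python
def integrate(decisions, probabilities, accuracies):
    """decisions: class label per member classifier; probabilities: graded
    probability per member; accuracies: raw accuracy per member."""
    total = sum(accuracies)
    weights = [a / total for a in accuracies]   # 'programmed' link weights
    votes = {}
    for cls, w in zip(decisions, weights):
        votes[cls] = votes.get(cls, 0.0) + w    # weighted voting per class
    winner = max(votes, key=votes.get)
    return winner, min(probabilities)           # safest acceptable probability

# Two members agree on 'person2', so it wins the weighted vote:
print(integrate(['person2', 'person2', 'person5'], [0.71, 0.66, 0.58],
                [0.9222, 0.9000, 0.9444]))      # -> ('person2', 0.58)
```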

3.2.5 Identification learning with different biometric patterns

To train the different classifiers on 9 different biometric features (color-face, color-eye/color-iris with one common training database for these two biometrics, right and left hand fingerprints, handwriting, palm-print, gait (silhouettes), and wrist-vein), 8 different training databases were used. Every database comprises various biometric patterns of different persons.

In the color-face database, 6 various expressions and 3 separate angular views, i.e., frontal, 90° left side and 90° right side view, were included for each person's facial patterns. In the color-iris/eye database, 8 various expressions of the left and right eyes were taken separately for each person. In the distinct right and left hand fingerprint databases, 3 various qualities of fingerprints (hard-press, medium-press and soft-press) as well as four different fingers' fingerprints (thumb, second, third and fourth finger, as per the standard CASIA version 5 database) were incorporated for each person. The handwriting database comprises 6 various qualities of handwritings (name and surname separately) for every person. In the palm-print database, four different qualities of palm-prints of both the right and left hands were taken for each person. In the gait database, patterns of different time slots were taken for every person. Finally, in the wrist-vein database, four various qualities of wrist-vein patterns of both the right and left hands were taken for each person (Fig. 2).

Fig. 2. Samples of some training and test patterns of various biometric features for single classifiers.

All the preprocessed different biometric patterns were fed as input to the corresponding separate classifiers. Once the classifiers had learned all the different training patterns (9 different biometric features) for all the different people, the classifiers were ready to recognize people through these learned patterns; such people are referred to as 'known' persons. The people whose biometric patterns were not included and learned during the training process of the classifiers are referred to as 'unknown' persons (Fig. 3).

Fig. 3. Block diagram for learning identification of individual single classifiers.
3.2.6 Identification testing with different biometric patterns

The test databases of the different biometrics contained patterns (color-face, color-eye/color-iris, right and left hand fingerprints, handwriting, palm-print, gait (silhouettes) and wrist-vein) of different qualities/expressions/instances of the same people as in the training databases. These test patterns were entirely different from those in the training databases (Fig. 2).

For the performance evaluation of the three different super-classifiers and the mega-super-classifier, the test databases contained pattern sets of the same people as in the training databases. Every test set of super-classifier1 contained one color-face and one color-eye. Each test set of super-classifier2 contained one right-hand fingerprint, one left-hand fingerprint and one handwriting, and each test set of super-classifier3 contained one palm-print, one gait (silhouette) and one wrist-vein pattern. In the test database of the mega-super-classifier, each test set comprised one color-face, color-eye, right-hand fingerprint, left-hand fingerprint, handwriting, palm-print, gait and wrist-vein pattern. The patterns of every test set were also of several qualities/expressions/instances which were completely different from those in the training databases (Fig. 4). The test sets for the 9 single classifiers, the three super-classifiers and the mega-super-classifier also contained several unknown patterns of various qualities/expressions/instances.

Fig. 4. A sample test pattern set (person3) of the mega-super-classifier for person identification.

The test patterns of the different biometrics were fed as input to 9 different preprocessors. The preprocessed patterns were taken as inputs to the previously trained networks of the 9 single classifiers. After completion of the training of the different classifiers, high output values were obtained for known patterns and low output values for unknown patterns. A threshold value was therefore necessary to differentiate between known and unknown biometric patterns. The threshold was set as the mean of the minimum output value over known patterns and the maximum output value over unknown patterns; this threshold value was different for different biometric patterns. If the output value for a given biometric pattern is above the threshold, the pattern is considered known. The BP networks produce different output activations in the different output units, and the probability of belongingness of the given test pattern to the different classes can be obtained from the normalized activation of each output unit. The test pattern is considered to belong to the class with the highest normalized activation, and that normalized activation itself gives the probability of belongingness of the input test pattern to that specific class. Then the three individual super-classifiers conclude the identification of the person depending on the programming based boosting method, incorporating the conclusions of the corresponding three distinct single classifiers. The mega-super-classifier then determines the final identification of the person, again depending on the programming based boosting method, incorporating the conclusions of the three different super-classifiers. Finally, we compute the probability of belongingness of the given test pattern set to the class decided by the super-classifiers and the mega-super-classifier by taking the minimum probability among the three different classifiers/super-classifiers.
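A minimal sketch of this known/unknown decision for one classifier follows, assuming `activations` holds the BP network's output-unit values for a test pattern; the known/unknown output statistics and all values below are illustrative.

```python
import numpy as np

def classify(activations, min_known, max_unknown):
    threshold = (min_known + max_unknown) / 2.0   # mean of the two extremes
    best = int(np.argmax(activations))
    if activations[best] <= threshold:
        return 'unknown person', None
    prob = activations[best] / activations.sum()  # probability of belongingness
    return f'person{best + 1}', prob

print(classify(np.array([0.08, 0.81, 0.12, 0.05]), min_known=0.6, max_unknown=0.3))
# -> ('person2', 0.764...)
```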

If two or more classifiers produce contrary outputs for a test pattern set, then the decision obtained by the classifier/super-classifier with the higher weighted link has to be accepted, with the minimum probability. So the proposed algorithm of the super-classifier/mega-super-classifier also performs well in such contrary conditions (Fig. 1 and Algorithm 1).

Algorithm 1. Algorithm for person identification with super-classifiers and mega-super-classifier.

Time complexity of Algorithm 1: In the above-mentioned algorithm, from step 1 to step 5, i.e., for the single classifiers, the complexity is O(n). From step 9 to step 14, for the super-classifiers, the time complexity is again O(n), and for the mega-super-classifier the complexity is O(n). Hence, the total time complexity of the algorithm is O(n).

4. Results and Performance Analysis

Nine different biometric traits were taken from 8 different standard databases to build the training and test databases of this system. It was not possible for us to collect all the varieties of biometric patterns from a single standard database. Hence it was presumed, without loss of generality, that the different biometric patterns from the various standard databases belonged to the same specific people when estimating this multi-classifier's performance.

We utilized training and test databases for color-face samples from the FEI database (http://fei.edu.br/~cet/facedatabase.html), eyes/irises from the UTIRIS database (http://utiris.wordpress.com/), right and left hand fingerprints from the CASIA Fingerprint Image Database Version 5.0 (http://biometrics.idealtest.org/dbDetailForUser.do?id=7), handwritings from the IAM handwriting database (http://www.iam.unibe.ch/fki/databases/iam-handwriting-database/download-the-iam-handwriting-database), palm-prints from the CASIA Palmprint Image Database (http://biometrics.idealtest.org/dbDetailForUser.do?id=5), gaits (silhouettes) from the CASIA Gait Database (http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) and wrist-veins from CIE Biometrics (http://biometrics.put.poznan.pl/vein-dataset/).

4.1. Performance Evaluation Metrics of the Classifiers

The holdout method [26,27,29] was utilized for the performance evaluation of the individual classifiers, the super-classifiers and the mega-super-classifier. Confusion matrices have been computed for the individual single classifiers, for the super-classifiers and also for the mega-super-classifier.

A confusion matrix [26-29,34-36] is a table that is used to describe the performance of a classifier. Examining a confusion matrix gives a clearer idea of what the classification model is getting right and what kinds of errors it is producing (Fig. 5).

Fig. 5. Confusion matrix (2 class).

From the aforesaid binary confusion matrix (Fig. 5) containing only two classes (say P and Q), the accuracy, precision, recall and F-score [26,27,29] are defined as follows:

(4)
$$\text{Accuracy} = \frac{a + d}{a + b + c + d} \times 100$$

(5)
$$\text{Precision} = \frac{a}{a + b} \times 100$$

(6)
$$\text{Recall} = \frac{a}{a + c} \times 100$$

(7)
$$\text{F-score} = \frac{2 \times \text{recall} \times \text{precision}}{\text{recall} + \text{precision}}$$

Since recall and precision in (5) and (6) are already expressed as percentages, no further scaling by 100 is applied in (7).

When the holdout method is used to estimate the classifiers' performance, the samples taken for testing are those excluded from the training database. When the accuracy metric is applied for a classifier's performance estimation, the overall performance is reflected, irrespective of the distinct performance for each class. That is why the accuracy metric is appropriate for evaluating a classifier's performance through a single numeric value. The precision, recall and F-score metrics are utilized to describe the performance of each class.
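A minimal sketch computing Eqs. (4)-(7) from the binary confusion matrix of Fig. 5 follows, with a = true P, b = Q predicted as P, c = P predicted as Q, and d = true Q. The example values treat "Person I vs. the rest" of super-classifier1 (Tables 1 and 6) as the binary problem.

```python
def metrics(a, b, c, d):
    accuracy = (a + d) / (a + b + c + d) * 100
    precision = a / (a + b) * 100
    recall = a / (a + c) * 100
    f_score = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f_score

# Person I of super-classifier1: a=18, b=2, c=0, d=70 remaining test cases.
print(metrics(18, 2, 0, 70))  # precision 90.0, recall 100.0, F-score ~94.74
```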

4.2. Experimental Results

The present system was developed and trained on a computer with an Intel Core 2 Duo E8400 3.00 GHz processor, 4 GB RAM and the Windows 7 32-bit operating system. The multi-classification system was implemented in MATLAB R2008b.

A significant part of the experimental results, which handles the contrary case (each classifier recognizing a different person), is given below:

Fig. 6. Graphical representation of [Sb/Sw] vs. threshold of MOCA for color-face (a) and palm-print (b).

Now if general majority voting logic is considered, then super-classifier2 decides the given pattern set as of an 'unknown person', since the three separate classifiers produce three different identification results. But when the programming based boosting algorithm is applied, super-classifier2 decides the given pattern set as of person 1 with probability 0.39164. Here, the respective weights of the links of the three single classifiers are 0.2899, 0.3445, and 0.3656. The weight of the link associated with the third classifier is the highest, and the minimum graded probability is obtained from the first classifier. Therefore the maximum weighted conclusion with the minimum probability (the safest acceptable probability) is decided as the final result.
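Running this contrary case through the integration rule of Section 3.2.4 (inlined below): the three member decisions differ, 'person1' from the highest-weighted third classifier wins the weighted vote, and the reported probability is the minimum among the members, 0.39164, which comes from the first classifier. The first two class labels and the two larger probabilities are illustrative assumptions; the link weights are those reported above.

```python
decisions = ['person2', 'person4', 'person1']   # first two labels assumed
probabilities = [0.39164, 0.52, 0.61]           # only the minimum is reported
weights = [0.2899, 0.3445, 0.3656]              # reported link weights

votes = {}
for cls, w in zip(decisions, weights):
    votes[cls] = votes.get(cls, 0.0) + w
winner = max(votes, key=votes.get)              # 'person1' (weight 0.3656)
print(winner, min(probabilities))               # -> person1 0.39164
```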

From Fig. 6, the graphical demonstration of [Sb/Sw] vs. threshold of MOCA shows that [Sb/Sw] is maximized from T1 through T2. For the training color-face patterns, the values of T1 and T2 are 2500 and 7000, and for palm-print 4000 and 4170, respectively. The thresholds were incremented by 500 for color-face and by 10 for palm-print. From T1 through T2, the desired numbers of clusters are obtained. The mean value of T1 and T2 was taken as the final threshold to obtain the perfect clusters without any misclassification.
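A minimal sketch of this threshold selection follows, assuming a hypothetical helper `count_clusters(patterns, T)` that runs the threshold-based clustering and returns the number of clusters formed (MOCA's own criterion maximizes [Sb/Sw] over the same scan). The color-face settings, 2500 to 7000 in steps of 500, are taken from the text; the desired cluster count and the stand-in helper are illustrative.

```python
def pick_threshold(patterns, t_start, t_end, step, desired, count_clusters):
    good = [T for T in range(t_start, t_end + 1, step)
            if count_clusters(patterns, T) == desired]
    T1, T2 = good[0], good[-1]     # range [T1, T2] giving the desired clusters
    return (T1 + T2) / 2.0         # final threshold: mean of T1 and T2

# Stand-in helper: pretend every threshold in 2500..7000 yields 24 clusters.
final_T = pick_threshold(None, 2500, 7000, 500, 24,
                         lambda p, T: 24 if 2500 <= T <= 7000 else 0)
print(final_T)  # -> 4750.0
```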

Table 1.

Confusion matrix for super-classifier1
Rows: predicted class; columns: actual class
Person I Person II Person III Person IV Unknown person
Person I 18 0 0 0 2
Person II 0 18 0 0 2
Person III 0 0 18 0 0
Person IV 0 0 0 18 0
Unknown person 0 0 0 0 14

Table 2.

Confusion matrix for super-classifier2
Rows: predicted class; columns: actual class
Person I Person II Person III Person IV Unknown person
Person I 18 0 0 0 0
Person II 0 18 0 0 2
Person III 0 0 18 0 0
Person IV 0 0 0 18 0
Unknown person 0 0 0 0 16

Table 3.

Confusion matrix for super-classifier3
Rows: predicted class; columns: actual class
Person I Person II Person III Person IV Unknown person
Person I 18 0 0 0 0
Person II 0 18 0 0 1
Person III 0 0 17 0 0
Person IV 0 0 0 18 0
Unknown person 0 0 1 0 17

Table 4.

Confusion matrix for the mega-super-classifier
Rows: predicted class; columns: actual class
Person I Person II Person III Person IV Unknown person
Person I 18 0 0 0 0
Person II 0 18 0 0 1
Person III 0 0 18 0 0
Person IV 0 0 0 18 0
Unknown person 0 0 0 0 17

Tables 1–4 display the confusion matrices of the three different super-classifiers and of the mega-super-classifier. Table 5 displays the accuracies of the 9 different classifiers for the 9 different biometric traits, of the three different super-classifiers and of the mega-super-classifier. The accuracies for the single classifiers are ≥90% except for the right hand fingerprint patterns. In this system, OCA based RBFN was utilized for right hand fingerprint and wrist-vein identification. OCA may not provide perfect clusters in every case, and as a consequence the accuracy for right hand fingerprints is comparatively low. Conventional OCA considers one specific intra-cluster similarity or threshold to form the clusters, but inter-cluster distances are not considered in this algorithm, so there is a possibility of misclassification among the groups of patterns. MOCA uses both the intra-cluster distance and the inter-cluster distance to create perfect clusters and avoid misclassification among the clusters. Thus MOCA based RBFN classifiers give higher accuracy for the different biometrics. The accuracies of the three different super-classifiers are ≥95% and that of the mega-super-classifier is 98.89%. Therefore, it is evident that the mega-super-classifier is more effective for person recognition than the single classifiers and also than the super-classifiers utilizing one or three biometric traits independently.

Table 5.

Accuracy of the classifiers (holdout method)
Classifiers Accuracy (%)
First classifier of super-classifier1 (color-face) 92.22
Second classifier of super-classifier1 (color-iris) 90.00
Third classifier of super-classifier1 (color-eye) 94.44
First classifier of super-classifier2 (right-hand fingerprint) 76.67
Second classifier of super-classifier2 (left-hand fingerprint) 91.11
Third classifier of super-classifier2 (handwriting) 96.67
First classifier of super-classifier3 (palm-print) 96.67
Second classifier of super-classifier3 (gait) 90.00
Third classifier of super-classifier3 (wrist-vein) 94.44
Super-classifier1 95.56
Super-classifier2 97.78
Super-classifier3 97.78
Mega-super-classifier 98.89

Table 6.

Performance measurement of the classifiers (1st super-classifier)
Performance evaluation metrics person wise 1st classifier (color-face) 2nd classifier (color-iris) 3rd classifier (color-eye) Super-classifier1
Precision (%) PersonI 85.71 100 100 90
PersonII 85 66.67 100 90
PersonIII 100 100 100 100
PersonIV 94.74 100 100 100
Unknown person 100 100 78.26 100
Recall (%) PersonI 100 100 100 100
PersonII 94.44 100 72.22 100
PersonIII 100 88.89 100 100
PersonIV 100 100 100 100
Unknown person 66.67 61.11 100 77.78
F-score (%) PersonI 92.31 100 100 94.74
PersonII 89.47 80 83.87 94.74
PersonIII 100 94.12 100 100
PersonIV 97.29 100 100 100
Unknown person 80 75.86 87.81 87.50

Table 7.

Performance measurement of the classifiers (2nd super-classifier)
Performance evaluation metrics person wise 1st classifier (right-hand fingerprint) 2nd classifier (left-hand fingerprint) 3rd classifier (handwriting) Super-classifier2
Precision (%) PersonI 100 100 100 100
PersonII 72.22 81.82 94.44 90
PersonIII 100 100 100 100
PersonIV 100 100 100 100
Unknown person 44.83 77.78 89.47 100
Recall (%) PersonI 77.78 88.89 94.44 100
PersonII 72.22 100 94.44 100
PersonIII 72.22 88.89 100 100
PersonIV 88.89 100 100 100
Unknown person 72.22 77.78 94.44 88.89
F-score (%) PersonI 87.50 94.12 97.14 100
PersonII 72.22 90 94.44 94.74
PersonIII 83.87 94.12 100 100
PersonIV 94.12 100 100 100
Unknown person 55.32 77.78 91.89 94.12

Table 8.

Performance measurement of the classifiers (3rd super-classifier)
Performance evaluation metrics person wise 1st classifier (palm-print) 2nd classifier (gait) 3rd classifier (wrist-vein) Super-classifier3
Precision (%) PersonI 100 100 100 100
PersonII 100 85.71 94.44 94.74
PersonIII 100 87.50 100 100
PersonIV 100 89.47 100 100
Unknown person 85.71 90 80.95 94.44
Recall (%) PersonI 100 77.78 100 100
PersonII 100 100 94.44 100
PersonIII 94.44 77.78 100 94.44
PersonIV 88.89 94.44 83.33 100
Unknown person 100 100 94.44 94.44
F-score (%) PersonI 100 87.50 100 100
PersonII 100 92.31 94.44 97.29
PersonIII 97.14 82.35 100 97.14
PersonIV 94.12 91.89 90.91 100
Unknown person 92.31 94.74 87.18 94.44

Table 9.

Performance measurement of the Mega-Super-classifier
Precision (%) Recall (%) F-score (%)
Person I 100 100 100
Person II 94.74 100 97.29
Person III 100 100 100
Person IV 100 100 100
Unknown person 100 94.44 97.14

Tables 6–9 present the precision, recall and F-score metrics, which describe the performance of every class under the holdout method for all the classifiers. Table 10 shows that the present multi-classifier has an overall low recognition time (<1 second). The limitation of this multi-classifier is that it took quite a long time for training. But training is a one-time process, whereas recognition is performed many times: once the system is completely trained, it can perform recognition repeatedly for different inputs as per users' demand. Hence, utilizing multiple classifiers, accurate as well as reliable identification can be obtained with minimum identification time at the cost of training time.

Table 10.

Learning and recognition times of the classifiers (unit: second)
Classifiers Training time Recognition time Total time
First classifier of super-classifier1 (color-face) 90.709 0.0929 90.8019
Second classifier of super-classifier1 (color-iris) 122.329 0.0673 122.3963
Third classifier of super-classifier1 (color-eye) 174.267 0.0073 174.2743
First classifier of super-classifier2 (right-hand fingerprint) 13.256 0.0151 13.2711
Second classifier of super-classifier2 (left-hand fingerprint) 18.736 0.0159 18.7519
Third classifier of super-classifier2 (handwriting) 33.144 0.0076 33.1516
First classifier of super-classifier3 (palm-print) 31.978 0.0212 31.9992
Second classifier of super-classifier3 (gait) 189.184 0.1786 189.3626
Third classifier of super-classifier3 (wrist-vein) 38.250 0.2096 38.4596
Super-classifier1 387.305 0.000001 387.305001
Super-classifier2 65.136 0.000001 65.136001
Super-classifier3 259.412 0.000002 259.412002
Mega-super-classifier 711.867 0.009939 711.876939

Table 11 displays a comparative analysis of the developed multi-classifier against the other multimodal systems mentioned in Section 2 [14-20], using different parameters. Compared to other multimodal systems, our developed system effectively deals with 9 different biometric features to give a very secure and reliable person authentication system with higher accuracy as well as low identification time. Therefore, the proposed approach shows improvement in both accuracy and identification time as compared to the methods mentioned in Section 2.

Table 11.

Comparative study with other multimodal systems
Methods or systems Biometrics used No. of biometrics or features Accuracy (%) of the system Identification time (s) of the system Characteristics of the system
Block sum method, Haar wavelet method, RBFN [16] Fingerprint, Iris 2 92 0.12 Person identification using 2 biometrics
Contourlet transform, Euclidean distance [18] Palm-print, Palm vein 2 97.40 0.0078 Person identification using 2 biometrics
SIFT, SURF, texture, Score level [19] Palm-print, Finger Knuckle Print 2 99.54 0.076 Person identification using 2 biometrics
Template matching [20] Iris, Face, Voice 3 92 - Person identification using 3 biometrics
Minutiae matching- Gabor filter, PCA [14] Face, Fingerprint 2 97.5 - Person identification using 2 biometrics
ALFLP, AHL [15] Face, Gait 2 98.6 - Person identification using 2 biometrics
Modified PUM [17] Iris, Face 2 94.2 - Person identification using 2 biometrics
Proposed system (OCA based RBFN, MOCA based RBFN, HBC based RBFN, SOM based RBFN, Malsburg learning and BPN combination) Face, Iris, Eye, Right and left hand fingerprint, Handwriting, Palm-print, Gait, Wrist-vein 9 98.89 0.009 Person identification using 9 biometrics

5. Conclusions

The present system utilizes multiple classifiers with various biometric traits rather than a particular classifier with only one biometric trait for person authentication. This is highly beneficial, as the reliability of person authentication is far greater than that of a single classifier depending on a specific biometric. The conclusions from the different classifiers acting on different biometrics are properly integrated depending on weighted voting logic through programmed weights. Therefore, all the decisions from the single classifiers and super-classifiers are merged together to get the most reliable decision. The performance measurements with the accuracy, precision, recall and F-score metrics through the holdout method for the different classifiers, the super-classifiers and finally the mega-super-classifier are quite high for the various biometric features. Also, the recognition time is reasonably low for the various biometrics. Moreover, if a few biometrics are improper due to damage, conventional multimodal systems may not be appropriate for person identification, whereas the present system, integrating different classifiers and thereafter different super-classifiers, would still be able to produce a conclusion in such cases. This system is also beneficial for preventing forging, as it may not be possible to spoof all the required biometric features. Hence, the present multi-classification system depending on different biometric traits is accurate, efficient and far more reliable than conventional unimodal and multimodal person identification systems.

Biography

Sumana Kundu
https://orcid.org/0000-0003-0731-8284

She received her Ph.D. in Computer Science and Engineering in 2018 from the National Institute of Technology, Durgapur, India. She received her B.E. degree in Computer Science and Engineering in 2008 and her M.Tech. degree with specialization in Software Engineering in 2010. She is an Assistant Professor in the Department of Computer Science and Engineering at Siliguri Institute of Technology, Techno India Group, India. Her research interests include data mining, pattern recognition, biometric identification, image classification, and neural networks.

Biography

Goutam Sarker
https://orcid.org/0000-0002-9510-6777

He is an Associate Professor in the Department of Computer Science and Engineering at the National Institute of Technology, Durgapur, India. He received his B.E. in Electronics and Telecommunication Engineering and his M.E. with specialization in Computer Science and Engineering. In 1994, he received his Ph.D. in Artificial Intelligence. He is a life fellow of IE(I) and IETE(I). He is also a senior member of IEEE and a member of ACM. He has 70 research publications in the broad areas of machine learning, pattern recognition and data mining.

References

  • 1 R. Bhati, S. Jain, N. Maltare, D. K. Mishra, "A comparative analysis of different neural networks for face recognition using principal component analysis, wavelets and efficient variable learning rate," in Proceedings of the International Conference on Computer and Communication Technology, Allahabad, India, 2010, pp. 526-531. doi: 10.1109/ICCCT.2010.5640486
  • 2 H. Boughrara, M. Chtourou, C. B. Amar, "MLP neural network based face recognition system using constructive training algorithm," in Proceedings of the International Conference on Multimedia Computing and Systems, Tangier, Morocco, 2012, pp. 233-238. doi: 10.1109/ICMCS.2012.6320263
  • 3 S. H. Moi, H. Asmuni, R. Hassan, R. M. Othman, "A unified approach for unconstrained off-angle iris recognition," in Proceedings of the International Symposium on Biometrics and Security Technologies, Kuala Lumpur, Malaysia, 2014, pp. 39-44. doi: 10.1109/ISBAST.2014.7013091
  • 4 Z. Lin, B. Lu, "Iris recognition method based on the imaginary coefficients of Morlet wavelet transform," in Proceedings of the 7th International Conference on Fuzzy Systems and Knowledge Discovery, Yantai, China, 2010, pp. 573-577. doi: 10.1109/FSKD.2010.5569475
  • 5 J. Luo, S. Lin, J. Ni, M. Lei, "An improved fingerprint recognition algorithm using EBFNN," in Proceedings of the 2nd International Conference on Genetic and Evolutionary Computing, Hubei, China, 2008, pp. 504-507. doi: 10.1109/WGEC.2008.48
  • 6 C. Yu, Z. Jian, Y. Bo, C. Deyun, "A novel principal component analysis neural network algorithm for fingerprint recognition in online examination system," in Proceedings of the Asia-Pacific Conference on Information Processing, Shenzhen, China, 2009, pp. 182-186. doi: 10.1109/APCIP.2009.53
  • 7 C. Anton, C. Stirbu, R. Vasile Badea, "Identify handwriting individually using feed forward neural networks," International Journal of Intelligent Computing Research, 2010, vol. 1, no. 4, pp. 183-188. doi: 10.20533/ijicr.2042.4655.2011.0022
  • 8 J. Ashok, E. G. Rajan, "Off-line hand written character recognition using radial basis function," Advanced Networking and Applications, 2011, vol. 2, no. 4, pp. 792-795. doi: 10.1109/AHICI.2012.6408440
  • 9 H. Imtiaz, S. Aich, S. A. Fattah, "Palm-print recognition based on DCT domain statistical features extracted from enhanced image," in Proceedings of the International Conference on Electrical Engineering and Information Communication Technology, Dhaka, Bangladesh, 2014, pp. 1-4. doi: 10.1109/ICEEICT.2014.6919170
  • 10 A. George, G. Karthick, R. Harikumar, "An efficient system for palm print recognition using ridges," in Proceedings of the International Conference on Intelligent Computing Applications, Coimbatore, India, 2014, pp. 249-253. doi: 10.1109/ICICA.2014.60
  • 11 W. Zeng, C. Wang, "View-invariant gait recognition via deterministic learning," Neurocomputing, 2016, vol. 175, pp. 324-335. doi: 10.1016/j.neucom.2015.10.065
  • 12 S. M. Lajevardi, A. Arakala, S. Davis, K. J. Horadam, "Hand vein authentication using biometric graph matching," IET Biometrics, 2014, vol. 3, no. 4, pp. 302-313. doi: 10.1049/iet-bmt.2013.0086
  • 13 Y. Wang, K. Zhang, L. K. Shark, "Personal identification based on multiple keypoint sets of dorsal hand vein images," IET Biometrics, 2014, vol. 3, no. 4, pp. 234-245. doi: 10.1049/iet-bmt.2013.0042
  • 14 R. L. Telgad, P. D. Deshmukh, A. M. N. Siddiqui, "Combination approach to score level fusion for multimodal biometric system by using face and fingerprint," in Proceedings of the International Conference on Recent Advances and Innovations in Engineering, Jaipur, India, 2014, pp. 1-8. doi: 10.1109/ICRAIE.2014.6909320
  • 15 M. S. Almohammad, G. I. Salama, T. A. Mahmoud, "Human identification system based on feature level fusion using face and gait biometrics," in Proceedings of the International Conference on Engineering and Technology, Cairo, Egypt, 2012, pp. 1-5. doi: 10.1109/ICEngTechnol.2012.6396120
  • 16 U. Gawande, M. Zaveri, A. Kapur, "Fingerprint and iris fusion based recognition using RBF neural network," Journal of Signal and Image Processing, 2013, vol. 4, no. 1, pp. 142-148. Available: https://pdfs.semanticscholar.org/0ef9/217c550964be3b8a0e31c7d62e32c71038c6.pdf
  • 17 J. Lin, J. Li, H. Lin, J. Ming, "Robust person identification with face and iris by modified PUM method," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis, Chengdu, China, 2009, pp. 321-324. doi: 10.1109/ICACIA.2009.5361089
  • 18 D. P. Gaikwad, S. P. Narote, "Multi-modal biometric system using palm print and palm vein features," in Proceedings of the 2013 Annual IEEE India Conference (INDICON), Mumbai, India, 2013, pp. 1-5. doi: 10.1109/INDCON.2013.6726010
  • 19 E. Perumal, S. Ramachandran, "A multimodal biometric system based on palmprint and finger knuckle print recognition methods," The International Arab Journal of Information Technology, 2015, vol. 12, no. 2, pp. 118-128. Available: https://pdfs.semanticscholar.org/a415/ffce93c778a7e8b11c6b20f9c7c568511829.pdf
  • 20 S. Chaudhary, R. Nath, "A new multimodal biometric recognition system integrating iris, face and voice," International Journal of Advanced Research in Computer Science and Software Engineering, 2015, vol. 5, no. 4, pp. 145-150. Available: http://ijarcsse.com/Before_August_2017/docs/papers/Volume_5/4_April2015/V5I4-0788.pdf
  • 21 J. Huang, X. You, Y. Y. Tang, L. Du, Y. Yuan, "A novel iris segmentation using radial-suppression edge detection," Signal Processing, 2009, vol. 89, no. 12, pp. 2630-2643. doi: 10.1016/j.sigpro.2009.05.001
  • 22 V. Conti, C. Militello, F. Sorbello, "A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2010, vol. 40, no. 4, pp. 384-395. doi: 10.1109/TSMCC.2010.2045374
  • 23 S. Kundu, G. Sarker, "A modified SOM based RBFN for rotation invariant clear and occluded fingerprint recognition," in Intelligent Computing and Applications. New Delhi: Springer, 2014, pp. 11-18. doi: 10.1007/978-81-322-2268-2_2
  • 24 G. Sarker, S. Kundu, "A modified radial basis function network for fingerprint identification and localization," in Proceedings of the International Conference on Advanced Engineering and Technology, Stockholm, Sweden, 2013, pp. 26-31.
  • 25 S. Kundu, G. Sarker, "A modified radial basis function network for occluded fingerprint identification and localization," International Journal of Computers Information Technology and Engineering, 2013, vol. 7, no. 2, pp. 103-109.
  • 26 S. Kundu, G. Sarker, "A modified RBFN based on heuristic based clustering for location invariant fingerprint recognition and localization with and without occlusion," in Proceedings of the International Conference for Convergence of Technology, Pune, India, 2014, pp. 1-6. doi: 10.1109/I2CT.2014.7092281
  • 27 S. Kundu, G. Sarker, "A new RBFN with modified optimal clustering algorithm for clear and occluded fingerprint identification," in Proceedings of the 2nd International Conference on Control, Instrumentation, Energy and Communication, Kolkata, India, 2016, pp. 125-129. doi: 10.1109/CIEC.2016.7513668
  • 28 G. Sarker, S. Dhua, M. Besra, "An optimal clustering for fuzzy categorization of cursive handwritten text with weight learning in textual attributes," in Proceedings of the 2nd International Conference on Recent Trends in Information Systems, Kolkata, India, 2015, pp. 6-11. doi: 10.1109/ReTIS.2015.7232843
  • 29 S. Kundu, G. Sarker, "A modified BP network using Malsburg learning for rotation and location invariant fingerprint recognition and localization with and without occlusion," in Proceedings of the 7th International Conference on Contemporary Computing, Noida, India, 2014, pp. 617-623. doi: 10.1109/IC3.2014.6897244
  • 30 G. Sarker, "An optimal backpropagation network for face identification and localization," International Journal of Computers and Applications, 2013, vol. 35, no. 2, pp. 63-69. doi: 10.2316/journal.202.2013.2.202-3388
  • 31 G. Sarker, "An unsupervised natural clustering with optimal conceptual affinity," Journal of Intelligent Systems, 2010, vol. 19, no. 3, pp. 289-300. doi: 10.1515/JISYS.2010.19.3.289
  • 32 G. Sarker, K. Roy, "An RBF network with optimal clustering for face identification," Engineering Science International Research Journal, 2013, vol. 1, no. 1, pp. 70-74. Available: http://www.imrfjournals.in/pdf/MATHS/ESIRJ-VOLUME-1-ISSUE-1-2013/15.pdf
  • 33 G. Sarker, K. Roy, "A modified RBF network with optimal clustering for face identification and localization," International Journal of Advanced Computational Engineering and Networking, 2013, vol. 1, no. 3, pp. 30-35. Available: http://www.iraj.in/journal/journal_file/journal_pdf/3-20-139082384830-35.pdf
  • 34 S. Kundu, G. Sarker, "A person identification system with biometrics using modified RBFN based multiple classifiers," in Proceedings of the International Conference on Intelligent Computing and Communication, Kalyani, India, 2016, pp. 415-424. doi: 10.1007/978-981-10-2035-3_42
  • 35 S. Kundu, G. Sarker, "An efficient integrator based on template matching technique for person authentication using different biometrics," Indian Journal of Science and Technology, 2016, vol. 9, no. 42. doi: 10.17485/ijst/2016/v9i42/93805
  • 36 G. Sarker, "A weight learning technique for cursive handwritten text categorization with fuzzy confusion matrix," in Proceedings of the 2nd International Conference on Control, Instrumentation, Energy and Communication, Kolkata, India, 2016, pp. 188-192. doi: 10.1109/CIEC.2016.7513802