Search Word(s) in Title, Keywords, Authors, and Abstract: Texture
Face Recognition Based on the Combination of Enhanced Local Texture Feature and DBN under Complex Illumination Conditions
Chen Li, Shuai Zhao, Ke Xiao and Yanjie Wang
Page: 191~204, Vol. 14, No.1, 2018
10.3745/JIPS.04.0060
Keywords: Deep Belief Network, Enhanced Local Texture Feature, Face Recognition, Illumination Variation
To combat the adverse impact of illumination variation on face recognition, an effective and feasible algorithm is proposed in this paper. First, an enhanced local texture feature is presented by applying the central-symmetric encoding principle to the fused component images obtained from wavelet decomposition. The proposed local texture features are then combined with a Deep Belief Network (DBN) to obtain robust deep features of face images under severe illumination conditions. Extensive experiments with different test schemes are conducted on both the CMU-PIE and Extended Yale-B databases, which contain face images under various illumination conditions. Compared with DBN alone, LBP combined with DBN, and CSLBP combined with DBN, the proposed method achieves the highest recognition rate regardless of the database used, the test scheme adopted, or the illumination condition encountered, especially for face recognition under severe illumination variation.
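The enhanced descriptor above builds on the central-symmetric encoding principle (CS-LBP). As a minimal sketch of that principle only, the following numpy code encodes a single grayscale image by comparing the four centrally symmetric neighbor pairs of each pixel; the paper instead applies the encoding to wavelet-fused component images, and the threshold value here is an illustrative assumption.

```python
import numpy as np

def cs_lbp(gray, threshold=3.0):
    """Central-symmetric LBP: compare the 4 pairs of opposite neighbors
    in a 3x3 window and pack the results into a 4-bit code (0..15).
    The small threshold suppresses noise on flat 8-bit regions."""
    g = gray.astype(np.float64)
    # neighbor offsets, listed as 4 centrally symmetric pairs
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = g.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = g[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = g[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code

def cs_lbp_histogram(gray):
    """Normalized 16-bin CS-LBP histogram of one image."""
    hist, _ = np.histogram(cs_lbp(gray), bins=16, range=(0, 16))
    return hist / max(hist.sum(), 1)
```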
GLIBP: Gradual Locality Integration of Binary Patterns for Scene Images Retrieval
Salah Bougueroua and Bachir Boucheham
Page: 469~486, Vol. 14, No.2, 2018
10.3745/JIPS.02.0081
Keywords: CBIR, Elliptic-Region, Global Information, LBP, Local Information, Texture
We propose an enhanced version of the local binary pattern (LBP) operator for texture extraction in the context of image retrieval. The novelty of our proposal is based on the observation that the standard LBP exploits only the lowest level of local information through its global histogram, which reflects only the statistical distribution of the various LBP codes in the image. The block-based LBP, which uses local histograms of the LBP, was one of the few attempts to capture higher-level textural information. We believe that important and useful local information lying between these two levels is simply ignored by both schemes. The newly developed method, gradual locality integration of binary patterns (GLIBP), is a novel attempt to capture as much local information as possible, in a gradual fashion. GLIBP aggregates the texture features extracted by LBP from grayscale images through a structured framework comprising a multitude of ellipse-shaped regions arranged in circular-concentric forms of increasing size, all derived from a simple parameterized generator. The elliptic forms allow texture directionality to be targeted, which is a very useful property in texture characterization, and the general framework of ellipses also takes spatial information (specifically rotation) into account. The effectiveness of GLIBP was investigated on the Corel-1K (Wang) dataset and compared to published works including the very effective DLEP. Results show significantly higher or comparable performance of GLIBP with regard to the other methods, which qualifies it as a good tool for scene image retrieval.
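A minimal sketch of the gradual-locality idea, assuming a precomputed LBP code map: histograms are pooled over concentric elliptical regions produced by a simple parameterized generator and concatenated. The generator below (ring count, aspect ratio, orientation) is an illustrative stand-in, not the paper's exact framework of ellipses.

```python
import numpy as np

def elliptic_ring_masks(shape, n_rings=3, aspect=0.6, angle_deg=0.0):
    """Boolean masks for concentric, equally spaced elliptical rings
    centred on the image; `aspect` and `angle_deg` shape and orient them."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(angle_deg)
    u = (xx - xc) * np.cos(t) + (yy - yc) * np.sin(t)
    v = -(xx - xc) * np.sin(t) + (yy - yc) * np.cos(t)
    a, b = w / 2.0, (h / 2.0) * aspect
    r = np.sqrt((u / a) ** 2 + (v / b) ** 2)      # normalized elliptic radius
    edges = np.linspace(0, 1, n_rings + 1)
    return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]

def glibp_like_descriptor(lbp_map, n_rings=3, n_bins=256):
    """Concatenate LBP histograms pooled over each elliptical ring region."""
    feats = []
    for mask in elliptic_ring_masks(lbp_map.shape, n_rings):
        hist, _ = np.histogram(lbp_map[mask], bins=n_bins, range=(0, n_bins))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```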
A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle
Wei Song, Shuanghui Zou, Yifei Tian, Su Sun, Simon Fong, Kyungeun Cho and Lvyang Qiu
Page: 1445~1456, Vol. 14, No.6, 2018
10.3745/JIPS.02.0099
Keywords: Driving Awareness, Environment Perception, Unmanned Ground Vehicle, 3D Reconstruction
Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to a redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the traversable terrain and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Textured meshes and colored particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the computer graphics and image processing algorithms in parallel.
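A CPU-side sketch of the two perception steps described above, under simplifying assumptions: points are binned into a 2-D grid, cells whose height span exceeds a clearance are marked as obstacles, and connected component labeling groups them into objects. The grid cell size, the clearance threshold, and the use of scipy's labeling routine are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def segment_objects(points, cell=0.2, ground_clearance=0.3):
    """points: (N, 3) array of x, y, z LiDAR returns in a local frame.
    Returns per-cell object labels on a 2-D grid (0 = ground or empty)."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)                       # shift grid indices to start at 0
    nx, ny = xy.max(axis=0) + 1

    # lowest and highest z observed in every grid cell
    zmin = np.full((nx, ny), np.inf)
    zmax = np.full((nx, ny), -np.inf)
    np.minimum.at(zmin, (xy[:, 0], xy[:, 1]), points[:, 2])
    np.maximum.at(zmax, (xy[:, 0], xy[:, 1]), points[:, 2])

    # cells whose height span exceeds the clearance are treated as obstacles
    occupied = np.isfinite(zmin)
    obstacle = occupied & ((zmax - zmin) > ground_clearance)

    labels, n_objects = ndimage.label(obstacle)  # 4-connected components by default
    return labels, n_objects
```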
Texture Image Retrieval Using DTCWT-SVD and Local Binary Pattern Features
Dayou Jiang and Jongweon Kim
Page: 1628~1639, Vol. 13, No.6, 2017
10.3745/JIPS.02.0077
Keywords: Dual-Tree Complex Wavelet Transform, Image Retrieval, Local Binary Pattern, SVD, Texture Feature
A combined texture feature extraction approach for texture image retrieval is proposed in this paper. Two kinds of low-level texture features are combined: one is extracted from singular value decomposition (SVD) of dual-tree complex wavelet transform (DTCWT) coefficients, and the other is extracted from multi-scale local binary patterns (LBPs). The fusion of the SVD-based multi-directional wavelet features and the multi-scale LBP features yields a feature vector of low dimension. Comparative experiments are conducted on the Brodatz and Vistex datasets. The experimental results show that the proposed method performs better than existing methods in terms of retrieval accuracy and time complexity.
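A minimal sketch of the feature fusion, assuming the DTCWT magnitude subbands have already been computed with a separate library: the leading singular values of each subband form the wavelet signature, which is concatenated with a multi-scale LBP histogram supplied by the caller. The number of retained singular values is illustrative.

```python
import numpy as np

def svd_subband_features(subbands, k=4):
    """subbands: list of 2-D arrays (e.g., DTCWT magnitude subbands).
    Keep the k largest singular values of each, normalized per subband."""
    feats = []
    for sb in subbands:
        s = np.linalg.svd(np.asarray(sb, dtype=np.float64), compute_uv=False)
        s = s[:k] / (s.sum() + 1e-12)
        feats.append(np.pad(s, (0, max(0, k - s.size))))  # zero-pad tiny subbands
    return np.concatenate(feats)

def fuse_with_lbp(subbands, lbp_histogram, k=4):
    """Concatenate the SVD-based wavelet signature with an LBP histogram."""
    return np.concatenate([svd_subband_features(subbands, k), lbp_histogram])
```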
Detection of Microcalcification Using the Wavelet Based Adaptive Sigmoid Function and Neural Network
Sanjeev Kumar and Mahesh Chandra
Page: 703~715, Vol. 13, No.4, 2017
10.3745/JIPS.01.0007
Keywords: Cascade-Forward Back Propagation Technique, Computer-Aided Diagnosis (CAD), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gray-Level Co-Occurrence Matrix (GLCM), Mammographic Image Analysis Society (MIAS) Database, Modified Sigmoid Function
Mammogram images are sensitive in nature, and even a minor change in the environment affects their quality. Due to the lack of expert radiologists, it is difficult to interpret mammogram images. In this paper an algorithm is proposed for a computer-aided diagnosis system based on a wavelet-based adaptive sigmoid function. The cascade feed-forward back propagation technique has been used for training and testing. Because of the poor contrast in digital mammogram images, it is difficult to process them directly. Thus, the images were first processed using the wavelet-based adaptive sigmoid function, and then the suspicious regions were selected for feature extraction. A combination of texture features and gray-level co-occurrence matrix features was extracted and used for training and testing. The system was trained with 150 images, while a total of 100 mammogram images were used for testing. A classification accuracy of more than 95% was obtained with the proposed method.
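The sketch below illustrates the two processing ingredients named above with plain numpy: a modified sigmoid contrast mapping (the paper's version is wavelet-based and adaptive) and a small gray-level co-occurrence matrix with a few common texture statistics. Offsets, level counts, and gain/cutoff values are illustrative assumptions.

```python
import numpy as np

def sigmoid_enhance(img, cutoff=0.5, gain=10.0):
    """Modified sigmoid contrast mapping on an image rescaled to [0, 1]."""
    x = (img - img.min()) / (np.ptp(img) + 1e-12)
    return 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))

def glcm(gray, levels=16, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one (dx, dy) offset, normalized."""
    q = np.floor((gray - gray.min()) / (np.ptp(gray) + 1e-12) * (levels - 1)).astype(int)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)   # count co-occurring level pairs
    return m / m.sum()

def glcm_features(m):
    """Contrast, energy, and homogeneity of a normalized GLCM."""
    i, j = np.indices(m.shape)
    return {"contrast": float(((i - j) ** 2 * m).sum()),
            "energy": float((m ** 2).sum()),
            "homogeneity": float((m / (1.0 + np.abs(i - j))).sum())}
```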
Color Image Coding Based on Shape-Adaptive All Phase Biorthogonal Transform
Xiaoyan Wang, Chengyou Wang, Xiao Zhou and Zhiqiang Yang
Page: 114~127, Vol. 13, No.1, 2017
10.3745/JIPS.02.0053
Keywords: Color Image Coding, Shape-Adaptive All Phase Biorthogonal Transform (SA-APBT), Color Space Conversion, Chain Code
This paper proposes a color image coding algorithm based on the shape-adaptive all phase biorthogonal transform (SA-APBT). The algorithm is implemented through four procedures: color space conversion, image segmentation, shape coding, and texture coding. The region-of-interest (ROI) and the background area are obtained by image segmentation. Shape coding uses a chain code, and the ROI is texture-coded before the background area. SA-APBT and uniform quantization are adopted in texture coding. Compared with a color image coding algorithm based on the shape-adaptive discrete cosine transform (SA-DCT) at the same bit rates, experimental results on test color images reveal that the objective quality and subjective appearance of the images reconstructed by the proposed algorithm are better, especially at low bit rates. Moreover, the complexity of the proposed algorithm is reduced because of the uniform quantization.
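Shape coding with a chain code can be sketched as follows, assuming the ROI boundary has already been traced into an ordered list of 8-connected pixels; the 8-direction Freeman convention used here (0 = east, then counter-clockwise in 45 degree steps) is a common choice and not necessarily the paper's.

```python
# 8-direction Freeman codes: 0 = east, counter-clockwise, image rows grow downward
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """boundary: ordered list of (row, col) pixels around the ROI, with
    consecutive points 8-connected. Returns the Freeman chain code."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary[:-1], boundary[1:]):
        codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
    return codes

# tiny example: a 2x2 square traced clockwise back to its start
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))   # [0, 6, 4, 2]
```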
Image Restoration and Object Removal Using Prioritized Adaptive Patch-Based Inpainting in a Wavelet Domain
Rajesh P. Borole and Sanjiv V. Bonde
Page: 1183~1202, Vol. 13, No.5, 2017
10.3745/JIPS.02.0031
Keywords: Image Inpainting, Object Removal, Region Filling, Texture and Structure Propagation, Wavelet Inpainting
Image restoration has mostly been carried out by texture synthesis for large regions and by inpainting algorithms for small cracks in images. In this paper, we propose a new approach that allows the simultaneous fill-in of different structures and textures by processing in a wavelet domain. A combination of structure inpainting and patch-based texture synthesis, known as patch-based inpainting, is carried out to fill and update the target region. The wavelet transform is used for its very good multiresolution capabilities. The proposed algorithm uses the wavelet-domain subbands to resolve the structure and texture components into a smooth approximation and high-frequency structural details. The subbands are processed separately by prioritized patch-based inpainting with isophote-energy-driven texture synthesis at its core. The algorithm automatically estimates the wavelet coefficients of the target regions of the various subbands using optimized patches from the surrounding DWT coefficients. The suggested improvements drastically increase execution speed over the existing algorithm, and the proposed patch optimization strategy improves the quality of the fill. The fill-in gives higher priority to structures and isophotes arriving at target boundaries. The effectiveness of the algorithm is demonstrated on natural and textured images with varying textural complexity.
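A minimal sketch of the prioritized filling order, following the common exemplar-based formulation in which the priority of a front pixel is the product of a confidence term and an isophote-driven data term; the paper applies this per wavelet subband, and the per-pixel confidence and normalization constant below are simplifying assumptions.

```python
import numpy as np

def patch_priorities(image, mask, confidence, alpha=255.0):
    """Priority P(p) = C(p) * D(p) for pixels on the fill front.
    mask: True where the region still needs filling. confidence: running
    confidence map, 1 outside the target region and 0 inside initially."""
    gy, gx = np.gradient(image.astype(np.float64))
    iso_y, iso_x = gx, -gy                      # isophote = gradient rotated 90 degrees

    my, mx = np.gradient(mask.astype(np.float64))
    norm = np.hypot(my, mx) + 1e-12             # normal direction of the fill front
    ny, nx = my / norm, mx / norm

    data = np.abs(iso_y * ny + iso_x * nx) / alpha
    front = mask & (np.hypot(my, mx) > 0)       # pixels on the fill front
    # note: the classic formulation averages confidence over the patch;
    # a per-pixel confidence is used here for brevity
    return np.where(front, confidence * data, 0.0)
```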
Fragile Watermarking Based on LBP for Blind Tamper Detection in Images
Heng Zhang, Chengyou Wang and Xiao Zhou
Page: 385~399, Vol. 13, No.2, 2017
10.3745/JIPS.03.0070
Keywords: Fragile Watermarking, Local Binary Pattern (LBP), Least Significant Bit (LSB), Tamper Detection and Localization
Nowadays, with the development of signal processing techniques, protecting the integrity and authenticity of images has become a topic of great concern. A blind image authentication technology with high tamper detection accuracy under different common attacks is urgently needed. In this paper, an improved fragile watermarking method based on the local binary pattern (LBP) is presented for blind tamper localization in images. In this method, a binary watermark is generated by the LBP operator, which is often utilized in face identification and texture analysis. To guarantee the safety of the proposed algorithm, an Arnold transform and a logistic map are used to scramble the authentication watermark. Then, the least significant bits (LSBs) of the original pixels are substituted by the encrypted watermark. Since the authentication data is constructed from the image itself, no original image is needed for tamper detection. The LBP map of the watermarked image is compared with the extracted authentication data to determine whether the image has been tampered with. In comparison with other state-of-the-art schemes, various experiments prove that the proposed algorithm achieves better performance in forgery detection and localization under malicious attacks.
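A minimal sketch of the embedding side under stated assumptions: the binary watermark is scrambled with a logistic-map keystream and substituted into the least significant bits. The paper additionally applies an Arnold transform and derives the watermark from the image's own LBP map; the map parameters and seed below are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, mu=3.99):
    """Binary keystream from the logistic map x <- mu * x * (1 - x)."""
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def embed_lsb(image, watermark_bits, x0=0.7):
    """XOR-scramble the watermark with the keystream, then substitute it
    into the least significant bit of each pixel (row-major order)."""
    img = image.astype(np.uint8).copy()
    flat = img.ravel()                                   # view into img
    wm = np.asarray(watermark_bits, dtype=np.uint8).ravel()
    scrambled = wm ^ logistic_keystream(wm.size, x0=x0)
    flat[:wm.size] = (flat[:wm.size] & 0xFE) | scrambled
    return img
```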
Content-Based Image Retrieval Using Combined Color and Texture Features Extracted by Multi-resolution Multi-direction Filtering
Hee-Hyung Bu, Nam-Chul Kim, Chae-Joo Moon and Jong-Hwa Kim
Page: 464~475, Vol. 13, No.3, 2017
10.3745/JIPS.02.0060
Keywords: Color and Texture Feature, Content-Based Image Retrieval, HSV Color Space, Multi-resolution Multi-direction Filtering
In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, separates low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The HSV color space is used because its hue, saturation, and value components are easily analyzed and show characteristics similar to the human visual system. The experiments compare precision versus recall of retrieval and feature vector dimensions. The test images include the Corel DB and VisTex DB; Corel_MR DB and VisTex_MR DB, which are transformed from the two original DBs to contain multi-resolution images; and Corel_MD DB and VisTex_MD DB, transformed from the two DBs to contain multi-direction images. According to the experimental results, the proposed method improves upon existing methods in terms of precision and recall of retrieval, and also reduces feature vector dimensions.
Fire Detection Using Multi-Channel Information and Gray Level Co-occurrence Matrix Image Features
Jae-Hyun Jun, Min-Jun Kim, Yong-Suk Jang and Sung-Ho Kim
Page: 590~598, Vol. 13, No.3, 2017
10.3745/JIPS.02.0062
Keywords: Color Features, Fire Detection, Texture Features
Recently, there has been an increase in the number of hazardous events such as fire accidents. Monitoring systems that rely on human operators can degrade when the operators are fatigued or tense. Fire alarm boxes are easy to use; however, they are frequently triggered by external factors such as temperature and humidity. We therefore propose an approach to fire detection based on image processing. In this paper, we propose a fire detection method using multi-channel information and gray-level co-occurrence matrix (GLCM) image features. The multi-channel information consists of the RGB, YCbCr, and HSV color spaces. Flame color and smoke texture information are used to detect the flames and smoke, respectively. The experimental results show that the proposed method performs better than the previous method in terms of fire detection accuracy.
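An illustrative flame-candidate rule combining RGB and YCbCr information; the conversion follows the ITU-R BT.601 definition, while the specific thresholds and conditions are assumptions rather than the paper's rule, and the GLCM-based smoke texture step is omitted.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 conversion; rgb is a float array in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def flame_candidate_mask(rgb, y_margin=20.0):
    """Simple flame-color rule: reddish pixels (R >= G >= B) that are also
    brighter than average and whose Cr exceeds Cb."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    return (r >= g) & (g >= b) & (y > y.mean() + y_margin) & (cr > cb)
```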
Content-based Image Retrieval Using Texture Features Extracted from Local Energy and Local Correlation of Gabor Transformed Images
Hee-Hyung Bu, Nam-Chul Kim, Bae-Ho Lee and Sung-Ho Kim
Page: 1372~1381, Vol. 13, No.5, 2017
10.3745/JIPS.02.0075
Keywords: Content-based Image Retrieval, Gabor Transformation, Local Energy, Local Correlation, Texture Feature
In this paper, a texture feature extraction method using the local energy and local correlation of Gabor transformed images is proposed and applied to an image retrieval system. The Gabor wavelet is known to be similar to the response of the human visual system, and the outputs of the Gabor transformation are robust to variations in object size and illumination. Due to such advantages, it has been actively studied in various fields such as image retrieval, classification, and analysis. To fully exploit the strengths of the Gabor wavelet, local energy and local correlation features are extracted from Gabor transformed images and then applied to an image retrieval system. Experiments are conducted to compare the performance of the proposed method with those of the conventional Gabor method and the popular rotation-invariant uniform local binary pattern (RULBP) method in terms of precision versus recall. The Mahalanobis distance is used to measure the similarity between a query image and a database (DB) image. Experimental results on the Corel DB and VisTex DB show that the proposed method is superior to the conventional Gabor method. The proposed method also yields precision and recall that are, on average, 6.58% and 3.66% higher on the Corel DB and 4.87% and 3.37% higher on the VisTex DB than the popular RULBP method.
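A minimal sketch of the local energy feature, assuming a single real Gabor kernel and a fixed averaging window: the image is filtered and the squared response is averaged over a sliding window. Kernel size, wavelength, and window size are illustrative; local correlation would be computed analogously between responses at neighboring scales or orientations.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, wavelength=8.0):
    """Real part of a 2-D Gabor kernel with orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def local_energy(gray, theta=0.0, window=9):
    """Mean squared Gabor response over a sliding window."""
    resp = ndimage.convolve(gray.astype(np.float64), gabor_kernel(theta=theta))
    return ndimage.uniform_filter(resp ** 2, size=window)
```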
Feasibility Study of a Distributed and Parallel Environment for Implementing the Standard Version of AAM Model
Moulkheir Naoui, Saïd Mahmoudi and Ghalem Belalem
Page: 149~168, Vol. 12, No.1, 2016
10.3745/JIPS.02.0039
Keywords: Active Appearance Model, Data Parallelism, Deformable Model, Distributed Image Processing, Parallel Image Processing, Segmentation
The Active Appearance Model (AAM) is a class of deformable models that, in the segmentation process, integrates a priori knowledge of the shape, texture, and deformation of the structures studied. In its sequential form this model is computationally intensive and operates on large data sets. This paper presents another framework for implementing the standard version of the AAM model. We suggest a distributed and parallel approach justified by the characteristics of the model and its potential. We introduce a schema for representing the overall model and study the operations that can be parallelized. This approach is intended to exploit the benefits gained in the area of advanced image processing.
Content Based Dynamic Texture Analysis and Synthesis Based on SPIHT with GPU
Premanand P Ghadekar and Nilkanth B Chopade
Page: 46~56, Vol. 12, No.1, 2016
10.3745/JIPS.02.0009
Keywords: Discrete Wavelet Transform, Dynamic Texture, GPU, SPIHT, SVD
Dynamic textures are videos that exhibit a stationary property with respect to time (i.e., they have patterns that repeat themselves over a large number of frames). These patterns can easily be tracked by a linear dynamic system. In this paper, a model t...
Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion
Chaimae Anibou, Mohammed Nabil Saidi and Driss Aboutajdine
Page: 421~437, Vol. 11, No.3, 2015
10.3745/JIPS.02.0028
Keywords: Discrete Wavelet Transform, Feature Extraction, Fuzzy Set Theory, Information Fusion, Probability Theory, Segmentation, Supervised Classification
This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method used is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. We first applied decision-level fusion by combining decisions made by the SVM classifier within a sliding window. In the second strategy, fuzzy set theory and rules based on probability theory were used to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images and showed that the proposed data fusion method improves classification accuracy compared to applying an SVM classifier alone. The results revealed that the overall accuracy of SVM classification of textured images is 88%, while our fusion methodology obtains an accuracy of up to 96%, depending on the size of the database.
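A minimal sketch of DWT-based texture features of the kind fed to the SVM: a one-level Haar transform implemented directly in numpy, with the energy and standard deviation of each detail subband collected over two levels. The paper's wavelet, decomposition depth, and window size may differ.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH)."""
    x = x[:x.shape[0] // 2 * 2, :x.shape[1] // 2 * 2].astype(np.float64)
    a, b = x[0::2, :], x[1::2, :]                       # row pass
    lo, hi = (a + b) / 2.0, (a - b) / 2.0
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2.0, (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2.0, (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def dwt_texture_features(patch, levels=2):
    """Energy and standard deviation of every detail subband per level."""
    feats, current = [], patch
    for _ in range(levels):
        ll, lh, hl, hh = haar_dwt2(current)
        for sb in (lh, hl, hh):
            feats += [float(np.mean(np.abs(sb))), float(np.std(sb))]
        current = ll
    return np.array(feats)
```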
A TRUS Prostate Segmentation using Gabor Texture Features and Snake-like Contour
Sung Gyun Kim and Yeong Geon Seo
Page: 103~116, Vol. 9, No.1, 2013
10.3745/JIPS.2013.9.1.103
Keywords: Gabor Filter Bank, Support Vector Machines, Prostate Segmentation
Prostate cancer is one of the most frequent cancers in men and a major cause of mortality in most countries. In many diagnostic and treatment procedures for prostate disease, accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images is required. This is a challenging and difficult task due to weak prostate boundaries, speckle noise, and the short range of gray levels. In this paper, a method for automatic prostate segmentation in TRUS images using Gabor feature extraction and a snake-like contour is presented. The method involves preprocessing, Gabor feature extraction, training, and prostate segmentation. Speckle reduction in the preprocessing step is achieved using a stick filter, and a top-hat transform is applied to smooth the contour. A Gabor filter bank is implemented to extract rotation-invariant texture features, and a support vector machine (SVM) is used in the training step to learn prostate and non-prostate features. Finally, the prostate boundary is extracted by the snake-like contour algorithm. A number of experiments were conducted to validate this method, and the results show that the new algorithm extracts the prostate boundary with less than 10.2% error relative to the boundary provided manually by experts.
Region-Based Facial Expression Recognition in Still Images
Gawed M. Nagi, Rahmita Rahmat, Fatimah Khalid and Muhamad Taufik
Page: 173~188, Vol. 9, No.1, 2013
10.3745/JIPS.2013.9.1.173
Keywords: Facial Expression Recognition (FER), Facial Features Detection, Facial Features Extraction, Cascade Classifier, LBP, One-Vs-Rest SVM
In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, a popular multi-class classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
Interactive Semantic Image Retrieval
Pushpa B. Patil and Manesh B. Kokare
Page: 349~364, Vol. 9, No.3, 2013
10.3745/JIPS.2013.9.3.349
Keywords: Content-based Image Retrieval (CBIR), Relevance Feedback (RF), Rotated Complex Wavelet Filters (RCWFs), Dual Tree Complex Wavelet, Image Retrieval
The big challenge in current content-based image retrieval systems is to reduce the semantic gap between low-level features and high-level concepts. In this paper, we propose a novel framework for efficient image retrieval that significantly improves retrieval results as a means of addressing this problem. In the proposed method, we first extract a strong set of image features by jointly using the dual-tree rotated complex wavelet filters (DT-RCWF) and the dual-tree complex wavelet transform (DT-CWT), which obtains features in 12 different directions. Second, we present a relevance feedback (RF) framework for efficient image retrieval employing a support vector machine (SVM), which learns the semantic relationships among images from knowledge based on user interaction. Extensive experiments show a significant improvement in retrieval performance with the proposed SVM-RF method compared with retrieval without RF. The proposed method improves retrieval performance from 78.5% to 92.29% in terms of retrieval accuracy on the texture database, and from 57.20% to 94.2% in terms of precision on the Corel image database, in a much lower number of iterations.
Discriminatory Projection of Camouflaged Texture Through Line Masks
Nagappa Bhajantri, Pradeep Kumar R and Nagabhushan P
Page: 660~677, Vol. 9, No.4, 2013
10.3745/JIPS.2013.9.4.660
Keywords: Camouflage, Line mask, Enhancement, Texture analysis, Distribution pattern, Histogram, Regression line
The blending of a defective texture with the ambient texture results in camouflage. The gray-value or color distribution pattern of camouflaged images fails to reflect considerable deviations between the camouflaged object and the background it blends into, which demands improved strategies for texture analysis. In this research, we propose an initial enhancement of the image using line masks, which can result in better discrimination of the camouflaged portion. Finally, the gray-value distribution patterns are analyzed in the enhanced image to locate the camouflaged portions.
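The line-mask enhancement can be sketched with the classic 3x3 directional line-detection kernels; combining the four responses by their maximum absolute value is one simple choice and an assumption rather than the paper's exact strategy.

```python
import numpy as np
from scipy import ndimage

# Classic 3x3 line-detection masks: horizontal, vertical, +45 and -45 degrees
LINE_MASKS = [
    np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], dtype=float),
    np.array([[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]], dtype=float),
    np.array([[-1, -1, 2], [-1, 2, -1], [2, -1, -1]], dtype=float),
    np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]], dtype=float),
]

def line_mask_enhance(gray):
    """Maximum absolute response over the four directional line masks."""
    g = gray.astype(np.float64)
    responses = [np.abs(ndimage.convolve(g, m)) for m in LINE_MASKS]
    return np.max(np.stack(responses), axis=0)
```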
Automatic Detection of Texture-defects using Texture-periodicity and Jensen-Shannon Divergence
V. Asha, N.U. Bhajantri and P. Nagabhushan
Page: 359~374, Vol. 8, No.2, 2012
10.3745/JIPS.2012.8.2.359
Keywords: Periodicity, Jensen-Shannon Divergence, Cluster, Defect
In this paper, we propose a new machine vision algorithm for automatic defect detection on patterned textures with the help of texture periodicity and the Jensen-Shannon Divergence, which is a symmetrized and smoothed version of the Kullback-Leibler Divergence. Input defective images are split into several blocks of the same size as the periodic unit of the image. Based on the histograms of the periodic blocks, Jensen-Shannon Divergence measures are calculated for each periodic block with respect to itself and all other periodic blocks, and a dissimilarity matrix is obtained. This dissimilarity matrix is used to obtain a matrix of true metrics, which is later subjected to Ward's hierarchical clustering to automatically identify defective and defect-free blocks. Results from experiments on real fabric images belonging to three major wallpaper groups, namely pmm, p2, and p4m, with defects show that the proposed method is robust in finding fabric defects with very high success rates without any human intervention.
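A minimal sketch of the divergence computation, assuming the image has already been split into periodic blocks and each block reduced to a normalized histogram: the Jensen-Shannon Divergence (the symmetrized, smoothed Kullback-Leibler Divergence) is evaluated for every block pair to form the dissimilarity matrix; clustering is left to a separate step.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) between two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def jensen_shannon(p, q):
    """JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m), with m the average of p and q."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    m = 0.5 * (p / p.sum() + q / q.sum())
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def dissimilarity_matrix(block_histograms):
    """Pairwise JSD between the histograms of all periodic blocks."""
    n = len(block_histograms)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = jensen_shannon(block_histograms[i], block_histograms[j])
    return d
```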
Texture Comparison with an Orientation Matching Scheme
Nguyen Cao Truong Hai, Do-Yeon Kim and Hyuk-Ro Park
Page: 389~398, Vol. 8, No.3, 2012
10.3745/JIPS.2012.8.3.389
Keywords: Orientation Matching, Texture Analysis, Texture Comparison, K-means Clustering
Texture is an important visual feature for image analysis, and many approaches have been proposed to model and analyze texture features. Although these approaches contribute significantly to various image-based applications, most of them are sensitive to changes in the scale and orientation of the texture pattern. Because textures frequently vary in scale and orientation, this easily leads to pattern mismatching if features are compared without considering the scale and/or orientation of the textures. This paper suggests an Orientation Matching Scheme (OMS) to ease the problem of mismatching rotated patterns. In OMS, a pair of texture features is compared at various orientations to identify the best-matched direction for comparison. A database of rotated texture images was generated for the experiments. A synthetic retrieval experiment was conducted on the generated database to examine the performance of the proposed scheme. We also applied OMS to the similarity computation in a K-means clustering algorithm. The purpose of using K-means is to examine the scheme exhaustively under unpromising conditions, where initial seeds are randomly selected and the algorithm works heuristically. Results from both types of experiments show that the proposed OMS helps improve performance when dealing with rotated patterns.
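The orientation matching idea can be sketched for orientation-binned feature vectors: one vector is cyclically shifted over all orientations and the smallest distance is kept. The Euclidean distance and the assumption that bins correspond to equally spaced orientations are illustrative choices.

```python
import numpy as np

def orientation_matching_distance(f1, f2):
    """Compare two orientation-binned feature vectors at every cyclic shift
    of f2 and keep the best (smallest) Euclidean distance."""
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    dists = [np.linalg.norm(f1 - np.roll(f2, k)) for k in range(f2.size)]
    best = int(np.argmin(dists))
    return dists[best], best      # distance and the matched rotation offset
```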
Iris Recognition Using Ridgelets
Lenina Birgale and Manesh Kokare
Page: 445~458, Vol. 8, No.3, 2012
10.3745/JIPS.2012.8.3.445
Keywords: Ridgelets, Texture, Wavelets, Biometrics, Features, Database
Image feature extraction is one of the basic tasks in biometric analysis. This paper presents the novel concept of applying ridgelets to iris recognition systems. Ridgelet transforms are a combination of Radon transforms and wavelet transforms, and they are suitable for extracting the abundant textural data present in an iris. The technique proposed here uses ridgelets to form an iris signature and to represent the iris. This paper contributes towards creating an improved iris recognition system: the feature vector is reduced to a size of 1x4, the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are reduced, and the accuracy is increased. The proposed method also avoids the iris normalization process that is traditionally used in iris recognition systems. Experimental results indicate that the proposed method achieves an accuracy of 99.82%, an FAR of 0.1309%, and an FRR of 0.0434%.
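A minimal sketch of a ridgelet-style signature, assuming Radon projections approximated by rotating the image and summing columns, followed by a one-level 1-D Haar transform of each projection; the angle set and the mean-absolute-detail statistic are illustrative, and the paper's exact 1x4 feature construction may differ.

```python
import numpy as np
from scipy import ndimage

def radon_projections(gray, angles=range(0, 180, 10)):
    """Parallel-beam projections: rotate the image and sum along columns."""
    g = gray.astype(np.float64)
    return [ndimage.rotate(g, a, reshape=False, order=1).sum(axis=0) for a in angles]

def haar_1d(signal):
    """One-level 1-D Haar transform: (approximation, detail)."""
    s = np.asarray(signal, dtype=np.float64)
    s = s[:s.size // 2 * 2]
    return (s[0::2] + s[1::2]) / 2.0, (s[0::2] - s[1::2]) / 2.0

def ridgelet_signature(gray):
    """Mean absolute detail coefficient per projection angle."""
    feats = []
    for proj in radon_projections(gray):
        _, detail = haar_1d(proj)
        feats.append(float(np.mean(np.abs(detail))))
    return np.array(feats)
```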