Digital Library
Vol. 9, No. 1, Mar. 2013
Menaouer Brahami, Baghdad Atmani, Nada Matta
Vol. 9, No. 1, pp. 1-30, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.001
Keywords: Knowledge Management, Knowledge Mapping (Knowledge Cartography), Knowledge Representation, Boolean Modeling, Cellular Machine, Data Mining, Boolean Inference Engine
Abstract: The capitalization of know-how, knowledge management, and the control of the constantly growing mass of information have become the new strategic challenge for organizations that aim to capture their entire wealth of knowledge (tacit and explicit). Knowledge mapping is thus a means of (cognitive) navigation for accessing the strategic knowledge heritage of an organization. In this paper, we present a new mapping approach based on the Boolean modeling of critical domain knowledge and on the use of different data sources via data mining techniques, in order to improve the process of acquiring explicit knowledge. To evaluate our approach, we initiated a mapping process guided by machine learning that operates in two stages: data mining and automatic mapping. Data mining is initially run as an induction over Boolean case studies (explicit knowledge). The mapping rules are then used to automatically improve the Boolean model of the mapping of critical knowledge.
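To make the Boolean, rule-driven side of the approach concrete, here is a minimal sketch of forward chaining over Boolean rules. It is illustrative only: the rule names and facts are hypothetical, and the paper's cellular Boolean inference engine is considerably more elaborate than this loop.

```python
# Minimal sketch of forward chaining over Boolean rules (illustrative only;
# the paper's cellular Boolean inference engine is more elaborate).
def forward_chain(facts, rules):
    """facts: set of true propositions; rules: list of (premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)   # fire the rule
                changed = True
    return derived

# Hypothetical example: rules induced from case studies tag critical knowledge for mapping.
rules = [({"expert_leaving", "undocumented_process"}, "critical_knowledge"),
         ({"critical_knowledge", "single_holder"}, "map_priority_high")]
print(forward_chain({"expert_leaving", "undocumented_process", "single_holder"}, rules))
```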
Erdenetuya Namsrai, Tsendsuren Munkhdalai, Meijing Li, Jung-Hoon Shin, Oyun-Erdene Namsrai, Keun Ho Ryu
Vol. 9, No. 1, pp. 31-40, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.031
Keywords: Data Mining, Ensemble Method, Feature Selection, Arrhythmia Classification
Abstract: In this paper, a novel method is proposed to build an ensemble of classifiers by using a feature selection schema. The feature selection schema identifies the best feature sets that affect arrhythmia classification. First, a number of feature subsets are extracted by applying the feature selection schema to the original dataset. Classification models are then built by using each feature subset. Finally, we combine the classification models by adopting a voting approach to form a classification ensemble. The voting approach in our method involves both the classification error rate and the feature selection rate to calculate the score of each classifier in the ensemble. In our method, the feature selection rate depends on the order in which the feature subsets are extracted. In the experiment, we applied our method to an arrhythmia dataset and generated the top three disjoint feature sets. We then built three classifiers based on these feature subsets and formed the classifier ensemble by using the voting approach. Our method can improve classification accuracy in high-dimensional datasets. The performance of each classifier and the performance of their ensemble were higher than the performance of a classifier based on the whole feature space of the dataset. The classification performance was improved and a more stable classification model could be constructed with the proposed approach.
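The sketch below illustrates the general shape of such an ensemble: disjoint feature subsets are extracted in order, one classifier is trained per subset, and vote weights combine each member's error rate with a rank-based feature selection rate. The weighting formula, the base learner, and the toy data are assumptions for illustration, not the paper's exact scoring.

```python
# Illustrative sketch (not the authors' exact scoring): an ensemble whose vote
# weights combine each member's error rate with a rank-based feature selection rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=60, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members, remaining = [], list(range(X.shape[1]))
for rank in range(3):                                   # three disjoint feature subsets
    sel = SelectKBest(f_classif, k=10).fit(X_tr[:, remaining], y_tr)
    subset = [remaining[i] for i in np.argsort(sel.scores_)[::-1][:10]]
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, subset], y_tr)
    err = 1.0 - clf.score(X_te[:, subset], y_te)        # classification error rate
    fs_rate = 1.0 / (rank + 1)                          # assumed: earlier subsets weigh more
    members.append((clf, subset, (1.0 - err) * fs_rate))
    remaining = [f for f in remaining if f not in subset]

# Weighted vote over the three members.
votes = sum(w * clf.predict_proba(X_te[:, s]) for clf, s, w in members)
print("ensemble accuracy:", (votes.argmax(axis=1) == y_te).mean())
```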
Do-keun Kwon, Ki hyun Chung, Kyunghee Choi
Vol. 9, No. 1, pp. 41-52, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.041
Keywords: Zigbee, Low Power Protocol, Beacon, Power Consumption
Abstract: One of the obstacles preventing the Zigbee protocol from being widely used is the excessive power consumption of Zigbee devices in applications with low bandwidth and low power requirements. This paper proposes a protocol that resolves the power efficiency problem. The proposed protocol reduces the power consumption of Zigbee devices in beacon-enabled networks without increasing the time taken by Zigbee peripherals to communicate with their coordinator. The proposed protocol utilizes a beacon control mechanism called a "sleep pattern," which is updated based on previous event statistics and determines exactly when Zigbee peripherals wake up or sleep. A simulation of the proposed protocol using realistic parameters and an experiment using commercial products yielded similar results, demonstrating that the protocol may be a solution for reducing the power consumption of Zigbee devices.
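To convey the idea of a statistics-driven sleep pattern, here is a rough sketch in which a peripheral tracks how often each beacon slot carried traffic and sleeps through historically idle slots. The slot count, threshold, and history format are invented for illustration; the paper defines the mechanism at the beacon/MAC level.

```python
# Hypothetical sketch of a statistics-driven sleep pattern: a peripheral tracks how
# often each beacon slot carried an event and sleeps through historically idle slots.
# Slot count and wake threshold are illustrative, not taken from the paper.

def update_sleep_pattern(event_history, n_slots=8, wake_ratio=0.1):
    """event_history: one list of n_slots event flags per past beacon interval.
    Returns True (stay awake) / False (sleep) per slot."""
    intervals = max(len(event_history), 1)
    activity = [sum(interval[s] for interval in event_history) / intervals
                for s in range(n_slots)]
    return [rate >= wake_ratio for rate in activity]

# Example: slot 2 was busy in every past interval, the rest were idle.
history = [[0, 0, 1, 0, 0, 0, 0, 0] for _ in range(20)]
print(update_sleep_pattern(history))   # wake only for slot 2
```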
Wei-Ho Chung, Sunil Kumar, Seethal Paluri, Santosh Nagaraj, Annamalai Annamalai Jr., John D. Matyjas
Vol. 9, No. 1, pp. 53-68, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.053
Keywords: Wireless Transmission, Unequal Error Protection (UEP), Rate-Compatible Punctured Convolutional (RCPC) Code, Hierarchical Modulation, H.264/AVC Video Coding
Abstract: We investigate rate-compatible punctured convolutional (RCPC) codes concatenated with hierarchical QAM for designing a cross-layer unequal error protection scheme for H.264 coded sequences. We first divide the H.264 encoded video slices into three priority classes based on their relative importance. We investigate the system constraints and propose an optimization formulation to compute the optimal parameters of the proposed system for the given source significance information. An upper bound on the significance-weighted bit error rate of the proposed system is derived as a function of system parameters, including the code rate and the geometry of the constellation. An example is given with design rules for H.264 video communications, and a 3.5-4 dB PSNR improvement over existing RCPC-based techniques on AWGN wireless channels is shown through simulations.
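As a rough numerical illustration of what a significance-weighted bit error rate can look like, the snippet below averages per-class BERs with weights reflecting source significance and class sizes. The exact weighting used in the paper may differ, and all numbers here are made up.

```python
# Hedged illustration of a significance-weighted bit error rate: per-class BERs are
# averaged with weights reflecting source significance and class sizes. The exact
# weighting in the paper may differ; the numbers below are hypothetical.
def significance_weighted_ber(bers, weights, bits_per_class):
    num = sum(w * b * n for w, b, n in zip(weights, bers, bits_per_class))
    den = sum(w * n for w, n in zip(weights, bits_per_class))
    return num / den

# Three H.264 priority classes: high-priority slices get the strongest protection.
print(significance_weighted_ber(bers=[1e-5, 1e-4, 1e-3],
                                weights=[10, 3, 1],
                                bits_per_class=[2e5, 5e5, 1e6]))
```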
Seiyoung Lee, Hwan-Seung Yong
Vol. 9, No. 1, pp. 69-88, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.069
Keywords: Agile Methods, Small Software Projects, Industrial Case Study
Abstract: Agile methods are highly attractive for small projects, but no agile method works well as a standalone system; some adaptation or customization is always required. In this paper, the Agile Framework for Small Projects (AFSP) was applied to four industry cases. The AFSP provides a structured way for software organizations to adopt agile practices and evaluate the results. The framework includes an extended Scrum process and agile practices that are based on agility and critical success factors in agile software projects and are selected from Scrum, XP, FDD, DSDM, and Crystal Clear. The AFSP also helps software managers and developers use agile engineering techniques effectively throughout the software development lifecycle. The case study projects were evaluated on the basis of risk-based agility factors, the agility of the adopted practices, agile adoption levels, and the degree of agile project success. The analysis of the results showed that the framework used in the aforementioned cases was effective.
Om Prakash Verma, Shweta Singh
Vol. 9, No. 1, pp. 89-102, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.089
Keywords: Impulse Noise, Decision Boundaries, Color Components, Fuzzy Filter, Membership Function
Abstract: This paper presents a fuzzy impulse noise filter for both gray scale and color images. The proposed approach is based on the technique of boundary discriminative noise detection. The algorithm is a multi-step process comprising detection, filtering, and color correction stages. The detection procedure classifies pixels as corrupted or uncorrupted by computing decision boundaries, which are fuzzified to improve the outputs obtained. In the case of color images, a correction term is added by examining the interactions between the color components for further improvement. Quantitative and qualitative analysis, performed on standard gray scale and color images, shows improved performance of the proposed technique over existing state-of-the-art algorithms in terms of Peak Signal to Noise Ratio (PSNR) and color difference metrics. The analysis proves the applicability of the proposed algorithm to random-valued impulse noise.
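The sketch below illustrates the fuzzification idea in its simplest form: instead of a hard corrupted/uncorrupted decision, the deviation of a pixel from its local median is mapped to a membership degree between two boundaries, and the restoration is weighted by that degree. The thresholds and the median-based statistic are assumptions for illustration, not the paper's boundary discriminative detector.

```python
# Illustrative sketch (thresholds are invented, not the paper's): grade a pixel's
# degree of corruption by fuzzifying its deviation from the local median with a
# simple trapezoidal membership function instead of a hard decision boundary.
import numpy as np

def corruption_membership(window, b1=20.0, b2=60.0):
    """window: odd-sized 2D neighborhood; returns a membership degree in [0, 1]."""
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    diff = abs(float(center) - float(np.median(window)))
    if diff <= b1:
        return 0.0                       # clearly uncorrupted
    if diff >= b2:
        return 1.0                       # clearly corrupted
    return (diff - b1) / (b2 - b1)       # fuzzy region between the two boundaries

window = np.array([[12, 14, 13], [11, 250, 12], [13, 12, 14]], dtype=np.uint8)
mu = corruption_membership(window)
# Restore proportionally to the membership degree (weighted toward the median).
restored = (1 - mu) * window[1, 1] + mu * np.median(window)
print(mu, restored)
```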
Sung Gyun Kim, Yeong Geon Seo
Vol. 9, No. 1, pp. 103-116, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.103
Keywords: Gabor Filter Bank, Support Vector Machines, Prostate Segmentation
Abstract: Prostate cancer is one of the most frequent cancers in men and is a major cause of mortality in most countries. In many diagnostic and treatment procedures for prostate disease, accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images is required. This is a challenging and difficult task due to weak prostate boundaries, speckle noise, and the short range of gray levels. In this paper, a method for automatic prostate segmentation in TRUS images using Gabor feature extraction and a snake-like contour is presented. The method involves preprocessing, Gabor feature extraction, training, and prostate segmentation. Speckle reduction in the preprocessing step is achieved with a stick filter, and a top-hat transform is applied to smooth the contour. A Gabor filter bank is implemented for the extraction of rotation-invariant texture features. A support vector machine (SVM) is used in the training step to learn the features of prostate and non-prostate regions. Finally, the boundary of the prostate is extracted by the snake-like contour algorithm. A number of experiments were conducted to validate this method, and the results showed that the new algorithm extracted the prostate boundary with a difference of less than 10.2% relative to the boundary provided manually by experts.
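To illustrate the texture-classification core of such a pipeline, the sketch below averages Gabor filter responses over orientations (for approximate rotation invariance) and feeds the resulting features to an RBF SVM. The filter parameters and the synthetic patches are assumptions; the paper's stick filtering, top-hat smoothing, and snake-like contour steps are omitted.

```python
# Sketch of rotation-aware Gabor texture features feeding an SVM (parameters and the
# toy data are illustrative; the paper's stick filter and snake contour are omitted).
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(patch, frequencies=(0.1, 0.2), n_orientations=4):
    """Mean magnitude response per frequency, averaged over orientations
    so the feature is (approximately) rotation invariant."""
    feats = []
    for f in frequencies:
        responses = []
        for k in range(n_orientations):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orientations)
            responses.append(np.mean(np.hypot(real, imag)))
        feats.append(np.mean(responses))
    return feats

rng = np.random.default_rng(0)
smooth = [rng.normal(0.5, 0.01, (32, 32)) for _ in range(10)]    # "prostate-like" patches
textured = [rng.uniform(0, 1, (32, 32)) for _ in range(10)]      # "non-prostate" patches
X = [gabor_features(p) for p in smooth + textured]
y = [1] * 10 + [0] * 10
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([gabor_features(rng.uniform(0, 1, (32, 32)))]))
```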
Divakar Yadav, Sonia Sánchez-Cuadrado, Jorge Morato
Vol. 9, No. 1, pp. 117-140, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.117
Keywords: OCR, Pre-Processing, Segmentation, Feature Vector, Classification, Artificial Neural Network (ANN)
Abstract: Hindi is the most widely spoken language in India, with more than 300 million speakers. As there is no separation between the characters of texts written in Hindi, as there is in English, the Optical Character Recognition (OCR) systems developed for the Hindi language have a very poor recognition rate. In this paper, we propose an OCR for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve efficiency. One of the major reasons for the poor recognition rate is error in character segmentation. The presence of touching characters in scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally, classification and recognition are the major steps followed by a general OCR system. The preprocessing tasks considered in this paper are the conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. In this work, three feature extraction techniques (histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing) have been used to improve the rate of recognition. These feature extraction techniques are powerful enough to extract features of even distorted characters/symbols. For the development of the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, and a correct recognition rate of approximately 90% is achieved.
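As a minimal illustration of projection-based features, the sketch below computes row and column projection profiles of a binarized symbol and concatenates them into one feature vector. The toy glyph is hypothetical, and the paper's mean-distance projection and vertical zero crossing features are not reproduced here.

```python
# Minimal sketch of projection-histogram features for a binarized symbol image
# (illustrative; the paper also uses mean-distance projections and vertical zero crossing).
import numpy as np

def projection_features(symbol):
    """symbol: 2D binary array (1 = ink). Returns row- and column-projection
    profiles normalized by the symbol size, concatenated into one feature vector."""
    rows = symbol.sum(axis=1) / symbol.shape[1]   # horizontal projection profile
    cols = symbol.sum(axis=0) / symbol.shape[0]   # vertical projection profile
    return np.concatenate([rows, cols])

# Hypothetical 5x5 glyph; a back-propagation ANN would be trained on such vectors.
glyph = np.array([[0, 1, 1, 1, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0]])
print(projection_features(glyph))
```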
Deepak Ghimire, Joonwhoan Lee
Vol. 9, No. 1, pp. 141-156, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.141
Keywords: Face Detection, Image Enhancement, Skin Tone Percentage Index, Canny Edge, Facial Features
Abstract: In this paper, we propose a method to detect human faces in color images. Many existing systems use a window-based classifier that scans the entire image for the presence of a human face, and such systems suffer from scale variation, pose variation, illumination changes, etc. Here, we propose a lighting-insensitive face detection method based on the edge and skin tone information of the input color image. First, image enhancement is performed, especially if the image was acquired under unconstrained illumination conditions. Next, skin segmentation in the YCbCr and RGB spaces is conducted. The result of the skin segmentation is refined using the skin tone percentage index method. The edges of the input image are combined with the skin tone image to separate all non-face regions from candidate faces. Candidate verification using primitive shape features of the face is applied to decide which of the candidate regions corresponds to a face. The advantage of the proposed method is that it can detect faces of different sizes, in different poses, and making different expressions under unconstrained illumination conditions.
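The sketch below shows one plausible way to combine skin segmentation in YCbCr space with Canny edges so that edges cut connected skin blobs into candidate regions. The Cb/Cr bounds are commonly used illustrative values, not the thresholds from the paper, and the enhancement, RGB-space check, and skin tone percentage index refinement are omitted.

```python
# Rough sketch of combining YCbCr skin segmentation with Canny edges to prune
# non-face regions (the Cr/Cb thresholds are common illustrative values, not the paper's).
import cv2
import numpy as np

def skin_and_edges(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used skin-tone bounds in the Cr/Cb plane (assumed, not from the paper).
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    # Edges cut connected skin blobs so face candidates separate from background skin.
    candidates = cv2.bitwise_and(skin, cv2.bitwise_not(edges))
    return skin, edges, candidates

img = np.zeros((120, 120, 3), dtype=np.uint8)
img[:] = (120, 140, 200)     # synthetic BGR skin-like patch
skin, edges, candidates = skin_and_edges(img)
print(int(skin.sum() / 255), int(candidates.sum() / 255))
```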
Woon-hae Jeong, Se-jun Kim, Doo-soon Park, Jin Kwak
Vol. 9, No. 1, pp. 157-172, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.157
Keywords: Collaborative Filtering, Movie Recommendation System, Personal Propensity, Security, Push Stack
Abstract: There are many recommendation systems available to provide users with personalized services. Among them, the most frequently used in electronic commerce is "collaborative filtering," a technique that filters customer information to prepare profiles and recommends products that are expected to be preferred by other users, based on such information profiles. Collaborative filtering systems, however, have in their nature both technical issues, such as sparsity, scalability, and transparency, and security issues in the collection of the information that becomes the basis for the preparation of the profiles. In this paper, we suggest a movie recommendation system based on the selection of optimal personal propensity variables and the utilization of a secure collaborative filtering system, in order to provide a solution to the sparsity and scalability issues. At the same time, we adopt "push attack" principles to deal with the security vulnerability of collaborative filtering systems. Furthermore, we assess the system's applicability by using the open MovieLens database and present a personal propensity framework for improving the performance of recommender systems. We successfully come up with a movie recommendation system through the selection of optimal personalization factors and the embodiment of a safe collaborative filtering system.
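For readers unfamiliar with collaborative filtering, the sketch below shows a minimal user-based variant: a user's unknown rating is predicted from similar users' ratings weighted by cosine similarity over co-rated items. The toy ratings matrix is invented, and the paper's personal propensity variables and security mechanisms are not represented.

```python
# Minimal user-based collaborative filtering sketch (cosine similarity over a toy
# ratings matrix; the paper's propensity variables and security layer are omitted).
import numpy as np

ratings = np.array([            # rows = users, cols = movies, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def predict(ratings, user, item):
    target = ratings[user]
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        mask = (target > 0) & (ratings[other] > 0)       # co-rated movies only
        if not mask.any():
            continue
        sim = np.dot(target[mask], ratings[other][mask]) / (
            np.linalg.norm(target[mask]) * np.linalg.norm(ratings[other][mask]))
        scores += sim * ratings[other, item]
        weights += abs(sim)
    return scores / weights if weights else 0.0

print(predict(ratings, user=0, item=2))   # predict user 0's rating for movie 2
```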
Gawed M. Nagi, Rahmita Rahmat, Fatimah Khalid, Muhamad Taufik
Vol. 9, No. 1, pp. 173-188, Mar. 2013
https://doi.org/10.3745/JIPS.2013.9.1.173
Keywords: Facial Expression Recognition (FER), Facial Features Detection, Facial Features Extraction, Cascade Classifier, LBP, One-Vs-Rest SVM
Abstract: In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eyes, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, which is a popular multi-classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
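The sketch below illustrates the general pipeline shape: per-region LBP histograms are concatenated into one descriptor and classified by a one-vs.-rest RBF SVM. The region crops and expression labels are synthetic stand-ins (the Haar cascade detection of the eyes, nose, and mouth is assumed to have already produced the crops), and the LBP parameters are illustrative rather than the paper's.

```python
# Sketch of the pipeline (toy data; Haar-cascade detection of eyes/nose/mouth is assumed
# to have produced the region crops): per-region uniform-LBP histograms are concatenated
# and fed to a one-vs.-rest RBF SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def region_descriptor(regions, n_points=8, radius=1):
    """regions: list of 2D gray-scale crops (eyes, nose, mouth). Concatenates their
    uniform-LBP histograms into a single feature vector."""
    hists = []
    for crop in regions:
        lbp = local_binary_pattern(crop, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

rng = np.random.default_rng(1)
X = [region_descriptor([rng.integers(0, 256, (24, 24)) for _ in range(3)]) for _ in range(30)]
y = rng.integers(0, 6, 30)                      # 6 hypothetical expression labels
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:2]))
```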