Vol. 7, No. 3, Sep. 2011
The Principle of Justifiable Granularity and an Optimization of Information Granularity Allocation as Fundamentals of Granular Computing
Witold Pedrycz
Vol. 7, No. 3, pp. 397-412, Sep. 2011
Keywords: Information Granularity, Principle of Justifiable Granularity, Knowledge management, Optimal Granularity Allocation
Abstract: Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: the principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, for which we identify several key categories of schemes.
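The principle of justifiable granularity described in the abstract above can be illustrated with a small numeric sketch. This is an illustrative assumption, not the paper's exact formulation: the function name, the exponential specificity measure, and the parameter `alpha` are all hypothetical. The idea is that the upper bound of an interval granule is chosen to balance coverage of the data against the specificity of the granule.

```python
import math

def justifiable_upper_bound(data, alpha=1.0):
    """Pick the upper bound b of an interval [median, b] that maximizes
    coverage (fraction of data inside the interval) times specificity
    (a measure that decays as the interval widens)."""
    data = sorted(data)
    med = data[len(data) // 2]
    best_b, best_v = med, 0.0
    for b in data:
        if b < med:
            continue  # only consider bounds at or above the median
        coverage = sum(1 for x in data if med <= x <= b) / len(data)
        specificity = math.exp(-alpha * (b - med))
        v = coverage * specificity
        if v > best_v:
            best_b, best_v = b, v
    return best_b
```

A small `alpha` tolerates wide granules (coverage dominates); a large `alpha` penalizes width, so the optimum stays close to the median.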
Mohammad H. P., Omid Kashefi, Behrouz Minaei
Vol. 7, No. 3, pp. 413-424, Sep. 2011
Keywords: Sequence Data, Similarity Measure, Sequence Mining
Abstract: A variety of metrics have been introduced to measure the similarity of two given sequences, with applications ranging from spell correctors and categorizers to new sequence mining tasks. Different metrics consider different aspects of sequences, but the essence of any sequence lies in the ordering of its elements. In this paper, we propose a novel sequence similarity measure that matches all ordered pairs of one sequence against a Hasse diagram built from the other sequence. In contrast with existing approaches, the idea behind the proposed metric is to extract all ordering features in order to capture sequence properties. We designed a clustering problem to evaluate our sequence similarity metric. Experimental results showed the superiority of our proposed metric in maximizing the purity of clustering compared to metrics such as d2, Smith-Waterman, Levenshtein, and Needleman-Wunsch. The limitation of those methods originates from neglected sequence features, which are considered in our proposed metric.
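The paper's metric is built on a Hasse diagram, but the flavor of "ordering features" it extracts can be sketched in a much simplified form by comparing the sets of ordered element pairs of two sequences. The function names and the Jaccard overlap below are illustrative assumptions, not the authors' construction:

```python
def ordered_pairs(seq):
    """All ordered pairs (seq[i], seq[j]) with i < j -- the ordering
    features of the sequence."""
    return {(seq[i], seq[j]) for i in range(len(seq)) for j in range(i + 1, len(seq))}

def pair_similarity(a, b):
    """Jaccard overlap of the ordered-pair sets of two sequences:
    1.0 for identical orderings, lower as more pairs disagree."""
    pa, pb = ordered_pairs(a), ordered_pairs(b)
    if not pa and not pb:
        return 1.0
    return len(pa & pb) / len(pa | pb)
```

Transposing two elements breaks only the pairs involving them, so the measure degrades gradually rather than treating the sequences as entirely different.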
Ayra Panganiban, Noel Linsangan, Felicito Caluyo
Vol. 7, No. 3, pp. 425-434, Sep. 2011
Keywords: Biometrics, Degrees of Freedom, Iris Recognition, Wavelet
Abstract: The success of iris recognition depends mainly on two factors: image acquisition and the iris recognition algorithm. In this study, we present a system that considers both factors and focuses on the latter. The proposed algorithm aims to identify the most efficient wavelet family and its coefficients for encoding the iris templates of the experiment samples. The algorithm, implemented in software, performs segmentation, normalization, feature encoding, data storage, and matching. Feature encoding is performed by decomposing the normalized iris image using the Haar and biorthogonal wavelet families at various levels. The vertical coefficients are encoded into the iris template and stored in the database. The performance of the system is evaluated using the number of degrees of freedom, the False Reject Rate (FRR), the False Accept Rate (FAR), and the Equal Error Rate (EER), and these metrics show that the proposed algorithm can be employed in an iris recognition system.
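The wavelet encoding step can be sketched with a one-level 1-D Haar transform and a sign-based binarization. This is a minimal sketch only: the paper works on 2-D normalized iris images and also uses biorthogonal wavelets, and the sign-quantization rule here is an assumption about how coefficients might be turned into template bits.

```python
def haar_1d(row):
    """One level of the 1-D Haar transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
    return approx, detail

def encode_template(rows):
    """Binarize the detail coefficients of each row by sign -- one
    hypothetical way to turn wavelet coefficients into template bits."""
    return [[1 if d >= 0 else 0 for d in haar_1d(r)[1]] for r in rows]
```

Matching such binary templates is then a cheap Hamming-distance comparison, which is why sign quantization is a common choice for iris codes.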
Vol. 7, No. 3, pp. 435-446, Sep. 2011
Keywords: Probabilistic Soft Error Detection, Reliability, Anomaly Speculation
Abstract: Microprocessors are becoming increasingly vulnerable to soft errors due to current semiconductor technology scaling trends. Traditional redundant multithreading architectures provide perfect fault tolerance by re-executing all computations. However, such full re-execution significantly increases the verification workload on processor resources, resulting in severe performance degradation. This paper presents a pro-active verification management approach that mitigates the verification workload to increase performance with minimal effect on overall reliability. An anomaly-speculation-based filter checker is proposed to guide verification priority before the re-execution process starts. This technique exploits a value similarity property, defined by the frequent occurrence of partially identical values. Based on the biased distribution of the similarity distance measure, this paper further investigates exploiting similar values for soft error tolerance with anomaly speculation. Extensive measurements show that the majority of instructions produce values that differ from the previous result value in only a few bits. Experimental results show that the proposed scheme makes the processor 180% faster than a traditional fully fault-tolerant processor with minimal impact on the overall soft error rate.
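The value similarity property the filter checker exploits can be made concrete with a bit-distance check between successive result values. The function names and the threshold are illustrative assumptions; the paper's checker operates in hardware, not software.

```python
def bit_distance(a, b, width=32):
    """Number of differing bits between two width-bit values
    (the similarity distance between successive results)."""
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def is_similar(prev, cur, threshold=4):
    """A result is 'similar' when it differs from the previous result
    in only a few bits -- the biased distribution the paper measures.
    Dissimilar (anomalous) results would be verified first."""
    return bit_distance(prev, cur) <= threshold
```

Under this view, an instruction whose new result flips many bits relative to its previous result looks anomalous and is prioritized for re-execution, while highly similar results are verified lazily.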
Concepcion Perez de Celis Herrero, Jaime Lara Alvarez, Gustavo Cossio Aguilar, Maria J. Somodevilla Garcia
Vol. 7, No. 3, pp. 447-458, Sep. 2011
Keywords: Search by Content, Faceted Classification, IT, Collections Management, Metadata, Information Retrieval
Abstract: This study presents a comprehensive solution to collection management based on the Cataloging Cultural Objects (CCO) model. The developed system uses IT to make it easier to manage and disseminate the collections safeguarded in museums and galleries. In particular, we present our approach to non-structured search and retrieval of objects based on the annotation of artwork images. In this methodology, we have introduced faceted search as a framework for multi-classification and for exploring/browsing complex information bases in a guided, yet unconstrained way, through a visual interface.
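The faceted search idea can be sketched as filtering a collection by independent facet selections. The facet names (`period`, `medium`) and the dictionary representation are hypothetical examples, not the system's actual schema:

```python
def faceted_filter(items, selections):
    """Keep the items matching every selected facet value, e.g.
    selections={'period': 'Baroque'} narrows to that period only."""
    return [it for it in items if all(it.get(f) == v for f, v in selections.items())]

collection = [
    {"title": "Portrait A", "period": "Baroque", "medium": "oil"},
    {"title": "Sketch B", "period": "Modern", "medium": "ink"},
]
```

Each additional selection narrows the result set, which is what lets a visitor browse a multi-classified collection in a guided yet unconstrained way.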
Kyung-Mi Park, Han-Cheol Cho, Hae-Chang Rim
Vol. 7, No. 3, pp. 459-472, Sep. 2011
Keywords: Biomedical Interaction Extraction, Natural Language Processing, Interaction Verb Extraction, Argument Relation Identification
Abstract: The vast body of biomedical literature is an important source for discovering biomedical interaction information. However, obtaining interaction information from it is complicated because most of the literature is not easily machine-readable. In this paper, we present a method for extracting biomedical interaction information, assuming that the biomedical Named Entities (NEs) are already identified. The proposed method labels all possible pairs of given biomedical NEs as INTERACTION or NOINTERACTION using a Maximum Entropy (ME) classifier. The features used by the classifier are obtained by applying various NLP techniques such as POS tagging, base phrase recognition, parsing, and predicate-argument recognition. In particular, specific verb predicates (e.g., activate, inhibit, and diminish) and their biomedical NE arguments are very useful features for identifying interactive NE pairs. Based on this, we devised a two-step method: 1) an interaction verb extraction step to find biomedically salient verbs, and 2) an argument relation identification step to generate partial predicate-argument structures between extracted interaction verbs and their NE arguments. In the experiments, we analyzed how much each applied NLP technique improves performance. The proposed method improves performance by more than 2% over the baseline method. The use of external contextual features, obtained from outside the NEs, is crucial for this improvement. We also compared the performance of the proposed method against co-occurrence-based and rule-based methods; the results demonstrate that the proposed method improves performance considerably.
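The pair-labeling setup can be sketched as enumerating all NE pairs in a sentence and extracting simple features for the classifier. Everything below is an illustrative assumption: the feature set, the tokenized example (IL-2/STAT5), and the verb list are hypothetical, and the real system uses a trained ME classifier over much richer NLP-derived features.

```python
from itertools import combinations

def candidate_pairs(entities):
    """All unordered pairs of named entities found in a sentence."""
    return list(combinations(entities, 2))

def pair_features(tokens, e1, e2, interaction_verbs):
    """Toy feature extraction for one NE pair: does a salient
    interaction verb appear between the entities, and how far
    apart are they?"""
    i, j = tokens.index(e1), tokens.index(e2)
    between = tokens[min(i, j) + 1:max(i, j)]
    return {
        "verb_between": any(w in interaction_verbs for w in between),
        "distance": abs(i - j),
    }
```

Each feature dictionary would then be fed to the ME classifier, which outputs INTERACTION or NOINTERACTION for that pair.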
Jagat Sesh Challa, Arindam Paul, Yogesh Dada, Venkatesh Nerella, Praveen Ranjan Srivastava, Ajit Pratap Singh
Vol. 7, No. 3, pp. 473-518, Sep. 2011
Keywords: Software Quality Parameters, ISO/IEC 9126, Fuzzy Software Quality Quantification Tool (FSQQT), Fuzzy Membership Function, Triangular Fuzzy Sets, KLOC, GUI, CUI
Abstract: Software measurement is a key factor in managing, controlling, and improving software development processes. Software quality is one of the most important factors for assessing the global competitive position of any software company. Thus, quantifying quality parameters and integrating them into quality models is essential. Software quality criteria are not easily measured and quantified, and many attempts have been made to quantify software quality parameters exactly using models such as the ISO/IEC 9126 Quality Model, Boehm's Model, and McCall's Model. In this paper, an attempt has been made to provide a tool for precisely quantifying software quality factors using the quality factors stated in the ISO/IEC 9126 model. Due to the unpredictable nature of software quality attributes, a fuzzy multi-criteria approach has been used to evaluate the quality of the software.
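The triangular fuzzy sets named in the keywords can be sketched with the standard triangular membership function. The function below is the textbook definition, not the tool's specific parameterization; the breakpoints `a`, `b`, `c` would be chosen per quality attribute.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: support [a, c], peak 1.0 at b.
    Returns the degree to which x belongs to the fuzzy set."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge
```

A crisp measurement (say, a defect rate) is mapped through several such sets (e.g., "low", "medium", "high") and the resulting membership degrees are aggregated across the ISO/IEC 9126 factors.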
Hua Fang, JeongWoo Kim, JongWhan Jang
Vol. 7, No. 3, pp. 519-530, Sep. 2011
Keywords: Snake, Detection, Tracking, Multiple Objects, Topology Changes
Abstract: A snake is an active contour for representing object contours. Traditional snake algorithms are often used to represent the contour of a single object. However, if there is more than one object in the image, the snake model must adapt to determine the corresponding contour of each object. Also, previously initialized snake contours risk producing wrong results when tracking multiple objects in successive frames because topology changes are handled weakly. To overcome this problem, we present a new snake method for efficiently tracking the contours of multiple objects. Our proposed algorithm provides a straightforward approach to rapid splitting and connection of snake contours, which traditional snakes usually cannot handle gracefully. Experimental results on various test sequence images with multiple objects show good performance, which proves that the proposed method is both effective and accurate.
Hiroshi Kutsuna, Satoshi Fujita
Vol. 7, No. 3, pp. 531-542, Sep. 2011
Keywords: Congestion Control, AIMD, Minority Game
Abstract: In this paper, we propose a new congestion control scheme for high-speed networks. The basic idea of the proposed scheme is to adopt a game-theoretic model called the Minority Game (MG) to realize a selective reduction of the transmission speed of senders. More concretely, upon detecting congestion, the scheme starts a game among all senders participating in the communication, and the losers of the game reduce their transmission speed by a multiplicative factor. MG has recently attracted considerable attention, and it is known to have the remarkable property that the number of winners converges to half the number of players despite the selfish behavior of players seeking to increase their own profit. By using this property of MG, we can realize a fair reduction of transmission speed that is more efficient than previous schemes in which all senders uniformly reduce their transmission speed. The effect of the proposed scheme is evaluated by simulation. The simulation results indicate that the proposed scheme certainly realizes a selective reduction of the transmission speed; it is sufficiently fair compared to other simple randomized schemes and sufficiently efficient compared to other conventional schemes.
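One round of the selective multiplicative decrease can be sketched as follows. This is a deliberately simplified assumption: real MG players choose sides using adaptive strategies with memory, whereas the sketch uses a random coin flip, which yields the same roughly-half split only in expectation.

```python
import random

def minority_game_round(speeds, seed=None):
    """One congestion round: each sender picks side 0 or 1; senders on
    the MAJORITY side lose the game and halve their transmission speed
    (multiplicative decrease), while the minority keeps its speed."""
    rng = random.Random(seed)
    choices = [rng.randrange(2) for _ in speeds]
    majority = 1 if sum(choices) * 2 > len(choices) else 0
    return [s / 2 if c == majority else s for s, c in zip(speeds, choices)]
```

Compared with uniform AIMD, where every sender halves its rate on congestion, only roughly half the senders back off per round, so aggregate throughput recovers faster while congestion is still relieved.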
Hong Joo Lee
Vol. 7, No. 3, pp. 543-548, Sep. 2011
Keywords: Smart Device, Business Strategy, Business Value Chain
Abstract: Information technology is changing the business value chain and business systems, owing to its effect on the value chain and on the value creation factors in business. Technology companies and researchers are developing new businesses, but many cannot find effective ways to analyze and develop a business systematically. In this paper, first, the value creation motive in business is analyzed through a literature review. Second, business attributes are analyzed while considering the value creation motive and the business factors in management. Finally, the business attributes of information technology are studied through a review of previous research papers on this topic.
Vol. 7, No. 3, pp. 549-560, Sep. 2011
Keywords: Efficient Proof, E-Voting
Abstract: Vote validity proof and verification is an efficiency bottleneck and a privacy drawback in homomorphic e-voting. The existing vote validity proof technique is inefficient and achieves only honest-verifier zero knowledge. In this paper, an efficient proof and verification technique is proposed to guarantee vote validity in homomorphic e-voting. The new proof technique is mainly based on hash function operations and needs only a very small number of costly public-key cryptographic operations. It can handle untrusted verifiers and achieves stronger zero-knowledge privacy. As a result, the efficiency and privacy of homomorphic e-voting applications are significantly improved.
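The cost contrast between hash operations and public-key operations can be illustrated with a basic hash commitment, one of the standard building blocks of hash-based proofs. This is not the paper's protocol, only a sketch of why replacing public-key steps with hashing is cheap; the function names are assumptions.

```python
import hashlib
import secrets

def commit(value: bytes):
    """Commit to a value with a single SHA-256 hash over a random
    nonce plus the value. Returns (commitment, nonce)."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + value).hexdigest(), nonce

def verify(commitment, nonce, value: bytes):
    """Check a revealed (nonce, value) pair against the commitment."""
    return hashlib.sha256(nonce + value).hexdigest() == commitment
```

Each commit/verify costs one hash evaluation, orders of magnitude cheaper than a modular exponentiation, which is the efficiency lever the abstract describes.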