Vol. 9, No. 4, Aug. 2013
Developing a Dynamic Materialized View Index for Efficiently Discovering Usable Views for Progressive Queries
Chao Zhu, Qiang Zhu, Calisto Zuzarte, Wenbin Ma
Vol. 9, No. 4, pp. 511-537, Aug. 2013
Keywords: database, Query Processing, Query Optimization, progressive query, Materialized View, Index
Abstract: Numerous data-intensive applications demand the efficient processing of a new type of query, which is called a progressive query (PQ). A PQ consists of a set of unpredictable but inter-related step-queries (SQs) that are specified by its user in a sequence of steps. A conventional DBMS was not designed to efficiently process such PQs. In our earlier work, we introduced a materialized-view-based approach for efficiently processing PQs, where the focus was on selecting promising views for materialization. The problem of how to efficiently find usable views from the materialized set in order to answer the SQs for a PQ remains open. In this paper, we present a new index technique, called the Dynamic Materialized View Index (DMVI), to rapidly discover usable views for answering a given SQ. The structure of the proposed index is a special ordered tree where the SQ domain tables are used as search keys and some bitmaps are kept at the leaf nodes for refined filtering. A two-level priority rule is adopted to order domain tables in the tree, which facilitates the efficient maintenance of the tree by taking into account the dynamic characteristics of various types of materialized views for PQs. The bitmap encoding methods and the strategies/algorithms to construct, search, and maintain the DMVI are suggested. The extensive experimental results demonstrate that our index technique is quite promising in improving the performance of the materialized-view-based query processing approach for PQs.
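The idea of indexing materialized views by their domain tables, with bitmaps for refined filtering, can be sketched as follows. This is an illustrative simplification, not the paper's actual tree structure or matching rules: the class name, the flat entry list, and the containment test (view tables covered by the step-query, view attributes covering those requested) are all assumptions.

```python
# Hypothetical sketch of a materialized-view lookup index in the spirit of
# the DMVI: views are keyed by their domain tables, and an attribute bitmap
# per entry refines the candidate set. Real view matching is more subtle.
class ViewIndex:
    def __init__(self, all_attributes):
        # Fix a global attribute ordering so bitmaps are comparable.
        self.attr_pos = {a: i for i, a in enumerate(all_attributes)}
        self.entries = []  # (frozenset of tables, attribute bitmap, view name)

    def _bitmap(self, attributes):
        bm = 0
        for a in attributes:
            bm |= 1 << self.attr_pos[a]
        return bm

    def add_view(self, name, tables, attributes):
        self.entries.append((frozenset(tables), self._bitmap(attributes), name))

    def usable_views(self, sq_tables, sq_attributes):
        """Views over a subset of the SQ's tables whose attributes cover
        the attributes the step-query asks for."""
        want = self._bitmap(sq_attributes)
        sq = frozenset(sq_tables)
        return [name for tables, bm, name in self.entries
                if tables <= sq and want & bm == want]
```

For example, a view over table T1 with attributes {a, b} would be reported as usable for a step-query over {T1, T2} requesting {a, b}, while a view over an unrelated table would be filtered out by the table-set test before the bitmap is even consulted.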
A-Rong Kwon, Kyung-Soon Lee
Vol. 9, No. 4, pp. 538-547, Aug. 2013
Keywords: Social opinion, Personal opinion, Bias detection, Sentiment, Target
Abstract: In this paper, we propose a bias detection method that is based on personal and social opinions that express contrasting views on competing topics on Twitter. Unsupervised polarity classification is conducted for learning social opinions on targets. The tf-idf algorithm is applied to extract targets that reflect the sentiments and features of tweets. Our method addresses the lack of a sentiment lexicon when learning social opinions. To evaluate the effectiveness of our method, experiments were conducted on four issues using a Twitter test collection. The proposed method achieved significant improvements over the baselines.
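The tf-idf target-extraction step mentioned above can be sketched in a few lines. This is a generic tf-idf ranking, not the paper's tuned pipeline: the tokenization, the exact weighting variant, and the function name are assumptions.

```python
import math
from collections import Counter

def tfidf_targets(tweets, top_k=2):
    """Rank candidate target terms in a tweet collection by tf-idf.

    A simplified stand-in for the target-extraction step: tf is a term's
    total frequency over the collection, and idf down-weights terms that
    appear in many tweets (a term in every tweet scores zero).
    """
    docs = [tweet.lower().split() for tweet in tweets]
    n = len(docs)
    tf = Counter(w for doc in docs for w in doc)           # term frequency
    df = Counter(w for doc in docs for w in set(doc))      # document frequency
    scores = {w: tf[w] * math.log(n / df[w]) for w in tf}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

On a toy collection of three tweets, a term repeated within one tweet but absent from most others rises to the top, which is the behavior the extraction step relies on.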
Mark B. Onte, Dave E. Marcial
Vol. 9, No. 4, pp. 548-566, Aug. 2013
Keywords: Knowledge Product Outsourcing, Management, Web 2.0, IT in Education
Abstract: The availability of technology and the abundance of experts in universities create an ample opportunity to provide a venue that allows a knowledge seeker to easily connect with and request advice from university experts. On the other hand, outsourcing provides opportunities and remains one of the emerging trends in organizations, and this can be very clearly observed in the Philippines. This paper describes the development of a reliable web-based approach to Knowledge Product Outsourcing (KPO) services in the Silliman Online University Learning system. The system is called an "e-Knowledge Box." It integrates Web 2.0 technologies and mechanisms, such as instant messaging, private messaging, document forwarding, video conferencing, online payments, net meetings, and social collaboration, into one system. Among the tools used are WAMP Server 2.0, PHP, BlabIM, Wordpress 3.0, Video Whisper, Red5, Adobe Dreamweaver CS4, and Virtual Box. The proposed system is integrated with search engines via URLs, Web feeds, email links, social bookmarking, search engine sitemaps, and Web Analytics Direct Visitor Reports. The site demonstrates great web usability and has an excellent rating in functionality, language and content, online help and user guides, system and user feedback, consistency, and architectural and visual clarity. Likewise, the site was rated as being very good for the following items: navigation, user control, and error prevention and correction.
The Architectural Pattern of a Highly Extensible System for the Asynchronous Processing of a Large Amount of Data
Ro Man Hwang, Soo Kyun Kim, Syungog An, Dong-Won Park
Vol. 9, No. 4, pp. 567-574, Aug. 2013
Keywords: Large data, UML Diagram, Object-Oriented Software
Abstract: In this paper, we propose an architectural solution for a system for the visualization and modification of large amounts of data. The pattern is based on the asynchronous execution of programmable commands and a reflective approach to object structure composition. The described pattern provides great flexibility, which makes it easy to adapt to custom application needs. We have implemented a system based on the described pattern. The implemented system presents an innovative approach to dynamic data object initialization and a flexible system for asynchronous interaction with data sources. We believe that this system can help software developers increase the quality and the production speed of their software products.
O.P. Verma, Veni Jain, Rajni Gumber
Vol. 9, No. 4, pp. 575-591, Aug. 2013
Keywords: Edge detection, Edge improvement, Fuzzy Rules, Membership Function
Abstract: Most of the edge detection methods available in the literature are gradient based, and they further apply thresholding to find the final edge map in an image. In this paper, we propose a novel method based on fuzzy logic, a mathematical logic that attempts to solve problems by assigning values to an imprecise spectrum of data in order to arrive at the most accurate conclusion possible. Here, fuzzy logic is used to conclude whether a pixel is an edge pixel or not. The proposed technique begins by fuzzifying the gray value of a pixel into two fuzzy variables, namely black and white. Fuzzy rules are defined to find the edge pixels in the fuzzified image. The resultant edge map may contain some extraneous edges, which are removed from the edge map by separately examining the intermediate-intensity-range pixels. Finally, the edge map is improved by finding some left-out edge pixels by defining a new membership function for the pixels whose entire 8-neighbourhood is classified as white. We have compared our proposed method with some of the existing standard edge detector operators that are available in the literature on image processing. The quantitative analysis of the proposed method is given in terms of entropy value.
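The fuzzify-then-rule flow can be sketched as follows. The linear membership ramps, the single min/max rule, and the threshold are illustrative assumptions, not the paper's actual rule base.

```python
def black_membership(g):
    """Degree to which gray value g (0-255) is 'black' (linear ramp)."""
    return max(0.0, 1.0 - g / 255.0)

def white_membership(g):
    """Degree to which gray value g (0-255) is 'white'."""
    return g / 255.0

def is_edge(neighborhood, threshold=0.8):
    """Fire one simple fuzzy rule on a pixel's 3x3 neighborhood: the pixel
    is an edge if the window contains both a strongly black and a strongly
    white member (min of the two max memberships exceeds the threshold).
    An illustrative simplification of the paper's rules."""
    blacks = [black_membership(g) for g in neighborhood]
    whites = [white_membership(g) for g in neighborhood]
    return min(max(blacks), max(whites)) >= threshold
```

A window mixing near-black and near-white pixels fires the rule, while a uniformly mid-gray window does not, which mirrors the intuition that edges separate contrasting fuzzy regions.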
Huynh Trung Manh, Gueesang Lee
Vol. 9, No. 4, pp. 592-601, Aug. 2013
Keywords: Gaussian Mixture Model (GMM), Visual Saliency, Segmentation, Object Detection.
Abstract: Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual attention based segmentation method to segment small interesting objects in natural images. Different from the traditional methods, we first search the region of interest by using our novel saliency-based method, which is mainly based on band-pass filtering, to obtain the appropriate frequency. Secondly, we apply the Gaussian Mixture Model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, our proposed approach is able to narrow the search region for object segmentation, so that accuracy is increased and computational complexity is reduced. The experimental results indicate that our proposed approach is efficient for object segmentation in natural images, especially for small objects. Our proposed method significantly outperforms traditional GMM-based segmentation.
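The band-pass saliency idea can be illustrated in one dimension: a band-pass response computed as the difference of two smoothing filters peaks at small, locally contrasting structures. This is a coarse stand-in for the paper's filtering step; the box filters, radii, and function names are assumptions.

```python
def box_blur(signal, radius):
    """Mean filter with the given radius (window clamped at the borders)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def bandpass_saliency(signal, small=1, large=4):
    """Band-pass response as the difference of a fine and a coarse box
    blur: a small bright object against a flat background yields a peak
    at the object, which is the cue used to narrow the search region."""
    fine = box_blur(signal, small)
    coarse = box_blur(signal, large)
    return [abs(f - c) for f, c in zip(fine, coarse)]
```

On a flat signal with a two-sample bump, the saliency maximum lands on the bump, so a segmentation stage (here, the GMM) only needs to examine that neighborhood.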
Vol. 9, No. 4, pp. 602-620, Aug. 2013
Keywords: Automatic Text Summarization, Key Concepts, Keyphrase Extraction
Abstract: Many previous research studies on extractive text summarization consider a subset of words in a document as keywords and use a sentence ranking function that ranks sentences based on their similarities with the list of extracted keywords. However, the use of key concepts in the automatic text summarization task has received less attention in the literature on summarization. The proposed work uses key concepts identified from a document for creating a summary of the document. We view single-word or multi-word keyphrases of a document as the important concepts that the document elaborates on. Our work is based on the hypothesis that an extract is an elaboration of the important concepts to some permissible extent, controlled by the given summary length restriction. In other words, our method of text summarization chooses a subset of sentences from a document that maximizes the important concepts in the final summary. To allow diverse information in the summary, for each important concept, we select the one sentence that is the best possible elaboration of the concept. Accordingly, the most important concept contributes to the summary first, then the second-best concept, and so on. To prove the effectiveness of our proposed summarization method, we have compared it to some state-of-the-art summarization systems, and the results show that the proposed method outperforms the existing systems to which it is compared.
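The concept-ordered selection described above can be sketched as a greedy loop: walk the concepts in priority order and, for each uncovered concept, pick one sentence for it. The substring-based concept matching and the tie-breaking rule are illustrative assumptions, not the paper's scoring function.

```python
def summarize(sentences, concepts, max_sentences=2):
    """Greedy sketch of concept-driven extractive summarization: concepts
    are visited in priority order (most important first); for each concept
    not yet covered, pick the unused sentence mentioning it that also
    covers the most remaining concepts, until the length budget is hit."""
    chosen, covered = [], set()
    for concept in concepts:
        if len(chosen) >= max_sentences:
            break
        if concept in covered:
            continue
        candidates = [s for s in sentences if concept in s and s not in chosen]
        if not candidates:
            continue
        best = max(candidates,
                   key=lambda s: sum(c in s for c in concepts if c not in covered))
        chosen.append(best)
        covered.update(c for c in concepts if c in best)
    return chosen
```

Because each selected sentence marks every concept it mentions as covered, later concepts that were already elaborated do not pull in redundant sentences, which is how the diversity requirement is met.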
An Intelligent Automatic Early Detection System of Forest Fire Smoke Signatures using Gaussian Mixture Model
Seok-Hwan Yoon, Joonyoung Min
Vol. 9, No. 4, pp. 621-632, Aug. 2013
Keywords: Forest Fire Detection, Gaussian Mixture Models, HSL Color Space, Smoke Signature
Abstract: The most important things for a forest fire detection system are the exact extraction of the smoke from the image and being able to clearly distinguish the smoke from phenomena with similar qualities, such as clouds and fog. This research presents an intelligent forest fire detection algorithm via image processing that uses the Gaussian Mixture Model (GMM), which can be applied to detect smoke at the earliest time possible in a forest. GMMs are usually addressed by making the model adaptive, so that its parameters can track changing illuminations, and by making the model more complex, so that it can represent multimodal backgrounds more accurately for smoke plume segmentation in the forest. Also, in this paper, we suggest a way to classify the smoke plumes via feature extraction using HSL (Hue, Saturation, and Lightness or Luminance) color space analysis.
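The HSL analysis can be illustrated with Python's standard `colorsys` module: smoke plumes tend to be bright but desaturated (grayish-white), which separates them from vividly colored regions. The thresholds below are illustrative guesses, not the paper's tuned values.

```python
import colorsys

def smoke_like(r, g, b, min_lightness=0.4, max_saturation=0.25):
    """Classify a pixel as smoke-like in HSL space: high lightness and low
    saturation. Note that colorsys uses the HLS ordering and returns
    (hue, lightness, saturation), each in [0, 1]."""
    _, lightness, saturation = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return lightness >= min_lightness and saturation <= max_saturation
```

A near-gray bright pixel passes, while a saturated red pixel or a dark pixel is rejected; a real detector would combine this per-pixel cue with the GMM's temporal background model rather than use it alone.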
Kancherla Jonah Nishanth, Vadlamani Ravi
Vol. 9, No. 4, pp. 633-650, Aug. 2013
Keywords: Data Imputation, General Regression Neural Network (GRNN), Evolving Clustering Method (ECM), Imputation, K-Medoids clustering, k-Means Clustering, MLP
Abstract: All the imputation techniques proposed so far in the literature for data imputation are offline techniques, as they require a number of iterations to learn the characteristics of the data during training and they also consume a lot of computational time. Hence, these techniques are not suitable for applications that require imputation to be performed on demand and in near real-time. This paper proposes a computational intelligence based architecture for online data imputation, as well as extended versions of an existing offline data imputation method. The proposed online imputation technique has two stages. In Stage 1, the Evolving Clustering Method (ECM) is used to replace the missing values with cluster centers, as part of the local learning strategy. Stage 2 refines the resultant approximate values using a General Regression Neural Network (GRNN), as part of the global approximation strategy. We also propose extended versions of an existing offline imputation technique. The offline imputation techniques employ K-Means or K-Medoids and a Multi Layer Perceptron (MLP) or GRNN in Stage 1 and Stage 2, respectively. Several experiments were conducted on 8 benchmark datasets and 4 bank-related datasets to assess the effectiveness of the proposed online and offline imputation techniques. In terms of Mean Absolute Percentage Error (MAPE), the results indicate that the difference between the proposed best offline imputation method, viz., K-Medoids+GRNN, and the proposed online imputation method, viz., ECM+GRNN, is statistically insignificant at a 1% level of significance. Consequently, the proposed online technique, being less expensive and faster, can be employed for imputation instead of the existing and proposed offline imputation techniques. This is the significant outcome of the study. Furthermore, GRNN in Stage 2 uniformly reduced MAPE values in both the offline and online imputation methods on all datasets.
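The two-stage idea can be sketched as follows. Stage 1 is reduced to a nearest-cluster-center fill (standing in for ECM), and Stage 2 uses the fact that a GRNN is essentially Nadaraya-Watson kernel regression over the training set. The data layout and function names are illustrative assumptions, not the paper's architecture.

```python
import math

def grnn_predict(train_x, train_y, query, sigma=1.0):
    """General Regression Neural Network: a Gaussian-kernel-weighted
    average of the training targets (Nadaraya-Watson regression)."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, query))
                        / (2 * sigma ** 2)) for x in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

def impute(record, missing_idx, centers, train_x, train_y):
    """Two-stage online imputation sketch (illustrative, not the paper's
    exact ECM+GRNN): Stage 1 fills the hole with the value of the nearest
    cluster center (local learning); Stage 2 refines it by regressing the
    missing feature on the observed ones (global approximation)."""
    observed = [v for i, v in enumerate(record) if i != missing_idx]
    # Stage 1: compare against each center's observed dimensions only.
    nearest = min(centers, key=lambda c: sum(
        (a - b) ** 2
        for a, b in zip(observed, [v for i, v in enumerate(c) if i != missing_idx])))
    rough = nearest[missing_idx]
    # Stage 2: GRNN over complete records' observed features.
    refined = grnn_predict(train_x, train_y, observed)
    return rough, refined
```

Both stages need only a single pass over stored centers and training records, which is why this style of pipeline can run on demand rather than requiring iterative retraining.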
Moneeb Gohar, Seok-Joo Koh
Vol. 9, No. 4, pp. 651-659, Aug. 2013
Keywords: HIP, Network-based, Handover, simulations
Abstract: In the Host Identity Protocol (HIP), the existing host-based handover scheme tends to induce large handover delays and packet loss rates. To deal with this problem, we propose a network-based handover scheme for HIP in mobile networks, in which the access routers of the mobile node establish a handover tunnel and perform route optimization for data transmission. We also discuss how to extend the HIP Update message to support the proposed handover scheme. From ns-2 simulations, we can see that the proposed handover scheme can significantly reduce the handover delay and packet losses during handover, as compared to the existing handover schemes.
Nagappa Bhajantri, Pradeep Kumar R, Nagabhushan P
Vol. 9, No. 4, pp. 660-677, Aug. 2013
Keywords: Camouflage, Line Mask, Enhancement, Texture analysis, Distribution pattern, histogram, Regression line
Abstract: The blending of a defective texture with the ambient texture results in camouflage. The gray value or color distribution pattern of camouflaged images fails to reflect considerable deviations between the camouflaged object and the sublimating background, which demands improved strategies for texture analysis. In this research, we propose an initial enhancement of the image that employs line masks, which could result in a better discrimination of the camouflaged portion. Finally, the gray value distribution patterns are analyzed in the enhanced image to locate the camouflaged portions.
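The line-mask enhancement step can be illustrated with the classic 3x3 line-detection masks (horizontal, vertical, and the two diagonals): each pixel is replaced by its strongest absolute mask response, accentuating thin structures that a camouflaged boundary may produce. This is a textbook version of line masking, assumed here; the paper's masks and post-processing may differ.

```python
# Classic 3x3 line-detection masks; coefficients in each mask sum to zero,
# so a flat region produces no response.
LINE_MASKS = [
    [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]],   # horizontal line
    [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]],   # vertical line
    [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],   # diagonal (top-left to bottom-right)
    [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],   # diagonal (top-right to bottom-left)
]

def line_response(image, row, col):
    """Maximum absolute response of the four line masks at an interior
    pixel of a 2-D gray-level image (list of rows)."""
    best = 0
    for mask in LINE_MASKS:
        acc = sum(mask[i][j] * image[row - 1 + i][col - 1 + j]
                  for i in range(3) for j in range(3))
        best = max(best, abs(acc))
    return best
```

A pixel sitting on a one-pixel-wide bright line responds strongly, while a uniform patch responds not at all, so the enhanced image exaggerates exactly the deviations that the subsequent gray-value distribution analysis looks for.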