Digital Library
Vol. 8, No. 2, Jun. 2012
B. John Oommen, Anis Yazidi, Ole-Christoffer Granmo
Vol. 8, No. 2, pp. 191-212, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.191
Keywords: Weak Estimators, User's Profiling, Time Varying Preferences
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning-based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state-of-the-art technology.
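The weak-estimator family referenced in this abstract rests on a simple multiplicative update: the probability estimate decays geometrically and mass is shifted toward each observed symbol, so stale observations are "unlearned" automatically. A minimal illustrative sketch (the two-symbol stream, switch point, and learning constant below are assumptions for demonstration, not the paper's experimental configuration):

```python
import random

def slwe_update(p, symbol, lam=0.95):
    """One weak-estimator step: shrink all estimates by lam and
    move the freed mass (1 - lam) onto the observed symbol.

    p      -- current probability estimate vector (sums to 1)
    symbol -- index of the symbol just observed
    lam    -- 0 < lam < 1; smaller lam forgets (unlearns) faster
    """
    q = [lam * pi for pi in p]
    q[symbol] += 1.0 - lam  # mass moved toward the observation
    return q

# Track a user preference that switches halfway through the stream.
random.seed(0)
p = [0.5, 0.5]
for t in range(2000):
    true_dist = [0.8, 0.2] if t < 1000 else [0.2, 0.8]
    s = 0 if random.random() < true_dist[0] else 1
    p = slwe_update(p, s)
```

Because the update never converges with probability 1, the estimate stays mobile and re-centers on the new distribution within roughly 1/(1 - lam) observations after the switch.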
Sandeep K. Singh, Sangeeta Sabharwal, J.P. Gupta
Vol. 8, No. 2, pp. 213-240, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.213
Keywords: events, Event Meta Model, Testing, Test Cases, Test scenarios, Event Based Systems, Software Engineering
Safety critical systems, real time systems, and event-based systems have a complex set of events and their own interdependencies, which makes them difficult to test manually. In order to cut down on costs, save time, and increase reliability, the model based testing approach is the best solution. Such an approach does not require applications or code prior to generating test cases, so it leads to the early detection of faults, which helps in reducing the development time. Several model-based testing approaches have used different UML models, but very few works have been reported that show the generation of test cases using events. Test cases that use events are an apt choice for these types of systems. However, these works have considered events that happen at the user interface level of a system, while other events that happen in a system are not considered. Such works have limited applications in testing the GUI of a system. In this paper, a novel model-based testing approach is presented using business events, state events, and control events that have been captured directly from requirement specifications. The proposed approach documents events in event templates and then builds an event-flow model and a fault model for a system. A test coverage criterion and an algorithm are designed using these models to generate event sequence based test scenarios and test cases. Unlike other event based approaches, our approach is able to detect the proposed faults in a system. A prototype tool was developed to automate and evaluate the applicability of the entire process.
Results have shown that the proposed approach and supporting tool are able to successfully derive test scenarios and test cases from the requirement specifications of safety critical systems, real time systems, and event based systems.
Ruchika Malhotra, Ankita Jain
Vol. 8, No. 2, pp. 241-262, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.241
Keywords: Empirical Validation, Object Oriented, Receiver Operating Characteristics, Statistical Methods, Machine Learning, Fault Prediction
An understanding of quality attributes is relevant for a software organization to deliver high software reliability. An empirical assessment of metrics for predicting quality attributes is essential in order to gain insight into the quality of software in the early phases of software development and to ensure corrective actions. In this paper, we build models to estimate fault proneness using Object Oriented CK metrics and QMOOD metrics. We apply one statistical method and six machine learning methods to build the models. The proposed models are validated using a dataset collected from Open Source software. The results are analyzed using the Area Under the Curve (AUC) obtained from Receiver Operating Characteristics (ROC) analysis. The results show that the models built using the random forest and bagging methods outperformed all the other models. Hence, based on these results, it is reasonable to claim that quality models have a significant relevance with Object Oriented metrics and that machine learning methods have a performance comparable with statistical methods.
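The AUC statistic used to compare the models here can be computed without plotting the ROC curve at all: it equals the probability that a randomly chosen faulty module receives a higher predicted score than a randomly chosen fault-free one (the Mann-Whitney formulation). A small self-contained sketch with hypothetical scores:

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the fraction
    of (faulty, fault-free) pairs where the faulty module is scored
    higher, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical fault-proneness predictions for six modules.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # → 0.8888888888888888 (8/9)
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why it is a convenient single number for comparing the statistical and machine learning models.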
Bakhta Meroufel, Ghalem Belalem
Vol. 8, No. 2, pp. 263-278, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.263
Keywords: Data Grid, Dynamic Replication, Availability, Failures, Best Client, Best Responsible, Data Management
The data grid provides geographically distributed resources for large-scale applications and generates large sets of data. Replicating this data across several sites of the grid is an effective solution for achieving good performance. In this paper we propose an approach for dynamic replication in a hierarchical grid that takes into account crash failures in the system. The replication decision is taken based on two parameters: the availability and the popularity of the data. The administrator requires a minimum rate of availability for each piece of data according to its access history in previous periods, but this availability may increase if the demand for this data is high. We also propose a strategy to keep the desired availability respected even in the case of a failure or the rarity (non-popularity) of the data. The simulation results show the effectiveness of our replication strategy in terms of response time, the unavailability of requests, and availability.
Eun-Sun Cho, Sumi Helal
Vol. 8, No. 2, pp. 279-300, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.279
Keywords: Exceptions, Safety, Programming models for Pervasive Systems, Pervasive Computing, Contexts, Situations
The uncertainty and dynamism surrounding pervasive systems require new and sophisticated approaches to defining, detecting, and handling complex exceptions, because the possible erroneous conditions in pervasive systems are more complicated than the conditions found in traditional applications. We devised a novel exception description and detection mechanism based on the "situation," a novel extension of context, which allows programmers to devise their own handling routines targeting sophisticated exceptions. This paper introduces the syntax of a language support that empowers the expressiveness of exceptions and their handlers, and suggests an implementation algorithm with a straw-man analysis of its overhead.
Mohamed Abdel Fattah
Vol. 8, No. 2, pp. 301-314, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.301
Keywords: Sentence Alignment, English/Arabic Parallel Corpus, Parallel Corpora, Machine Translation, Multi-Class Support Vector Machine, Hidden Markov Model
In this paper, two new approaches for aligning English-Arabic sentences in bilingual parallel corpora, based on the Multi-Class Support Vector Machine (MSVM) and the Hidden Markov Model (HMM) classifiers, are presented. A feature vector is extracted from the text pair under consideration. This vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the Multi-Class Support Vector Machine and the Hidden Markov Model, and another set of data was used for testing. The results of the MSVM and HMM outperform those of the length-based approach. Moreover, these new approaches are valid for any language pair and are quite flexible, since the feature vector may contain fewer, more, or different features than the ones used in the current research, such as a lexical matching feature or Hanzi characters in Japanese-Chinese texts.
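The feature vector described in the abstract can be pictured as a short list of per-pair scores fed to the classifier. The sketch below is a hypothetical stand-in: the exact definitions of the length, punctuation, and cognate features are the paper's, and a real English-Arabic cognate score would compare transliterated forms rather than raw tokens as done here.

```python
def features(src, tgt):
    """Illustrative per-sentence-pair feature vector: length ratio,
    punctuation-count agreement, and a crude token-overlap stand-in
    for the cognate score. All three land in [0, 1]."""
    # Length feature: ratio of the shorter to the longer sentence.
    len_ratio = min(len(src), len(tgt)) / max(len(src), len(tgt))
    # Punctuation feature: agreement of punctuation-mark counts.
    count = lambda s: sum(c in ".,;:!?" for c in s)
    punct_score = 1.0 / (1.0 + abs(count(src) - count(tgt)))
    # Cognate stand-in: Jaccard overlap of the token sets.
    a, b = set(src.lower().split()), set(tgt.lower().split())
    cognate = len(a & b) / max(1, len(a | b))
    return [len_ratio, punct_score, cognate]
```

A classifier such as the MSVM or HMM then labels each candidate pair (aligned / not aligned) from vectors of this shape; because the vector is just a list, features can be added or dropped freely, which is the flexibility the abstract points out.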
Yunsick Sung, Kyungeun Cho, Kyhyun Um
Vol. 8, No. 2, pp. 315-330, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.315
Keywords: brain-computer interface, BCI Toolkit, BCI Framework, EEG, Brain Wave
Recently, methodologies for developing games using brain-computer interfaces (BCIs) have been actively researched. The existing general frameworks for processing brain waves do not provide the functions required to develop BCI games, so developing such games is difficult and requires a large amount of time. Effective BCI game development requires a BCI game framework that provides the functions to generate discrete values, events, and converted waves, taking into account the differences between users' brain waves and between their BCI devices. In this paper, BCI game frameworks for processing brain waves for BCI games are proposed, together with a variety of processes for converting measured brain waves so that they can be applied to games. In an experiment, the proposed frameworks were applied to a BCI game for visual perception training, and it was verified that the time required for BCI game development was reduced when the proposed framework was applied.
Ami Marowka
Vol. 8, No. 2, pp. 331-346, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.331
Keywords: TBB, Micro-Benchmarks, Multi-Core, Parallel Overhead
Task-based programming is becoming the state-of-the-art method of choice for extracting the desired performance from multi-core chips. It expresses a program in terms of lightweight logical tasks rather than heavyweight threads. Intel Threading Building Blocks (TBB) is a task-based parallel programming paradigm for multi-core processors. The performance gain of this paradigm depends to a great extent on the efficiency of its parallel constructs. The parallel overheads incurred by parallel constructs determine the ability to create large-scale parallel programs, especially in the case of fine-grain parallelism. This paper presents a study of TBB parallelization overheads. For this purpose, a TBB micro-benchmark suite called TBBench has been developed. We use TBBench to evaluate the parallelization overheads of TBB on different multi-core machines and with different compilers, and we report in detail on the relative overheads and analyze the results.
Eun-Ha Song, Hyun-Woo Kim, Young-Sik Jeong
Vol. 8, No. 2, pp. 347-358, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.347
Keywords: Hardware Hardening, TPM, TPB, Mobile Cloud, System Behavior Monitoring, BiT Profiling
Recently, security research has been conducted on methods to counter a broader range of hacking attacks at the low, hardware level. This kind of system security applies not only to individuals' computer systems but also to cloud environments. The cloud operates on the web; it is therefore exposed to many risks, and the security of the spaces where its data is stored is vulnerable. Accordingly, in order to reduce threats to security, the TCG proposed a highly reliable platform based on a semiconductor chip, the TPM. However, to date there has been no technology that enables real-time visual monitoring of the security status of a PC operated on the basis of the TPM, and the TPB has provided a visual method for monitoring system status and resources only for the behavior of a single host. Therefore, this paper proposes m-TMS (Mobile Trusted Monitoring System), which monitors the trusted state of a computing environment in which a TPM chip-based TPB is mounted, together with the current status of its system resources, in a mobile device environment made possible by the development of network service technology. The m-TMS allows users to monitor the system resources of a computer system, namely the CPU, RAM, and processes. Moreover, it addresses attack patterns that pose a threat to computer system security, such as the conversion and detouring of single entities like a PC or of target addresses. The branch instruction trace is monitored using a BiT Profiling tool, through which processes that have been attacked, or are suspected of being attacked, may be traced, thereby enabling users to respond actively.
V. Asha, N.U. Bhajantri, P. Nagabhushan
Vol. 8, No. 2, pp. 359-374, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.359
Keywords: Periodicity, Jensen-Shannon Divergence, Cluster, Defect
In this paper, we propose a new machine vision algorithm for automatic defect detection on patterned textures with the help of texture periodicity and the Jensen-Shannon Divergence, which is a symmetrized and smoothed version of the Kullback-Leibler Divergence. Input defective images are split into several blocks of the same size as the periodic unit of the image. Based on the histograms of the periodic blocks, Jensen-Shannon Divergence measures are calculated for each periodic block with respect to itself and all other periodic blocks, and a dissimilarity matrix is obtained. This dissimilarity matrix is used to derive a matrix of true metrics, which is later subjected to Ward's hierarchical clustering to automatically identify defective and defect-free blocks. Results from experiments on real fabric images with defects, belonging to 3 major wallpaper groups, namely, pmm, p2, and p4m, show that the proposed method is robust in finding fabric defects with very high success rates without any human intervention.
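The core quantity here, the Jensen-Shannon Divergence between two block histograms, has a compact closed form: the average Kullback-Leibler divergence of each histogram from their midpoint distribution. A minimal sketch of the divergence and the pairwise dissimilarity matrix the abstract describes (the tiny two-bin histograms are illustrative only; real blocks would use full gray-level histograms):

```python
from math import log2

def jsd(p, q):
    """Jensen-Shannon Divergence between two normalized histograms:
    the symmetrized, smoothed Kullback-Leibler divergence, base 2,
    so the result lies in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # midpoint distribution

    def kl(a, b):
        # KL divergence; terms with a_i = 0 contribute nothing.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Pairwise dissimilarity matrix over periodic-block histograms;
# the third (anomalous) block stands out from the repeating pattern.
blocks = [[0.5, 0.5], [0.5, 0.5], [0.9, 0.1]]
d = [[jsd(p, q) for q in blocks] for p in blocks]
```

The matrix `d` is symmetric with a zero diagonal, which is what makes the subsequent conversion to true metrics and Ward's hierarchical clustering applicable: defect-free blocks cluster near zero mutual divergence while defective blocks separate out.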
Kun Peng
Vol. 8, No. 2, pp. 375-388, Jun. 2012
https://doi.org/10.3745/JIPS.2012.8.2.375
Keywords: ElGamal, PVSS
PVSS stands for publicly verifiable secret sharing. In PVSS, a dealer shares a secret among multiple shareholders. He encrypts the shares using the shareholders' encryption algorithms and publicly proves that the encrypted shares are valid. Most of the existing PVSS schemes do not employ an ElGamal encryption to encrypt the shares. Instead, they usually employ other encryption algorithms such as RSA and Paillier encryption. Those encryption algorithms do not allow the shareholders' encryption algorithms to employ the same decryption modulus. As a result, PVSS schemes based on those encryption algorithms must employ additional range proofs to guarantee the validity of the shares obtained by the shareholders. Although the shareholders can employ ElGamal encryptions with the same decryption modulus in PVSS, such that the range proof can be avoided, there are only two PVSS schemes based on ElGamal encryption, and both have drawbacks. One of them employs a costly repeating-proof mechanism, which needs to repeat the dealer's proof at least scores of times to achieve satisfactory soundness. The other requires that the dealer know the discrete logarithm of the secret to be shared, which weakens its generality and means that it cannot be employed in many applications. A new PVSS scheme based on an ElGamal encryption is proposed in this paper. It employs the same decryption modulus for all the shareholders' ElGamal encryption algorithms, so it does not need any range proof. Moreover, it is a general PVSS technique without any special limitation. Finally, an encryption-improving technique is proposed to achieve very high efficiency in the new PVSS scheme: it needs only a number of exponentiations in large cyclic groups that is linear in the number of shareholders, whereas all the existing PVSS schemes need at least a number of exponentiations in large cyclic groups that is linear in the square of the number of shareholders.