Digital Library
Vol. 8, No. 4, Dec. 2012
Christian Bunse, Yunja Choi, Hans Gerhard Gross
Vol. 8, No. 4, pp. 539-554, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.539
Keywords: Software Development Management, software reusability, modeling
Model-driven and component-oriented development is increasingly being used in the development of embedded systems. When combined, both paradigms provide several advantages, such as higher reuse rates and improved system quality. Model-driven and component-oriented development should be accompanied by a component model and a method that prescribes how the component model is used. This article provides an overview of the MARMOT method, which consists of an abstract component model and a methodology for the development of embedded systems. The paper describes a feasibility study that demonstrates MARMOT's capability to ease system design, verification, implementation, and reuse. Results indicate that model-driven and component-based development following the MARMOT method outperforms agile development for embedded systems, leads to maintainable systems, and achieves higher than normal reuse rates.
Byungsang Kim, Chan-Hyun Youn, Yong-Sung Park, Yonggyu Lee, Wan Choi
Vol. 8, No. 4, pp. 555-566, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.555
Keywords: Resource-Provisioning, Bio-Workflow Broker, Next-Generation Sequencing
The cloud environment makes it possible to analyze large data sets on a scalable computing infrastructure. In the bioinformatics field, applications are composed of complex workflow tasks that require huge data storage as well as a compute-intensive parallel workload. Many distributed solutions have been introduced. However, they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. In the case of a large-scale workflow system, it is inevitable and valuable to outsource all or part of its tasks to public clouds in order to reduce resource costs. Problems arise, however, from the transfer time of huge datasets as well as from the unbalanced completion times of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services for hiding the data transfer time. The proposed adaptive resource-provisioning scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted experiments with a well-known sequence alignment algorithm, and the results show that the proposed scheme is efficient for the cloud environment.
Sujata Mohanty, Banshidhar Majhi
Vol. 8, No. 4, pp. 567-574, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.567
Keywords: Designated Verifiable, Discrete Logarithm Problem, Chosen Ciphertext Attack, Nonrepudiation
This paper presents a strong designated verifiable signcryption scheme, in which a message is signcrypted by a signcryptor and only a specific receiver, called the "designated verifier", verifies it using his own secret key. The scheme is secure, as an adversary cannot verify the signature even if the secret key of the signer is compromised or leaked. The security of the proposed scheme lies in the complexity of solving two computationally hard problems, namely, the Discrete Logarithm Problem (DLP) and the Integer Factorization Problem (IFP). A security analysis of the scheme has been carried out, and it is proved that the proposed scheme can withstand an adaptive chosen ciphertext attack. The scheme can be very useful in organizations where there is a need to send confidential documents to a specific recipient, and it is also applicable to real-life scenarios such as e-commerce applications, e-banking, and e-voting.
Tsendsuren Munkhdalai, Meijing Li, Unil Yun, Oyun-Erdene Namsrai, Keun Ho Ryu
Vol. 8, No. 4, pp. 575-588, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.575
Keywords: Biomedical Named-Entity Recognition, Co-Training, Semi-supervised Learning, Feature Processing, Text Mining
Exploiting unlabeled text data with a relatively small labeled corpus has been an active and challenging research topic in text mining, due to the recent growth in the amount of biomedical literature. Biomedical named-entity recognition is an essential prerequisite task before effective text mining of biomedical literature can begin. This paper proposes an Active Co-Training (ACT) algorithm for biomedical named-entity recognition. ACT is a semi-supervised learning method in which two classifiers based on two different feature sets iteratively learn from informative examples that have been queried from the unlabeled data. We design a new classification problem to measure the informativeness of an example in the unlabeled data. In this classification problem, examples are classified, based on a joint view of the feature sets, as informative or non-informative to both classifiers. To form the training data for this classification problem, we adopt a query-by-committee method; therefore, in ACT, both classifiers are treated as one committee, which is used on the labeled data to assign an informativeness label to each example. The ACT method outperforms the traditional co-training algorithm in terms of F-measure as well as the number of training iterations needed to build a good classification model. The proposed method tends to exploit a large amount of unlabeled data efficiently by selecting a small number of examples that carry not only useful information but also a comprehensive pattern.
R. Sumathi, M. G. Srinivas
Vol. 8, No. 4, pp. 589-602, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.589
Keywords: Wireless Sensor Networks, Quality of Service, Reliability, Energy Efficiency, End-To-End Delay, Critical Data
With the increasing demand for real-time applications in Wireless Sensor Networks (WSNs), real-time critical events call for efficient quality-of-service (QoS) based routing for data delivery from the network infrastructure. Designing such a QoS-based routing protocol that meets the reliability and delay guarantees of critical events while preserving energy efficiency is a challenging task. Considerable research has focused on developing robust, energy-efficient QoS-based routing protocols. In this paper, we present the state of the research by summarizing the published work on QoS-based routing protocols and by highlighting the QoS issues that are being addressed. The performance of QoS-based routing protocols such as SAR, MMSPEED, MCMP, MCBR, and EQSR has also been compared using ns-2 for various parameters.
Hyon-Young Choi, Sung-Gi Min, Youn-Hee Han, Rajeev Koodli
Vol. 8, No. 4, pp. 603-620, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.603
Keywords: Flow Mobility, Proxy Mobile IPv6
Proxy Mobile IPv6 (PMIPv6) is a network-based mobility support protocol that does not require Mobile Nodes (MNs) to be involved in mobility support signaling. When multiple interfaces are active in an MN simultaneously, each data flow can be dynamically allocated to and redirected between different access networks to adapt to the dynamically changing network status and to balance the workload. Such flow redistribution control is called "flow mobility". In the existing PMIPv6-based flow mobility support, although the MN's logical interface can solve the well-known problems of flow mobility in a heterogeneous network, some missing procedures, such as an MN-derived flow handover, make PMIPv6-based flow mobility incomplete. In this paper, an enhanced flow mobility support is proposed for actualizing flow mobility support in PMIPv6. The proposed scheme is also based on the MN's logical interface, which hides the physical interfaces from the network layer and above. As new functional modules, a flow interface manager is placed at the MN's logical interface, and a flow binding manager in the Local Mobility Anchor (LMA) is paired with the MN's flow interface manager. They manage the flow bindings and select the proper access technology for sending packets. In this paper, we provide complete flow mobility procedures that begin with three different triggering cases: the MN's new connection/disconnection, the LMA's decision, and the MN's request. Simulation using the ns-3 network simulator is performed to verify the proposed procedures, and we show the network throughput variation caused by the network offload using the proposed procedures.
Geeta Nagpal, Moin Uddin, Arvinder Kaur
Vol. 8, No. 4, pp. 621-652, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.621
Keywords: Software Estimations, Estimation by Analogy, Grey Relational Analysis, Robust Regression, Data Mining Techniques
Software estimations provide an inclusive set of directives for software project developers, project managers, and management in order to produce more realistic estimates based on deficient, uncertain, and noisy data. A range of estimation models is being explored in industry, as well as in academia, for research purposes, but choosing the best model is quite intricate. Estimation by Analogy (EbA) is a form of case-based reasoning, which uses fuzzy logic, grey system theory, or machine-learning techniques for optimization. This research compares the estimation accuracy of some conventional data mining models with a hybrid model. Different data mining models are under consideration, including linear regression models like ordinary least squares and ridge regression, and nonlinear models like neural networks, support vector machines, and multivariate adaptive regression splines. A precise and comprehensible predictive model based on the integration of GRA and regression has been introduced and compared. Empirical results show that regression, when used with GRA, gives outstanding results, indicating that the methodology has great potential and can be used as a candidate approach for software effort estimation.
Ming Ma, Dong-Won Park, Soo Kyun Kim, Syungog An
Vol. 8, No. 4, pp. 653-668, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.653
Keywords: Online Handwriting Recognition, Hidden Markov Model, Stochastic Grammar, Hierarchical clustering, Position Verifier
In this study, an improved HMM-based recognition model is proposed for online English and Korean handwritten characters. The pattern elements of the handwriting model are sub-character strokes and ligatures. To deal with the problem of handwriting style variations, a modified hierarchical clustering approach is introduced to partition different writing styles into several classes. For each of the English letters and each primitive grapheme in Korean characters, one HMM that models the temporal and spatial variability of the handwriting is constructed for each class. The HMMs of Korean graphemes are then concatenated to form the Korean character models. The recognition of handwritten characters is implemented by a modified level-building algorithm, which incorporates the Korean character combination rules within an efficient network search procedure. Due to the limitations of the HMM-based method, a post-processing procedure that takes global and structural features into account is proposed. Experiments show that the proposed recognition system achieves a high writer-independent recognition rate on unconstrained samples of both English and Korean characters. A comparison with other HMM-based recognition schemes was also performed to evaluate the system.
Shubhada Ardhapurkar, Ramchandra Manthalkar, Suhas Gajre
Vol. 8, No. 4, pp. 669-684, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.669
Keywords: Kernel Density Estimation, Discrete Wavelet Transform, Probability Density Function (PDF), Signal to Noise Ratio
Discrete wavelet transforms are extensively preferred in biomedical signal processing for denoising, feature extraction, and compression. This paper presents a new denoising method based on modeling the discrete wavelet coefficients of the ECG in selected sub-bands with kernel density estimation. The modeling provides a statistical distribution of information and noise. A Gaussian kernel with bounded support is used for modeling the sub-band coefficients, and thresholds are estimated by placing a sliding window on a normalized cumulative density function. We evaluated this approach on offline noisy ECG records from the Cardiovascular Research Centre of the University of Glasgow and on records from the MIT-BIH Arrhythmia database. Results show that our proposed technique has a more reliable physical basis and provides improvements in the Signal-to-Noise Ratio (SNR) and Percentage RMS Difference (PRD). The morphological information of the ECG signals is found to be unaffected after denoising. This is quantified by calculating the mean square error (MSE) between the feature vectors of the original and denoised signals; MSE values are less than 0.05 in most cases.
Yungho Choi, Neungsoo Park
Vol. 8, No. 4, pp. 685-692, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.685
Keywords: H.264/AVC, Bi-Directional Prediction, Image Backtracking, Fast Algorithm, Mode Decision
The bi-directional predictions of B frames and the DIRECT mode coding of the H.264 video compression standard necessitate a complex mode decision process, resulting in a long computation time. To make H.264 feasible, this paper proposes an image backtrack-based fast (IBFD) algorithm and evaluates the performance of two promising fast algorithms (i.e., AFDM and IBFD). Evaluation results show that the IBFD algorithm can determine DIRECT mode macroblocks with 13% higher accuracy than the AFDM. Furthermore, IBFD is shown to reduce the motion estimation time of B frames by up to 23% with negligible quality degradation.
Kamal Sarkar, Mita Nasipuri, Suranjan Ghose
Vol. 8, No. 4, pp. 693-712, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.693
Keywords: Keyphrase Extraction, Decision Tree, Naive Bayes, Artificial Neural Networks, Machine Learning, WEKA
This paper presents three machine learning based keyphrase extraction methods that respectively use Decision Trees, Naïve Bayes, and Artificial Neural Networks for keyphrase extraction. We consider keyphrases to be phrases, consisting of one or more words, that represent the important concepts in a text document. The three machine learning based keyphrase extraction methods used in our experiments have been compared with a publicly available keyphrase extraction system called KEA. The experimental results show that the Neural Network based keyphrase extraction method outperforms the two methods that use the Decision Tree and Naïve Bayes. The results also show that the Neural Network based method performs better than KEA.
JaeHong Jeon, Min-Hyung Choi, Min Hong
Vol. 8, No. 4, pp. 713-720, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.713
Keywords: FFD-AABB Algorithm, Physically-Based Simulation, Deformable Objects, Collision Detection, Bounding Sphere
Unlike the FEM (Finite Element Method), which provides an accurate deformation of soft objects, FFD (Free Form Deformation) based methods have been widely used for a quick and responsive representation of deformable objects in real-time applications such as computer games, animations, and simulations. The FFD-AABB (Free Form Deformation Axis Aligned Bounding Box) algorithm was suggested to address the collision handling problem between deformable objects at an interactive rate. This paper proposes an enhanced FFD-AABB algorithm that improves the frame rate of simulation by adding a bounding sphere based collision test between 3D deformable objects. We provide a comparative analysis with previous methods, and the proposed method shows about an 85% performance improvement.
Danielle Lee, Peter Brusilovsky
Vol. 8, No. 4, pp. 721-738, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.721
Keywords: Job recommendation, explicit preference, implicit preference, personalized information retrieval
The Internet has become an increasingly important source for finding the right employees, so more and more companies post their job openings on the Web. The large amount and dynamic nature of career recruiting information causes information overload problems for job seekers. To assist Internet users in searching for the right job, a range of research and commercial systems has been developed over the past 10 years. Surprisingly, the majority of existing job search systems support just one, rarely two, ways of information access. In contrast, our work focused on exploring the value of comprehensive access to job information in a single system (i.e., a system that supports multiple ways). We designed Proactive, a recommendation system providing comprehensive and personalized information access. To serve the varied needs of users, Proactive has four information retrieval methods: a navigable list of jobs, keyword-based search, implicit preference-based recommendations, and explicit preference-based recommendations. This paper introduces Proactive and reports the results of a study focusing on the experimental evaluation of these methods. The goal of the study was to assess whether all of the methods are necessary for users to find relevant jobs and to what extent different methods can meet different users' information requirements.
Koohong Kang
Vol. 8, No. 4, pp. 739-748, Dec. 2012
https://doi.org/10.3745/JIPS.2012.8.4.739
Keywords: IEEE 802.11 DCF, Wireless LAN, Throughput and Delay Performance
We propose an analytic model to compute a station's saturated throughput and packet delay performance in the IEEE 802.11 DCF (Distributed Coordination Function) when frame transmission error rates in the channel differ from station to station. Our analytic model shows that a station experiencing worse frame error rates than the others suffers severe performance degradation below its deserved throughput and delay performance. 802.11 DCF adopts an exponential back-off scheme: when some stations suffer from high frame error rates, their back-off stages increase, so that the other stations benefit from the smaller collision probabilities. This impact then recursively degrades the performance of the victim stations further. In particular, we show that the performance is considerably degraded even if the frame error rate of the victim station satisfies the receiver input level sensitivity specified in the IEEE 802.11 standard. We also verify the analytic results with OPNET simulations.