Vol. 14, No. 4, Aug. 2018
Thyamagondlu Renukamurthy Jayanthi Kumari, Haradagere Siddaramaiah Jayanna
Vol. 14, No. 4, pp. 807-823, Aug. 2018
Keywords: Gaussian Mixture Model (GMM), GMM-UBM, Multiple Frame Rate (MFR), Multiple Frame Size (MFS), MFSR, SFSR
Abstract: Speaker verification performance depends on the utterances available for each speaker, from which the important speaker-specific information must be captured. Under limited-data conditions, speaker verification becomes a challenging task: the training and testing data amount to only a few seconds, and the feature vectors extracted by single frame size and rate (SFSR) analysis are not sufficient for training and testing speakers. This leads to poor speaker modeling during training and unreliable decisions during testing. The problem can be addressed by extracting more feature vectors from the same duration of training and testing data. To that end, we use multiple frame size (MFS), multiple frame rate (MFR), and multiple frame size and rate (MFSR) analysis techniques for speaker verification under limited-data conditions. These techniques extract relatively more feature vectors during training and testing and thus yield improved modeling and testing with limited data. To demonstrate this, we use mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) as features, and the Gaussian mixture model (GMM) and GMM-universal background model (GMM-UBM) for speaker modeling, on the NIST-2003 database. The experimental results indicate that MFS, MFR, and MFSR analysis perform markedly better than SFSR analysis, and that LPCC-based MFSR analysis outperforms the other analysis and feature extraction techniques.
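To illustrate why multi-resolution analysis yields more training data, the following sketch (illustrative only; the sampling rate, frame sizes, and shift are assumed values, not those used in the paper) counts how many feature vectors MFS analysis can extract from a short utterance compared with a single frame size:

```python
import numpy as np

def frame_count(n_samples, frame_len, shift):
    """Number of full frames obtainable from a signal."""
    if n_samples < frame_len:
        return 0
    return 1 + (n_samples - frame_len) // shift

# 3 seconds of speech at 8 kHz (a limited-data condition)
n = 3 * 8000
shift = 80  # 10 ms frame shift

# SFSR: a single 20 ms frame size
sfsr_vectors = frame_count(n, 160, shift)

# MFS: several frame sizes (16, 20, 24, 32 ms) pooled together
mfs_vectors = sum(frame_count(n, fl, shift) for fl in (128, 160, 192, 256))

print(sfsr_vectors, mfs_vectors)  # MFS yields roughly 4x as many vectors
```

Each frame size produces its own MFCC/LPCC vectors, so pooling them multiplies the amount of data available for GMM training on the same few seconds of speech.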
Alim Murat, Azharjan Yusup, Zulkar Iskar, Azragul Yusup, Yusup Abaydulla
Vol. 14, No. 4, pp. 824-836, Aug. 2018
Keywords: Feature Combination, Lexical Semantics, Morphosyntax, Temporal Expression Extraction, Uyghur
Abstract: The automatic extraction of temporal information from written texts is a key component of question answering and summarization systems, and the efficacy of those systems depends heavily on whether temporal expressions (TEs) are successfully extracted. In this paper, three different approaches for TE extraction in Uyghur are developed and analyzed. A novel approach that uses lexical semantics as additional information is also presented to extend classical approaches, which are mainly based on morphology and syntax. We used a manually annotated news dataset labeled with TIMEX3 tags and generated three models with different feature combinations. The experimental results show that the best run achieved 0.87 precision, 0.89 recall, and 0.88 F1-measure in Uyghur TE extraction. From the analysis of the results, we conclude that applying semantic knowledge resolves ambiguity problems at shallower levels of language analysis and significantly aids the development of a more efficient Uyghur TE extraction system.
Ubaidullah Rajput, Fizza Abbas, Heekuck Oh
Vol. 14, No. 4, pp. 837-850, Aug. 2018
Keywords: Crypto-currency, Bitcoin, Transaction Malleability
Abstract: Bitcoin is a decentralized crypto-currency based on a peer-to-peer network, introduced by Satoshi Nakamoto in 2008. Bitcoin transactions are written in a scripting language, and the hash value of a transaction’s script is used to identify the transaction over the network. In February 2014, the Bitcoin exchange company Mt. Gox claimed to have lost hundreds of millions of US dollars’ worth of Bitcoins in an attack known as transaction malleability. Although the issue had been known since 2011, this was the first attack that resulted in a company losing multi-millions of US dollars in Bitcoins. Our reason for writing this paper is to understand Bitcoin transaction malleability and to propose an efficient solution. Our solution is a soft fork (i.e., it can be implemented gradually). Towards the end of the paper we present a detailed analysis of our scheme with respect to various transaction malleability-based attack scenarios, showing that our simple solution can prevent future incidents involving transaction malleability. We compare our scheme with existing approaches and analyze the computational cost and storage requirements of our proposed solution, which shows its feasibility.
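The core of the malleability problem can be sketched in a few lines: a transaction ID is the double-SHA256 hash of the full serialization, which includes signature bytes that can be re-encoded by a third party without invalidating the signature. The byte strings below are a toy stand-in for a real serialized transaction, not the actual Bitcoin wire format:

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    """Bitcoin identifies a transaction by the double-SHA256 of its serialization."""
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

# Toy transaction: the scriptSig bytes are covered by the txid but are not
# themselves signed, so an attacker can re-encode them (e.g. add a push op)
# without invalidating the signature -- yet the txid changes.
tx_original  = b"inputs|scriptSig=3045...SIG|outputs"
tx_malleated = b"inputs|scriptSig=00,3045...SIG|outputs"

print(txid(tx_original) != txid(tx_malleated))  # True: same effect, new txid
```

This is why a victim watching only for the original txid (as Mt. Gox reportedly did) can be tricked into believing a confirmed payment never happened.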
Imen Boulnemour, Bachir Boucheham
Vol. 14, No. 4, pp. 851-876, Aug. 2018
Keywords: Alignment, Comparison, Diagnosis, DTW, Motif Discovery, Pattern Recognition, SEA, Similarity Search, Time Series
Abstract: Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) and its derivatives, no other technique can handle the alignment of this type of very complex time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution consists in elevating the alignment power of DTW from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method (quasi-periodic dynamic time warping [QP-DTW]) was compared to both the SEA and DTW methods on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy on both qualitative and quantitative levels. Therefore, QP-DTW would potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, motif discovery, classification, etc.).
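For reference, the classical DTW that QP-DTW builds upon can be sketched as follows (the textbook dynamic-programming formulation, not the authors' QP-DTW itself):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping with unit step sizes and absolute cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, shifted in time
print(dtw_distance(a, b))  # 0.0: DTW absorbs the temporal shift
```

DTW handles such local shifts well, but on quasi-periodic signals like ECG it can pathologically warp one period onto several, which is the failure mode SEA-style reordering (and hence QP-DTW) addresses.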
MinhPhuoc Hong, Kyoungsu Oh
Vol. 14, No. 4, pp. 877-891, Aug. 2018
Keywords: Motion-Blurred Shadows, Real-Time Rendering, Space-Time Visibility
Abstract: In this paper, we propose a novel algorithm for rendering motion-blurred shadows utilizing a depth-time ranges shadow map. First, we render a scene from a light source to generate a shadow map. For each pixel in the shadow map, we store a list of depth-time ranges. Each range has two points defining a period where a particular geometry was visible to the light source and two distances from the light. Next, we render the scene from the camera to perform shadow tests. With the depths and times of each range, we can easily sample the shadow map at a particular receiver and time. Our algorithm runs entirely on GPUs and solves various problems encountered by previous approaches.
Jun Huang, Xiuhui Wang, Jun Wang
Vol. 14, No. 4, pp. 892-903, Aug. 2018
Keywords: Gait Recognition, Feature Fusion, Gabor Wavelets, GEI, KPCA
Abstract: The paper proposes a novel gait recognition algorithm based on the feature fusion of the gait energy image (GEI) dynamic region and Gabor wavelets, which consists of four steps. First, the gait contour images are extracted through object detection, binarization, and morphological processing. Secondly, features of the GEI at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then an averaging method is adopted to fuse the GEI dynamic-region features with the Gabor wavelet features on the feature layer, and the dimension of the feature space is reduced by an improved kernel principal component analysis (KPCA). Finally, the fused feature vectors are input into a multi-class support vector machine (SVM) to classify and recognize the gait. The primary contributions of the paper are: a novel gait recognition algorithm based on the feature fusion of GEI and Gabor features is proposed; an improved KPCA method is used to reduce the dimension of the feature matrix; and an SVM is employed to identify the gait sequences. The experimental results show that the proposed algorithm yields a correct classification rate of over 90%, which demonstrates that the method distinguishes different human gaits and achieves better recognition than other existing algorithms.
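The GEI itself is simply the pixel-wise average of aligned binary silhouettes over a gait cycle; the toy 4x4 silhouettes below are invented for illustration and show how the dynamic region (the part the paper extracts features from) can be isolated:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise average of aligned binary silhouettes over one gait cycle."""
    return np.mean(np.stack(silhouettes, axis=0), axis=0)

# Three toy 4x4 binary silhouettes from one gait cycle
s1 = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,1,1,0]], dtype=float)
s2 = np.array([[0,1,1,0],[0,1,1,0],[1,1,1,0],[1,0,1,0]], dtype=float)
s3 = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,1],[0,1,0,1]], dtype=float)

gei = gait_energy_image([s1, s2, s3])
# Static body regions (always foreground) keep value 1.0; dynamic regions
# (the moving legs) take intermediate values, which is how the dynamic part
# of the GEI is located before Gabor features are extracted from it.
dynamic_mask = (gei > 0) & (gei < 1)
print(gei[0])  # [0. 1. 1. 0.] -- the torso row is static
```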
Vol. 14, No. 4, pp. 904-915, Aug. 2018
Keywords: Google Index, Prediction, Sentiment Analysis, Social Media, Unemployment Rate
Abstract: We demonstrate how social media content can be used to predict the unemployment rate, a real-world indicator. We present a novel method for predicting the unemployment rate using social media analysis based on natural language processing and statistical modeling. The system collects social media contents including news articles, blogs, and tweets written in Korean, and then extracts data for modeling using part-of-speech tagging and sentiment analysis techniques. The autoregressive integrated moving average with exogenous variables (ARIMAX) and autoregressive with exogenous variables (ARX) models for unemployment rate prediction are fit using the analyzed data. The proposed method quantifies the social moods expressed in social media contents, whereas the existing methods simply present social tendencies. Our model derived a 27.9% improvement in error reduction compared to a Google Index-based model in the mean absolute percentage error metric.
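An ARX model of the kind described reduces to ordinary least squares once the lagged terms are laid out as regressors. The sketch below (synthetic data and coefficients chosen for illustration, not the paper's fitted model) recovers the parameters of an ARX(1) process in which a "sentiment index" acts as the exogenous variable:

```python
import numpy as np

def fit_arx(y, x, p=1):
    """Least-squares fit of an ARX(p) model:
       y[t] = c + a1*y[t-1] + ... + ap*y[t-p] + b*x[t] + e[t]"""
    rows = [np.concatenate(([1.0], y[t - p:t][::-1], [x[t]]))
            for t in range(p, len(y))]
    A = np.array(rows)
    coef, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    return coef  # [c, a1, ..., ap, b]

# Toy data: an unemployment-like series driven by its own lag plus sentiment x
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 + 0.8 * y[t - 1] + 0.3 * x[t]

c, a1, b = fit_arx(y, x, p=1)
print(round(c, 2), round(a1, 2), round(b, 2))  # 0.5 0.8 0.3
```

ARIMAX adds differencing and moving-average terms on top of this same regression structure.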
Mingyue Zhang, Jin Shi, Jin Wang, Chang Liu
Vol. 14, No. 4, pp. 916-925, Aug. 2018
Keywords: Academic Evaluation Indicators, Citation Analysis, Citation Impact, Citation Quality
Abstract: Academic research performance is often measured quantitatively using citation frequency. Citation frequency-based indicators, such as the h-index and the impact factor, are commonly used and reflect citation quality to some extent. However, these frequency-based indicators are usually computed under the assumption that all citations are equal. This may lead to biased evaluations, since the attributes of the citing and cited objects are significant. A higher-accuracy evaluation method is therefore needed. In this paper, we review various citation quality-based evaluation indicators and categorize them according to the algorithms they apply. We discuss the pros and cons of these indicators and compare them along four dimensions. The outcomes will be useful for our further research on distinguishing citation quality.
Boseon Yu, Wonik Choi, Taikjin Lee, Hyunduk Kim
Vol. 14, No. 4, pp. 926-940, Aug. 2018
Keywords: CACD, Clustering, EEUC, Node Distribution, WSN
Abstract: In clustering-based approaches, cluster heads closer to the sink are usually burdened with much more relay traffic and thus tend to die early. To address this problem, distance-aware clustering approaches, such as energy-efficient unequal clustering (EEUC), adjust the cluster size according to the distance between the sink and each cluster head. However, the network lifetime of such approaches is highly dependent on the distribution of the sensor nodes, because in randomly distributed sensor networks they do not guarantee that cluster energy consumption will be proportional to cluster size. To address this problem, we propose a novel approach called CACD (Clustering Algorithm Considering node Distribution), which is not only distance-aware but also node-density-aware. In CACD, each cluster is limited to a number of member nodes determined by the distance between the sink and the cluster head. Simulation results show that, in terms of network lifetime, CACD is 20%–50% more energy-efficient than previous work under various operational conditions.
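The distance-aware idea underlying EEUC-style schemes can be sketched with the usual unequal-clustering radius formula; the numbers and the weighting constant `c` below are assumed for illustration, and CACD's additional node-count cap is not modeled here:

```python
def competition_radius(d, d_min, d_max, r_max, c=0.5):
    """EEUC-style unequal-clustering radius: cluster heads closer to the sink
    get a smaller radius, preserving their energy for relaying traffic."""
    return (1 - c * (d_max - d) / (d_max - d_min)) * r_max

# Cluster heads at 20 m and 100 m from the sink, maximum radius 40 m
near = competition_radius(20, 20, 100, 40)
far = competition_radius(100, 20, 100, 40)
print(near, far)  # 20.0 40.0 -- near-sink clusters stay small
```

CACD's refinement is that, under random deployment, a small radius does not guarantee few members, so it bounds the member count itself as a function of the distance to the sink.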
Feng Liu, Shuangbao Ma
Vol. 14, No. 4, pp. 941-947, Aug. 2018
Keywords: Nuclear Magnetic Resonance Logging, Signal to Noise Ratio, Spin Echo Train, Wavelet transform
Abstract: Since the amplitudes of the spin echo train in nuclear magnetic resonance logging (NMRL) are small and the signal-to-noise ratio (SNR) is very low, this paper puts forward an improved de-noising algorithm based on the wavelet transform. The steps of the improved algorithm are designed and realized based on the characteristics of the spin echo train in NMRL. To test the algorithm, a 32-point forward model of large porosity is built, and spin echo signals with adjustable SNR are generated from this model in an experiment. Median filtering, wavelet hard-threshold de-noising, wavelet soft-threshold de-noising, and the improved algorithm are then compared on these signals; the filtering effects of the four algorithms are analyzed, and the SNR and root mean square error (RMSE) are calculated. The experimental results show that the improved de-noising algorithm can raise the SNR from 10 to 27.57, which is very useful for enhancing the signal and suppressing the noise of the spin echo train in NMRL.
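The hard- and soft-thresholding rules the paper compares against operate on wavelet coefficients as follows (a generic sketch of the standard rules, not the authors' improved algorithm; the coefficient values are invented):

```python
import numpy as np

def hard_threshold(c, t):
    """Zero coefficients below the threshold; keep the rest unchanged."""
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    """Zero small coefficients and shrink the rest toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def snr_db(clean, denoised):
    """SNR in decibels of a de-noised signal against the clean reference."""
    return 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean)**2))

coeffs = np.array([5.0, -4.0, 0.3, -0.2, 0.1])
hard = hard_threshold(coeffs, 1.0)  # large coefficients survive intact
soft = soft_threshold(coeffs, 1.0)  # survivors are also shrunk by 1.0
print(hard, soft)
```

Hard thresholding preserves sharp echo amplitudes but leaves residual noise spikes; soft thresholding is smoother but biases amplitudes downward, which is the trade-off improved schemes try to balance.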
Muhammad Fiqri Muthohar, I Gde Dharma Nugraha, Deokjai Choi
Vol. 14, No. 4, pp. 948-960, Aug. 2018
Keywords: Adaptive Sampling, Android Mobile Sensing Framework, Significant Motion Sensor
Abstract: Many mobile sensing frameworks have been developed to help researchers conduct mobile sensing studies. However, energy consumption remains an issue in mobile sensing research, and the existing frameworks do not adequately address it. We surveyed several mobile sensing frameworks and carefully chose one to improve. We designed an adaptive sampling module for the framework to help reduce energy consumption; in this study, we limit the design to the location and motion sensors. In our adaptive sampling module, we utilize the significant motion sensor to assist the adaptive sampling. We experimented with two sampling strategies that utilize the significant motion sensor to achieve low power consumption during continuous sampling. The first strategy uses the sensor naively, while the second adds a duty cycle to the naive approach. We show that both strategies achieve low energy consumption, but the one combined with the duty cycle achieves the better result.
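The benefit of adding a duty cycle can be seen with a back-of-the-envelope energy model; the power figures and cycle lengths below are hypothetical values for illustration, not measurements from the study:

```python
def sensing_energy(duration_s, active_s, sleep_s, active_mw, sleep_mw):
    """Energy (mJ) of duty-cycled sensing: the sensor alternates between an
    active window and a sleep window, drawing different power in each."""
    cycle = active_s + sleep_s
    duty = active_s / cycle
    avg_power_mw = duty * active_mw + (1 - duty) * sleep_mw
    return avg_power_mw * duration_s  # mW * s = mJ

# One hour of sampling; 30 mW while sampling, 1 mW while asleep (assumed)
continuous = sensing_energy(3600, 1, 0, 30.0, 1.0)   # sensor always on
duty_cycled = sensing_energy(3600, 1, 9, 30.0, 1.0)  # 10% duty cycle
print(continuous, duty_cycled)  # duty cycling cuts energy by ~87%
```

The significant motion sensor complements this by waking the expensive location/motion sampling only when the device actually moves, so the sleep windows dominate in practice.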
Aniket Bhatnagar, Varun Gambhir, Manish Kumar Thakur
Vol. 14, No. 4, pp. 961-979, Aug. 2018
Keywords: Bipartite Matching, Combinatorial Optimization, Evolutionary Computing, Genetic Algorithm, Matrimonial Websites, Stable Marriage Problem
Abstract: For many years, matching in a bipartite graph has been widely used in various assignment problems, such as the stable marriage problem (SMP). As an application of bipartite matching, the stable marriage problem is defined over equally sized sets of men and women, the goal being a stable matching in which each person is assigned a partner of the opposite gender according to their preferences. The classical SMP proposed by Gale and Shapley uses preference lists for each individual (men and women), which is infeasible in real-world applications with a large populace of men and women, such as matrimonial websites. In this paper, we propose an enhancement to the SMP that computes a weighted score for the users registered at matrimonial websites. The proposed enhancement is formulated as profit maximization for matrimonial websites in terms of their ability to provide a suitable match for their users. This formulation leads to a combinatorial optimization problem, for which we propose greedy and genetic algorithm-based approaches. We show that the proposed genetic algorithm-based approaches outperform the existing Gale-Shapley algorithm on a dataset crawled from matrimonial websites.
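For reference, the baseline Gale-Shapley deferred-acceptance algorithm that the paper compares against can be implemented directly from its definition (a textbook sketch with toy preference lists, not the paper's weighted-score variant):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley deferred acceptance; men propose."""
    free = list(men_prefs)                    # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}   # index into each man's list
    engaged = {}                              # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]      # m's best woman not yet tried
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m; he tries again
    return engaged

men = {"A": ["x", "y"], "B": ["x", "y"]}
women = {"x": ["B", "A"], "y": ["A", "B"]}
print(gale_shapley(men, women))  # {'x': 'B', 'y': 'A'} -- a stable matching
```

The full preference lists this algorithm requires are exactly what becomes infeasible at matrimonial-website scale, motivating the paper's weighted-score formulation.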
Yunsick Sung, Ryong Choi, Young-Sik Jeong
Vol. 14, No. 4, pp. 980-988, Aug. 2018
Keywords: Bayesian Probability, HTC VIVE, Motion Estimation, NUI/NUX, Thalmic Myo
Abstract: Motion estimation is a key Natural User Interface/Natural User Experience (NUI/NUX) technology for utilizing motions as commands. The HTC VIVE is an excellent device for estimating motions but only considers the positions of the hands, not the orientations of the arms. Even if the positions of the hands are the same, the meaning of a motion can differ according to the orientation of the arms. Therefore, when the positions of the arms are measured and utilized, their orientations should be estimated as well. This paper proposes a method for estimating arm orientations based on the Bayesian probability of hand positions measured in advance. In experiments, the proposed method was used to measure hand positions with the HTC VIVE. The results showed that the proposed method estimated orientations with an error rate of about 19%, but it demonstrated the possibility of estimating the orientation of any body part without additional devices.
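The Bayesian idea can be sketched as a simple maximum-a-posteriori estimate over discretized hand positions; the orientation labels, position bins, and training pairs below are invented for illustration and do not reflect the paper's actual measurement setup:

```python
from collections import Counter, defaultdict

def estimate_orientation(observations, position):
    """MAP estimate of arm orientation given a hand position, via Bayes' rule:
       P(orient | pos) ~ P(pos | orient) * P(orient), learned from counts."""
    prior = Counter(o for o, _ in observations)
    likelihood = defaultdict(Counter)
    for o, p in observations:
        likelihood[o][p] += 1
    total = sum(prior.values())

    def posterior(o):
        return (prior[o] / total) * (likelihood[o][position] / prior[o])

    return max(prior, key=posterior)

# Toy training pairs: (arm orientation, discretized hand position)
data = [("up", "high"), ("up", "high"), ("up", "mid"),
        ("forward", "mid"), ("forward", "mid"), ("forward", "low")]
print(estimate_orientation(data, "mid"))  # forward
```

With positions binned finely enough, the same counting scheme extends to any body part whose pose correlates with a tracked position.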
An Efficient Implementation of Mobile Raspberry Pi Hadoop Clusters for Robust and Augmented Computing Performance
Kathiravan Srinivasan, Chuan-Yu Chang, Chao-Hsi Huang, Min-Hao Chang, Anant Sharma, Avinash Ankur
Vol. 14, No. 4, pp. 989-1009, Aug. 2018
Keywords: Clusters, Hadoop, MapReduce, Mobile Raspberry Pi, Single-board Computer
Abstract: Rapid advances in science and technology, with exponential development of smart mobile devices, workstations, supercomputers, smart gadgets, and network servers, have been witnessed over the past few years. The sudden increase in the Internet population and manifold growth in Internet speeds have occasioned the generation of an enormous amount of data, now termed ‘big data’. Given this scenario, the storage of data on local servers or a personal computer is an issue that can be resolved by utilizing cloud computing, and several cloud computing service providers are now available to address big data issues. This paper establishes a framework that builds Hadoop clusters on the new single-board computer (SBC) Mobile Raspberry Pi; these clusters offer facilities for storage as well as computing. Regular data centers require large amounts of energy for operation, need cooling equipment, and occupy prime real estate. These energy consumption and physical space constraints can be addressed by employing a Mobile Raspberry Pi with Hadoop clusters, which provides a cost-effective, low-power, high-speed solution along with micro-data-center support for big data. Hadoop provides the required modules for the distributed processing of big data by deploying MapReduce programming approaches. In this work, we compared the performance of SBC clusters with that of a single computer. The experimental data show that the SBC clusters outperform a single computer by around 20%. Furthermore, the cluster processing speed for large volumes of data can be enhanced by increasing the number of SBC nodes. Data storage is accomplished using the Hadoop Distributed File System (HDFS), which offers more flexibility and greater scalability than a single computer system.
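The MapReduce programming model that Hadoop distributes across the Pi nodes can be illustrated with the canonical word-count job, simulated here on a single machine (an in-process sketch of the map/shuffle/reduce phases, not actual Hadoop API code):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Mapper: emit a (word, 1) pair for every word in an input line."""
    return [(w, 1) for w in line.split()]

def reduce_phase(pairs):
    """Reducer: after the shuffle groups pairs by key, sum each key's counts."""
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value
    return dict(grouped)

lines = ["big data on small boards", "small boards big ambitions"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts["big"], counts["small"], counts["boards"])  # 2 2 2
```

On the cluster, Hadoop runs many mapper and reducer instances in parallel over HDFS blocks, which is why adding SBC nodes raises throughput on large inputs.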
Sheik Mohammad Mostakim Fattah, Ilyoung Chong
Vol. 14, No. 4, pp. 1010-1032, Aug. 2018
Keywords: Internet of Things, Service Composition, Web of Objects
Abstract: Recent advances in medical science have made people live longer, which has affected many aspects of life, such as caregiver burden, the increasing cost of healthcare, and the increasing number of disabled and depressive-disorder persons. Researchers are now focused on elderly living assistance services in smart home environments. In recent years, assisted living technologies have grown rapidly due to a fast-aging society. Many smart devices are now interconnected within the home network, and such a home setup supports collaborations between those devices based on the Internet of Things (IoT). One of the major challenges in providing elderly living assistance services is accommodating each individual’s different needs. To solve this, the virtualization of physical things, as well as the collaboration and composition of the services these physical things provide, should be considered. To meet these challenges, the Web of Objects (WoO) focuses on the implementation aspects of IoT to bring assorted real-world objects into web applications. We propose a semantic modelling technique for manual and semi-automated service composition. The aim of this work is to propose a framework that enables RESTful web service composition using semantic ontology for creating elderly living assistance services in a WoO-based smart home environment.
Maman Abdurohman, Aji Gautama Putrada, Sidik Prabowo, Catur Wirawan Wijiutomo, Asma Elmangoush
Vol. 14, No. 4, pp. 1033-1048, Aug. 2018
Keywords: M2M, Platform, OpenMTC, Sensor and Lighting
Abstract: This paper proposes an integrated lighting enabler system (ILES) based on standard machine-to-machine (M2M) platforms. The system provides common services of end-to-end M2M communication for a smart lighting system. It is divided into two sub-systems: the end-device system and the server system. On the server side, the M2M platform OpenMTC is used to receive data from the sensors and send responses to activate the actuators. In the end-device system, a programmable smart lighting device is connected to the actuators and sensors to communicate their data to the server. Experiments were conducted to prove the system concept. The results show that the proposed integrated lighting enabler system effectively reduces power consumption by 25.22% on average, and the statistical significance of this reduction is verified using the Wilcoxon method.
Measuring the Degree of Content Immersion in a Non-experimental Environment Using a Portable EEG Device
Nam-Ho Keum, Taek Lee, Jung-Been Lee, Hoh Peter In
Vol. 14, No. 4, pp. 1049-1061, Aug. 2018
Keywords: Automated Collection, BCI, Measurement of Immersion, Noise Filtering, Non-experimental Environment, Portable EEG
Abstract: As mobile devices such as smartphones and tablet PCs become more popular, users are becoming accustomed to consuming a massive amount of multimedia content every day without time or space limitations. The need to investigate user satisfaction has consequently emerged from the industry. Conventional methods of investigating user satisfaction usually employ user feedback surveys or interviews, which are considered manual, subjective, and inefficient. Therefore, the authors focus on a more objective method: investigating users’ brainwaves to measure how much they enjoy their content. Particularly for multimedia content, it is natural that users will be immersed in the played content if they are satisfied with it. In this paper, the authors propose a method of using a portable, dry electroencephalogram (EEG) sensor device to overcome the limitations of the existing conventional methods and to further advance existing EEG-based studies. The proposed method uses a portable EEG sensor device with a small, dry (i.e., not wet or adhesive), single-channel sensor, because the authors assume mobile device environments in which users consider portability and usability important. This paper presents how to measure attention and compute a score for a user’s content immersion level, after addressing some technical details of adopting the portable EEG sensor device. Lastly, via an experiment, the authors verified a meaningful correlation between the computed scores and the actual user satisfaction scores.
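A common way to turn a single-channel EEG stream into an attention-like score is to compare spectral band powers; the band choices, the score formula, and the synthetic signals below are illustrative assumptions, not the authors' exact computation:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of a signal within a frequency band, computed via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def attention_score(signal, fs):
    """Toy immersion proxy: beta-band power relative to theta + alpha power."""
    beta = band_power(signal, fs, 13, 30)
    slow = band_power(signal, fs, 4, 13)
    return beta / (beta + slow)

fs = 256
t = np.arange(0, 2, 1 / fs)
focused = np.sin(2 * np.pi * 20 * t)  # dominant beta rhythm (20 Hz)
relaxed = np.sin(2 * np.pi * 10 * t)  # dominant alpha rhythm (10 Hz)
print(attention_score(focused, fs) > attention_score(relaxed, fs))  # True
```

A real pipeline would add the noise filtering the paper emphasizes (blink and motion artifacts are severe on dry single-channel sensors) before computing any such score.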