Cloud computing, also known as “pay as you go”, turns any computer into a dematerialized architecture in which users can access different services. As the number of stakeholders and beneficiaries grows daily, load imbalance between the virtual machines of data centers in a cloud environment degrades performance, wasting hardware resources and reducing the profitability of the software. Our research focuses on load balancing between a data center’s virtual machines. Its purpose is to reduce the degree of load imbalance between those machines in order to solve the problems caused by this technological evolution and to ensure a higher quality of service. Our article focuses on two main phases: the pre-classification of tasks according to the requested resources, and the classification of tasks into levels (‘odd levels’ or ‘even levels’) in ascending order based on the “Bat algorithm” meta-heuristic. Task allocation is based on the levels provided by the bat algorithm and on our mathematical functions, and we divide our system into a number of virtual machines with nearly equal performance. In addition, we propose different classes of virtual machines, under the condition that each class contains machines with similar characteristics, in contrast to the existing binary search scheme.
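To make the idea concrete, the following is a minimal, illustrative sketch (not the authors' exact algorithm) of applying the bat meta-heuristic to a task-to-VM assignment, with load imbalance as the fitness to minimize. The task lengths, discretization scheme, and all parameter values are hypothetical.

```python
# Hedged sketch: bat algorithm searching for a task-to-VM assignment that
# minimizes load imbalance. Continuous positions are rounded to VM indices.
import numpy as np

rng = np.random.default_rng(0)

def imbalance(assignment, task_load, n_vms):
    # Fitness: standard deviation of per-VM load; lower means better balance.
    loads = np.bincount(assignment, weights=task_load, minlength=n_vms)
    return loads.std()

def bat_assign(task_load, n_vms, n_bats=20, iters=200,
               f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    n_tasks = len(task_load)
    # Continuous positions in [0, n_vms); truncation gives a discrete assignment.
    pos = rng.uniform(0, n_vms, (n_bats, n_tasks))
    vel = np.zeros_like(pos)
    loud = np.ones(n_bats)        # loudness A_i
    pulse = np.zeros(n_bats)      # pulse emission rate r_i
    to_assign = lambda p: np.clip(p, 0, n_vms - 1e-9).astype(int)
    fit = np.array([imbalance(to_assign(p), task_load, n_vms) for p in pos])
    best = pos[fit.argmin()].copy()
    for t in range(iters):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            vel[i] += (pos[i] - best) * freq
            cand = pos[i] + vel[i]
            if rng.random() > pulse[i]:
                # Local random walk around the current best solution.
                cand = best + 0.01 * rng.standard_normal(n_tasks)
            f_new = imbalance(to_assign(cand), task_load, n_vms)
            if f_new <= fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, f_new
                loud[i] *= alpha
                pulse[i] = 1 - np.exp(-gamma * t)
            if f_new <= fit.min():
                best = cand.copy()
    return to_assign(best)

# Hypothetical workload: 50 tasks on 5 VMs.
tasks = rng.integers(1, 100, 50).astype(float)
print(bat_assign(tasks, 5))
```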
The multivariate finite mixture model is becoming increasingly popular in image processing. Performing image denoising from image patches to the whole image has been widely studied and applied. However, a problem remains: structural information is often lost when a patch is transformed into vector form. In this paper, we study the operator that extracts patches from an image and then transforms them into vector form. We find that some pixels that are continuous in the image patches become discontinuous in the vector. Because of this poor noise resistance and loss of structural information, we propose a new operator that may preserve more information when extracting image patches. We compare the new operator with the old one by performing image denoising with the Expected Patch Log Likelihood (EPLL) method, and we obtain better results in both visual quality and PSNR.
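A minimal sketch of the standard patch-extraction operator the paragraph refers to may help: sliding patches are pulled from an image and flattened to vectors, so vertically adjacent pixels end up far apart in the vector. The patch size and the toy image are hypothetical.

```python
# Standard operator: extract overlapping patches and flatten row-major.
import numpy as np

def extract_patches(image, p):
    """Extract all overlapping p x p patches and flatten each to a vector."""
    h, w = image.shape
    patches = [image[i:i+p, j:j+p].ravel()       # row-major flattening
               for i in range(h - p + 1)
               for j in range(w - p + 1)]
    return np.stack(patches)                      # shape: (n_patches, p*p)

img = np.arange(25, dtype=float).reshape(5, 5)
P = extract_patches(img, 3)
# In the image, pixels (0,0) and (1,0) are vertical neighbours, but in the
# flattened vector they sit at indices 0 and 3: the continuity is lost.
print(P[0])
```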
The smart city is currently a major direction of development, and the automatic management of instrumentation is one of its tasks. Because a city contains a great deal of old instrumentation that cannot be replaced promptly, how to perform a low-cost transformation with the Internet of Things (IoT) becomes a problem. This article presents a low-cost method for identifying code wheel instrument readings. The method can effectively convert the image information into digital information. Because it requires neither a large amount of memory nor complicated calculation, it can be deployed on a cheap microcontroller unit (MCU) with little read-only memory (ROM). Test results are given at the end of this article. Using this method to modify old instrumentation can achieve the automatic management of instrumentation and can help build a smart city.
The problem of how to implement the software testing process has come under the spotlight in recent times. However, as compliance with the software testing process does not necessarily bring immediate economic benefits, IT companies need to pursue more aggressive efforts to improve the process, and the software industry needs to make every effort to improve the software testing process by evaluating it against the Test Maturity Model integration (TMMi). Furthermore, as long as the software test process remains at the initial level, high-quality software cannot be guaranteed. This paper applies the TMMi model to the automobile control software testing process, covering test policy and strategy, test planning, test monitoring and control, test design and execution, and test environment goals. The results suggest improvements to the automobile control software testing process based on the test maturity model. As a result, this study proposes a test process improvement method for IT organizations.
Many flash memory-based buffer replacement algorithms that consider the characteristics of flash memory have recently been developed. Conventional flash memory-based buffer replacement algorithms have the disadvantage of slow operation, because when selecting a replacement target page they check only whether a page has been referenced: either the reference count is ignored, or, when the reference time is considered, only the elapsed time is taken into account. This paper therefore seeks to solve this problem of conventional flash memory-based buffer replacement algorithms by dividing pages into groups and considering both the reference frequency and the reference time when selecting the replacement target page. In addition, because flash memory has a limited lifespan, candidates for replacement pages are selected based on the number of deletions (erase count).
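As an illustration only (these are assumptions, not the paper's exact policy), the following sketch selects a replacement victim by first grouping pages and then scoring them on reference frequency, reference recency, and the erase count of the underlying block. All weights and the example buffer are hypothetical.

```python
# Hedged sketch of a grouped, multi-criteria victim selection policy.
from dataclasses import dataclass

@dataclass
class Page:
    page_id: int
    dirty: bool        # dirty pages are costly to evict on flash (write-back)
    ref_count: int     # reference frequency
    last_ref: int      # logical time of last reference
    erase_count: int   # erases already performed on the underlying block

def pick_victim(pages, now, w_freq=1.0, w_recency=1.0, w_erase=0.5):
    # Group pages: prefer evicting clean pages to avoid extra flash writes.
    clean = [p for p in pages if not p.dirty]
    group = clean if clean else pages
    def score(p):
        # Lower score = better victim: rarely used, not used recently,
        # and on a block that has not been erased often.
        return (w_freq * p.ref_count
                - w_recency * (now - p.last_ref)
                + w_erase * p.erase_count)
    return min(group, key=score)

buf = [Page(1, False, 5, 90, 10), Page(2, False, 1, 40, 2), Page(3, True, 1, 10, 1)]
print(pick_victim(buf, now=100).page_id)   # evicts page 2 in this toy example
```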
In an effort to increase its agricultural productivity, the Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development has conducted a variety of genomic studies using high-throughput DNA genotyping and sequencing. The large quantity of data (big data) produced by these biotechnologies requires a high-performance data management system to store, back up, and secure the data. Additionally, these genetic studies are computationally demanding, requiring high-performance processors and memory for data processing and analysis. Reliable network connectivity with large bandwidth for data transfer is essential, as are database applications and statistical tools that support cleaning, quality control, querying by specific criteria, and exporting to various formats, all of which are important for generating high-yield crop varieties and improving future agricultural strategies. This manuscript presents a reliable, secure, and scalable information technology infrastructure tailored to Indonesian agricultural genotyping studies.
Nowadays, third-party applications form an important part of the mobile environment, and social networking applications in particular leave a greater variety of user footprints than other applications. Digital forensics of mobile third-party applications can provide important evidence to forensic investigators. However, most mobile operating systems are now updated frequently, and developers constantly release new versions of their applications. For these reasons, forensic investigators have difficulty finding the locations and meanings of data during digital investigations. Therefore, this paper presents scenario-based methods of forensic analysis for a specific third-party social networking service application on a specific mobile device. When applied to certain third-party applications, digital forensics can provide investigators with useful data for the investigation process. The main purpose of the forensic analysis proposed in the present paper is to determine whether the general use of third-party applications leaves data in the internal storage of mobile devices and whether such data are meaningful for forensic purposes.
China’s passenger dedicated line system is large in scale, with unevenly distributed passenger flow and complicated relations among passenger nodes. Consequently, the significance of passenger nodes must be considered and their dissimilarity analyzed when compiling passenger train operation plans and allocating transport capacity. For this purpose, the passenger nodes need to be divided hierarchically. To address problems in current research, such as a hierarchical division process vulnerable to subjective factors and to local optima, we propose a clustering approach based on the self-organizing map (SOM) and k-means, and use the new approach to carry out the hierarchical division of passenger dedicated line nodes. Specifically, objective passenger node parameters are selected and, first, the SOM is used to produce a preliminary clustering of the passenger nodes; second, the Davies–Bouldin index is used to determine the number of clusters; and third, k-means is used to perform accurate clustering, yielding the hierarchical division of the passenger nodes. Example analysis demonstrates the feasibility and rationality of the algorithm.
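A sketch of the described three-step pipeline, under stated assumptions, follows: a SOM gives a preliminary clustering, the Davies–Bouldin index picks the number of clusters, and k-means performs the final clustering. It uses the third-party `minisom` package and scikit-learn; the node features and grid size are hypothetical stand-ins for the paper's passenger-node parameters.

```python
# Hedged sketch: SOM pre-clustering, DBI model selection, k-means refinement.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
X = rng.random((120, 4))                 # hypothetical passenger-node parameters

# Step 1: preliminary clustering with a small SOM grid.
som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(X, 1000)
prelim = np.array([np.ravel_multi_index(som.winner(x), (4, 4)) for x in X])

# Step 2: choose k with the Davies-Bouldin index (lower is better).
scores = {k: davies_bouldin_score(X, KMeans(n_clusters=k, n_init=10,
                                            random_state=1).fit_predict(X))
          for k in range(2, 8)}
best_k = min(scores, key=scores.get)

# Step 3: accurate clustering with k-means at the chosen k.
labels = KMeans(n_clusters=best_k, n_init=10, random_state=1).fit_predict(X)
print(len(np.unique(prelim)), best_k, np.bincount(labels))
```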
Database classification is an important preprocessing step for multi-database mining (MDM). When a multi-branch company needs to explore its distributed data for decision making, it is imperative to classify the multiple databases into similar clusters before analyzing the data. To find the best classification of a set of n databases, existing algorithms generate from 1 to (n² − n)/2 candidate classifications. Although each candidate classification is included in the next one (i.e., clusters in the current classification are subsets of clusters in the next classification), existing algorithms generate each classification independently, that is, without reusing the clusters from the previous classification. Consequently, existing algorithms are time consuming, especially as the number of candidate classifications grows. To overcome this problem, we propose in this paper an efficient approach that casts the classification of multiple databases as the problem of identifying the connected components of an undirected weighted graph. Theoretical analysis and experiments on public databases confirm the efficiency of our algorithm compared with existing works and show that it overcomes the problem of growing execution time.
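The graph formulation can be sketched as follows: databases are vertices, edges carry a pairwise similarity weight, and each candidate classification corresponds to the connected components that survive a similarity threshold. The similarity matrix below is hypothetical; `networkx` supplies the components.

```python
# Hedged sketch: database classification via graph connected components.
import networkx as nx

sim = {("db1", "db2"): 0.9, ("db2", "db3"): 0.7,
       ("db3", "db4"): 0.2, ("db4", "db5"): 0.8}

def classify(similarities, threshold):
    g = nx.Graph()
    g.add_nodes_from({n for pair in similarities for n in pair})
    g.add_edges_from(pair for pair, w in similarities.items() if w >= threshold)
    return [sorted(c) for c in nx.connected_components(g)]

# Raising the threshold only removes edges, so clusters split but never merge:
# components at a higher threshold are subsets of those at a lower one.
print(classify(sim, 0.5))   # e.g. [['db1', 'db2', 'db3'], ['db4', 'db5']]
print(classify(sim, 0.85))  # e.g. [['db1', 'db2'], ['db3'], ['db4'], ['db5']]
```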
This study evaluates user viewpoints on focus incidents using microblog sentiment analysis, which has been actively researched in academia. Most existing works adopt traditional supervised machine learning methods to analyze emotions in microblogs; however, these approaches may not be suitable for Chinese because of linguistic differences. This paper proposes a new microblog sentiment analysis method that mines the emotions associated with a popular microblog through user-building combined with spectral clustering of the microblog content. Experimental results on a public microblog benchmark corpus show that the proposed method improves identification accuracy and reduces manual labeling time compared with existing methods.
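For the clustering step only, a hedged sketch is shown below: TF-IDF vectors of microblog posts are grouped by spectral clustering. The texts and the number of clusters are hypothetical, and the paper's user-building step is omitted.

```python
# Hedged sketch: spectral clustering of microblog posts on TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering

posts = ["the new phone is great", "love this phone so much",
         "terrible battery life", "battery died again, awful",
         "neutral report on the launch event"]

X = TfidfVectorizer().fit_transform(posts)
# Spectral clustering on the pairwise affinity between posts (default RBF kernel).
labels = SpectralClustering(n_clusters=3, random_state=0).fit_predict(X.toarray())
print(labels)
```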
A great number of Internet-connected devices exist, and their information can be acquired with an Internet-wide scanning tool. By associating device information with publicly known security vulnerabilities, security experts can determine whether a particular device is vulnerable. Currently, identifying device information and its related vulnerabilities is carried out manually. Automating this process is necessary in order to identify a huge number of Internet-connected devices and analyze them against more than one hundred thousand security vulnerabilities. In this paper, we propose a method for automatically generating device information in the Common Platform Enumeration (CPE) format from banner text to discover potentially weak devices that have Common Vulnerabilities and Exposures (CVE) vulnerabilities. We demonstrate that the proposed method can extract as much adequate CPE information as possible from the service banner.
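An illustrative sketch of the banner-to-CPE idea follows (the rules are assumed patterns, not the paper's generator): a service banner is mapped to a CPE 2.3 string that could then be matched against CVE data.

```python
# Hedged sketch: mapping service banners to CPE 2.3 identifiers via regex rules.
import re

# Hypothetical banner-to-CPE rules: regex with a version capture group.
RULES = [
    (re.compile(r"Apache/(\d[\d.]*)"), "cpe:2.3:a:apache:http_server:{v}:*:*:*:*:*:*:*"),
    (re.compile(r"OpenSSH[_/](\d[\d.p]*)"), "cpe:2.3:a:openbsd:openssh:{v}:*:*:*:*:*:*:*"),
    (re.compile(r"nginx/(\d[\d.]*)"), "cpe:2.3:a:nginx:nginx:{v}:*:*:*:*:*:*:*"),
]

def banner_to_cpe(banner):
    for pattern, template in RULES:
        m = pattern.search(banner)
        if m:
            return template.format(v=m.group(1))
    return None   # unidentified banner: left for manual analysis

print(banner_to_cpe("SSH-2.0-OpenSSH_7.4"))
# -> cpe:2.3:a:openbsd:openssh:7.4:*:*:*:*:*:*:*
```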
Web applications are indispensable in the software industry and continuously evolve, either to meet new criteria or to include new functionality. However, despite quality assurance via testing, the presence of defects hinders straightforward development. Several factors contribute to defects, and they are often minimized at high expense in terms of man-hours. Detecting fault proneness in the early phases of software development is therefore important, and a fault prediction model that identifies fault-prone classes in a web application is highly desirable. In this work, we compare 14 machine learning techniques to analyse the relationship between object-oriented metrics and fault prediction in web applications. The study is carried out on various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input feature set for each release is first optimized using the filter-based correlation feature selection (CFS) method. We find that the LCOM3, WMC, NPM, and DAM metrics are the most significant predictors. The statistical analysis of these metrics also shows good conformity with the CFS evaluation and affirms the role of these metrics in the defect prediction of web applications. The overall predictive ability of the different fault prediction models is first ranked using the Friedman test and then statistically compared using Nemenyi post-hoc analysis. The results not only uphold the predictive capability of machine learning models for faulty classes in web applications, but also indicate that ensemble algorithms are the most appropriate for defect prediction on the Apache datasets. Further, we derive a consensus between the metrics selected by the CFS technique and the statistical analysis of the datasets.
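The ranking step can be sketched under stated assumptions: per-release performance scores (the numbers below are hypothetical) for several models are compared with the Friedman test from SciPy and a Nemenyi post-hoc test from the third-party `scikit-posthocs` package.

```python
# Hedged sketch: Friedman ranking followed by Nemenyi post-hoc comparison.
import numpy as np
import scipy.stats as ss
import scikit_posthocs as sp

# Rows: dataset releases; columns: models (e.g. AUC of RF, SVM, NB, an ensemble).
scores = np.array([[0.78, 0.74, 0.70, 0.81],
                   [0.80, 0.75, 0.69, 0.83],
                   [0.77, 0.73, 0.71, 0.80],
                   [0.79, 0.72, 0.68, 0.82]])

stat, p = ss.friedmanchisquare(*scores.T)   # each argument is one model's scores
print(f"Friedman chi2={stat:.2f}, p={p:.4f}")

if p < 0.05:
    # Pairwise Nemenyi comparison; entry (i, j) is the p-value for models i and j.
    print(sp.posthoc_nemenyi_friedman(scores))
```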
In heterogeneous wireless networks supporting multi-access services, selecting the best network from among the possible heterogeneous connections and providing seamless service during handover for higher Quality of Service (QoS) is a major challenge. We therefore need an intelligent vertical handover (VHO) decision based on suitable network parameters. In conventional VHOs, various network parameters (i.e., signal strength, bandwidth, dropping probability, monetary cost of service, and power consumption) have been used to measure network status and select the preferred network. Because each wireless/mobile network defines its parameters differently, parameter conversion between networks is required for a handover decision. As a result, the handover process is highly complex and the selection of parameters is always an issue. In this paper, we show how to maximize network utilization when more than one target network exists during a VHO. We also show how network parameters can be embedded into IEEE 802.21-based signaling procedures to provide seamless connectivity during a handover. Network simulation showed that QoS-effective target network selection can be achieved by choosing suitable parameters from Layers 1 and 2 of each candidate network.
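One simple way to picture target network selection is a weighted utility over normalized Layer 1/2 parameters; the sketch below is an illustration only, with hypothetical weights and candidate values rather than the paper's exact scheme.

```python
# Hedged sketch: scoring candidate networks during a vertical handover.
candidates = {
    "WLAN": {"rss": 0.8, "bandwidth": 0.9, "drop_prob": 0.10, "cost": 0.2, "power": 0.6},
    "LTE":  {"rss": 0.6, "bandwidth": 0.7, "drop_prob": 0.02, "cost": 0.7, "power": 0.4},
}
# Weights per parameter; negative weights penalize "lower is better" metrics.
weights = {"rss": 0.3, "bandwidth": 0.3, "drop_prob": -0.2, "cost": -0.1, "power": -0.1}

def utility(net):
    return sum(weights[k] * net[k] for k in weights)

best = max(candidates, key=lambda n: utility(candidates[n]))
print(best, {n: round(utility(v), 3) for n, v in candidates.items()})
```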
The simplified neutrosophic set (SNS) is a generalization of the fuzzy set designed for practical situations in which each element has a truth membership function, an indeterminacy membership function, and a falsity membership function. In this paper, we propose a new method for constructing similarity measures of single valued neutrosophic sets (SVNSs) and interval valued neutrosophic sets (IVNSs), respectively. We then prove that the proposed formulas satisfy the axiomatic definition of a similarity measure. Finally, we apply them to pattern recognition in the single valued neutrosophic environment and to multi-criteria decision-making problems in the interval valued neutrosophic environment. The results show that our methods are effective and reasonable.
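For orientation, the following sketch implements one classical cosine-type similarity measure for SVNSs (not the new measure proposed in the paper), where each element carries a (truth, indeterminacy, falsity) triple; the sample patterns are hypothetical.

```python
# Hedged sketch: cosine-type similarity between single valued neutrosophic sets.
import math

def svns_cosine_similarity(A, B):
    """A, B: lists of (t, i, f) triples over the same universe."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        num = ta * tb + ia * ib + fa * fb
        den = math.sqrt(ta**2 + ia**2 + fa**2) * math.sqrt(tb**2 + ib**2 + fb**2)
        total += num / den if den else 1.0   # convention: two zero triples match
    return total / len(A)

pattern_1 = [(0.7, 0.2, 0.1), (0.6, 0.3, 0.2)]
pattern_2 = [(0.8, 0.1, 0.1), (0.5, 0.4, 0.3)]
sample    = [(0.7, 0.2, 0.2), (0.6, 0.3, 0.2)]
# Pattern recognition: classify the sample by the most similar known pattern.
best = max((pattern_1, pattern_2), key=lambda p: svns_cosine_similarity(sample, p))
print(best is pattern_1)
```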
Selecting a suitable task from the extensive set of available tasks is an intricate job for developers in crowdsourcing software development (CSD). It is likewise a tiring and time-consuming job for the platform to evaluate the thousands of tasks submitted by developers. Previous studies have stated that managerial and technical aspects are of prime importance in making software development projects successful; however, these two aspects can be more effective and conducive when combined with human aspects. The main purpose of this paper is to present a conceptual framework for a task assignment model, based on personality types, for future research. The framework will give CSD workers a basic structure for finding suitable tasks and give the platform a way to assign tasks directly, matching personality to task, because personality is an internal force that shapes the behavior of developers. Accordingly, this research presents a Task Assignment Model (TAM) from the developers’ point of view; moreover, it gives the platform an opportunity to assign tasks to CSD workers directly according to their personality types.
Information and communication technology (ICT) is increasingly recognized as an important driver of economic growth, innovation, employment, and productivity, and is widely accepted as a main feature of development. Over the last couple of decades, the ICT sector has become the most innovative service sector, affecting the living standards of human beings all over the world. At the beginning of the 21st century, some Asian countries reformed their ICT sectors and spent enormous amounts on their progress. Meanwhile, developed countries in the European Union (EU) faced various crises that severely affected the spread of this sector. Consequently, EU countries lost their hegemony in the field of information technology and, as a result, some emerging Asian countries such as China, India, and South Korea gained supremacy over the EU in this field. These countries now have strong IT infrastructure, R&D sectors, and IT research centers working for the development of ICT. This paper investigates the reasons for this shift in the balance of digital power from Europe to Asia.
The discrete wavelet transform (DWT) has good multi-resolution decomposition characteristics, and its low-frequency component contains the basic information of an image. On this basis, a fragile watermarking scheme using the local binary pattern (LBP) and the DWT is proposed for image authentication. In this method, the LBP pattern of the low-frequency wavelet coefficients is adopted as a feature watermark and inserted into the least significant bit (LSB) of the maximum pixel value in each block of the host image. To guarantee the safety of the proposed algorithm, the logistic map is applied to encrypt the watermark. In addition, the locations of the maximum pixel values are stored in advance and used to extract the watermark on the receiving side. Owing to the use of the DWT, the watermarked image generated by the proposed scheme has high visual quality. Compared with other state-of-the-art watermarking methods, experimental results demonstrate that the proposed algorithm not only requires a lower watermark payload, but also achieves good performance in tamper identification and localization under various attacks.
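A hedged sketch of the feature-watermark generation step only: the LL subband of a one-level DWT is computed with PyWavelets, its LBP pattern (via scikit-image) serves as the watermark, and a logistic map keystream encrypts it. The wavelet choice, map parameters, and test image are assumptions, and the block-wise LSB embedding step is left as a comment.

```python
# Hedged sketch: DWT LL subband -> LBP feature watermark -> logistic-map encryption.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(7)
image = rng.integers(0, 256, (64, 64)).astype(float)   # hypothetical host image

# 1. Low-frequency (LL) subband of a one-level DWT.
ll, (lh, hl, hh) = pywt.dwt2(image, "haar")

# 2. LBP of the LL coefficients as the binary feature watermark.
lbp = local_binary_pattern(ll, P=8, R=1).astype(np.uint8)
watermark_bits = np.unpackbits(lbp.ravel())

# 3. Logistic map keystream x_{n+1} = mu * x_n * (1 - x_n) for encryption.
def logistic_keystream(n, x0=0.61, mu=3.99):
    bits, x = np.empty(n, dtype=np.uint8), x0
    for k in range(n):
        x = mu * x * (1 - x)
        bits[k] = x > 0.5
    return bits

encrypted = watermark_bits ^ logistic_keystream(watermark_bits.size)
# `encrypted` would then be embedded in the LSB of each block's maximum pixel.
print(encrypted[:16])
```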