The Journal of Information Processing Systems (JIPS) is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef, and has four divisions: Computer systems and theory, Multimedia systems and graphics, Communication systems and security, and Information systems and applications. Published by the Korean Information Processing Society (KIPS), JIPS places special emphasis on hot research topics such as artificial intelligence, networks, databases, and security.
The availability of powerful, sensor-enabled mobile and Internet-connected devices has enabled the
advent of the ubiquitous sensor network (USN) paradigm. USN provides various types of solutions to the
general public in multiple sectors, including environmental monitoring, entertainment, transportation,
security, and healthcare. Here, we explore and compare the features of wireless sensor networks and USN.
Based on our extensive study, we classify the security- and privacy-related challenges of USNs. We identify
and discuss solutions available to address these challenges. Finally, we briefly discuss open challenges for
designing more secure and privacy-preserving approaches in next-generation USNs.
In the conventional computing environment, users intensively used only a small number of software systems.
It was therefore sufficient to check and guarantee the functional correctness and safety of a small number of
giant systems in order to protect users' systems and the information inside them from outside attacks.
However, checking the correctness and safety of giant systems is no longer enough, since users now rely on
various software systems and web services, some of which are provided by unskilled developers. To prove or
guarantee the safety of a software system, a great deal of research has been conducted in diverse areas of
computer science. In this paper, we discuss ongoing approaches to guaranteeing or verifying the safety of
software systems. We also discuss future research challenges that must be addressed with better solutions in
the near future.
Climate change has become a major challenge for the sustainable development of human society. This study
analyzes the existing literature to identify economic indicators that hamper the process of global warming.
The paper includes case studies from various countries that examine the environmental nexus and its
relationship with foreign direct investment, transportation, economic growth, and energy consumption.
Furthermore, the observations are analyzed from the perspective of the China-Pakistan Economic Corridor
(CPEC) and its probable impact on Pakistan's carbon emissions. A major portion of CPEC investment is
allocated to transportation, yet the transportation sector is a substantial emitter of carbon dioxide (CO2).
Unfortunately, there is no empirical work on the subject of CPEC and carbon emissions from vehicular
transportation. This paper infers that the empirical results from various other countries are ambiguous and
inconclusive. Moreover, the evidence for the pollution haven hypothesis and the halo effect hypothesis is
limited in general and inapplicable to CPEC in particular. The major contribution of this study is the proposal
of an energy-efficient transportation model for reducing CO2 emissions. Finally, the paper suggests strategies
for climate researchers and policymakers for the adaptation to and mitigation of greenhouse gas (GHG) emissions.
Digital forensics is a vital part of almost every criminal investigation, given the amount of information
available and the opportunities electronic data offer to investigate and evidence a crime. However, in
criminal justice proceedings, such electronic evidence is often regarded with the utmost suspicion and
uncertainty, although on occasion this is justifiable. Presently, the use of scientifically unproven
forensic techniques is highly criticized in legal proceedings. Nevertheless, the exceedingly distinct and
dynamic characteristics of electronic data, in addition to current legislation and privacy laws, remain
challenging aspects for systematically attesting evidence in a court of law. This article presents a
comprehensive study of the issues that must be discussed and resolved for evidence to be properly accepted
on scientific grounds. Moreover, the article explains the state of forensics in emerging sub-fields of
digital technology, such as cloud computing, social media, and the Internet of Things (IoT), and reviews
the challenges that may complicate the systematic validation of electronic evidence. The study further
explores various solutions previously proposed by researchers and academics, assessing their
appropriateness on the basis of experimental evaluation. Additionally, the article identifies open research
areas, highlighting many of the issues and problems associated with the empirical evaluation of these
solutions for immediate attention by researchers and practitioners. Notably, academics must respond to
these challenges with appropriate emphasis on methodical verification. For this purpose, the study reviews
the issues in the experimental validation of currently available practices and discusses the difficulty of
demonstrating the reliability and validity of these approaches with contemporary evaluation methods.
Furthermore, the development of best practices and reliable tools and the formulation of formal testing
methods for digital forensic techniques are highlighted, which could be of immense value in improving the
trustworthiness of electronic evidence in legal proceedings.
Aiming at the problem of service reliability in resource reservation in cloud computing environments, a model of dynamic cloud resource reservation based on trust is proposed. A domain-specific cloud management architecture is designed in which resources are divided into different management domains according to the types of service for easier management. A dynamic resource reservation mechanism (DRRM) is used to test users’ reservation requests and reserve resources for users. According to user preference, several resources are chosen to be candidate resources by fuzzy cluster analysis. The fuzzy evaluation method and a two-way trust evaluation mechanism are adopted to improve the availability and credibility of the model. An analysis and simulation experiments show that this model can increase the flexibility of resource reservation and improve user satisfaction.
GR-tree and query aggregation techniques have been proposed for conventional spatial query processing in wireless sensor networks. Although these techniques consider spatial query optimization, they do not take temporal query optimization into consideration. The index reorganization cost and the communication cost of parent sensor nodes increase the energy consumption of wireless sensor nodes, which must operate as efficiently as possible. This paper proposes an itinerary-based R-tree (IR-tree) for more efficient spatial-temporal query processing in wireless sensor networks. The paper compares the proposed IR-tree with conventional spatial query processing techniques from previous studies with regard to the accuracy of query results, energy consumption, and query processing time, using wireless sensor data with uniform, Gaussian, and skewed distributions, and demonstrates the superiority of the proposed IR-tree-based spatial-temporal indexing.
With the increasing use of the Internet and electronic documents, automatic text categorization has become imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated these approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of various aspects such as classification accuracy, classification efficiency, and data reduction.
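The interplay of instance selection and kNN can be sketched with Hart's condensed nearest neighbor rule, one classic data-reduction technique (the abstract does not name a specific method, so this choice, the toy data, and the distance metric are all illustrative assumptions):

```python
import numpy as np

def condense(X, y):
    """Condensed nearest neighbor (Hart, 1968): keep a subset of instances
    that still classifies every training instance correctly with 1-NN."""
    keep = [0]  # seed the subset with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # classify instance i with 1-NN over the kept subset
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i] and i not in keep:
                keep.append(i)  # misclassified: add it to the subset
                changed = True
    return np.array(keep)

def knn_predict(Xtr, ytr, x, k=3):
    """Majority vote among the k nearest training instances."""
    d = np.linalg.norm(Xtr - x, axis=1)
    nearest = ytr[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[int(np.argmax(counts))]

# toy two-class data: two well-separated clusters
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
y = np.array([0, 0, 0, 1, 1, 1])
idx = condense(X, y)                       # reduced training set
pred = knn_predict(X[idx], y[idx], np.array([5.5, 5.5]), k=1)
```

For text categorization the rows of `X` would be high-dimensional term vectors, which is exactly where the cost of the reduction step itself becomes a concern.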
Fingerprint-based biometric identification is one of the most interesting automatic systems for identifying individuals. Owing to poor sensing environments and poor skin quality, fingerprint biometrics remains a challenging problem. The main contribution of this paper is a new approach to recognizing a person's fingerprint using the fingerprint's local characteristics. The proposed approach introduces the notion of the barycenter, applied to the triangles formed by the Delaunay triangulation once minutiae extraction is achieved. This ensures the exact location of similar triangles generated by the Delaunay triangulation during the recognition process. The results of an experiment conducted on a challenging public database (FVC2004) show a significant improvement in fingerprint identification compared to simple Delaunay triangulation, and the obtained results are very encouraging.
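The triangulation-plus-barycenter step can be sketched in a few lines; the minutiae coordinates below are hypothetical, and the paper's actual matching of similar triangles is not reproduced here:

```python
import numpy as np
from scipy.spatial import Delaunay

# hypothetical minutiae coordinates extracted from a fingerprint image
minutiae = np.array([[10, 10], [50, 12], [30, 40], [70, 45], [55, 80]], float)

# Delaunay triangulation over the minutiae points
tri = Delaunay(minutiae)

# barycenter (centroid) of each triangle: the mean of its three vertices
barycenters = minutiae[tri.simplices].mean(axis=1)
```

Each row of `barycenters` is a point guaranteed to lie inside its triangle, which gives the stable per-triangle location the abstract alludes to.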
The artificial bee colony (ABC) algorithm has recently attracted significant interest for solving multivariate optimization problems. However, it still suffers from slow convergence speed and poor local search ability. Therefore, in this paper, a modified ABC algorithm with bee-number reallocation and a new search equation is proposed to tackle these drawbacks. In particular, to enhance solution accuracy, more bees in the population are assigned to execute local searches around food sources. Moreover, elite vectors are adopted to guide the bees, with which the algorithm can converge rapidly to the potential global optimum. A series of classical benchmark functions for frequency-modulated sound waves is adopted to validate the performance of the modified ABC algorithm. Experimental results show a significant performance improvement of the proposed algorithm over the traditional version.
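An elite-guided candidate update of the kind described can be sketched as follows; this follows the general shape of gbest-guided ABC search equations, and the `phi`/`psi` ranges are illustrative assumptions rather than the paper's actual equation:

```python
import random

def abc_candidate(x, neighbor, elite, phi_max=1.0, psi_max=1.5):
    """Generate one candidate food source: perturb a single randomly
    chosen dimension using a random neighbor and an elite vector
    (sketch of an elite-guided ABC search equation)."""
    j = random.randrange(len(x))            # dimension to perturb
    phi = random.uniform(-phi_max, phi_max)  # classic ABC random factor
    psi = random.uniform(0.0, psi_max)       # attraction toward the elite
    v = list(x)
    v[j] = x[j] + phi * (x[j] - neighbor[j]) + psi * (elite[j] - x[j])
    return v

x = [0.0, 0.0, 0.0]
v = abc_candidate(x, neighbor=[1.0, 1.0, 1.0], elite=[2.0, 2.0, 2.0])
```

The elite term biases the move toward the best-known region, which is what speeds up convergence relative to the purely random classic update.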
The concepts of graph theory are applied to model and analyze the dynamics of computer networks, biochemical networks, and the semantics of social networks. The analysis of the dynamics of complex networks is important for determining the stability and performance of networked systems. The analysis of non-stationary and nonlinear complex networks requires the application of ordinary differential equations (ODEs). However, resolving input excitation to dynamic non-stationary networks is difficult without involving external functions. This paper proposes an analytical formulation for generating solutions of nonlinear network ODE systems with functional decomposition. Furthermore, the input excitations are analytically resolved in linearized dynamic networks, and the stability condition of dynamic networks is determined. The proposed analytical framework is general in nature and does not require any domain or range constraints.
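For a linearized dynamic network dx/dt = Ax, the standard stability condition is that every eigenvalue of A has negative real part; a minimal numerical check (with a hypothetical 3-node coupling matrix, not one from the paper) looks like this:

```python
import numpy as np

# hypothetical linearized 3-node network: dx/dt = A @ x
# negative diagonal = self-damping, off-diagonal = inter-node coupling
A = np.array([[-1.0, 0.2, 0.0],
              [0.1, -0.5, 0.1],
              [0.0, 0.3, -2.0]])

eigvals = np.linalg.eigvals(A)
# asymptotic stability: all eigenvalues strictly in the left half-plane
stable = bool(np.all(eigvals.real < 0))
```

Here the matrix is strictly diagonally dominant with a negative diagonal, so Gershgorin's theorem already guarantees the eigenvalue test passes.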
We propose an enhanced version of the local binary pattern (LBP) operator for texture extraction in the context of image retrieval. The novelty of our proposal lies in the observation that the LBP exploits only the lowest level of local information through the global histogram, which reflects only the statistical distribution of the various LBP codes in the image. The block-based LBP, which uses local histograms of LBP codes, was one of the few attempts to capture higher-level textural information. We believe that important and useful local information between these two levels is simply ignored by both schemes. The newly developed method, gradual locality integration of binary patterns (GLIBP), is a novel attempt to capture as much local information as possible in a gradual fashion. Indeed, GLIBP aggregates the texture features extracted by LBP from grayscale images through a complex structure comprising a multitude of ellipse-shaped regions arranged in circular-concentric forms of increasing size, derived from a simple parameterized generator. In addition, the elliptic forms allow targeting texture directionality, which is a very useful property in texture characterization, and allow the spatial information (specifically rotation) to be taken into account. The effectiveness of GLIBP was investigated on the Corel-1K (Wang) dataset and compared to published works, including the very effective DLEP. The results show significantly higher or comparable performance of GLIBP with regard to the other methods, which qualifies it as a good tool for scene image retrieval.
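The underlying LBP codes that GLIBP aggregates are computed by thresholding each pixel's 8 neighbors at the center value; a minimal sketch of the basic 3x3 operator (the region-aggregation structure of GLIBP itself is not reproduced here):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: compare the 8 neighbors of each interior pixel
    with the center value and pack the comparison bits into an
    8-bit code per pixel."""
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # neighbor offsets in clockwise order, one bit per neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.int32) << bit
    return codes

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]], dtype=np.int32)
code = lbp_codes(img)   # every neighbor exceeds the center pixel
```

A global LBP histogram then counts how often each of the 256 codes occurs; GLIBP instead accumulates these codes over its concentric elliptic regions.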
The purpose of high-speed railway construction is to better satisfy passenger travel demands. Accordingly, the design of the train working plan must also take full account of the interests of passengers. Aiming at problems such as complex transport organization and the coexistence of trains of different speeds, and building on existing research on train working plan optimization models, a multiobjective bi-level programming model of the high-speed railway passenger train working plan was established. The model centers on the interests of passengers while also taking into account the interests of railway transport enterprises. Specifically, the minimization of passenger travel cost and travel time are the objectives of the upper-level program, whereas the maximization of railway enterprise profit is the objective of the lower-level program. A solution algorithm based on a genetic algorithm is proposed, and an example analysis demonstrates the feasibility and rationality of the model and algorithm.
Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). Existing HyTM-based studies use a bloom filter for the concurrency control of transactions, but they fail to overcome the bloom filter's typical false-positive errors. In addition, the existing studies use a global lock, and the efficiency of global lock-based memory allocation is significantly low in multi-core environments. In this paper, we propose an efficient hybrid transactional memory scheme using near-optimal retry computation and sophisticated memory management in order to efficiently process transactions in multi-core environments. First, we propose a near-optimal retry computation algorithm that provides an efficient HTM configuration, using machine learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated bloom filter. Third, we propose a memory management scheme optimized for the CPU cache line in order to provide fast transaction processing. Finally, our performance evaluation shows that our HyTM scheme achieves up to 2.5 times better performance than state-of-the-art algorithms on the Stanford transactional applications for multi-processing (STAMP) benchmarks.
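The bloom filter at the center of this design answers "might this address be in the transaction's read/write set?" with no false negatives but possible false positives; a minimal sketch (sizes, hash choice, and the address format are illustrative, not taken from the paper):

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter for set-membership checks, as used for
    conflict detection between transactions (illustrative sketch;
    real HyTM designs tune m and k per workload)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k      # m bits, k hash functions
        self.bits = 0              # bit array packed into one int

    def _positions(self, item):
        # derive k bit positions by salting a cryptographic hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True is "maybe" (false positives possible); False is certain
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("addr:0x7f3c")   # hypothetical memory address in a write set
```

A "maybe" answer for an address that was never added is exactly the false-positive error the paper's sophisticated filter aims to mitigate.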
As a major source of information, digital images play an indispensable role in our lives. However, with the development of image processing techniques, people can easily retouch or even forge an image using image processing software. Therefore, the authenticity and integrity of digital images face a severe challenge. To resolve this issue, fragile watermarking schemes for image authentication have been proposed. According to their purpose, fragile watermarking can be divided into two categories: fragile watermarking for tamper localization and fragile watermarking with recovery ability. Fragile watermarking for image tamper localization can only identify and locate the tampered regions; it cannot restore the modified regions, although in some cases the recovery of tampered regions is essential. In general, fragile watermarking for image authentication and recovery includes three procedures: watermark generation and embedding, tamper localization, and image self-recovery. In this article, we review self-embedding fragile watermarking methods. The basic model and the evaluation indexes of this watermarking scheme are presented, and related works proposed in recent years, together with their advantages and disadvantages, are described in detail to aid future research in this field. Based on this analysis, we conclude with future research prospects and suggestions.
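The embedding/localization pair can be illustrated with a deliberately tiny scheme: store the parity of each pixel's upper 7 bits in its own LSB, so any altered pixel disagrees with its check bit. This toy is an assumption for illustration only; real self-embedding schemes use block-wise hashes plus recovery bits, and offer self-recovery, which this sketch does not:

```python
import numpy as np

def _parity(high):
    """Parity of the upper 7 bits of each pixel."""
    p = np.zeros_like(high)
    for b in range(7):
        p ^= (high >> b) & 1
    return p

def embed(img):
    """Embed: overwrite each LSB with the parity of the other 7 bits."""
    high = img >> 1
    return (high << 1) | _parity(high)

def locate_tamper(img):
    """Localize: flag pixels whose LSB no longer matches the parity."""
    return (img & 1) != _parity(img >> 1)

img = np.array([[200, 37], [90, 14]], dtype=np.uint8)
wm = embed(img)
tampered = wm.copy()
tampered[0, 0] ^= 1          # flip one bit to simulate tampering
mask = locate_tamper(tampered)
```

Any single-pixel change is flagged, but nothing can be restored; adding recovery bits for other blocks is what turns such a scheme into self-embedding watermarking.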
Keystroke dynamics user authentication is a behavior-based authentication method that analyzes patterns in how a user enters passwords and PINs. Even if a password or PIN is revealed to another user, the input pattern is analyzed to authenticate the user; hence, it can compensate for the drawbacks of knowledge-based ("what you know") authentication. However, users' input patterns are not always fixed, and each user's touch method is different, so there are limits to extracting the same features for all users to build a user's pattern and perform authentication. In this study, we conduct experiments to examine the changes in user authentication performance when using feature vectors customized for each user versus using all features. User-customized features show a mean improvement of over 6% in equal error rate compared to using all features.
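Typical keystroke-dynamics feature vectors are built from dwell times (how long a key is held) and flight times (the gap between releasing one key and pressing the next); the event timings below are hypothetical, and the paper's actual feature set may differ:

```python
def keystroke_features(events):
    """Build a feature vector from (key, press_time, release_time)
    tuples: dwell times followed by flight times (common
    keystroke-dynamics features; illustrative sketch)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell + flight

# hypothetical timings (seconds) for typing "a", "b", "c"
events = [("a", 0.00, 0.08), ("b", 0.15, 0.22), ("c", 0.30, 0.36)]
feats = keystroke_features(events)
```

Per-user feature customization then amounts to selecting the subset of these dwell/flight components that is most stable for each individual user.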
Images are unavoidably contaminated with different types of noise during acquisition and transmission. The main forms of noise are impulse noise (also called salt-and-pepper noise) and Gaussian noise. In this paper, an effective method of removing mixed noise from images is proposed. In general, different denoising methods are designed for different types of noise; for example, the median filter performs well in removing impulse noise, and wavelet denoising performs well in removing Gaussian noise. However, in many cases images are affected by more than one type of noise. To reduce both impulse noise and Gaussian noise, this paper proposes a denoising method that combines adaptive median filtering (AMF) based on impulse noise detection with wavelet threshold denoising based on a Gaussian mixture model (GMM). The simulation results show that the proposed method achieves much better denoising performance than the median filter or the wavelet denoising method alone for images contaminated with mixed noise.
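The impulse-removal half of such a pipeline can be illustrated with a plain 3x3 median filter (a simplification: the AMF described in the paper additionally detects impulse pixels and adapts its window size, which this sketch omits):

```python
import numpy as np

def median_filter3(img):
    """Plain 3x3 median filter; border pixels are left unchanged.
    A single impulse in a window of nine pixels cannot survive,
    since the median ignores extreme values."""
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255               # a single "salt" impulse
clean = median_filter3(img)
```

Gaussian noise, by contrast, perturbs every pixel slightly, which is why the paper pairs this step with GMM-based wavelet threshold denoising rather than relying on the median alone.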
With the advent of the information society, image restoration technology has aroused considerable interest. Guided image filtering is effective in suppressing noise in homogeneous regions, but its edge-preserving property is poor; the critical part of guided filtering therefore lies in the selection of the guided image. The result of the expected patch log likelihood (EPLL) method maintains good structure, but it tends to produce a ladder effect in homogeneous areas. Exploiting the complementarity of EPLL and guided filtering, we propose a method that couples the two for image denoising. The EPLL model is adopted to construct the guided image for the guided filtering, providing better structural information. Meanwhile, through the secondary smoothing of guided image filtering in homogeneous areas, we can improve the noise suppression in those areas while reducing the ladder effect introduced by the EPLL. The experimental results show that the proposed method not only retains the excellent performance of EPLL, but also produces better visual effects and a higher peak signal-to-noise ratio.