Vol. 16, No. 1, Feb. 2020
Young-Sik Jeong, Jong Hyuk Park
Vol. 16, No. 1, pp. 1-5, Feb. 2020
Keywords: Privacy, Security, Smart City, Sustainable Computing
Abstract: Sustainable computing is a rapidly expanding, multidisciplinary field of engineering research. With the rapid adoption of Internet of Things (IoT) devices, issues such as security, privacy, efficiency, and green computing infrastructure are growing day by day. To achieve a sustainable computing ecosystem for future smart cities, it is important to take into account the entire life cycle of computing systems, from design and manufacturing to recycling and disposal, as well as their wider impact on people and the places around them. The energy efficiency aspects of computing systems range from electronic circuits to applications, covering everything from small IoT devices to large data centers. This editorial focuses on the security, privacy, and efficiency of sustainable computing for future smart cities. Seventeen articles were accepted for this issue after a rigorous review process.
Sayan Maity, Mohamed Abdel-Mottaleb, Shihab S. Asfour
Vol. 16, No. 1, pp. 6-29, Feb. 2020
Keywords: Auto-Encoder, Deep Learning, Multimodal Biometrics, Sparse Classification
Abstract: Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically find images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) yielded rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, nonplanar movement, and pose variations present in the video clips, even when modalities are missing.
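As a hedged illustration of the score-level fusion step described in the abstract (not the authors' implementation; the normalization and fusion rule here are assumptions), per-modality match scores can be min-max normalized and summed over whichever modalities are present:

```python
# Illustrative sketch only: score-level fusion of modality-specific classifier
# scores, skipping modalities that are missing in a given probe video.
def fuse_scores(modality_scores):
    """modality_scores: dict mapping modality name -> list of per-identity
    match scores (higher is better), or None if the modality is missing."""
    fused = None
    for scores in modality_scores.values():
        if scores is None:          # modality absent in this probe video
            continue
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0     # min-max normalize to [0, 1]
        norm = [(s - lo) / span for s in scores]
        fused = norm if fused is None else [f + n for f, n in zip(fused, norm)]
    return max(range(len(fused)), key=fused.__getitem__)  # predicted identity
```

The sum rule is one common choice for score fusion; the paper does not specify its exact rule here.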
Kaiqun Hu, Xin Feng
Vol. 16, No. 1, pp. 30-41, Feb. 2020
Keywords: Curvelets, Split Bregman Iteration, Sparse Representation, Wave Atoms
Abstract: Lines and textures are natural properties of the surfaces of natural objects, and images of them can be sparsely represented in suitable frames such as wavelets, curvelets, and wave atoms. Based on the observation that the curvelet frame is good at expressing line features while the wave atom frame is good at representing texture features, we propose a model with weighted sparsity constraints over the two frames. Furthermore, a multi-step iterative fast algorithm for solving the model is proposed based on the split Bregman method. By introducing auxiliary variables and the Bregman distance, the original problem is transformed into the iterative solution of two simple sub-problems, which greatly reduces the computational complexity. Experiments on standard images show that the split Bregman iterative algorithm in the hybrid domain outperforms the traditional wavelet or curvelet frameworks in both speed and recovery accuracy, demonstrating the validity of the model and algorithm.
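One of the "simple sub-problems" that split Bregman alternation typically reduces an L1-regularized model to is elementwise soft-thresholding (shrinkage); this standard sub-step can be sketched as follows (a generic illustration, not the paper's full algorithm):

```python
# Solving min_x 0.5 * (x - v)**2 + lam * |x| elementwise gives the classical
# soft-thresholding (shrinkage) operator, the cheap inner step that split
# Bregman alternates with a quadratic solve.
def soft_threshold(v, lam):
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]
```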
A Secure Operating System Architecture Based on Linux against Communication Offense with Root Exploit for Unmanned Aerial Vehicles
KwangMin Koo, Woo-yeob Lee, Sung-Ryung Cho, Inwhee Joe
Vol. 16, No. 1, pp. 42-48, Feb. 2020
Keywords: Architecture, Microkernel, Root Exploit, Security, UAV
Abstract: This paper proposes an operating system architecture for unmanned aerial vehicles (UAVs) that is secure against root exploits, resilient to connection loss resulting in loss of control, and able to use the common applications available on Linux. Linux-based UAVs are exposed to root exploits, while microkernel-based UAVs, although secure against root exploits, cannot use the common applications available on Linux. For this reason, the proposed architecture runs a virtualized microkernel on the Linux operating system to isolate communication roles and prevent root exploits. As a result, the proposed operating system is secure against root exploits and able to use the common applications at the same time.
Zhonghua Wang, Xiaoming Huang, Faliang Huang
Vol. 16, No. 1, pp. 49-60, Feb. 2020
Keywords: Backward Diffusion, Forward Diffusion, Image Enhancement, Local Feature
Abstract: To solve the edge ringing or block effects caused by partial differential diffusion in image enhancement, a new image enhancement algorithm based on bidirectional diffusion is presented; it smooths flat regions or isolated noise regions and sharpens edge regions in different types of defect images of aviation composites. Taking each pixel's neighborhood intensity and spatial characteristics as the attribute descriptor, the presented bidirectional diffusion model adaptively chooses different diffusion criteria in different defect image regions, as follows. In a pixel's smooth area, forward diffusion is adopted to denoise along the pixel's gradient direction and edge direction. In a pixel's edge region, backward diffusion is used to sharpen along the gradient direction while forward diffusion smooths along the edge direction. Comparison experiments were conducted on delamination, inclusion, channel, shrinkage, blowhole, and crack defect images, and the results indicate that our algorithm both preserves image features better and improves image contrast more markedly.
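The forward/backward selection rule can be illustrated with a toy 1-D sketch (assumed simplification, not the paper's 2-D model with its attribute descriptor): diffuse forward (smooth) where the local gradient is small, and backward (sharpen) where it is large.

```python
# Toy 1-D bidirectional diffusion step with a fixed gradient threshold.
# Note: backward diffusion is generally unstable and is shown only to
# illustrate the sign flip that produces sharpening.
def bidirectional_step(signal, dt=0.1, edge_thresh=1.0):
    out = list(signal)
    for i in range(1, len(signal) - 1):
        lap = signal[i - 1] - 2 * signal[i] + signal[i + 1]   # discrete Laplacian
        grad = (signal[i + 1] - signal[i - 1]) / 2.0          # central gradient
        sign = 1.0 if abs(grad) < edge_thresh else -1.0       # forward vs backward
        out[i] = signal[i] + sign * dt * lap
    return out
```

On a step edge, the backward branch increases the contrast across the edge instead of blurring it.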
A Cost-Optimization Scheme Using Security Vulnerability Measurement for Efficient Security Enhancement
Jun-Young Park, Eui-Nam Huh
Vol. 16, No. 1, pp. 61-82, Feb. 2020
Keywords: Attack Graph, Cloud Security, Cost Optimization, Vulnerability Measurement
Abstract: The security risk management used by some service providers is not adequate for effective security enhancement, because their methods do not take into account the opinions of security experts, the type of service, and security vulnerability-based risk assessment. Moreover, the security risk assessment method, which strongly influences the risk treatment method in an information security risk assessment model, should perform fine-grained risk assessment based on security vulnerabilities rather than security threats. Therefore, we propose an improved information security risk management model and methods that use vulnerability-based risk assessment and mitigation to enhance security controls under a limited security budget. Moreover, we can evaluate security cost allocation strategies based on security vulnerability measurements that take security weights into account.
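To make the budget-constrained idea concrete, here is a hypothetical sketch (the paper's actual optimization and attack-graph model are not reproduced): greedily select the security control with the best risk-reduction-per-cost ratio until the budget is exhausted.

```python
# Hypothetical greedy budget allocation; control names and numbers are
# illustrative, not taken from the paper.
def select_controls(controls, budget):
    """controls: list of (name, cost, risk_reduction) tuples."""
    chosen, spent = [], 0.0
    for name, cost, gain in sorted(controls, key=lambda c: c[2] / c[1], reverse=True):
        if spent + cost <= budget:   # take the control only if it still fits
            chosen.append(name)
            spent += cost
    return chosen
```

Greedy ratio selection is a simple baseline; an exact solution would be a knapsack-style optimization.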
Liming Zhou, Yingzi Shan
Vol. 16, No. 1, pp. 83-95, Feb. 2020
Keywords: Data Aggregation, Energy-Balanced, Privacy Preservation, Wireless Sensor Networks
Abstract: Because sensor nodes have limited resources in wireless sensor networks, data aggregation can efficiently reduce communication overhead and extend the network lifetime. Although many existing methods are useful for data aggregation applications, they incur unbalanced communication costs and waste much of the sensors' energy. In this paper, we propose an energy-balanced, privacy-preserving data aggregation scheme (EBPP). Our method efficiently reduces communication costs and provides privacy preservation to protect useful information, while the balanced energy consumption of the nodes extends the network lifetime. We evaluate the method using several performance criteria across many simulation experiments. According to the simulation and analysis results, the method balances energy dissipation and provides privacy preservation more effectively than existing schemes.
Lihao Ni, Yanshen Liu, Yi Liu
Vol. 16, No. 1, pp. 96-112, Feb. 2020
Keywords: Geohash-Encoding, Location-Based Services, Memcached Server Cluster, Point of Interest, Privacy Protection Model
Abstract: Solving the disclosure problem of sensitive information with the k-nearest neighbor query, location dummy technique, or interfering data in location-based services (LBSs) is a new research topic. Although previous studies reduced security threats, they are ineffective in the case of sparse users or k-successive privacy, and their additional calculations degrade the performance of LBS application systems. Therefore, a model is proposed herein that is based on geohash encoding instead of latitude and longitude, a memcached server cluster, encryption and decryption, and authentication. Simulation results based on PHP and MySQL show that the model offers approximately a 10-fold speedup over the conventional approach. The model solves two problems: sensitive information in LBS applications is not disclosed, and the relationship between an individual and a track is not leaked.
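For readers unfamiliar with the encoding the model builds on, here is a minimal geohash encoder following the standard public algorithm (this is generic background, not the paper's code): longitude/latitude bisection bits are interleaved and each 5-bit group maps to the base-32 geohash alphabet.

```python
# Standard geohash encoding (illustrative implementation).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, code, even, bit_count = 0, [], True, 0
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)  # alternate lon/lat
        mid = (rng[0] + rng[1]) / 2
        bits = bits * 2 + (1 if val >= mid else 0)             # bisection bit
        rng[1 if val < mid else 0] = mid                       # shrink interval
        even = not even
        bit_count += 1
        if bit_count == 5:                                     # emit one symbol
            code.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(code)
```

A shared geohash prefix implies spatial proximity, which is what makes the code usable as a cache key in place of raw coordinates.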
Taehoon Kim, Donggeun Kim, Sangjoon Lee
Vol. 16, No. 1, pp. 113-119, Feb. 2020
Keywords: Cell-Counting, Distance Transform, Radius Variation Analysis, Watershed Algorithm
Abstract: This study proposes a cell-counting algorithm for clusters of cells in cell analysis. The images required for cell counting are taken under a microscope. Existing cell-counting algorithms are reported to yield results of low accuracy because clusters are uneven in shape and size. To solve this problem, the proposed algorithm calculates the number of cells in a cluster by applying radius-variation analysis on top of the conventional distance transform and watershed algorithm. The algorithm is expected to yield reliable results when applied in the fields that require it.
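The distance-transform stage that the method builds on can be sketched on a toy binary mask (the watershed and radius-variation stages are omitted; this is an illustration, not the paper's code): each foreground pixel receives its Manhattan distance to the nearest background pixel, computed by multi-source BFS.

```python
from collections import deque

# Toy Manhattan distance transform by breadth-first search from all
# background pixels at once.
def distance_transform(mask):
    """mask: 2-D list of 0/1; returns distance of each 1-pixel to the nearest 0."""
    h, w = len(mask), len(mask[0])
    dist = [[0 if not mask[y][x] else None for x in range(w)] for y in range(h)]
    queue = deque((y, x) for y in range(h) for x in range(w) if dist[y][x] == 0)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist
```

Local maxima of this map approximate cell centers, which watershed then uses as seeds.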
Maolin Xu, Jiaxing Wei, Hongling Xiu
Vol. 16, No. 1, pp. 120-131, Feb. 2020
Keywords: Basic Rodrigues Rotation, Engineering Coordinate System, Instrument Coordinate System, Three-Axis Error, Three-Term Error
Abstract: In order to solve the problem of point cloud coordinate conversion for non-directional scanners, this paper proposes a basic Rodrigues rotation method. Specifically, we convert the six degree-of-freedom (6-DOF) rotation and translation matrix into uniaxial rotation matrices and establish the objective vector conversion equation based on the basic Rodrigues rotation scheme. We demonstrate the applicability of the new method using a bar-shaped emboss point cloud as experimental input, with the three-axis error and three-term error as validation indicators. The results suggest that the new method needs no linearization and is suitable for arbitrary rotation angles, while achieving seamless splicing of point clouds. Furthermore, the proposed coordinate conversion scheme outperforms the iterative closest point (ICP) conversion method. Therefore, the basic Rodrigues rotation method is not only a suitable tool for point cloud conversion but also provides a reference and guidance for similar projects.
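The Rodrigues rotation the method is built on is the standard formula for rotating a vector v about a unit axis k by angle theta, v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta)); a minimal sketch (textbook formula, not the paper's full conversion pipeline):

```python
import math

# Standard Rodrigues rotation of vector v about unit axis k by angle theta.
def rodrigues_rotate(v, k, theta):
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(a * b for a, b in zip(k, v))                       # k . v
    cross = (k[1] * v[2] - k[2] * v[1],                          # k x v
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    return tuple(v[i] * c + cross[i] * s + k[i] * dot * (1 - c) for i in range(3))
```

Because the formula is closed-form in theta, it needs no linearization, which matches the property claimed in the abstract.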
Mohamed Hanine, El-Habib Benlahmar
Vol. 16, No. 1, pp. 132-144, Feb. 2020
Keywords: Cloud Computing, Load Balancing, Quality of Service, Simulated Annealing, Virtual Machine, Workload
Abstract: Cloud computing is an emerging technology based on the concept of enabling data access from anywhere, at any time, from any platform. The exponential growth of cloud users has given rise to multiple issues; in particular, workload imbalance between the virtual machines (VMs) of data centers in a cloud environment greatly impacts overall performance. Our research focuses on load balancing a data center's VMs. It aims to reduce the degree of load imbalance between those VMs so as to provide better resource utilization and thus a higher quality of service. Our article balances the workload between the VMs in two phases. The first is determining the threshold of each VM before it can be considered overloaded. The second is allocating tasks to the VMs using an improved, faster version of the simulated annealing (SA) meta-heuristic. We focused mainly on the acceptance probability of SA: by modifying the content of the acceptance probability, we could ensure that SA offered a smart task distribution between the VMs in fewer loops than a classical use of SA.
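A hedged sketch of SA-based task-to-VM allocation follows; the paper's modified acceptance probability is not specified in the abstract, so this uses the classical exp(-delta/T) rule, and the cost function (max load minus min load) is an assumption:

```python
import math
import random

# Classical simulated annealing over task-to-VM assignments; illustrative only.
def anneal_assign(tasks, n_vms, t0=10.0, cooling=0.95, steps=500, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(n_vms) for _ in tasks]     # random initial mapping

    def imbalance(a):                                  # cost: load spread
        loads = [0.0] * n_vms
        for task, vm in zip(tasks, a):
            loads[vm] += task
        return max(loads) - min(loads)

    cost, temp = imbalance(assign), t0
    for _ in range(steps):
        i = rng.randrange(len(tasks))                  # move one task
        cand = list(assign)
        cand[i] = rng.randrange(n_vms)
        delta = imbalance(cand) - cost
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            assign, cost = cand, cost + delta          # accept the move
        temp *= cooling                                # cool down
    return assign, cost
```

Sharpening the acceptance probability (accepting fewer uphill moves earlier) is one way such a scheme can converge in fewer loops, which is the direction the paper describes.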
Yongjian Zhao, Bin Jiang
Vol. 16, No. 1, pp. 145-154, Feb. 2020
Keywords: Density, Estimator, Framework, Kurtosis, Likelihood, Separation
Abstract: Maximum likelihood (ML) is asymptotically the best estimator as the number of training samples approaches infinity. This paper derives an adaptive algorithm for the blind signal processing problem based on a gradient optimization criterion. A parametric density model is introduced into the ML framework through a parameterized generalized distribution family. After a limited number of parameters is specified, the density of a specific original signal can be approximated automatically by the constructed density function. Consequently, signal separation can be conducted without any prior information about the probability density of the desired original signal. Simulations on classical biomedical signals confirm the performance of the derived technique.
Xin-mei Wu, Fang-li Guan, Ai-jun Xu
Vol. 16, No. 1, pp. 155-170, Feb. 2020
Keywords: Corner Detection, Depth Extraction Model, Monocular Vision, Passive Ranging, Planar Homography
Abstract: Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance to an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system on a smartphone. Experimental results show that, for the same abscissa, the ordinates of image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance from the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range follows from the depth and this vertical distance. Experimental results show that ranging by this method achieves higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% for distances of 3-10 m. Compared with other methods based on monocular vision systems, this method needs no calibration before ranging and avoids the error caused by data fitting.
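A hypothetical sketch of the linear depth-extraction idea (the function names, the two-point fit, and the use of camera height are illustrative assumptions, not the paper's formulation): fit the imaging angle as a linear function of the image ordinate from two known conjugate points, then recover depth from the angle and the camera height.

```python
import math

# Assumed linear model: imaging_angle(y) = a1 + slope * (y - y1).
def fit_angle_model(p1, p2):
    """p1, p2: (ordinate_pixels, angle_radians) calibration pairs."""
    (y1, a1), (y2, a2) = p1, p2
    slope = (a2 - a1) / (y2 - y1)
    return lambda y: a1 + slope * (y - y1)

# Depth of a ground-plane point from its imaging angle and the camera height.
def depth_from_ordinate(y, angle_of, camera_height):
    return camera_height / math.tan(angle_of(y))
```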
Class-Labeling Method for Designing a Deep Neural Network of Capsule Endoscopic Images Using a Lesion-Focused Knowledge Model
Ye-Seul Park, Jung-Won Lee
Vol. 16, No. 1, pp. 171-183, Feb. 2020
Keywords: Capsule Endoscopy, Class-labeling Method, Deep Learning, Knowledge Base (KB), Ontology
Abstract: Capsule endoscopy has become an increasingly demanded diagnostic method among patients in recent years because of its ability to observe the small intestine, which is otherwise difficult to examine. An examination often lasts 12 to 14 hours, but significant frames constitute only about 10% of all frames. Deep learning has therefore been used to acquire significant frames automatically; for example, studies have tracked the position of the capsule (stomach, small intestine, etc.) or extracted lesion-related information (polyps, etc.). However, although grouping or labeling the training images according to similar features can improve the performance of a learning model, various attributes (such as the degree of wrinkles or the presence of valves) are not considered in conventional approaches. Therefore, we propose a class-labeling method for designing a learning model by constructing a knowledge model focused on the main lesions defined in standard terminologies for capsule endoscopy (minimal standard terminology, capsule endoscopy structured terminology). This method enables the design of a systematic learning model by labeling detailed classes that differentiate similar characteristics.
Computing Semantic Similarity between ECG-Information Concepts Based on an Entropy-Weighted Concept Lattice
Kai Wang, Shu Yang
Vol. 16, No. 1, pp. 184-200, Feb. 2020
Keywords: Concept Lattice Theory, ECG Concept, Entropy, Inclusion-Degree, Semantic Computing
Abstract: Similarity searching is a basic issue in information processing because of the large size of formal contexts and their complicated derivation operators. Recently, some researchers have focused on knowledge reduction methods using granular computing, in which suitable information granules are vital to characterizing the quantities of attributes and objects. To address this problem, a novel approach that obtains an entropy-weighted concept lattice with inclusion degree and similarity distance (ECLisd) is proposed. The approach computes combined weights by merging the inclusion degree and entropy degree between two concepts. In addition, another method measures the hierarchical distance by considering the different degrees of importance of each attribute. Finally, the rationality of ECLisd is validated via a comparative analysis.
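For concreteness, here is one common definition of the inclusion degree between two concept extents (a standard notion; the paper combines it with entropy weights, which are not reproduced here):

```python
# Inclusion degree of set a in set b: |a ∩ b| / |a| (1.0 for the empty set).
def inclusion_degree(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a) if a else 1.0
```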
Jinhyun Ahn, Dong-Hyuk Im
Vol. 16, No. 1, pp. 201-209, Feb. 2020
Keywords: Degree, Directed Acyclic Graphs, Topological Sort, 2-Hop Label
Abstract: The graph data structure is popular because it can intuitively represent real-world knowledge. Graph databases have attracted attention in academia and industry because they can be used to maintain graph data and allow users to mine knowledge. Determining reachability relationships between two nodes in a graph, termed reachability query processing, is an important functionality of graph databases. Online traversals, such as breadth-first and depth-first search, are inefficient for processing reachability queries on large-scale graphs, and labeling schemes have been proposed to overcome this disadvantage. The state of the art is the 2-hop labeling scheme: each node has in and out labels containing reachable node IDs as integers. Unfortunately, existing 2-hop labeling schemes generate huge label sizes because they only consider local features, such as degrees. In this paper, we propose a more efficient approach to reducing 2-hop label size. We consider the topological sort index, which is a global feature, and suggest a linear combination for using both local and global features. We conduct experiments over real-world and synthetic directed acyclic graph datasets and show that the proposed approach generates smaller labels than existing approaches.
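The 2-hop query mechanics can be illustrated in a degenerate form (an illustration, not the paper's hub-selection method): u reaches v iff the out-label of u and the in-label of v share a hop node. Here every out-label is the full reachable set, which makes queries trivially correct but labels large; the point of real 2-hop schemes, including the paper's, is choosing much smaller labels.

```python
# Naive 2-hop-style labels for a DAG: Lout(u) = all nodes reachable from u,
# Lin(v) = {v}. Query = label intersection.
def build_labels(edges, nodes):
    succ = {n: set() for n in nodes}
    for u, v in edges:
        succ[u].add(v)
    out = {n: {n} for n in nodes}
    changed = True
    while changed:                       # propagate to a fixed point
        changed = False
        for u in nodes:
            for v in succ[u]:
                if not out[v] <= out[u]:
                    out[u] |= out[v]
                    changed = True
    inn = {n: {n} for n in nodes}
    return out, inn

def reaches(u, v, out, inn):
    return bool(out[u] & inn[v])         # shared hop node => reachable
```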
Yanjiao Wang, Huanhuan Tao, Zhuang Ma
Vol. 16, No. 1, pp. 210-223, Feb. 2020
Keywords: Constrained Optimization Problems, ε-Constrained, Symbiotic Organisms Search
Abstract: Since constrained optimization algorithms easily fall into local optima and have weak search ability, an improved symbiotic organisms search algorithm with a mixed strategy based on an adaptive ε constraint (ε_SOSMS) is proposed in this paper. First, an adaptive ε-constrained method is presented to balance the relationship between constraint violation degrees and fitness. Second, the evolutionary strategies of the symbiotic organisms search algorithm are improved as follows: different best individuals are selected according to the proportions of feasible and infeasible individuals, making the evolutionary strategy more suitable for solving constrained optimization problems, and the individual comparison criterion is replaced with a population selection strategy, which better enhances the diversity of the population. Finally, numerical experiments on 13 benchmark functions show that ε_SOSMS not only converges to the global optimal solution but also has better robustness.
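The ε-constrained comparison that such methods build on can be sketched as follows (the standard criterion; the paper's adaptive schedule for ε is not reproduced): solutions whose constraint violation is within ε are compared by fitness, otherwise the less-violating solution wins.

```python
# Standard epsilon-constrained comparison for minimization problems.
def eps_better(f1, v1, f2, v2, eps):
    """True if solution 1 (fitness f1, violation v1) beats solution 2."""
    if v1 <= eps and v2 <= eps:
        return f1 < f2            # both "feasible enough": compare fitness
    if v1 == v2:
        return f1 < f2
    return v1 < v2                # otherwise prefer the smaller violation
```

Shrinking ε over the run gradually tightens feasibility, which is what an adaptive ε schedule controls.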
Unsoo Jang, Kun Ha Suh, Eui Chul Lee
Vol. 16, No. 1, pp. 224-237, Feb. 2020
Keywords: Banknote Recognition, Convolutional Neural Network, Machine Learning, Optical Character Recognition, Serial Number Recognition
Abstract: Recognition of banknote serial numbers is an important function for intelligent banknote counters and can be used for various purposes. However, previous character recognition methods are of limited use because of the font types of banknote serial numbers, the variation caused by soiling, and recognition speed. In this paper, we propose aspect-ratio-based character region segmentation and a convolutional neural network (CNN) based banknote serial number recognition method. To detect the character region, after banknote area detection and de-skewing are performed, each character's area is determined from its aspect ratio within the serial number candidate area. We then designed and compared four types of CNN models and determined the best model for serial number recognition. Experimental results showed a recognition accuracy of 99.85% per character, and recognition performance improved further with data augmentation. The banknotes used in the experiments were Indian rupees, which were badly soiled and use an unusual character font, so the method can be regarded as performing well. Recognition speed was also sufficient to run in real time on a device that counts 800 banknotes per minute.