Digital Library
Vol. 16, No. 6, Dec. 2020
Young-Sik Jeong, Jong Hyuk Park
Vol. 16, No. 6, pp. 1231-1237, Dec. 2020
https://doi.org/10.3745/JIPS.01.0061
Keywords: Future ICT, Graph Processing, Information and Communication Technologies, Quantum Communication
In recent years, future information and communication technology (ICT) has influenced and changed our lives. Without various ICT-based applications, we would have difficulty securely storing, efficiently processing, and conveniently communicating information. In the future, ICT will play a very important role in the convergence of computing, communication, and all other computational sciences and applications. ICT will also influence various fields, including communication, science, engineering, industry, business, law, politics, culture, and medicine. In this paper, we investigate the latest algorithms, processes, and services in future ICT fields.
Hamoud Alshammari, Sameh Abd El-Ghany, Abdulaziz Shehab
Vol. 16, No. 6, pp. 1238-1249, Dec. 2020
https://doi.org/10.3745/JIPS.04.0193
Keywords: Cloud computing, Fog Computing, E-Health, Electronic Health Records, Healthcare Data Analytics, Internet of Things (IoT)
Throughout the world, aging populations and doctor shortages have helped drive the increasing demand for smart healthcare systems. Recently, these systems have benefited from the evolution of the Internet of Things (IoT), big data, and machine learning. However, these advances result in the generation of large amounts of data, making healthcare data analysis a major issue. These data have a number of complex properties, such as high dimensionality, irregularity, and sparsity, which make efficient processing difficult to implement. These challenges are met by big data analytics. In this paper, we propose an innovative analytic framework for big healthcare data collected either from IoT wearable devices or from archived patient medical images. The proposed method efficiently addresses the data heterogeneity problem using middleware between heterogeneous data sources and MapReduce Hadoop clusters. Furthermore, the proposed framework enables the use of both fog computing and cloud platforms to handle the problems faced in online and offline data processing, data storage, and data classification. Additionally, it guarantees robust and secure management of patient medical data.
Gayoung Jung, Incheol Kim
Vol. 16, No. 6, pp. 1250-1260, Dec. 2020
https://doi.org/10.3745/JIPS.02.0147
Keywords: deep neural network, Multimodal Context, Relationship Detection, Scene Graph Generation
This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to accurately detect objects and relationships. In addition, in the proposed model, context features are embedded using graph neural networks to depict the dependencies between two related objects in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments using the Visual Genome benchmark dataset.
Xiuping Zheng, Meiling Li, Xiaoxia Yang
Vol. 16, No. 6, pp. 1261-1270, Dec. 2020
https://doi.org/10.3745/JIPS.03.0151
Keywords: Intercept Probability, Maximum Capacity, Nakagami Channel, Physical layer, Safety Performance
This paper investigates the physical layer security of industrial wireless sensor networks under an eavesdropping attack. An optimal sensor selection scheme based on the maximum channel capacity is proposed for transmission environments that experience Nakagami fading. The secure performance of the system is analyzed by comparing the intercept probabilities of the traditional round robin (TRR) and optimal sensor selection schemes. Simulation results show that the number of sensors and the eavesdropping ratio affect the convergence rate of the intercept probability. Additionally, the proposed optimal selection scheme converges faster than the TRR scheduling scheme for the same eavesdropping ratio and number of sensors. This observation remains valid when the Nakagami channel is simplified to a Rayleigh channel.
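The capacity-based selection rule described in this abstract can be illustrated with a minimal sketch. The SNR values, and the assumption that selection simply maximizes the instantaneous capacity log2(1 + SNR), are illustrative and not taken from the paper:

```python
import math

def select_sensor(snrs):
    """Pick the sensor whose instantaneous channel capacity
    C_i = log2(1 + SNR_i) is largest (capacity-optimal selection)."""
    return max(range(len(snrs)), key=lambda i: math.log2(1 + snrs[i]))

# Hypothetical per-sensor instantaneous SNRs under fading
snrs = [3.2, 7.9, 1.4, 5.6]
print(select_sensor(snrs))  # sensor index 1 has the highest capacity
```

Because log2(1 + x) is monotonic, the rule reduces to picking the largest SNR; in the paper's setting the SNRs would be drawn from Nakagami-distributed channel gains each scheduling slot.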
Hyun Sik Sim
Vol. 16, No. 6, pp. 1271-1280, Dec. 2020
https://doi.org/10.3745/JIPS.04.0195
Keywords: Fault Process and Equipment Analysis, Logistic Regression, Plastic Ball Grid Array Manufacturing Process, Yield Management
The yield and quality of a micromanufacturing process are important management factors. In real-world situations, it is difficult to achieve a high yield from a manufacturing process because the products are produced through multiple nanoscale manufacturing processes. Therefore, it is necessary to identify the processes and equipment that lead to low yields. This paper proposes an analytical method to identify the processes and equipment that cause a defect in the plastic ball grid array (PBGA) during the manufacturing process using logistic regression and stepwise variable selection. The proposed method was tested with the lot trace records of a real work site. The records included the sequence of equipment that the lot had passed through and the number of faults of each type in the lot. We demonstrated that the test results reflect the real situation in a PBGA manufacturing process, and the major equipment parameters were then controlled to confirm the improvement in yield; the yield improved by approximately 20%.
Hongqiang Jiao, Xinxin Wang, Wanning Ding
Vol. 16, No. 6, pp. 1281-1292, Dec. 2020
https://doi.org/10.3745/JIPS.03.0153
Keywords: Cloud computing, Subjective Weight, trust evaluation
More and more cloud computing services are being applied in various fields; however, it is difficult for users and cloud computing service platforms to establish trust in each other, and the trust value cannot be measured accurately or effectively. To solve this problem, we design a service-oriented cloud trust assessment model using a cloud model, together with a subjective preference weight allocation (SPWA) algorithm. A flexible weight model is developed by combining SPWA with the entropy method. To address the fuzziness and subjectivity of trust, the cloud model is used to measure the trust value of various cloud computing services, and the SPWA algorithm integrates each evaluation result to obtain the trust evaluation value of the entire cloud service provider.
Dong-Ho Lee, Yan Li, Byeong-Seok Shin
Vol. 16, No. 6, pp. 1293-1308, Dec. 2020
https://doi.org/10.3745/JIPS.04.0194
Keywords: feature extraction, Medical Imaging, Transfer Learning
In fine-tuning-based transfer learning, the size of the dataset may affect learning accuracy. When the dataset is small, fine-tuning-based transfer-learning methods incur computing costs as high as those for a large-scale dataset. We propose a mid-level feature extractor that retrains only the mid-level convolutional layers, resulting in increased efficiency and reduced computing costs. This mid-level feature extractor is likely to provide an effective alternative for training a small-scale medical image dataset. Its performance is compared with that of low- and high-level feature extractors, as well as the fine-tuning method. First, the mid-level feature extractor takes a shorter time to converge than the other methods. Second, it shows good accuracy in validation loss evaluation. Third, it obtains an area under the ROC curve (AUC) of 0.87 on an untrained test dataset that is very different from the training dataset. Fourth, it extracts clearer feature maps of the shape and parts of the chest in X-ray images than the fine-tuning method does.
Jun Li, Haoxiang Zhang, Zhongrui Ni
Vol. 16, No. 6, pp. 1309-1323, Dec. 2020
https://doi.org/10.3745/JIPS.04.0199
Keywords: Crowd Evacuation, Navigation Point, Pedestrian Evacuation Route, Social Force Model
Crowd evacuation simulation is an important research issue for designing reasonable building layouts and planning more effective evacuation routes. The social force model (SFM) is an important pedestrian movement model and is widely used in crowd evacuation simulations. The model can effectively simulate crowd evacuation behaviors in a simple scene, but in a multi-obstacle scene it can produce undesirable behaviors such as oscillating pedestrian evacuation trajectories, pedestrian stagnation, and poor evacuation routing. This paper analyzes the causes of these problems and proposes an improved SFM for complex multi-obstacle scenes. The new model adds navigation points and a shortest-walking-route principle to the SFM. Based on the proposed model, a crowd evacuation simulation system was developed, and crowd evacuation simulations were carried out in various scenes, including some with simple obstacles as well as those with multiple obstacles. Experiments show that pedestrians in the proposed model can effectively bypass obstacles and plan reasonable evacuation routes.
Je-Kwan Park, Tai-Myoung Chung
Vol. 16, No. 6, pp. 1324-1342, Dec. 2020
https://doi.org/10.3745/JIPS.04.0196
Keywords: collision avoidance, Drone, Global Path Planning, Local Path Planning, Planner, Rapidly-exploring Random Tree (RRT), RRT* (RRT star), Torus, Unmanned Aerial Vehicle (UAV), Velocity Obstacle
Various modified algorithms of the rapidly-exploring random tree (RRT) have been proposed previously. However, compared with RRT algorithms for collision avoidance with global and static obstacles, it is not easy to find an RRT-based collision avoidance and local path re-planning algorithm for dynamic obstacles. In this study, we propose boundary-RRT*, a novel algorithm that can be applied to aerial vehicles for collision avoidance and path re-planning in a three-dimensional environment. The algorithm not only bounds the configuration space but also includes an implicit bias for the bounded configuration space. Therefore, it can create a path with a natural curvature without defining a bias function. Furthermore, the exploration space is reduced to a half-torus by combining the algorithm with simple right-of-way rules. When the distance is defined as a cost, numerical analysis shows that the standard deviation (σ) of the proposed algorithm approaches 0 as the number of samples per unit time increases and ε (the maximum length of an edge in the tree) decreases. This means that a stable waypoint list can be generated using the proposed algorithm. By increasing real-time performance through simple calculation and the bounding of the configuration space, the algorithm proves suitable for collision avoidance of aerial vehicles and re-planning of local paths.
Liquan Zhao, Ke Ma
Vol. 16, No. 6, pp. 1343-1358, Dec. 2020
https://doi.org/10.3745/JIPS.03.0152
Keywords: Compressed sensing, Computed Correlation, reconstruction algorithm, Weak Threshold
In the stagewise arithmetic orthogonal matching pursuit algorithm, the weak threshold used in sparsity estimation is determined via the maximum number of iterations. Different maximum iteration counts correspond to different thresholds and affect the performance of the algorithm. To solve this problem, we propose an improved variable weak threshold based on the stagewise arithmetic orthogonal matching pursuit algorithm. Our proposed algorithm uses the residual error value to control the weak threshold: as the residual value decreases, the threshold value continuously increases, so that the atoms contained in the atomic set are closer to the real sparsity value, improving the reconstruction accuracy. In addition, we improved the generalized Jaccard coefficient to replace the inner product method used in the stagewise arithmetic orthogonal matching pursuit algorithm; the proposed algorithm uses the covariance to replace the joint expectation of two variables in the generalized Jaccard coefficient. The improved generalized Jaccard coefficient yields a more accurate calculation of the correlation between the measurement matrices, and the residual is more accurate, which reduces the possibility of selecting the wrong atoms. We demonstrate through simulations that the proposed algorithm produces better reconstruction results for both one-dimensional signals and two-dimensional image signals.
Jisu Kwon, Moon Gi Seok, Daejin Park
Vol. 16, No. 6, pp. 1359-1371, Dec. 2020
https://doi.org/10.3745/JIPS.01.0060
Keywords: Embedded System, error correction code, GPU-Based Acceleration, Hamming Code, Sparse Matrix–Vector Multiplication
When transmitting and receiving large amounts of data, reliable data communication is crucial for the normal operation of a device and to prevent abnormal operations caused by errors. Therefore, in this paper, it is assumed that an error correction code (ECC) that can detect and correct errors by itself is used in an environment where massive data is received sequentially. Because an embedded system has limited resources, such as a low-performance processor or a small memory, it requires efficient operation of applications. In this paper, we propose an accelerated ECC-decoding technique that uses a graphics processing unit (GPU) built into the embedded system when receiving a large amount of data. In the matrix–vector multiplication that forms the Hamming code used as a function of the ECC operation, the matrix is expressed in compressed sparse row (CSR) format, and a sparse matrix–vector product is used. The multiplication operation is performed in the kernel of the GPU, and the Hamming code computation is also accelerated so that the ECC operation can be performed in parallel. The proposed technique is implemented with CUDA on a GPU-embedded target board, the NVIDIA Jetson TX2, and compared with the execution time of the CPU.
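The CSR-based sparse matrix–vector product at the core of this approach can be sketched on the CPU. The Hamming(7,4) parity-check matrix and the GF(2) reduction are standard; this plain-Python version only illustrates the CSR data layout and is not the authors' CUDA kernel:

```python
# Hamming(7,4) parity-check matrix H stored in CSR form; the syndrome
# s = H·r (mod 2) is computed with a sparse matrix–vector product.
# All stored values are 1, so each row reduces to XOR-ing selected bits.
#
# CSR arrays for H = [[1,0,1,0,1,0,1],
#                     [0,1,1,0,0,1,1],
#                     [0,0,0,1,1,1,1]]
indptr  = [0, 4, 8, 12]
indices = [0, 2, 4, 6,  1, 2, 5, 6,  3, 4, 5, 6]

def syndrome(r):
    """Sparse H·r over GF(2): XOR the received bits selected by each CSR row."""
    s = []
    for row in range(len(indptr) - 1):
        acc = 0
        for k in range(indptr[row], indptr[row + 1]):
            acc ^= r[indices[k]]
        s.append(acc)
    return s

r_ok = [1, 0, 1, 1, 0, 1, 0]      # a valid codeword: syndrome is all zero
r_err = r_ok[:]
r_err[4] ^= 1                     # flip bit at index 4 (position 5)
print(syndrome(r_ok), syndrome(r_err))  # [0, 0, 0] and [1, 0, 1] -> error at position 5
```

On the GPU, each row of this loop becomes an independent thread, which is why the CSR layout parallelizes the ECC check naturally.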
Seong-Hwan Cho, Seung-Hee Kim
Vol. 16, No. 6, pp. 1372-1390, Dec. 2020
https://doi.org/10.3745/JIPS.04.0200
Keywords: Collaboration, Front-end, Project Management, UI, UX, UI/UX
An attractive user interface (UI) design with a clear user experience (UX) is key to the success of applications. Therefore, software development projects require very close collaboration between SI developers and front-end service developers. However, existing software development methodologies provide inadequate development processes and work standards for such collaboration. This survey derived 13 risk factors in UI/UX development from 113 risk factors of IT projects through a questionnaire and factor analysis, and proposed a collaboration-based UI/UX development model that can eliminate or mitigate six risks with high weights and reliability. To extract risk factors with high reliability, factor and reliability analyses were performed to extract the 13 major risks, and based on expert opinions and the results of correlation analysis, the UI/UX development stages were classified into planning, design, and implementation. The causal relationships between risks were verified through regression analysis. This study is the first to expertly analyze major risks based on collaboration in UI/UX development and to derive a theoretical basis that can be used in project risk management. These findings are expected to provide a basis for research on development methodologies for higher levels of front-end services and for constructing rational collaboration systems between SI practitioners and front-end service providers.
Qiumei Zheng, Nan Liu, Baoqin Cao, Fenghua Wang, Yanan Yang
Vol. 16, No. 6, pp. 1391-1406, Dec. 2020
https://doi.org/10.3745/JIPS.02.0150
Keywords: Color Image, Transform Domain, Voting Strategy, Zero Watermarking
A zero-watermarking algorithm in the transform domain based on RGB channels and a voting strategy is proposed, achieving copyright protection for color images through ownership registration and identification. In ownership registration, the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) are used together because of their multi-resolution, energy concentration, and stability characteristics, which help improve the robustness of the proposed algorithm. To take full advantage of the characteristics of the image, we use the three channels R, G, and B of a color image to construct three master shares, instead of using data from only one channel. Then, to improve security, each master share is superimposed with the copyright watermark encrypted by the owner's key to generate an ownership share. When ownership is authenticated, copyright watermarks are extracted from the three channels of the disputed image; using voting decisions, the final copyright information is determined by comparing the three extracted watermarks bit by bit. Experimental results show that the proposed zero-watermarking scheme is robust to conventional attacks such as JPEG compression, noise addition, filtering, and tampering, and has high stability across various common color images.
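The bit-by-bit voting decision can be sketched as a simple majority vote over the three extracted watermarks; the watermark bit patterns here are hypothetical:

```python
def vote(w_r, w_g, w_b):
    """Bitwise majority vote over watermarks extracted from the R, G, B channels:
    each output bit is 1 if at least two of the three channel bits are 1."""
    return [1 if (a + b + c) >= 2 else 0 for a, b, c in zip(w_r, w_g, w_b)]

# Hypothetical extracted watermark bits; the G channel is corrupted
# in two positions by an attack on the disputed image.
w_r = [1, 0, 1, 1, 0]
w_g = [1, 1, 1, 0, 0]
w_b = [1, 0, 1, 1, 0]
print(vote(w_r, w_g, w_b))  # majority recovers [1, 0, 1, 1, 0]
```

The vote tolerates an error in any single channel at each bit position, which is what gives the three-channel construction its robustness over a single-channel scheme.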
Minji Seo, Ki Yong Lee
Vol. 16, No. 6, pp. 1407-1423, Dec. 2020
https://doi.org/10.3745/JIPS.04.0197
Keywords: Graph Embedding, Graph Similarity, LSTM Autoencoder, Weighted Graph Embedding, Weighted Graph
A graph is a data structure consisting of nodes and edges between these nodes. Graph embedding generates a low-dimensional vector for a given graph that best represents the characteristics of the graph. Recently, there have been studies on graph embedding, especially those using deep learning techniques. However, most deep learning-based graph embedding techniques to date have focused on unweighted graphs. Therefore, in this paper, we propose a graph embedding technique for weighted graphs based on long short-term memory (LSTM) autoencoders. Given weighted graphs, we traverse each graph to extract node-weight sequences from it. Each node-weight sequence represents a path in the graph consisting of nodes and the weights between them. We then train an LSTM autoencoder on the extracted node-weight sequences and encode each node-weight sequence into a fixed-length vector using the trained LSTM autoencoder. Finally, for each graph, we collect the encoding vectors obtained from the graph and combine them to generate the final embedding vector for the graph. These embedding vectors can be used to classify weighted graphs or to search for similar weighted graphs. Experiments on synthetic and real datasets show that the proposed method is effective in measuring the similarity between weighted graphs.
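The node-weight sequence extraction step can be sketched as follows. The adjacency-list representation, the depth-first traversal policy, and the path-length cutoff are illustrative assumptions, since the abstract does not specify the traversal:

```python
def node_weight_sequences(adj, start, max_len=4):
    """Enumerate simple paths from `start` as interleaved node/weight lists
    (node, weight, node, weight, ...), up to `max_len` nodes per path."""
    seqs = []

    def dfs(node, path, visited):
        if len(path) > 1:              # a path with at least one edge
            seqs.append(path[:])
        if (len(path) + 1) // 2 >= max_len:
            return
        for nxt, w in adj.get(node, []):
            if nxt not in visited:     # keep paths simple (no revisits)
                dfs(nxt, path + [w, nxt], visited | {nxt})

    dfs(start, [start], {start})
    return seqs

# Toy weighted graph: a -0.5-> b, a -2.0-> c, b -1.5-> c
adj = {"a": [("b", 0.5), ("c", 2.0)], "b": [("c", 1.5)]}
for s in node_weight_sequences(adj, "a"):
    print(s)
```

Each printed list is one training sample for the LSTM autoencoder; because weights appear between the nodes they connect, the encoder sees edge weights in their path context rather than as a separate feature.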
Ting-ting Yang, Su-yin Zhou, Ai-jun Xu, Jian-xin Yin
Vol. 16, No. 6, pp. 1424-1436, Dec. 2020
https://doi.org/10.3745/JIPS.02.0151
Keywords: Adaptive Mean Shift, Image Abstraction, Image Segmentation, Mathematical Morphology, Tree Segmentation
Although huge progress has been made in image segmentation, there are still no efficient segmentation strategies for tree images taken in natural environments with complex backgrounds. To address these problems, we propose a method for tree image segmentation that combines adaptive mean shift with image abstraction. Our approach performs better than others because it focuses mainly on the background of the image and the characteristics of the tree itself. First, we abstract the original tree image using bilateral filtering and an image pyramid from multiple perspectives, which reduces the influence of the background and tree canopy gaps on clustering. Spatial location and gray-scale features are obtained by step detection and the insertion rule method, respectively. Bandwidths calculated from the spatial location and gray-scale features are then used to determine the size of the Gaussian kernel function in mean shift clustering. Furthermore, the flood fill method is employed to fill the clustering results and highlight the region of interest. To prove the effectiveness of tree image abstraction for image clustering, we compared different abstraction levels and achieved the optimal clustering results. For our algorithm, the average segmentation accuracy (SA), over-segmentation rate (OR), and under-segmentation rate (UR) of the crown are 91.21%, 3.54%, and 9.85%, respectively; the average values for the trunk are 92.78%, 8.16%, and 7.93%. Compared experimentally with other popular tree image segmentation methods, our method requires no human interaction and shows a higher SA. This work also shows promising application prospects in the visual reconstruction and factor measurement of trees.
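The Gaussian-kernel mean shift at the heart of this method can be illustrated on one-dimensional gray values. The fixed bandwidth and toy data are assumptions for illustration, whereas the paper computes adaptive bandwidths from spatial and gray-scale features:

```python
import math

def mean_shift_1d(values, bandwidth, iters=50):
    """Shift each gray-level sample toward the local Gaussian-weighted mean;
    samples that converge to the same mode belong to one cluster."""
    modes = []
    for x in values:
        for _ in range(iters):
            weights = [math.exp(-((x - v) / bandwidth) ** 2) for v in values]
            x = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        modes.append(round(x, 1))
    return modes

gray = [10, 12, 11, 90, 92, 91]   # two clear intensity groups (tree vs. background)
print(mean_shift_1d(gray, bandwidth=5.0))
```

With a bandwidth of 5, the cross-group kernel weights vanish, so the samples converge to the two local modes; choosing the bandwidth per feature, as the paper does, is what makes the clustering adaptive.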
Jaehyeon Cho, Nammee Moon
Vol. 16, No. 6, pp. 1437-1446, Dec. 2020
https://doi.org/10.3745/JIPS.02.0149
Keywords: DCGAN, NLTK, OCR
For the last few years, smart devices have begun to occupy an essential place in the lives of children by allowing them to access a variety of language activities and books, and various studies are being conducted on using smart devices for education. Our study extracts images and text from kids' books with smart devices and matches the extracted images and text to create new images that are not represented in these books. The proposed system will enable the use of smart devices as educational media for children. A deep convolutional generative adversarial network (DCGAN) is used to generate a new image. Three steps are involved in training the DCGAN. First, 1,164 images under 11 titles from ImageNet are learned. Second, Tesseract, an optical character recognition engine, is used to extract images and text from kids' books, and the text is classified using a morpheme analyzer. Third, the classified word class is matched with the latent vector of the image. The trained DCGAN creates an image associated with the text.
Tomoya Kawakami
Vol. 16, No. 6, pp. 1447-1458, Dec. 2020
https://doi.org/10.3745/JIPS.04.0198
Keywords: Communication Load Reduction, Distributed Data Management, Interval Query, Ring-Shaped Overlay Network, Sensor data, Temporal Data
This paper describes a structured overlay network scheme based on multiple different time intervals. Many types of data (e.g., sensor data) can be requested at specific time intervals that depend on the user and the system. These queries are referred to as "interval queries." A method for constructing an overlay network that efficiently processes interval queries based on multiple different time intervals is proposed herein. The proposed method assumes a ring topology and assigns nodes to a keyspace based on one-dimensional time information. To reduce the number of forwarded messages for queries, each node constructs shortcut links for each interval that users tend to request. This study confirmed that the proposed method reduces the number of messages needed to process interval queries. The contributions of this study include the clarification of interval queries with specific time intervals; establishment of a structured overlay network scheme based on multiple different time intervals; and experimental verification of the scheme in terms of communication load, delay, and maintenance cost.
Sushil Kumar Singh, Abir El Azzaoui, Mikail Mohammed Salim, Jong Hyuk Park
Vol. 16, No. 6, pp. 1459-1478, Dec. 2020
https://doi.org/10.3745/JIPS.03.0154
Keywords: Computing Security and Privacy, Quantum, Communication, Sensor, Smart Applications
In the last few years, quantum communication technology and services have been developed for various advanced applications to secure the sharing of information from one device to another. This is a classical commercial medium in which several Internet of Things (IoT) devices are connected to information and communication technology (ICT) and can communicate information through quantum systems. Digital communications for future networks face various challenges, including data traffic, low latency, high-broadband deployment, security, and privacy. Quantum communication, quantum sensors, and quantum computing are solutions to these issues. The secure transaction of data is the foremost essential need of smart advanced applications in the future. In this paper, we propose a quantum communication model system for future ICT and its methodological flow. We show how to use blockchain in quantum computing and quantum cryptography to provide security and privacy in recent information sharing. We also discuss the latest global research trends in quantum communication technology in several countries, including the United States, Canada, the United Kingdom, and Korea. Finally, we discuss some open research challenges for quantum communication technology in various areas, including the quantum internet and quantum computing.
Chenbo Liu
Vol. 16, No. 6, pp. 1479-1494, Dec. 2020
https://doi.org/10.3745/JIPS.02.0148
Keywords: Adaptive Residual Interpolation (ARI), Directional Difference, Image Demosaicking, Iterative Residual Interpolation (IRI), Minimized-Laplacian Residual Interpolation (MLRI), Residual Interpolation (RI)
As an important part of image processing, image demosaicking has been widely researched, and it is necessary to propose an efficient interpolation algorithm with good visual quality and performance. To address the limitations of residual interpolation (RI), minimized-Laplacian RI (MLRI), and iterative RI (IRI), this paper focuses on adaptive RI (ARI) and proposes an improved ARI (IARI) algorithm that obtains more distinct R, G, and B colors in the images. The proposed scheme fully considers the brightness information and edge information of the image. Since the ARI algorithm is not completely adaptive, the IARI algorithm executes the ARI algorithm twice on the R and B components according to the directional difference, which achieves an adaptive algorithm for all color components. Experimental results show that the improved method outperforms four existing methods in both subjective and objective assessment, especially in complex edge areas and in color brightness recovery.