Digital Library
Vol. 17, No. 5, Oct. 2021
Dan-Bi Cho, Hyun-Young Lee, Seung-Shik Kang
Vol. 17, No. 5, pp. 867-878, Oct. 2021
https://doi.org/10.3745/JIPS.02.0163
Keywords: Context Awareness, domain adaptation, Multi-channel LSTM, User Intention
Abstract: In context-awareness and user intention tasks, dataset construction is expensive because domain-specific data are required. Although pretraining on a large corpus can effectively resolve the lack of data, it ignores domain knowledge. Herein, we concentrate on data domain knowledge while addressing data scarcity and accordingly propose a multi-channel long short-term memory (LSTM) model. Because the multi-channel LSTM integrates pretrained vectors of both task and general knowledge, it effectively prevents catastrophic forgetting between the two types of vectors and represents the context as a set of features. To evaluate the proposed model against the baseline, a single-channel LSTM, we performed two tasks: voice phishing detection with context awareness and movie review sentiment classification. The results verified that the multi-channel LSTM outperforms the single-channel LSTM in both tasks. We further experimented with different multi-channel LSTMs depending on the domain and data size of the general knowledge and confirmed the effect of the multi-channel LSTM in integrating the two types of knowledge, from downstream task data and raw data, to overcome the lack of data.
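As an illustration of the two-channel idea, the PyTorch sketch below feeds the same token sequence through two LSTM channels, one per pretrained embedding table (task knowledge and general knowledge), and concatenates the channel outputs for classification. The layer sizes, frozen embeddings, and classifier head are assumptions, not the authors' exact architecture.

```python
# A minimal sketch of a multi-channel LSTM, assuming one channel per
# pretrained embedding source; not the paper's exact model.
import torch
import torch.nn as nn

class MultiChannelLSTM(nn.Module):
    def __init__(self, task_emb, general_emb, hidden=128, num_classes=2):
        super().__init__()
        # Each channel keeps its own frozen pretrained embedding table so the
        # two knowledge sources cannot overwrite one another.
        self.task_emb = nn.Embedding.from_pretrained(task_emb, freeze=True)
        self.gen_emb = nn.Embedding.from_pretrained(general_emb, freeze=True)
        self.task_lstm = nn.LSTM(task_emb.size(1), hidden, batch_first=True)
        self.gen_lstm = nn.LSTM(general_emb.size(1), hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        _, (h_task, _) = self.task_lstm(self.task_emb(token_ids))
        _, (h_gen, _) = self.gen_lstm(self.gen_emb(token_ids))
        # Concatenate the final hidden states of both channels as the context
        # representation ("set of features") before classification.
        features = torch.cat([h_task[-1], h_gen[-1]], dim=-1)
        return self.classifier(features)

# Usage with random stand-in embeddings (vocab of 1,000 words, 100-dim vectors)
model = MultiChannelLSTM(torch.randn(1000, 100), torch.randn(1000, 100))
logits = model(torch.randint(0, 1000, (4, 20)))  # batch of 4, length-20 texts
```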
Guangjie Liu, Jinlong Zhu, Qiucheng Sun, Jiaze Hu, Hao Yu
Vol. 17, No. 5, pp. 879-891, Oct. 2021
https://doi.org/10.3745/JIPS.04.0222
Keywords: Game, Monte Carlo, Route Optimization
Abstract: With improvements in living conditions, an increasing number of people are choosing to spend their time traveling. Comfortable tour routes are affected by the season, time, and other local factors. In this paper, the influencing factors and principles of scenic spots are analyzed, a model for finding available routes is built, and a multi-route choice model based on game theory with a path recommendation weight is developed. A Monte Carlo analysis of a tourist route under fixed access point conditions accounts for uncertainties such as the season, start time, end time, stay time, number of scenic spots, destination, and start point. We use the Dijkstra method to obtain multiple path plans and calculate each path's evaluation score using the Monte Carlo method. Finally, based on the user's preferences for the input path, game theory produces a path ordering from which the user can choose. The proposed approach achieves state-of-the-art performance at the pseudo-imperial palace. Compared with other methods, it can avoid congestion and reduce the time cost.
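The Monte Carlo scoring step can be sketched as follows: for each candidate route (e.g., one of the Dijkstra path plans), uncertain factors such as the start delay and the stay time per scenic spot are sampled repeatedly, and the mean score is used for ranking. The score function and sampling distributions below are illustrative assumptions.

```python
# A hedged sketch of Monte Carlo route scoring under assumed distributions.
import random

def monte_carlo_score(route, n_trials=1000):
    """route: list of (travel_minutes, mean_stay_minutes) legs."""
    total = 0.0
    for _ in range(n_trials):
        elapsed = random.uniform(0, 30)            # uncertain start delay
        for travel, mean_stay in route:
            elapsed += travel + random.gauss(mean_stay, mean_stay * 0.2)
        total += 1.0 / elapsed                     # shorter tours score higher
    return total / n_trials

candidates = {"route A": [(15, 40), (10, 60)], "route B": [(25, 30), (5, 45)]}
ranking = sorted(candidates, key=lambda r: monte_carlo_score(candidates[r]),
                 reverse=True)
print(ranking)  # paths ordered for user choice, cf. the game-theoretic step
```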
Chaoxian Dong
Vol. 17, No. 5, pp. 892-904, Oct. 2021
https://doi.org/10.3745/JIPS.02.0164
Keywords: Bottleneck, Image Semantic Segmentation, Improved ENet, MIOU, MPA, SE Module
Abstract: An image semantic segmentation model based on an improved ENet network is proposed to address the low accuracy of image semantic segmentation in complex environments. Firstly, pruning and convolution optimization operations are performed on the ENet network; that is, the network structure is adjusted for better segmentation results by reducing the convolution operations in the decoder and introducing a bottleneck convolution structure. A squeeze-and-excitation (SE) module is then integrated into the optimized ENet network, improving the segmentation accuracy of small-scale targets by automatically learning the importance of each feature channel. Finally, the method was verified on a public dataset. It outperforms the existing comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIOU) values, and it guarantees both segmentation accuracy and operational efficiency within a short running time.
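For reference, a minimal squeeze-and-excitation (SE) block of the kind integrated into the optimized ENet looks like this in PyTorch: global average pooling ("squeeze"), a two-layer bottleneck ("excitation"), and channel-wise rescaling. The reduction ratio of 16 is the common default and is assumed here.

```python
# A minimal SE block sketch; layer sizes are common defaults, not the paper's.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        weights = self.fc(x.mean(dim=(2, 3)))  # squeeze to (N, C), then excite
        return x * weights[:, :, None, None]   # reweight each feature channel

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # output shape is preserved
```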
Jieun Kang, Svetlana Kim, Jae-Ho Kim, Nak-Myoung Sung, Yong-Ik Yoon
Vol. 17, No. 5, pp. 905-917, Oct. 2021
https://doi.org/10.3745/JIPS.01.0080
Keywords: Balancing, Collaboration Edge Computing, context-awareness, Data-Intensive Offloading, IoT, RSDO, Task-Intensive Offloading
Abstract: In recent years, edge computing technology, consisting of numerous Internet of Things (IoT) devices with embedded sensors, has improved significantly for monitoring, detection, and management in an environment where big data is commercialized. The main focus of edge computing is data optimization and task offloading, driven by data- and task-intensive application development. However, existing offloading approaches do not consider the correlations and associations between data and tasks in edge computing. Collaborative offloading segmented without considering the interaction between data and tasks can lead to data loss and delays when moving from edge to edge. This article proposes a range segmentation of dynamic offloading (RSDO) algorithm that isolates the offloading range and the collaborating edge nodes around each edge node's function to address this issue. The RSDO algorithm groups highly correlated data and tasks according to the cause of the overload and dynamically distributes offloading ranges according to the state of the cooperating nodes. The segmentation improves the overall performance of edge nodes, balances edge computing, and reduces data loss and average latency.
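The abstract does not specify RSDO's internals, so the following Python sketch only illustrates the general idea under stated assumptions: data and tasks sharing a correlation key are kept together as one offloading range, and each range is assigned to the least-loaded cooperating node, whose state is updated as ranges are placed.

```python
# An illustrative grouping-and-placement sketch; keys and the load model are
# assumptions, not the RSDO algorithm itself.
from collections import defaultdict

def segment_and_offload(tasks, node_loads):
    """tasks: list of (correlation_key, cost); node_loads: {node: load}."""
    groups = defaultdict(list)
    for key, cost in tasks:            # correlated tasks stay in one range
        groups[key].append(cost)
    plan = {}
    for key, costs in sorted(groups.items(), key=lambda g: -sum(g[1])):
        target = min(node_loads, key=node_loads.get)   # least-loaded node
        node_loads[target] += sum(costs)               # update its state
        plan[key] = target
    return plan

print(segment_and_offload([("cam1", 5), ("cam1", 3), ("cam2", 4)],
                          {"edge-A": 2, "edge-B": 6}))
```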
Qiang Xiao, Shuangshuang Yao, Mengjun Qiang
Vol. 17, No. 5, pp. 918-932, Oct. 2021
https://doi.org/10.3745/JIPS.04.0223
Keywords: Collaboration Degree Evaluation, Fuzzy Matter Element, Supply Chain
Abstract: Evaluating the collaboration of upstream and downstream enterprises in the manufacturing supply chain is important for improving their synergistic effect. From the supply chain perspective, this study establishes an evaluation model of manufacturing enterprise collaboration on the basis of fuzzy entropy according to synergistic theory. The coordinated capital, business, and information flows between the enterprise and its partners are treated as subsystems, and the enterprise itself is studied as a composite system. From the three subsystems, collaboration evaluation indices are selected as order parameters. The compound fuzzy matter-element matrix is established using an improved algorithm, and the subordinate membership and standard deviation fuzzy matter-element matrices are constructed. Index weights are determined using the entropy weight method, and the closeness of each matter element is then calculated. An empirical analysis of 2011–2017 data from a representative of the home appliance industry, Gree Electric Appliances Inc. of Zhuhai, shows that collaboration between the company and its upstream and downstream enterprises exhibits a good trend, but the coordinated development has not yet stabilized. Gree Electric Appliances Inc. of Zhuhai needs to strengthen the synergy with upstream and downstream enterprises in terms of capital, business, and information flows to enhance competitiveness. Experimental results show that this method can provide precise suggestions for enterprises, improve the degree of collaboration, and accelerate the development and upgrading of the manufacturing industry.
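The entropy weight step can be illustrated with a short sketch: each column of the (years x indices) matrix is normalized, its information entropy is computed, and the index weights follow from the degrees of divergence. The toy data matrix is invented for illustration.

```python
# A sketch of the entropy weight method; the 3x3 index matrix is made up.
import numpy as np

def entropy_weights(X):
    """X: (years x indices) matrix of positive index values."""
    P = X / X.sum(axis=0)                          # normalize each index column
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P + 1e-12)).sum(axis=0)
    d = 1.0 - entropy                              # degree of divergence
    return d / d.sum()

X = np.array([[0.52, 1.8, 3.1],
              [0.61, 2.0, 2.9],
              [0.70, 2.4, 3.4]])                   # 3 years x 3 indices (toy)
print(entropy_weights(X))                          # weights sum to 1
```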
Jianpo Li, Qiwei Wang
Vol. 17, No. 5, pp. 933-946, Oct. 2021
https://doi.org/10.3745/JIPS.03.0166
Keywords:
Abstract: In the non-orthogonal multiple access (NOMA) system, multiple user signals are superimposed on a single carrier in a non-orthogonal manner, which results in interference between non-orthogonal users as well as noise interference in the channel. To solve this problem, an improved algorithm combining regularized zero-forcing (RZF) precoding with minimum mean square error-serial interference cancellation (MMSE-SIC) detection is proposed. The algorithm uses RZF precoding combined with the successive over-relaxation (SOR) method at the base station to preprocess the source signal, which balances the effects of non-orthogonal inter-user interference and noise interference and generates a precoded signal suitable for transmission over the channel. At the receiver, the MMSE-SIC detection algorithm further eliminates the interference in the received superimposed signal and reduces the computational complexity through QR decomposition of the matrix. The simulation results show that the proposed joint detection algorithm is well suited to eliminating the interference of non-orthogonal users and has low complexity and a fast convergence speed. Compared with other traditional methods, the improved method has a lower error rate under different signal-to-interference-plus-noise ratios (SINR).
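The RZF precoding stage can be sketched compactly: W = H^H (H H^H + alpha*I)^{-1}, which trades inter-user interference suppression against noise enhancement. The paper approximates the matrix inverse with the SOR method; the sketch below uses a direct inverse instead, and the regularization value is an assumption.

```python
# An RZF precoding sketch; SOR is replaced by a direct solve for brevity.
import numpy as np

def rzf_precoder(H, alpha):
    """H: (users x antennas) channel matrix."""
    K = H.shape[0]
    G = H @ H.conj().T + alpha * np.eye(K)
    W = H.conj().T @ np.linalg.inv(G)          # (antennas x users) precoder
    return W / np.linalg.norm(W, "fro")        # meet the power constraint

H = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
W = rzf_precoder(H, alpha=0.1)
print(np.round(np.abs(H @ W), 2))              # near-diagonal: users separated
```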
Yan Bian, Yusheng Gong, Guopeng Ma, Ting Duan
Vol. 17, No. 5, pp. 947-959, Oct. 2021
https://doi.org/10.3745/JIPS.02.0165
Keywords: GA-OTSU, GF-2 RS Image, Morphology, Sobel Edge Detection, Water Edges
Abstract: To address the low accuracy of automatic water-boundary extraction of islands from high-resolution, three-band GF-2 remote sensing images, this paper proposes a new method for automatically extracting island water edges from GF-2 imagery based on a genetic algorithm (GA). Firstly, the GA-OTSU threshold segmentation algorithm, which combines the GA with the maximal inter-class variance method (OTSU), is used to segment the island in the preprocessed GF-2 remote sensing image. Then, a morphological closing operation fills the holes in the segmented binary image, and the boundary is extracted with the Sobel edge detection operator to obtain the water edge. The experimental results showed that the proposed method outperforms the comparison methods in both segmentation performance and the accuracy of island water-boundary extraction from GF-2 remote sensing images.
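The segmentation-closing-edge pipeline can be sketched with OpenCV as below, where OpenCV's built-in Otsu threshold stands in for the GA-optimized OTSU search, and "island.tif" is a placeholder for a preprocessed GF-2 band.

```python
# A pipeline sketch of the described steps; plain Otsu approximates GA-OTSU.
import cv2
import numpy as np

gray = cv2.imread("island.tif", cv2.IMREAD_GRAYSCALE)  # placeholder input
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)      # fill holes
sx = cv2.Sobel(closed, cv2.CV_64F, 1, 0, ksize=3)               # Sobel edges
sy = cv2.Sobel(closed, cv2.CV_64F, 0, 1, ksize=3)
edges = np.uint8(np.clip(np.hypot(sx, sy), 0, 255))             # water edge
cv2.imwrite("water_edge.png", edges)
```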
Ji-Woon Kang, Sung-Ryong Do
Vol. 17, No. 5, pp. 960-971, Oct. 2021
https://doi.org/10.3745/JIPS.04.0224
Keywords: Borich Needs Assessment, Education Program, Locus for Focus Model, Needs Analysis, Software Safety
Abstract: As the era of the Fourth Industrial Revolution begins, the importance of software safety is increasing, but systematic educational curricula and trained professional engineers remain insufficient. The purpose of this research is to identify the high-priority elements of a software safety education program through needs analysis. To this end, 74 candidate elements were derived through a content analysis of the literature and a nominal group technique (NGT) process with five software safety professionals from various industries in South Korea. An online survey targeting potential education participants, including industrial workers and students, measured the current and required level of each element. Using descriptive statistics, t-tests, the Borich needs assessment, and the Locus for Focus model, 16 high-priority elements were derived for the software safety education program. Based on the results, suggestions are made for developing a more effective software safety education program.
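For reference, the Borich needs assessment ranks each element by need = mean(required - current) x mean(required), computed from the survey ratings. The sketch below uses invented elements and ratings purely for illustration.

```python
# A worked Borich needs assessment sketch on invented survey data.
import numpy as np

def borich_need(required, current):
    required, current = np.asarray(required), np.asarray(current)
    return (required - current).mean() * required.mean()

elements = {
    "hazard analysis":  ([5, 4, 5, 5], [2, 3, 2, 2]),   # (required, current)
    "safety standards": ([4, 4, 5, 4], [3, 3, 3, 4]),
    "testing methods":  ([5, 5, 4, 5], [4, 4, 3, 4]),
}
ranked = sorted(elements, key=lambda e: borich_need(*elements[e]), reverse=True)
for name in ranked:
    print(f"{name}: {borich_need(*elements[name]):.2f}")  # higher = priority
```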
Iresha Rubasinghe, Dulani Meedeniya, Indika Perera
Vol. 17, No. 5, pp. 972-988, Oct. 2021
https://doi.org/10.3745/JIPS.04.0225
Keywords: Computer-Aided Software Engineering, Continuous Integration, Software Artefact Consistency Management
Abstract: At present, DevOps environments are becoming popular in software organizations because they offer better collaboration and software productivity than traditional software process models. Software artefacts in DevOps environments are subject to frequent changes at any phase of the software development life cycle, which forms a continuous integration/continuous delivery (CI/CD) pipeline. Software artefact traceability management is therefore challenging in DevOps environments: the continual artefact changes often leave the artefacts inconsistent. Existing research on software traceability shows limitations such as support for only a few types of artefacts, lack of automation, and inability to cope with continuous integration. This paper attempts to overcome those challenges by providing traceability support for heterogeneous artefacts in DevOps environments using a prototype named SAT-Analyser. The novel contribution of this work is a traceability process model consisting of artefact change detection, change impact analysis, and change propagation. Moreover, the tool provides multi-user accessibility and is integrated with a prominent DevOps tool stack to enable collaboration. A case study analysis has shown high accuracy in the SAT-Analyser-generated results, and positive feedback on its efficacy has been obtained from industry DevOps practitioners.
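Of the three steps, change impact analysis lends itself to a minimal sketch: if traceability links form a directed graph, the impact set of a change is everything reachable from the changed artefact. The tiny link graph below is illustrative, not SAT-Analyser's data model.

```python
# A graph-reachability sketch of change impact analysis over traceability links.
def impact_set(links, changed):
    """links: {artefact: [dependent artefacts]}; returns impacted artefacts."""
    impacted, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for dep in links.get(node, []):
            if dep not in impacted:
                impacted.add(dep)          # propagate along traceability links
                stack.append(dep)
    return impacted

links = {"req-1": ["design-2"], "design-2": ["code-7", "test-7"]}
print(impact_set(links, "req-1"))          # {'design-2', 'code-7', 'test-7'}
```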
Sunghyun Yu, Cheolmin Yeom, Yoojae Won
Vol. 17, No. 5, pp. 989-1003, Oct. 2021
https://doi.org/10.3745/JIPS.03.0167
Keywords: Blockchain, Data Sovereignty, EOS, My Data, proxy server, Search Engine, Self-Sovereign Model, Smart Contract
Abstract: With the recent increase in the types of services provided by Internet companies, the collection of various types of data has become a necessity. Data collectors corresponding to web services profit by collecting users' data indiscriminately and providing it to the associated services, while the data provider remains unaware of how the data are collected and used. Furthermore, the data collector of a web service consumes web resources by generating a large amount of web traffic, which can damage servers by causing service outages. In this study, we propose a website search engine built on a system that controls user information using blockchains and constructs its database from the recorded information. The system is divided into three parts: a collection section that uses a proxy, a management section that uses blockchains, and a search engine that uses the resulting database. This structure allows data sovereigns to manage their data more transparently. Search engines that use blockchains do not rely on internet bots, and instead use the data generated by user behavior. This avoids the traffic generated by internet bots and can thereby contribute to a better web ecosystem.
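The paper records collection events on an EOS blockchain via smart contracts; the sketch below is not that contract but a local hash-chained log that conveys the underlying idea, letting a data sovereign audit what the proxy collected.

```python
# An illustrative hash-chained collection log; a stand-in for the on-chain
# record, not the paper's EOS smart contract.
import hashlib, json, time

class CollectionLog:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "event": "genesis", "ts": 0}]

    def record(self, user, url):
        # Each entry commits to the previous one, making tampering detectable.
        prev = hashlib.sha256(
            json.dumps(self.chain[-1], sort_keys=True).encode()).hexdigest()
        self.chain.append({"prev": prev, "event": {"user": user, "url": url},
                           "ts": time.time()})

log = CollectionLog()
log.record("alice", "https://example.com/page")  # proxy logs each collection
print(len(log.chain), log.chain[-1]["prev"][:12])
```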
Yu Shen, Keyun Xiang, Xiaopeng Chen, Cheng Liu
Vol. 17, No. 5, pp. 1004-1019, Oct. 2021
https://doi.org/10.3745/JIPS.02.0166
Keywords: bilateral filter, Image fusion, Local Area Standard Variance, Nonsubsampled Contourlet Transform (NSCT)
Abstract: To solve the problems of low image contrast, fuzzy edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and a high-frequency component. High-frequency noise and edge information are mainly distributed in the high-frequency component, so the improved bilateral filtering method is applied to the high-frequency components of the two images, filtering the noise and extracting the image detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the algorithm preserves as much of their edge detail as possible; at the same time, the edge information is enhanced and the visual effect becomes clearer. For the low-frequency coefficients, a local-area standard variance fusion rule is adopted. Finally, the fused high- and low-frequency coefficients are combined by the inverse NSCT to obtain the fused image. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while the noise is filtered, producing a fused image with clear edges. The algorithm thus filters noise well and obtains clear fused images in noisy infrared and visible light image fusion.
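The low-frequency fusion rule named above (local-area standard variance) can be sketched as follows: at each pixel, the coefficient from whichever source has the larger local standard deviation is kept. The window size and the uniform-filter variance estimate are assumptions, and the NSCT decomposition itself is omitted.

```python
# A sketch of the local-area standard variance fusion rule on stand-in bands.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=5):
    mean = uniform_filter(img, size)
    return np.sqrt(np.maximum(uniform_filter(img * img, size) - mean**2, 0))

def fuse_lowfreq(low_ir, low_vis, size=5):
    # Keep the coefficient whose neighborhood varies more (more structure).
    mask = local_std(low_ir, size) >= local_std(low_vis, size)
    return np.where(mask, low_ir, low_vis)

low_ir = np.random.rand(64, 64)    # stand-ins for NSCT low-frequency bands
low_vis = np.random.rand(64, 64)
fused = fuse_lowfreq(low_ir, low_vis)
```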
Guang-Ho Cha
Vol. 17, No. 5, pp. 1020-1033, Oct. 2021
https://doi.org/10.3745/JIPS.02.0167
Keywords: content-based retrieval, Dimensionality Curse, Nearest Neighbor Query, Online Social Network, Kernel method, kernel principal component analysis, similarity search, social network service
Abstract: Nowadays, online and mobile social network services (SNS) are very popular and widespread in our society and daily lives, allowing users to instantly share, disseminate, and search information. In particular, SNS such as YouTube, Flickr, Facebook, and Amazon allow users to upload billions of images or videos and also provide a wealth of multimedia information to users. Information retrieval in multimedia-rich SNS is a very useful but challenging task. Content-based media retrieval (CBMR) is the process of obtaining the image or video objects relevant to a given query from a collection of information sources. However, CBMR suffers from the dimensionality curse due to the inherently high-dimensional features of media data. This paper investigates the effectiveness of the kernel trick in CBMR, specifically kernel principal component analysis (KPCA), for feature extraction and dimensionality reduction. KPCA is a nonlinear extension of linear principal component analysis (LPCA) that discovers nonlinear embeddings using the kernel trick. The fundamental idea of KPCA is to map the input data into a high-dimensional feature space through a nonlinear kernel function and then compute the principal components in that mapped space. Using the Gaussian kernel in our experiments, we compute the principal components of an image dataset in the transformed space and then use them as new feature dimensions for the dataset. Moreover, KPCA can be applied to many other domains besides CBMR, wherever LPCA has been used to extract features and a nonlinear extension would be effective. Our results from extensive experiments demonstrate that the potential of KPCA in CBMR is very encouraging compared with LPCA.
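The dimensionality-reduction step maps naturally onto scikit-learn's KernelPCA with an RBF (Gaussian) kernel, as in the sketch below; the random feature matrix, component count, and gamma value are placeholders rather than the paper's settings.

```python
# A minimal KPCA sketch with a Gaussian (RBF) kernel on placeholder features.
import numpy as np
from sklearn.decomposition import KernelPCA

features = np.random.rand(500, 1024)         # 500 images x 1,024 raw features
kpca = KernelPCA(n_components=32, kernel="rbf", gamma=1e-3)
reduced = kpca.fit_transform(features)       # nonlinear principal components
print(reduced.shape)                         # (500, 32): new feature dimensions
```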