1. Introduction
Artificial intelligence (AI)-based systems provide valuable services to users in various fields, such as face recognition for security, stock trading prediction in the finance industry, weather prediction to help public administration guard against environmental hazards, and accurate health diagnosis by doctors in the healthcare industry. Various algorithms, such as classification, regression, and deep learning, are used to increase the security, efficiency, and accuracy of existing services to users.
Existing systems face various challenges for which AI-based algorithms can help provide solutions, such as fake news spread on social media, securing the privacy of users when sharing data, inaccurate movie-based recommendation systems, and stock selection and volatility delay in stock prediction systems.
This paper presents various algorithms that address different issues in different systems and services. These solutions include a convolutional neural network (CNN) with a word-embedding model built specifically for detecting fake news in the Korean language, a combination of the k-clique method and data mining for more accurate personalized recommendation services, and a two-dimensional attention-based long short-term memory (2D-ALSTM) model that overcomes stock selection and volatility delay in stock prediction. Other solutions include a multi-level fusion processing algorithm that addresses the lack of a real-time database and a deep neural network model for visual dialogs that exploits visual features. Research related to the development and implementation of novel AI-based algorithms is also introduced in this paper.
The rest of this paper is organized as follows. In Section 2, we discuss the learning algorithms in AI systems and services and present 18 high-quality papers, focusing in particular on the following: the initialize-expand-merge (IEM) algorithm; the multi-view CNN for 3D models; the multi-level fusion processing algorithm; the hybrid autoregressive integrated moving average (ARIMA) and neural network model; and the novel fuzzy non-local means (NLM) algorithm. We present the conclusions of this work in Section 3.
2. Learning Algorithms in AI System and Services
In this section, solutions and architectures to mitigate the existing challenges are introduced as regular magazine-style papers. These solutions involve various future track topics, including the lack of deep learning-based models for fake news detection in the Korean language, finding data for the Supervisory Control and Data Acquisition (SCADA) system, user privacy concerns, and the unfavorable effects of stock volatility delay on existing AI models. In the subsequent paragraphs, this paper summarizes each topic in terms of the existing challenges and their solutions.
Liu et al. [1] studied community discovery in weighted networks based on the similarity of common neighbors and proposed the IEM algorithm, which has three stages: forming the initial communities, expanding them, and merging them. The first two stages build communities around influential common neighbors, with similarity defined from node degrees and the weighted information. The last stage merges these communities by maximizing the weighted modularity of the network.
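As a rough illustration of the merging criterion, the following minimal Python sketch greedily merges candidate communities as long as the weighted modularity of the partition increases; the toy graph, its edge weights, and the initial communities are assumptions made for illustration, not data or code from [1].

```python
# Minimal sketch of an IEM-style merge stage: greedily merge candidate
# communities while the weighted modularity of the partition increases.
# The graph, edge weights, and initial communities are illustrative assumptions.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.5), (2, 0, 1.0),
                           (3, 4, 2.5), (4, 5, 1.0), (2, 3, 0.2)])

# Assume the initialize/expand stages produced these candidate communities.
communities = [{0, 1}, {2}, {3, 4, 5}]

improved = True
while improved:
    improved = False
    best = modularity(G, communities, weight="weight")
    for i in range(len(communities)):
        for j in range(i + 1, len(communities)):
            merged = (communities[:i] + communities[i + 1:j] +
                      communities[j + 1:] + [communities[i] | communities[j]])
            q = modularity(G, merged, weight="weight")
            if q > best:          # keep the merge only if weighted modularity rises
                communities, best, improved = merged, q, True
                break
        if improved:
            break

print(communities)
```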
Tian et al. [2] presented a multi-level fusion processing algorithm for complex radar signals. Based on evidence theory, it mitigates the lack of a real-time database. It adopts a similarity model based on the parameter type and then calculates the similarity matrix. Dempster-Shafer (D-S) evidence theory is applied to the parameter similarities of each signal and to the trust value of each signal with respect to the target framework. Finally, the signals are fused and refined.
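The fusion step rests on Dempster's rule of combination; the toy sketch below combines two hypothetical mass functions and is not the paper's similarity-derived assignment.

```python
# Toy sketch of Dempster's rule of combination, the core of the D-S evidence
# step described above. The frame of discernment and mass values are made up
# for illustration only.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors' beliefs about the same emitter type (hypothetical masses).
m1 = {frozenset({"radar_A"}): 0.6, frozenset({"radar_A", "radar_B"}): 0.4}
m2 = {frozenset({"radar_A"}): 0.5, frozenset({"radar_B"}): 0.3,
      frozenset({"radar_A", "radar_B"}): 0.2}
print(dempster_combine(m1, m2))
```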
Zeng et al. [3] designed an ingenious view-pooling method, called learning-based multiple pooling fusion (LMPF), and applied it to the multi-view CNN (MVCNN) for 3D model classification and retrieval. The method generates multiple projected images of a 3D model for use as the inputs of the MVCNN model. Each convolutional layer of the MVCNN was initialized, proper values and related parameters were set, and optimal weights for the MVCNN model were finally obtained. The results show that LMPF performs better than traditional hand-crafted view-pooling methods.
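A minimal PyTorch sketch of the learnable view-pooling idea is given below: per-view features are fused with learned softmax weights rather than a fixed max or mean pooling. The feature dimensions and the absence of a CNN backbone are simplifications assumed for illustration, not the LMPF implementation itself.

```python
# Minimal sketch of a learnable view-pooling layer in the spirit of LMPF:
# per-view features are fused with learned softmax weights instead of a fixed
# max/mean pooling. Shapes and the missing CNN backbone are illustrative assumptions.
import torch
import torch.nn as nn

class LearnedViewPooling(nn.Module):
    def __init__(self, num_views: int):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_views))    # one weight per view

    def forward(self, view_feats):                 # (batch, num_views, feat_dim)
        w = torch.softmax(self.scores, dim=0)                 # learned fusion weights
        return (view_feats * w.view(1, -1, 1)).sum(dim=1)     # (batch, feat_dim)

views = torch.randn(4, 12, 512)                    # 12 rendered views per 3D model
pooled = LearnedViewPooling(num_views=12)(views)
print(pooled.shape)                                # torch.Size([4, 512])
```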
Guan et al. [4] proposed an improved fast camera calibration method for mobile terminals with limited computing resources. In this method, two-order radial and tangential distortion models are introduced to establish a camera model with nonlinear distortion terms, and the Levenberg-Marquardt (L-M) algorithm is used to optimize the parameter iteration. Experiments showed that the method improves the efficiency and precision of camera calibration, reducing the time consumed by parameter iteration from 0.220 seconds to 0.063 seconds and the average reprojection error from 0.25 pixels to 0.15 pixels.
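The sketch below illustrates, on synthetic data, how a two-order radial plus tangential distortion model can be refined with a Levenberg-Marquardt solver (here SciPy's least_squares with method="lm"); it is not the authors' calibration pipeline, and the point data and initial guesses are assumptions.

```python
# Hedged sketch: two-order radial (k1, k2) plus tangential (p1, p2) distortion
# model refined with a Levenberg-Marquardt step via SciPy on synthetic points.
import numpy as np
from scipy.optimize import least_squares

def distort(xy, k1, k2, p1, p2):
    """Apply radial and tangential distortion to normalized image points."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.column_stack([xd, yd])

rng = np.random.default_rng(0)
ideal = rng.uniform(-0.5, 0.5, size=(50, 2))                  # normalized points
observed = distort(ideal, 0.1, 0.01, 1e-3, 5e-4)              # "measured" points

def residuals(params):
    return (distort(ideal, *params) - observed).ravel()

fit = least_squares(residuals, x0=np.zeros(4), method="lm")   # Levenberg-Marquardt
print(fit.x)                                                   # recovered k1, k2, p1, p2
```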
Wang et al. [5] studied a novel video traffic flow detection method based on machine vision technology. This method has three parts: a motion evolution part that establishes the initial background image, a statistical scoring part that updates the background image in real time, and a background difference part that detects moving objects. Experiments showed that the method quickly and effectively detects various traffic flow parameters.
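A simplified sketch of these three parts is shown below: an initial background image, a score-based (exponential) background update, and a background-difference mask. The frame data, update rate, and threshold are illustrative assumptions.

```python
# Simplified background-difference sketch: maintain a running background
# estimate, update it with an exponential score, and flag pixels whose
# difference from the background exceeds a threshold.
import numpy as np

def detect_motion(frames, alpha=0.05, threshold=25):
    background = frames[0].astype(np.float64)        # initial background image
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        masks.append(diff > threshold)               # foreground (moving) pixels
        background = (1 - alpha) * background + alpha * frame   # real-time update
    return masks

rng = np.random.default_rng(1)
frames = [rng.integers(0, 50, (120, 160), dtype=np.uint8) for _ in range(10)]
frames[5][40:60, 70:90] += 100                       # synthetic "vehicle" blob
print(sum(m.sum() for m in detect_motion(frames)))
```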
Liu et al. [6] used a hybrid ARIMA and neural network model to forecast the Shanghai and Shenzhen stock markets. In this model, the monthly closing prices of the Shanghai composite index and the Shenzhen component index from January 2001 to December 2014 were first used. The optimal ARIMA model was then selected to forecast the Shanghai and Shenzhen stock markets by using the BIC criterion and the EVIEWS and SPSS software. Afterward, a neural network model was used to forecast the same markets using MATLAB. Finally, the optimal ARIMA model was compared with the neural network model, and the neural network model was found to improve the predictive ability for the Shanghai and Shenzhen stock markets.
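A hedged Python sketch of such a hybrid is shown below, with ARIMA capturing the linear component and a small neural network modeling the residuals; the synthetic monthly series, the (1, 1, 1) order, and the lag choice are assumptions, not the settings of [6].

```python
# Hedged sketch of a hybrid ARIMA + neural network forecast: ARIMA captures the
# linear structure and a small neural network models the residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = 2000 + np.cumsum(rng.normal(0, 30, 168))     # e.g., 14 years of monthly closes

arima = ARIMA(prices, order=(1, 1, 1)).fit()
residuals = prices - arima.predict(start=0, end=len(prices) - 1)

# Model residuals from their own lagged values with a small neural network.
lags = 3
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

hybrid_next = arima.forecast(1)[0] + nn.predict(residuals[-lags:].reshape(1, -1))[0]
print(hybrid_next)
```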
Rizal et al. [7] proposed the Hjorth descriptors measurement technique in the wavelet sub-band, using the discrete wavelet transform (DWT) and wavelet packet decomposition (WPD) for feature extraction in lung sound classification. The lung sound signal was decomposed using the two wavelet analyses, DWT and WPD. The highest accuracy obtained was 97.98% using DWT and 98.99% using WPD.
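The sketch below computes the standard Hjorth descriptors (activity, mobility, complexity) on DWT sub-bands obtained with PyWavelets; the wavelet choice, decomposition level, and synthetic signal are illustrative assumptions.

```python
# Minimal sketch: Hjorth descriptors computed per DWT sub-band with PyWavelets.
import numpy as np
import pywt

def hjorth(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 200 * np.linspace(0, 1, 8000)) + 0.3 * rng.normal(size=8000)

coeffs = pywt.wavedec(signal, "db8", level=5)      # DWT sub-bands
features = [hjorth(band) for band in coeffs]       # one descriptor triple per sub-band
for level, feat in enumerate(features):
    print(level, feat)
```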
Lv and Luo [8] proposed a novel fuzzy NLM algorithm for Gaussian denoising. The method adopts a novel patch similarity that measures the structural and luminance similarity between image patches with a fuzzy metric. A kernel function is used to calculate the weights, and image patches with low weights are filtered out to reduce the computational time.
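As a loose illustration only, the toy sketch below filters one pixel with NLM-style weights derived from a fuzzy-like similarity that combines patch-structure distance and luminance difference; the membership functions and parameters are invented for illustration and are not the metric proposed in [8].

```python
# Toy NLM-style filter for a single pixel, with weights from an assumed fuzzy
# similarity combining patch-structure distance and luminance difference.
import numpy as np

def fuzzy_nlm_pixel(image, r, c, patch=3, search=7, h=0.1):
    half, s = patch // 2, search // 2
    ref = image[r - half:r + half + 1, c - half:c + half + 1]
    num = den = 0.0
    for i in range(r - s, r + s + 1):
        for j in range(c - s, c + s + 1):
            cand = image[i - half:i + half + 1, j - half:j + half + 1]
            structure = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)   # structure membership
            luminance = np.exp(-abs(image[r, c] - image[i, j]) / h)    # luminance membership
            w = structure * luminance                                  # fuzzy conjunction (product)
            num += w * image[i, j]
            den += w
    return num / den

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + 0.1 * rng.normal(size=(32, 32)), 0, 1)
print(fuzzy_nlm_pixel(noisy, 16, 16))
```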
Hu and Feng [9] presented a new measurement method for canopy volume. Based on this method, a variable rate spraying system was designed and developed. Two treatments were established: a constant application rate of 300 L/ha was set as the control treatment for comparison with a variable rate application of 0.095 L/m³ of canopy. The results showed no significant differences between the two treatments in liquid distribution and in the capability to reach the inner parts of the crop canopies.
Cao et al. [10] addressed the problem of finding data for the SCADA system. The objective of their research is to solve issues associated with the lack of correlation between various dimensions of the data and the imperfect processing results of the traditional boxplot, the DBSCAN clustering algorithm, and the probability weight method. A comparative analysis of the three approaches based on their processing effect reveals that the DBSCAN clustering algorithm outperforms the other methods. The DBSCAN algorithm removes the need to rely on an engineer to determine the line parameters. Furthermore, the algorithm screens the power data better, resulting in a more accurate and reliable calculation of theoretical line loss in the primary grid.
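A hedged sketch of DBSCAN-based screening is shown below: points labeled as noise are treated as abnormal and removed before any line-loss calculation. The two-dimensional features and the eps/min_samples values are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: screen anomalous power measurements with DBSCAN; points
# labeled -1 (noise) are removed before further calculation.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
normal = rng.normal([100.0, 10.0], [3.0, 0.5], size=(200, 2))   # e.g., voltage, current
outliers = np.array([[140.0, 25.0], [60.0, 2.0], [130.0, 1.0]])
data = np.vstack([normal, outliers])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(StandardScaler().fit_transform(data))
cleaned = data[labels != -1]                 # keep only points inside dense clusters
print(len(data), "->", len(cleaned))
```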
To label important topics of scientific articles, Kim and Rhee [11] presented an ontology-based framework that first seeks impactful articles and then applies topic modeling and social network-based analysis. Abstracts of data mining papers published from 1995 to 2015 are gathered, and topic modeling is performed. A topic network based on common keywords found in the topics is constructed, and social network analysis is performed. Three topics are determined, and a UniDM ontology is generated to interpret them logically. The results show that recommender systems and k-nearest neighbor algorithms are closest to the other topics. The proposed framework helps in interpreting the results of big data analytics.
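The sketch below outlines such a pipeline with generic tools (scikit-learn LDA and NetworkX): topics are extracted from toy abstracts and a topic network is built from shared top keywords. The corpus, topic count, and keyword cutoff are assumptions for illustration only.

```python
# Sketch of the described pipeline: topic modeling over abstracts, then a topic
# network whose edges connect topics sharing top keywords.
import networkx as nx
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "recommender systems collaborative filtering user preferences",
    "nearest neighbor classification data mining accuracy",
    "association rule mining frequent patterns large databases",
    "recommender systems nearest neighbor hybrid data mining",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = vec.get_feature_names_out()
top = [set(terms[i] for i in comp.argsort()[-5:]) for comp in lda.components_]

G = nx.Graph()
G.add_nodes_from(f"topic{i}" for i in range(len(top)))
for a in range(len(top)):
    for b in range(a + 1, len(top)):
        shared = top[a] & top[b]
        if shared:                                 # edge weight = shared top keywords
            G.add_edge(f"topic{a}", f"topic{b}", weight=len(shared))
print(nx.degree_centrality(G))
```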
The wide spread of fake news on social network services is a major social issue. Lee et al. [12] proposed a deep learning-based architecture to identify fraudulent news in the Korean language. Existing deep learning-based models are built for the English language and do not apply directly to Korean-language fake news detection. The Korean language expresses a sentence written in English in fewer lines, resulting in feature scarcity. Using a CNN and a word-embedding model, the proposed architecture achieved good accuracy for body and context errors and low accuracy for headline discrepancies.
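A hedged PyTorch sketch of a word-embedding + CNN text classifier of this general kind is given below; the vocabulary size, filter widths, and dimensions are assumptions, and the sketch does not reproduce the authors' Korean-specific model.

```python
# Sketch of a word-embedding + CNN text classifier: embedded token IDs pass
# through parallel convolutions, max-over-time pooling, and a binary classifier.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, num_filters=64, widths=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList([nn.Conv1d(embed_dim, num_filters, w) for w in widths])
        self.fc = nn.Linear(num_filters * len(widths), 2)   # real vs. fake

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2) # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # logits

logits = TextCNN()(torch.randint(0, 20000, (8, 60)))
print(logits.shape)                               # torch.Size([8, 2])
```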
The growth of Internet of Things (IoT) devices has given rise to a new market, resulting in an increased number of machines for mobile crowdsensing and devices with diverse sensors. With the right amount of incentives, users are encouraged to share their data willingly. Kim et al. [13] suggested a privacy-preserving mechanism to mitigate the privacy concerns of users when sharing their data. The proposed privacy-preserving mechanism, OIMP, aims to reconcile the opposing objectives of incentive-based payment and preservation of user privacy. Sensing data are received via group signatures, which enables an on-demand payment system. Emulation-based simulation showed positive operational and system performance results as well as the feasibility of the proposed OIMP user privacy-preserving mechanism.
Kim [14] proposed a new and dynamic order-preserving encryption (OPEnc) mechanism based on order-revealing encryption (OREnc) with optimal client storage and round complexities. The proposed mechanism shows strong security in relation to OREnc. The paper presents a comparative study of efficiency and security between the proposed mechanism and existing strong OPEnc schemes. With it, a client-side storage-based, secure, non-interactive OPEnc tool can be built.
A recommendation system can provide more reliable results using a system model based on community detection in a social network. Vilakone et al. [15] introduced a personalized movie recommendation system that combines the k-clique method with data mining to deliver the most accurate recommendations to users. The results of the proposed method are more accurate than those of existing studies. The personal information of the users is organized into different communities using the k-clique method, and the system recommends movies to new users by applying a data mining method. The proposed method achieves its highest accuracy when k = 11.
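The sketch below groups users of a hypothetical similarity graph with the k-clique (clique percolation) method available in NetworkX, the step that would precede community-wise data mining; the toy graph and the small k are illustrative assumptions (the paper reports its best accuracy at k = 11).

```python
# Minimal sketch: group users into communities with the k-clique (clique
# percolation) method before a recommendation step would mine each community.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Hypothetical user-similarity graph: an edge means two users rate movies alike.
G = nx.Graph([("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u3", "u4"),
              ("u4", "u5"), ("u5", "u6"), ("u4", "u6"), ("u3", "u6")])

for community in k_clique_communities(G, k=3):    # the paper's best accuracy is at k = 11
    print(sorted(community))
```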
Cho and Kim [16] proposed a deep neural network model based on visual dialogs using an encoder-decoder structure. Existing studies do not use the attributes of the objects or persons present in an image as features. Characteristics such as dress, age, and gender are highlighted and used to generate answers by employing an attribute recognizer and a separate person identifier. Visual features are extracted from images in the VisDial v0.9 dataset using a convolutional neural network and used to generate answers. The proposed model shows performance improvements of 15.22%, 14.03%, 12.17%, 8.62%, and 7.54% over existing methods. Nonetheless, the proposed model performs notably worse than existing models in terms of R@10 and Mean.
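As a simplified illustration only, the sketch below fuses image features, a recognized attribute vector, and an encoded question in a minimal encoder-decoder; all dimensions and components are assumptions and do not reproduce the proposed VisDial model.

```python
# Minimal sketch of fusing visual features with recognized person attributes
# (e.g., dress, age, gender) in an encoder-decoder answer generator.
import torch
import torch.nn as nn

class AttributeAwareEncoderDecoder(nn.Module):
    def __init__(self, img_dim=512, attr_dim=16, q_vocab=1000, a_vocab=1000, hidden=256):
        super().__init__()
        self.q_embed = nn.Embedding(q_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.fuse = nn.Linear(img_dim + attr_dim + hidden, hidden)   # image + attributes + question
        self.decoder = nn.Linear(hidden, a_vocab)                    # answer-token logits

    def forward(self, img_feat, attr_feat, question_ids):
        _, q = self.encoder(self.q_embed(question_ids))              # question summary
        fused = torch.tanh(self.fuse(torch.cat([img_feat, attr_feat, q.squeeze(0)], dim=1)))
        return self.decoder(fused)

logits = AttributeAwareEncoderDecoder()(torch.randn(2, 512), torch.randn(2, 16),
                                        torch.randint(0, 1000, (2, 12)))
print(logits.shape)                                                  # torch.Size([2, 1000])
```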
Yu and Kim [17] proposed a model for stock index prediction that combines input attention and temporal attention mechanisms. The proposed 2D-ALSTM model is designed to overcome the stock selection and volatility delay issues, which have an unfavorable effect on existing models. The proposed 2D-ALSTM model is compared with two attention-based models, the multi-input LSTM and the dual-stage attention-based RNN, using actual stock data collected from the KOSPI100 dataset for stock index forecasting. The results show improved performance of the proposed model compared to the existing models.
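A loose PyTorch sketch of attention applied along two dimensions (input series and time) around an LSTM is shown below; the scoring functions and dimensions are assumptions, and the sketch is not the 2D-ALSTM architecture itself.

```python
# Sketch of two-dimensional attention around an LSTM: input attention reweights
# the stock features at each step and temporal attention reweights the hidden
# states over time before the prediction layer.
import torch
import torch.nn as nn

class TwoDAttentionLSTM(nn.Module):
    def __init__(self, num_stocks=100, hidden=64):
        super().__init__()
        self.input_attn = nn.Linear(num_stocks, num_stocks)    # scores per input series
        self.lstm = nn.LSTM(num_stocks, hidden, batch_first=True)
        self.temporal_attn = nn.Linear(hidden, 1)               # scores per time step
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                        # (batch, time, num_stocks)
        a = torch.softmax(self.input_attn(x), dim=-1)           # input attention
        h, _ = self.lstm(a * x)                                  # (batch, time, hidden)
        b = torch.softmax(self.temporal_attn(h), dim=1)          # temporal attention
        context = (b * h).sum(dim=1)                             # weighted hidden summary
        return self.out(context)                                 # next index value

pred = TwoDAttentionLSTM()(torch.randn(16, 20, 100))
print(pred.shape)                                # torch.Size([16, 1])
```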
AI has been extensively applied in multiple computer science fields, such as big data, IoT, mobile, and cloud computing, for resource supervision. The most significant use case of AI is managing quality of service, system availability, and service-level agreements. Lim et al. [18] reviewed and analyzed the requirements of cloud resource management using AI, examining fog computing, smart cloud computing, and edge cloud systems. The authors propose an intelligent resource management scheme that predicts a mobile device's stability using a hidden Markov model. The proposed scheme manages mobile resources by observing their statuses and, using AI, estimating the future stability of the device.
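As a toy illustration of Markov-style stability prediction, the sketch below propagates a belief over assumed stability states through an assumed transition matrix for a few monitoring intervals; it is not the paper's trained hidden Markov model.

```python
# Toy sketch: propagate a device's stability belief through an assumed Markov
# transition matrix to estimate its future stability distribution.
import numpy as np

states = ["stable", "degraded", "unstable"]
transition = np.array([[0.85, 0.10, 0.05],     # P(next state | current state)
                       [0.30, 0.55, 0.15],
                       [0.10, 0.40, 0.50]])

belief = np.array([0.0, 1.0, 0.0])             # device currently observed as "degraded"
for _ in range(3):                             # look three monitoring intervals ahead
    belief = belief @ transition

print(dict(zip(states, belief.round(3))))      # expected stability distribution
```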
3. Conclusion
This issue features 18 novel, enhanced peer-reviewed papers from different countries around the world. These papers present diverse paradigms that tackle a wide range of research areas such as AI, thermal load capacity, intelligent sensing security, HEVC, sentiment analysis, optimized resources, blockchain, digital image watermarking, human tracking techniques, steganography, software engineering, malware distribution networks, fingerprint matching, wireless sensor networks, the semantic web, and so on.