Vol. 15, No. 6, Dec. 2019
Jungho Kang, Jong Hyuk Park
Vol. 15, No. 6, pp. 1259-1264, Dec. 2019
Keywords: Big data, Cloud computing, Smart City
Abstract: Nowadays, cloud computing and big data analytics are central concerns for many industries seeking the potential benefits of building future smart cities. The integration of cloud computing and big data analytics is the main driver of their massive adoption in many organizations, as it avoids the potential complexities of on-premise big data systems. With these two technologies, manufacturing, healthcare, education, academia, and other sectors are developing rapidly, and they will offer various benefits as they expand their domains. In this issue, we present 18 high-quality articles in the field of cloud computing and big data analytics, accepted following a rigorous review process.
Xi-ai Yan, Wei-qi Shi, Hua Tian
Vol. 15, No. 6, pp. 1265-1276, Dec. 2019
Keywords: Bloom Filter, Cloud Storage, Data Deduplication, Privacy Protection
Abstract: Data deduplication is a common method to improve cloud storage efficiency and save network communication bandwidth, but it also brings a series of problems such as privacy disclosure and dictionary attacks. This paper proposes a secure deduplication scheme for cloud storage based on the Bloom filter, dynamically extending the standard Bloom filter. A public dynamic Bloom filter array (PDBFA) is constructed, which improves the efficiency of proof of ownership, enables fast detection of duplicate data blocks, and reduces the false positive rate of the system. In addition, during file encryption and upload, the convergent key is encrypted twice, which effectively prevents brute-force dictionary attacks. The experimental results show that the PDBFA scheme has low computational overhead and a low false positive rate.
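The duplicate-block detection step described above can be sketched with a basic Bloom filter. This is a simplified illustration only: the paper's PDBFA extends this idea with a dynamic array of filters and convergent-key protection, neither of which is shown here, and all names below are illustrative.

```python
import hashlib

class BloomFilter:
    """Simplified Bloom filter for duplicate data-block detection.

    Illustrative sketch; the paper's PDBFA builds a dynamic array of
    such filters and adds ownership-proof and key-protection steps.
    """

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, block: bytes):
        # Derive k bit positions by salting a cryptographic hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + block).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, block: bytes):
        for pos in self._positions(block):
            self.bits[pos] = True

    def might_contain(self, block: bytes) -> bool:
        # False means definitely new; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(block))
```

A block whose bits are all set is treated as a probable duplicate; the false positive rate shrinks as the bit array grows, which is the trade-off the PDBFA design tunes.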
Ying Yuan, Jun-Ho Huh
Vol. 15, No. 6, pp. 1277-1295, Dec. 2019
Keywords: Apparel Pattern, Application (App), Color Pattern, Customer Engaged Platform, Digital Printing, Merge Digital Apparel Pattern
Abstract: With its technical development, digital printing is being universally introduced into the mass production of clothing factories. At the same time, many fashion platforms have been built for customer participation using digital printing, and these platforms provide a tool for customers to make designs. However, in the production stage there is no adequate solution for automatically converting a customer's design into a print-ready file, other than designating a square area for the pattern designed by the customer. That is, if 30 different designs come in from customers for one shirt, designers must reproduce each design on the clothing pattern in the same location and at the same angle, and this work requires a great deal of manpower. Therefore, it is necessary to develop a technology that lets the customer make the design and, at the same time, reflects it in the clothing pattern. Such a technology is defined here in relation to the existing digitally printed clothing pattern. This study yields a clothing pattern for digital printing that reflects a customer's design in real time by matching the diagram area where a customer designs on a given clothing model with the area where a standard pattern reflects the customer's actual design information. Designers can substitute a simple area-matching operation for the complex mapping operation of programmers. As there is no limit to clothing designs, the varied fashion creations of designers and the diverse customizing demands of customers can be satisfied at low cost and high efficiency. This is not restricted to T-shirts or eco-bags but can be applied to all woven wear, including men's, women's, and children's clothing, except knitwear.
Xin Feng, Kaiqun Hu
Vol. 15, No. 6, pp. 1296-1305, Dec. 2019
Keywords: Image Fusion, Guided Filter, Phase Consistency, Variational Multiscale Decomposition
Abstract: To solve the problems of poor noise suppression and frequent loss of edge contours and detailed information in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. Firstly, the source images are separately processed through variational multiscale decomposition to obtain texture components and structural components. A guided filter is used to fuse the texture components of the images. For the structural components, a method is proposed that measures the fusion weights with comprehensive information on phase consistency, sharpness, and brightness. Finally, the fused texture and structural components are combined to obtain the final fused image. The experimental results show that the proposed method displays very good noise robustness and achieves better fusion quality.
Girija Attigeri, Manohara Pai M. M, Radhika M. Pai
Vol. 15, No. 6, pp. 1306-1325, Dec. 2019
Keywords: Classification, Correlation, Feature Subset Selection, Financial Big Data, Logistic Regression, Submodular Optimization, Support Vector Machine
Abstract: As the world moves towards digitization, data is generated from various sources at a faster rate. It is becoming enormous and is termed big data. The financial sector is one domain that needs to leverage this big data to identify financial risks, fraudulent activities, and so on. The design of predictive models for such financial big data is imperative for maintaining the health of a country's economy. Financial data has many features such as transaction history, repayment data, purchase data, investment data, and so on. The main problem in predictive modeling is finding the right subset of representative features from which the predictive model can be constructed for a particular task. This paper proposes a correlation-based method using submodular optimization for selecting the optimal number of features, thereby reducing the dimensions of the data for faster and better prediction. The central proposition is that the optimal feature subset should contain features that correlate highly with the class label but not with each other. Experiments are conducted to understand the effect of the various subsets on different classification algorithms for loan data. The IBM Bluemix big data platform is used for experimentation along with the Spark notebook. The results indicate that the proposed approach achieves considerable accuracy with optimal subsets in significantly less execution time. The algorithm is also compared with existing feature selection and extraction algorithms.
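The central proposition above (reward correlation with the class label, penalize correlation among chosen features) can be sketched as a greedy selection rule. This is a simplified stand-in for the paper's submodular optimization, with illustrative function names; the greedy rule only approximates a submodular objective.

```python
import math

def pearson(x, y):
    # Plain Pearson correlation of two equal-length value lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(features, label, k):
    """Greedy correlation-based selection: relevance to the label
    minus redundancy with already-selected features.

    A hedged sketch of the idea in the abstract, not the authors'
    exact submodular formulation.
    """
    remaining = dict(features)          # name -> column of values
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            relevance = abs(pearson(remaining[name], label))
            redundancy = max((abs(pearson(remaining[name], features[s]))
                              for s in selected), default=0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected
```

For example, given a feature that duplicates an already-selected one, its redundancy cancels its relevance, so a weaker but non-redundant feature is preferred.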
Ronggen Yang, Lejun Gong
Vol. 15, No. 6, pp. 1326-1334, Dec. 2019
Keywords: Autism, Biological Molecular, Conditional Random Fields, Knowledge Base
Abstract: A knowledge base is a repository stored in a computer system that provides useful information or appropriate solutions for a specific area. A knowledge base associated with autism is the complex, multidimensional information set related to the disease, its pathogenic factors, and its therapy. This paper focuses on the knowledge of biological molecular information extracted from massive biomedical texts with the aid of widely used machine learning methods. Six classes of biological molecular information (protein, DNA, RNA, cell line, cell component, and cell type) are considered, and a probabilistic statistical method, conditional random fields (CRFs), is utilized to discover this knowledge. The knowledge base can help biologists with etiological analysis and pharmacists with drug development, and can answer at least four questions in a question-answering (QA) system: which proteins are most related to autism, which DNAs play an important role in the development of autism, which cell types are correlated with autism, and which cell components participate in the processes underlying autism. The work can be accessed at http://220.127.116.11/bioinfo/index.jsp.
Mihui Kim, Junhyeok Yun
Vol. 15, No. 6, pp. 1335-1349, Dec. 2019
Keywords: Crowdsensing, Regression Model, Saturation Prediction, Smart Parking System
Abstract: Crowdsensing technologies can improve the efficiency of smart parking systems compared with present sensor-based smart parking systems because of their low installation cost and freedom from the restrictions of sensor installation. A large amount of sensing data is necessary to predict parking lot saturation in real time, but in the real world it is hard to gather the required amount of sensing data. In this paper, we model saturation prediction by combining a time-based prediction model and a sensing data-based prediction model. The time-based model predicts saturation in terms of parking lot location and time. The sensing data-based model predicts the degree of saturation of the parking lot with high accuracy based on the degree of saturation predicted by the first model, the saturation information in the sensing data, and the number of parking spaces in the sensing data. We perform prediction model learning with real sensing data gathered from a specific parking lot. We also evaluate the performance of the predictive model and show its efficiency and feasibility.
Li Gong, Zhonghui Wang, Yaxian Li, Chunling Jin, Jing Wang
Vol. 15, No. 6, pp. 1350-1364, Dec. 2019
Keywords: Damage Mechanism, Drift Ice, Impact
Abstract: Ice damage occurs frequently in the cold and dry regions of western China during the winter ice period and the spring thaw period. Under drift ice conditions, differing extrusion or impact forces readily form and damage the tunnel lining, causing project failure. A failed project cannot meet its original planning and construction goals, giving rise to water allocation pressure that affects diversion irrigation and farming production in spring. This study conducts a theoretical study of the contact-impact algorithm for drift ice crashing into a diversion tunnel, based on the symmetric penalty function in finite element theory. ANSYS/LS-DYNA is adopted as the platform to establish the tunnel model and the drift ice model; LS-DYNA SOLVER is used as the solver and LS-PrePost is used for post-processing, analyzing the degree of damage drift ice inflicts on the tunnel. A physical model is constructed in the experiment to verify and reveal the impact damage mechanism of drift ice on the diversion tunnel. The software simulation results and the experimental results show that the tunnel lining surface deforms and fails to varying degrees when drift ice of different velocities, plan sizes, and thicknesses crashes into it. The research also shows that drift ice impact forces damage the tunnel lining in the thawing period in cold and dry regions. After long periods of water scouring, the tunnel lining surfaces break and fall off, compromising the strength and stability of the structure.
Jinjuan Wu, Zhengtao Yu, Shulong Liu, Yafei Zhang, Shengxiang Gao
Vol. 15, No. 6, pp. 1365-1377, Dec. 2019
Keywords: Bilingual News, Chinese-Vietnamese, Sentence Similarity, Summarizing the Difference, Undirected Graph
Abstract: Summarizing the differences in Chinese-Vietnamese bilingual news plays an important supporting role in the comparative analysis of news views between China and Vietnam. Aiming at the cross-language problems in analyzing the differences between Chinese and Vietnamese bilingual news, we propose a new method of summarizing the differences based on an undirected graph model. The method extracts elements to represent the sentences and builds a bridge between the languages based on Wikipedia's multilingual concept description pages. Firstly, we calculate the similarity between Chinese and Vietnamese news sentences and filter the bilingual sentences accordingly. Then we use the filtered sentences as nodes and the similarity scores as edge weights to construct an undirected graph model. Finally, combining the random walk algorithm, the weight of each node is calculated from the weights of its edges, and the sentences with the highest weights are extracted as the difference summary. The experimental results show that our proposed approach achieves the highest score of 0.1837 on the annotated test set, outperforming state-of-the-art summarization models.
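The graph-based ranking step described above can be sketched as a TextRank-style random walk over the weighted sentence graph. This is an illustrative sketch with assumed damping and iteration parameters, not the authors' exact formulation.

```python
def rank_sentences(similarity, damping=0.85, iters=50):
    """Score sentences by a random walk over a weighted undirected
    graph. `similarity[i][j]` holds the edge weight between sentences
    i and j; each node's score is redistributed along its edges in
    proportion to their weights (PageRank with weighted edges).
    """
    n = len(similarity)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if j == i or similarity[j][i] == 0:
                    continue
                # Total outgoing weight of neighbour j.
                out = sum(similarity[j][k] for k in range(n) if k != j)
                if out:
                    rank += similarity[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores
```

Sentences connected to many similar sentences accumulate weight, so taking the highest-scoring nodes yields the candidate summary sentences.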
Heechan Kim, Soowon Lee
Vol. 15, No. 6, pp. 1378-1391, Dec. 2019
Keywords: Document Summarization, General Context, Natural Language Processing, Sequence-to-Sequence Model
Abstract: In recent years, automatic document summarization has been widely studied in the field of natural language processing, thanks to remarkable developments in deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of a document using the weighted hidden states of each input word. Because the weights change at each decoding step, they reflect only the local context of the document, making it difficult to generate a summary that reflects its overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of each decoding step. Experimental results using the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
Danyang Cao, Yanhong Ma, Lina Duan
Vol. 15, No. 6, pp. 1392-1405, Dec. 2019
Keywords: Abnormal Pattern, Cathode Voltage, k-Nearest Neighbor, Sliding Window
Abstract: The cathode voltage of an aluminum electrolytic cell is relatively stable under normal conditions and fluctuates greatly when an anomaly occurs. To detect the abnormal range of the cathode voltage, an anomaly detection algorithm based on a sliding window is proposed. The algorithm combines a piecewise linear representation of the time series with a k-nearest neighbor local anomaly detection algorithm, and is more efficient than direct detection on the original sequence. The algorithm first segments the cathode voltage time series, then calculates the length, slope, and mean of each line segment pattern and maps them into a set of spatial objects. The local anomaly detection algorithm then detects abnormal patterns according to the local anomaly factor and the pattern length. The experimental results show that the algorithm can effectively detect the abnormal range of the cathode voltage.
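The pipeline above (segment the series, map segments to pattern vectors, score them for local anomaly) can be sketched as follows. Fixed-length windows and a mean k-nearest-neighbour distance are simplifying assumptions standing in for the paper's segmentation and local-outlier-factor step.

```python
def segment_features(series, window):
    """Split a series into fixed-length windows and map each to a
    (start, length, slope, mean) pattern, a simplified piecewise
    linear representation."""
    patterns = []
    for start in range(0, len(series) - window + 1, window):
        seg = list(series[start:start + window])
        n = len(seg)
        mean = sum(seg) / n
        xs = range(n)
        mx = (n - 1) / 2
        denom = sum((x - mx) ** 2 for x in xs)
        # Least-squares slope of the segment against its index.
        slope = sum((x - mx) * (y - mean) for x, y in zip(xs, seg)) / denom
        patterns.append((start, n, slope, mean))
    return patterns

def knn_outlier_scores(patterns, k=2):
    """Score each pattern by its mean distance to the k nearest
    neighbours in (slope, mean) space; high scores flag abnormal
    segments. A stand-in for the local anomaly factor."""
    points = [(p[2], p[3]) for p in patterns]
    scores = []
    for i, (s, m) in enumerate(points):
        dists = sorted(((s - s2) ** 2 + (m - m2) ** 2) ** 0.5
                       for j, (s2, m2) in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores
```

A voltage spike inside one window shifts that window's slope and mean far from the stable windows, so its score dominates and its `start` index recovers the abnormal range.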
Dennis Agyemanh Nana Gookyi, Kwangki Ryoo
Vol. 15, No. 6, pp. 1406-1421, Dec. 2019
Keywords: Hardware Resources, IoT, Low-Cost Hardware Devices, RISC-V, SoC, Synthesizable Processors
Abstract: The Internet of Things (IoT) has been deployed in almost every facet of our day-to-day activities. This is possible because sensing and data collection devices have been given computing and communication capabilities. These devices implement System-on-Chips (SoCs) that incorporate many functionalities, yet they are severely constrained in terms of memory capacity, hardware area, and power consumption. With the increase in the functionality of sensing devices, there is a need for low-cost synthesizable processors to handle control, interfacing, and error processing. The first step in selecting a synthesizable processor core for low-cost devices is to examine its hardware resource utilization to ensure that it fulfills the requirements of the device. This paper analyzes the hardware resource usage of ten synthesizable processors that implement the Reduced Instruction Set Computer Five (RISC-V) Instruction Set Architecture (ISA). All ten processors are synthesized using Vivado v2018.02. The maximum frequency, area, and power reports are extracted, and a comparison is made to determine which processor is ideal for low-cost hardware devices.
Lubang Wang, Yue Guo
Vol. 15, No. 6, pp. 1422-1437, Dec. 2019
Keywords: Evolution Equation, Rumor Spreading, Social Networking, WeChat Social Circle
Abstract: With the rapid development of the Internet and the mobile Internet, network-based social communication has become a way of life for many people. WeChat is an online social platform with about one billion users; it is therefore meaningful to study the spreading and evolution mechanism of rumors in the WeChat social circle. A rumor is injected into the WeChat social circle by certain individuals, and communication and evolution occur among the nodes within the circle; after rumor-refuting information is injected into the circle, the densities of four types of nodes, the susceptible, the latent, the infective, and the recovered, change, which drives the evolution of the WeChat social circle system. In this study, the evolution characteristics of the four node types are analyzed through the construction of an evolution equation. The evolution process of rumor injection and refutation injection is simulated on the structure of a virtual social network, and the evolution laws of the four states are depicted in figures. The significant results from this study suggest that the spreading and evolution of rumors are closely related to node degrees in the WeChat social circle.
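The four-compartment evolution described above can be illustrated with a minimal discrete-time simulation. The compartment structure follows the abstract (susceptible, latent, infective, recovered), but the transition rates and their values are illustrative assumptions, not the authors' fitted evolution equations.

```python
def simulate_rumor(steps=200, dt=0.1,
                   contact=0.5, activate=0.3, recover=0.2):
    """Discrete-time simulation of a four-compartment rumor model:
    susceptible (S) nodes hear the rumor from infective (I) nodes and
    become latent (L); latent nodes turn infective; infective nodes
    recover (R). All rates are assumed for illustration.
    """
    S, L, I, R = 0.99, 0.0, 0.01, 0.0
    history = [(S, L, I, R)]
    for _ in range(steps):
        new_latent = contact * S * I * dt      # S meets I
        new_infective = activate * L * dt      # L starts spreading
        new_recovered = recover * I * dt       # I stops spreading
        S -= new_latent
        L += new_latent - new_infective
        I += new_infective - new_recovered
        R += new_recovered
        history.append((S, L, I, R))
    return history
```

Because every outflow from one compartment is an inflow to another, the four densities always sum to one, mirroring the conservation built into compartment-style evolution equations.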
Rupali Shinde, Min Choi, Su-Hyun Lee
Vol. 15, No. 6, pp. 1438-1448, Dec. 2019
Keywords: Debugger, Reverse Engineering, VED
Abstract: We present a new technique, called VED (very effective debugging), for detecting and correcting division-by-zero errors in all types of .NET applications. We use applications written in C# because C# applications are distributed through the Internet and their executable format is used extensively. A tool called Immunity Debugger is used to reverse engineer executable code to obtain binaries of the source code. With this technique, we demonstrate integer division-by-zero errors, the location of the assembly language code causing the error, and error recovery performed according to user preference. This technique can be extended to work for other programming languages in addition to C#, and VED can work on different platforms such as Linux. The technique is simple to implement and economical because all the software used here is open source. Our aims are to simplify the maintenance process and to reduce the cost of the software development life cycle.
Songze Tang, Xuhuan Zhou, Nan Zhou, Le Sun, Jin Wang
Vol. 15, No. 6, pp. 1449-1461, Dec. 2019
Keywords: Face Sketch Synthesis, Local Similarity, Nonlocal Similarity, Patch Representation
Abstract: Face sketch synthesis plays an important role in public security and digital entertainment. In this paper, we present a novel face sketch synthesis method using local similarity and nonlocal similarity regularization terms. The local similarity term overcomes the technological bottlenecks of the patch representation scheme in traditional learning-based methods: it improves the quality of synthesized sketches by penalizing dissimilar training patches, which thus receive very small weights or are discarded. In addition, taking the redundancy of image patches into account, a global nonlocal similarity regularization is employed to restrain noise generation and preserve primitive facial features during synthesis, yielding more robust results. Extensive experiments on public databases validate the generality, effectiveness, and robustness of the proposed algorithm.
Kiho Choi, Seongseop Kim, Daejin Park, Jeonghun Cho
Vol. 15, No. 6, pp. 1462-1471, Dec. 2019
Keywords: Binary Translation, Dynamic Testing, Software Monitoring
Abstract: Real-time embedded systems have become pervasive in general industry. They have also begun to be applied in domains such as avionics, automotive, aerospace, healthcare, and the industrial Internet. However, system failure in such domains can have catastrophic consequences, so runtime software testing with very high accuracy is required. Traditional runtime software testing based on handwork is inefficient and time consuming; hence, runtime test automation methodologies are in demand. In this paper, we introduce a software testing system that translates real-time software into monitorable real-time software, meaning software that provides monitoring information at runtime. The monitoring targets are the time constraints of the input real-time software. We anticipate that our system lessens the burden of runtime software testing.
Study on Net Assessment of Trustworthy Evidence in Teleoperation System for Interplanetary Transportation
Jinjie Wen, Zhengxu Zhao, Qian Zhong
Vol. 15, No. 6, pp. 1472-1488, Dec. 2019
Keywords: Formal Method, Interplanetary Transportation, Net Assessment, Teleoperation System, Trustworthy Evidence
Abstract: Critical elements of China's lunar exploration program are that the lunar rover travels over an undetermined surrounding environment and conducts scientific exploration under ground control via a teleoperation system. Such an interplanetary transportation mission teleoperation system belongs to the ground application system of a deep space mission, performing terrain reconstruction, visual positioning, path planning, and rover motion control from received telemetry data. It plays a vital role in the whole lunar exploration operation, and its so-called trustworthy evidence must be assessed before and during its implementation. Taking ISO standards and China's national military standards as the trustworthy evidence source, the net assessment model and net assessment method of the teleoperation system are established in this paper. A multi-dimensional net assessment model covering the software life cycle is defined by extracting trustworthy evidence from the evidence source. Qualitative decisions are converted to quantitative weights through the net assessment method (NAM), which combines the fuzzy analytic hierarchy process (FAHP) and the entropy weight method (EWM) to determine the weights of the evidence elements in the net assessment model. The paper employs the teleoperation system for interplanetary transportation as a case study, and the experimental results show the validity and rationality of the net assessment model and method. In the final part of the paper, the untrustworthy elements of the teleoperation system are discovered and an improvement scheme is established upon the "net result". The work completed in this paper has been applied successfully in the development of the teleoperation systems of China's Chang'e-3 (CE-3) "Jade Rabbit-1" and Chang'e-4 (CE-4) "Jade Rabbit-2" rovers. It will also be implemented in China's Chang'e-5 (CE-5) mission in 2019 and promoted in the Mars exploration mission in 2020. It is therefore valuable to the development process improvement of aerospace information systems.
Wei Xu, Daoli Yang
Vol. 15, No. 6, pp. 1489-1502, Dec. 2019
Keywords: Business Failure Prediction, Combination Method, Different Sample Sizes, Soft Set, uni-int Decision Making Method
Abstract: This work introduces a novel unweighted combination method (UCSS) for business failure prediction (BFP). Considering the features of BFP in the age of big data, UCSS integrates quantitative and qualitative analysis by utilizing soft set theory (SS). We adopt the conventional expert system (ES) as the basic qualitative classifier, and the logistic regression model (LR) and the support vector machine (SVM) as basic quantitative classifiers. Unlike other traditional combination methods, we employ soft set theory to integrate the results of each basic classifier without weighting. In this way, UCSS inherits the advantages of ES, LR, SVM, and SS. To verify the performance of UCSS, it is applied to real datasets. We adopt ES, LR, SVM, combination models utilizing the equal weight approach (CMEW), the neural network algorithm (CMNN), rough set and D-S evidence theory (CMRD), and the receiver operating characteristic curve (ROC) and SS (CFBSS) as benchmarks. The superior performance of UCSS is verified by the empirical experiments.
Xinpan Yuan, Songlin Wang, Lanjun Wan, Chengyuan Zhang
Vol. 15, No. 6, pp. 1503-1516, Dec. 2019
Keywords: Long Sentence Similarity, Similar Element, System Similarity, WMD, Word2vector
Abstract: In this paper, to improve the accuracy of long sentence similarity calculation, we propose a sentence similarity calculation method based on a system similarity function. The algorithm uses word2vector representations as the system elements to calculate sentence similarity. The higher accuracy of our algorithm derives from two characteristics: one is the negative effect of the penalty item, and the other is that the sentence similarity function (SSF) based on word2vector similar elements does not satisfy the exchange rule, i.e., it is asymmetric. In later studies, we found that the time complexity of our algorithm depends on the process of calculating similar elements, so we build an index of potentially similar elements during the word vector training process. Finally, the experimental results show that our algorithm has higher accuracy than the word mover's distance (WMD) and the lowest query time of the three SSF calculation methods.
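The two characteristics named above (a penalty item for unmatched words, and non-commutativity) can be illustrated with a toy similarity function over word vectors. This is a hedged sketch of the idea only; the paper's exact SSF differs, and the vectors, penalty value, and function names below are assumptions.

```python
def cosine(u, v):
    # Cosine similarity of two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def sentence_similarity(sent_a, sent_b, vectors, penalty=0.1):
    """Toy sentence similarity: each word of sent_a is matched to its
    most similar word in sent_b via word vectors; words with no
    positive match incur a penalty. Swapping the arguments can change
    the score, illustrating the non-commutativity noted above.
    """
    score = 0.0
    for w in sent_a:
        best = max((cosine(vectors[w], vectors[u]) for u in sent_b
                    if u in vectors), default=0.0) if w in vectors else 0.0
        score += best if best > 0 else -penalty
    return score / len(sent_a)
```

Comparing a long sentence against a short one leaves unmatched words paying the penalty, while the reverse direction may match everything, which is why such a function is not symmetric.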