The recent advent of affordable, powerful 3D scanning devices capable of capturing high-resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques, which render the raw point cloud data produced by many of these devices into a form usable in a variety of application domains. This paper therefore provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains basic surface reconstruction concepts, describes the factors used to evaluate surface reconstruction methods, highlights commonly encountered issues in dealing with raw 3D point cloud data, and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results, demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods that have been evaluated on the same benchmark data sets, along with a discussion of the overall trends reported in the literature. The objective is to survey the state of the art in surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
To considerably reduce the ambiguity rate, we propose in this article a disambiguation approach based on the selection of the correct diacritics at different analysis levels. This hybrid approach combines a linguistic approach with a multi-criteria decision approach and can be considered an alternative for solving the morpho-lexical ambiguity problem regardless of the diacritics rate of the processed text. For its evaluation, we applied the disambiguation to the online Alkhalil morphological analyzer (the proposed approach can be used with any morphological analyzer of the Arabic language) and obtained encouraging results, with an F-measure of more than 80%.
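The F-measure reported above combines the analyzer's precision and recall; as a quick reference, the standard formula can be computed as follows (a generic definition, not code from the paper):

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta=1.0 gives the balanced F1 score commonly reported
    in disambiguation evaluations."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, a system with 85% precision and 78% recall would score roughly 0.813, just above the 80% threshold the abstract mentions.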
The Joint Bayesian (JB) method has been used in most state-of-the-art methods for face verification. However, since the publication of the original JB method in 2012, no improved version has been proposed; most studies on face verification have instead focused on extracting better features to improve performance on the challenging Labeled Faces in the Wild (LFW) database. In this paper, we propose an improved version of the JB method, called the two-dimensional Joint Bayesian (2D-JB) method. It is very simple yet effective in both the training and test phases. We separate the two symmetric terms from the three terms of the JB log-likelihood ratio function and, using these two terms as a two-dimensional vector, learn a decision line to classify same and not-same pairs. Our experimental results show that the proposed 2D-JB method outperforms the original JB method by more than 1% on the LFW database.
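The separation described above can be sketched as follows. This is a minimal illustration with toy identity matrices standing in for the learned JB matrices A and G (in the actual method these are estimated by EM from face features, and the decision line w, b is learned from training pairs):

```python
import numpy as np

# Toy stand-ins for the JB matrices A and G learned during training.
A = np.eye(2)
G = np.eye(2)

def jb_2d_feature(x1, x2):
    """Split the JB log-likelihood ratio
        r = x1'Ax1 + x2'Ax2 - 2*x1'Gx2
    into its two symmetric terms (summed) and its cross term,
    yielding a 2D vector instead of a single scalar score."""
    sym = float(x1 @ A @ x1 + x2 @ A @ x2)
    cross = float(x1 @ G @ x2)
    return np.array([sym, cross])

def is_same(x1, x2, w=np.array([-1.0, 2.0]), b=0.0):
    """Classify a pair with a decision line w . feature + b > 0;
    w and b are hypothetical here and would be learned."""
    return float(jb_2d_feature(x1, x2) @ w + b) > 0
```

The original JB method thresholds the scalar r directly; treating the two components as a point in the plane lets the classifier pick any line, not just the fixed slope implied by r.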
The aim of this paper is to examine the effectiveness of combining three popular pattern recognition tools, namely the Active Appearance Model (AAM), the two-dimensional discrete cosine transform (2D-DCT), and Kernel Fisher Analysis (KFA), for face recognition across age variations. For this purpose, we first used the AAM to generate an AAM-based face representation; then, we applied the 2D-DCT to obtain the descriptor of the image; and finally, we used multiclass KFA for dimension reduction. Classification was performed with a K-nearest neighbor classifier based on Euclidean distance. Our experimental results on face images from the publicly available FG-NET face database showed that the proposed descriptor works satisfactorily for both face identification and verification across age progression.
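The 2D-DCT descriptor and nearest-neighbor steps of such a pipeline can be sketched as below. This is a simplified illustration assuming square image patches; the AAM representation and KFA reduction stages are omitted:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    D = np.array([[np.cos(np.pi * (2*j + 1) * k / (2*n)) for j in range(n)]
                  for k in range(n)]) * np.sqrt(2.0 / n)
    D[0, :] = np.sqrt(1.0 / n)
    return D

def dct2_descriptor(img, keep=2):
    """2D-DCT of a square patch; the low-frequency keep x keep block
    of coefficients serves as a compact descriptor."""
    D = dct_matrix(img.shape[0])
    coeffs = D @ img @ D.T
    return coeffs[:keep, :keep].ravel()

def nearest_neighbor(query, gallery):
    """1-NN classification by Euclidean distance between descriptors."""
    dists = [np.linalg.norm(query - g) for g in gallery]
    return int(np.argmin(dists))
```

Keeping only the low-frequency DCT coefficients discards fine texture detail, which is one reason DCT descriptors tolerate moderate appearance changes such as aging.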
In this paper, we propose a framework that incorporates landmarks into a segment-based Mandarin speech recognition system. In this method, landmarks provide boundary and phonetic class information that is used to direct the decoding process. To validate the method, two kinds of landmarks that can be reliably detected are used to direct the decoding of a segment model (SM) based Mandarin LVCSR (large vocabulary continuous speech recognition) system. Our experimental results show that about 30% of the decoding time can be saved without an obvious decrease in recognition accuracy, demonstrating the potential of our method.
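The way landmark boundaries can shrink a decoder's search space is sketched below. This is a schematic illustration under our own assumptions (frame-indexed segment hypotheses, a simple tolerance window), not the paper's actual decoder:

```python
def prune_segments(candidates, landmarks, tol=1):
    """Keep only segment hypotheses (start, end) whose boundaries fall
    within tol frames of a detected landmark; everything else is
    discarded before acoustic scoring, saving decoding time."""
    def near(t):
        return any(abs(t - lm) <= tol for lm in landmarks)
    return [(s, e) for (s, e) in candidates if near(s) and near(e)]
```

Because segment models score every hypothesized segment, cutting the candidate set directly reduces decoding time, which is consistent with the roughly 30% saving reported above.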
This research presents battery discharge rate models for the energy consumption of mobile phone batteries, based on machine learning and taking into account three usage patterns of the phone: the standby state, video playing, and web browsing. We present the experimental design methodology for data collection, preprocessing, model construction, and parameter selection. The data were collected on the HTC One X hardware platform, considering various setting factors such as Bluetooth, brightness, 3G, GPS, Wi-Fi, and Sync. The battery levels for each possible state vector were measured, and a battery prediction model was constructed using different regression functions on the collected data. The accuracy of the models constructed with the multi-layer perceptron (MLP) and the support vector machine (SVM) was compared across varying kernel functions and parameter settings. Prediction efficiency was measured by the mean absolute error (MAE) and the root mean squared error (RMSE). The experiments showed that the MLP with linear regression performs well overall, while the SVM with the polynomial kernel based on linear regression gives a low MAE and RMSE. As a result, we demonstrate how to apply the derived model to predict the remaining battery charge.
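The overall fit-and-score workflow can be sketched with a plain least-squares regression standing in for the MLP/SVM learners (the feature vectors and coefficients below are synthetic, invented purely for illustration):

```python
import numpy as np

# Hypothetical state vectors per measurement: [brightness, wifi_on, 3g_on]
X = np.array([[0.2, 1, 0],
              [0.5, 0, 1],
              [0.8, 1, 1],
              [0.1, 0, 0]], dtype=float)
# Synthetic discharge rates generated from a made-up linear rule.
y = X @ np.array([10.0, 2.0, 5.0]) + 1.0

# Fit a linear model with a bias term (stand-in for the paper's learners).
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w

# The two error measures used in the evaluation.
mae = np.mean(np.abs(pred - y))
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

MAE weights all errors equally, while RMSE penalizes large errors more heavily; reporting both, as the study does, guards against a model that is accurate on average but occasionally far off.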
This paper presents a constituent-based approach for aligning bilingual multiword expressions, such as noun phrases, by considering not only the relationship between source expressions and their target translation equivalents but also that between the expressions and the constituents of the target equivalents. We considered only the compositional preferences of multiword expressions, not their idiomatic usages, because our multiword identification method focuses on collocational or compositional preferences. In our experiments, the constituent-based approach performed much better than the general method for extracting bilingual multiword expressions. In future work, we will refine the scoring method of the constituent-based approach to achieve the best performance and will extend the target entries in the evaluation dictionaries by considering their synonyms.
This paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features, in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to distinctively represent the facial images. For this purpose, different local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on combinations of the local descriptors at feature level, using simple histogram concatenation, are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and the combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as a classifier to carry out the verification between impostors and clients. The proposed method was tested on the CASIA-3D face database, and the experimental results show that it achieves high verification performance.
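The basic LBP descriptor and the histogram representation used above can be sketched as follows (the classic 8-neighbour LBP only; the patch-based variants, BSIF, and LPQ work differently):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: each neighbour is thresholded against
    the centre pixel and the eight results form an 8-bit code."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= (neigh >= centre).astype(np.uint8) << np.uint8(bit)
    return out

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes; feature-level
    combination then simply concatenates several such histograms."""
    hist, _ = np.histogram(lbp_image(img), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

Concatenating histograms from complementary descriptors lengthens the feature vector, which is exactly why a dimensionality reduction stage (PCA, OLPP, or PCA+EFM) follows before the SVM.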
As interest in the Internet increases, related technologies are also progressing quickly. As smart devices become more widely used, interest is growing in how the future Internet can resolve the fundamental issues of transmission quality and security. The future Internet is being studied to overcome the limits of existing Internet structures and to reflect new requirements. In particular, research on providing more reliable communication to connect the Internet to various services is in demand. In this paper, we analyze the security threats caused by malicious activities in the future Internet and propose a human behavior analysis-based security service model for malware detection and intrusion prevention to provide more reliable communication. Our proposed service model provides highly reliable services by responding to security threats, detecting various malware intrusions, and performing protocol authentication based on human behavior.
Considering the diversity of video copy transforms, this paper proposes a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor. After the video is preprocessed, coarse copy detection is performed with an ordinal measure (OM) algorithm. If the matching result exceeds a specified threshold, fine copy detection is performed with the SURF descriptor, using a box filter over the integral video. To improve detection speed, the trace of the SURF descriptor's Hessian matrix is used for pre-matching, and the dimensionality of the traditional SURF feature vector is reduced for video matching. Our experimental results indicate that copy detection precision and recall improve greatly over traditional algorithms, that the proposed multi-feature algorithm has good robustness and discrimination accuracy, and that detection speed is also improved.
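The ordinal measure used for the coarse stage can be sketched as below: each frame is divided into blocks, and only the rank order of the block mean intensities is compared, which makes the measure robust to global brightness changes (a minimal version with a fixed grid, not the paper's exact configuration):

```python
import numpy as np

def ordinal_measure(frame, grid=2):
    """Partition the frame into grid x grid blocks and return the
    rank order of the block mean intensities."""
    h, w = frame.shape
    means = [frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))  # double argsort -> ranks

def om_distance(f1, f2, grid=2):
    """L1 distance between rank vectors; small values suggest a copy."""
    return int(np.abs(ordinal_measure(f1, grid)
                      - ordinal_measure(f2, grid)).sum())
```

Because a uniform brightness shift leaves the block ranks unchanged, a brightened copy still scores distance 0, which is what makes OM a cheap and forgiving first filter before the expensive SURF matching.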
The Virtual Local Area Network (VLAN) has long been used in campus and enterprise networks as the most popular network virtualization solution. Because of its benefits and advantages, network operators and administrators still use VLANs to construct their networks and have even extended them to manage networking in cloud computing systems. However, VLAN configuration is a complex, tedious, time-consuming, and error-prone process. Since Software Defined Networking (SDN) features centralized network management and network programmability, it is a promising solution to these challenges in VLAN management. In this paper, we first introduce a new architecture for campus and enterprise networks that leverages SDN and OpenFlow. Next, we design and implement an application for easily managing and flexibly troubleshooting VLANs in this architecture; it supports both static and dynamic VLAN configuration. In addition, we discuss the hybrid-mode operation, in which packet processing involves both the OpenFlow control plane and the traditional control plane. Using a real test-bed prototype, we illustrate how our system works and evaluate the network latency of dynamic VLAN operation.
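The essence of dynamic VLAN assignment in such an SDN application can be sketched as below. The policy table, MAC addresses, and rule layout are all hypothetical, and the flow entry is a plain dictionary in OpenFlow style rather than a call into any specific controller framework:

```python
# Hypothetical policy: which VLAN each known host belongs to.
POLICY = {"00:11:22:33:44:55": 10,
          "66:77:88:99:aa:bb": 20}

def vlan_flow_entry(src_mac, in_port, default_vlan=1):
    """Build an OpenFlow-style match/action rule (as a plain dict)
    that tags a host's ingress traffic with the VLAN looked up from
    the policy table, falling back to a default VLAN for unknown hosts."""
    vid = POLICY.get(src_mac, default_vlan)
    return {"match": {"in_port": in_port, "eth_src": src_mac},
            "actions": [{"type": "PUSH_VLAN"},
                        {"type": "SET_VLAN_VID", "vlan_vid": vid},
                        {"type": "OUTPUT", "port": "NORMAL"}]}
```

Centralizing this lookup in the controller is what removes the per-switch, per-port manual configuration that makes traditional VLAN management error-prone.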
Data hiding is a broad field that helps secure network communications. Data hiding researchers commonly seek to improve aspects such as capacity, stego file quality, or robustness. In this paper, we use an audio file as a cover and propose a reversible steganographic method that modifies the sample values using a modulus function so that the remainder of each value matches the secret bit to be embedded. In addition, we use a location map that records which sample values were modified, because reversible data hiding must exactly recover both the secret message and the original audio file from the stego file. The experimental results, measured by a correlation algorithm, show that this method retrieves exactly the same secret message and audio file. Moreover, it significantly improves capacity, since every sample value carries a secret bit, while the quality of the stego audio, measured by peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), the Pearson correlation coefficient (PCC), and the Similarity Index Modulation (SIM), remains relatively high.
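A minimal sketch of this embed/restore cycle, assuming integer samples, a modulus of 2, and a +1 adjustment for mismatching samples (the paper's exact adjustment rule may differ):

```python
def embed(samples, bits):
    """Modulus-function embedding: force sample % 2 to equal the secret
    bit; the location map records which samples were changed (+1) so
    the cover can be restored exactly. Assumes no overflow at the
    maximum sample value."""
    stego, loc_map = [], []
    for s, b in zip(samples, bits):
        if s % 2 == b:
            stego.append(s)
            loc_map.append(0)
        else:
            stego.append(s + 1)
            loc_map.append(1)
    return stego, loc_map

def extract(stego, loc_map):
    """Read the bits from the remainders, then undo the adjustments."""
    bits = [s % 2 for s in stego]
    samples = [s - m for s, m in zip(stego, loc_map)]
    return bits, samples
```

Since at most one sample in two needs adjusting, and by only one quantization step, the distortion stays small, which is consistent with the high PSNR/SNR figures reported.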
Cloud computing is a new style of computing in which dynamically scalable and reconfigurable resources are provided as a service over the Internet, and the MapReduce framework is currently its most dominant programming model. It is therefore necessary to protect the integrity of MapReduce data processing services. Malicious workers, which can be divided into collusive and non-collusive workers, try to generate bad results in order to attack the cloud computing system, so efficiently detecting them is very important, yet existing solutions are not effective enough at defeating such behavior. In this paper, we propose a security protection framework to detect malicious workers and ensure computation integrity in the map phase of MapReduce. Our simulation results show that the proposed framework efficiently detects both collusive and non-collusive workers and guarantees high computation accuracy.
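A common building block for this kind of integrity checking is task replication, sketched below under our own assumptions (it is a generic technique, not necessarily the paper's exact protocol): each map task is re-executed by a second worker, and disagreeing outputs flag the involved workers as suspicious.

```python
def detect_inconsistent(results):
    """Replication-based integrity check: results maps each task id to
    {worker_id: output}. If the replicas of a task disagree, every
    worker that executed it is flagged for further scrutiny."""
    suspicious = set()
    for task, outputs in results.items():
        if len(set(outputs.values())) > 1:
            suspicious.update(outputs.keys())
    return suspicious
```

Plain replication catches non-collusive workers, which fail independently; collusive workers that agree on the same bad output defeat it, which is why frameworks like the one above need additional measures such as verification by trusted workers.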