This paper presents a complete method for vehicle detection and tracking in a fixed-camera setting based on computer vision. Vehicle detection is performed using Scale Invariant Feature Transform (SIFT) feature matching. With SIFT feature detection and matching, the geometric relation between two images is estimated. The previous image is then aligned with the current image so that moving vehicles can be detected by analyzing the difference image of the two aligned images. Vehicle tracking is also performed using SIFT feature matching. To reduce time consumption while maintaining high tracking accuracy, each candidate vehicle detected in the current image is matched against the vehicle samples in the tracking sample set, which contains all of the vehicles detected in previous images. Most notably, the management of vehicle entries and exits is realized through SIFT feature matching with an efficient update mechanism for the tracking sample set. The method is proposed for highway traffic environments, where there are no non-automotive vehicles or pedestrians that would interfere with the results.
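A toy illustration of the detection step, assuming the previous frame has already been registered to the current one (the abstract performs this alignment via SIFT matching; the plain thresholding below and all names and values are illustrative simplifications, not the paper's method): pixels whose absolute intensity difference exceeds a threshold become candidate moving-vehicle pixels.

```python
# Minimal sketch of moving-object detection by frame differencing on
# two ALIGNED grayscale frames (2-D lists of 0-255 intensities).
# The alignment itself (SIFT matching + warping) is assumed done.

def difference_mask(prev_frame, curr_frame, threshold=30):
    """Return a binary mask (1 = changed pixel) for two aligned frames."""
    return [
        [1 if abs(p - c) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 90, 10], [10, 95, 10]]
mask = difference_mask(prev, curr)
# The bright blob in the middle column is flagged as a moving region.
```

In a real pipeline the mask would then be cleaned up (e.g. by connected-component analysis) before each blob is treated as a candidate vehicle.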
System Architecture Evolution (SAE) with Long Term Evolution (LTE) has been adopted as the key technology for next-generation mobile networks. To support mobility in LTE/SAE-based mobile networks, Proxy Mobile IPv6 (PMIP) is being considered, in which the Mobile Access Gateway (MAG) of PMIP is deployed at the Serving Gateway (S-GW) of LTE/SAE and the Local Mobility Anchor (LMA) of PMIP is deployed at the PDN Gateway (P-GW) of LTE/SAE. In the meantime, the Host Identity Protocol (HIP) and the Locator Identifier Separation Protocol (LISP) have recently been proposed based on the identifier-locator separation principle, and they can be used for mobility management over global-scale networks. In this paper, we discuss how to provide inter-domain mobility management over PMIP-based LTE/SAE networks by investigating three possible scenarios: Mobile IP with PMIP (denoted MIP-PMIP-LTE/SAE), HIP with PMIP (denoted HIP-PMIP-LTE/SAE), and LISP with PMIP (denoted LISP-PMIP-LTE/SAE). For performance analysis of the candidate inter-domain mobility management schemes, we analyzed the traffic overhead at a central agent and the total transmission delay required for control and data packet delivery. The numerical results show that HIP-PMIP-LTE/SAE and LISP-PMIP-LTE/SAE are preferred to MIP-PMIP-LTE/SAE in terms of traffic overhead, whereas LISP-PMIP-LTE/SAE is preferred to HIP-PMIP-LTE/SAE and MIP-PMIP-LTE/SAE from the viewpoint of total transmission delay.
Global value numbering (GVN) is a method for detecting equivalent expressions in programs. Most GVN algorithms concentrate on detecting equalities among variables and hence are limited in their ability to identify value-based redundancies. In this paper, we suggest improvements by which the efficient GVN algorithm of Gulwani and Necula (2007) can be made to detect the expression equivalences required for identifying value-based redundancies. The basic idea is to use an anticipability-based Join algorithm to compute more precise equivalence information at join points. We provide a proof of correctness of the improved algorithm and show that its running time is polynomial in the number of expressions in the program.
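To make the underlying idea concrete, here is a minimal sketch of classic local value numbering on straight-line code: structurally identical expressions over operands with equal value numbers receive the same number, exposing a redundancy. (The paper's algorithm additionally handles join points and anticipability; this toy version, with illustrative names, does not.)

```python
# Toy local value numbering: equal value numbers imply equal runtime values,
# so a repeated computation can be replaced by a copy of the earlier result.

class ValueNumbering:
    def __init__(self):
        self.table = {}    # (op, arg value numbers) or ('const', c) -> value number
        self.var_vn = {}   # variable name -> value number
        self.next_vn = 0

    def _lookup(self, key):
        if key not in self.table:
            self.table[key] = self.next_vn
            self.next_vn += 1
        return self.table[key]

    def assign(self, var, op, *args):
        """Process 'var = op(args)'; args are variable names or int constants."""
        vn_args = tuple(
            self.var_vn[a] if isinstance(a, str) else self._lookup(('const', a))
            for a in args
        )
        self.var_vn[var] = self._lookup((op, vn_args))
        return self.var_vn[var]

vn = ValueNumbering()
vn.assign('a', '+', 1, 2)
vn.assign('b', '+', 1, 2)  # same table key as 'a': b is redundant
# vn.var_vn['a'] == vn.var_vn['b'] reveals the value-based redundancy.
```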
Cloud computing is a distributed computing model that, despite its attractive features, still has drawbacks and faces difficulties; nevertheless, many new and emerging techniques take advantage of those features. In this paper, we explore security threats to and risk assessments for cloud computing, attack mitigation frameworks, and risk-based dynamic access control for cloud computing. Common security threats to cloud computing are explored, and these threats are addressed through acceptable measures via governance and effective risk management using a tailored security risk approach. Most existing Threat and Risk Assessment (TRA) schemes for cloud services use a converse-thinking approach to develop theoretical solutions that minimize the risk of security breaches at minimal cost. In our study, we propose an improved attack-defense tree mechanism, designated iADTree, for solving the TRA problem in cloud computing environments.
Due to the proliferation of data being exchanged and the increasing dependency on this data for critical decision-making, it has become imperative to ensure the trustworthiness of data at the receiving end in order to obtain reliable results. Data provenance, the derivation history of data, is a useful tool for evaluating data trustworthiness. Various frameworks have been proposed to evaluate the trustworthiness of data based on data provenance. In this paper, we briefly review the history of these frameworks and present an overview of some prominent state-of-the-art evaluation frameworks. Moreover, we provide a comparative analysis of two key frameworks by evaluating various aspects in an execution environment. Our analysis points to various open research issues and provides an understanding of the functionalities of the frameworks used to evaluate the trustworthiness of data.
In 2004, Yang et al. proposed a threshold proxy signature scheme that efficiently reduced the computational complexity of previous schemes. In 2009, Hu and Zhang presented some security leakages of Yang et al.'s scheme and proposed an improvement to eliminate the leakages that had been pointed out. In this paper, we point out that both Yang et al.'s and Hu and Zhang's schemes still have security weaknesses: they cannot resist warrant attacks, in which an adversary can forge valid proxy signatures by changing the warrant. We also propose two secure improvements for these schemes.
A primary task in wireless sensor networks (WSNs) is data collection, whose main objective is to gather sensor readings from sensor fields at predetermined sinks using routing protocols, without conducting network processing at intermediate nodes; many research studies have shown this approach to be inefficient when a static sink is used. Its major drawback is that sensor nodes near the data sink tend to dissipate more energy than those farther away, due to their role as relay nodes. Recently, novel WSN architectures based on mobile sinks and mobile relay nodes, which are able to move inside the region of a deployed WSN, have been developed; most research works on mobile WSNs exploit this mobility mainly to reduce and balance energy consumption and to enhance communication reliability among sensor nodes. Our main purpose in this paper is to propose a solution to the problem of deploying mobile data collectors to alleviate the high traffic load, and the resulting bottleneck in a sink's vicinity, caused by static approaches. To this end, we studied two key issues in WSN mobility: the impact of the mobile element (sink or relay nodes) and the impact of the mobility model on WSN performance, expressed in terms of energy efficiency and reliability. We conducted an extensive set of simulation experiments. The results reveal that the collection approach based on relay nodes and the stochastic mobility model perform better.
Spectrum sensing is an essential function that enables cognitive radio technology to discover spectral holes and access them opportunistically without causing harmful interference to licensed users. Spectrum sensing performed by a single node is highly affected by fading and shadowing; to overcome this, cooperative spectrum sensing was introduced. Recently, advances in multiple-antenna technology have given a new dimension to cognitive radio research. In this paper, we propose a multiple energy detector for cooperative spectrum sensing schemes based on evidence theory, along with a reporting mechanism for multiple energy detectors. We show that a multiple energy detector used in a cooperative spectrum sensing scheme based on evidence theory increases the reliability of the system, which ultimately improves spectrum sensing and reduces the reporting time. Our simulation results also show the probability of error for the proposed system and demonstrate that it outperforms the conventional energy detector system.
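For readers unfamiliar with evidence theory, the following sketch shows Dempster's rule of combination, the standard fusion step in evidence-theory-based cooperative sensing, for two detectors over the hypotheses H0 (channel idle), H1 (channel occupied), and the ignorance set. The specific mass values and variable names are illustrative assumptions, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments over {'H0', 'H1', 'ANY'}
    ('ANY' is the ignorance hypothesis Theta) using Dempster's rule.
    Conflicting mass (H0 vs H1) is discarded and the rest renormalized."""
    k = m1['H0'] * m2['H1'] + m1['H1'] * m2['H0']  # conflict mass
    norm = 1.0 - k
    return {
        'H0': (m1['H0'] * m2['H0'] + m1['H0'] * m2['ANY'] + m1['ANY'] * m2['H0']) / norm,
        'H1': (m1['H1'] * m2['H1'] + m1['H1'] * m2['ANY'] + m1['ANY'] * m2['H1']) / norm,
        'ANY': m1['ANY'] * m2['ANY'] / norm,
    }

# Two detectors both lean towards "channel occupied" (H1):
d1 = {'H0': 0.2, 'H1': 0.6, 'ANY': 0.2}
d2 = {'H0': 0.3, 'H1': 0.5, 'ANY': 0.2}
fused = dempster_combine(d1, d2)
# Fused belief in H1 exceeds either individual detector's belief.
```

This is why evidence-based fusion can be more reliable than any single detector: agreeing detectors reinforce each other while explicit ignorance mass keeps a weak detector from dominating the decision.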
TCS_SHA-3 is a family of four cryptographic hash functions that are covered by a United States patent (US 2009/0262925). The digest sizes are 224, 256, 384 and 512 bits. The hash functions use bijective functions in place of the standard compression functions. In this paper we describe first and second preimage attacks on the full hash functions. The second preimage attack requires negligible time and the first preimage attack requires O(2^36) time. In addition to these attacks, we also present a negligible-time second preimage attack on a strengthened variant of TCS_SHA-3. All the attacks have negligible memory requirements. To the best of our knowledge, there is no prior cryptanalysis of any member of the TCS_SHA-3 family in the literature.
Image compression is an essential technique for saving time and storage space given the gigantic amount of data generated by images. This paper introduces an adaptive source-mapping scheme that greatly improves bit-level lossless grayscale image compression. In the proposed mapping scheme, the frequency of occurrence of each symbol in the original image is computed. The symbols are then sorted in descending order of frequency and, based on this order, each symbol is replaced by an 8-bit weighted fixed-length code. This replacement generates an equivalent binary source with longer runs of successive identical symbols (0s or 1s). Experiments applying Lempel-Ziv lossless image compression algorithms to the generated binary source show that the newly proposed mapping scheme achieves dramatic improvements in compression ratio.
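The frequency-ranking step described above can be sketched as follows; the function name and the code details (plain rank values standing in for the 8-bit weighted codes) are illustrative assumptions.

```python
from collections import Counter

def rank_map(symbols):
    """Map each symbol to its rank in descending-frequency order
    (most frequent symbol -> 0). Writing each rank as an 8-bit code
    then yields a binary source dominated by leading zeros, i.e. one
    with longer runs of identical bits, which LZ-style coders exploit."""
    freq = Counter(symbols)
    # Sort by descending frequency; break ties by symbol value for determinism.
    order = sorted(freq, key=lambda s: (-freq[s], s))
    mapping = {s: rank for rank, s in enumerate(order)}
    return [mapping[s] for s in symbols]

pixels = [200, 200, 17, 200, 90, 200, 90]
mapped = rank_map(pixels)
# 200 (most frequent) -> 0, 90 -> 1, 17 -> 2, so mapped is [0, 0, 2, 0, 1, 0, 1]
```

Because the most frequent intensity always maps to the all-zero codeword, the mapped bitstream concentrates probability mass on 0-bits, which is the property the paper's Lempel-Ziv experiments exploit.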