The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[Aug. 20, 2017] The call for the 2017 JIPS survey paper awards is now open. Please refer here for details.
[Dec. 12, 2016] Calls for papers for special sections scheduled in 2017 have been registered. Please refer here for details.
[Oct. 1, 2016] A call for papers for a new special issue titled "Smart Standards, Algorithms and Frameworks for Interoperability in Internet of Things" is now registered. Please refer here for details.
[Aug. 20, 2016] Since August 20, 2016, the JIPS has managed two fast tracks in addition to the regular track, and authors have been required to pay a publication charge. Please refer to the details on the author information page.
[Aug. 1, 2016] Since August 2016, the JIPS has been indexed in the Emerging Sources Citation Index (ESCI), a new Web of Science index managed by Thomson Reuters, launched in late 2015 for journals that have passed an initial evaluation for inclusion in the SCI/SCIE/AHCI/SSCI indexes. Indexing in the ESCI will improve the visibility of the JIPS and provide a mark of quality. This achievement benefits all JIPS authors. For more information about the ESCI, please see the ESCI fact sheet file.
Journal of Information Processing Systems, Vol. 13, No. 4, 2017
The Journal of Information Processing Systems (JIPS) publishes a broad array of subjects related to information and communication technology in a wide variety of prevalent and advanced fields, including systems, networks, architecture, algorithms, applications, security, and so forth. As the official international journal published by the Korea Information Processing Society and a prominent multidisciplinary journal, JIPS is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. The purpose of JIPS is to provide a prominent, influential forum where researchers and professionals can come together to promote, share, and discuss all major research issues and developments. Published theoretical and practical articles contribute to their related research areas by presenting new techniques, concepts, or analyses, and feature experience reports, experiments involving the implementation and application of new theories, and tutorials on state-of-the-art technologies related to information processing systems. The subjects covered by this journal include, but are not limited to, topics related to computer systems and theories, multimedia systems and graphics, communication systems and security, and software systems and applications.
The significant advances in information and communication technologies are changing how information is accessed. The internet is a very important source of information, and it influences the development of other media. Furthermore, the rapid growth of digital content poses a serious challenge for academic digital libraries, where similar tools can be applied to provide users with access to information. Given the importance of this, we have reviewed and analyzed several proposals that improve the processes of disseminating information in these university digital libraries and that promote access to information of interest. These proposals manage to adapt a user's access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems. These are tools whose objective is to evaluate and filter the vast amount of digital information that is accessible online in order to help users in their processes of accessing information. In particular, we focus on the analysis of fuzzy linguistic recommender systems (i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user's preferences and the uncertainty of the system in a qualitative way). Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems to help researchers, students, and teachers access resources of interest and thus improve and complement the services provided by academic digital libraries.
Recently, the importance of big data has been emphasized with the spread of smartphones, the web, and social networking services. As a result, MapReduce, which can efficiently process big data, is receiving worldwide attention because of its excellent scalability and stability. Since big data is characterized by large volume, high generation speed, and great variety, it is often more efficient to process summary information than the big data itself. The wavelet histogram, a typical data summarization technique, can generate optimal summary information without losing the information of the original data. Therefore, systems applying MapReduce-based wavelet histogram generation techniques have been actively studied. However, existing research has the disadvantage of slow generation speed, because the wavelet histogram is generated through multiple MapReduce jobs, and the error of the data restored from the wavelet histogram can become large. In contrast, since the MapReduce-based wavelet histogram generation system developed in this paper generates the wavelet histogram through a single MapReduce job, the generation speed can be greatly increased. In addition, since the wavelet histogram is generated subject to a user-specified error bound, the error of the data restored from the wavelet histogram can be controlled. Finally, we verified the efficiency of the developed system through a performance evaluation.
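To make the summarization idea concrete, the following minimal Python sketch computes a Haar wavelet transform of a histogram and keeps only the k largest coefficients. The function names, the parameter k, and the single-machine setting are illustrative assumptions; the paper's single-job MapReduce pipeline and error-bound control are not reproduced here.

```python
import numpy as np

def haar_forward(x):
    """In-place-style Haar transform: averages in the front, details behind."""
    c = x.astype(float)
    n = len(c)                          # assumed to be a power of two
    while n > 1:
        pairs = c[:n].reshape(n // 2, 2)
        avg = pairs.mean(axis=1)
        det = (pairs[:, 0] - pairs[:, 1]) / 2.0
        c[:n // 2], c[n // 2:n] = avg, det
        n //= 2
    return c

def haar_inverse(c):
    """Invert haar_forward level by level."""
    x = c.copy()
    n = 1
    while n < len(x):
        avg, det = x[:n].copy(), x[n:2 * n].copy()
        out = np.empty(2 * n)
        out[0::2], out[1::2] = avg + det, avg - det
        x[:2 * n] = out
        n *= 2
    return x

def summarize(hist, k):
    """Keep only the k largest-magnitude coefficients (illustrative rule)."""
    c = haar_forward(np.asarray(hist))
    keep = np.argsort(np.abs(c))[-k:]
    sparse = np.zeros_like(c)
    sparse[keep] = c[keep]
    return sparse

hist = [2, 2, 0, 2, 3, 5, 4, 4]
approx = haar_inverse(summarize(hist, k=3))
print(np.max(np.abs(approx - np.asarray(hist))))   # max reconstruction error
```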
In order to reduce the probability of missed detection of multiple targets in ultra-wideband (UWB) through-the-wall radar imaging (TWRI), a time-delay and amplitude-modified back projection (BP) algorithm is proposed. The refraction point is found by Fermat's principle in the presence of a wall, and the time delay is correctly compensated. On this basis, the transmission loss of the electromagnetic wave, the absorption loss of the refracted wave, and the diffusion loss of the spherical wave are analyzed in detail. Amplitude compensation is derived and tested on a model with a single-layer wall. Simulation results obtained by the finite-difference time-domain (FDTD) method show that the algorithm is effective in increasing the scattering intensity of the targets behind the wall, and that compensation for the spherical-wave diffusion loss plays a major role. Additionally, a two-layer wall model is simulated, and the computation time and imaging quality are compared between the single-layer and two-layer wall models. The results illustrate the performance of the time-delay and amplitude-modified BP algorithm with multiple targets and multi-layer walls in UWB TWRI.
Structured Query Language (SQL) Injection continues to be one of the greatest security risks in the world according to the Open Web Application Security Project's (OWASP) 2013 Top 10 security vulnerabilities. The ease of exploitability and severe impact put this attack at the top. As the countermeasures become more sophisticated, SQL Injection Attacks also continue to evolve, thus thwarting the attempt to eliminate this attack completely. Such vulnerable data is a source of worry for government and financial institutions. In this paper, a detailed survey of different types of SQL Injection and proposed methods and theories is presented, along with various tools and their efficiency in intercepting and preventing SQL attacks.
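As a concrete illustration of the attack and of the most basic countermeasure discussed in such surveys, the sketch below contrasts a vulnerable string-concatenated query with a parameterized one; the table, column names, and payload are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"        # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns a row despite the bogus name

# Safe: a parameterized query treats the payload as a literal value.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())                 # returns no rows
```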
Mammogram images are sensitive in nature, and even a minor change in the environment affects the quality of the images. Due to the lack of expert radiologists, it is difficult to interpret mammogram images. In this paper, an algorithm is proposed for a computer-aided diagnosis system, which is based on a wavelet based adaptive sigmoid function. The cascade feed-forward back propagation technique has been used for training and testing purposes. Due to the poor contrast in digital mammogram images, it is difficult to process the images directly. Thus, the images were first processed using the wavelet based adaptive sigmoid function, and then the suspicious regions were selected to extract the features. A combination of texture features and gray-level co-occurrence matrix features was extracted and used for training and testing purposes. The system was trained with 150 images, while a total of 100 mammogram images were used for testing. A classification accuracy of more than 95% was obtained with our proposed method.
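The following minimal sketch illustrates sigmoid-based contrast enhancement of a low-contrast image patch; the gain setting, mean-based cutoff, and synthetic patch are illustrative assumptions, and the paper's wavelet decomposition stage is omitted.

```python
import numpy as np

def adaptive_sigmoid(img, gain=10.0, cutoff=None):
    """Sigmoid contrast stretching on a normalized image. `gain` and the
    mean-based `cutoff` are illustrative choices, not the paper's settings."""
    x = img.astype(float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalize to [0, 1]
    if cutoff is None:
        cutoff = x.mean()              # adapt the midpoint to the image
    return 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))

# Low-contrast synthetic "mammogram patch": values bunched around 0.5
patch = 0.5 + 0.05 * np.random.randn(64, 64)
enhanced = adaptive_sigmoid(patch)
print(patch.std(), enhanced.std())     # contrast (spread) increases
```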
In this paper, we propose a transliteration approach based on semantic information (i.e., language origin and gender), which is automatically learned from the person name, aiming to transliterate person names from Uyghur into Chinese. The proposed approach integrates semantic scores (i.e., performance on language origin and gender detection) with a general transliteration model and generates a semantic knowledge-based model that can produce the best candidate transliteration results. In the experiments, we use datasets that contain person names of different language origins: Uyghur and Chinese. The results show that the proposed semantic transliteration model substantially outperforms the general transliteration model and greatly improves the mean reciprocal rank (MRR) performance on both datasets, as well as aiding the development of more efficient transliteration for named entities.
This paper proposes an automatic method to summarize Bangla news documents. In the proposed approach, pronoun replacement is accomplished for the first time to minimize dangling pronouns in the summary. After pronoun replacement, sentences are ranked using term frequency, sentence frequency, numerical figures, and title words. If two sentences have at least 60% cosine similarity, the frequency of the larger sentence is increased, and the smaller sentence is removed to eliminate redundancy. Moreover, the first sentence is always included in the summary if it contains any title word. In Bangla text, numerical figures can be presented both in words and in digits, with a variety of forms. All these forms are identified to assess the importance of sentences. We have used a rule-based system in this approach, together with a hidden Markov model and a Markov chain model. To explore the rules, we analyzed 3,000 Bangla news documents and studied some Bangla grammar books. A series of experiments was performed on 200 Bangla news documents and 600 summaries (3 summaries for each document). The evaluation results demonstrate the effectiveness of the proposed technique over the four latest methods.
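A minimal sketch of the redundancy-elimination step is given below, using the paper's 60% cosine similarity threshold. For simplicity it keeps the earlier (higher-ranked) sentence rather than the larger one, so the tie-breaking rule here is an assumption.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sentences as bags of words."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def drop_redundant(ranked_sentences, threshold=0.6):
    """Keep higher-ranked sentences; drop a later sentence whose cosine
    similarity with a kept one reaches the 60% threshold from the paper."""
    kept = []
    for s in ranked_sentences:
        if all(cosine(s, k) < threshold for k in kept):
            kept.append(s)
    return kept

ranked = ["the river flooded the northern district today",
          "the river flooded the northern district",
          "officials opened three new shelters"]
print(drop_redundant(ranked))   # the near-duplicate second sentence is removed
```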
Software today has become an inseparable part of our life. In order to meet the ever-growing demands of customers, it has to evolve rapidly and incorporate a number of changes. In this paper, our aim is to study the relationship of object-oriented metrics with the change proneness attribute of a class. Prediction models based on this study can help us identify the change-prone classes of a software system; we can then focus our testing efforts on these classes to yield better quality software. Previously, researchers have used statistical methods for predicting change-prone classes, but machine learning methods are rarely used for this task. In our study, we evaluate and compare the performances of ten machine learning methods with the statistical method. This evaluation is based on two open source software systems developed in the Java language. We also validated the developed prediction models using another software dataset in the same domain (3D modelling). The performance of the prediction models was evaluated using receiver operating characteristic (ROC) analysis. The results indicate that the machine learning methods are on par with the statistical method for the prediction of change-prone classes. Another analysis showed that the models constructed for one software system can also be used to predict the change-prone nature of classes in another software system in the same domain. This study would help developers perform effective regression testing at low cost and effort. It will also help developers design an effective model that results in fewer change-prone classes and hence better maintenance.
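The evaluation pipeline can be sketched as follows, with a logistic regression model standing in for the statistical method. The synthetic dataset and the model choice are assumptions; only the ROC-based evaluation mirrors the paper.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in data: rows = classes, columns = object-oriented metrics
# (e.g., coupling, depth of inheritance); labels = change-prone or not.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("AUC =", roc_auc_score(y_te, scores))   # area under the ROC curve
```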
Cross-lingual query expansion is usually based on relationships among monolingual words, whereas a bilingual comparable corpus contains relationships among bilingual words. Therefore, this paper proposes a query expansion method based on these relationships. First, word vectors that characterize the bilingual words are trained using a Chinese and Thai bilingual comparable corpus. Then, the correlations between Chinese query words and Thai words are computed from these word vectors, and the Thai candidate expansion terms are selected via the correlation values. Next, multiple groups of Thai query expansion sentences are built from the Thai candidate expansion words based on the Chinese query sentence. Finally, the optimal sentence is obtained using the Chinese and Thai query expansion method, and the Thai query expansion is performed. Experimental results show that the proposed cross-lingual query expansion method can effectively improve the accuracy of Chinese and Thai cross-language information retrieval.
This paper introduces a new algorithm that renders motion blur using triangular motion paths. A triangle occupies a set of pixels when moving from its position at the start of a frame to another position at the end of the frame; this is the motion path of the moving triangle. For a given pixel, we use the motion path of each moving triangle to find the range of time during which the triangle is visible to the camera. Then, we sort the visible time ranges in the depth-time dimensions and use bitwise operations to solve the occlusion problem. Thereafter, we compute the average color of each moving triangle based on its visible time range. Finally, we accumulate the average colors of the moving triangles in front-to-back order to produce the final pixel color. Thus, our algorithm performs shading after the visibility test and renders motion blur in real time.
The number of information sources available on the web using ontologies as support continues to increase, and these sources are often heterogeneous and distributed. Ontology alignment is the solution to ensure semantic interoperability. In this paper, we describe a new ontology alignment approach, which consists of combining structure-based and reasoning-based approaches in order to discover new semantic correspondences between entities of different ontologies. We used the biblio test of the benchmark series and the anatomy series of the Ontology Alignment Evaluation Initiative (OAEI) 2012 evaluation campaign to evaluate the performance of our approach. We compared our approach successively with the LogMap and YAM++ systems, and we also analyzed the contribution of our method compared to structural and semantic methods. The results obtained show that our approach performs well; indeed, the results are better than those of the LogMap system in terms of precision, recall, and F-measure. Our approach has also been proven to be more relevant than YAM++ for certain types of ontologies, and it significantly improves on the structure-based and reasoning-based methods.
In this work, a discrete cosine transform (DCT)-based dimensionality-reduced feature approach for fingerprint matching is proposed. The DCT is applied to a small region around the core point of the fingerprint image. The performance of our proposed method is evaluated on a small database from Bologna University and two large databases from FVC2000. A dimensionally reduced feature vector is formed using only approximately 19%, 7%, and 6% of the DCT coefficients for the Bologna University database and the two FVC2000 databases, respectively. We compared the results of our proposed method with the discrete wavelet transform (DWT) method, the rotated wavelet filters (RWFs) method, and combinations of DWT+RWF and DWT+(HL+LH) subbands of RWF. The proposed method reduces the false acceptance rate from approximately 18% to 4% on DB1 (the Bologna University database), from approximately 29% to 16% on DB2 (FVC2000), and from approximately 26% to 17% on DB3 (FVC2000) compared to the DWT based feature extraction method.
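A minimal sketch of DCT-based feature reduction on a region around the core point might look as follows; the region size and the `keep` parameter are illustrative, not the paper's reported coefficient fractions.

```python
import numpy as np
from scipy.fftpack import dct

def dct_features(region, keep=8):
    """2-D DCT of a square region, keeping only the top-left (low-frequency)
    keep x keep block as the reduced feature vector."""
    d = dct(dct(region.astype(float), axis=0, norm='ortho'),
            axis=1, norm='ortho')
    return d[:keep, :keep].ravel()

region = np.random.rand(32, 32)        # stand-in for a fingerprint patch
features = dct_features(region)
print(features.shape)                  # (64,) instead of 1024 raw pixels
```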
Traditional text similarity measurement methods based on word frequency vectors ignore the semantic relationships between words, which, together with the high dimensionality and sparsity of document vectors, has become an obstacle to text similarity calculation. To address these problems, improved singular value decomposition is used to reduce the dimensionality and remove noise from the text representation model. The optimal number of singular values is analyzed, and the semantic relevance between words can be calculated in the constructed semantic space. An inverted index construction algorithm and similarity definitions between vectors are proposed to calculate the similarity between two documents at the semantic level. Experimental results on a benchmark corpus demonstrate that the proposed method improves the F-measure.
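The dimensionality-reduction step can be sketched with a plain truncated SVD, as below. The toy term-document matrix and the choice k=2 are assumptions; the paper's inverted-index algorithm and its analysis of the optimal number of singular values are not reproduced.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents)
A = np.array([[2., 0., 1.],
              [1., 0., 0.],
              [0., 3., 1.],
              [0., 2., 0.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of retained singular values
docs_k = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the k-dim semantic space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Document similarity on the semantic level rather than raw word counts
print(cos(docs_k[0], docs_k[1]), cos(docs_k[0], docs_k[2]))
```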
Virtual reality is a virtual space constructed by a computer that provides users with the opportunity to indirectly experience situations they have not experienced in real life through the realization of information in virtual environments. Various studies have been conducted to realize virtual reality, in which the user interface is a major factor in maximizing the sense of immersion and usability. However, most existing methods have disadvantages, such as high cost or restriction of the user's physical activity due to the special devices attached to the user's body. This paper proposes a new type of interface that enables users to apply their intentions and actions to the virtual space directly, without special devices, and test content using the new system is introduced. Users can interact with the virtual space by throwing an object in the space; to do this, moving-object detectors are built using infrared sensors. In addition, users can control the virtual space with their own postures. The method can heighten interest and concentration, increasing the sense of reality and immersion and maximizing the user's physical experience.
Handwriting-based person identification systems use structural properties of handwriting, as perceived by their designers, as features. In this paper, we present a system that uses as features the structural properties that graphologists and expert handwriting analyzers use for determining a writer's personality traits and for making other assessments. The advantage of these features is that their definition is based on sound historical knowledge (i.e., the knowledge discovered by graphologists, psychiatrists, forensic experts, and experts in other domains in analyzing the relationships between handwritten stroke characteristics and the phenomena that embed individuality in strokes). Hence, each stroke characteristic reflects a personality trait. We measured the effectiveness of these features on a subset of handwritten Devnagari and Latin script datasets from the Center for Pattern Analysis and Recognition (CPAR-2012), written by 100 people, each of whom wrote three samples of the Devnagari and Latin text that we designed for our experiments. The experiment yielded 100% correct identification on the training set. However, we observed 88% and 89% correct identification rates when we experimented with 200 training samples and 100 test samples on handwritten Devnagari and Latin text, respectively. By introducing majority-voting-based rejection criteria, the identification accuracy increased to 97% on both script sets.
Link prediction in weighted networks is a challenging issue in complex network analysis. Unsupervised methods based on local structure are widely used for this predictive task. However, the results are still far from satisfactory, as most of the literature neglects two important points: common neighbors exert different influence on potential links, and the weights associated with links in the local structure also differ. In this paper, we adapt an effective link prediction model, the local naive Bayes model, to the weighted scenario to address these issues. Correspondingly, we propose a weighted local naive Bayes (WLNB) probabilistic link prediction framework. The main contribution is that a weighted clustering coefficient has been incorporated, allowing our model to infer the weighted contribution in the prediction stage. In addition, WLNB can be extended to several classic similarity metrics. We evaluate WLNB on different kinds of real-world weighted datasets. Experimental results show that our proposed approach performs better (in terms of AUC and precision) than several alternative methods for link prediction in weighted complex networks.
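A simplified sketch of weighted common-neighbour scoring, the local-structure signal that WLNB builds on, is shown below. The naive-Bayes role term and the weighted clustering coefficient of the full model are not reproduced, and the toy graph is hypothetical.

```python
import networkx as nx

def weighted_cn_score(G, u, v):
    """Weighted common-neighbour score: each shared neighbour contributes
    the mean weight of its two links. A simplified stand-in for WLNB."""
    score = 0.0
    for w in nx.common_neighbors(G, u, v):
        score += (G[u][w]["weight"] + G[v][w]["weight"]) / 2.0
    return score

G = nx.Graph()
G.add_weighted_edges_from([("a", "c", 0.9), ("b", "c", 0.8),
                           ("a", "d", 0.2), ("b", "d", 0.1),
                           ("c", "d", 0.5)])
print(weighted_cn_score(G, "a", "b"))   # strong shared neighbour c dominates
```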
State-of-the-art speaker recognition systems may work well for the English language; however, if the same system is used for recognizing speakers of other languages, it may yield poor performance. In this work, the decisions of a Gaussian mixture model-universal background model (GMM-UBM) and a learning vector quantization (LVQ) classifier are combined to improve the recognition performance of a multilingual speaker identification system. The difference between these classifiers lies in their modeling techniques: the former is based on a probabilistic approach and the latter on the fine-tuning of neurons. Since the approaches differ, each modeling technique identifies different sets of speakers for the same database. Therefore, the decisions of the classifiers may be combined to improve performance. In this study, multitaper mel-frequency cepstral coefficients (MFCCs) are used as the features, and monolingual and cross-lingual speaker identification studies are conducted using NIST-2003 and our own database. The experimental results show that the combined system improves performance by nearly 10% compared with the individual classifiers.
The round robin algorithm is regarded as one of the most efficient and effective CPU scheduling techniques in computing. It centres on the processing time required for a CPU to execute available jobs. Although there are other CPU scheduling algorithms based on processing time that use different criteria, the round robin algorithm has gained much popularity due to its optimal time-shared environment. The effectiveness of this algorithm depends strongly on the choice of time quantum. This paper presents a new, effective round robin CPU scheduling algorithm, whose effectiveness lies in a time quantum that is dynamically allocated in each round. Its performance is compared with both traditional and enhanced round robin algorithms, and the findings demonstrate improved performance in terms of average waiting time, average turnaround time, and context switching.
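One plausible reading of a dynamically allocated quantum is to recompute it each round from the remaining burst times, as in the sketch below. The mean-based rule and the assumption that all jobs arrive at time zero are illustrative, not necessarily the paper's exact formula.

```python
from collections import deque

def dynamic_round_robin(bursts):
    """Round robin where the quantum is recomputed each round as the mean of
    the remaining burst times (an assumed rule; all jobs arrive at t = 0).
    Returns (avg_waiting, avg_turnaround, context_switches)."""
    n = len(bursts)
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    t, switches = 0, 0
    finish = {}
    while queue:
        quantum = max(1, sum(remaining.values()) // len(remaining))
        for _ in range(len(queue)):    # one pass over this round's queue
            p = queue.popleft()
            run = min(quantum, remaining[p])
            t += run
            remaining[p] -= run
            switches += 1
            if remaining[p] == 0:
                finish[p] = t
                del remaining[p]
            else:
                queue.append(p)
    turnaround = [finish[p] for p in range(n)]
    waiting = [finish[p] - bursts[p] for p in range(n)]
    return sum(waiting) / n, sum(turnaround) / n, switches

print(dynamic_round_robin([24, 3, 3]))
```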
In this paper, we propose a new traffic information service model that collects traffic information sensed by individual vehicles in real time using smart devices and enables drivers to share traffic information on all roads in real time through an application installed on a smart device. In particular, when a driver requests traffic information for a specific area, the proposed driver-personalized service model provides him/her with traffic information along the driving directions in advance, by predicting the driving directions of the vehicle based on learning from each driver's driving records. To do this, we propose a traffic information management model to process and manage in real time the large amount of online-generated traffic information and traffic information requests generated by each vehicle. We also propose a road node-based indexing technique to efficiently store and manage location-based traffic information provided by each vehicle. Finally, we propose a driving learning and prediction model based on the hidden Markov model to predict the driving directions of each driver based on the driver's driving records. We analyze the traffic information processing performance of the proposed model and the accuracy of the driving prediction model using traffic information collected from actual driving vehicles for the entire area of Seoul, as well as driving records and experimental data.
Certificateless public key cryptography (CL-PKC) is a new benchmark in modern cryptography. It not only simplifies the certificate management problem of PKC, but also avoids the key escrow problem of the identity-based cryptosystem (ID-PKC). In this article, we propose a certificateless blind signature protocol based on elliptic curve cryptography (CLB-ECC). The scheme is suitable for wireless communication environments because of its smaller parameter sizes. The proposed scheme is proven to be secure against attacks by two different kinds of adversaries. CLB-ECC is efficient in terms of computation compared to other existing conventional schemes, and it can withstand forgery attacks, key-only attacks, and known-message attacks. An e-cash framework based on CLB-ECC has also been proposed. As a result, the proposed CLB-ECC scheme seems well suited for real-life applications like e-shopping, e-voting, etc., on handheld devices.
The extraction of influential people from their respective domains has attracted the attention of the scholastic community in recent years. This study introduces an innovative interaction strength metric for retrieving the most influential users of an online social network. The interaction strength is measured by three factors, namely re-tweet strength, commencing intensity, and mentioning density. In this article, we design a novel algorithm called IPRank that considers communications from the perspectives of followers and followees in order to mine and rank the most influential people based on the proposed interaction strength metric. We conducted extensive experiments to evaluate the strength and rank of each user in a micro-blog network. The comparative analysis validates that IPRank discovers highly ranked people in terms of interaction strength, whereas the prior algorithm placed some low-influence people at high ranks. The proposed model uncovers influential people owing to the inclusion of a novel interaction strength metric that significantly improves results in contrast with the prior algorithm.
Clustering is an NP-hard problem used to find relationships between patterns in a given set of patterns. It is an unsupervised technique applied to obtain optimal cluster centers, especially in partition-based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new meta-heuristic algorithm that has been applied to solve various optimization problems, and it provides better results than other similar algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we propose an improved version of the CSO algorithm using opposition-based learning and the Cauchy mutation operator. We apply opposition-based learning to enhance the diversity of the CSO algorithm, and we use the Cauchy mutation operator to prevent the CSO algorithm from becoming trapped in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.
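The two added operators can be sketched in a few lines; the bounds, scale parameter, and toy fitness function below are illustrative assumptions rather than the paper's full CSO loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition(population, low, high):
    """Opposition-based learning: mirror each candidate inside the search
    bounds; the better of (x, opposite) is then kept to boost diversity."""
    return low + high - population

def cauchy_mutation(x, scale=0.1):
    """Perturb solutions with heavy-tailed Cauchy noise to escape local
    optima (the scale is an illustrative setting)."""
    return x + scale * rng.standard_cauchy(size=x.shape)

def fitness(pop, target):
    return -np.linalg.norm(pop - target, axis=1)   # higher is better

low, high, target = -5.0, 5.0, np.array([1.0, -2.0])
pop = rng.uniform(low, high, size=(6, 2))
both = np.vstack([pop, opposition(pop, low, high)])
best = both[np.argsort(fitness(both, target))[-6:]]   # keep the best half
mutated = np.clip(cauchy_mutation(best), low, high)
print(mutated)
```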
A joint channel estimation and data detection technique for a multiple-input multiple-output (MIMO) wireless communication system is proposed. It combines the least squares (LS) training-based channel estimation (TBCE) scheme with sphere decoding. In this new approach, channel estimation is enhanced with the help of blind symbols, which are selected based on their correctness, as determined via sphere decoding. The performance of the new scheme is studied through simulation in terms of the bit error rate (BER). The results show that the proposed channel estimation has comparable performance and lower computational complexity than the existing semi-blind channel estimation (SBCE) method.
The recent advent of increasingly affordable and powerful 3D scanning devices, capable of capturing high-resolution range data about real-world objects and environments, has fueled research into effective 3D surface reconstruction techniques that render the raw point cloud data produced by many of these devices into a form usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with raw 3D point cloud data, and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results, demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods that have been evaluated on the same benchmark datasets, along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods for gene identification continue to play important roles in this area and other relevant issues. So far, a great deal of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work, but most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.
In this paper, we present research results on computing-intensive applications using modern high-performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain. They have been the object of study under different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit accuracy comparable to or better than that of other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhoods and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events), while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real-time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings new security challenges, so it is perhaps timely that we review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning-based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state of the art.
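A minimal sketch of one such weak estimator, a multiplicative update for a Bernoulli parameter, is shown below on a simulated abrupt change of user interest; the learning rate and the simulated stream are illustrative assumptions.

```python
import random

def weak_estimate(stream, lam=0.9):
    """Stochastic-learning weak estimator for a Bernoulli parameter:
    a multiplicative update where lam closer to 1 trades tracking speed
    for stability. A sketch of the family the paper builds on."""
    p = 0.5
    estimates = []
    for x in stream:
        p = lam * p + (1 - lam) if x == 1 else lam * p
        estimates.append(p)
    return estimates

random.seed(1)
# Non-stationary user interest: P(click) = 0.8, then abruptly 0.2
stream = [int(random.random() < 0.8) for _ in range(200)] + \
         [int(random.random() < 0.2) for _ in range(200)]
est = weak_estimate(stream)
print(round(est[199], 2), round(est[399], 2))   # tracks the switch
```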
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is limiting the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that have considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization have assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN test bed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of automatic identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Sections 3 and 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive impact. Section 7 deals with privacy issues concerning the use of RFID, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular computing has emerged as a unified and coherent framework for designing, processing, and interpreting information granules. Information granules are formalized within various frameworks, such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, in which we identify several key categories of schemes.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link at any time is not able to meet the minimum bandwidth requirement of the data, data transmission along that path becomes difficult, which leads to network congestion. This causes delay in data transmission and may also lead to packet drops in the network; the retransmission of these lost packets would aggravate the situation and jam the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move traffic away from the shortest path obtained by a suitable shortest-path algorithm to a less congested path, so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this, we have proposed a protocol named Congestion Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results achieved show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
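The constraint-pruned shortest-path idea can be sketched as follows; the per-edge `utilization` attribute, the threshold, and the toy topology are hypothetical stand-ins for the paper's congestion constraints.

```python
import networkx as nx

def congestion_aware_route(G, src, dst, max_utilization=0.8):
    """Prune links whose utilization violates the constraint, then run a
    shortest-path computation on what remains (a simplified CASPER-like
    sketch, not the authors' implementation)."""
    H = G.edge_subgraph(
        (u, v) for u, v, d in G.edges(data=True)
        if d["utilization"] <= max_utilization
    )
    return nx.shortest_path(H, src, dst, weight="delay")

G = nx.Graph()
G.add_edge("A", "B", delay=1, utilization=0.95)   # short but congested
G.add_edge("A", "C", delay=2, utilization=0.30)
G.add_edge("C", "B", delay=2, utilization=0.40)
print(congestion_aware_route(G, "A", "B"))        # ['A', 'C', 'B']
```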
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a small number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we present simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination, or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these preserve the security properties of the compression function, in particular collision resistance, second-preimage resistance, preimage resistance, and the pseudo-random oracle property.
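A plain linear iteration (the Merkle-Damgard construction with length strengthening) can be sketched as follows; the toy compression function built from SHA-256 and the block/state sizes are illustrative assumptions, not one of the surveyed extenders.

```python
import hashlib

BLOCK = 64          # bytes per message block (assumed)
STATE = 32          # bytes of chaining state (assumed)

def compress(state, block):
    """Toy fixed-input-length compression function built from SHA-256;
    real designs build this from scratch or from a block cipher."""
    return hashlib.sha256(state + block).digest()

def md_hash(msg, iv=b"\x00" * STATE):
    """Plain linear iteration: pad, append the message length
    (length strengthening), then chain the compression function."""
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += len(msg).to_bytes(8, "big")
    state = iv
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"hello").hex())
```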
This paper proposes a novel reversible data hiding scheme based on a vector quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. Every node makes a decision based on various parameters, such as longevity, distance, and battery power, which measure the node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with an energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Network (NGN) evolution is towards providing multiple and multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues, such as quality of service (QoS), charging, access control, and user and service management. Nowadays, internet technology is changing with each passing day, and new technologies bring new impacts to IMS. In this paper, we survey IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, as well as its security issues.
Due to the convergence of voice, data, and video, today's telecom operators are facing the complexity of service and network management in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services in a timely and effective manner upon customer request. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed for the support of fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks face more technological challenges than wired networks, such as bandwidth limits, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to address the security issue. Originally, mobile terminals were unable to perform heavy security calculations because of their low computing power. The SAG not only offers high computing power to handle the encryption demands of its domain, but also helps mobile terminals establish multiple security tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase bandwidth utilization. Instead of the Access Point (AP), an Access Gateway (AG) is used to handle packet header compression and decompression on the wireless end. The AG's high computing power reduces the load on the AP, which in the original architecture has to handle a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users "mobility" and "roaming", which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology may introduce latency. Furthermore, how the security tunnel and header compression established before handoff can be reused by mobile terminals after handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the SAG to offer a complete mechanism with low latency, a low-computation handoff mechanism, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results, without any unintentional errors resulting from negligence or incorrect knowledge, etc.,
and without any intentional misconduct, such as falsification, plagiarism, etc. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subjected to peer review and, upon acceptance, are immediately made
permanently available free of charge for everyone worldwide to read and download from the journal’s homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.