Article

Implementation of Agent Based Dynamic Distributed Service


Abstract

The concept of distributed computing implies a network/internet-work of independent nodes that are logically configured so as to be seen as one machine by an application. Such systems have been implemented in many forms and configurations for the optimal processing of data. Agents and multi-agent systems are useful in modeling complex distributed processes. They focus on supporting the development of large-scale, secure, and heterogeneous distributed systems, and they are expected to abstract both hardware and software in distributed systems. To exploit the tremendous increase in processing power, bandwidth, and memory that technology is placing in the hands of the designer, a Dynamically Distributed Service (to be positioned as a service to a network/internet-work) is proposed. The service conceptually migrates an application onto different nodes. In this paper, we present the design and implementation of an inter-mobility (migration) mechanism for agents, based on FIPA ACL messages, and evaluate the performance of this implementation.
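
The paper implements migration via FIPA ACL messages, but the listing gives no code. The following is a minimal illustrative sketch on JADE, a common FIPA-compliant platform, assuming JADE's standard mobility API; the destination container name and message content are hypothetical, not taken from the paper.

```java
import jade.core.Agent;
import jade.core.ContainerID;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;

// Illustrative sketch only: FIPA-ACL-announced agent migration on JADE.
// "Container-2" and the message content are hypothetical.
public class MigratingAgent extends Agent {
    protected void setup() {
        addBehaviour(new OneShotBehaviour() {
            public void action() {
                // Announce the intended move in a FIPA ACL REQUEST to the AMS.
                ACLMessage msg = new ACLMessage(ACLMessage.REQUEST);
                msg.addReceiver(myAgent.getAMS());
                msg.setOntology("agent-mobility");
                msg.setContent("(move-agent :destination Container-2)");
                myAgent.send(msg);

                // Trigger migration: the agent's state is serialized and
                // restored on the destination container.
                myAgent.doMove(new ContainerID("Container-2", null));
            }
        });
    }

    // Runs on the destination container after a successful move.
    protected void afterMove() {
        System.out.println("Arrived at " + here().getName());
    }
}
```
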

Article
Nowadays there are many offerings of cloud services all over the world, which have various security requirements depending on their business use. The compliance of these cloud services with predefined security policies should be proven. In a cloud infrastructure this is not an easy job because of its immense complexity. This paper proposes an architecture that uses software agents as its core components to collect evidence across the different layers of cloud infrastructures (Cloud Management System, hypervisor, VM, etc.) and builds a chain of evidence to prove compliance with predefined security policies.
Article
Full-text available
We consider a machine with a single real variable x that describes its state. Jobs J1, …, JN are to be sequenced on the machine. Each job requires a starting state Ai and leaves a final state Bi. This means that Ji can be started only when x = Ai and, at the completion of the job, x = Bi. There is a cost, which may represent time or money, etc., for changing the machine state x so that the next job may start. The problem is to find the minimal cost sequence for the N jobs. This problem is a special case of the traveling salesman problem. We give a solution requiring only O(N²) simple steps. A solution is also provided for the bottleneck form of this traveling salesman problem under special cost assumptions. This solution permits a characterization of those directed graphs of a special class which possess Hamiltonian circuits.
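
In modern notation, the problem asks for a cyclic ordering π of the jobs minimizing the total state-change cost; a sketch of the objective consistent with the abstract:

```latex
\min_{\pi} \; \sum_{i=1}^{N} c\!\left(B_{\pi(i)},\, A_{\pi(i+1)}\right),
\qquad \pi(N+1) \equiv \pi(1),
```

where c(B, A) is the cost of driving the machine state from B to A; the bottleneck variant replaces the sum by a maximum over the same terms.
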
Article
Full-text available
Protected and encrypted data sent electronically is vulnerable to various attacks such as spyware and attempts at breaking and revealing the data. Thus, steganography was introduced to conceal a secret message in an unsuspicious cover medium so that it can be sent safely through a public communication channel. Suspicion is the significant key determinant in the field of steganography; in other words, an efficient steganographic algorithm will not cause any suspicion after the hidden data is embedded. This paper presents an overview of steganography on the GIF image format in order to explore the potential of GIF in information hiding research. A platform, namely StegCure, is proposed, using an amalgamation of three different Least Significant Bit (LSB) insertion algorithms. This paper explains the enhancement of LSB insertion techniques from the most basic and conventional 1-bit method to the LSB colour cycle method. Various kinds of existing steganographic methods are discussed and some inherent problems are highlighted, along with some issues in existing solutions. In comparison with other data hiding applications, StegCure is a more comprehensive security utility, offering user-friendly functionality with an interactive graphical user interface and integrated navigation capabilities. Furthermore, in order to sustain a higher level of security, StegCure implements a Public Key Infrastructure (PKI) mechanism at both the sender and receiver sites. With this feature, StegCure manages to restrict any unauthorized user from retrieving the secret message through trial and error. Besides, we also highlight a few aspects of LSB methods in image steganography. At the end of the paper, the evaluation results of the hybrid method in StegCure are presented. Future work will focus on the assimilation of more diversified methods into the whole gamut of steganography systems and on robustness towards steganalysis.
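
As background for the LSB family the paper builds on, here is a minimal sketch of conventional 1-bit LSB insertion; for simplicity it writes message bits into the blue-channel LSB of an RGB cover, whereas GIF-oriented schemes such as StegCure operate on palette indices. This is not the StegCure algorithm itself.

```java
import java.awt.image.BufferedImage;

// Conventional 1-bit LSB embedding: each message bit replaces the least
// significant bit of a pixel's blue channel, scanning row by row.
public class LsbEmbed {
    public static void embed(BufferedImage cover, byte[] message) {
        int bitIndex = 0, totalBits = message.length * 8;
        outer:
        for (int y = 0; y < cover.getHeight(); y++) {
            for (int x = 0; x < cover.getWidth(); x++) {
                if (bitIndex >= totalBits) break outer;
                // Take message bits most-significant first.
                int bit = (message[bitIndex / 8] >> (7 - bitIndex % 8)) & 1;
                int rgb = cover.getRGB(x, y);
                cover.setRGB(x, y, (rgb & ~1) | bit); // overwrite blue LSB
                bitIndex++;
            }
        }
    }
}
```
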
Article
Full-text available
Quality, like beauty, is in the eye of the stakeholder; but there is more to it than meets the eye! Information systems have a multiplicity of people involved throughout their lifecycle. They are developed and they have a life; they evolve, adapt and die. Hence we use many attributes relevant to all stakeholders, namely users, developers and sponsors. From acceptability to reliability, correctness to usability, expandability to zoticality, the attributes span the range of people involved either as actors or 'sufferers', as users or as financiers. In order to explore the way each stakeholder views information systems quality, we use an attribute alphabet as a vehicle and a prompter. In this paper we identify the main attributes of interest and concentrate on the overlaps and conflicts of interest which inevitably lead to compromises for practical, logistic and financial reasons.
Article
Full-text available
In this paper, a historical overview of significant attempts to overcome the software crisis is presented. In particular, we trace the development of lifecycle models and information systems development methodologies over the last four decades. Finally, we explore the role of measurement and outline current and future work leading to process and product improvement.
Conference Paper
Full-text available
Many new routing and MAC layer protocols have been proposed for wireless sensor networks, tackling the issues raised by resource-constrained, unattended sensor nodes in large-scale deployments. The majority of these protocols considered energy efficiency as the main objective and assumed data traffic with unconstrained delivery requirements. However, the growing interest in applications that demand certain end-to-end performance guarantees and the introduction of imaging and video sensors have posed additional challenges. Transmission of data in such cases requires both energy- and QoS-aware network management in order to ensure efficient usage of the sensor resources and effective access to the gathered measurements. In this paper, we highlight the architectural and operational challenges of handling QoS traffic in sensor networks. We report on progress made to date and outline open research problems.
Conference Paper
Full-text available
Since businesses are increasingly dependent on flexible software applications for changing market environments, the rigid licensing structures for software distribution used with most legacy systems are becoming even less popular. The deployment of Web services in enterprise developments facilitates reusability and interoperation amongst ever-growing integrated applications. This paper describes the development of an autonomic software license management system under the JAX-WS technology of Java Enterprise Edition 5 (JEE5), based on a pattern language of proprietary software licenses.
Conference Paper
Full-text available
LEACH (low energy adaptive clustering hierarchy) (W. Heinzelman et al., 2000) is one of the popular cluster-based structures that has been widely proposed for wireless sensor networks. LEACH uses a TDMA-based MAC protocol and, in order to maintain balanced energy consumption, suggests that each node probabilistically become a cluster head. To reduce energy consumption and to avoid the strict synchronization requirements of TDMA, we first apply a sleep-wakeup based decentralized MAC protocol to LEACH; we then present an analytic framework for obtaining the optimal probability with which a node becomes a cluster head in order to minimize the network's energy consumption. The analysis is first presented for small networks, under the assumption that all cluster heads have an identical expected distance from the sink. The analysis is then extended to large networks to consider the case where the distances of various sections of the network from the sink differ, since nodes further away have to spend more energy to reach the sink. Our simulation results show that using this optimal probability yields much more efficient energy consumption; compared with the current LEACH, our proposal consumes significantly less power.
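
For reference, cluster-head election in the original LEACH has node n elect itself in round r when a uniform random draw falls below the threshold below (Heinzelman et al., 2000); the election probability p is the parameter for which this paper derives an optimal value:

```latex
T(n) =
\begin{cases}
  \dfrac{p}{1 - p\left(r \bmod \frac{1}{p}\right)} & n \in G, \\[4pt]
  0 & \text{otherwise},
\end{cases}
```

where G is the set of nodes that have not served as cluster head in the last 1/p rounds.
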
Article
Full-text available
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render—or synthesize—images of the face under novel poses and illumination conditions. The pose space is then sampled, and for each pose the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
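
The "closest approximated illumination cone" rule can be written compactly; a sketch consistent with the abstract, assuming B_i collects an orthonormal basis of the low-dimensional subspace approximating the cone of individual i:

```latex
d(\mathbf{x}, C_i) = \min_{\mathbf{a}} \bigl\| \mathbf{x} - B_i \mathbf{a} \bigr\|_2
                   = \bigl\| \mathbf{x} - B_i B_i^{\top} \mathbf{x} \bigr\|_2,
\qquad
\hat{\imath} = \arg\min_i \, d(\mathbf{x}, C_i)
```
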
Article
Full-text available
An increasing number of software process and product standards emphasize the need for measurement. ISO 9001, for example, provides guidance for monitoring and controlling product and process characteristics during both production and installation. However, standards provide little guidance as to what exactly users should measure and how to use the results to support the development of high-quality software. Furthermore, measurement cannot be defined independent of context. A metric set judged valid on one project may lead to poor quality or high development costs when applied to another project. When quality is measured, several factors come into play, including product characteristics (such as size), process maturity level of the company developing the software product, its development environment (such as the design methodology and CASE tools used), and the development team's skill and experience.
Article
Full-text available
Establishing hidden communication is an important subject of discussion that has gained substantial importance nowadays with the advancements of the Internet era. One of the methods introduced for accomplishing hidden communication is steganography, an important area of research in recent years involving a number of applications. It is the science of embedding information into a cover, viz., text, video, or image (the payload), without causing statistically significant modification to the cover. In this paper we present an image-based steganographic algorithm named High Capacity Filter Based Steganography (HCFBS), which combines the Least Significant Bit (LSB) method for data hiding with filtering techniques for image enhancement.
Article
Full-text available
Component-Based Systems (CBS) have now become a more generalized approach for application development. The main advantages of CBS are reduced development time, cost and effort, along with several others. These advantages derive mainly from the reuse of already-built software components. In order to realize the reuse of components effectively in CBS, it is necessary to measure the reusability of components. However, due to the black-box nature of components, where the source code is not available, it is difficult to use conventional metrics in Component-Based Development, as these metrics require analysis of source code. The paper discusses reusability concepts for Component-Based Systems and explores several existing metrics for both white-box and black-box components to measure reusability directly or indirectly.
Article
Full-text available
Maintenance of software systems is becoming a major concern for software developers and users. In software projects/products, where software changes/updates are frequently required to improve software quality, maintainability is an important characteristic of the ISO 9126 quality standard to evaluate. Analyzability, changeability, stability, and testability are sub-characteristics of maintainability in ISO 9126. In this paper, changeability is measured by making changes at the code level of an Aspect-Oriented (AO) system. The approach taken to evaluate the changeability of an AO system is to compute the impact of changes made to modules of the system. Some projects in the aspect-oriented programming (AOP) language AspectJ have been taken for testing. The results suggest that the AO system can easily absorb changes and that AO design metrics can be used as indicators of changeability as well as of maintainability. The results also suggest that a code-level change in AO systems does not always cause less change impact to other modules than a code-level change in Object-Oriented (OO) systems.
Article
Full-text available
Steganographic tools and techniques are becoming more potent and widespread. Illegal use of steganography poses serious challenges to law enforcement agencies. Limited work has been carried out on supervised steganalysis using a neural network as a classifier. We present a combined method of identifying the presence of covert information in a carrier image using Fisher's linear discriminant (FLD) function followed by the radial basis function (RBF). Experiments show promising results when compared to existing supervised steganalysis methods, but arranging the retrieved information is still a challenging problem.
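
For context, the standard two-class Fisher linear discriminant used in the first stage projects feature vectors onto the single direction maximizing between-class over within-class scatter; labeling the classes "cover" and "stego" here is an illustrative assumption:

```latex
\mathbf{w}^{*} = \arg\max_{\mathbf{w}}
  \frac{\mathbf{w}^{\top} S_B \, \mathbf{w}}{\mathbf{w}^{\top} S_W \, \mathbf{w}}
  \;\propto\; S_W^{-1}\bigl(\boldsymbol{\mu}_{\text{cover}} - \boldsymbol{\mu}_{\text{stego}}\bigr)
```
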
Article
Full-text available
Human activity is not a well-defined concept in the Ubiquitous Computing discipline, because human activity is very complex and the computing environment is very limited in capturing user activity. However, user activity is an essential ingredient in determining the appropriate response of an Intelligent Environment, in order to provide appropriate services with or without explicit commands. This paper describes an approach to the determination and recognition of user activity based on user location. The characterisation of user activities can be deduced from sensor activities based on the scalable distribution of context location information. This approach does not require users to label their activities.
Article
Full-text available
Secured communication across the world in the information domain is of utmost importance when many languages, several alphabets and various signs (glyphs) are in use. Data in the form of symbolic (text) representation is the basis for the present work. For any cryptographic process the two main parameters used are the algorithm and the key. Existing cryptographic systems divide the text into words and each word into characters, where the character is treated as the basic unit. For each character, the corresponding bit stream is generated. The cryptographic system encrypts the bit stream, and this bit stream represents a set of characters. The transformation of characters is the basis for the security system; while encrypting, our emphasis is on being able to retrieve the characters back from the transformed text. The frequency distribution of characters in the original text is reflected to some extent in the transformed text. The complexity involved in this frequency distribution is the parameter that is addressed. If the underlying language is highly complex, then it may be difficult to determine a particular message; in fact, the structure and complexity of the underlying language is an extremely important factor when trying to assess an attacker's likelihood of success. The unit of information in a given process can be represented in symbolic notation, which may correspond to a language. In the case of Indic scripts, the basic unit need not correspond to a character. The concept of the syllable, which represents a set of characters together with their associated combinatorial units, is the basis for the formation of text. Syllable formation is inherently embedded with predefined rules, which are of major concern for cryptographic systems. The commonality among all Indic scripts is associated with the syllable, while the differences are found in the combinatorial system. A syllable is a set of character codes of varying size, whose complexity in terms of machine- and human-understandable characteristics differs. The frequency distribution of these characters differs from one language to another, and the transformation of these characters can be treated as a basis for any cryptanalysis; the frequency distribution of characters of a script plays an important role in this process. This paper proposes a novel approach to the security of Indic scripts that uses script complexity as a parameter, with a case study on Telugu.
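
Since the frequency distribution of characters is central to the analysis, a minimal sketch of a code-point frequency profiler follows; a faithful implementation for Indic scripts would group by syllable (grapheme cluster) rather than by single code point, which is elided here.

```java
import java.util.Map;
import java.util.TreeMap;

// Counts Unicode code-point frequencies in a text. For Indic scripts the
// paper's unit of interest is the syllable, not the single code point.
public class FrequencyProfile {
    public static Map<Integer, Long> profile(String text) {
        Map<Integer, Long> freq = new TreeMap<>();
        text.codePoints().forEach(cp -> freq.merge(cp, 1L, Long::sum));
        return freq;
    }
}
```
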
Article
Full-text available
This research work proposes a new protocol that modifies the Ad-hoc On-demand Distance Vector (AODV) protocol to improve its performance using an Ant Colony algorithm. The mobility behaviour of nodes in the application is modelled by the random waypoint model, through which the random locations to which a node moves are generated, with an associated speed and pause time specified to control the frequency at which the network topology changes. The Optimized-AODV protocol incorporates path accumulation during the route discovery process in AODV to obtain extra routing information. It is evident from the results that Optimized-AODV improves the performance of AODV under conditions of high load and moderate to high mobility.
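
A minimal sketch of the random waypoint model as described (random destination, random speed, then a pause); the field size, speed bounds and pause time below are assumptions, not values from the paper.

```java
import java.util.Random;

// Random waypoint mobility: a node repeatedly picks a random destination,
// travels to it at a random speed, then pauses. Parameters are assumed.
public class RandomWaypoint {
    static final double FIELD = 1000.0;       // square field side, metres (assumed)
    static final double MIN_SPEED = 1.0;      // m/s (assumed)
    static final double MAX_SPEED = 20.0;     // m/s (assumed)
    static final double PAUSE = 30.0;         // pause time, seconds (assumed)
    static final Random rng = new Random();

    double x, y;                              // current node position

    // Advances the node one leg; returns the travel time for that leg.
    // The caller then waits PAUSE seconds before the next call.
    double nextLeg() {
        double tx = rng.nextDouble() * FIELD; // random destination
        double ty = rng.nextDouble() * FIELD;
        double speed = MIN_SPEED + rng.nextDouble() * (MAX_SPEED - MIN_SPEED);
        double dist = Math.hypot(tx - x, ty - y);
        x = tx; y = ty;                       // node ends the leg at the waypoint
        return dist / speed;
    }
}
```
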
Article
Full-text available
A lot of work is devoted to formalizing and devising architectures for agents' cooperative behaviour, to coordinating the behaviour of individual agents within groups, and to designing agent societies using social laws. However, providing agents with the ability to automatically devise societies so as to form coherent emergent groups that coordinate their behaviour via social laws is highly challenging. Such systems are called self-organised. We are beginning to understand some of the ways in which self-organised agent systems can be devised. In this perspective, this paper provides several examples of multi-agent systems in which self-organisation, based on different mechanisms, is used to solve complex problems. Several criteria for comparing self-organisation between the different applications are provided.
Article
Full-text available
This paper deals with public-key steganography in the presence of a passive warden. The aim is to hide secret messages within cover-documents without making the warden suspicious, and without any preliminary secret key sharing. Whereas a practical attempt has already been made to provide a solution to this problem, it suffers from poor flexibility (since the embedding and decoding steps highly depend on cover-signal statistics) and from little capacity compared to recent data hiding techniques. Using the same framework, this paper explores the use of trellis-coded quantization techniques (TCQ and turbo TCQ) to design a more efficient public-key scheme. Experiments on audio signals show great improvements under Cachin's security criterion.
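
Cachin's security criterion referred to in the experiments is the standard information-theoretic one: a stegosystem is ε-secure against a passive warden when the relative entropy between cover and stego distributions is bounded,

```latex
D\!\left(P_{\text{cover}} \,\middle\|\, P_{\text{stego}}\right)
  = \sum_{x} P_{\text{cover}}(x)\,\log\frac{P_{\text{cover}}(x)}{P_{\text{stego}}(x)}
  \;\le\; \varepsilon,
```

with ε = 0 corresponding to a perfectly secure system.
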
Article
Full-text available
A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.
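
A sketch in modern notation of the kind of truncated eigen-expansion the abstract describes:

```latex
\mathbf{x} \;\approx\; \bar{\mathbf{x}} + \sum_{i=1}^{k} a_i \,\mathbf{u}_i,
\qquad a_i = \mathbf{u}_i^{\top}(\mathbf{x} - \bar{\mathbf{x}}),
```

where x̄ is the ensemble mean picture and the u_i are leading eigenvectors of the ensemble covariance; the approximation error is governed by the discarded eigenvalues.
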
Article
Short Message Service (SMS) is a hugely popular and easily adopted communications technology for mobile devices. Yet due to a lack of understanding of its insecure implementation, it is generally trusted by people. Users conduct business, disclose passwords and receive sensitive notification reports from systems using this communication technology. SMS was an "after-thought" in the Global System for Mobile Communication (GSM) design, which uses SS7 for signalling. SMSs by default are sent in cleartext within the serving GSM's SS7 network, Over The Air (OTA), and potentially over the public Internet, in a predictable format. This allows anyone accessing the signalling system to read and/or modify the SMS content on the fly. In this paper, we focus our attention on alleviating the SMS security vulnerability by securing messages using an approximate one-time pad. A one-time pad, considered to be the only perfectly secure cryptosystem, secures an SMS message for transport over any medium between a mobile device and the serving GSM network. Our approach does not alter the underlying physical GSM architecture.
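
The core one-time pad operation is a byte-wise XOR, sketched below; how the paper approximates pad generation and sharing over GSM is not reproduced here.

```java
// One-time pad: ciphertext = plaintext XOR pad. The pad must be truly
// random, at least as long as the message, and never reused. The same
// function decrypts, since XOR is its own inverse.
public class OneTimePad {
    public static byte[] apply(byte[] data, byte[] pad) {
        if (pad.length < data.length)
            throw new IllegalArgumentException("pad shorter than message");
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++)
            out[i] = (byte) (data[i] ^ pad[i]);
        return out;
    }
}
```
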
Article
Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications [1]. To satisfy these requirements, several solutions have been proposed that exploit the tradeoffs among energy, accuracy, and latency. Many solutions also use hierarchical (tiered) architectures, where sensor nodes are clustered according to application-specific parameters. Several protocols and applications can exploit such clustering techniques to increase scalability and reduce delays. Examples include routing protocols, and applications requiring efficient data aggregation (e.g., computing the maximum detected radiation around an object). Clustering protocols have been previously investigated as either stand-alone protocols for ad-hoc networks, e.g., [2], [3], [4], [5], or in the context of routing protocols, e.g., [6], [7], [8], [9], [10]. In this work, we present a stand-alone distributed clustering approach that considers a hybrid of energy and communication cost. Based on this approach, we present a protocol, HEED (Hybrid Energy-Efficient Distributed clustering) [11], which has five primary goals: (i) operating in a completely distributed manner, (ii) prolonging network lifetime by distributing energy consumption, (iii) terminating the clustering process within a constant number of iterations/steps, (iv) minimizing control overhead (to be linear in the number of nodes), and (v) producing well-distributed cluster heads and compact clusters. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. HEED assumes that all nodes are equally significant and energy consumption is not necessarily uniform among nodes. To the best of our knowledge, no previously proposed clustering protocol has addressed these goals in an integrated manner. The HEED clustering operation is invoked at each node in order to decide if the node will elect to become a cluster head or join a cluster. A cluster head is responsible for two important tasks: (1) intra-cluster coordination, i.e., coordinating among nodes within its cluster, and (2) inter-cluster communication, i.e., communicating with other cluster heads and/or external observers. The cluster range or radius is determined by the transmission power level used for intra-cluster announcements and during clustering. We refer to this as the cluster power level. The cluster power level should be set to one of the lower power levels of a node, to increase spatial reuse, and reserve higher power levels for inter-cluster communication. Selecting cluster heads is based on two parameters: a primary parameter and a secondary one. HEED uses the primary parameter to probabilistically select an initial set of cluster heads, and the secondary parameter to "break ties." A tie in this context means that a node falls within the "cluster range" of more than one cluster head.
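
As a sketch of how the primary (residual energy) parameter enters the election, HEED initializes each node's cluster-head probability roughly as

```latex
CH_{\text{prob}} = C_{\text{prob}} \times \frac{E_{\text{residual}}}{E_{\max}},
```

where C_prob is an initial cluster-head fraction and E_max a reference maximum energy; the value is bounded below (so clustering terminates in a constant number of iterations) and doubles each iteration until the node elects itself or joins a neighbouring cluster head.
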
Article
Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach for representing video information, which relies on an object-based representation, has been adopted. Therefore, object-based watermarking schemes are needed for copyright protection. This paper presents a novel object-based watermarking solution for MPEG-4 video authentication using the shape-adaptive discrete wavelet transform (SA-DWT). In order to make the watermark robust and transparent, it is embedded in the average of wavelet blocks using a visual model based on the human visual system. The n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks, such as lossy compression (e.g., MPEG-1, MPEG-2, MPEG-4, H.264).
Article
In this paper, we study how specific design principles and elements of steganographic schemes for the JPEG format influence their security. Our goal is to shed some light on how the choice of the embedding operation and domain, adaptive selection channels, and syndrome coding influence statistical detectability. In the experimental part of this paper, the detectability is evaluated using a state-of-the-art blind steganalyzer, and the results are contrasted with several ad hoc detectability measures, such as the embedding distortion. We also report the first results of our steganalysis of the recently proposed YASS algorithm and compare its security to that of other steganographic methods for the JPEG format.
Article
This paper describes a new heuristic algorithm for the bottleneck traveling salesman problem (BTSP), which exploits the formulation of BTSP as a traveling salesman problem (TSP). Computational tests show that our algorithm is quite effective. It found optimal solutions for many problems from the standard traveling salesman problem library (TSPLIB) problems. We also consider BTSP with an additional constraint and show that our BTSP heuristic can be modified to obtain a heuristic to solve this problem. Relationships between symmetric and asymmetric versions of BTSP are also discussed.
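
For context, the two objectives differ only in how costs along a tour τ are aggregated:

```latex
\text{TSP:}\;\; \min_{\tau} \sum_{(i,j)\in\tau} c_{ij}
\qquad\qquad
\text{BTSP:}\;\; \min_{\tau} \max_{(i,j)\in\tau} c_{ij}
```

A standard observation exploited by such approaches: a tour with bottleneck value at most t exists iff the subgraph keeping only edges with c_ij ≤ t is Hamiltonian, so candidate thresholds can be searched over the O(n²) distinct edge weights.
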
Conference Paper
Software licensing schemes are controls put in software to grant or deny the use of the software. They play an important part in the distribution and control of software. This paper reviews some of the technologies behind software licensing schemes and presents a classification and a case study.
Conference Paper
Wireless sensor networks have been widely studied and usefully employed in many applications such as medical monitoring, automotive safety and space applications. Typically, sensor nodes have several limitations such as limited battery life, low computational capability, short radio transmission range and small memory space. However, the most severe constraint of the nodes is their limited energy resource, because they cease to function when their battery has been depleted. To reduce energy usage in wireless sensor networks, many cluster-based routing protocols have been proposed. Among those proposed, LEACH (low energy adaptive clustering hierarchy) is a well-known cluster-based sensor network architecture which aims to distribute energy consumption evenly to every node in a given network. This clustering technique requires a predefined number of clusters and was developed under the assumption that the sensor nodes are uniformly distributed throughout the network. In this paper, we propose a hybrid clustering and routing architecture for wireless sensor networks. There are three main parts in our proposed architecture: a modified subtractive clustering technique, an energy-aware cluster head selection method and a cost-based routing algorithm. These are all centralized techniques and are expected to be executed at the base station.
Conference Paper
Protecting data transmitted over the Internet has become a critical issue driven by the progress in data digitalization and communications networking over the past decade. The content being transmitted can be in the form of images, text and voice. To ensure that transmitted data are secure and cannot be tampered with or noticed by malicious attackers, several approaches have been proposed. Steganography is one general approach among them. The hiding capacity and image quality of stego-images are two major measures with which to evaluate an image hiding scheme. To enhance the hiding capacity of Fridrich et al.'s scheme, an improved image hiding scheme for grayscale images based on wet paper coding is proposed in this paper. The significant difference between Fridrich et al.'s scheme and ours is that we shuffle all the pixels in the host image by adopting toral automorphism, then segment all pixels before using our proposed wet pixel hiding strategies to hide the data. Experimental results show that our proposed scheme embeds a larger-sized secret image while maintaining acceptable image quality of the stego-image better than Fridrich et al.'s scheme does. Moreover, the computational cost of our proposed scheme is significantly less than that of Fridrich et al.'s scheme because inverse matrix and multiplication operations are not required. Therefore, our proposed scheme is suitable for real time applications.
Conference Paper
This paper presents a novel approach to handle the challenges of face recognition. In this work thermal face images are considered, which minimizes the effect of illumination changes and of occlusion due to moustaches, beards, adornments etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which is capable of handling complications introduced by scaling and rotation. Polar images are projected into eigenspace and finally classified using a multi-layer perceptron. In the experiments we have used benchmark thermal face images from the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves the verification and identification performance, and the success rate is 97.05%.
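
The polar registration step maps Cartesian pixel coordinates to (radius, angle) about a fixed centre, which turns image rotation into a cyclic shift and scaling into a radial shift; a sketch, with (x_c, y_c) the assumed registration centre:

```latex
r = \sqrt{(x - x_c)^2 + (y - y_c)^2},
\qquad
\theta = \operatorname{atan2}(y - y_c,\; x - x_c)
```
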
Conference Paper
In cellular mobile networks, efficient handoff performance entails minimizing unnecessary handoffs without risking desired handoffs, which could lead to increased dropped calls. An adaptive relative-threshold (hysteresis) algorithm has been proposed by Senarath and Everitt (see Proc. VTC'93, 1993) to effectively identify and prevent unnecessary handoffs. However, results here show that this adaptive algorithm can cause undesired cell dragging. We propose an adaptive, absolute-threshold algorithm to be used in combination with the adaptive relative-threshold algorithm to mitigate cell dragging. The performance of the algorithm is investigated for an IS-136 TDMA system using a powerful network-level simulator. Results show that, depending on the propagation environment, the proposed algorithm can reduce received signal strength (RSS) based handoff triggers by up to 75% while maintaining a lower call drop rate and less cell dragging than fixed-threshold algorithms.
Article
A set of fundamental principles can act as an enabler in the establishment of a discipline; however, software engineering still lacks a set of universally recognized fundamental principles. This article presents a progress report on an attempt to identify and develop a consensus on a set of candidate fundamental principles. A fundamental principle is less specific and more enduring than methodologies and techniques. It should be phrased to withstand the test of time. It should not contradict a more general engineering principle and should have some correspondence with “best practice”. It should be precise enough to be capable of support and contradiction and should not conceal a tradeoff. It should also relate to one or more computer science or engineering concepts. The proposed candidate set consists of fundamental principles which were identified through two workshops, two Delphi studies and a web-based survey.
Article
Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Article
The past few years have witnessed increased interest in the potential use of wireless sensor networks (WSNs) in applications such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. To support scalability, nodes are often grouped into disjoint and mostly non-overlapping clusters. In this paper, we present a taxonomy and general classification of published clustering schemes. We survey different clustering algorithms for WSNs, highlighting their objectives, features, complexity, etc. We also compare these clustering algorithms based on metrics such as convergence rate, cluster stability, cluster overlapping, location-awareness and support for node mobility.
Conference Paper
In many e-commerce (electronic commerce) situations, the owner of a digital object wants to enforce policies on the object after the object is in the customer's hands. The object can be thought of as being software, because data is often protected by forcing access to it to take place through a particular authorized software (e.g., a "reader" for an encrypted media file, in which case a license to view the movie is, in some sense, a "software license"). One of the ways that were proposed for such policy enforcement is the use of smart cards. We describe an enhanced solution to software license management based on tamper-resistant smart cards. Our public-key protocols for binding software licenses to smart cards improve on previous schemes in that they support flexible and partial transfers of licenses between cards. The license is verified by checking the presence of the associated card. The user can therefore have several software licenses all of which are bound to one card, to avoid juggling several cards in and out of the card reader.
Conference Paper
Consistent checkpointing provides transparent fault tolerance for long-running distributed applications. Performance measurements of an implementation of consistent checkpointing are described. The measurements show that consistent checkpointing performs remarkably well. Eight computation-intensive distributed applications were executed on a network of 16 diskless Sun-3/60 workstations, and the performance without checkpointing was compared to the performance with consistent checkpoints taken at two-minute intervals. For six of the eight applications, the running time increased by less than 1% as a result of the checkpointing. The highest overhead measured was 5.8%. Incremental checkpointing and copy-on-write checkpointing were the most effective techniques in lowering the running time overhead. It is argued that these measurements show that consistent checkpointing is an efficient way to provide fault tolerance for long-running distributed applications.
Conference Paper
Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions.
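
Sequential Monte Carlo localization approximates the standard Bayes-filter recursion with a weighted sample set; a sketch of the underlying recursion, with l_t the node location and o_t the seed announcements heard at time t:

```latex
p(l_t \mid o_{1:t}) \;\propto\;
p(o_t \mid l_t) \int p(l_t \mid l_{t-1})\, p(l_{t-1} \mid o_{1:t-1})\, dl_{t-1}
```

The prediction step samples new locations reachable within the node's maximum speed, and the filtering step discards samples inconsistent with the currently heard seeds.
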
Article
Interaction in component-based systems (CBS) happens when a component provides an interface and other components use it, and also when a component submits an event and another component receives it. Interactions promote dependencies. Higher dependency leads to a complex system, which results in poor understanding and a higher maintenance cost. Usually, dependency is represented by an adjacency matrix as used in graph theory. However, this representation can only check for the presence of dependency between components and does not consider the type of interactions between them. The interaction type can contribute significantly to the complexity of the system. This paper proposes a link-list based dependency representation and implements it using a HashMap in Java. This representation can store the dependency along with other information, such as the provided and required interfaces of components and their types. This information can be used to analyze several interaction- and dependency-related issues. This paper also presents the results of an experiment with the proposed approach and measures the interaction densities and dependency levels of individual components and of the system. The results show that the proposed metrics can also be used to identify the most critical and isolated components in the system, which can lead to better understanding and easier system maintenance.
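
The abstract names the data structure (a HashMap in Java holding per-component dependency lists) without showing it; a minimal sketch under that description follows, with all names and fields illustrative assumptions rather than the paper's actual code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Link-list style dependency store: each component maps to the list of its
// outgoing dependencies, annotated with the interaction type.
public class DependencyStore {
    enum InteractionType { PROVIDED_INTERFACE, EVENT }

    record Dependency(String target, InteractionType type, String interfaceName) {}

    private final Map<String, List<Dependency>> deps = new HashMap<>();

    void addDependency(String source, String target,
                       InteractionType type, String interfaceName) {
        deps.computeIfAbsent(source, k -> new ArrayList<>())
            .add(new Dependency(target, type, interfaceName));
    }

    // Outgoing-interaction density of one component: its dependency count.
    int outDegree(String component) {
        return deps.getOrDefault(component, List.of()).size();
    }

    // A component with no incoming or outgoing interactions is "isolated".
    boolean isIsolated(String component) {
        if (outDegree(component) > 0) return false;
        return deps.values().stream()
                   .flatMap(List::stream)
                   .noneMatch(d -> d.target().equals(component));
    }
}
```
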
Article
Steganography aims to hide secret data in an innocuous cover-medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods, used to embed the secret data into the least significant bits of the pixel values in a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which is applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and the experiments show that the differences are up to almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
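
For reference, the reflected (binary-reflected) Gray code conversions involved are short bit tricks; as the abstract notes, the G1 (one-bit) case coincides with plain LSB substitution.

```java
// Reflected Gray code conversions for 32-bit integers.
public class GrayCode {
    // Binary to Gray: each Gray bit is the XOR of adjacent binary bits.
    static int toGray(int b) {
        return b ^ (b >>> 1);
    }

    // Gray to binary: prefix-XOR computed in O(log n) shift steps.
    static int fromGray(int g) {
        int b = g;
        for (int shift = 1; shift < 32; shift <<= 1) b ^= b >>> shift;
        return b;
    }
}
```
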
Article
Consider the traveling salesman problem where the distance between two cities A and B is an integrable function of the y-coordinate of A and the x-coordinate of B. This problem finds important applications in operations management and combinatorial optimization. Gilmore and Gomory (Oper. Res. 12 (1964) 655) gave a polynomial time algorithm for this problem. In the bottleneck variant of this problem (BP), we seek a tour that minimizes the maximum distance between any two consecutive cities. For BP, Gilmore and Gomory state that they “do not yet know how to solve the problem for general integrable functions”. We show that BP is strongly NP-complete. Also, we use this reduction to provide a method for proving NP-completeness of other combinatorial problems.
Article
In this work we investigate a novel approach to handle the challenges of face recognition, which include rotation, scale, occlusion, illumination etc. Here, we have used thermal face images, as these are capable of minimizing the effect of illumination changes and of occlusion due to moustaches, beards, adornments etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which is capable of handling complications introduced by scaling and rotation. Line features are extracted from the thermal polar images and feature vectors are constructed using these lines. The feature vectors thus obtained pass through principal component analysis (PCA) for dimensionality reduction. Finally, the images projected into eigenspace are classified using a multi-layer perceptron. In the experiments we have used the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves the verification and identification performance, and the success rate is 99.25%.
Article
Due to the extremely high demand for mobile phones among people, over the years there has been a great demand for the support of various applications and security services. 2G and 3G provide two levels of security: encryption and authentication. This paper presents a performance analysis and comparison between the algorithms in terms of time complexity. The parameters considered for comparison are processing power and input size. Security features may have an adverse effect on the quality of services offered to the end users and on the system capacity. The computational cost overhead that the security protocols and algorithms impose on lightweight end-user devices is analyzed. The results of the analysis reveal the effect of the authentication and encryption algorithms of 2G and 3G on system performance, defined in terms of throughput, which will further help in quantifying the overhead caused due to security.
Article
Manufacturing databases store large amounts of interrelated data. Designers access specific pieces or groups of information in the data. Each designer accessing an entity tries to modify the design parameters to meet the requirements of different customers, and sister concerns of the same group of companies will modify the data as per design requirements. When information is updated with new modifications by different groups of designers, in what order should modification of the data be allowed? If the information is accessed simultaneously, how is the consistency of the data maintained? If a designer voluntarily corrupts the data, how can that designer be held responsible for the corruption? If the transaction process itself corrupts the data, how is consistency maintained? Deliberate deletion of information can be detected with extra security for the data. However, when the transaction protocol is not implemented properly, corruption of the data takes the form of misleading information, showing a smaller or larger numerical value than it should after an update. In this research work, we propose a neural network method for managing the locks assigned to objects, with the corresponding transactions stored in a data structure. The main purpose of using the ANN is that it requires less memory for storing the lock information assigned to objects. We have attempted to use the backpropagation algorithm for storing lock information when multiple users are working on a computer integrated manufacturing (CIM) database.
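
For reference, the weight update at the core of the backpropagation algorithm is plain gradient descent on the squared error; the paper's specific network layout for encoding lock information is not reproduced here:

```latex
\Delta w_{ij} = -\eta \,\frac{\partial E}{\partial w_{ij}},
\qquad
E = \frac{1}{2} \sum_{k} \left(t_k - o_k\right)^2,
```

with learning rate η, target outputs t_k and actual outputs o_k.
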