International Journal of Computers and Applications

Published by ACTA Press
Online ISSN: 1925-7074
Print ISSN: 1206-212X
Distribution of Operating Modes of Sensor Nodes
Clustering is an effective topology control approach in sensor networks. This paper proposes a distributed and adaptive clustering architecture for dynamic sensor networks. The architecture combines energy-efficient clustering with adaptive node activity to achieve good performance in terms of system lifetime and network coverage quality. It yields a uniform cluster-head distribution across the network in addition to desirable network coverage. Furthermore, the paper presents an analytical approach that discloses the relationship between network density and coverage quality. Experiments were conducted to validate the proposed architecture. The analytical and simulation results demonstrate that it prolongs network lifetime while preserving high coverage quality.
Several medium-to-large companies are currently in the process of using external hosting to deploy their Internet applications. External hosting by application service providers (ASPs) such as IBM provides a low-cost, secure, and reliable way for companies to deploy e-commerce enterprise applications without the need to purchase costly infrastructure. However, for applications that run on the servers of external hosts, performance may suffer if the data and the application reside at two different locations. In this paper, we provide a technique for improving the performance of such applications. This technique has been implemented in large industrial software systems. We provide the software architecture and preliminary performance results.
Spectral clustering has been used successfully in computer vision in recent years. It refers to algorithms in which a global optimum is found in a relaxed continuous domain obtained by eigendecomposition, after which the multi-class clustering problem is solved by a traditional clustering algorithm such as k-means. In this paper, we propose a novel spectral clustering algorithm based on particle swarm optimization (PSO). The major contribution of this work is to combine the PSO technique with spectral clustering. In the multi-class clustering stage, PSO is applied in the feature space to cluster the new data points, each of which is a characterization of an original data point. Experimental studies of the PSO-based spectral clustering algorithm demonstrate that the proposed algorithm provides global convergence, steady performance, and better accuracy.
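As a rough illustration of the pipeline this abstract describes, the sketch below builds the spectral embedding by eigendecomposition of the normalized graph Laplacian and then clusters the embedded rows. It is not the paper's code: plain k-means stands in for the paper's PSO stage, the affinity matrix `W` is a toy example, and the seeding scheme is chosen only to keep the example deterministic.

```python
import numpy as np

def spectral_embed(W, k):
    """Rows of the k smallest eigenvectors of the normalized Laplacian,
    row-normalized onto the unit sphere."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)      # eigh returns ascending eigenvalues
    U = vecs[:, :k]
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def kmeans(X, k, iters=50):
    # deterministic seeding for this toy example: evenly spaced rows
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy affinity: points 0-2 tightly connected, points 3-5 tightly connected,
# with one weak bridge so the graph stays connected.
W = np.array([[0, 1, 1, .01, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [.01, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
labels = kmeans(spectral_embed(W, 2), 2)
```

The second-smallest eigenvector of the normalized Laplacian carries the cluster split; in the paper, the PSO swarm would replace k-means in partitioning the embedded rows.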
Segment retransmissions are an essential tool in assuring reliable end-to-end communication in the Internet. Their crucial role in TCP design and operation has been studied extensively, in particular with respect to identifying non-conformant, buggy, or underperforming behaviour. However, TCP segment retransmissions are often overlooked when examining and analyzing large traffic traces. In fact, some have come to believe that retransmissions are a rare oddity, characteristically associated with faulty network paths, which, typically, tend to disappear as networking technology advances and link capacities grow. We find that this may be far from the reality experienced by TCP flows. We quantify aggregate TCP segment retransmission rates using publicly available network traces from six passive monitoring points attached to the egress gateways at large sites. In virtually half of the traces examined we observed aggregate TCP retransmission rates exceeding 1%, and of these, about half again had retransmission rates exceeding 2%. Even for sites with low utilization and high capacity gateway links, retransmission rates of 1%, and sometimes higher, were not uncommon. Our results complement, extend and bring up to date partial and incomplete results in previous work, and show that TCP retransmissions continue to constitute a non-negligible percentage of the overall traffic, despite significant advances across the board in telecommunications technologies and network protocols. The results presented are pertinent to end-to-end protocol designers and evaluators as they provide a range of "realistic" scenarios under which, and a "marker" against which, simulation studies can be configured and calibrated, and future protocols evaluated.
Microarrays have made it possible to simultaneously monitor the expression profiles of thousands of genes under various experimental conditions. Identification of co-expressed genes and coherent patterns is the central goal in microarray or gene expression data analysis and is an important task in bioinformatics research. In this paper, a K-Means algorithm hybridised with a Cluster Centre Initialization Algorithm (CCIA) is proposed for clustering gene expression data. The proposed algorithm overcomes the drawback of having to specify the number of clusters in the K-Means method. Experimental analysis shows that the proposed method performs well on gene expression data when compared with traditional K-Means clustering, as measured by the Silhouette Coefficient cluster measure.
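The abstract does not spell out how CCIA picks its initial centres, so the sketch below shows one common deterministic seeding heuristic (maximin, or farthest-point, seeding) merely to illustrate the idea of replacing K-Means' random initialization; it should not be read as the paper's CCIA. The data points are invented.

```python
import numpy as np

def maximin_seeds(X, k):
    """Pick k seeds deterministically: start from the point nearest the grand
    mean, then repeatedly take the point farthest from all seeds so far."""
    seeds = [int(np.argmin(((X - X.mean(axis=0)) ** 2).sum(axis=1)))]
    for _ in range(k - 1):
        # distance of every point to its nearest already-chosen seed
        d = np.min(((X[:, None] - X[seeds]) ** 2).sum(-1), axis=1)
        seeds.append(int(np.argmax(d)))
    return X[seeds]

# Toy "expression profiles": two tight groups plus one distant point.
X = np.array([[0.0, 0.0], [0.2, 0.0], [10.0, 10.0], [10.2, 10.0], [0.0, 10.0]])
C = maximin_seeds(X, 3)
```

Seeding like this spreads the initial centres across the data, which is the property any cluster-centre initialization scheme is after.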
Optimization algorithms are commonly based on meta-heuristic approaches. In recent years, several hybrid optimization methods have been developed to find better solutions. In the proposed work, a nature-inspired meta-heuristic algorithm is applied together with back-propagation to train a feed-forward neural network. The firefly algorithm, a nature-inspired meta-heuristic, is incorporated into the back-propagation algorithm to achieve a fast and improved convergence rate when training feed-forward neural networks. The proposed technique was tested on several standard data sets. It was found that the proposed method produces improved convergence within very few iterations. Its performance was also analyzed and compared with genetic-algorithm-based back-propagation. The proposed method was observed to converge in less time, providing an improved convergence rate with a minimal feed-forward neural network design.
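For readers unfamiliar with the firefly algorithm itself, here is a minimal sketch of its core update rule: each firefly moves toward every brighter (lower-cost) firefly with an attractiveness that decays with distance, plus a small annealed random step. It minimizes a plain sphere function rather than a network's training loss, and all control parameters are illustrative, not taken from the paper.

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=60, alpha=0.2, beta0=1.0,
                     gamma=0.05, seed=1):
    """Minimal firefly algorithm. Returns (best_x, best_cost, initial_best);
    the running minimum never increases, since the brightest firefly of any
    sweep stays put until something brighter appears."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-4.0, 4.0, size=(n, dim))
    cost = np.array([f(x) for x in X])
    f_init = float(cost.min())
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    cost[i] = f(X[i])
        alpha *= 0.97                            # anneal the random step
    best = int(np.argmin(cost))
    return X[best], float(cost[best]), f_init

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best, f_init = firefly_minimize(sphere, dim=2)
```

In the hybrid the paper describes, the vector being optimized would be the network's weight vector and `f` its training error, with back-propagation refining the swarm's result.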
This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera-view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality, and orientation, in order to compensate for variations in face instances caused by illumination, pose, and facial expression changes. Convolving the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and the canonical covariate are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused together using a weighted mean fusion rule. Finally, support vector machines (SVM) are trained with the augmented fused set of features and perform the recognition task. The system has been evaluated on the UMIST face database, which consists of multi-view faces. The experimental results demonstrate the efficiency and robustness of the proposed system for multi-view face images, with high recognition rates. A complexity analysis of the proposed system is also presented at the end of the experimental results.
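Of the pipeline stages named above, the PCA projection step is easy to show in isolation. The sketch below reduces feature vectors to a low-dimensional subspace via SVD of the centered data (equivalent to eigendecomposition of the sample covariance); the Gabor filtering, canonical covariate, fusion, and SVM stages are omitted, and the synthetic data merely stands in for Gabor face representations.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the k leading principal components
    (eigenvectors of the sample covariance), via SVD of the centered data.
    Returns (projections, components, mean)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = Vt[:k]                       # k leading right singular vectors
    return (X - mu) @ comps.T, comps, mu

rng = np.random.default_rng(0)
# 40 synthetic "feature vectors" that really live on a 2-D plane inside 6-D space
basis = rng.normal(size=(2, 6))
X = rng.normal(size=(40, 2)) @ basis + 5.0
Z, comps, mu = pca_reduce(X, 2)          # low-dimensional "eigenface" codes
X_rec = Z @ comps + mu                   # reconstruction from the subspace
```

Because the synthetic data is exactly rank-2 after centering, two components reconstruct it perfectly; on real Gabor face vectors, `k` trades reconstruction error against dimensionality.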
Steganography is the science of hiding digital information in such a way that no one can suspect its existence. Unlike cryptography, which may arouse suspicion, steganography is a stealthy method that enables data communication in total secrecy. Steganography has many requirements, foremost among them irrecoverability, which refers to how hard it is for anyone apart from the original communicating parties to detect and recover the hidden data from the secret communication. A good strategy to guarantee irrecoverability is to cover the secret data not with a trivial method based on a predictable algorithm, but with a specific random pattern based on a mathematical algorithm. This paper proposes an image steganography technique based on the Canny edge detection algorithm. It is designed to hide secret data in a digital image within the pixels that make up the boundaries of objects detected in the image. More specifically, bits of the secret data replace the three LSBs of every color channel of the pixels identified by the Canny edge detection algorithm as part of the edges in the carrier image. In addition, the algorithm is parameterized by three parameters: the size of the Gaussian filter, a low threshold value, and a high threshold value. These parameters can yield different outputs for the same input image and secret data. As a result, discovering the inner workings of the algorithm would be considerably harder, misguiding steganalysts away from the exact location of the covert data. Experiments were carried out with a simulation tool codenamed GhostBit, built to cover and uncover secret data using the proposed algorithm. As future work, we plan to examine how other image processing techniques, such as brightness and contrast adjustment, can be exploited in steganography, with the purpose of giving the communicating parties more options for manipulating their secret communication.
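The embedding rule the abstract describes (three LSBs per color channel, edge pixels only) can be sketched compactly. The sketch below assumes the Canny edge mask is already given as a boolean list; the Canny detector itself, its three parameters, and the GhostBit tool are not reproduced here, and the pixel data and bit string are invented.

```python
def embed(pixels, mask, bits):
    """Embed a bit string into the 3 LSBs of each channel of masked (edge)
    pixels. `pixels` is a list of (r, g, b) tuples; `mask` flags edge pixels."""
    out, k = [], 0
    for px, is_edge in zip(pixels, mask):
        if not is_edge or k >= len(bits):
            out.append(px)               # non-edge or payload exhausted
            continue
        new = []
        for ch in px:
            chunk = bits[k:k + 3].ljust(3, '0')       # pad the final chunk
            new.append((ch & ~0b111) | int(chunk, 2)) # overwrite 3 LSBs
            k += 3
        out.append(tuple(new))
    return out

def extract(pixels, mask, nbits):
    """Read back the 3 LSBs of each channel of masked pixels, in order."""
    got = ''
    for px, is_edge in zip(pixels, mask):
        if is_edge:
            for ch in px:
                got += format(ch & 0b111, '03b')
    return got[:nbits]

pixels = [(120, 33, 200), (10, 10, 10), (255, 0, 128), (77, 77, 77)]
mask = [True, False, True, True]         # stand-in for a Canny edge mask
secret = '101100111010'
stego = embed(pixels, mask, secret)
```

Since both parties run the same edge detector with the same parameters on the carrier, the receiver recovers the identical mask and thus the embedding order.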
In this paper we develop a scientific approach to controlling inter-country conflict. The system makes use of a neural network and a feedback control approach. It was found that by simultaneously controlling the four controllable inputs (Democracy, Dependency, Allies, and Capability), all the predicted dispute outcomes could be avoided. Furthermore, it was observed that controlling a single input, Dependency or Capability, also avoids all the predicted conflicts. When the influence of each input variable on conflict is assessed, Dependency, Capability, and Democracy emerge as the key variables that influence conflict.
With the advent of the Internet, search engines have begun sprouting like mushrooms after a rainfall. Only in recent years have developers become more innovative and come up with guided searching facilities online. The goal of these applications is to help ease and guide the searching efforts of a novice web user toward their desired objectives. A number of implementations of such services are emerging. This paper proposes a guided meta-search engine, called "Guided Google", which serves as an interface to the actual search engine, using the Google Web Services.
Process Control Interface 
Data capture interface generated from an existing procedure called by an activity
The Web is widely considered a reliable and promising approach for designing new information systems in various application areas, such as geographical information systems. A flexible and interactive control of user actions and interactions must be implemented to ensure system consistency and make its evolution easier. The Web environment does not provide such control. Combining work from the process domain with work from the multimedia community, we propose a Temporal Process Model that allows the specification of activities and the expression of their organization via temporal scenarios. Our approach leads to a new environment in which processes can be executed safely and soundly. The paper presents the process model and the object-oriented framework we developed, and emphasizes the application of our approach to an existing multimedia information system dedicated to natural hazards.
Though the separation of a model from its visual representation (view) implies well-known benefits, available Java libraries do not sufficiently support this concept. The paper presents a straightforward way to smoothly enhance Java libraries in this direction, independently of the particular graphical user interface (GUI) library. The lean framework JGadgets, which was inspired by the Oberon Gadgets system [1], allows developers to focus on model programming only. This significantly reduces development costs, in particular in the realm of quite simple, form-based GUIs, which are commonplace in commercial e-business systems.
A network of parallel workstations promises cost-effective parallel computing. This paper presents the HyFi (Hybrid Filaments) package, which can be used to create architecture-independent parallel programs, that is, programs that are portable and efficient across different parallel machines. HyFi integrates Shared Filaments (SF), which provides parallelism on shared-memory multiprocessors, and Distributed Filaments (DF), which extracts parallelism from networks of uniprocessors. This enables parallelism on any architecture, including homogeneous networks of multiprocessors. HyFi uses fine-grain parallelism and implicit shared-variable communication to provide a uniform programming model. HyFi adopts the same basic execution model as SF and DF; this paper discusses the modifications necessary to develop the hybrid system. In particular, HyFi modifies the signal-thread model as well as the software distributed shared memory of DF. It also unifies the SF and DF reduction operations as well as the dynamic load-balancing mechanism of fork-join filaments. Application programs written using the HyFi API can run unchanged on any architecture. Performance is encouraging on fork/join applications, where excellent speedup is achieved. Also, the fine-grain model of HyFi allows up to a 14.5% improvement due to overlap of communication and computation. Unfortunately, we find that iterative applications do not speed up well due to the inability of the Pentium Xeon architecture to efficiently support concurrent memory accesses. Keywords: fine-grain parallelism, architecture independence, distributed shared memory.
Energy conservation is a critical issue in wireless sensor networks for node and network life, as the nodes are powered by batteries. One way of doing so is to use only local information available to the nodes in the network. In this paper, we evaluate a number of power-aware routing protocols based on local information only. The simulation shows that basing the routing decision on the remaining power of neighboring nodes is not enough by itself. Instead, using the directional value and the sum of power remaining at the next neighbors gives the routing protocol a broader perspective about the condition of the network from a local point of view and enhances the decision process.
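The routing decision described above, combining the directional value with the sum of power remaining at the candidate's own neighbors rather than remaining power alone, can be sketched as a simple scoring function. The weights, field names, and topology below are illustrative assumptions, not values from the paper.

```python
import math

def choose_next_hop(node, sink, neighbors, w_dir=0.5, w_pow=0.3, w_sum=0.2):
    """Score each neighbor by (a) directional progress toward the sink,
    (b) its own remaining power, and (c) the total power remaining at its
    neighbors. All terms are normalized to [0, 1] before the weighted sum."""
    d_here = math.dist(node, sink)
    def progress(n):
        return max(0.0, (d_here - math.dist(n['pos'], sink)) / d_here)
    max_pow = max(n['power'] for n in neighbors)
    max_sum = max(n['nbr_power_sum'] for n in neighbors)
    def score(n):
        return (w_dir * progress(n)
                + w_pow * n['power'] / max_pow
                + w_sum * n['nbr_power_sum'] / max_sum)
    return max(neighbors, key=score)

node, sink = (0.0, 0.0), (10.0, 0.0)
neighbors = [
    {'id': 'a', 'pos': (3.0, 0.0), 'power': 0.9, 'nbr_power_sum': 2.0},
    {'id': 'b', 'pos': (3.0, 0.0), 'power': 0.2, 'nbr_power_sum': 0.5},
    {'id': 'c', 'pos': (-2.0, 0.0), 'power': 1.0, 'nbr_power_sum': 3.0},
]
best = choose_next_hop(node, sink, neighbors)
```

Note that neighbor `c` has the most power but moves away from the sink, and `b` makes progress but is nearly drained; only `a` scores well on all three local criteria.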
Computer-based, networked learning environments play a significant role in the improvement of the learning process. Electronic communication and collaboration services provide tutors and trainees with continual, close, and efficient cooperation. An increase in the use of the Internet as a repository of resources for learning, and also as a means for delivering specially prepared teaching materials, is a particularly significant innovation in the field of education. Educational applications are increasingly based on the World Wide Web, combining simplified access to the application with integration into a Web-based learning environment. This work presents a flexible communication and collaboration environment, developed within the framework of the ODYSSEAS project, that may be used by educators. The basic services environment presented in this paper is built upon the well-known and popular standards HTTP, SMTP, and POP3. It is accessible to the potential user through a web browser and a connection to a user authentication server that handles the user's private information, with minimal installation cost.
In this paper, new ideas and algorithms for adapting the workflow of newscast production are presented. The goal is to enable TV news editors to take advantage of the added facilities offered by digital video techniques. Algorithms for the analysis of MPEG-compressed videos are presented. These enable a news editor to extract sequences of key frames from newsfeeds. Newsfeeds consist of assembled news clips transmitted by news agencies. Key frames are extracted in order to create a quick overview of the available material. In addition, frames containing textual descriptions of the news content of individual clips are also extracted. These images are of low quality, and standard OCR methods prove unsuitable for text recognition. Using standard image processing techniques in combination with a new OCR method developed especially for this purpose, key words are extracted from these frames. The key words can be used by news editors on the one hand to search for particular news items in a newsfeed, and on the other to search for related information on the Internet. The final output of the system thus supplies news editors with an overview of newsfeed content combined with additional information gathered from news agencies via the Internet. Key words: Newsfeeds, Analysis, MPEG, OCR, Internet
Example images and their binary masks used to train the system for skin detection. Portions of the images containing skin are manually marked in the binary masks.
Some results of skin detection. White areas in the images show regions where skin is detected.
In this paper, we present a method to remove commercials from talk and game show videos and to segment these videos into host and guest shots. In our approach, we mainly rely on information contained in shot transitions, rather than analyzing the scene content of individual frames. We utilize the inherent difference in scene structure between commercials and talk shows to differentiate between them. Similarly, we make use of the well-defined structure of talk shows, which can be exploited to classify shots as host or guest shots. The entire show is first segmented into camera shots based on color histograms. Then, we construct a data structure (the shot connectivity graph) which links similar shots over time. Analysis of the shot connectivity graph helps us to automatically separate commercials from program segments. This is done by first detecting stories, and then assigning a weight to each story based on its likelihood of being a commercial. Further analysis of the stories is done to distinguish shots of the hosts from shots of the guests. We have tested our approach on several full-length shows (including commercials) and have achieved video segmentation with high accuracy. The whole scheme is fast and works even on low-quality video (160x120-pixel images at 5 Hz). Keywords: Video segmentation, video processing, digital library, story analysis, semantic structure of video, removing commercials from broadcast video.
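The first stage named above, segmenting the show into camera shots from color histograms, is commonly done by thresholding the distance between consecutive frame histograms. The sketch below shows that idea in its simplest form; the distance metric, threshold, and toy histograms are illustrative assumptions, not the paper's exact procedure.

```python
def hist_diff(h1, h2):
    """L1 distance between two normalized color histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def shot_boundaries(hists, threshold=0.5):
    """Indices of frames whose histogram distance to the previous frame
    exceeds the threshold, i.e. candidate shot cuts."""
    return [i for i in range(1, len(hists))
            if hist_diff(hists[i - 1], hists[i]) > threshold]

# Toy 4-bin histograms: frames 0-2 share one scene, frames 3-4 another.
hists = [[.70, .10, .10, .10],
         [.68, .12, .10, .10],
         [.70, .10, .12, .08],
         [.10, .10, .10, .70],
         [.12, .08, .10, .70]]
cuts = shot_boundaries(hists)
```

Each detected cut opens a new shot; the paper then links visually similar shots across cuts into the shot connectivity graph.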
Training on simulator systems based on virtual reality, for learning or learning improvement, may be a more cost-effective and efficient alternative to traditional training methods. A new approach for quality evaluation of online training in virtual reality simulators is proposed. This approach uses Hidden Markov Models (HMM) for modeling and classifying training into pre-defined training classes. In this paper we show an example application in a simulator of bone marrow harvesting for transplants.
Reliability has not been adequately addressed in existing real-time network communication protocols. We have developed a fault-tolerant, real-time token ring protocol, which can efficiently handle token and message loss due to temporary processing and/or transmission errors, as well as message loss due to station crashes [7]. In this paper we describe a data-link layer design of the message retransmission mechanism used in the protocol. We describe the design decisions and discuss the results of a simulation study of the proposed mechanism. The objective of the mechanism is to provide a real-time approach to minimizing message loss in the event of a station crash. To evaluate the performance of the proposed message retransmission mechanism, we have conducted extensive simulation studies. The simulation results give insight into the effectiveness of the proposed strategy in averting message loss due to station crashes under real-time constraints and various system conditions. Key words: Network communication, Token ring, Real-time systems, Fault tolerance
The SB-PRAM is a shared-memory parallel computer that realizes the CRCW-PRAM model from theoretical computer science. In this paper, the SB-PRAM system is described from a programmer's point of view. Special emphasis is put on the process creation scheme and on the efficient implementation of the synchronization constructs of the P4 library. Key words: architecture, massively parallel systems, software, PRAM, shared memory. 1 Introduction. The theoretical PRAM model [6] is widely used in the theory community for specifying parallel algorithms in an elegant way. A PRAM consists of an unbounded set of processors which compute synchronously in parallel. There is a single unbounded shared memory in which each processor can access any cell in unit time. This allows synchronous execution of parallel programs at the instruction level, leading to fine-grain parallelism without time-consuming synchronization. There are different possibilities for dealing with concurrent accesses to a single me...
The popularity of real-time audio and video streaming applications on the Internet has burdened congestion control mechanisms and highlighted fairness concerns with misbehaving flows. The type and amount of traffic on the network is causing degradation of service to users in the Internet community. The Packet Pair Layered Multicast (PLM) protocol is based on cumulative layered multicasting, which enables the bottleneck bandwidth to be inferred through a packet-pair mechanism. The increased complexity of PLM might hinder the deployment of the protocol in the wider network, due to the increased state stored in network routers and the propagation of dual packets through the network. In this paper, we discuss the development of the Adaptive Layered Multicast protocol, which regulates and distributes available bandwidth to heterogeneous receivers without the complexities associated with PLM.
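The packet-pair mechanism PLM relies on rests on one small calculation: two packets sent back to back leave the bottleneck link spaced by that link's transmission time, so the receiver can estimate the bottleneck capacity from the inter-arrival gap. The numbers in the example are invented.

```python
def bottleneck_bandwidth(packet_size_bytes, arrival_gap_s):
    """Packet-pair estimate: back-to-back packets leave the bottleneck spaced
    by its per-packet transmission time, so capacity ~ size / gap (bits/s)."""
    return packet_size_bytes * 8 / arrival_gap_s

# Two 1500-byte packets arriving 1.2 ms apart suggest a 10 Mbit/s bottleneck.
est = bottleneck_bandwidth(1500, 0.0012)
```

In practice the estimate is noisy (cross traffic can widen or compress the gap), which is one reason a receiver filters many pair samples before adjusting its subscribed layers.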
Mobile devices can reduce their energy consumption through power-aware remote processing: software components running on battery-operated wireless nodes are migrated to wall-powered, wired remote servers. To increase the efficiency of power-aware remote processing, we propose a novel integrated estimator for a software component's power and energy consumption. This adaptive estimator is based on a software component interface that provides power and timing information. The estimator is one of the main components of our framework for power-aware remote processing, providing the information needed for efficient internal decision making about whether software components are worth migrating. Furthermore, we present results from an evaluation of our framework in a Java environment on standard wearable computing hardware, using sample software components for AES encryption and decryption.
To protect the privacy of proxy blind signature holders from the dissemination of signatures by verifiers, this paper introduces universal designated-verifier proxy blind signatures. In a universal designated-verifier proxy blind signature scheme, any holder of a proxy blind signature can convert it into a designated-verifier signature. Given the designated-verifier signature, only the designated verifier can verify that the message was signed by the proxy signer, but is unable to convince anyone else of this fact. This paper also proposes an ID-based universal designated-verifier proxy blind signature scheme based on bilinear group pairs. The proposed scheme can be used in e-commerce to protect user privacy.
We have demonstrated how fuzzy concepts can easily be used in the Johnson algorithm for managing uncertain scheduling on two-machine flow shops. This paper extends the approach to fuzzy flow shops with more than two machines. A new fuzzy heuristic flow-shop scheduling algorithm (the fuzzy CDS algorithm) is then designed, since optimal solutions seem unnecessary for uncertain environments. Also, the conventional CDS algorithm is shown to be a special case of the fuzzy CDS algorithm when special membership functions are assigned.
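The building block underneath both CDS and its fuzzy extension is the crisp two-machine Johnson rule, sketched below with invented processing times: jobs whose first-machine time is smaller go first in ascending order of that time, the rest go last in descending order of second-machine time. CDS reduces an m-machine shop to m-1 such two-machine surrogate problems; the fuzzy variants replace the crisp times with membership functions.

```python
def johnson_order(jobs):
    """Johnson's rule for the 2-machine flow shop.
    `jobs` is a list of (name, p1, p2) processing-time triples."""
    front = sorted((j for j in jobs if j[1] < j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] >= j[2]), key=lambda j: -j[2])
    return [j[0] for j in front + back]

def makespan(times, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for name in order:
        p1, p2 = times[name]
        t1 += p1                     # machine 1 works without idling
        t2 = max(t2, t1) + p2        # machine 2 may wait for machine 1
    return t2

jobs = [('A', 3, 6), ('B', 5, 2), ('C', 1, 2), ('D', 6, 6), ('E', 7, 5)]
order = johnson_order(jobs)
span = makespan({n: (a, b) for n, a, b in jobs}, order)
```

For two machines this order is provably makespan-optimal, which is exactly the guarantee that gets relaxed once the processing times become fuzzy.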
In this paper we present an empirical, comparative performance analysis of fourteen variants of the Differential Evolution (DE) and Dynamic Differential Evolution (DDE) algorithms for solving unconstrained global optimization problems. The aim is to compare DDE, which employs a dynamic evolution mechanism, against DE, and to identify the competitive variants which perform reasonably well on problems with different features. The fourteen variants of DE and DDE are benchmarked on six test functions grouped by features: unimodal separable, unimodal non-separable, multimodal separable, and multimodal non-separable. The analysis identifies the competitive variants and shows that the DDE variants consistently outperform their classical counterparts.
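For orientation, here is a minimal sketch of the most common variant family, DE/rand/1/bin, on a simple sphere function. Variant names and the control parameters F and CR are standard DE conventions rather than values from the paper; the in-place population update (trial vectors become visible within the same generation) loosely mirrors the dynamic updating that distinguishes DDE from generation-batched DE.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=80, seed=2):
    """DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [k for k in range(pop_size) if k != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            v = np.clip(a + F * (b - c), lo, hi)     # rand/1 mutation
            cross = rng.random(dim) < CR             # binomial crossover
            cross[rng.integers(dim)] = True          # keep >= 1 mutant gene
            u = np.where(cross, v, pop[i])
            fu = f(u)
            if fu <= cost[i]:                        # greedy selection
                pop[i], cost[i] = u, fu
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = de_rand_1_bin(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

The other variants in the study swap the mutation base (rand, best, current-to-best), the number of difference vectors, or the crossover scheme, while this loop structure stays the same.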
Main flow of the algorithm
Rate of additional new proposals
Results of the model on a database containing only real cases.
Results of the simple case-based retrieval for the NC
Matchmaking of human beings is recognized as a difficult task requiring great insight and sensitivity. In this paper, we present a case-based model that enlarges the number of relevant matches for any applicant. This model has been implemented in a working system for matches among either religious or traditional Jews in Israel. One implemented idea is the regular case-based retrieval, i.e. retrieving proposals similar to previous successful proposals. In addition, we implement a more complex case-based retrieval, which retrieves proposals in a few case-based steps. The model has been implemented in a working system intended for use in real life. Examples of the experiment we carried out show that case-based retrieval helps the majority of people by proposing new proposals in addition to those proposed in the regular kinds of retrieval. We think that these case-based retrieval methods, in principle, can be generalized to a certain extent to other matching tasks. Another future idea is to evaluate the potential of the case-based matchmaker as an intelligent support system for matchmakers.
The author introduces a minimum-cost maximum-flow (MCMF) routing algorithm that allows the construction of a multicast tree incorporating both static and dynamic nodes. The key idea of the algorithm is, first, to find the least-delay path of the partial static tree (PST) from a source node to a set of static receivers, using the ratio "delay/number of flows" as the cost function; and second, to add the dynamic nodes to the constructed PST, one by one, in their arrival order, by minimizing the ratio "delay/number of flows". According to the presented simulation, this algorithm is superior to the shortest path tree (SPT) technique and the minimum delay maximum degree algorithm (MDMDA). This result holds for the chosen simulation parameters and the tool used to construct the graphs, with respect to the total tree cost and total tree flow criteria, when there is no delay constraint.
Discovery and selection of mobile services are essential functions in mobile ad-hoc networks (MANETs). Particularly, connecting MANETs to the Internet in order to provide mobile nodes with multi-hop wireless Internet access depends on the interface between both networks. Our proposed architecture uses mobile gateways as "moving" access points to connect MANET nodes to the edge of the Internet. The purpose of this paper is to propose a location-based discovery protocol of mobile gateways and a selection technique used by MANET nodes to register with these gateways and get Internet access. While the discovery protocol is based on the geometrical properties of Voronoi diagrams, the selection technique uses a hybrid criterion based on the normalized weighted sum of the Euclidean distance between MANET nodes and mobile gateways, and the load of mobile gateways.
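The selection criterion described above, a normalized weighted sum of the node-to-gateway Euclidean distance and the gateway load, can be sketched directly. The weights, field names, and positions below are illustrative assumptions; the Voronoi-based discovery step that produces the candidate gateway list is not shown.

```python
import math

def select_gateway(node_pos, gateways, w_dist=0.6, w_load=0.4):
    """Hybrid criterion: each candidate's distance and load are normalized by
    the maxima over the candidate set, then combined as a weighted sum; the
    node registers with the gateway of minimum score."""
    d_max = max(math.dist(node_pos, g['pos']) for g in gateways)
    l_max = max(g['load'] for g in gateways)
    def score(g):
        return (w_dist * math.dist(node_pos, g['pos']) / d_max
                + w_load * g['load'] / l_max)
    return min(gateways, key=score)

gateways = [
    {'id': 'gw1', 'pos': (1.0, 1.0), 'load': 0.9},   # close, heavily loaded
    {'id': 'gw2', 'pos': (4.0, 0.0), 'load': 0.1},   # farther, nearly idle
    {'id': 'gw3', 'pos': (9.0, 9.0), 'load': 0.5},   # far and busy
]
best = select_gateway((0.0, 0.0), gateways)
```

Normalizing both terms keeps the weighted sum meaningful when distance and load are measured in incommensurable units; shifting the weights trades proximity against load balancing.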
The widespread use of the eXtensible Markup Language (XML) for data representation and exchange has led to increasing research interest in methods for XML content searching, presentation, and access control. This paper presents an XML repository searcher-browser application with a declarative role-based access control framework; the proposed access control model allows the definition of a fine-grained access policy to be applied to the underlying data content. The auto-generated user interface for accessing XML content provides a lightweight application that, while taking the access control policy into account, is also suitable for distributed mobile applications.
We propose a new neural network model, the Neuron-Adaptive artificial neural Network (NAN). A learning algorithm is derived to tune both the free parameters of the neuron activation function and the connection weights between neurons. We proceed to prove that a NAN can approximate any piecewise continuous function to any desired accuracy, and then relate the approximation properties of NAN models to some special mathematical functions. A Neuron-Adaptive artificial Neural network System for Estimating Rainfall (ANSER), which uses the NAN as its basic reasoning network, is described. Empirical results show that the NAN model performs about 1.8% better than artificial neural network groups, and around 16.4% better than classical artificial neural networks, on a rainfall estimation experimental database. The empirical results also show that by using the NAN model, ANSER can (1) automatically compute rainfall amounts ten times faster; and (2) reduce the average error of rainfall estimates for the total precipitation event to less than 10%.
An IP Virtual Private Network (VPN) uses a major share of the physical resources of a network to satisfy customers' demands for secure connectivity and Quality of Service (QoS) over the Internet. Service Level Agreements (SLAs) are often used to provide bandwidth-guaranteed VPNs on networks that do not support reservation. To meet these SLAs, service providers overprovision the bandwidth allocation. This is effective, but it is not economical and does not enforce compliance by the customer, with potentially adverse consequences for charging and congestion control mechanisms. This article proposes an agent-based approach to dynamically adjust the allocated bandwidth according to users' requests, so that VPNs carrying real-time or multimedia traffic can be allocated the required amount of bandwidth. The agent processes acting on behalf of VPN users may decrease their allocated capacity if the users underuse the allocated quota, so that the service provider can satisfy a few additional demands. We propose distributed bandwidth-resizing algorithms for optimizing inter-VPN and intra-VPN bandwidth allocations. This leads to an increased number of VPN connections and better utilization of network resources. The simulation results for the proposed adaptive algorithms show efficient utilization of network bandwidth among the VPN users.
Flowchart of the distributed cluster formation
Operating sensor modes (a), active member (b), and as active node in general (c) 
Sensor percentage versus battery capacities 
A multimedia server employs an admission control algorithm (ACA) to regulate client traffic and increase the utilization of server resources. The admission process accepts new clients as long as doing so does not violate the service requirements of pre-existing clients. In this paper, we propose a hybrid admission control algorithm (HACA) that can handle a considerably large number of clients simultaneously by applying different admission policies to different clients based on their service requirements. The performance of an ACA depends on the disk-scheduling algorithm it uses, so we consider various disk-scheduling algorithms to measure the performance of our proposed algorithm. We also introduce techniques for minimizing the overflow of rounds that significantly improve performance and demonstrate the effectiveness of HACA.
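A minimal round-based admission test of the kind such ACAs build on can be sketched as follows; the rates, round length, and disk bandwidth are made-up numbers, and HACA's actual per-class policies are not reproduced:

```python
def admit(clients, new_rate, round_len=1.0, disk_bw=40.0):
    """Round-based admission test: accept the new stream only if the
    aggregate retrieval rate of the existing clients plus the newcomer
    still fits in one service round on the disk (rates in MB/s)."""
    demand = sum(clients) + new_rate
    return demand * round_len <= disk_bw * round_len

streams = [4.0, 6.0, 8.0]      # existing client retrieval rates
print(admit(streams, 10.0))    # 28 <= 40 -> True
print(admit(streams, 25.0))    # 43 <= 40 -> False
```

A hybrid scheme would apply different tests of this shape per client class; the "overflow of rounds" the abstract mentions occurs when a round's retrieval work exceeds the round length.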
Order complexity of the GA operations
General gene representation
Kernel merger example: nodes 10 and 11
Kernel evolution process
Best, average, and worst fitness of chromosomes in a population of 300,000 for the elliptic wave filter
This paper presents an efficient method for concurrent BIST synthesis and test scheduling in high-level synthesis. The method maximizes concurrent testing of modules while performing the allocation of functional units, test registers, and interconnects. It is based on a genetic algorithm that efficiently explores the testable design space and finds a sub-optimal test register assignment for each k-test session. The method was implemented in C++ on a Linux workstation. Several benchmark examples have been implemented, and favorable design comparisons are reported.
The design of distributed systems can be based on either centralized or decentralized control mechanisms, as the application requires. In this paper, for a centrally controlled distributed system, we design a dynamic object allocation and replication algorithm that adapts to the arriving request patterns. We propose a mathematical cost model that considers the costs involved in servicing a request, such as I/O and communication costs, and design a dynamic algorithm referred to as the dynamic window mechanism (DWM). Our objective is to minimize the total servicing cost of all arriving requests. We use competitive analysis to quantify the performance of the DWM algorithm in the stationary computing environment (SCE) and extend our analytical study to the mobile computing environment (MCE).
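A window-based replication decision can be illustrated with a toy sketch; the costs, window length, and counting rule below are assumptions for illustration, not the paper's DWM:

```python
from collections import deque

def should_replicate(window, site, read_cost, write_cost):
    """Count reads issued by `site` and writes issued by anyone in the
    recent request window; replicating the object at `site` pays off when
    the saved remote-read cost exceeds the extra cost of propagating
    writes to the new replica."""
    reads = sum(1 for (s, op) in window if s == site and op == "read")
    writes = sum(1 for (_, op) in window if op == "write")
    return reads * read_cost > writes * write_cost

window = deque(maxlen=8)   # sliding window of (site, operation) events
for ev in [("A", "read"), ("A", "read"), ("B", "write"), ("A", "read")]:
    window.append(ev)
print(should_replicate(window, "A", read_cost=5, write_cost=3))  # 15 > 3 -> True
```

A bounded window is what makes such a mechanism adaptive: as the read/write mix of arriving requests shifts, old events fall out and the replication decision follows the current pattern.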
The problem of test data generation in software testing is well studied. Of the various methods in the literature, symbolic execution appears to be a promising approach: it can be used either for software verification or to facilitate automated test data generation. A number of systems that employ symbolic execution to generate test data have already been constructed. However, these systems analyze programs written in specific programming languages; although some of them use an internal representation for performing symbolic execution, each can only deal with programs in a single language. In this paper, a script language called SYMEXLAN is presented. It can be used to construct a general symbolic execution testing system that is independent of the language in which the software under test is written.
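The core idea of symbolic execution, propagating symbolic expressions instead of concrete values through a program, can be sketched in a few lines. This toy is not SYMEXLAN; it only shows the mechanism:

```python
class Sym:
    """A symbolic value represented as an expression string; arithmetic
    on it builds up the expression instead of computing a number."""
    def __init__(self, expr):
        self.expr = expr
    def __add__(self, other):
        return Sym(f"({self.expr} + {rep(other)})")
    def __mul__(self, other):
        return Sym(f"({self.expr} * {rep(other)})")

def rep(v):
    """Render an operand: unwrap symbolic values, stringify constants."""
    return v.expr if isinstance(v, Sym) else str(v)

# Symbolically execute the straight-line program: y = x + 1; z = y * 2
x = Sym("x")
y = x + 1
z = y * 2
print(z.expr)   # ((x + 1) * 2)
```

A real system additionally collects a path condition at each branch and hands it to a constraint solver to produce concrete test data that drive execution down that path.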
Character recognition systems can contribute tremendously to the advancement of the automation process and can improve the interaction between man and machine in many applications, including office automation, cheque verification, and a large variety of banking, business, and data entry applications. The main theme of this paper is the automatic recognition of hand-printed Arabic characters using machine learning. Conventional methods have relied on hand-constructed dictionaries that are tedious to construct and difficult to make tolerant to variation in writing styles. The advantages of machine learning are that it can generalize over the large degree of variation between writing styles and recognition rules can be constructed by example. The system was tested on a sample of handwritten characters from several individuals whose writing quality ranged from acceptable to poor. The average recognition rate obtained using cross-validation was 87.23%.
Traditionally, in the CS design process it is considered more important to formalize the functionality of the system, while the system's environment plays a secondary role. In our view, formalized representations of the system functionality and the system environment should be considered concurrently. In this paper, we propose four hierarchical levels of formalization for both the functionality and the environment of the system, to be used at the concept development stage. To establish a design framework for the identified class of CS, we formalized a decision-making process for choosing a rational architecture of the real system. Formalization of the decision-making process involves efficiency criteria, a design strategy, and a set of goal functions. To find a solution to the set of goal functions, we propose a framework for a simulation-based design methodology. The solution provides a designer with a rational architecture of a real CS.
In this paper we propose a methodology in which analytical models abstract the characteristics of massive multiplayer online games (MMOGs). Using these models, system performance can be evaluated in terms of two metrics: (i) the cost of resources consumed by a targeted game system during game-play, and (ii) the system delay and consistency loss rate. Specifically, we can investigate the impact on system performance of various factors, such as game type, the number of players, the intensity of player or NPC interaction, the number of regions in a gameworld, the avatar region-transition rate, and the network configuration. A number of resource-cost functions are also defined, which map the system resources consumed during game-play to a cost, and an approach for evaluating system delay and consistency loss is proposed. We choose the federated peer-to-peer architecture as the target of this study because it is a representative and promising architecture that has recently attracted increasing attention. Finally, we present numerical examples for the system performance evaluation.
A new cellular array is introduced for synthesizing totally symmetric Boolean functions. The cellular structure uses only 3-input, 3-output AND--OR cells and is fully path-delay fault testable. It is an improved version of the classical digital summation threshold logic array used earlier in logic design [1]. It admits a universal test set of length 2n that detects all single stuck-at faults, where n is the number of input variables. The proposed design is useful in view of the fact that two-level realizations of most symmetric functions are known to be path-delay untestable. Experiments on several circuits demonstrate that the structure offers less area and fewer paths compared to earlier delay-testable proposals.
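Total symmetry means the function's output depends only on the number of 1s among its inputs, a property that is easy to check exhaustively for small n. This generic check is unrelated to the array's test set; it only pins down the function class:

```python
from itertools import product

def is_totally_symmetric(f, n):
    """A Boolean function f of n variables is totally symmetric iff its
    value depends only on the input weight (number of 1s): verify that
    all inputs of equal weight yield equal outputs."""
    by_weight = {}
    for bits in product([0, 1], repeat=n):
        w = sum(bits)
        v = f(*bits)
        if by_weight.setdefault(w, v) != v:
            return False
    return True

maj3 = lambda a, b, c: int(a + b + c >= 2)   # 3-input majority: symmetric
print(is_totally_symmetric(maj3, 3))                          # True
print(is_totally_symmetric(lambda a, b, c: a and not b, 3))   # False
```

Because a totally symmetric function is determined by its value on each weight class, summation-threshold structures like the one described can realize it by counting 1s, which is what makes the short universal test set possible.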
Distributed virtual environments (DVEs) are distributed systems that allow multiple geographically distributed clients to interact concurrently in a shared virtual world. DVEs such as online games, military simulations, and collaborative design environments are very popular nowadays. To support scalable DVEs, a multi-server architecture is usually employed, and the virtual world is partitioned into several zones to distribute the load among servers. The client assignment problem arises when assigning the participating clients in the zones to servers. Current approaches usually assign clients to servers according to the locations of the clients in the virtual world; i.e., clients interacting in a zone of the virtual world are assigned to the same server. This approach may degrade the interactivity of DVEs if the network delay from a client to its assigned server is large. In this paper, we formulate the client assignment problem and propose two algorithms that assign clients to servers in a more efficient way. The proposed algorithms are based on heuristics developed for the well-known terminal assignment problem. Simulation results with the BRITE Internet Topology Generator show that our algorithms are effective in enhancing the interactivity of DVEs.
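A greedy heuristic in the spirit of terminal-assignment methods can be sketched as follows; the ordering rule, delays, and capacities are illustrative and the paper's two algorithms are not reproduced:

```python
def greedy_assign(delays, capacity):
    """Greedy terminal-assignment heuristic: handle the clients with the
    largest delay spread first (they are hurt most by a bad choice),
    giving each its lowest-delay server that still has capacity.
    `delays[c][s]` is client c's network delay to server s."""
    load = {s: 0 for s in capacity}
    order = sorted(delays, key=lambda c: -(max(delays[c].values())
                                           - min(delays[c].values())))
    assignment = {}
    for c in order:
        for s in sorted(delays[c], key=delays[c].get):
            if load[s] < capacity[s]:
                assignment[c] = s
                load[s] += 1
                break
    return assignment

delays = {"c1": {"s1": 20, "s2": 90},
          "c2": {"s1": 30, "s2": 40},
          "c3": {"s1": 25, "s2": 95}}
cap = {"s1": 2, "s2": 2}
assignment = greedy_assign(delays, cap)
```

Here the delay-insensitive client c2 is displaced to s2 so that c1 and c3, which would suffer badly on s2, keep their low-delay server; a pure zone-based assignment cannot make that trade-off.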
This study focuses on the mathematical modeling of the optimal spare capacity assignment problem for the links of a telecommunications network. Given a network topology, a point-to-point demand matrix with demand routings, and the permissible values for link capacity, we optimize the assignment of spare capacity to the links of the network in order for it to survive single link failures. The modular spare capacity allocation problem is formulated as a mixed-integer program which is computationally expensive for all but small problem instances. To be able to solve large practical problem instances, we strengthen the continuous relaxation by including additional constraints related to cuts in the network topology graph. Our solution approach is to decompose the problem into a pair of smaller problems for which optimal solutions can be obtained within a realistic time constraint. Combining the solutions for the subproblems results in a feasible solution for the original problem. We analyze the efficiency of cut-generating techniques used to derive additional constraints, and we empirically investigate the performance of the decomposition approach. The numerical results indicate that the combination of additional constraints and the decomposition algorithm improves solution times when compared to solving the original mixed-integer program. The solution time improvement using the proposed heuristics can be as high as an order of magnitude for small problem instances, while large practical problem instances can be solved in less than half the time.
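The single-link-failure spare requirement behind this formulation can be illustrated with a toy computation; the paper's mixed-integer program, modularity, and cut generation are not reproduced, and the paths and demands below are made up:

```python
def spare_needed(primary, backup, demands):
    """For each failing link, every demand routed over it moves to its
    backup path; a link must carry enough spare capacity to cover the
    worst single failure. `primary`/`backup` map demand -> list of links."""
    spare = {}
    links = {l for path in primary.values() for l in path}
    for failed in links:
        rerouted = {}  # extra load per link under this one failure
        for d, path in primary.items():
            if failed in path:
                for l in backup[d]:
                    rerouted[l] = rerouted.get(l, 0) + demands[d]
        for l, extra in rerouted.items():
            spare[l] = max(spare.get(l, 0), extra)
    return spare

primary = {"d1": ["e1"], "d2": ["e1"]}
backup  = {"d1": ["e2"], "d2": ["e3"]}
demands = {"d1": 10, "d2": 5}
print(spare_needed(primary, backup, demands))  # {'e2': 10, 'e3': 5}
```

The max over failures is the key structural feature: spare capacity is shared across failure scenarios, which is also why the optimization problem is hard and benefits from the cut-based strengthening the abstract describes.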
The overall performance characteristics of cluster systems depend heavily on the pattern and on the amount of communication between the nodes. The performance may be improved by using asynchronous (nonblocking) message passing, because it allows communication and computation to overlap, thereby hiding a part of the communication overhead. This paper develops an analytical model to capture the performance-related issues of asynchronous communication in a small, fully switched cluster environment. The parameters of the model can be identified from measurable program and hardware characteristics, allowing the model to anticipate the performance behaviour of complex parallel applications. The paper's main contribution is to describe the effect of parallel communication channels on the effective bandwidth of a single node. The model is validated by comparing the predicted and measured performance of two different broadcast primitives for a range of message sizes as a function of the number of the participating nodes.
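The benefit of overlapping communication with computation can be captured by a simple per-step time model. This is a textbook abstraction, not the paper's full analytical model, which also accounts for parallel channels and measurable hardware parameters:

```python
def step_time(t_comp, t_comm, nonblocking=True):
    """Per-iteration time: blocking message passing serializes the
    computation and communication phases, while nonblocking message
    passing lets them overlap, so only the longer phase is paid."""
    if nonblocking:
        return max(t_comp, t_comm)   # overlapped phases
    return t_comp + t_comm           # serialized phases

print(step_time(8.0, 5.0, nonblocking=False))  # 13.0
print(step_time(8.0, 5.0, nonblocking=True))   # 8.0
```

In this model the communication overhead is fully hidden whenever t_comm <= t_comp; a refined model of the kind the paper develops replaces t_comm with a function of message size and the effective per-node bandwidth under concurrent channels.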
This paper presents a solution to the problem of making a trusted third-party authentication protocol fault tolerant. We applied the general solution to the Needham and Schroeder and Kerberos authentication protocols. Finally, we discuss the implementation of a fault-tolerant Kerberos authentication protocol.
Document retrieval latency is a major problem when using the Internet. Caching documents at servers close to clients reduces this latency, and many such caching schemes are known. The impact of a caching scheme on the relative bandwidth requirements of the individual links of the caching network, however, has received little attention in the literature. This work formulates a graph-based framework and methodology for estimating the relative bandwidth requirements of links in a hierarchical caching network. It exploits the simple fact that an existing latency will not worsen if the bandwidths of non-bottleneck links are reduced until those links become bottlenecks.
Since Biederman introduced to the computer vision community a theory of human image understanding called "recognition-by-components", great interest has been spawned in using it as a basis for generic object recognition. Inspired by OPTICA, we propose a framework for generic object recognition with multiple Bayesian networks, where the object, primitive, prediction, and face nets are integrated with the graph representation more commonly used in computer vision to capture the causal, probabilistic relations among objects, primitives, aspects, faces, and contours. Based on the use of likelihood evidence, the communication mechanism among the nets is simple and efficient, and the four basic recognition behaviours are realized in a single framework. Each net is an autonomous agent that selectively responds to data from the lower level in the context provided by its parent net, dealing with uncertainty and controlling the recognition tasks on its corresponding level. Our contributions in this article are the dynamic feedback control among recognition stages based on Bayesian networks, the attention mechanism using consistency- and discrimination-based value functions, and the unification of incremental grouping, partial matching, and multi-key indexing as an identical process under prediction for hypothesis generation. Our experiments have demonstrated that this new approach is more robust and efficient than the previous one.
Given a prescribed boundary of a Bezier surface, we compare the Bezier surfaces generated by two different methods: the Bezier surface minimising the biharmonic functional and the unique Bezier surface solution of the biharmonic equation with the prescribed boundary. Although the two types of surfaces often look visually the same, we show that they are indeed different. In this paper we provide a theoretical argument showing why the two types of surfaces are not always the same.
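For orientation, the two objects being compared can be written in standard notation (these are the usual definitions, not necessarily the paper's exact formulation):

```latex
% Biharmonic functional of a parametric surface X(u,v) on [0,1]^2:
E(X) = \int_0^1\!\!\int_0^1 \left( X_{uu}^2 + 2\,X_{uv}^2 + X_{vv}^2 \right) du\,dv,
% whose Euler--Lagrange equation is the biharmonic equation:
\Delta^2 X = X_{uuuu} + 2\,X_{uuvv} + X_{vvvv} = 0.
```

Minimising E over the finite-dimensional space of Bezier surfaces with a fixed boundary enforces the Euler--Lagrange equation only against variations within that space, so the minimiser need not solve the biharmonic equation exactly, which is plausibly why the two surfaces can differ.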
Signcryption is a public key cryptographic primitive that performs digital signature and public key encryption simultaneously, at lower computational cost and communication overhead than the signature-then-encryption approach. In this paper, an efficient certificate-based signcryption scheme based on bilinear pairings is proposed. Compared to traditional and identity-based signcryption schemes, the proposed scheme has the following advantages: it provides implicit certification, and it does not have the private key escrow feature of identity-based signcryption schemes. We also analyze the proposed scheme from the security and performance points of view.
Multimedia data encryption is suitable for copyright protection. In this paper, a multimedia encryption scheme combined with block-based codecs (such as JPEG/JPEG2000, MPEG-1/2, or H.261/263) is proposed, which permutes block positions, permutes coefficients (discrete cosine transform (DCT) coefficients or wavelet coefficients), and encrypts coefficient signs. It is extended to a perceptual encryption scheme and a secure video-on-demand (VOD) scheme. Theoretical analyses and experimental results show that this encryption scheme has low cost, supports direct bit-rate control, and is highly robust to transmission errors. These properties make it suitable for secure image or video transmission.
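The block-position permutation component can be sketched as follows. `random.shuffle` stands in for a keyed permutation generator and is not cryptographically secure; this toy is not the paper's cipher:

```python
import random

def permute_blocks(blocks, key):
    """Key-driven block-position permutation: a PRNG seeded with the key
    generates the permutation; decryption applies its inverse."""
    perm = list(range(len(blocks)))
    random.Random(key).shuffle(perm)       # stand-in keyed permutation
    return [blocks[i] for i in perm], perm

def unpermute_blocks(scrambled, perm):
    """Invert the permutation to recover the original block order."""
    out = [None] * len(scrambled)
    for pos, src in enumerate(perm):
        out[src] = scrambled[pos]
    return out

blocks = ["b0", "b1", "b2", "b3"]          # e.g. 8x8 DCT blocks
scrambled, perm = permute_blocks(blocks, key=42)
assert unpermute_blocks(scrambled, perm) == blocks
```

Because permutation only reorders codec blocks and leaves each block's compressed bits untouched, the ciphertext remains decodable structure-wise, which is what gives the scheme its direct bit-rate control and error robustness.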
Top-cited authors
Liudong Xing
  • University of Massachusetts Dartmouth
Ryan Robidoux
  • University of Massachusetts Dartmouth
Haiping Xu
  • University of Massachusetts Dartmouth
Moustafa Youssef
  • The American University in Cairo
Ashok Agrawala
  • University of Maryland, College Park