Journal of Computing and Information Technology

Print ISSN: 1330-1136
This paper gives an insight into the readiness of small and medium sized enterprises (SMEs) to accept e-government services in the UK. We conducted a survey of 128 SMEs, which revealed a moderate demand for e-government services, although they were not rated as efficient and essential for SMEs' businesses as conventional services. The proliferation of the UK government's Web sites, which requires co-ordination between several organisations/multiple sites, and inadequate awareness of such services do not comply with the common concepts of e-governance and consequently affect SMEs' acceptance of e-government services in the UK.
 
Acoustic echo canceller.
Triangular second order kernel of a typical acoustic echo path.
Simplified Volterra filter structure.
Equation error adaptive bilinear filter.
Echo cancellation obtained by some significant filters, in dB, plotted versus nonlinear second order distortion in dB.
Linear filters are employed in most signal processing applications, as they are well understood within a uniform theory of discrete linear systems. However, many physical systems exhibit nonlinear behaviour, and in certain situations linear filters perform poorly. One such case is acoustic echo cancellation, where the digital filter has to identify as closely as possible an acoustic echo path that is highly nonlinear. In this situation a better system identification can be achieved by a nonlinear filter. The problem is to find a nonlinear filter structure able to realize a good approximation of the echo path without a significant increase in computational load. Conventional Volterra filters are well suited to modeling such a system, but in general they require too many computational resources for a real-time implementation. We consider some low-complexity nonlinear filters in order to find a filter structure able to achieve performance close to that of the Volterra filter, but with a reduced increase in computational load compared to the linear filters commonly employed in commercial acoustic echo cancellers.
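To make the contrast with linear filters concrete, the following is a minimal sketch of a second-order Volterra filter with a triangular (upper-diagonal) kernel, as referred to in the figure captions above. The coefficient values and memory length are illustrative placeholders, not taken from the paper.

```python
def volterra2(x, h1, h2):
    """Second-order Volterra filter with triangular kernel.

    y[n] = sum_i h1[i]*x[n-i] + sum_{i<=j} h2[i][j]*x[n-i]*x[n-j]
    h1 is the linear kernel, h2 the (triangular) quadratic kernel.
    """
    N = len(h1)
    y = []
    for n in range(len(x)):
        # delayed input samples x[n], x[n-1], ..., zero before signal start
        xs = [x[n - i] if n - i >= 0 else 0.0 for i in range(N)]
        acc = sum(h1[i] * xs[i] for i in range(N))       # linear part
        for i in range(N):                               # quadratic part
            for j in range(i, N):
                acc += h2[i][j] * xs[i] * xs[j]
        y.append(acc)
    return y
```

The quadratic double loop is what makes full Volterra filters expensive: the number of second-order coefficients grows quadratically with the memory length N, which is why the low-complexity structures discussed above matter for real-time echo cancellers.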
 
The aspect-oriented programming paradigm [G. Kiczales et al., (1997)] has proven to be a viable approach to simplifying complex software systems. We are particularly concerned with systems where basic functionality interlaces with more specific and repetitive tasks such as exception handling, logging or message redirection. The aspect-oriented approach enables separation of concerns [R.J. Walker et al., (1999)] that are better designed independently but must operate together. We have extended this approach to distributed enterprise Web-based information systems based on the J2EE platform.
 
The Geo-any operator.  
The main problem of visual query languages for geographical data concerns the ambiguity of the query. A query can have different visual representations, and a visual representation of a query can have different interpretations. Increasing the number of objects in the query increases the ambiguity, which derives from the fact that a query can lead to multiple interpretations both for the system and for the user. Through their actions, users may fail to convey their intentions, so that the system is led to a wrong interpretation; they cannot express their exact query, and different queries can be formulated to reach the same goals. The present work proposes an approach that allows the user to represent only the desired constraints, and to avoid representing undesired constraints, in the visual representation of the query.
 
The aim of this paper is to address the role of ICT-support call centres in supporting mobile professionals. In global organisations, mobile professionals require constant and continuous access to information services. In modern global banking firms, ICTs are intensively utilised for electronic transaction processing and for supporting banking professionals in accomplishing their global tasks across geographical locations and time zones. The use of ICTs by global mobile bankers is crucial to access real-time information, anytime and anywhere. In their remote mobility, banking professionals may experience technology failures or difficulties in accessing information services. Such inability of mobile banking professionals to utilise ICTs could have serious consequences in terms of risk to the bank's operations and profit. ICT-support call centres play a major role in supporting the mobile user. This paper discusses how a global help desk unit accomplishes this role in a global banking organisation, through analysis of the call tickets from the global help desk tracking system.
 
We describe a new binary algorithm for the prediction of alpha and beta protein folding types from RNA, DNA and amino acid sequences. The method enables quick, simple and accurate prediction of alpha and beta protein folds on a personal computer by means of a few binary patterns of coded amino acid and nucleotide physicochemical properties. The algorithm was tested with the machine learning SMO (sequential minimal optimisation) classifier for support vector machines and with classification trees, on a dataset of 140 dissimilar protein folds. Depending on the method of testing, the overall classification accuracy was 91.43%-100%, and the tenfold cross-validation result of the procedure was 83.57%->90%. Genetic code randomisation analysis, based on 100,000 different codes tested for protein fold prediction quality, indicated that: a) there is a very low chance of p = 2.7 × 10^-4 that a better code than the natural one specified by the binary coding algorithm is randomly produced; b) dipeptides represent basic protein units with respect to the natural genetic code's definition of secondary protein structure.
 
The methodological scope of business reengineering is defined using logistics information systems research carried out in the most highly developed countries. Methods of information system development are analysed from the business engineering point of view. An example of information system development in the context of business reengineering describes the use of the system structural analysis method in redesigning the goods transport process, which is critical for the optimization of transport within a logistics system. Based on the analysis of the business needs of logistics systems and on studies of information system development in certain companies in Germany and the USA, some significant factors of the strategic development of logistics information systems have been defined. Defining the critical information system development factors and the intensity of their mutual interaction provides the guidelines for forming the methodological scope of business reengineering of logistics systems.
 
Top 10 Page Views.
Every modern institution involved in higher education needs a learning management system (LMS) to handle learning and teaching processes. It is necessary to offer, e.g., electronic lecture materials to the students for download via the Internet. In some educational contexts, it is also necessary to offer Internet tutorials in order to give the students more personal support and accompany them through the whole lecture period. Many organisations have introduced commercial LMS and found that monolithic solutions do not fulfil the dynamic requirements of complex educational institutions and are very cost-intensive. Therefore, many universities face the decision whether to stick to their commercial LMS or to switch to a potentially more cost-effective and flexible solution, for instance by adopting an available open source LMS. Since we have gained extensive experience in developing and operating an open source LMS, this contribution highlights the main characteristics of this alternative. This paper describes a use case covering the full product lifecycle (development, deployment, use and evaluation) of an open source LMS at the University of Muenster (Germany). It identifies relevant instruments and aspects of system design to which software architects in practical application domains should pay attention.
 
Enterprise information system management involves the operation of different corporate databases and applications and, more and more often, the integration and interoperability of legacy systems acquired through mergers and acquisitions. These legacy systems produce structured or semistructured data that add to the vast amounts of data a company generates every day. This data needs to be communicated between heterogeneous systems within the same company and eventually beyond the company's walls. Transformations of the communicated data are required to enable companies to tightly integrate their systems into a cohesive infrastructure without changing their applications and systems. We present a transformation system that uses a grammar-based approach to provide direct integration of applications and systems at the data level. Sequences of transformations allow flexible and effective exchange of data between heterogeneous systems, resulting in a single information network.
 
In this paper, we analyze some properties of triangular and hexagonal grids in 2D digital space. We define distances based on the neighbouring relations that can be introduced in these grids. On the triangular grid, this can be done with the help of neighbourhood sequences. We construct a shortest path in the hexagonal grid in a natural way. We present an algorithm which produces, for a given neighbourhood sequence, a shortest path between two arbitrary points of the triangular grid, and which also calculates the distance between these two points.
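The "natural" shortest path in the hexagonal grid has a well-known closed form when hexagons are addressed with cube coordinates. The sketch below illustrates that distance; it is a standard construction, not the paper's neighbourhood-sequence algorithm for the triangular grid.

```python
def hex_distance(a, b):
    """Grid distance between two hexagons in cube coordinates.

    Cube coordinates (x, y, z) satisfy x + y + z == 0. Each step moves
    to one of the six neighbours, and the shortest path length equals
    half the L1 norm of the coordinate difference.
    """
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    dz = abs(a[2] - b[2])
    return (dx + dy + dz) // 2
```

A shortest path itself can be traced greedily by repeatedly stepping toward the target along the axis with the largest remaining difference; each step reduces the distance by exactly one.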
 
This paper describes our experiences in teaching a first year object-oriented programming course. We used Java as a vehicle to teach programming principles and BlueJ as a Java development environment. The course was heavily supported by Web-based resources delivered through WebCT. So far we consider the students' overall learning experience to have been considerably enriched and positive.
 
One of the most common basic techniques for improving the performance of web applications is caching frequently accessed data in fast data stores, colloquially known as cache daemons. In this paper we present a cache daemon suitable for storing complex data while maintaining fine-grained control over data storage, retrieval and expiry. Data manipulation in this cache daemon is performed via standard SQL statements so we call it SQLcached. It is a practical, usable solution already implemented in several large web sites.
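The idea of an SQL-addressable cache with fine-grained expiry can be illustrated in-process with SQLite. The class below is a toy sketch of the concept only; SQLcached itself is a network daemon, and the schema and method names here are ours, not taken from its implementation.

```python
import sqlite3
import time

class SqlCache:
    """Minimal sketch of an SQL-addressable cache with per-entry expiry."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE cache (k TEXT PRIMARY KEY, v TEXT, expires REAL)")

    def set(self, key, value, ttl=60.0):
        # REPLACE gives upsert semantics keyed on the primary key
        self.db.execute("REPLACE INTO cache VALUES (?, ?, ?)",
                        (key, value, time.time() + ttl))

    def get(self, key):
        row = self.db.execute(
            "SELECT v FROM cache WHERE k = ? AND expires > ?",
            (key, time.time())).fetchone()
        return row[0] if row else None

    def purge(self):
        # fine-grained expiry control: drop all stale rows in one statement
        self.db.execute("DELETE FROM cache WHERE expires <= ?",
                        (time.time(),))
```

Because entries live in a relational table, expiry, bulk invalidation and structured queries over cached data all reduce to ordinary SQL statements, which is the core convenience the abstract describes.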
 
Sketch of the 2D edge intersection approach, where x_d denotes the detected position and x denotes the correct position of the tip, where the two edges meet. The observation window around x_d is drawn with dashed style.
In this contribution, we are concerned with the detection and refined subvoxel localization of 3D point landmarks. We propose multi-step differential approaches which are generalizations of an existing two-step approach for subpixel localization of 2D point landmarks. This two-step approach combines landmark detection by applying a differential operator with refined localization through a differential edge intersection approach. In this paper, we investigate the localization performance of this two-step approach for an analytical model of a Gaussian blurred L-corner as well as a Gaussian blurred ellipse. By varying the model parameters, differently tapered and curved structures are represented. The results motivate the use of an analogous approach to 3D point landmark localization. We generalize the edge intersection approach to 3D and, by combining it with 3D differential operators for landmark detection, we propose multi-step approaches for subvoxel localization of 3D po...
 
Due to the diversity of display capabilities and input devices, mobile computing gadgets have caused a dramatic increase in the development effort of interactive services. User interface (UI) tailoring and multi-platform access represent two promising concepts for coping with this challenge. The article presents a hybrid approach to the generation of adaptive UIs based on a strategy of linking hierarchies of graphs.
 
Concept index page of ISIS-Tutor with annotated links. Scrolling is required to see the links to the concepts from 45 to 64. 
The same index page as in Figure 1, with hidden links to not-ready-to-be-learned nodes.
This paper is devoted to the evaluation of adaptive navigation support (ANS) in an educational context. We present an educational hypermedia system, ISIS-Tutor, that applies several ANS technologies -- adaptive annotation, adaptive hiding, and direct guidance -- and describe a study which evaluates the first two technologies. The results show that adaptive navigation support is helpful and can reduce user navigation effort.
 
Block diagram for determination of muscle fibre orientation.
Muscles, bones and cartilage of the anatomical model. The model is shown from (a) ventral, (b) lateral, and (c) dorsal views.
Interactive three-dimensional editor for the manual detection of orientations and their normals. Displayed are transversal, sagittal and frontal cuts through the thin-section photos of the Visible Man dataset from different perspectives.
An example of three-dimensional interpolation of orientations. The interpolation is determined from two given orientations. The number of iterations amounts to 20.
(a)(b) Muscle fibre orientations in the lower leg. The interpolation is calculated using 200 orientations determined with the three-dimensional editor and 7 000 normals of gradients calculated with automatic methods. The number of iterations is 60. The white lines indicate the fibre orientation. They are constructed by following the orientation starting from user-defined points.
This paper describes the extension of a detailed anatomical model (Sachse et al., 1996a; Sachse et al., 1996b) with the three-dimensional orientation of skeletal muscle fibres (Figure 1). The orientation is interpolated based on two sets with restrictions of different types. The first set consists of points for which the orientation is known. The second set consists of points with an assigned normal of orientation. These sets are created by detection with manual or automatic methods using techniques of digital image processing. The interpolation works iteratively by averaging orientations in the 6-neighbourhood. The average of neighbouring orientations is calculated by determining their principal axis.
 
We have developed a scalable reliable multicast architecture for delivering one-to-many telepresentations. Whereas the transport of interactive real-time audio and video is concerned with timely delivery, other media, such as slides, images and animations, require reliability. We propose to support reliability by combining multicast with forward error correction (FEC), as well as with additional techniques depending on the nature of the data. Two related but distinct protocols are used for dynamic and persistent session data. For dynamic session data, we use erasure-correcting scalable reliable multicast (ECSRM), an enhanced version of SRM by Floyd et al. that is based on NACK suppression but improves scalability and rate control. Session-persistent data is delivered using Fcast, a protocol that combines FEC and data carouseling with no backchannel from receiver to sender. Our approach is scalable to large heterogeneous receiver sets and supports late-joining receivers. We have implemente...
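The FEC idea behind such protocols can be shown with the smallest possible erasure code: one XOR parity packet per group, which lets a receiver rebuild any single lost packet without a retransmission request. This toy code is for illustration only; Fcast uses stronger (n, k) codes that tolerate multiple losses.

```python
def fec_encode(packets):
    """Append one XOR parity packet to a group of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def fec_recover(received, lost_index):
    """Rebuild the single missing packet by XOR-ing everything that arrived.

    `received` is the encoded group (data + parity) with None in the lost
    slot; XOR of all surviving packets equals the lost one.
    """
    out = bytes(len(received[0]))
    for i, p in enumerate(received):
        if i != lost_index and p is not None:
            out = bytes(a ^ b for a, b in zip(out, p))
    return out
```

Because recovery needs no backchannel, this style of redundancy composes naturally with data carouseling: late joiners simply keep listening until they have collected enough packets.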
 
Data dependency graph of c = (1/a^2) + b^2 - (b + a)^(-1).
Model Validation Results Using the Total Work Halting Criteria 
Determining the resources needed to run a specific program is an important task for static task schedulers for existing multiprocessors. It can also be a valuable computer-aided engineering tool for the design and implementation of application-specific parallel processors. An approach for determining the required number of processors and the amount of memory needed per processor is described. The estimates are calculated using information available in a data-flow graph generated by a high-level language compiler. Metrics based on the notions of thread spawning and maximum-length thread probability density functions are presented. The measures obtained from the parallelism profiles are used as input to a queuing system model to predict the number of processing elements that can be exploited. Memory resource estimates are predicted through a simple graph traversal technique. Finally, experimental results are given to evaluate the methods.
1.0 Introduction
Processor execution speeds are...
 
Sample Metadata
Playout Duration of the Segments
Tour Formation from Retrieved News Items
Video production involves the selection, manipulation, and composition of video segments to achieve a refined piece suitable for an intended audience. By associating metadata with each segment it is possible to automate this production process. For example, a mechanism can be realized for creating dynamically assembled compositions for information customization applications, including news-on-demand. In this paper we propose a grammar and the associated production constraints necessary to facilitate automatic video composition in the news domain. The grammar encompasses composition based on the content as well as the structure of a newscast. In addition to providing a framework for logical composition of information, the grammar provides constraints for customization of information under bounds on playout duration or content selected by a user. We demonstrate how the language assists automatic information manipulation and composition of a newscast specifically when data are acquired f...
 
Calculating the exact radiological path through a pixel or voxel space is a frequently encountered problem in medical image reconstruction from projections and greatly influences the reconstruction time. Currently, one of the fastest algorithms designed for this purpose was published in 1985 by Robert L. Siddon [1]. In this paper, we propose an improved version of Siddon's algorithm, resulting in a considerable speedup.
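Siddon's core idea is to parametrize the ray, collect the parameter values where it crosses grid lines, and turn consecutive crossings into per-pixel intersection lengths. The 2D sketch below shows that idea under simplifying assumptions (unit pixels, grid anchored at the origin); the index bookkeeping of the full algorithm and the speedup proposed in the paper are omitted.

```python
import math

def radiological_path(p0, p1, nx, ny):
    """Intersection lengths of a ray with an nx-by-ny grid of unit pixels.

    The ray is p0 + a*(p1 - p0) for a in [0, 1]. Each crossing of a grid
    line contributes one parametric value; sorted consecutive pairs give
    one pixel's radiological path length.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    alphas = {0.0, 1.0}
    for x in range(nx + 1):            # vertical grid lines x = 0..nx
        if dx != 0:
            a = (x - p0[0]) / dx
            if 0.0 < a < 1.0:
                alphas.add(a)
    for y in range(ny + 1):            # horizontal grid lines y = 0..ny
        if dy != 0:
            a = (y - p0[1]) / dy
            if 0.0 < a < 1.0:
                alphas.add(a)
    a_sorted = sorted(alphas)
    return [(b - a) * length for a, b in zip(a_sorted, a_sorted[1:])]
```

The cost is dominated by generating and merging the two sorted crossing sequences, which is exactly where incremental-update variants of the algorithm gain their speed.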
 
Selective Image Compression (SeLIC) is a compression technique where explicitly defined regions of interest (RoI) are compressed in a lossless way whereas image regions containing unimportant information are compressed in a lossy manner. Such techniques are of great interest in telemedicine or medical imaging applications with large storage requirements. In this paper we introduce and compare different techniques based on wavelet transforms and demonstrate their good performance which is mainly due to the spatial locality of the wavelet transform domain.
Keywords: wavelet image compression, region of interest coding, selective image compression
1 Introduction
Wavelet-based image processing methods have gained much attention in the biomedical imaging community. Applications range from pure biomedical image processing techniques such as noise reduction, image enhancement, and detection of microcalcifications in mammograms to computed tomography (CT), magnetic resonance imaging (MRI), and...
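The spatial locality that makes wavelets attractive for RoI coding can be shown with a one-level Haar transform in 1D: each detail coefficient covers only two neighbouring samples, so coefficients outside the region of interest can be discarded without touching the RoI. This is a toy stand-in for the coders compared in the paper, not one of its actual techniques.

```python
def haar_roi_compress(signal, roi):
    """Selective compression with a one-level 1D Haar transform.

    `roi` is a set of sample indices. Detail coefficients whose two-sample
    support misses the RoI are zeroed (the lossy part); coefficients
    touching the RoI are kept, so those samples reconstruct exactly.
    Signal length must be even.
    """
    avg, det = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        avg.append((a + b) / 2.0)
        keep = i in roi or i + 1 in roi      # spatial locality of support
        det.append((a - b) / 2.0 if keep else 0.0)
    out = []
    for m, d in zip(avg, det):
        out.extend([m + d, m - d])           # inverse Haar step
    return out
```

In a real multi-level 2D coder the same reasoning is applied per subband: the set of coefficients influencing the RoI stays small and localized, which is the property the abstract credits for the good performance.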
 
In this paper we study a class of explicit pseudo two-step Runge-Kutta methods (EPTRK methods) with additional weights v. These methods are especially designed for parallel computers. We study s-stage methods with local stage order s and local step order s + 2, and derive a sufficient condition for global convergence order s + 2 for fixed step sizes. Numerical experiments with 4- and 5-stage methods show the influence of this superconvergence condition. However, in general it is not possible to employ the newly introduced weights to improve the stability of high-order methods. We show, for any given s-stage method with extended weights which fulfils the simplifying conditions B(s) and C(s - 1), the existence of a reduced method with a simple weight vector which has the same linear stability behaviour and the same order.
Key words: Runge-Kutta methods, parallelism, two-step methods, superconvergence, linear stability
AMS(MOS) subject classification (1991): 65M12, 65M20
1 Introduct...
 
Principle of one active ray.
Representation of a contour by active rays.
Results for tracking a car on a highway with active rays (images 4, 24, 44, 64, 84, 104 of a sequence of 123 images taken at video rate): the sampling step size Δφ is π/18.
Results for tracking a car on a highway with active rays (images 4, 44, 84 of a sequence of 123 images taken at video rate): the sampling step size Δφ is π/9.
In this paper we describe a new approach to contour extraction and tracking which is based on the principles of active contour models and overcomes their shortcomings. We formally introduce active rays, describe contour extraction as an energy minimization problem, and discuss what active contours and active rays have in common. The main difference is that for active rays a unique ordering of the contour elements in the 2D image plane is given, which cannot be found for active contours. This is advantageous for predicting the contour elements' positions and prevents crossings in the contour. Another advantage of this approach is that instead of an energy minimization in the 2D image plane, the minimization is reduced to a 1D search problem. The approach also shows any-time behavior, which is important with respect to real-time applications. Finally, the method allows for the management of multiple hypotheses of the object's boundary. First results on real image sequences ...
 
Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model genetic algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. It is also possible that, since linearly separable problems are often used to test Genetic Algorithms, Island Models may simply be particularly well suited to exploiting the separable nature of the test problems. We explore this possibility by using the infinite population models of simple genetic algorithms to study how Island Models can track multiple search trajectories. We also introduce a simple model for better understanding when Island Model genetic algorithms may have an advantage when processing some test problems. We provide empirical results for both linearly separa...
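The island structure described above is easy to sketch: several subpopulations evolve independently, with occasional ring migration of the best individual. The toy below runs on the linearly separable onemax problem (maximize the number of 1-bits); operator choices and parameters are ours for illustration, not the paper's experimental setup.

```python
import random

def island_ga(n_islands=4, pop_size=10, n_bits=20, gens=40,
              migrate_every=10, seed=1):
    """Island-model GA on onemax with ring migration of best individuals."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(gens):
        for pop in islands:
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]             # truncation selection
            children = []
            for _ in range(pop_size - len(elite)):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n_bits)       # one-point crossover
                child = a[:cut] + b[cut:]
                child[rng.randrange(n_bits)] ^= 1    # one-bit mutation
                children.append(child)
            pop[:] = elite + children
        if (gen + 1) % migrate_every == 0:           # ring migration
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[pop.index(min(pop, key=fitness))] = bests[i - 1][:]
    return max((ind for pop in islands for ind in pop), key=fitness)
```

Because onemax is separable, each island can fix different bit positions independently and migration merges partial solutions, which is precisely the kind of advantage on separable test problems the abstract sets out to examine.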
 
A schematic showing inputs of an FLC 
This paper describes a genetic-fuzzy system in which a genetic algorithm (GA) is used to improve the performance of a fuzzy logic controller (FLC). The proposed algorithm is tested on a number of gait-generation problems of a hexapod for crossing a ditch while moving on flat terrain along a straight line path with minimum number of legs on the ground and with maximum average kinematic margin of the ground-legs. Moreover, the hexapod will have to maintain its static stability while crossing the ditch. The movement of each leg of the hexapod is controlled by a separate fuzzy logic controller and a GA is used to find a set of good rules for each FLC from the author-defined large rule base. The optimized FLCs are found to perform better than the author-designed FLCs. Although optimization is performed off-line, the hexapod can use these FLCs to navigate in real-world on-line scenarios. As an FLC is less expensive computationally, the computational complexity of the proposed algorithm will...
 
Example Multidimensional Split of Video Data
In this work we discuss various ideas for the optimization of 3-D wavelet/subband decomposition on shared memory MIMD computers.
 
A domain-specific language (DSL) provides a notation tailored towards an application domain and is based on the relevant concepts and features of that domain. As such, a DSL is a means to describe and generate members of a family of programs in the domain. A prerequisite for the design of a DSL is a detailed analysis and structuring of the application domain. Graphical feature diagrams have been proposed to organize the dependencies between such features, and to indicate which ones are common to all family members and which ones vary. In this paper, we study feature diagrams in more detail, as well as their relationship to domain-specific languages. We propose the Feature Description Language (FDL), a textual language to describe features. We explore automated manipulation of feature descriptions such as normalization, expansion to disjunctive normal form, variability computation and constraint satisfaction. Feature descriptions can be directly mapped to UML diagrams which in their turn can be used for Java code generation. The value of FDL is assessed via a case study in the use and expressiveness of feature descriptions for the area of documentation generators.
1998 ACM Computing Classification System: D.2.2, D.2.9, D.2.11, D.2.13.
Keywords and Phrases: Domain engineering, tool support, software product lines, UML, constraints.
Note: To appear in the Journal of Computing and Information Technology, 2001.
Note: Work carried out under CWI project SEN 1.2, Domain-Specific Languages, sponsored by the Telematica Instituut.
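Expansion of a feature description to disjunctive normal form amounts to enumerating all valid configurations. The sketch below does this for a tiny feature algebra; the constructor names ('all', 'one-of', 'opt') are our own illustrative encoding, not FDL syntax.

```python
def expand(feature):
    """Expand a feature expression into the list of valid configurations.

    Expressions are nested tuples: ('atom', name) for a leaf feature,
    ('all', ...) for mandatory composition, ('one-of', ...) for mutually
    exclusive alternatives, ('opt', f) for an optional feature. The result
    is the DNF: one frozenset of feature names per configuration.
    """
    kind = feature[0]
    if kind == 'atom':
        return [frozenset([feature[1]])]
    if kind == 'opt':
        return [frozenset()] + expand(feature[1])
    if kind == 'one-of':
        return [cfg for f in feature[1:] for cfg in expand(f)]
    if kind == 'all':
        configs = [frozenset()]
        for f in feature[1:]:
            # cross product: every partial configuration extended by
            # every configuration of the mandatory sub-feature
            configs = [a | b for a in configs for b in expand(f)]
        return configs
    raise ValueError(kind)
```

Variability computation then falls out for free: the number of configurations is just the length of the expanded list, and constraint checks become set-membership tests on each configuration.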
 
This paper addresses an innovative approach to computer-assisted learning of foreign language terminology, which involves supporting not only foreign language learning focused on specific terminology but also the enhancement of conceptual knowledge in the subject area. ITELS, an intelligent tutoring system aimed at helping Bulgarians learn English terminology in a particular subject area, exemplifies the main ideas of this approach. The paper focuses on the issues of representation and extraction of terminological knowledge, which are of crucial importance for the system's overall performance. The most significant aspect of the proposed approach lies in separating language knowledge from subject area knowledge. The paper suggests a way of building a terminological knowledge base and of using it for intelligent language instruction.
Keywords: Terminology Knowledge Processing, Conceptual Graphs, Computer Assisted Language Learning, Intelligent Tutoring Systems.
1. Introduction
For...
 
This paper describes some achievements in the segmentation of medical images using artificial neural networks. We have identified three main sources of a priori information available to help perform the task of medical image segmentation: anatomical knowledge about the imaged district, the physical principles of image generation, and the "regularities" of biological structures. The exploitation of each of these forms of knowledge can be effectively achieved with suitable neural architectures, three of which are described in the paper. An important lesson learnt from using these architectures is that different kinds of knowledge unavoidably induce different limitations in the resulting segmentation systems, either in terms of generality or of performance. Our experience indicates that some of these limitations can be overcome through a careful exploitation and integration of the available knowledge sources via proper neural modules.
 
This work investigates the use of orientation features, computed using the Hough transform, as a criterion for image similarity evaluation in content-based picture retrieval. The context of this work is the management of thematic catalogues, in which the coherence in the meaning of the image contents can be relied upon to a certain degree. The vector space model, a well-assessed technique in textual information retrieval, is utilized as the retrieval model.
Introduction
Current technology allows the acquisition, transmission, storage, and manipulation of large collections of images. Yet systems for their classification and retrieval still rely heavily on textual descriptions associated with images. Recently a number of methodologies, techniques and tools have been studied for the identification and comparison of image features in order to develop classification and retrieval systems based on (almost) automatic interpretation of image contents [1-4]. This work focuses on the use of orient...
 
This paper introduces two new Java 2 Platform Micro Edition (J2ME) Remote Method Invocation (RMI) packages. These packages make use of serialized object compression and encryption in order, respectively, to minimize the transmission time and to establish secure channels. The currently used J2ME RMI package does not provide either of these features. Our packages substantially outperform the existing Java package in the total time needed to compress, transmit, and decompress the object over General Packet Radio Service (GPRS) networks, often called 2.5G networks, even under adverse conditions. The results show that the extra time incurred to compress and decompress serialized objects is small compared to the time required to transmit the object without compression in GPRS networks. Existing RMI code for J2ME can be used with our new packages transparently.
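The compress-before-transmit idea is simple to demonstrate. The paper applies it to Java object serialization inside RMI; the sketch below shows the same round-trip with Python's pickle and zlib purely for illustration.

```python
import pickle
import zlib

def send_bytes(obj, level=6):
    """Serialize an object, then compress it before putting it on the wire.

    On slow links (such as GPRS) the compression CPU time is typically
    small compared to the transmission time saved on redundant payloads.
    """
    return zlib.compress(pickle.dumps(obj), level)

def recv_bytes(payload):
    """Inverse of send_bytes: decompress, then deserialize."""
    return pickle.loads(zlib.decompress(payload))
```

Serialized business objects tend to be highly redundant (repeated field names, similar records), which is why general-purpose compression pays off so clearly on the narrow-bandwidth networks the abstract targets.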
 
The aim of craniofacial reconstruction is to produce a likeness of a face given the skull. Little work on computer-assisted facial reconstruction has been done in the past, due to poor machine performance and limited data availability, and most reconstructions are performed manually. In this paper, we present an approach to building 3D statistical models of the skull and of the face with soft tissues from 3D CT scans. This statistical model is used by our reconstruction method to produce 3D soft tissues from the skull of an individual. Results on real data are presented that suggest the pertinence of the proposed method when using a larger training set.
 
Intelligent autonomous acting of mobile robots in unstructured environments requires 3D maps. Since manual mapping is a tedious job, automation of this job is necessary. Automatic, consistent volumetric modeling of environments requires a solution to the simultaneous localization and map building (SLAM) problem. In 3D this task is computationally expensive, since the environments are sampled with many data points by state-of-the-art sensing technology. In addition, the solution space grows exponentially with the additional degrees of freedom needed to represent the robot pose: mapping environments in 3D must regard six degrees of freedom to characterize the robot pose. This paper summarizes our 6D SLAM algorithm and presents novel algorithmic and technical means to reduce computation time, i.e., the cached k-d tree data structure and parallelization. The availability of multi-core processors as well as efficient programming schemes such as OpenMP permit the parallel execution of robotics tasks. When each scan is matched to some previous one, small errors add up to global inconsistencies. These errors are due to imprecise measurements as well as small registration errors, which can never be avoided. SLAM algorithms that use information about closed loops help diminish these effects. Lu and Milios proposed a probabilistic scan matching algorithm for solving the simultaneous localization and mapping problem (LUM) [19]. In recent work, these algorithms have been applied to 3D laser scan mapping [25, 8]. Figure 1 gives an example of a 3D map generated by a mobile robot.
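Scan matching spends most of its time answering nearest-neighbour queries over the point clouds, which is why the k-d tree is central here. The following is a basic 3D k-d tree with nearest-neighbour search; the caching of tree traversals that the paper adds is not reproduced in this sketch.

```python
def build_kdtree(points, depth=0):
    """Build a k-d tree over 3D points by cycling the split axis."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, q, best=None):
    """Return the tree point closest to query q (Euclidean distance)."""
    if node is None:
        return best
    point, axis, left, right = node
    d2 = sum((a - b) ** 2 for a, b in zip(point, q))
    if best is None or d2 < sum((a - b) ** 2 for a, b in zip(best, q)):
        best = point
    near, far = (left, right) if q[axis] < point[axis] else (right, left)
    best = nearest(near, q, best)
    # descend the far side only if the splitting plane is closer than best
    if (q[axis] - point[axis]) ** 2 < sum((a - b) ** 2
                                          for a, b in zip(best, q)):
        best = nearest(far, q, best)
    return best
```

With millions of scan points per environment, the pruning step in `nearest` is what keeps iterative scan matching tractable, and successive queries from nearby poses traverse largely identical paths, which is the observation that motivates caching them.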
 
Image preprocessing. (a) original image; (b) hair removal; (c) median filtering; (d) first principal component of the Karhunen-Loève transform.
Insufficiency of median filter for dark hair removal. (a) original image; (b) median filter result. 
Results of subset feature selection with SFFS and SFBS algorithms. 
In this paper we present an accelerated system for diagnosing skin lesions based on digitized dermatoscopic color images. This system is composed mainly of three levels: lesion detection, lesion description and feature selection, and decision. The lesion detection level consists of preprocessing the lesion image in order to remove undesired objects from the original image. Then, the lesion is extracted by separating it from the healthy surrounding skin. The lesion description level is based on the extraction of a set of features modeling clinical signs of malignancy. The decision level is based on the produced vector of feature scores, which is used as input to a multi-layer perceptron classifier in order to assign the lesion to the class of benign lesions or to that of malignant melanomas. In this paper we focus particularly on the critical step of feature selection, which selects a reasonably reduced number of useful features while removing redundant information and approximating the properties of melanoma recognition. This permits reducing the dimension of the lesion's feature vector, and consequently the computing time, without a significant loss of information. In fact, a large set of features was investigated by the application of relevant feature selection techniques. Then, the number of features for classification was optimised, and only five well-selected features were used to cover the discriminatory information about lesion malignancy. With this approach, for reasonably balanced training and test sets, we record a good classification rate of 77.7% in a very promising CPU time.
Keywords: computer-aided diagnosis, melanoma, perceptron, feature subset selection, sequential floating search methods.
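The forward stage of sequential feature selection is easy to sketch: starting from the empty set, greedily add the feature whose inclusion most improves a wrapper score. Full SFFS/SFBS (as in the figure above) additionally performs conditional backward steps after each addition; those are omitted in this illustration, and the score function below is synthetic.

```python
def forward_selection(features, score, k):
    """Greedy forward stage of sequential feature selection.

    `score` evaluates a candidate subset (e.g. cross-validated classifier
    accuracy in a wrapper setting); selection stops after k features.
    """
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the paper's setting the score would be the melanoma/benign classification performance of the multi-layer perceptron on the candidate subset, and the procedure is what reduces the lesion descriptor to five well-selected features.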
 
This paper presents a sequencing-based channel access technique for 802.11 wireless networks. The objective is to reduce, and where possible avoid, collisions when many nodes access the channel at the same time. A concept of sequence numbers is introduced to avoid collisions: a node transmits only after checking its sequence number. MAC layer issues are very important in channel access over wireless networks. Simulation results show that the performance of the sequenced MAC is significantly improved compared with the legacy MAC.
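The core idea can be illustrated with a toy slot model; the names and the one-node-per-slot assignment are illustrative assumptions, not the paper's exact MAC:

```python
# Hedged sketch of the sequence-number idea: each node transmits only
# when its assigned sequence number matches the current slot, so no
# two nodes contend for the channel at once.

def schedule(nodes):
    """Assign sequence numbers and yield a collision-free transmission order."""
    seq = {node: i for i, node in enumerate(nodes)}  # sequence numbers
    for slot in range(len(nodes)):
        # Only the node whose sequence number equals the slot transmits.
        senders = [n for n in nodes if seq[n] == slot]
        assert len(senders) == 1  # no collision, by construction
        yield slot, senders[0]

for slot, node in schedule(["A", "B", "C"]):
    print(f"slot {slot}: {node} transmits")
```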
 
In this paper, a distributed architecture is proposed to support an authorization service, specifically in dynamically created Virtual Organizations (VOs). In comparison to existing architectures such as Akenti, VOMS and TAS, our architecture uses certificates on top of a distributed agent architecture for managing requested resources among the VOs. The most difficult issue in distributed agents is finding the node that holds a particular requested certificate. In this paper, Chord's finger table is improved to add extra search abilities to Chord's ring architecture. Key location can be implemented on top of the improved Chord by associating a key with each data item and storing the key/data-item pair at the node to which the key maps. A theoretical analysis is presented together with simulations, which show an improvement in the number of hops needed to locate keys in the proposed method compared to standard Chord, making it more cost-efficient.
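For reference, standard Chord key location with finger tables (the baseline the paper improves on) can be sketched as below; the ring size, node ids and iterative lookup are illustrative, and the paper's improved finger table is not shown:

```python
# Sketch of Chord-style key location: m-bit identifier ring, where a
# node's k-th finger points to the first node at distance >= 2**k.

M = 5                                  # identifiers live in [0, 2**M)
RING = 2 ** M
nodes = [1, 4, 9, 14, 20, 28]          # node ids on the ring, sorted

def successor(i):
    """First node whose id is >= i (mod ring size)."""
    i %= RING
    for n in nodes:
        if n >= i:
            return n
    return nodes[0]                    # wrap around the ring

def fingers(n):
    """Finger table of node n: successor(n + 2**k) for each k."""
    return [successor(n + 2 ** k) for k in range(M)]

def lookup(start, key):
    """Iteratively jump to the closest preceding finger of the key."""
    node, hops = start, 0
    while successor(node + 1) != successor(key):
        best = node
        for f in fingers(node):
            # the finger must lie strictly between node and key on the ring
            if 0 < (f - node) % RING < (key - node) % RING:
                if (key - f) % RING < (key - best) % RING:
                    best = f
        if best == node:               # no closer finger: stop
            break
        node, hops = best, hops + 1
    return successor(key), hops

print(lookup(1, 26))                   # node 28 is responsible for key 26
```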
 
Increased End-to-end Propagation Delay When Y is Source Node in a Multicast Tree. 
A Triangular Shared-Tree Network. 
A Star Shape Shared-Tree Network. 
A mobile ad hoc network is a wireless mobile network that has no base station or other central control infrastructure. Designing efficient multicast routing protocols for such a network is challenging, especially when the mobile hosts move rapidly. The shared-tree routing protocol is a widely used multicast routing protocol in ad hoc networks, but it suffers from large end-to-end delay and low network throughput. In this paper, we propose a protocol that mitigates the inherently large end-to-end delay of the shared-tree method, as a modification of the existing Multicast Ad hoc On-demand Distance Vector (MAODV) routing protocol for low-mobility networks. The protocol uses an n-hop local ring search to establish a new forwarding path while limiting the flooding region. We then extend the proposed protocol with a periodic route discovery message to improve network throughput in high-mobility networks. Simulation results demonstrate the improvement in the average end-to-end delay for the low-mobility case, as well as a high packet delivery ratio for the high-mobility case.
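The n-hop local ring search amounts to route-request flooding bounded by a hop budget. A minimal sketch, with an illustrative graph and names:

```python
# TTL-bounded breadth-first flood: route requests reach only nodes
# within n hops of the requester, limiting the flooded region.

from collections import deque

def ring_search(adj, src, target, n):
    """BFS limited to n hops; returns hop count to target, or None."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == target:
            return hops
        if hops == n:            # hop budget exhausted: stop flooding here
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return None                  # target lies outside the n-hop ring

adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(ring_search(adj, "A", "C", 2))  # → 2
print(ring_search(adj, "A", "D", 2))  # → None
```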
 
Conversion Matrix linking QoP to QoS.
Variation of QoP (satisfaction) with transmission protocol employed.
Innovations and developments in networking technology have been driven by technical considerations with little analysis of the benefit to the user. In this paper we argue that network parameters that define the network Quality of Service (QoS) must be driven by user-centric parameters such as user expectations and requirements for multimedia transmitted over a network. To this end a mechanism for mapping user-oriented parameters to network QoS parameters is outlined. The paper surveys existing methods for mapping user requirements to the network. An adaptable communication system is implemented to validate the mapping. The architecture adapts to varying network conditions caused by congestion so as to maintain user expectations and requirements. The paper also surveys research in the area of adaptable communications architectures and protocols. Our results show that such a user-biased approach to networking does bring tangible benefits to the user.
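The conversion-matrix mapping from user-centric QoP parameters to network QoS parameters can be sketched as a weighted linear map; the parameter names and matrix entries below are invented placeholders, not the paper's calibrated values:

```python
# Illustrative QoP -> QoS conversion matrix: each row weights how
# strongly the user-centric requirements constrain one QoS parameter.

qos_params = ["delay", "jitter", "loss"]
qop_params = ["clarity", "continuity"]
# conversion[i][j]: weight of QoP requirement j on QoS parameter i
conversion = [[0.2, 0.8],   # delay is driven mostly by continuity
              [0.1, 0.9],   # jitter likewise
              [0.9, 0.1]]   # loss is driven mostly by clarity

def map_qop_to_qos(qop):
    """Weighted mapping of QoP scores (0..1) to QoS target stringency."""
    return {p: sum(w * qop[j] for j, w in enumerate(row))
            for p, row in zip(qos_params, conversion)}

targets = map_qop_to_qos([1.0, 0.5])  # high clarity, medium continuity
print(targets)
```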
 
This paper presents an adaptive traffic signaling method based on fuzzy logic for a roundabout with a four-approach intersection. The decision whether to extend or terminate the current signal phase, and the selection of the sequence of next phases, are determined using fuzzy logic. The proposed method can replace an experienced traffic policeman organizing traffic at roundabout intersections.
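A fuzzy extend/terminate decision of this kind can be sketched with triangular membership functions; the memberships and the single rule below are illustrative assumptions, not the paper's calibrated controller:

```python
# One hedged fuzzy rule: extend the current green when many vehicles
# are arriving AND the cross-approach queue is short.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def extend_green(arrivals, queue_other):
    """Degree (0..1) to which the current green phase should be extended."""
    many_arrivals = tri(arrivals, 2, 8, 14)
    long_queue = tri(queue_other, 4, 10, 16)
    return min(many_arrivals, 1.0 - long_queue)

print(extend_green(arrivals=8, queue_other=2))   # strong extension
print(extend_green(arrivals=3, queue_other=12))  # weak extension
```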
 
Adaptive hypermedia courseware systems resolve the problem of users' disorientation in hyperspace through adaptive navigation and presentation support. We describe AHyCo (Adaptive Hypermedia Courseware), an adaptive Web-based educational system for the creation and reuse of adaptive courseware, with emphasis on adaptive navigation support and lesson sequencing. The proposed model consists of the domain model, the student model and the adaptive model. The system comprises two environments: the authoring environment and the learning environment.
 
In recent years, several robotic walking aids have been developed to assist elderly users with mobility constraints, responding to the growing number of elderly persons in our society. To ensure good support for the user, a robotic walker should adapt to the user's motion while not losing sight of the target. Even though some existing active robotic walkers are able to guide their user to a target, the user's input during guidance is not considered sufficiently. Therefore, a new adaptive guidance system for robotic walkers has been developed. It leads the walking-aid user to a given target while taking the user's inputs into account during guidance and adapting the path accordingly. The guidance system has been implemented on the mobile robot assistant Care-O-bot II, and a field test in a residence for the elderly demonstrated the correct function and usefulness of the guidance system.
 
In this paper, we investigate the use of a Gaussian Mixture Model (GMM)-based quantizer for quantization of the Line Spectral Frequencies (LSFs) in the Adaptive Multi-Rate (AMR) speech codec. We estimate the parametric GMM model of the probability density function (pdf) of the prediction error (residual) of the mean-removed LSF parameters used in the AMR codec for speech spectral envelope representation. The studied GMM-based quantizer is based on transform coding using the Karhunen-Loève transform (KLT) and transform-domain scalar quantizers (SQ) individually designed for each Gaussian mixture. We have investigated the applicability of such a quantization scheme in the existing AMR codec by solely replacing the AMR LSF quantization algorithm segment. The main novelty of this paper lies in applying and adapting entropy-constrained (EC) coding for fixed-rate scalar quantization of the transformed residuals, thereby allowing better adaptation to the local statistics of the source. We study and evaluate the compression efficiency, computational complexity and memory requirements of the proposed algorithm. Experimental results show that the GMM-based EC quantizer provides better rate/distortion performance than the quantization schemes used in the reference AMR codec, saving up to 7.32 bits/frame at much lower rate-independent computational complexity and memory requirements.
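The transform-coding core of such a scheme, greatly simplified, looks like the sketch below: each mixture gets its own KLT (eigenvectors of its covariance), the residual is scalar-quantized in the transform domain, and the mixture giving the smallest distortion is kept. Dimensions, step size and the mixture parameters are illustrative; the entropy-constrained coding stage is omitted:

```python
# Simplified per-mixture KLT + uniform scalar quantization.

import numpy as np

def quantize(x, mixtures, step=0.1):
    """Quantize x with each mixture's KLT; keep the lowest-distortion one."""
    best = None
    for m, (mean, cov) in enumerate(mixtures):
        # KLT basis: eigenvectors of this mixture's covariance matrix.
        _, U = np.linalg.eigh(cov)
        y = U.T @ (x - mean)                 # transform-domain residual
        idx = np.round(y / step)             # uniform scalar quantization
        xq = U @ (idx * step) + mean         # reconstruction
        d = float(np.sum((x - xq) ** 2))
        if best is None or d < best[0]:
            best = (d, m, idx)
    return best  # (distortion, mixture index, quantizer indices)

mixtures = [(np.zeros(3), np.eye(3)),
            (np.full(3, 1.03), 0.5 * np.eye(3))]
d, m, idx = quantize(np.array([0.93, 1.1, 0.97]), mixtures)
print(f"mixture {m}, distortion {d:.4f}")
```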
 
High-Level Overview of the Liberty Alliance Architecture.
QFD for requirement analysis of the technical solution for identification approaches.
Processes for the identification and authentication of persons and other legal entities have long existed and functioned in public administration and business. The Information Society offers new e-services for citizens and businesses, which dramatically change administration and bring additional challenges, risks and opportunities. Citizens' confidence and trust in these services have to be improved, while several requirements, such as data protection, privacy and legal requirements, have to be satisfied. The usual business process for identifying the corresponding entity is generally based on a trivial control mechanism, typically password identification. In order to maintain public trust in public administration activities, the process for entity identification (of both persons and legal entities) should be amended, taking into account business and security considerations. Identity management solutions across Europe show an intriguing variety of approaches and are at different maturity levels of service. Our paper gives an overview of the most frequently cited identity management architectures (namely the Liberty Alliance Architecture, IDABC, Shibboleth, the Government Gateway Model and the Austrian Model) and presents an identity management framework, based on an improved PKI and customized for Hungarian specifics, which offers possibilities to improve the quality of the related services. The goal of this paper is to show how identity management for e-government processes can be improved through the development of security mechanisms making use of readily available technologies.
 
The visualization of 3D models of the patient's body is emerging as a priority in surgery. In this paper, two different visualization and interaction systems are presented: a virtual interface and a low-cost multi-touch screen. The systems interpret the user's movements in real time and can be used in surgical pre-operative planning for the navigation and manipulation of 3D models of the human body built from CT images. The surgeon can visualize both the standard patient information, such as the CT image dataset, and the 3D model of the patient's organs built from these images. The developed virtual interface is the first prototype of a system designed to avoid any contact with the computer, so that the surgeon can visualize models of the patient's organs and interact with them by moving a finger in free space. The multi-touch screen provides a custom user interface developed for doctors' needs that allows users to interact, for surgical pre-operative planning purposes, both with the 3D model of the patient's body built from medical images and with the image dataset.
 
The paper presents a novel technique for affine-invariant feature extraction for object recognition based on parameterized contours. The proposed technique first normalizes the input image by removing affine deformations using independent component analysis, which also reduces the noise introduced during contour parameterization. Four invariant functionals are then constructed using the restored object contour, the dyadic wavelet transform and conics in the context of wavelets. Experiments are conducted on three different standard datasets to confirm the validity of the proposed technique. Moreover, the error rates obtained in terms of invariant stability are significantly lower than those of other wavelet-based invariants, and the proposed invariants exhibit higher feature disparity than the method of Fourier descriptors.
 
A network processor (NP) is optimized to perform network tasks, using a massively parallel processing architecture to achieve high performance. Ad hoc networks are an active research area due to their self-organization, dynamic topology and temporary network lifetime. However, these same characteristics make the security problem more serious; Denial-of-Service (DoS) attacks are the main security challenge in ad hoc networks. A novel NP-based security scheme is proposed to combat such attacks. A security agent is established as a hardware thread in the NP. The agent updates itself at intervals using the trustworthiness of the neighbor nodes, and traces the RREQ and RREP message streams to aggregate key information and analyze it with an intrusion detection algorithm. The NS2 simulator is extended to validate the security scheme. Simulation results show that the NP-based security scheme is effective at detecting DoS attacks.
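One common way to maintain neighbor trustworthiness of the kind the agent uses is an exponential moving average over observed behavior; the update rule and smoothing factor below are our illustrative assumptions, not the paper's algorithm:

```python
# Hypothetical trust update: blend prior trust with the latest
# observation of a neighbor's behavior (1 = well-behaved, 0 = suspect).

def update_trust(trust, observed_good, alpha=0.8):
    """Exponential moving average of a neighbor's trustworthiness."""
    return alpha * trust + (1 - alpha) * (1.0 if observed_good else 0.0)

t = 0.5                                   # neutral initial trust
for good in [True, True, False, True]:    # observed RREQ/RREP behavior
    t = update_trust(t, good)
print(round(t, 3))
```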
 
Simple routing scenario. 
Retransmissions of the three schemes.
Throughputs of the three schemes.
Delays of the three schemes.
This paper presents a performance comparison of Lightweight Agents (LWAs), Single Mobile Intelligent Agents and Remote Procedure Call, which are tools for implementing communication in a distributed computing environment. The routing algorithm for each scheme is modeled on the TSP. The comparison among the three schemes is based on bandwidth overhead with retransmission, system throughput and system latency; a mathematical model for each performance metric is derived for each scheme. The simulation results show that LWAs outperform the other two schemes, with smaller bandwidth retransmission overhead, higher system throughput and lower system latency. A Bernoulli random variable is used to model the failure rate of the simulated network, which is assumed to have probability of success p = 85% and probability of failure q = 15%. Network availability is realized by a multiplicative pseudorandom number generator during the simulation. The results of the simulation are presented.
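The failure model described above can be reproduced in a few lines; the generator constants below are the classic MINSTD (Lehmer) parameters, an assumption on our part since the paper does not state which multiplicative generator it uses:

```python
# Bernoulli link availability (p = 0.85) realized with a multiplicative
# congruential (Lehmer-style) pseudorandom number generator.

M_MOD, A_MUL = 2**31 - 1, 16807   # MINSTD modulus and multiplier

def lehmer(seed):
    """Multiplicative congruential generator yielding uniforms in (0, 1)."""
    state = seed
    while True:
        state = (A_MUL * state) % M_MOD
        yield state / M_MOD

def availability(p=0.85, trials=100000, seed=12345):
    """Fraction of trials in which a transmission succeeds."""
    rng = lehmer(seed)
    ok = sum(1 for _ in range(trials) if next(rng) < p)
    return ok / trials

print(availability())   # close to the assumed p = 0.85
```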
 
The Intelligent Space is an area (room, public space, etc.) with networked distributed sensors that can be used for observing and gathering information from the space. The main objective of the Intelligent Space is to provide services to humans inside the space. These services can be either informational, such as those provided by displays, or physical. To provide physical services, mobile robots are introduced into the Intelligent Space as actuators. The network of distributed sensors in the space can therefore be used to provide the data needed for controlling the robots. Here we present our implementation of an Intelligent Space system that uses spatially distributed laser range finders for tracking the mobile robot and humans inside the space and for building a map of the space. Based on these measurements, the control of a mobile robot acting as a physical agent of the Intelligent Space is developed.
 
An enormous number of documents is being produced that have to be stored, searched and accessed. Document indexing is an efficient way to tackle this problem. Contributing to the document indexing process, we developed the Computer Aided Document Indexing System (CADIS), which applies controlled-vocabulary keywords from the EUROVOC thesaurus. The main contribution of this paper is the introduction of the special CADIS internal data structure that copes with the morphological complexity of the Croatian language. The CADIS internal data structure enables efficient statistical analysis of input documents and quick visual feedback, helping to index documents more quickly, accurately and uniformly than manual indexing.
 
Schematic snake movement through the image domain towards a boundary.  
Dowels representing the soft tissue thickness at standardized locations on the cranial bone.  
A multi-modality framework for forensic soft-facial reconstruction based on computed tomography (CT) and magnetic resonance imaging (MRI) is presented. CT is used to acquire a virtual representation of a skull find and MRI templates provide the desired soft tissue information to produce a facial likeness of a deceased individual. Two main strategies are described. The first is based on a regularized non-linear warping technique using radial basis functions known as thin-plate splines in 3D space. The second is an automatic segmentation scheme based on active contours, which will provide a facial template that can be morphed onto the CT of the skull find. These approaches are presented in the framework of a forensic workplace.
 