
Advances in Systems, Computing Sciences and Software Engineering: Proceedings of SCSS 2005



Advances in Systems, Computing Sciences and Software Engineering. This book includes the proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS'05). The proceedings are a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of computer science, software engineering, computer engineering, systems sciences and engineering, information technology, parallel and distributed computing and web-based programming. SCSS'05 was part of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE'05) (www.cisse2005.org), the world's first Engineering/Computing and Systems Research E-Conference. CISSE'05 was the first high-caliber research conference in the world to be conducted completely online in real time via the Internet. CISSE'05 received 255 research paper submissions, and the final program included 140 accepted papers from more than 45 countries. The concept and format of CISSE'05 were very exciting and ground-breaking. The PowerPoint presentations, final paper manuscripts and time schedule for live presentations over the web were available for three weeks prior to the start of the conference for all registrants, so they could choose the presentations they wanted to attend and think about questions they might want to ask. The live audio presentations were also recorded and became part of the permanent CISSE archive, which also includes all PowerPoint presentations and papers. SCSS'05 provided a virtual forum for presentation and discussion of state-of-the-art research on Systems, Computing Sciences and Software Engineering.
2006, XIV, 437 p.
Printed book
154,95€ | £139.50 | $219.00
*165,80€(D) | 170,45€(A) | CHF222.50
Available from your library or as a printed eBook for just € | $ 24.99
T. Sobh, University of Bridgeport, Bridgeport, USA; K. Elleithy, University of Bridgeport,
Bridgeport, USA (Eds.)
Advances in Systems, Computing Sciences and Software Engineering
Proceedings of SCSS 2005
The International Conference on Systems, Computing Sciences and Software Engineering 2005 provides a virtual forum for presentation and discussion of state-of-the-art research on computers, information and systems sciences and engineering.
The virtual conference will be conducted through the Internet using web-conferencing tools made available by the conference.
Authors will present their PowerPoint, audio or video presentations using web-conferencing tools without the need to travel.
Conference sessions will be broadcast to all conference participants, and session participants can interact with the presenter during the presentation and/or during the Q&A slot that follows the presentation.
The conference proceedings of the International Conference on Systems, Computing
Sciences and Software Engineering include a set of rigorously reviewed world-class
manuscripts addressing and detailing state-of-the-art research projects in the areas
of Computer Science, Software Engineering, Computer Engineering, and Systems
Engineering and Sciences.
The International Conference on Systems, Computing Sciences and Software Engineering
(SCSS 2005) was part of the International Joint Conferences on Computer, Information and
Systems Sciences and Engineering (CISSE 2005).
CISSE 2005, the World's first Engineering/Computing and Systems Research E-Conference
was the first high-caliber Research Conference in the world to be completely conducted
online in real-time via the internet.
Order online, or for the Americas call (toll free) 1-800-SPRINGER. For outside the Americas call +49 (0) 6221-345-4301.
The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for
Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices
exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted.

Chapters (54)

In the conventional control room of a nuclear power plant, a great number of tiled alarms are generated, especially under a plant upset condition. As the conventional control room evolves into an advanced one, the annunciator-based tile display for alarm status is required to be removed and replaced by a computer-based tile display. Where this happens, navigating and acknowledging tiled alarm information becomes a burdensome task for plant operators, because it places an additional load on them. In this paper, a display method, Elastic Tile Display, is proposed, which can be used to visualize and navigate a large quantity of tiled alarms effectively. We expect the method to help operators navigate alarms at little cost to their attention resources and acknowledge them in a timely manner.
The EAI issue is constantly a significant aspect of the enterprise computing area. Furthermore, recent advances in Web Services and integration servers provide a promising push on EAI issues. However, few investigations have focused on a general view of EAI technologies and solutions. To provide a clear perspective on EAI, a technology framework, ABCMP, is presented in this paper. ABCMP attempts to describe the structure of diverse EAI technologies, which allows a general comprehension of the EAI issue. Moreover, process topics around ABCMP are also discussed to provide a wider vision of the technology framework.
Most research concerning real-time system scheduling assumes scheduling constraints to be precise. However, in the real world, scheduling is a decision-making process that involves vague constraints and uncertain data. Fuzzy constraints are particularly well suited for dealing with imprecise data. This paper proposes a fuzzy scheduling approach to real-time system scheduling in which the scheduling parameters are treated as fuzzy variables. A simulation is also performed and the results are compared with both the EDF and LLF scheduling algorithms, the two algorithms most commonly used for scheduling real-time processes. It is concluded that the proposed fuzzy approach is very promising and has the potential to be considered in future research.
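The contrast between crisp deadline scheduling and a fuzzy variant can be sketched in a few lines. This is a minimal illustration, not the paper's model: the triangular fuzzy deadlines, the centroid defuzzification, and the example tasks are all assumptions made here for demonstration.

```python
# Hedged sketch: crisp EDF vs. a fuzzy-priority variant in which each
# task's deadline is a triangular fuzzy number (a, b, c).

def edf_pick(tasks):
    """Earliest Deadline First: run the task with the smallest crisp deadline."""
    return min(tasks, key=lambda t: t["deadline"])

def centroid(tri):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tri
    return (a + b + c) / 3.0

def fuzzy_pick(tasks):
    """Run the task whose fuzzy deadline defuzzifies to the earliest value."""
    return min(tasks, key=lambda t: centroid(t["fuzzy_deadline"]))

tasks = [
    {"name": "T1", "deadline": 10, "fuzzy_deadline": (8, 10, 12)},
    {"name": "T2", "deadline": 9,  "fuzzy_deadline": (9, 11, 16)},
]
print(edf_pick(tasks)["name"])    # T2 under crisp EDF
print(fuzzy_pick(tasks)["name"])  # T1 once deadline uncertainty is modeled
```

The two policies disagree on this toy task set, which is the kind of difference the simulated comparison with EDF and LLF would measure.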
For decades, different algorithms have been proposed addressing the issue of constructing Huffman codes. In this paper we propose a detailed, time-efficient, three-phase parallel algorithm for generating Huffman codes on the CREW PRAM model exploiting n processors, where n is the number of symbols in the input alphabet. First, the codeword length for each symbol is computed concurrently with a direct parallelization of the Huffman tree construction algorithm, eliminating the complexity of dealing with the original tree-like data structure. Then the Huffman codes corresponding to the symbols are generated in parallel based on a recursive formula. The performance of the proposed algorithm depends directly on the height of the corresponding Huffman tree. It achieves an O(n) time complexity in the worst case, which is rarely encountered in practice.
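The second phase, generating codes from already-computed codeword lengths, can be sketched sequentially with the standard canonical-code recurrence, in which the first code of each length is derived recursively from the counts of shorter codes. This is an illustration of the general technique, not the paper's parallel formulation, and the example alphabet is an assumption.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords given each symbol's code length.

    Uses the standard recurrence: the first code of length l is
    (first_code[l-1] + count[l-1]) << 1, and codes of equal length
    are consecutive integers.
    """
    max_len = max(lengths)
    count = [0] * (max_len + 1)          # how many symbols per length
    for l in lengths:
        count[l] += 1
    next_code = [0] * (max_len + 1)
    code = 0
    for l in range(1, max_len + 1):      # recursive formula over lengths
        code = (code + count[l - 1]) << 1
        next_code[l] = code
    codes = []
    for l in lengths:                    # independent per symbol, hence
        codes.append(format(next_code[l], "0{}b".format(l)))  # parallelizable
        next_code[l] += 1
    return codes

print(canonical_codes([2, 1, 3, 3]))  # ['10', '0', '110', '111']
```

Each symbol's codeword depends only on the length counts and its rank within its length class, which is what makes this phase amenable to per-processor parallel generation.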
With the advent of grid computing, some jobs are executed across several clusters. In this paper, we show (1) that cross-cluster job execution can suffer from network contention on the inter-cluster links and (2) how to reduce traffic on those links. With our new all-to-all communication strategies and host-file mapping, it is possible to reduce the volume of data sent between two clusters for a job of size n. Our measurements confirm this reduction in data volume on the inter-cluster link and show a reduction in runtime. With these improvements it is possible to increase the size of grids without significantly compromising performance.
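A toy counting model shows why restructuring all-to-all traffic helps: naively, every process sends one message to every process in the other cluster, while an aggregating strategy bundles each sender's remote payloads into a single message. The model below is invented here for illustration; it is not the paper's actual strategy or its measurements.

```python
def naive_cross_link_messages(n):
    """Naive all-to-all: each of the n/2 processes per cluster sends one
    message to each of the n/2 processes in the other cluster."""
    half = n // 2
    return 2 * half * half   # both directions

def aggregated_cross_link_messages(n):
    """Aggregated strategy: each process bundles all payloads destined for
    the remote cluster into one message, so only n/2 cross per direction."""
    half = n // 2
    return 2 * half

print(naive_cross_link_messages(16))       # 128
print(aggregated_cross_link_messages(16))  # 16
```

The cross-link message count drops from quadratic to linear in the job size, which is the kind of reduction the measurements on the inter-cluster link would confirm.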
In this paper, we propose a method for synthesizing the glue code for distributed programming. The goal of this method is to completely automate the synthesis of code for handling distributed computing issues, such as remote method calls and message passing. By using this method, the software for migrating the objects, synchronizing the communication, and supporting remote method calls can be automatically generated. From the programmer’s point of view, remote accesses to objects are syntactically and semantically indistinguishable from local accesses. The design of the system is discussed and the whole system is based on the Linda notation. A prototype has been developed using JavaSpaces for code synthesis in Java. Experiments show that this method can help developers generate the code for handling distributed message passing and remote procedure calls.
This paper presents the workflow and architecture of the Relation Semantics Elicitation Prototype (RSEP). RSEP has been developed to address the limitations of the Description Logic based constructs of the Web Ontology Language (OWL) with respect to capturing the intrinsic nature of binary relations. After extracting relation definitions from an input OWL ontology, RSEP interactively elicits the intrinsic semantics of these relations from knowledge providers and appends this elicited knowledge in OWL Full syntax to the input ontology. RSEP has been tested on the IEEE Suggested Upper Merged Ontology (SUMO) and the results are presented in this paper. Preliminary results from using the elicited relation semantics to cluster relations from SUMO and to arrange them taxonomically are also presented to highlight their potential contribution to knowledge interoperability and reuse on the Semantic Web.
Modern information technology has equipped the world with more powerful telecommunications and mobility. We interact with various information systems daily, especially those dealing with global positioning systems. A geographical information system is an important element of those positioning systems. Hence spatial retrieval becomes more and more important in telecommunication systems, fleet management, vehicle navigation, robot automation, and satellite signal processing. Nowadays, spatial queries are no longer posed with lengthy and complicated syntax; they are easily done with sketching and speech. Spatial queries increasingly deal with daily routines such as searching for buildings and routes, analyzing customer preferences and so on. This kind of query is called a configuration or structural spatial query, and it needs a powerful retrieval technique as it involves a high volume of database accesses to locate suitable objects. Configuration-query retrieval belongs to the content-based retrieval family, which is also an active research area in spatial databases. This research developed an enhanced configuration-similarity retrieval model using a single measure, in contrast to the multi-measure models in the domain of spatial retrieval.
When a software component is used, it is often necessary to set initial values in many of its attributes. To set these initial values appropriately, the user of the component must ascertain which attributes need to be initialized and set them programmatically to suitable initial values. The work involved in this sort of initialization can be alleviated by attaching a wizard interface to the target component itself and setting the initial values visually from the wizard. However, there are large development costs associated with devising suitable initial-value candidates and producing a new wizard to use these initial values for each individual component. In this paper, we propose a system whereby application programs that use a target component are subjected to dynamic analysis to discover which attributes and initial values are set most often during the running of the component. The proposed system generates a wizard, which helps application programmers initialize the component visually by using these initial values, and attaches it to the component. The proposed system can be seen as a system for applying the Wizard pattern to each component automatically. Experiments have shown that the attributes and their initial values chosen for initialization by generated wizards closely resemble the expectations of the components' original developers. We have thus confirmed that the proposed system can bring about a substantial reduction in wizard development costs.
Image retrieval based on Quadrant Motif Scan (QMS) is proposed in this paper. Motif traces from image pixels are the core idea used to extract feature vectors for distinguishing images by region-based comparisons. We exploit recursive quadrant segmentation in images and derive a representative motif for stratified regions. In this sense, a parent region is segmented into sub-regions until a predefined stratum threshold is reached. Matching data for each region contains its motif code plus the result of uniformity detection. With the credit setting, the similarity mechanism proceeds over corresponding regions from two images in a top-down manner. Dynamic parameter adjustments in response to relevance feedback can help pursue the best retrieval results. Besides, a peck inspection technique is also added to the QMS matching metric to enhance performance. Experimental results reveal effectiveness and efficiency comparable to the Motif Co-occurrence Matrix (MCM) method, with invariance to image scaling.
In the past decade, the amount of information available on the Internet has expanded astronomically. The amount of raw data now available on the Internet presents an opportunity for businesses and researchers to derive useful knowledge from it by utilising the concepts of data mining. The area of research within data mining often referred to as web mining has emerged to extract knowledge from the Internet. Existing algorithms have been applied to this new domain, and new algorithms are being researched to address indexing and knowledge requirements. Three main areas have emerged in web mining: mining the content of web pages; mining the structure of the web; and mining the usage patterns of clients. This paper provides an overview of web mining, with an in-depth look at each of the three areas just mentioned.
Advances in software and hardware technologies have given operating systems the ability to process data and handle various concurrent processes. This increased ability has been one of the driving forces behind the proliferation of mechanisms in operating systems to satisfy the performance requirements of applications with predictable resource allocation. As different classes of applications require different resource-management policies, one needs to look into ways to satisfy all classes of applications. Conventional general-purpose operating systems have been developed for a single class of best-effort applications and are hence inadequate to support multiple classes of applications. We present an abstract architecture for the support of Quality of Service (QoS) in a Kernel-Less Operating System (KLOS). We propose new semantics for the QoS resource-management paradigm, based on the notion of Quality of Service. By virtue of these new semantics, it is possible to provide the support required by KLOS to various components of the operating system such as the memory manager, processor time, and I/O management. The mechanisms required within an operating system to support this paradigm are described, and the design and implementation of a prototypical kernel which implements them is presented. Various notions of negotiation rules between the application and the operating system are discussed, along with a feature which allows users to express their requirements; the model assures users of the selected parameters and returns feedback about the way it meets those requirements. This QoS model presents a design paradigm that allows the internal components to be rearranged dynamically, adapting the architecture to the high performance of KLOS.
The benefits of our framework are demonstrated by building a simulation model to represent how the various modules of an operating system, and the interface between the processes and the operating system, can be tailored to provide Quality of Service guarantees.
The aim of the Semantic Web is to allow access to more advanced knowledge management by organizing knowledge in conceptual spaces according to its meaning. Semantic Web agents will use the technologies of the Semantic Web to identify and extract information from Web sources. However, the implementation of the Semantic Web has a problem, namely that it ignores the different types of already-existing resources in the current implementation of the Web. As a result, many of these resources will not be included in the conceptual spaces of the Semantic Web. This research introduces a framework for a catalog that allows multiple access points to different resources and allows agents to discover the sought-after knowledge. This catalog can be established through the creation of a framework that supports computer-to-computer communication, knowledge management, and information retrieval in general.
Color-image data is generally more complex than its black-and-white counterpart in terms of the number of variables in the channels and the "color constancy" issue. Most two-dimensional (2D) codes, such as QR Code, PDF417, Data Matrix and MaxiCode, are dominant in the black-and-white domain rather than the color domain, usually owing to the technical difficulties of handling color variables. Conventional 2D codes adopt the Reed-Solomon (RS) algorithm for their error-control codes, mainly to recover from damage such as stains, scratches and spots, while highlight and shadow issues in color codes are much more complex. In this paper we propose a new decoding approach that applies RS-recoverable erasures in color codes to resolve the color recognition issue effectively. To use the erasure concept for color constancy, the strategy for marking erasures during the color-recognition processing steps is crucial. Our algorithm ultimately mitigates the overall color-recognition load in decoding color codes. Consequently, our new erasure-decision algorithm prevents color-recognition failures within the erasure-correction capability of the RS algorithm, and it can easily be applied along with other color-constancy algorithms to increase overall decoding performance using only RS erasures.
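The leverage of erasures comes from the standard Reed-Solomon correction bound: nsym parity symbols correct any pattern with 2·errors + erasures ≤ nsym, so a symbol whose color is flagged as "uncertain" (an erasure, with known position) costs half as much as a silently misread one. A minimal check of this bound; the parameter values are illustrative, not those of any particular color code:

```python
def rs_correctable(nsym, n_errors, n_erasures):
    """Reed-Solomon bound: a code with nsym parity symbols corrects a
    received word iff 2 * errors + erasures <= nsym."""
    return 2 * n_errors + n_erasures <= nsym

# With 8 parity symbols, 5 misread colors are uncorrectable as errors...
print(rs_correctable(8, n_errors=5, n_erasures=0))  # False
# ...but correctable if flagged as erasures during color recognition.
print(rs_correctable(8, n_errors=0, n_erasures=5))  # True
```

This is why a good erasure-marking strategy during color recognition can rescue codewords that would otherwise exceed the error-correction capability.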
Collaboration-based design is a well-known method for constructing complex software systems [1, 12, 13]. A collaboration implements one feature of the system. Because collaborations are developed independently, they might easily produce methods with identical signatures with no intention of overriding [7, 10]. This paper differentiates between accidental and intended overriding and proposes a solution to the problem generated by overriding method signatures between collaborations. Our solution's goal is clarity, measured by its ease of use for developers.
Grid computing environments are being extended in order to present features that are typically found in pervasive computing environments. In particular, Grid environments have to allow mobile users to access their services and resources, and they have to adapt services depending on mobile-user location and context. This paper presents a two-layer location service that locates mobile users in both intra-Grid and extra-Grid architectures. The lower layer of the location service locates mobile users within a single intra-Grid environment, i.e., mobile users can move among different areas of the physical environment and the Grid can provide its services according to the user's position. The upper layer locates mobile users in an extra-Grid, which is composed of a distributed federation of intra-Grids, by collecting location information coming from the lower layers. The location service has been developed on top of the standard OGSA architecture.
This paper shows how to modify an existing programming workbench to make it useful and efficient for developing interfaces for blind and partially sighted people. This work is based on abstract data types and components.
Head-Related Impulse Responses (HRIRs) are used in signal processing to model the synthesis of spatialized audio, which is used in a wide variety of applications, from computer games to aids for the vision impaired. They represent the modification to sound due to the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRIRs are somewhat different for each listener and require expensive specialized equipment for their measurement. Therefore, the development of a method to obtain customized HRIRs without specialized equipment is extremely desirable. In previous research on this topic, Prony's modeling method was used to obtain an appropriate set of time delays and a resonant frequency to approximate measured HRIRs. During several recent experimental attempts to improve on this previous method, a noticeable increase in percent fit was obtained using the Steiglitz-McBride iterative approximation method. In this paper we report on the comparison between these two methods and the statistically significant advantage found in using the Steiglitz-McBride method for the modeling of most HRIRs.
Even though automated hand-written character recognition can be highly accurate, most of these systems are unable to apply context to improve results, unlike human readers. This paper describes an approach to automated hand-written character recognition that seeks explicit features amongst the input data and applies layered abduction to derive an explanation that accounts for the input in terms of English characters. Layered abduction is used because it can provide top-down guidance to improve accuracy. Such an approach has been taken here, resulting in more than 96% accuracy for hand-written printed character recognition in a limited domain.
Software development and deployment is based on a well-established two-stage model: program development and program execution. Since the beginning of software development, meta-programming has proven to be useful in several applications. Even though interpreted languages have successfully exploited the ability to generate programs, meta-programming is still far from being mainstream. The two-stage model is largely due to the computing metaphors adopted for defining programs, and is deeply entrenched within the whole software management infrastructure. In this paper we explore how a runtime, multi-staged, self-contained meta-programming system based on the notion of partial application can provide suitable support for programs capable of evolving over time. To make the discussion more concrete, we discuss two scenarios where the suggested infrastructure is used: software installation, and robot control by means of programs embedding knowledge into their code rather than into data structures.
For the mass demand for wireless communication application services, mobile location technologies have drawn much attention from governments, academia, and industry around the world. In wireless communication, one of the main problems facing accurate location is non-line-of-sight (NLoS) propagation. To solve this problem, we present a new location algorithm with clustering technology that utilizes the geometrical features of the cell layout, time-of-arrival (ToA) range measurements, and three base stations. The mobile location is estimated by solving for the optimal solution of the objective function based on the high-density cluster. A simulation study was conducted to evaluate the performance of the algorithm for different NLoS errors. The results of our experiments demonstrate that the proposed algorithm is significantly more effective in location accuracy than the linear line-of-position algorithm and the Taylor-series algorithm, and also satisfies the location accuracy demand of E-911.
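The core objective can be sketched as a least-squares fit over three base stations: find the position minimizing the sum of squared residuals between measured ToA ranges and candidate-to-station distances. The brute-force grid search, the station layout, and the noise-free ranges below are assumptions for illustration; the paper's cluster-based optimization and NLoS handling are not reproduced.

```python
import math

def toa_estimate(bases, ranges, step=0.1, span=10.0):
    """Grid-search the position minimizing the ToA objective
    sum_i (||p - b_i|| - r_i)^2 over a span x span area."""
    best, best_err = (0.0, 0.0), float("inf")
    n = int(span / step)
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = i * step, j * step
            err = sum((math.hypot(x - bx, y - by) - r) ** 2
                      for (bx, by), r in zip(bases, ranges))
            if err < best_err:
                best, best_err = (x, y), err
    return best

bases = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # assumed cell layout
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in bases]
print(toa_estimate(bases, ranges))  # close to (3.0, 4.0)
```

With NLoS propagation the measured ranges are biased upward, which is why the paper replaces this naive fit with a clustering-based objective.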
The solution of the problem of mathematically describing the heat exchange, gas dynamics and physicochemical phenomena taking place in a blast furnace, in their correlations, is considered, together with some of its applications to the study of the processes defining the reduction of metals from multicomponent iron ores.
A new approach to problems of land allocation for livestock grazing, combining both computer-science and social-science tools, has been developed since 1998 in Senegal, especially in the Ferlo area. Here we examine the implications for the different research centres such as CIRAD, the Pôle Pastoral Zones Sèches (PPZS), the ESP and the laboratories of the UCO. A computer simulation (MAS), based on sociological data, has been introduced in order to obtain a more neutral evaluation of the possible approaches to the problem, for example the approach of the local people involved, together with that of regional land development and allocation policy. This is preferable to a practice that consists of experimenting with policy in real-life situations, with potentially dire consequences for the population and the environment. This initiative aims both to stabilise the social position of shepherds/herders and to preserve the production potential of the ecosystems used for grazing. Its approach is one of sustainable development, and the underlying theory must be questioned in order to clarify its empirical scope and pertinence. This applied research (which, owing to the questions it asks, is rooted "in the most immediate reality" [1]) should not be naïve concerning its implications. Is it possible, with theoretical knowledge and an objective approach, to give research results to local management, especially when the latter is separated from the decision-making process? The project's aim is to modify social behaviour, to rationalise it or to induce new behaviour. Groups or individuals that have to modify their behaviour as a result will naturally raise questions concerning the social benefits of such changes. Who is destined to benefit from the MAS platform? From the perspective of increased well-being of local populations, a long-term follow-up is required, together with an evaluation of the social relations resulting from its use.
It will certainly be necessary to implement a large
This paper provides a review of modelling metadata for adaptive Hyperbooks. We analyse the adaptivity and conceptual model for representing and sorting a Hyperbook, the languages and tools for metadata, and the architecture of the Hyperbook.
Extracting biological significance from a large microarray dataset using data-mining clustering techniques is an important process in bioinformatics. In this paper, a microarray dataset (a 504 × 227 matrix) made available by the SAMSI institute was used as the base sample to develop a new demo web-based clustering system that exploits the improved efficiency and functionality of PHP/MySQL technology. The clustering algorithms and the robustness of PHP/MySQL produced categorized microarray data that can be associated with diseases, with improved visualizations.
Electronic Made-to-Measure (eMTM) is a new mode of garment production. This paper analyses the state of eMTM, introduces the functional components of the eMTM infrastructure, designs its workflow, and then gives one feasible solution. The research work of this paper will support future research on, and the popularization of, eMTM in China.
Ant systems are flexible to implement and scale well because they are based on multi-agent cooperation. The aim of this publication is to show the universal character of that solution and the potential to implement it in wide areas of application. The increasing demand for effective methods of managing big document collections is sufficient stimulus to pursue research on new applications of ant-based systems in the area of text-document processing. The author defines the ACO (Ant Colony Optimization) meta-heuristic, which is the basis of the method he developed. Presentation of the details of the ant-based document clustering method forms the main part of the publication.
We develop a dynamic game model to study the optimal control of the economies in a two-country monetary union under strategic interactions between macroeconomic policy-makers. In this union, governments of participating countries pursue national goals when deciding on fiscal policies, whereas the common central bank's monetary policy aims at union-wide objective variables. For a symmetric demand shock, we derive numerical solutions. The different solution concepts for this game serve as models of a conflict between national and supra-national institutions (non-cooperative Nash equilibrium) on the one hand and coordinated policy-making (cooperative Pareto solutions) on the other. We show that there is a trade-off between instruments' and targets' deviations from desired paths; moreover, the volatility of output and inflation increases when private agents react more strongly to changes in actual inflation.
Software engineering is now required to face the complexity of the development, maintenance and evolution of software systems. Among the solutions proposed for managing this complexity, Model Driven Engineering (MDE) has been accepted and implemented as one of the most promising. In this approach, models become the hub of development, separating platform-independent aspects from platform-dependent aspects. Among the more important issues in MDE are model transformation and model matching (or schema matching). In this paper, we propose an approach that takes schema matching into account in the context of MDE. A schema-matching algorithm is provided and implemented as a plug-in for Eclipse. We discuss an illustrative example to validate our approach.
Conventional Human Computer Interaction requires the use of hands for moving the mouse and pressing keys on the keyboard. As a result paraplegics are not able to use computer systems unless they acquire special Human Computer Interaction equipment. In this paper we describe a system that aims to provide paraplegics the opportunity to use computer systems without the need for additional invasive hardware. Our system uses a standard web camera for capturing face images showing the user of the computer. Image processing techniques are used for tracking head movements, making it possible to use head motion in order to interact with a computer. The performance of the proposed system was evaluated using a number of specially designed test applications. According to the quantitative results, it is possible to perform most HCI tasks with the same ease and accuracy as in the case that a touch pad of a portable computer is used. Currently our system is being used by a number of paraplegic users.
We present a model for the anomalies of software engineering that cause system failures. Our model enables us to develop a classification scheme in terms of the types of errors that are responsible, as well as to identify the layers of a system architecture at which control mechanisms should be applied.
Recent high-speed networks provide new features such as DMA and programmable network cards. However, standard network protocols like TCP/IP still assume a more classical network architecture, usually adapted to Ethernet networks. In order to use high-speed networks efficiently, protocol implementors should use the new features provided by recent networks. This article provides an advanced study of both the hardware and software requirements of high-speed network protocols. A focus is placed on the programming model and the steps involved in the transfer's critical path.
This paper defines two suites of metrics, which cater to the static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, wherein complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticalities, have been identified and quantified. The complexity and criticality metrics are combined into a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the utilisation of components. In this paper, both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means to measure issues in component assembly.
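As a rough illustration only (the paper defines the exact formulas; the ratios below are an assumed reading of the metric names, not the authors' definitions), the two static density metrics can be sketched as simple ratios:

```python
def component_packing_density(constituent_count, component_count):
    # Assumed reading of Component Packing Density: the average number of
    # constituents (e.g. lines of code, classes, interfaces) packed into
    # each component of the assembly.
    return constituent_count / component_count

def component_interaction_density(actual_interactions, available_interactions):
    # Assumed reading of Component Interaction Density: the fraction of the
    # available interactions among component interfaces that are actually used.
    return actual_interactions / available_interactions

# A hypothetical assembly: 120 classes spread over 10 components,
# using 5 of 20 possible interface interactions.
print(component_packing_density(120, 10))       # 12.0
print(component_interaction_density(5, 20))     # 0.25
```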
Many iterative search processes, or adaptive plans, that aim to find an optimal solution in a given problem domain suggest that an optimal search process has an exponential character. Plans that consist of multiple strategies running in parallel, such as bandit searches, aim to demonstrate this pattern in the probabilistic distributions of finding the best observed strategy amongst a number of alternatives. This paper introduces a hypothetical adaptive plan that consists of three strategies: one guarantees a better result with each iteration, one yields comparable results, and one guarantees worse results. The idea behind this approach is the suspicion that every adaptive plan can essentially be mapped to these three base strategies, and that the exponential character of an optimal plan is a trait of its recurrent character.
This paper is a proposal for the construction of a pseudo-net built with precisely defined tokens describing the content and structure of the original WWW. This construction is derived by morphosyntactical analysis and should be structured with a post-processing mechanism. An in-depth analysis of the requirements and hypotheses that must be stated to accomplish this goal is also provided. The derived structure could act as an alternate thematic network organization with a compacted version of the original web material. This paper does not describe or study the post-processing approaches; these are left as future work. A statistical analysis is presented here, evaluating the degree of understanding of a hand-made structure built with some tokens derived under the hypotheses presented here. A comparison with the keyword approach is also provided.
In this project, a platform (hardware-software) was designed and implemented for controlling and monitoring, over the Internet, the physical variables of a greenhouse, such as temperature and luminosity. For this purpose, a new type of microcontroller, TINI (Tiny InterNet Interfaces), was used, which can act as a small server with additional advantages such as being programmable in the Java language and supporting many communication protocols, like the 1-Wire protocol. Because of the way the platform was designed and implemented, this technology could be incorporated at very low cost into the PYMES (Small and Medium Companies) dedicated, for example, to the production of flowers. An additional value of the platform is its adaptability to other applications such as laboratories, manufacturing monitoring and control, and monitoring systems, among others.
One of the major challenges of a pervasive environment is the need for adaptation of content to suit a client’s specific needs and choices such as the client's preferences, the characteristics of the client device, the characteristics of the network to which the client is currently connected, as well as other related factors. In order for the adaptation to be efficient while satisfying the client's requirements and maintaining the semantics and quality of the content, the adaptation system needs to have adequate information regarding the content to be adapted, the client's profile, the network profile and others. The information regarding the content can be found from the content metadata. This work addresses the issue of content metadata management in a pervasive environment in relation to content adaptation. For this purpose, a distributed architecture for the management of metadata of multimedia content is proposed. The proposed architecture consists of components for storage, retrieval, update, and removal of metadata in the system. It also provides interfaces to external components through which metadata can be accessed. In addition, it proposes ways to specify, in the metadata, restrictions on the adaptations that may be applied on the content. This enables the content creator to impose constraints on adaptations that may potentially cause the loss of critical information or a decrease in the quality of the adapted content.
A common metric used for assessing the overall reliability of a memory hierarchy is the Mean Time To Failure (MTTF), but it does not take into account the time data is stored at each level. We propose another metric, the Data Loss Rate (DLR), for this purpose. We derive a recurrence formula for computing the DLR and validate it by computer simulation.
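The abstract does not give the authors' recurrence, but the idea of weighting failures by residence time at each level can be sketched as follows (a minimal model, assuming exponentially distributed failures per level; the failure rates and times below are hypothetical):

```python
import math

def level_loss_prob(failure_rate, residence_time):
    # Probability that a failure hits the data during its stay at one level,
    # assuming exponentially distributed failures (modeling assumption).
    return 1.0 - math.exp(-failure_rate * residence_time)

def data_loss_rate(levels):
    # levels: list of (failure_rate, residence_time) pairs, one per level
    # of the hierarchy. The data survives only if it survives every level,
    # so the overall loss rate accumulates level by level.
    survival = 1.0
    for rate, t in levels:
        survival *= 1.0 - level_loss_prob(rate, t)
    return 1.0 - survival

# Hypothetical three-level hierarchy: cache (seconds), RAM (an hour),
# disk (a day), with decreasing failure rates.
hierarchy = [(1e-6, 10.0), (1e-8, 3600.0), (1e-10, 86400.0)]
print(data_loss_rate(hierarchy))
```

Unlike a single MTTF figure, this formulation changes when the residence times change, which is the shortcoming of MTTF that the abstract points out.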
In order to conceive and design cooperative information systems that better adapt to the dynamics of modern organizations in a changing environment, this paper explores the requirements and approaches for building “living” cooperative information systems for virtual organizations, both in terms of system flexibility and the co-evolution of organization and information systems. The object of our case study is the Beijing Organizing Committee for the Games of the XXIX Olympiad. To meet the requirements of “living” cooperative information systems in the context of virtual organizations, we propose a unified peer-to-peer architecture based on the foundational concepts and principles of Miller’s Living Systems Theory, a widely accepted theory about how all living systems “work”. In our system framework, every peer belongs to one of six organizational levels, i.e., peers, groups, organizations, communities, societies, and supranational systems. Each level has the same types of components but different specializations. In addition, every peer has the same critical subsystems that process information. The case studies show how our architecture effectively supports changing organizations, dynamic businesses, and decentralized ownership of resources.
In this paper, the advantages and disadvantages of artificial neural networks (ANNs) and Case-Based Reasoning (CBR) are briefly introduced. The capacity of a network can be improved through the mechanism of CBR in a dynamic processing environment. Conversely, the limitation of CBR, that it cannot complete its reasoning process and propose a solution to a given task without the intervention of experts, can be compensated for by the strong self-learning ability of ANNs. The combination of these two artificial intelligence techniques not only helps to control quality and enhance efficiency, but also shortens the design cycle and saves cost, which plays an important role in raising the level of intelligence in the textile industry. At the same time, by using an ANN prediction model, the sensitive process variables that affect the processing performance and quality of yarn and fabric can be determined; these are often adjusted when solving new problems in order to obtain the desired techniques.
The development of WBHAs is moving fast due to an explosive increase in Internet/WWW use. Furthermore, the ability to use Internet/WWW technologies in closed environments (e.g., intranets and extranets) has also helped the spread of WBHA development. However, the classical life-cycle model reveals several limitations in terms of describing the process of developing WBHAs. Recently, several design methods such as RMM [15], HDM [1], OOHDM [25], and EORM [3] have been proposed for the design requirements of WBHAs. However, none of them addresses the life-cycle of WBHA development. Hence, in this paper, we first study different WBHA architectures, then identify a list of requirements for WBHA development. After that, we show how the waterfall model lacks the capability to satisfy these requirements. Finally, we propose a new life-cycle model for WBHA development.
The problem of reverse engineering assembly language projects for microcontrollers in embedded systems is approached in this paper. A tool for analyzing projects is described; it starts from the source files of the project to be analyzed, grouped in a project folder, together with a configuration file, and generates diagrams describing the program’s functionality. The tool is useful to the programmer and to the project manager in different phases of a project: code review, design review and development. It is also useful for understanding and documenting older projects.
Air contamination is one of the biggest problems affecting countries in almost every part of the world. The increase in the quantities of gases in the environment has been verified on a world scale, and each day it becomes more obvious that the answer to these problems should concentrate on the search for intelligent solutions. In Chile, the law establishes the obligation to develop decontamination plans in areas where pollutant levels systematically exceed the environmental norms, and prevention plans where these norms are in danger of being exceeded. During the autumn-winter season, the population of Santiago’s city centre is affected by sudden increases in the levels of air contamination. This situation is known as a critical episode of atmospheric contamination, and it takes place when high concentrations of pollutants are registered during a short period of time. These episodes originate from the convergence of a series of meteorological factors that impede the ventilation of Santiago’s basin, together with an increase in emissions prior to the episode. According to the existing by-law, the criterion for declaring a critical episode of contamination is based on the ICAP index, which is generated from the data of the gas and particle measurement network and fed into a forecasting model developed by the physicist J. Cassmassi [44], [47], [55]. In this way, the authority makes decisions regarding the critical episodes that can affect Santiago’s city centre depending on the predictions given by the model. Our research work is framed within the line of searching for intelligent methodologies for the prediction and control of environmental problems in Santiago’s city centre and its Metropolitan Area.
This contribution emphasizes the appropriateness of using human experience for designing rule-based user displays for controlling technical systems. To transform the human (operator) experience into a computer presentation, suitable means are needed. The proposed method and technique for designing user-machine interfaces for system control uses fuzzy logic techniques to convert human experience into a computer representation, and computed values to animate graphical objects on the user display. This modest contribution investigates and clarifies the reasons for considering the appropriateness of fuzzy logic in designing rule-based human-machine interfaces for technical system control.
While existing Web Services standards provide the basic functionality needed to implement Web Service-based applications, wider acceptance of the Web Service paradigm requires improvements in several areas. A natural extension of the existing centralized approach to service discovery and selection is to rely on the community of peers—clients that can share their experiences obtained through previous interactions with service providers—as the source for both service discovery and service selection. We show that distributed algorithms for trust maintenance and reputation acquisition can effectively supplant the current centralized approach, even in cases where not all possible information is available at any given time. Some preliminary results demonstrate the feasibility of the trust and reputation-based service selection approach.
Two different methods for obtaining software programs with predictable quality are compared by positioning these models in a product/process and confirmation/improvement framework. The Mafteah/Method A method can be placed in the confirmation segment, whereas the software Capability Maturity Model can be positioned in the improvement/process quadrant.
Predicting the next request of a user as they visit Web pages has gained importance as Web-based activity increases. A large amount of research has been done on trying to correctly predict the pages a user will request. This task requires the development of models that can predict a user’s next request to a web server. In this paper, we propose a method for constructing first-order and second-order Markov models of Web site access prediction based on past visitor behavior, and compare it with the association rules technique. In these approaches, sequences of user requests are collected by the session identification technique, which distinguishes requests for the same web page in different browsing sessions. We report experimental studies using real server logs to compare the methods and show their degree of precision.
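A first-order Markov model of the kind described can be sketched as follows (a minimal illustration, not the paper's implementation; the session data is hypothetical): transition probabilities are estimated from consecutive request pairs in past sessions, and prediction picks the most probable successor.

```python
from collections import defaultdict

def build_first_order_model(sessions):
    # sessions: list of page-request sequences, e.g. [["/a", "/b"], ...].
    # Count transitions between consecutive requests within each session.
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            counts[current][nxt] += 1
    # Normalize the counts into transition probabilities.
    model = {}
    for page, successors in counts.items():
        total = sum(successors.values())
        model[page] = {n: c / total for n, c in successors.items()}
    return model

def predict_next(model, page):
    # Return the most probable next page, or None for an unseen page.
    if page not in model:
        return None
    return max(model[page], key=model[page].get)

sessions = [["/home", "/news", "/sports"], ["/home", "/news", "/home"]]
model = build_first_order_model(sessions)
print(predict_next(model, "/home"))  # → /news
```

A second-order model would condition on the last two requests instead of one, trading a larger state space for more context.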
Metadata may be used for handling statistical information. Some metadata standards have already emerged as guidelines for information processing within statistical information systems. Conceptual models for metadata representation have to address, besides the dataset itself, additional data objects occurring in connection with a dataset. A unified description framework for such data objects is discussed with respect to metadata handling. Basic ideas on the integration and translation of metadata standards are given with a focus on the framework. Here, principles of ontology engineering play a key role as a starting point.
The traditional auction protocols (e.g., Dutch and English auctions) that eBay, Amazon and Yahoo use, although considered success stories [9], have a limited negotiation space. The combinatorial auction (CA) and multi-attribute auction (MAA) [10], [17] have been developed to address these shortcomings, but even these do not allow users to negotiate more than one attribute at a time. As an answer to these limitations, a new e-auction protocol has been created to enable agents to negotiate on many attributes and combinations of goods simultaneously. This paper therefore shows how the automated hybrid auction was created to reduce computational and bid evaluation complexity, based on the CMOA bidding language and Social Construction of Technology (SCOT) principles. SCOT states that (i) the ’relevant social group’; (ii) their ’interpretative flexibility’; and (iii) the workability/functionality of the technology must be considered in the development of systems such as an e-auction offered as an alternative to eBay. SCOT holds that technologies emerge out of a process of choice and negotiation between ’relevant social groups’ [15], [8], in this case the bidders, auctioneers, sellers and auction house. This paper represents a collaboration of studies in progress: the Combinatorial Multi-attribute Auction as an online auction compared with existing e-auction protocols such as Amazon and eBay, and the application of intelligent software and Agent UML in e-auctions.
There were great expectations in the 1980s in connection with the practical applications of mathematical processes built mainly upon the mathematical basis of fractal dimension. Early results were achieved in several fields: the examination of material structure, the simulation of chaotic phenomena (earthquakes, tornadoes), the modelling of real processes with the help of information technology equipment, and the definition of the length of rivers or riverbanks. Significant practical results were later achieved in the fields of information technology, certain image processing areas, data compression, and computer classification. In the present publication, the well-known algorithms for calculating fractal dimension in a simple way are introduced, as well as the new mathematical concept named by the author ’spectral fractal dimension’ [8], the algorithm derived from this concept, and the possibilities of their practical usage.
When a desired signal is immersed in a noisy environment, active noise cancellation techniques can be used to extract it. The presented algorithm is based on the standard Least Mean Squares (LMS) algorithm developed by Bernard Widrow. Modifications to the LMS algorithm were made in order to optimize its performance in extracting a desired speech signal from a noisy environment. The system consists of two adaptive systems running in parallel, with one having a much higher convergence rate to provide rapid adaptation in a non-stationary environment. However, the output of the faster-converging system results in distorted speech. Therefore, the second system, which runs at a lower convergence rate but regularly has its coefficients updated by the first system, provides the actual output of the desired signal. All of the algorithm development and simulation were initially performed in MATLAB, and were then implemented on a TMS320C6416 Digital Signal Processor (DSP) evaluation board to produce a real-time, noise-reduced speech signal.
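The standard LMS building block underlying the two parallel systems can be sketched as follows (a textbook-style illustration of Widrow's LMS, not the paper's modified dual-rate implementation; the tap count and step size are arbitrary):

```python
def lms_filter(desired, reference, num_taps=8, mu=0.01):
    # Standard LMS adaptive noise canceller: the reference (noise) input is
    # filtered to estimate the noise component of the desired (signal+noise)
    # input; the error e = d - y is the cleaned output signal.
    weights = [0.0] * num_taps
    buffer = [0.0] * num_taps
    output = []
    for d, x in zip(desired, reference):
        buffer = [x] + buffer[:-1]                  # shift in newest sample
        y = sum(w * b for w, b in zip(weights, buffer))
        e = d - y                                   # error = cleaned output
        # LMS weight update: w <- w + 2*mu*e*x
        weights = [w + 2 * mu * e * b for w, b in zip(weights, buffer)]
        output.append(e)
    return output
```

A larger mu converges faster but distorts more, which is exactly the trade-off the two-system design in the abstract addresses: the fast filter tracks the environment and periodically copies its coefficients into the slow, low-distortion filter that produces the output.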
Many software organizations bypass the requirements analysis phase of the software development life cycle and skip directly to the implementation phase in an effort to save time and money. Such an approach often leads to projects that miss the expected deadline, exceed the budget, and fail to meet user needs or expectations. One of the primary benefits of requirements analysis is to catch problems early and minimize their impact with respect to time and money. This paper is a literature review of the requirements analysis phase and the multitude of techniques available to perform the analysis. It is hoped that, by compiling the information into a single document, readers will be in a better position to understand the requirements engineering process, and that analysts will gain a compelling argument as to why it should be employed in modern-day software development.
An iterative algorithm was developed to fit Fisher's law for heavy ion collisions with distinct charge balance, obtaining different critical temperatures in agreement with recent theoretical and experimental results. In this way, the influence of charge balance on the caloric curve of nuclear matter is confirmed.
Scheduling in real-time systems is an important problem due to its role in practical applications. Among the scheduling algorithms proposed in the literature, static priority scheduling algorithms have less run-time scheduling overhead due to their logical simplicity. Rate monotonic scheduling is the first static priority algorithm proposed for real-time scheduling [1]. It has been extensively analyzed and heavily used in practice for its simplicity. One of the limitations of rate monotonic scheduling, as shown recently in [26], is that it incurs a significant number of preemptions. The goal of this paper is to propose static priority scheduling algorithms with reduced preemptions. We present two frameworks, called off-line activation-adjusted scheduling (OAA) and adaptive activation-adjusted scheduling (AAA), from which many static priority scheduling algorithms can be derived by appropriately implementing the abstract components. The proposed algorithms reduce the number of unnecessary preemptions and hence: (i) increase processor utilization in real-time systems; and (ii) increase task schedulability. We conducted a simulation study for selected algorithms derived from the frameworks, and the results indicate that the algorithms reduce preemptions significantly. The appeal of our algorithms is that they achieve a significant reduction in preemptions while keeping the simplicity of static priority algorithms intact.
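For readers unfamiliar with the baseline, rate monotonic scheduling assigns fixed priorities by period (shorter period, higher priority), and the classic Liu-Layland utilization bound gives a sufficient schedulability test. The sketch below shows that baseline only, not the OAA/AAA frameworks; the task set is hypothetical:

```python
def rate_monotonic_priorities(tasks):
    # tasks: list of (name, period, wcet) tuples.
    # Rate monotonic rule: shorter period -> higher static priority.
    return sorted(tasks, key=lambda t: t[1])

def liu_layland_schedulable(tasks):
    # Sufficient (not necessary) utilization-bound test:
    #   U = sum(C_i / T_i) <= n * (2^(1/n) - 1)
    n = len(tasks)
    utilization = sum(wcet / period for _, period, wcet in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

tasks = [("t1", 10, 2), ("t2", 20, 4), ("t3", 40, 8)]
print([t[0] for t in rate_monotonic_priorities(tasks)])  # ['t1', 't2', 't3']
print(liu_layland_schedulable(tasks))  # U = 0.6 <= 0.7797..., so True
```

Every preemption under this policy occurs when a shorter-period task is released while a longer-period task runs; the frameworks in the paper reduce exactly these preemptions by adjusting task activations.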
For the last 10 years, ageing well in the community has become a key concern of the European Union and its member states. Action plans as well as distinct programs such as the Ambient Assisted Living (AAL) Joint Programme are evidence of this engagement. Since then, many AAL products and services have been developed and implemented in the European market. Up to now, however, access to and the availability of these solutions have been difficult, and the information on their use is scarce. ActiveAdvice, an AAL EU-funded project, aims to develop an online platform that offers both information on AAL solutions and advice to end users. This paper discusses the application of a multi-stakeholder perspective approach. It discusses the user-centered development and reflects on the establishment of AAL ecosystems and the functional requirements of the ActiveAdvice platform. It includes an extended methodological framework, which explains conclusively the ActiveAdvice stakeholders’ identification process and the user-centered requirements analysis, built on 38 semi-structured interviews with three stakeholder groups: consumers, businesses and governments. The integration of different stakeholders in the development and implementation of AAL solutions is a necessity as well as a challenge. This also holds true for the development of the ActiveAdvice platform.