This article focuses on the importance of the precise calculation of
similarity factors between papers and reviewers for performing a fair and
accurate automatic assignment of reviewers to papers. It suggests that papers
and reviewers' competences should be described by a taxonomy of keywords, so
that the implied hierarchical structure allows similarity measures to take
into account not only the number of exactly matching keywords but also, for
non-matching ones, how semantically close they are. The paper also suggests a
similarity measure derived from the well-known and widely used Dice's
coefficient, adapted so that it can also be applied to sets whose elements
are semantically related to one another (as concepts in a taxonomy are). This
allows a non-zero similarity factor to be calculated accurately between a
paper and a reviewer even if they do not share any keyword in common.
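The adapted measure is not given in full above; one plausible sketch is a "soft" Dice coefficient in which non-matching keywords contribute their best pairwise taxonomy similarity instead of zero. The `semantic_dice` function and the toy `PARENT` taxonomy below are illustrative assumptions, not the paper's exact formula:

```python
# A "soft" Dice coefficient: exact matches contribute 1, non-matching
# keywords contribute their best pairwise semantic similarity instead of 0.
# With a strict 0/1 similarity this reduces to the classic Dice coefficient.
def semantic_dice(set_a, set_b, sim):
    if not set_a or not set_b:
        return 0.0
    soft = (sum(max(sim(a, b) for b in set_b) for a in set_a)
            + sum(max(sim(a, b) for a in set_a) for b in set_b))
    return soft / (len(set_a) + len(set_b))

# Toy taxonomy: siblings under the same parent get similarity 0.5.
PARENT = {"svm": "ml", "hmm": "ml", "sql": "db"}

def toy_sim(x, y):
    if x == y:
        return 1.0
    px, py = PARENT.get(x), PARENT.get(y)
    return 0.5 if px is not None and px == py else 0.0

# No shared keyword, yet a non-zero similarity: svm and hmm are siblings.
print(semantic_dice({"svm"}, {"hmm"}, toy_sim))  # → 0.5
```

A paper and reviewer with disjoint but semantically close keyword sets thus receive a meaningful non-zero score, which is exactly the property the abstract claims.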
Manual time measurement in mass sporting competitions would be unimaginable
nowadays, because many modern disciplines, such as IRONMAN, last a long time
and therefore demand additional reliability. Moreover, automatic timing
devices based on RFID technology have become cheaper. However, these
devices cannot operate stand-alone, because they need a computer measuring
system that is capable of processing incoming events, encoding the results,
assigning them to the correct competitor, sorting the results according to the
achieved times, and then providing a printout of the results. This article
presents the domain-specific language EasyTime, which enables the control of
a measuring agent by writing events into a database. It focuses, in
particular, on the implementation of EasyTime with the LISA tool, which
enables the automatic construction of compilers from language specifications
using Attribute Grammars.
In recent years, smartphones have become prevalent. Much attention is being
paid to developing and making use of mobile applications that require position
information. The Global Positioning System (GPS) is a very popular localization
technique used by these applications because of its high accuracy. However, GPS
incurs an unacceptable energy consumption, which severely limits the use of
smartphones and reduces the battery lifetime. Thus, an urgent requirement for
these applications is a localization strategy that not only provides
sufficiently accurate position information to meet users' needs but also consumes less
energy. In this paper, we present an energy-efficient localization strategy for
smartphone applications. On one hand, it can dynamically estimate the next
localization time point to avoid unnecessary localization operations. On the
other hand, it can also automatically select the energy-optimal localization
method. We evaluate the strategy through a series of simulations. The
experimental results show that it can significantly reduce the localization
energy consumption of smartphones while ensuring a good degree of user
satisfaction.
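As a rough illustration of the two ideas above (deferring the next fix and choosing the cheapest adequate method), the sketch below uses hypothetical per-method accuracy and energy figures; the method names, the numbers, and the simple dead-reckoning bound are assumptions, not the paper's actual strategy:

```python
# Hypothetical per-fix accuracy (meters) and energy cost (millijoules);
# real values are device-specific and not taken from the paper.
METHODS = {
    "gps":  {"accuracy_m": 10,  "energy_mj": 1400},
    "wifi": {"accuracy_m": 40,  "energy_mj": 600},
    "cell": {"accuracy_m": 400, "energy_mj": 300},
}

def pick_method(required_accuracy_m):
    """Return the cheapest method whose accuracy meets the requirement."""
    candidates = [(p["energy_mj"], name)
                  for name, p in METHODS.items()
                  if p["accuracy_m"] <= required_accuracy_m]
    if not candidates:
        return "gps"  # fall back to the most accurate method
    return min(candidates)[1]

def next_fix_interval(speed_mps, accuracy_budget_m, min_s=5, max_s=300):
    """Delay the next fix until the position uncertainty (speed x time)
    could exceed the accuracy budget -- a simple dead-reckoning bound."""
    if speed_mps <= 0:
        return max_s  # stationary user: localize rarely
    return max(min_s, min(max_s, accuracy_budget_m / speed_mps))

print(pick_method(50))             # wifi is the cheapest method meeting 50 m
print(next_fix_interval(1.5, 60))  # → 40.0 seconds until 60 m of drift
```

Both decisions together avoid unnecessary GPS fixes, which is where the bulk of the claimed energy savings would come from.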
The theoretical outcomes and experimental results of a new color model implemented in image-processing algorithms and software are presented in this paper. This model, as shown below, may be used in modern real-time video processing applications such as radar tracking and communication systems. The developed model allows image processing to be carried out with minimal time delays (i.e., it speeds up image processing). The proposed model can be used to solve the problem of true-color object identification. Experimental results show that the time spent on RGI color model conversion may be approximately four times less than the time spent on other, similar models.
As a video coding standard, H.264 achieves a high compression rate while maintaining good fidelity. However, it requires more intensive computation than its predecessors to reach such high coding performance. We propose a Hierarchical Multi-Level Parallelism (HMLP) framework for the H.264 encoder that integrates four levels of parallelism - frame-level, slice-level, macroblock-level, and data-level - into one implementation. Each level of parallelism is designed in a hierarchical parallel framework and mapped onto the multiple cores and SIMD units of a multi-core architecture. Based on an analysis of the coding performance of each level of parallelism, we propose a method for combining the different parallel levels to attain a good compromise between high speedup and low bit rate. The experimental results show that, for CIF-format video, our method achieves a speedup of 33.57x-42.3x with a 1.04x-1.08x bit-rate increase on an 8-core Intel Xeon processor with SIMD technology.
This paper proposes an accelerometer-based gesture recognition algorithm. As a pre-processing step, the raw data output by the accelerometer is quantized; a discrete Hidden Markov Model is then used to train on and recognize the resulting sequences. Based on this recognition algorithm, we treat gesture as a method of human-computer interaction and use it in the 3D interaction subsystem of a VR system named VDOM, following these steps: establish a Gesture-Semantic Map, train standard gestures, and finally perform recognition. Experimental results show that the system can recognize input gestures quickly with a reliable recognition rate. Users are able to perform most typical interaction tasks in the virtual environment with this accelerometer-based device.
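One common way to quantize 3-axis accelerometer samples into the discrete symbols a discrete HMM needs is to threshold each axis into a small number of levels; the sketch below, including the 0.2 g dead zone, is an illustrative assumption, not the paper's quantizer:

```python
def quantize(ax, ay, az, dead_zone=0.2):
    """Map a raw 3-axis accelerometer sample to one of 27 discrete symbols
    by thresholding each axis into three levels. A discrete HMM can then be
    trained on the resulting symbol sequences. The dead zone suppresses
    sensor noise around zero; 0.2 g is an illustrative choice."""
    def level(v):
        if v > dead_zone:
            return 2   # clearly positive
        if v < -dead_zone:
            return 0   # clearly negative
        return 1       # near zero
    # Encode the three ternary levels as a single symbol in 0..26.
    return level(ax) * 9 + level(ay) * 3 + level(az)

# A "tilt right" sample: strong +x, negligible y and z.
print(quantize(0.9, 0.05, -0.1))  # → 22
```

Each gesture then becomes a sequence of symbols in the range 0..26, which matches the finite observation alphabet a discrete HMM expects.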
We propose a new method for matching two 3D point sets of identical cardinality with global similarity but local non-rigid deformations and distribution errors. This problem arises from marker-based optical motion capture (Mocap) systems for facial Mocap data. To establish one-to-one identifications, we introduce a forward 3D point pattern matching (PPM) method based on spatial geometric flexibility, which accounts for a non-rigid deformation between the two point sets. First, a model normalization algorithm based on simple rules is presented to normalize the two point sets into a fixed space. Second, a facial topological structure model is constructed to preserve spatial information for each feature point (FP). Finally, we introduce a Local Deformation Matrix (LDM) to rectify the local search vector to accommodate local deformation. Experimental results confirm that this method is applicable to robust 3D point pattern matching of sparse point sets with underlying non-rigid deformation and similar distributions.
This paper presents a novel computer-aided design system which uses a computational approach to producing 3D images for stimulating the creativity of designers. It first introduces the genetic algorithm, then presents a binary-tree-based genetic algorithm. This approach is illustrated by a 3D image generation example, which uses complex function expressions as chromosomes to form a binary tree; all genetic operations are performed on the binary tree. The corresponding complex functions are processed by MATLAB software to form 3D images of artistic flowers. This generative design is integrated with a visualization interface, which allows designers to interact with and select from instances for design evolution. The results show that the system is able to enhance the possibility of discovering various potential design solutions.
The skeleton of a 3D mesh is a fundamental shape feature, useful for shape description and many other applications in 3D digital geometry processing. This paper presents a novel skeleton extraction algorithm based on feature points and core extraction via the Multidimensional Scaling (MDS) transformation. The algorithm first straightens up the folded prominent branches; once the prominent shape feature points of the mesh are computed, a meaningful segmentation is applied under the direction of those feature points. The node-rings of all segmented components are defined by discrete geodesic paths on the mesh surface, and the skeleton of every segmented component is then defined as the link of the node-ring centers. For the core component, which has no prominent feature points, a principal curve is used to fit its skeleton. Our algorithm is simple and invariant both to the pose of the mesh and to the differing proportions of the model's components.
The Virtual Home Environment (VHE) has been introduced as an abstract concept enabling users to access and personalize their subscribed services regardless of the terminal they use and of the underlying network. Much effort is currently being spent on the challenging task of providing an architectural solution and an implementation of the VHE, offering ubiquitous service availability, personalized user interfaces, and session mobility while users are roaming or changing their equipment. In this paper we present a multimedia delivery service, one of the VHE services selected to demonstrate its features, and show its interconnection with the VESPER VHE architecture as defined so far.
A prototype compiler for the ST language (Structured Text), together with its operation and internal structure, is presented. The compiler is a principal part of the CPDev engineering environment for programming industrial controllers according to the IEC 61131-3 standard. CPDev is under development at Rzeszow University of Technology. The compiler generates a universal executable code as its final result. The code can be interpreted on different platforms by target-specific virtual machines. Sample platforms include AVR, ARM, MCS-51, and PC.
This paper presents a solution for bridging the abstract and concrete syntax of Web rule languages by using model transformations. Current specifications of Web rule languages such as the Semantic Web Rule Language (SWRL) or RuleML define their abstract syntax (e.g., a metamodel) and concrete syntax (e.g., an XML schema) separately. Although recent research in the area of Model-Driven Engineering (MDE) demonstrates that such a separation of the two types of syntax is good practice (due to the complexity of the languages), one should also have tools that check the validity of rules written in a concrete syntax with respect to the abstract syntax of the rule language. In this study, we use the REWERSE I1 Rule Markup Language (R2ML), SWRL, and the Object Constraint Language (OCL), whose abstract syntax is defined by metamodeling, while their textual concrete syntax is defined by either XML/RDF schema or Extended Backus-Naur Form (EBNF) syntax. We bridge this gap with a bi-directional transformation defined in a model transformation language (the ATLAS Transformation Language, ATL). This transformation allowed us to discover a number of issues in both the Web rule language metamodels and their corresponding concrete syntax, and thus to make them fully compatible. The solution also enables the sharing of Web rules between different Web rule languages.
Higher-level programming such as metaprogramming
introduces a layer of abstraction above the domain language programs.
Metaprogramming allows describing generic components and managing
variability in a domain. It is especially useful for developing program
generators for domains where a great deal of commonality exists. It
allows increasing the level of abstraction and hiding details that are
unnecessary to the designer. Information abstraction and hiding reduce
the amount of “user-visible” information. In this paper, we estimate the
increase of abstraction by evaluating the information content at the lower
(domain) and higher (meta) layers of abstraction. The estimation method
is based on the Kolmogorov complexity and uses a common
compression algorithm. The method is evaluated experimentally on
families of DSP components.
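The compression-based estimate sketched above can be illustrated in a few lines; `info_content` and the toy domain/meta programs below are illustrative assumptions, not the paper's DSP components or exact metric:

```python
import zlib

def info_content(text: str) -> int:
    """Estimate information content (an upper bound on Kolmogorov
    complexity) as the size of the zlib-compressed text, in bytes."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

# Domain layer: 200 expanded component instances (illustrative only).
domain = "".join(f"y{i} = c{i} * x{i};\n" for i in range(200))
# Meta layer: the one template that describes the whole family.
meta = "for i in range(n): y[i] = c[i] * x[i]\n"

# The meta layer carries far less information than the expanded domain
# layer, which is the increase of abstraction the paper sets out to measure.
print(info_content(meta) < info_content(domain))  # → True
```

Comparing the two compressed sizes gives a language-independent, if rough, measure of how much "user-visible" information the metaprogramming layer hides.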
The maintenance of a software system represents an important part of its lifetime. In general, every software system is subject to different kinds of changes. Bug fixes and new functionality extensions are the most common reasons for change. Usually, a change is accomplished through source code modifications. To make such a modification, a correct understanding of the current state of the system is required. This paper presents an innovative approach to simplifying program comprehension. Based on the presented method, the affected software system is analysed and a metamodel for the selected feature is created. The feature represents a functional aspect of the system that is the subject of the analysis and the change. The main benefit is that, by focusing on well-known (and precisely described) parts of the program implementation, it is possible to create the metamodel for those implementation parts automatically. The metamodel sits at a higher level of abstraction than the implementation.
This research aims to develop novel technologies to efficiently integrate wireless communication networks and Underwater Acoustic Sensor Networks (UASNs). Surface gateway deployment is one of the key techniques for connecting two networks with different channels. In this work, we propose an optimization method based on the genetic algorithm for surface gateway deployment, design a novel transmission mechanism, simultaneous transmission, and realize two efficient routing algorithms that achieve minimal delay and payload balance among sensor nodes. We further develop an analytic model to study the delay, energy consumption, and packet loss ratio of the network. Our simulation results verify the effectiveness of the model and demonstrate that the technique of multiple gateway deployment and the mechanism of simultaneous transmission can effectively reduce network delay, energy consumption, and packet loss rate.
A strong need for new approaches and new curricula in different disciplines still exists in the European education area. This is especially the case in the field of software engineering, which has traditionally been underdeveloped in some areas. The curriculum presented in this paper is oriented towards undergraduate students of informatics and engineering. The proposed approach takes into account integration trends in the European educational area and the requirements of the labour market. The aim of this paper is to discuss the body of knowledge that should be provided by a modern master-level curriculum in software engineering. The techniques used in the development and implementation of such a curriculum at different universities are also described. The presented ideas are based on the experience gained in the three-year TEMPUS project "Joint MSc Curriculum in Software Engineering", which established joint master studies in software engineering. Over the three-year interval, the project managed to define a new joint curriculum, create teaching materials, and deliver the curriculum at two institutions.
This article presents a case study of a theoretical multi-agent system designed to clean up ecological disasters. It focuses on the interactions within a heterogeneous team of agents, outlines their goals and plans, and establishes the necessary distribution of information and commitment throughout the team, including its sub-teams. These aspects of teamwork are presented in the TEAMLOG formalism, based on multimodal logic, in which collective informational and motivational attitudes are first-class citizens. Complex team attitudes are shown to be necessary in the course of teamwork. The article shows how to build a bridge between the theoretical foundations of TEAMLOG and an application, and illustrates how to tune TEAMLOG to the case study by establishing sufficient, but still minimal, levels for the team attitudes.
From the viewpoint of adaptability, we classify software systems as nonreflexive, introspective, and adaptive. Introducing a simple example of an LL(1) language for expressions, we present its nonreflexive and adaptive implementations using the functional language Haskell. Multiple metalevel concepts are an essential requirement for a systematic language approach to building up adaptable software systems dynamically, i.e., to evolving them. A feedback reflection loop from data to code through metalevel data is the basic implementation requirement and the precondition for the semi-automatic evolution of software systems. In this sense, the practical experiment introduced in this paper relates to the base level of the language, but it illustrates the capacity for extensions primarily in the horizontal, but also in the vertical, direction of an adaptive system.
Adaptation in multimedia systems is usually restricted to defensive, reactive media adaptation (often called stream-level adaptation). We argue that offensive, proactive, system-level adaptation deserves no less attention. If a distributed multimedia system cares about overall, end-to-end quality of service, then it should provide a meaningful combination of both. We introduce an adaptive multimedia server (ADMS) and a supporting middleware which implement offensive adaptation based on a lean, flexible architecture. The measured costs and benefits of the offensive adaptation process are presented. We also introduce an intelligent video proxy (QBIX), which implements defensive adaptation; the cost/benefit measurements of QBIX are presented elsewhere. We show the benefits of integrating QBIX into ADMS. Offensive adaptation is used to dynamically find an optimal, user-friendly configuration for ADMS, and defensive adaptation is added to take usage environment (network and terminal) constraints into account.
This paper investigates the problem of synchronizing a mobile agent network by means of a velocity adaptation strategy, where each agent is assigned a different moving velocity to establish a time-varying network topology, and the velocity of each agent develops adaptively according to the local properties between itself and its neighbors. We show that our strategy is effective in enhancing the synchronizability of the mobile agent network, i.e., the region of power density for which the network can achieve synchronization is enlarged compared to the fast-switching case. In addition, the influence of the control parameter on network evolution is studied by assessing the convergence time.
In this paper we describe how existing software development processes, such as the Rational Unified Process, can be adapted to allow disciplined and more efficient development of user interfaces. The main objective of this paper is to demonstrate that standard modeling environments, based on UML, can be adapted and used efficiently for user interface development. We have integrated HCI knowledge into development processes by semantically enriching the models created in each of the process activities. By using UML, we make HCI knowledge easier to use for ordinary software engineers, who are usually unfamiliar with the results of HCI research, so these results can have broader and more practical effects. By providing a standard means of representing human-computer interaction, we can seamlessly transfer UML models of multimodal interfaces between design and specialized analysis tools. Standardization provides a significant driving force for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools. The proposed solutions can be valuable for software developers, who can improve the quality of user interfaces and their communication with user interface designers, as well as for human-computer interaction researchers, who can use standard methods to incorporate their results into software development processes.
A way to improve effectiveness in e-learning is to offer a personalized approach to the learner. An adaptive e-learning system needs to use different strategies and technologies to predict and recommend the most likely preferred options for further learning material. This can be achieved by recommending and adapting the appearance of hyperlinks, or simply by recommending actions and resources. This paper presents an idea for integrating such a recommender system into an existing web-based Java tutoring system in order to provide various adaptive programming courses.
This paper presents a cognitive multi-agent architecture called Intelligent Cognitive Agents (InCA), elaborated for the design of intelligent adaptive learning systems. The InCA architecture relies on a personal agent that is aware of the user's characteristics and that coordinates the intervention of a set of expert cognitive agents (such as storytelling agents, assessment agents, stimulation agents, or help agents). The InCA architecture has been applied to the design of K-InCA, an e-learning system aimed at helping people learn and adopt knowledge-sharing management practices.
During the past several years, fuzzy control has emerged as one of the most active and fruitful areas of research in the application of fuzzy set theory, especially in industrial processes that do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations, i.e., accurate mathematical models. A fuzzy logic controller based on a wavelet network provides a means of converting a linguistic control strategy based on expert knowledge into an automatic strategy. In the available literature, one can find scores of papers on fuzzy-logic-based controllers or on fuzzy adaptation of PID controllers. However, relatively few papers address fuzzy adaptive control, which is not surprising, since fuzzy adaptive control is a relatively new tool in control engineering. In this paper, a fuzzy adaptive PID controller with a wavelet network is discussed in the subsequent sections, with simulations. An adaptive neural network structure is proposed and used to replace the linearization feedback of a second-order system (plant, process). It is also proposed that the controller be tuned using an adaptive fuzzy controller, a stochastic global search method that emulates the process of natural evolution. It is shown that the adaptive fuzzy controller is capable of locating high-performance areas in complex domains without experiencing the difficulties associated with high dimensionality or false optima that may occur with gradient descent techniques. The output results show that the adaptive fuzzy controller gave fast convergence for the nonparametric function under consideration, in comparison with a conventional Neural Wavelet Network (NWN).
Gradients of high-dimensional functions can be computed efficiently and with machine accuracy by so-called adjoint codes. We present an L-attributed grammar for the single-pass generation of intraprocedural adjoint code for a subset of Fortran. Our aim is to integrate the syntax-directed approach into the front-end of the NAGWare Fortran compiler. Research prototypes of this compiler that build adjoint code based on an abstract intermediate representation have been under development for several years. We consider the syntax-directed generation of adjoint code a low-development-cost alternative to more sophisticated algorithms. The price to pay comes in the form of a very limited set of code optimizations that can be performed in a single-pass setting.
Software testing provides a means to reduce errors and to cut maintenance and overall software costs. Numerous software development and testing methodologies, tools, and techniques have emerged over the last few decades, promising to enhance software quality. While it can be argued that there has been some improvement, it is apparent that many of the techniques and tools are isolated to a specific lifecycle phase or functional area. This paper presents a set of best-practice models and techniques integrated into an optimized and quantitatively managed software testing process (OptimalSQM), expanding testing throughout the SDLC. Furthermore, we explain how the Quantitative Defect Management Model can be enhanced to be practically useful for determining which activities need to be addressed to improve the degree of early and cost-effective software fault detection with assured confidence. To enable software designers to achieve higher quality in their designs and better insight into quality predictions for their design choices, this paper also offers test plan improvement using the Simulated Defect Removal Cost Savings model.
Real-time human tracking is very important in surveillance and robot applications. We note that the performance of any human tracking system depends on its accuracy and on its ability to deal with various human sizes quickly. In this paper, we combine the works presented in [1, 2] to produce a new human tracking algorithm that is robust to background and lighting changes and does not require special hardware components. In addition, the system can handle various scales of human images. The proposed system uses the sum of absolute differences (SAD) with thresholding and compares the output with the predefined person pattern, using the techniques described in [1, 2]. Combining the approaches of [1, 2] enhances the performance and speed of the tracking system, since pattern matching is performed against just one pattern. After the matching stage, a specific file is created for each tracked person; this file includes the image sequences for that person. The proposed system handles shadow removal, lighting changes, and background changes with arbitrary pattern scales using a standard personal computer.
The paper presents the new agent framework XJAF and its application to distributed library catalogues. The framework is based on Java EE technology and uses the concept of plug-ins to implement the basic framework components. One important plug-in has been introduced into this system: the inter-facilitator connection plug-in, which defines how multiple facilitators form an agent network. This plug-in is particularly important in both the design and implementation phases in the field of distributed library catalogues. To substantiate this claim, the framework has been used to implement the agent-based central catalogue of the library information system BISIS. The framework has also been used to implement the agent-based metadata harvesting system for the Networked Digital Library of Theses and Dissertations (NDLTD). Both systems have been implemented at the University of Novi Sad.
The aim of this study is to design and develop an interaction model for performing the collaborative teaching process among pedagogical agents. A pedagogical agent has a role in a given situation of the teaching process. However, the role is not fixed; it changes dynamically according to the learner's understanding. In this paper, we have therefore analyzed the collaborative teaching process between one learner and two teachers for the subject of multiple fractions in elementary school, and extracted the communication performatives and protocols required for interaction in this process as an interaction model. Moreover, we describe an example of a collaborative teaching process using the extracted communication performatives and protocols.
The main goal of this paper is to provide an overview of the rapidly developing area of software agents, serving as a reference point to a large body of literature, and to present the key concepts of software agent technology, especially agent languages, tools, and platforms. Special attention is paid to significant languages designed and developed to support the implementation of agent-based systems and their applications in different domains. Afterwards, a number of useful, practically employed tools and platforms are presented, together with the activities or phases of the agent-oriented software development process that they support.
The eXtensible Java-based Agent Framework (XJAF) is a pluggable architecture for a hierarchical intelligent agent system with communication based on KQML. Workers, Inc. is a workflow management system implemented using mobile agents; it is especially suited for highly distributed and heterogeneous environments. The application of the above-mentioned systems is considered in the area of Document Management Systems.
Enterprise strategy is influenced by environmental changes: socio-economic, legislative, technological, and globalization. These make the enterprise's Information System more complex and competition increasingly fierce. For an enterprise to secure its place in this hard context, characterized by rapid and random changes in the internal and external environments, it must have a policy of quickly adapting its strategy and driving important changes at all levels of its Information System, in order to align the system with the strategy and vice versa; that is, it must always be agile. Therefore, the agility of the Enterprise Information System can be considered a primary objective of an enterprise. This paper deals with agility assessment in the context of the POIRE project. It proposes a fuzzy-logic-based assessment approach for continuously measuring, regulating, and preserving Information System agility. It also presents a prototype implementation and an application of the proposed approach to a tour operator enterprise.
Ontology has been attracting a lot of attention recently. Indeed, it has the potential to resolve several key problems such as semantic tag design for the Semantic Web, semantic integration, knowledge sharing/reuse, etc. However, it is also true that people have different understandings of ontology. This article is written to contribute to clarifying the understanding of ontology and ontological engineering and to promoting their utility. Although the discussion is set in the context of the Artificial Intelligence in Education domain, I believe the content is fairly general.
The importance of XML query optimization is growing due to the rising number of XML-intensive data mining tasks. Earlier work on algebras for XML queries focused mostly on rule-based optimization and used a node-at-a-time execution model. Heavy query workloads in modern applications require cost-based optimization, which is naturally supported by the set-at-a-time execution model. This paper introduces an algebra with only set-at-a-time operations, and discusses expression reduction methods and lazy evaluation techniques based on the algebra. Our experiments demonstrate that, for queries with complex conditional and quantified expressions, the proposed algebra yields plans with much better performance than those produced by state-of-the-art algebras. For relatively simple queries, the proposed methods are expected to yield plans with comparable performance.
Genetic Algorithms (GAs) are a family of search algorithms based on the mechanics of natural selection and biological evolution. They are able to efficiently exploit historical information in the evolution process to look for optimal solutions, or approximations of them, for a given problem, achieving excellent performance in optimization problems that involve a large set of dependent variables. Despite the excellent results of GAs, their use may generate new problems. One of them is how to find a good fit for the usually large number of parameters that must be tuned to obtain good performance. This paper describes a new platform that is able to extract the regular expression matching a set of examples, using a supervised learning, agent-based framework. To do so, GA-based agents decompose the GA execution into a distributed sequence of operations performed by the agents. The platform has been applied to the language induction problem; for that reason, the experiments focus on extracting the regular expression that matches a set of examples. Finally, the paper shows the efficiency of the proposed platform (in terms of fitness value) on three case studies: emails, phone numbers, and URLs. Moreover, it describes how the codification of the alphabet affects the performance of the platform.
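A minimal fitness function for such GA-based regex induction might reward full matches of positive examples and penalize matches of negatives; the `fitness` function below is an illustrative sketch, not the platform's actual scoring:

```python
import re

def fitness(pattern: str, positives, negatives):
    """Score a candidate regular expression against labeled examples:
    +1 for each positive example it fully matches, -1 for each negative
    it matches. Syntactically invalid candidates (a frequent outcome of
    crossover/mutation) get the worst possible score."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return float("-inf")
    hits = sum(1 for s in positives if rx.fullmatch(s))
    false_hits = sum(1 for s in negatives if rx.fullmatch(s))
    return hits - false_hits

# Email case study (toy data, matching one of the paper's three domains).
positives = ["alice@example.com", "bob@mail.org"]
negatives = ["not-an-email", "x@y"]
print(fitness(r"\w+@\w+\.\w+", positives, negatives))  # → 2
```

The GA-based agents would then evolve candidate patterns, with this score driving selection toward expressions that separate the positive examples from the negative ones.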
This paper presents a novel alignment approach for imperfect speech and the corresponding transcription. The algorithm starts with multi-stage sentence boundary detection in the audio, followed by a dynamic-programming-based search to find the optimal alignment and detect mismatches at the sentence level. Experiments show promising performance compared to the traditional forced alignment approach. The proposed algorithm has already been applied in preparing multimedia content for an online English training platform.
To be a debugger is a good thing! Since the very beginning of programming activity, debuggers have been the most important and widely used tools after editors and compilers; we fully recognize their importance for software development and testing. Debuggers work at machine level, after compilation of the source program; they deal with assembly or binary code, and are mainly data-structure inspectors. ALMA is a program animator based on the program's abstract representation. The main idea is to show the algorithm being implemented by the program, independently of the language used to implement it. To say that ALMA is a debugger with no added value would not be true! ALMA is a source-code inspector, but it deals with programming concepts instead of machine code. This makes it possible to understand the source program at a conceptual level, and not only to fix run-time errors. In this paper we compare our visualizer/animator system, ALMA, with one of the best-known and most widely used debuggers, the graphical version of GDB, the DDD program. The aim of the paper is twofold: the immediate objective is to prove that ALMA provides new features that are not usually offered by debuggers; the main contribution is to revisit the concepts of debugger and animator, and to clarify the role of both tools in the field of program understanding, or program comprehension.
Ambient Intelligence aims to enhance the way people interact with their environment to promote safety and to enrich their lives. A Smart Home is one such system but the idea extends to hospitals, public transport, factories and other environments. The achievement of Ambient Intelligence largely depends on the technology deployed (sensors and devices interconnected through networks) as well as on the intelligence of the software used for decision-making. The aims of this article are to describe the characteristics of systems with Ambient Intelligence, to provide examples of their applications and to highlight the challenges that lie ahead, especially for the Software Engineering and Knowledge Engineering communities. In particular we address system specification and verification for the former and knowledge acquisition from the vast amount of data collected for the latter.
In this paper, we propose a neural-network approach to forecasting AM/PM Jordanian electric power load curves based on several parameters (temperature, date, and the status of the day). The proposed method has the advantage of dealing not only with the nonlinear part of the load curve but also with rapid temperature changes on the forecasted day, as well as weekend and special-day features. The proposed neural network is used to modify the load curve of a similar day using the previous information. The suitability of the proposed approach is illustrated through an application to actual load data from the Electric Power Company in Jordan. The results show an acceptable prediction for Short-Term Electrical Load Forecasting (STELF), with a maximum regression factor of 90%.
In open environments, it is very difficult to guarantee the trustworthiness of interacting business processes using traditional software-engineering methods. At the same time, when dealing with the influence of external factors, some existing business-process mining methods are effective only for 1-bounded business processes, and some behavioral dependency relationships are ignored. This paper presents a behavior-trustworthiness analysis method for business processes based on induction information. First, addressing the internal factors, we analyze consistent behavioral relativity to guarantee predictable function. Then, for the external factors, in order to analyze behavioral changes in a business process, we propose a process-mining method based on induction information. Finally, a simulation experiment is presented, comparing our method with genetic process-mining methods. Theoretical analysis and experimental results indicate that our method outperforms the genetic process-mining method.
Visual Languages (VLs) are beneficial particularly for domain-specific applications, since they can support ease of understanding through visual metaphors. If such a language has an execution semantics, comprehension of program execution may be supported by direct visualization, which closes the gap between program depiction and execution. To rapidly develop a VL with execution semantics, a generator framework is needed that incorporates the complex knowledge of simulating and animating a VL at a high specification level. In this paper we show how a fully playable tile-based game is specified with our generator framework DEViL. We illustrate this on the famous Pacman game. We claim that our simulation and animation approach is suitable for the rapid development process. We show that the simulation of a VL is easily achieved even in complex scenarios, and that the automatically generated animation is mostly adequate, even for other kinds of VLs such as diagrammatic, iconic, or graph-based ones.
The aim of this paper is to discuss how our pattern-based strategy for the visualization of data and control flow can effectively be used to animate a program and exhibit its behavior. This result allows us to propose its use for Program Comprehension. The animator uses well-known compiler techniques to inspect the source code and extract the information necessary to visualize it and understand program execution. We convert the source program into an internal decorated (or attributed) abstract syntax tree, and then visualize the structure by traversing it and applying visualization rules at each node according to a pre-defined rule-base. To calculate the next step in the program's execution, a set of rewriting rules is applied to the tree. The visualization of this new tree is shown, and the program animation is constructed through an iterative process. No changes are made to the source code, and the execution is simulated step by step. Several examples of visualization are shown to illustrate the approach and to support our idea of applying it in the context of a Program Comprehension environment.
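The traverse-and-rewrite cycle described above can be sketched on a toy expression language. The tuple-based AST, the visualization rule-base, and the rewriting strategy below are illustrative assumptions, not the animator's actual representation:

```python
# A tiny attributed AST: each node is a (kind, *children) tuple.
expr = ("add", ("num", 1), ("mul", ("num", 2), ("num", 3)))

# Visualization rule-base: one rendering rule per node kind.
RULES = {
    "num": lambda n, render: str(n[1]),
    "add": lambda n, render: f"({render(n[1])} + {render(n[2])})",
    "mul": lambda n, render: f"({render(n[1])} * {render(n[2])})",
}

def render(node):
    """Traverse the tree, applying the visualization rule at each node."""
    return RULES[node[0]](node, render)

def step(node):
    """One rewriting step: reduce the leftmost innermost reducible node."""
    kind = node[0]
    if kind == "num":
        return node, False          # literals are already fully reduced
    left, changed = step(node[1])
    if changed:
        return (kind, left, node[2]), True
    right, changed = step(node[2])
    if changed:
        return (kind, node[1], right), True
    op = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[kind]
    return ("num", op(node[1][1], node[2][1])), True

# Animate: show each intermediate tree until no rewrite applies.
node, changed = expr, True
while changed:
    print(render(node))
    node, changed = step(node)
```

Running this prints `(1 + (2 * 3))`, then `(1 + 6)`, then `7`: each iteration visualizes the current tree and then rewrites it, mirroring the step-by-step simulation without touching the source program.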
The paper presents an innovative parser-construction method and a parser-generator prototype that generates a computer-language parser directly from a set of annotated classes, in contrast to standard parser generators, which specify the concrete syntax of a computer language using BNF notation. In the presented approach, a language with a textual concrete syntax is defined on top of the abstract-syntax definition extended with annotations. Annotations define instances of concrete-syntax patterns in the language. The abstract syntax of a language is an essential input to the parser generator, as are the language's concrete-syntax pattern definitions. The parser-implementation process is demonstrated on a concrete computer language, the Simple Arithmetic Language. The paper summarizes the results of studies of the implemented parser generator and describes its role in university courses.
Deep-web sources respond to a user query with result records encoded in HTML pages. Data extraction and data annotation, which are important for many applications, extract and annotate the records from these HTML pages. We propose a domain-specific, ontology-based data extraction and annotation technique: we first construct a mini-ontology for a specific domain according to the information on the query interface and the query result pages; we then use the constructed mini-ontology to identify data areas and map data annotations during extraction; finally, in order to adapt to new sample sets, the mini-ontology evolves dynamically based on the results of data extraction and annotation. Experimental results demonstrate that this method achieves higher precision and recall in data extraction and data annotation.
Concurrent programs may suffer from concurrency anomalies that can lead to erroneous and unpredictable program behaviors. To ensure program correctness, these anomalies must be diagnosed and corrected. This paper addresses the detection of both low- and high-level anomalies in the Transactional Memory setting. We propose a static analysis procedure and a framework to address Transactional Memory anomalies. We start by dealing with the classic case of low-level dataraces, identifying concurrent accesses to shared memory cells that are not protected within the scope of a memory transaction. Then, we address the case of high-level dataraces, bringing the programmer's attention to pairs of memory transactions that were misspecified and should have been combined into a single transaction. Our framework was applied to a set of programs, collected from different sources, containing well-known low- and high-level anomalies. The framework proved to be accurate, confirming the effectiveness of using static analysis techniques to precisely identify concurrency anomalies in Transactional Memory programs.
Weyuker's properties have been suggested by several researchers as a guiding tool in the identification of a good and comprehensive complexity measure. Weyuker proposed nine properties to evaluate complexity measures for traditional programming. However, they are extensively used for evaluating object-oriented (OO) metrics, although object-oriented features are entirely different in nature. In this paper, two recently reported OO metrics are evaluated and, based on this evaluation, the usefulness and relevance of these properties for evaluating object-oriented systems are discussed.
During the planning and implementation of Information and Communication Technology solutions in the healthcare system, attention should be focused on the interests of citizens, healthcare employees, and the public. The project "Development of the Healthcare Information System for Basic Healthcare and Pharmaceutical Services" demands the implementation of Electronic Healthcare Documentation in the Healthcare Information System of Serbia. This article presents a short overview of the previous development of the healthcare information system. Electronic health documentation needs to represent the basic healthcare processes of every single user. The Healthcare Information System is based on patients, medical documents, and the exchange of information about a patient's health between healthcare, insurance, and financial institutions, with the primary goal of achieving a healthier population at lower cost.
Body Sensor Networks (BSNs) are an emerging class of applications that place sensors on the human body. Given that a BSN is typically battery-powered, one of the most critical challenges is how to prolong the lifetime of all sensor nodes. Recently, using clusters to reduce the energy consumption of a BSN has shown promising results. One of the important parameters in these cluster-based algorithms is the selection of cluster heads (CHs). Most prior works selected CHs either probabilistically or based on the nodes' residual energy. In this work, we first discuss the efficiency of cluster-based approaches for saving energy. We then propose a novel cluster-head selection algorithm to maximize the lifetime of a BSN for motion detection. Our results show that we can achieve above 90% accuracy for motion detection while keeping energy consumption as low as possible.
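A residual-energy-based cluster-head selection of the kind mentioned can be sketched as follows. The clusters, node energies, and per-round costs are hypothetical, and the paper's proposed algorithm is more elaborate; this only illustrates the baseline that rotates the CH role toward the node with the most remaining energy:

```python
# Hypothetical BSN: [node_id, residual_energy_in_mJ] lists, grouped into clusters.
clusters = {
    "torso": [["t1", 50.0], ["t2", 42.0], ["t3", 47.0]],
    "legs":  [["l1", 38.0], ["l2", 44.0]],
}

CH_COST, MEMBER_COST = 5.0, 1.0  # assumed per-round energy costs

def select_heads(clusters):
    """Pick the node with the highest residual energy in each cluster as its CH."""
    return {name: max(nodes, key=lambda n: n[1])[0]
            for name, nodes in clusters.items()}

def run_round(clusters):
    """One communication round: the CH pays the relay cost, members pay less."""
    heads = select_heads(clusters)
    for name, nodes in clusters.items():
        for node in nodes:
            node[1] -= CH_COST if node[0] == heads[name] else MEMBER_COST
    return heads

print(run_round(clusters))  # → {'torso': 't1', 'legs': 'l2'}
print(run_round(clusters))  # → {'torso': 't3', 'legs': 'l2'}
```

Because the CH role drains energy fastest, re-selecting by residual energy each round spreads the relay burden across the cluster, which is what prolongs the network lifetime.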
In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multi-agent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In paper (5) we focused on defining a general mechanism for knowledge fusion. Next, the techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in (13, 16). Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics, which serve as a natural tool to define concepts in the spirit of description logics (2). These notions may be further used to define various ontologies, such as those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented within the HSPDL architecture. The resulting knowledge-fusion rules are naturally applicable to real-world data.
Organizations build, buy, and reuse different types of technology with the intention of addressing their organizational needs and challenges, and of gaining competitive advantage. Unfortunately, the means is not the end. Instead, in some ways, it leads to complications and complexities and, more importantly, consumes more resources. Some organizations have adopted the technical-architecture approach to address the challenges posed by technology deployment. The technical architecture is intended to address all aspects, from strategic planning to the implementation of technology infrastructures, in order to consistently effect significant technological change within the environment. The technical-architecture approach facilitates and enables the prioritization of analysis, development, and implementation based on value-added business requirements and vision. It therefore allows the organization to proceed at its own pace while progressing at the same time. The paper presents a model that reflects the consistent approach adaptive enterprises could employ to build, maintain, and apply a technical architecture in the computing environment. The model emphasizes a holistic approach to technical-architecture deployment in the organization.