The mining loss rate and dilution rate are key indicators of the mining technology and management level of a mining enterprise. Addressing practical problems of the traditional loss and dilution calculation method, such as its large workload and inaccurate data, this thesis introduces the operating principle and process of calculating the loss rate and dilution rate of mining fields using geological models. As an example, the authors establish 3D models of orebody units in the exhausted area and the mining fields of the Yangshu Gold Mine in Liaoning Province, and perform Boolean operations among the models to obtain the loss and dilution calculation parameters, thereby calculating the dilution rate and loss rate of the mining fields more quickly and accurately.
A single image is commonly said to be worth a thousand words. As a result, image search has become a very popular mechanism for Web searchers. In image search, the results produced by the search engine are a set of images together with the URLs (Uniform Resource Locators) of their Web pages. Web searchers can currently perform two types of image search: Text-to-Image and Image-to-Image. In Text-to-Image search, the query is text; based on the input text, the system generates a set of images along with their Web page URLs as output. In Image-to-Image search, the query is itself an image, and based on this image the system generates a set of images along with their Web page URLs as output. In the current scenario, the Text-to-Image mechanism does not always return perfect results: it matches the text data and then displays the corresponding images as output, which is not always accurate. To address this problem, Web researchers have introduced the Image-to-Image search mechanism. In this paper, we also propose an alternative approach to the Image-to-Image search mechanism.
Reuse-based software technology is identified as a process of designing software for reuse. Software reuse is a process in which existing software is used to build new software. A metric is a quantitative indicator of an attribute of an item. Reusability is the likelihood that a segment of source code can be used again to add new functionality with slight or no modification. Much research has applied reusability to reduce code, domain, requirements, and design effort, but very little work has been reported on software reuse in the medical domain. An attempt is made here to bridge that gap, using the concepts of clustering and classifying data based on distance measures. In this paper a cardiologic database is considered for study. The developed model will be useful for doctors or paramedics to determine a patient's level of cardiologic disease, deduce the required medicines in seconds, and propose them to the patient. In order to measure reusability, the K-means clustering algorithm is used.
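As an illustrative aside, the clustering step the abstract refers to can be sketched with a plain K-means routine. The two-feature cardiologic records below are invented stand-ins, not data from the study, and the feature names are assumptions for illustration only.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain K-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, shape (n_points, k)
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return labels, centroids

# Hypothetical patient records reduced to two features
# (e.g. heart rate, cholesterol), clustered into two severity groups.
data = np.array([[60.0, 180.0], [62.0, 175.0], [110.0, 260.0], [115.0, 255.0]])
labels, centers = kmeans(data, k=2)
```

The two well-separated groups end up in distinct clusters regardless of the random initialization, which is the behavior a severity-grouping step would rely on.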
Enhancements in technology always follow consumer requirements. Consumers require the best service, with the least possible mismatch, delivered on time. Numerous applications available today are based on Web Services and Cloud Computing. Many Web Services now exist with similar functional characteristics, and choosing the right Service from a group of similar Web Services is a complicated task for the Service Consumer. In that case, the Service Consumer can discover the required Web Service using non-functional attributes of the Web Services, such as QoS. The proposed layered architecture and Web Service Cloud (WS Cloud) computing framework synthesizes non-functional attributes including reliability, availability, response time, and latency. The Service Consumer is expected to provide the QoS requirements as part of the Service discovery query. The framework discovers and filters the Web Services from the cloud and ranks them according to Service Consumer preferences to facilitate on-time Service.
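The filter-and-rank step can be sketched as a weighted score over normalized QoS attributes. The service names, attribute set, and weights below are illustrative assumptions, not part of the proposed WS Cloud framework.

```python
# Hypothetical QoS records for functionally similar services.
services = {
    "svcA": {"availability": 0.99, "response_time": 120.0},
    "svcB": {"availability": 0.95, "response_time": 80.0},
    "svcC": {"availability": 0.90, "response_time": 300.0},
}
# Consumer preferences: one weight per attribute. response_time is a
# "lower is better" attribute, so its normalized value is inverted.
weights = {"availability": 0.6, "response_time": 0.4}
lower_is_better = {"response_time"}

def score(qos):
    """Weighted sum of min-max normalized QoS attributes."""
    s = 0.0
    for attr, w in weights.items():
        values = [q[attr] for q in services.values()]
        lo, hi = min(values), max(values)
        norm = (qos[attr] - lo) / (hi - lo) if hi > lo else 1.0
        if attr in lower_is_better:
            norm = 1.0 - norm
        s += w * norm
    return s

ranked = sorted(services, key=lambda name: score(services[name]), reverse=True)
```

With these weights, high availability outweighs svcB's faster response, so svcA ranks first; changing the consumer's weights reorders the list, which is the point of preference-driven ranking.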
Querying over XML elements using keyword search is steadily gaining popularity. Traditional similarity measures are widely employed to retrieve XML documents effectively, and a number of authors have already proposed similarity-measure methods that take advantage of the structure and content of XML documents. They do not, however, consider the similarity between the latent semantic information of element texts and that of the keywords in a query. Although many algorithms for XML element search are available, some have high computational complexity because they search a huge number of elements. In this paper, we propose a new algorithm that exploits the semantic similarity between elements rather than between entire XML documents, considering not only the structure and content of an XML document but also the semantic information of namespaces in its elements. We compare our algorithm with three other algorithms on real datasets. The experiments demonstrate that the proposed method improves query accuracy while also reducing running time.
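For orientation, the simplest text-similarity baseline between an element's text and a keyword query is bag-of-words cosine similarity; the paper's latent-semantic measure would be richer, and the strings below are invented examples.

```python
from collections import Counter
import math

def cosine_sim(text_a, text_b):
    """Cosine similarity between bag-of-words term vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Comparing a hypothetical element text against a keyword query:
s = cosine_sim("xml keyword search over elements", "keyword search")
```

A latent-semantic variant would replace the raw term vectors with vectors in a reduced concept space, so that related but non-identical terms also contribute to the score.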
The modeling of physical processes is an integral part of scientific and
technical research. In this area, the Extendible C++ Application in Quantum
Technologies (ECAQT) package provides the numerical simulations and modeling of
complex quantum systems in the presence of decoherence with wide applications
in photonics. It allows creating models of interacting complex systems and simulating their time evolution with a number of available time-evolution drivers. Physical simulations involving massive amounts of calculation are often executed on distributed computing infrastructures. It is often difficult for non-expert users to use such infrastructures, or even to use advanced libraries over them, because doing so typically requires familiarity with middleware and tools, parallel programming techniques, and packages. The P-GRADE Grid Portal is a Grid portal solution that allows users to manage the whole life-cycle of executing a parallel application on computing Grid infrastructures. The article describes the functionality and structure of the web portal based on the ECAQT package.
We describe a simple component architecture for the development of tools for mathematically based semantic transformations of scientific software. This architecture consists of compiler-based, language-specific front- and back-ends for source transformation, loosely coupled with one or more language-independent "plug-in" transformation modules. The coupling mechanism between the front- and back-ends and transformation modules is provided by the XML Abstract Interface Form (XAIF). XAIF provides an abstract, language-independent representation of language constructs common in imperative languages, such as C and Fortran. We describe the use of this architecture in the construction of tools for automatic differentiation (AD) of programs written in Fortran 77 and ANSI C. The XAIF is particularly well suited for performing the source transformations needed for AD. Differentiation modules typically operate within the scope of statements or basic blocks, working at a level where procedural languages are very similar. Thus, it is possible to specify a common interface format for mathematically based semantic transformations that need not represent the union of all languages.
UML 2.0 activity diagrams (ADs) are widely used as a modeling language for flow-oriented behaviors in software and business processes. Unfortunately, their place/transition operational semantics is unable to capture and preserve the semantics of newly defined high-level activity constructs such as the Interruptible Activity Region. In particular, basic Petri nets do not preserve the non-locality semantics and reactivity concept of ADs, mainly because basic Petri nets lack global synchronization mechanisms. Zero-safe nets are a high-level variant of Petri nets that ensure global coordination of transitions thanks to a new kind of place, called a zero place. Indeed, zero-safe nets naturally address the Interruptible Activity Region, which needs a special semantics: forcing the control flow through external events and defining a certain priority level of execution. Therefore, zero-safe nets are adopted in this work as the semantic framework for UML 2.0 activity diagrams.
Keywords: UML Activity Diagrams Formalization, Interruptible Activity Region, Zero-Safe Nets.
This paper is dedicated to virtual world exploration techniques, which help a human being understand a 3D scene. A new method to compute a global view of a scene is presented. The global view of a scene is determined by a “good” set of points of view, which this method computes with a genetic algorithm; the “good” set of points of view is then used to compute a camera path around the scene.
In this paper, a classification method based on neural networks is presented for the recognition of 3D objects. The objective is to classify a query object against objects in a database, which leads to recognition of the former. The 3D objects in this database are transformations of other objects by single elements of a transformation group; the set of transformations considered in this work is the general affine group.
Generally, there are two approaches to human pose estimation from monocular images: the learning-based approach and the model-based approach. The former can estimate poses rapidly but has low estimation accuracy, while the latter estimates poses accurately but at high computational cost. In this paper, we propose a method that integrates the learning-based and model-based approaches to improve estimation precision. In the learning-based part, we use regression analysis to model the mapping from visual observations to human poses. In the model-based part, a particle filter is employed on the results of the regression analysis. To address the curse of dimensionality, the eigenspace of each motion is learned using Principal Component Analysis (PCA). Finally, the proposed method was evaluated on the CMU Graphics Lab Motion Capture Database. The RMS error of human joint angles was 6.2 degrees using our method, an improvement of up to 0.9 degrees over the method without eigenspaces.
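The eigenspace learning mentioned above can be sketched with a generic PCA via SVD. The pose data below is a random stand-in, not the CMU motion capture data, and the dimensions (30 frames of 20 joint angles, 5 components) are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, n_components):
    """Learn an eigenspace: principal axes of mean-centered pose vectors."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(x, mean, components):
    return (x - mean) @ components.T      # high-dim pose -> low-dim code

def reconstruct(z, mean, components):
    return z @ components + mean          # low-dim code -> approximate pose

# Stand-in for joint-angle vectors of one motion: 30 frames x 20 angles.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20))
mean, comps = fit_pca(X, n_components=5)
z = project(X[0], mean, comps)            # a particle filter would sample here
x_hat = reconstruct(z, mean, comps)
```

In a pose-tracking setting, the particle filter can then propagate particles in the 5-dimensional code space instead of the 20-dimensional joint-angle space, which is how the eigenspace mitigates the curse of dimensionality.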
The IEC 61131-3 standard defines a model and a set of programming languages for the development of industrial automation software. It is widely accepted by industry, and most commercial tool vendors advertise compliance with it. On the other hand, Model Driven Development (MDD) has proved to be a quite successful paradigm in general-purpose computing. This is the motivation for exploiting the benefits of MDD in the industrial automation domain.
With the emerging IEC 61131 specification that defines an object-oriented (OO) extension to the function block model, there will be a push to the industry to better exploit the benefits of MDD in automation systems development. This work discusses possible alternatives to integrate the current but also the emerging specification of IEC 61131 in the model driven development process of automation systems. IEC 61499, UML and SysML are considered as possible alternatives to allow the developer to work in higher layers of abstraction than the one supported by IEC 61131 and to more effectively move from requirement specifications into the implementation model of the system.
Optimization of the open absorption desiccant cooling system is carried out in the present work. A finite difference method is used to simulate the combined heat and mass transfer processes that occur in the liquid desiccant regenerator, which uses calcium chloride (CaCl2) solution as the working desiccant. The source of input heat is assumed to be the total radiation incident on a tilted surface. The system of equations is solved using the Matlab-Simulink platform. The effects of the important parameters, namely the regenerator length, desiccant solution flow rate and concentration, and air flow rate, on the performance of the system are investigated. In order to optimize system performance, a genetic algorithm technique is applied, and the system coefficient of performance (COP) is maximized for different design parameters. It is found that the maximum values of COP can be obtained for different combinations of regenerator length, solution flow rate, and air flow rate. Therefore, it is essential to select the design parameters for each ambient condition to maximize the performance of the system.
A description of the System Dynamics paradigm is given and the reduced Qualitative System Dynamics (QSD) form explained. A simple example illustrates the diagram construction, and the principles of states (levels), rates, and feedback loops are outlined. The QSD method is used to address the problem of accessibility, taking human control of automation as an example and applying QSD to evaluate the effects of the researcher and user on the design of an accessible artefact. This simple automation model illustrates what can be learned from such a picture, in this case indicating how feedback from users influences the time to deliver such designs.
A lower bound on the error of measuring object position is constructed as a function of the parameters of a monocular computer vision system (CVS), the observation conditions, and the shape of the observed marker. This bound justifies the specification of the CVS parameters and allows us to formulate constraints on an object's trajectory based on the required measurement accuracy. The measurement itself uses the boundaries of the marker image.
In order to solve the premature convergence problem of the basic Ant Colony Optimization algorithm, a promising modification with a changing index is proposed. The main idea of the modification is to measure the uncertainty of path selection and evolution self-adaptively, using the average information entropy. A simulation study and performance comparison on the Traveling Salesman Problem show that the improved algorithm converges to the global optimum with high probability. The work provides a new approach to combinatorial optimization problems, especially NP-hard ones.
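The average information entropy used to gauge path-selection uncertainty can be sketched as follows; the probability vectors are invented examples, and the adaptive rule in the comment is a generic illustration, not the paper's exact scheme.

```python
import math

def selection_entropy(probs):
    """Shannon entropy (bits) of one ant's path-selection distribution.
    High entropy: exploration is still diverse. Low entropy: the colony
    is converging (possibly prematurely) on a few edges."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_entropy(prob_rows):
    """Average entropy over all current selection distributions."""
    return sum(selection_entropy(r) for r in prob_rows) / len(prob_rows)

early = [[0.25, 0.25, 0.25, 0.25]]   # uniform pheromone: maximal uncertainty
late  = [[0.97, 0.01, 0.01, 0.01]]   # pheromone concentrated on one edge

# A self-adaptive scheme could, for instance, increase exploration or
# pheromone evaporation whenever the average entropy drops below a
# threshold, delaying premature convergence.
```

The uniform distribution over four paths gives the maximal 2 bits of entropy, and the concentrated one gives far less, which is the signal an entropy-driven adaptation would act on.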
The establishment of an existing-practice scenario was an essential component in providing a basis for further research into COTS software acquisition within the organisation. This report details how the existing practice of software acquisition within an organisation can be described and identifies models that could be used to present this view. The best-practice descriptions chosen for the idealized model were maturity models, including SA-CMM, CMMI-ACQ, and ISO/IEC 12207. The report describes these models briefly and then describes the process of distilling these maturity models into process frameworks that can be compared with actual business process models from a real organisation, in order to identify gaps and optimizations in the organisation's realization of the best-practice model. It also identifies the next steps towards the theoretical best-practice framework, which will involve translating the model into YAWL Petri nets and simulating the process in order to uncover potential modelling flaws or issues with framework efficiency. Implications of the ongoing research include mapping specific tasks and activities from the ITIL and CoBiT frameworks onto the generic key process areas of software acquisition frameworks, and identifying sufficiently detailed structural framework models at each level so that appropriate frameworks can be applied even where they were not explicitly identified by the organisation or the researcher.
In this paper, we systematically analyze the survivability of Mobile Ad Hoc Networks (MANETs) and give a detailed description of the survivability issues involved. We begin by analyzing the survivability requirements of ad hoc networks, and then classify the impacts that affect survivability into three categories: dynamic topology, faults, and attacks. The impact of each factor is analyzed individually. A simulation environment for MANET survivability is designed and implemented as well, and experiments covering the stated requirements and impacts are conducted in this environment.
This paper presents a two-level learning method for designing an optimal Radial Basis Function Network (RBFN) using the Adaptive Velocity Update Relaxation Particle Swarm Optimization algorithm (AVURPSO) and the Orthogonal Least Squares algorithm (OLS), called the OLS-AVURPSO method. The novelty is the development of the AVURPSO algorithm to form the hybrid OLS-AVURPSO method for designing an optimal RBFN. At the upper level, the proposed method finds the global optimum of the spread-factor parameter using AVURPSO, while at the lower level it automatically constructs the RBFN using the OLS algorithm. Simulation results confirm that the RBFN is superior to a Multilayered Perceptron Network (MLPN) in terms of network size and computing time. To demonstrate the effectiveness of OLS-AVURPSO in the design of RBFNs, the Mackey-Glass chaotic time series is modeled as an example by both an MLPN and an RBFN.
The challenging task of automatically synthesizing a time-to-amplitude converter, which combines the functionality of several digital circuits, has been successfully solved with the help of a novel methodology. The proposed approach is based on a paradigm in which substructures are regarded as additional mutation types and, together with the other mutation types, form a new adaptive individual-level mutation technique. This mutation approach led to the discovery of an original coevolution strategy characterized by very low selection rates. A parallel island-model evolution runs in a hybrid competitive-cooperative interaction throughout two incremental stages, with an adaptive population size applied to synchronize the parallel evolutions.
This paper describes a simulation-based intelligent decision support system (IDSS) for real-time control of a flexible manufacturing system (FMS) with machine and tool flexibility. The manufacturing processes involved in an FMS are complicated, since each operation may be performed by several machining centers. The system design approach is built around the theory of dynamic supervisory control based on a rule-based expert system. The paper considers flexibility in operation assignment and the scheduling of multipurpose machining centers, each of which has different tools with their own efficiencies. The architecture of the proposed controller consists of a simulator module coordinated with an IDSS via a real-time event handler for inter-process synchronization. The controller's performance is validated on a benchmark test problem.
Integrated use of statistical process control (SPC) and engineering process control (EPC) performs better than using SPC or EPC alone, but the integrated scheme introduces the "Window of Opportunity" problem and autocorrelation. In this paper, an advanced T² statistics model and a neural network scheme are combined to solve these problems: the T² statistics technique addresses autocorrelation, while the neural network addresses the "Window of Opportunity" problem and identifies disturbance causes. To counter the shortcomings of neural network training, namely a low speed of convergence and a tendency to get stuck in local optima, a genetic algorithm is used to train the samples. Simulation experiments show that this method can detect process disturbances quickly and accurately, as well as identify the disturbance type.
This paper introduces a framework to produce and manage quality requirements of embedded aeronautical systems, called the Requirements Engineering Framework (REF for short). It aims at making the management of the requirement lifecycle easier, from the specification of the purchaser's needs, to their implementation in the final products, to their verification, while controlling costs. REF is based on the main standards of aeronautics, in particular the RTCA DO-254 'Design Assurance Guidance for Airborne Electronic Hardware' (aka EUROCAE ED-80) and RTCA DO-178B 'Software Considerations in Airborne Systems and Equipment Certification' (aka EUROCAE ED-12) standards, for hardware and software components respectively. An implementation of REF, using the IBM Rational DOORS and IBM Rational Change tools, is also presented. The REF described in this article does not refer to the practices of any particular aeronautics firm.
This research, undertaken within a Greek IT organisation specialising in service provisioning to the Greek banking sector, discusses the various aspects of a number of identified environment factors that affect the requirements analysis phase across five distinct IT projects. Project Management (PMBOK® Guide 4th ed.), IT Service Management (ITIL® v3), and Business Analysis (BABOK® Guide 2.0) framework practices applied to the various IT projects are highlighted with regard to improved activity execution. Project issue management, stakeholder management, time management, resource management, communication management, and risk management aspects are presented. These are then linked to the identified environment factors so as to indicate the adaptability of an IT support team to changing environment factors in IT project environments, and to show how addressing these factors can significantly contribute to effective requirements analysis and enhance the requirements management cycle.
A method for designing real-time distributed controllers of discrete manufacturing systems is presented. The approach is agent-based: the control strategy is distributed over several interacting agents, each operating on a part of the manufacturing process, and these agents may themselves be distributed over several interconnected processors. The proposed method consists of a modelling methodology and a software development framework that provides a generic agent architecture and communication facilities supporting interaction among agents.
In this paper, we present a new formalism for modeling Multi-Agent Systems (MAS). Our Petri-net-based model is able to describe not only the internal state of each modeled agent but also its behavior. Owing to these features, one can naturally model the dynamic behavior of complex systems and the communication between these entities. To this end, we propose mathematical definitions attached to firing transitions. To validate our contribution, we deal with real examples.
Parallel to the considerable growth in applications of web-based systems, there are increasing demands for methods and tools to assure their quality. Testing these systems, due to their inherent complexities and special characteristics, is complex, time-consuming, and challenging. In this paper a novel multi-agent framework for automated testing of web-based systems is presented. The main design goals have been to develop an effective and flexible framework that supports different types of tests and utilizes different sources of information about the system under test to automate the test process. A prototype of the proposed framework has been implemented and used to perform some experiments. The results are promising and validate the overall design of the framework.
In this paper, we discuss agile software process improvement at company P. We describe its current level of process management, analyze its problems, and design a success-factor model for company P covering organizational culture, systems, products, customers, markets, leadership, technology, and other key dimensions, which is verified through a questionnaire within the company. Finally, we apply knowledge-creation theory to analyze the open source software community, where typical agile software methods have been applied successfully, and propose ten principles of knowledge creation in open source software communities: self-organization, code sharing, adaptation, usability, sustention, talent, interaction, collaboration, happiness, and democracy.
The ageing population in developed countries brings many benefits but also many challenges, particularly the development of appropriate technology to support people's ability to remain in their own home environment. One particular challenge reported for such Home Care Systems (HCS) is the identification of an appropriate requirements development technique for dealing with the typically diverse stakeholders involved. Agile Methods (AMs) recognize this challenge and propose techniques that could be useful. This paper examines the desirable characteristics identified for requirements development in HCS and investigates the extent to which agile practices conform to them. It also sets out future work to improve the situation for the non-compliant points found.
In a Windows XP 64-bit operating system environment, several commodity PCs were used to build a cluster, establishing a distributed memory parallel (DMP) computing system. A finite element model of a whole aircraft, with about 260 million degrees of freedom (DOF), was developed using three-node and four-node thin-shell elements and two-node beam elements. With the large commercial finite element package MSC.MARC, and employing two kinds of domain decomposition method (DDM), the parallel solution of the static strength analysis of the whole aircraft model was realized, offering a cost-effective way to solve large-scale, complex finite element models.
Automata theory has played an important role in theoretical computer science over the last few decades. The algebraic automaton has emerged with several modern applications, for example, optimization of programs, design of model checkers, and development of theorem provers, because of interesting properties and structures it inherits from the algebraic theory of mathematics. Design of a complex system requires functionality and also needs to model its control behavior. Z notation has proved to be an effective tool for describing the state space of a system and then defining operations over it. Consequently, an integration of algebraic automata and Z will be a useful computing tool for modeling complex systems. In this paper, we have linked algebraic automata and Z by defining a relationship between the fundamentals of these approaches, as a refinement of our previous work. At first, we describe strongly connected algebraic automata. Then homomorphism and its variants over strongly connected automata are specified. Next, monoid endomorphisms and group automorphisms are formalized. Finally, the equivalence of endomorphisms and automorphisms under certain assumptions is described. The specification is analyzed and validated using the Z/EVES toolset.
Automata theory has played an important role in computer science and engineering, particularly in modeling the behavior of systems, over the last few decades. The algebraic automaton has emerged with several modern applications, for example, optimization of programs, design of model checkers, and development of theorem provers, because of properties and structures it inherits from the algebraic theory of mathematics. Design of a complex system not only requires functionality but also needs to model its control behavior. Z notation is well suited to describing the state space of a system and then defining operations over it. Consequently, an integration of algebraic automata and Z will be an effective computing tool for modeling complex systems. In this paper, we combine algebraic automata and Z notation by defining a relationship between the fundamentals of these approaches. At first, we describe the algebraic automaton and its extended forms. Then homomorphism and its variants over strongly connected automata are specified. Finally, monoid endomorphisms and group automorphisms are formalized, and a formal proof of their equivalence under certain assumptions is given. The specification is analyzed and validated using the Z/EVES tool.
This article describes the development of an application for generating tonal melodies. The goal of the project is to test our current understanding of tonal music by means of algorithmic music generation. The method consists of four stages: 1) selection of music-theoretical insights, 2) translation of these insights into a set of principles, 3) conversion of the principles into a computational model in the form of an algorithm for music generation, and 4) testing the “music” generated by the algorithm to evaluate the adequacy of the model. The method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited for generating, displaying, playing, and storing melodies, functions that are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e., by meter and key, whose settings are controlled by means of parameters on the interface, and in part on a set of construction principles, including the notion of hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. With these aspects implemented as specific sub-algorithms, the device produces simple but well-structured tonal melodies.
This work proposes a method for the detection and identification of parked vehicles. The technique combines several algorithms for the detection, localization, segmentation, extraction, and recognition of number plates in images: an image-processing technology used to identify vehicles by their number plates. We work on grey-level images sampled at 120×180, drawn from an abundant database provided by PSA. We present two algorithms for detecting the horizontal position of the vehicle: the classical "horizontal gradients" method and our "symmetry method". A car seen from the front presents a plane of symmetry, and by detecting its axis one finds the car's position in the image. A localization phase then uses the Maximum Gradient Difference (MGD) parameter, which locates all text segments per horizontal scan. A specific filtering technique, combining the symmetry method with MGD localization, eliminates blocks that do not cross the axis of symmetry and thus finds the correct block containing the number plate. Once the plate is located, four further algorithms allow our system to identify the license plate: the first adjusts the intensity and contrast of the image; the second segments the characters on the plate using the profile method; the third extracts and resizes the characters; and the last recognizes them by means of optical character recognition (OCR). The efficiency of these algorithms is shown using a database of 350 test images. We obtain a localization rate of 99.6% on this basis of 350 images, with a false alarm (wrong text block) rate of 0.88% per image.
Genetic algorithms are a family of algorithms that use some of the genetic principles present in nature, such as inheritance, crossover, mutation, survival of the fittest, and migration, in order to solve particular computational problems. The paper describes the most important aspects of a genetic algorithm as a stochastic method for solving various classes of optimization problems. It also describes the basic genetic operators (selection, crossover, and mutation), which serve to produce a new generation of individuals and so reach an optimal, or good enough, solution to the optimization problem in question.
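The operators described above can be sketched in a minimal bitstring GA. The OneMax objective and all parameter values below are illustrative choices, not taken from the paper.

```python
import random

def genetic_algorithm(fitness, length=16, pop_size=30, gens=60,
                      cx_rate=0.8, mut_rate=0.02, seed=7):
    """Minimal GA over bitstrings: tournament selection, one-point
    crossover, bit-flip mutation, and elitism (survival of the fittest)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def select():
            a, b = rng.sample(pop, 2)            # tournament of size two
            return a if fitness(a) >= fitness(b) else b
        nxt = [max(pop, key=fitness)]            # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < cx_rate:           # one-point crossover
                cut = rng.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation with small per-bit probability
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the number of 1-bits, a standard toy objective.
best = genetic_algorithm(fitness=sum)
```

Elitism makes the best fitness monotonically non-decreasing across generations, so on OneMax the population quickly concentrates near the all-ones string.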
The Tiny Encryption Algorithm (TEA) is a Feistel block cipher well known for its simple implementation, small memory footprint, and fast execution speed. Two previous studies employed genetic algorithms (GAs) to investigate the randomness of TEA output, on the basis of which distinguishers for TEA could be designed. In this study, we used quantum-inspired genetic algorithms (QGAs) in the cryptanalysis of TEA. Quantum chromosomes in QGAs have the advantage of carrying more information than binary chromosomes of the same length in GAs, and therefore generate a more diverse solution pool. We show that QGAs can discover distinguishers for reduced-cycle TEA that are more efficient than those found by classical GAs in the two earlier studies. Furthermore, we applied QGAs to break four-cycle and five-cycle TEA, a considerably harder problem that the prior GA approach failed to solve.
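TEA itself is compact enough to state in full. The sketch below is the standard reference form of the cipher (64-bit block as two 32-bit words, 128-bit key as four words, magic constant delta = 0x9E3779B9); the `cycles` parameter produces the reduced-cycle variants that distinguisher attacks target, with 32 cycles being the full cipher.

```python
def tea_encrypt(v, k, cycles=32):
    """Encrypt a 64-bit block v = (v0, v1) under 128-bit key k = (k0..k3)."""
    v0, v1 = v
    delta, mask, total = 0x9E3779B9, 0xFFFFFFFF, 0
    for _ in range(cycles):
        total = (total + delta) & mask
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + total) ^ ((v1 >> 5) + k[1]))) & mask
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + total) ^ ((v0 >> 5) + k[3]))) & mask
    return v0, v1

def tea_decrypt(v, k, cycles=32):
    """Invert tea_encrypt by running the Feistel cycles in reverse."""
    v0, v1 = v
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    total = (delta * cycles) & mask
    for _ in range(cycles):
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + total) ^ ((v0 >> 5) + k[3]))) & mask
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + total) ^ ((v1 >> 5) + k[1]))) & mask
        total = (total - delta) & mask
    return v0, v1
```

A distinguisher-search GA or QGA would evolve inputs whose reduced-cycle ciphertexts deviate measurably from uniform randomness; that search layer is not shown here.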
The optimum design of cable-stayed bridges depends on a large number of parameters, and designing a cable-stayed bridge that satisfies all practical constraints is challenging. Given the huge number of design variables and practical constraints, Genetic Algorithms (GAs) are well suited to optimizing cable-stayed bridges. In the present work the optimum design is carried out with the total material cost of the bridge as the objective function, and most of the practical design variables and constraints are included in the problem formulation. Using genetic algorithms, several parametric studies are presented: the effect of geometric nonlinearity, of grouping of cables, of practical site constraints on tower height and side span, of bridge material, of cable layout, and of extra-dosed bridges on the optimum relative cost. A database is prepared to help new designers estimate the relative cost of a bridge.
This paper presents makespan algorithms and a scheduling heuristic for an Internet-based collaborative design and manufacturing process, using a bottleneck approach. The collaborative manufacturing process resembles a permutation re-entrant flow shop with four machines executing the process routing M1, M2, M3, M4, M3, M4, in which the combination of the last three operations (M4, M3, M4) has a high tendency to exhibit dominant-machine characteristics. We show that, using bottleneck-based analysis, effective makespan algorithms and a constructive heuristic can be developed to find near-optimal schedules. At strong machine dominance levels and for medium to large numbers of jobs, this heuristic shows better makespan performance than the NEH heuristic.
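For context, the NEH baseline mentioned above can be sketched for a plain (non-re-entrant) permutation flow shop: order jobs by decreasing total processing time, then insert each job at the position that minimizes the partial-sequence makespan. This is a sketch of the classical comparator only, not the paper's bottleneck-based heuristic, and the tiny instance is illustrative.

```python
def makespan(sequence, p):
    """Makespan of a permutation flow shop; p[job][machine] = processing time."""
    m = len(p[0])
    finish = [0] * m                 # finish[k]: completion time on machine k
    for job in sequence:
        finish[0] += p[job][0]
        for k in range(1, m):
            # A job starts on machine k when both the machine is free and
            # the job has finished on machine k-1.
            finish[k] = max(finish[k], finish[k - 1]) + p[job][k]
    return finish[-1]

def neh(p):
    """NEH heuristic: sort jobs by decreasing total time, insert each at its best slot."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for job in order:
        seq = min(
            (seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq

jobs = [[3, 4, 2], [2, 5, 1], [4, 2, 3]]   # 3 jobs x 3 machines (toy data)
print(neh(jobs), makespan(neh(jobs), jobs))
```

A re-entrant routing such as M1, M2, M3, M4, M3, M4 additionally requires tracking machine availability across repeated visits, which is where the bottleneck analysis of the paper comes in.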
The purpose of this study was to compare different machine learning classifiers (C4.5, Support Vector Machine, Naive Bayes, k-NN) in the early prediction of outcome for subjects in a vegetative state due to traumatic brain injury. Accuracy proved acceptable for all compared methods (AUC > 0.8), but sensitivity and specificity varied considerably, and only some classifiers (in particular, the Support Vector Machine) appear applicable in clinical routine. A combined use of classifiers is advisable.
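Of the compared classifiers, k-NN is the simplest to state, and the sensitivity/specificity metrics the study weighs are a few lines each. The sketch below uses toy 2D points, not the clinical dataset, and plain majority-vote k-NN; it is meant only to make the compared quantities concrete.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Two toy clusters standing in for "good outcome" (0) vs "poor outcome" (1).
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.5, 0.5)), knn_predict(X, y, (5.5, 5.5)))
```

The study's point is visible even here: overall accuracy can look fine while sensitivity and specificity diverge, so both must be reported for clinical use.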
This paper presents a new algorithm for generating attack signatures based on sequence alignment. The algorithm consists of two parts: a local alignment algorithm, GASBSLA (Generation of Attack Signatures Based on Sequence Local Alignment), and a multi-sequence alignment algorithm, TGMSA (Tri-stage Gradual Multi-Sequence Alignment). Inspired by sequence alignment as used in bioinformatics, GASBSLA replaces global alignment and the constant-weight penalty model with local alignment and an affine penalty model to improve the generality of attack signatures. TGMSA introduces a new pruning policy that makes the algorithm less sensitive to noise during signature generation. GASBSLA and TGMSA are described in detail and validated by experiments.
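The bioinformatics building block that GASBSLA adopts is classical local alignment with affine gap penalties (Smith-Waterman with Gotoh's recurrences): a gap of length L costs gap_open + gap_extend * L, so one long gap is cheaper than many scattered ones. The sketch below computes the local alignment score only; the scoring parameters are illustrative assumptions, and the paper's own signature-extraction layer on top of this is not shown.

```python
def local_align_score(a, b, match=2, mismatch=-1, gap_open=2, gap_extend=1):
    """Smith-Waterman local alignment score with affine gaps (Gotoh).

    H[i][j] is the best local alignment ending at a[i-1], b[j-1];
    E and F carry open gaps in a and in b respectively. The score is
    clamped at 0, which is what makes the alignment local.
    """
    NEG = float("-inf")
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    E = [[NEG] * (m + 1) for _ in range(n + 1)]   # gap in a (skip a char of b)
    F = [[NEG] * (m + 1) for _ in range(n + 1)]   # gap in b (skip a char of a)
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1], H[i][j - 1] - gap_open) - gap_extend
            F[i][j] = max(F[i - 1][j], H[i - 1][j] - gap_open) - gap_extend
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, E[i][j], F[i][j], H[i - 1][j - 1] + s)
            best = max(best, H[i][j])
    return best

# Two attack payload variants share a local core despite surrounding noise.
print(local_align_score("ACGT", "ACXGT"))
```

In signature generation the aligned sequences are suspicious payload strings rather than DNA, and the high-scoring local regions become candidate signature fragments.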