Article

OMG Unified Modeling Language Specification

...  As part of the functional requirements of the system to be built, which is probably the role they most often play. This is the main purpose of use cases as described in the latest UML specification (OMG, 2003) and in other publications (Cockburn, 2001; Leffingwell and Widrig, 2000; Schneider and Winters, 1998). ...
...  Extension points: in the UML 1.5 specification (OMG, 2003), an extension point is defined as a reference to one or a collection of locations in a use case where the use case may be extended. An extend relationship defines that a use case may be -i.e. ...
... Apart from their textual specification, use cases, their actors and their relationships can be depicted in the so-called use case diagrams. Use case diagrams were part of the initial proposal by Jacobson et al. (1992) and, with minor changes, they are also present in the current UML specification (OMG, 2003). ...
...  Since the node X has 3 entries and hence has exceeded the maximum number of entries allowed per node, a new non-leaf node is introduced and node X is partitioned into two distinct nodes X1 and X2. The entries of node X are split into two sets of entries; the first set, corresponding to the range [-∞, 19], is associated with node X1, while the second, having the range [19, 70], is associated with node X2. All the child nodes associated with the first set are now allocated to the node X1, while the child nodes associated with the second set are allocated to the node X2. ...
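As a quick illustration of the split the excerpt describes, here is a minimal Python sketch of an overfull non-leaf node being partitioned; the entry ranges mirror the [-∞, 19] and [19, 70] example, while the function and variable names are invented for illustration, not taken from the cited paper.

```python
# Minimal sketch of the non-leaf node split described above (illustrative
# names only). A node holding more than MAX_ENTRIES routing entries is split
# into two nodes, and its children are redistributed accordingly.

MAX_ENTRIES = 2

def split_node(entries, children):
    """Split an overfull non-leaf node into two nodes X1 and X2.

    entries  -- sorted list of key ranges, e.g. [(-inf, 19), (19, 70), ...]
    children -- child nodes, one per entry
    """
    assert len(entries) > MAX_ENTRIES
    mid = len(entries) // 2
    x1 = {"entries": entries[:mid], "children": children[:mid]}
    x2 = {"entries": entries[mid:], "children": children[mid:]}
    return x1, x2

inf = float("inf")
entries = [(-inf, 19), (19, 70), (70, inf)]
children = ["c1", "c2", "c3"]
x1, x2 = split_node(entries, children)
print(x1["entries"])  # [(-inf, 19)]
print(x2["entries"])  # [(19, 70), (70, inf)]
```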
...  We define the architecture, the specifications and requirements, as well as the behavior of the distributed B+ tree simulation system using the Unified Modeling Language (UML) [19], developed by the Object Management Group (OMG). UML is a standardized modeling language that defines the notation and semantics of object-oriented software systems. ...
...  One of the major issues is the lack of adequate processes, techniques and automated tool support available for the specification of requirements. We thus set out in our research to enhance Requirements Specification (RS) methodology by improving the approach that is most popular at the moment: use case modelling [2][3]. The use case approach is well-suited for specifying functional requirements for software systems. ...
... The use case modelling approach was first presented by Ivar Jacobson [8], but now this technique is considered to be a part of the Unified Modelling Language (UML) [3]. With use case models, one can specify functional requirements for a system in terms of scenarios of interaction between the system and its environment. ...
...  In addition to defining a use case as a service required from the system, it can also be seen as a collection of scenarios of system use that have the same goal [3]. Hence, there are usually a number of different scenarios or flows through each use case: the main flow and several alternative flows. ...
Article
Full-text available
Inadequate requirements specification is one of the main causes of software development project failure today. A major problem is the lack of processes, techniques and automated tool support available for specifying system requirements. We suggest a way to improve requirements specification methodology by enhancing the approach that is most popular at the moment: use case modelling. Despite their popularity, use case models are not formal enough for automated analysis. We amend traditional use case models with a formal structure and semantics to make them suitable for automated verification. The enhanced use case modelling technique that we propose is called Susan ("S"ymbolic "us"e case "an"alysis) and it facilitates verification of use case models using symbolic model checking. We also developed a software tool called SusanX to construct, manipulate and analyse Susan models. The analysis feature of the tool is implemented using the NuSMV model checker. A number of generic properties for verification are built into SusanX, and the tool additionally allows the user to construct model-specific properties.
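To illustrate the kind of generic property such a tool can check, here is a toy Python sketch (not SusanX or NuSMV): a use case's main and alternative flows represented as a transition system, with a breadth-first reachability check standing in for the model checker. All state names are invented.

```python
# Toy illustration only: a use case's main and alternative flows as a
# transition system, checked for a simple reachability property -- the kind
# of generic check a symbolic model checker automates.

from collections import deque

transitions = {
    "start":      ["enter_data"],
    "enter_data": ["validate"],
    "validate":   ["confirm", "show_error"],  # alternative flow on error
    "show_error": ["enter_data"],
    "confirm":    ["done"],
    "done":       [],
}

def reachable(src, dst):
    """Breadth-first check that state dst is reachable from src."""
    seen, queue = {src}, deque([src])
    while queue:
        state = queue.popleft()
        if state == dst:
            return True
        for nxt in transitions[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Generic property: every use case execution can eventually complete.
assert reachable("start", "done")
```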
...  Finally, models, metamodels and meta-metamodels can be implemented according to different standards. For example, the Object Management Group proposes a meta-metamodel called the Meta Object Facility (MOF) and different metamodels (UML: Unified Modeling Language [10], KDM: Knowledge Discovery Metamodel [11], etc.). ...
...  The output model conforms to the metamodel shown in Fig. 5.10. The root of this metamodel is the class Model, which collects rules. ...
Thesis
Full-text available
Today's business world is very dynamic and organizations have to quickly adjust their internal policies to follow the market changes. Such adjustments must be propagated to the business logic embedded in the organization's information systems, which are often legacy applications not designed to represent and operationalize the business logic independently from the technical aspects of the programming language employed. Consequently, the business logic buried in the system must be discovered and understood before being modified. Unfortunately, such activities slow down the modification of the system to meet new requirements settled in the organization policies and threaten the consistency and coherency of the organization's business. In order to simplify these activities, we provide a model-based approach to extract and represent the business logic, expressed as a set of business rules, from the behavioral and structural parts of information systems. We implement this approach for Java, COBOL and relational database management systems. The proposed approach is based on Model Driven Engineering, which provides a generic and modular solution adaptable to different languages by offering an abstract and homogeneous representation of the system.
...  The conventional object data model represents objects, attributes, and relationships between objects. Several data models are based on these concepts, such as the Entity-Relationship Model and the Unified Modeling Language (UML) [1] [2]. The relational data model also supports these concepts in terms of relations representing object classes and their attributes. ...
...  We describe next the three fundamental data models that we wish to combine into a single composite data model (CDM). (The term "data model" is an overloaded term.) ...
Article
Full-text available
In this paper, we combine the characteristics of three fundamental data models in order to represent their semantics in a common framework. These fundamental data models include the familiar concepts of modeling (1) object classes (or entities), their properties (attributes) and relationships between them, (2) multidimensional objects and attributes that can be summarized over the dimensions, and (3) hierarchical structures. This model, called the Composite Data Model, facilitates combinations of these three model structures to be represented jointly in a single schema, thus providing more expressive and natural queries over them. The main advantage of the composite data model (CDM), and a composite query language (CQL) over it, is that any combination of the three fundamental models can be represented jointly based on explicit semantics of each of the fundamental data models. This is unlike existing data models that represent each data model individually or obscure the semantics of additional features being modeled. In order to develop a query language over the combined schemas, we introduce a new concept, referred to as anchor, which is an object class that acts as the focus of the query. We provide in the query language path structures relative to the anchor that facilitate data navigation and data manipulation. We develop the syntax and semantics of the proposed language, and illustrate its expressive power through numerous query examples, and comparisons to three other query languages: OQL, SPARQL, and XQuery.
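The anchor concept can be pictured with a small sketch: the anchor is the object class the query focuses on, and path structures are navigated relative to each anchor instance. The Python fragment below is a hypothetical illustration under assumed data and names, not the paper's CQL syntax.

```python
# Hedged sketch of the "anchor" idea: the anchor is the object class a query
# focuses on, and attribute paths are followed relative to each anchor
# instance. Data layout and names are invented for illustration.

data = {
    "Customer": [
        {"name": "Ada", "orders": [{"total": 120}, {"total": 30}]},
        {"name": "Bob", "orders": [{"total": 55}]},
    ],
}

def query(anchor, path, predicate=None):
    """Follow a path of attribute names relative to each anchor instance."""
    results = []
    for instance in data[anchor]:
        values = [instance]
        for step in path:
            values = [v for obj in values for v in
                      (obj[step] if isinstance(obj[step], list) else [obj[step]])]
        if predicate is None or predicate(values):
            results.append(instance["name"])
    return results

# Customers (the anchor) having an order total above 100:
print(query("Customer", ["orders", "total"], lambda ts: any(t > 100 for t in ts)))
# ['Ada']
```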
... Statecharts (Harel, 1987) became one of the most common behavior-modeling frameworks, integrated in broader modeling and specification systems (Unified Modeling Language (UML) (Booch et al., 2000) and AADL (Feiler et al., 2009)). UML has four behavior diagrams: activity, sequence, state machine, and use case. ...
... For example, each root may be visualized as a box (Figure 14.2), and if there is a composition operation specifying an interaction between root behaviors, the boxes are connected by an arrow marked by the interaction type. The root behavior may be visualized with UML activity diagrams (Booch et al., 2000) (see Figure 14.6). The MP developer's environment may have a library of predefined views providing different visualizations for schemas. ...
Chapter
Full-text available
This chapter describes a novel approach for modeling and predicting systems behavior resulting from the interactions among subsystems and among the system and its environment. The approach emphasizes specification of component behavior and component interaction as separate concerns at the architectural level. Monterey Phoenix (MP) provides a new capability for automatically verifying systems behaviors early in the life cycle, when design flaws are most easily and inexpensively corrected. MP extends existing frameworks and allows multiple visualizations for different stakeholders and has potential for application in multiple domains. The first MP prototype has been implemented as a compiler generating an Alloy model from the MP schema and then running the Alloy Analyzer to obtain event traces and to perform assertion checks. It has benefited from Alloy's relational logic formalism and visualization tools.
...  For practical consideration, it is beneficial to begin with general natural language requirements and refine them into graphical notations, such as Unified Modeling Language (UML) diagrams. 33,34 However, the original requirement within the specification must always be bidirectionally linked to allow traceability. It should be possible to describe and follow the life of requirements in all directions. ...
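A minimal sketch of the bidirectional linking the excerpt calls for might look as follows; the requirement and diagram identifiers are invented, and a real traceability tool would persist such links rather than hold them in memory.

```python
# Minimal sketch of bidirectional traceability between natural-language
# requirements and the model elements that refine them. Each link can be
# followed in both directions; identifiers are illustrative.

from collections import defaultdict

req_to_model = defaultdict(set)
model_to_req = defaultdict(set)

def link(req_id, model_id):
    """Record one traceability link, indexed in both directions."""
    req_to_model[req_id].add(model_id)
    model_to_req[model_id].add(req_id)

link("REQ-12", "UML-ActivityDiagram-3")
link("REQ-12", "UML-ClassDiagram-7")

# Forward: which model elements refine REQ-12?
print(sorted(req_to_model["REQ-12"]))
# Backward: which requirements does this diagram trace to?
print(sorted(model_to_req["UML-ClassDiagram-7"]))  # ['REQ-12']
```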
Article
Full-text available
The long-lasting trend of medical informatics is to adapt novel technologies in the medical context. In particular, incorporating artificial intelligence to support clinical decision-making can significantly improve monitoring, diagnostics, and prognostics for the patient's and medic's sake. However, obstacles hinder a timely technology transfer from research to the clinic. Due to the pressure for novelty in the research context, projects rarely implement quality standards. Here, we propose a guideline for academic software life cycle processes tailored to the needs and capabilities of research organizations. While the complete implementation of a software life cycle according to commercial standards is not feasible in scientific work, we propose a subset of elements that we are convinced will provide a significant benefit while keeping the effort in a feasible range. Ultimately, the emerging quality checks for academic software development can pave the way for an accelerated deployment of academic advances in clinical practice.
...  A common comment is that it can be more difficult to code the representation of abstract objects in process languages than it is to code the abstract objects themselves [83]. The standard in common use for UML is hundreds of pages long [86] and interpretations of the standard are often debated, making it inaccessible to individuals who are not already familiar with the standard. ...
...  Also, the purpose of this tool is to model system-level functions and procedural flow that is part of a larger activity (Bell, 2003). The UML activity diagram is thought to represent the performance of actions and sub-activities as a special case of the state diagram (Booch, Jacobson & Rumbaugh, 2000). Since the UML activity diagram is the most commonly used one, we will analyze and research it in more detail. ...
Conference Paper
Full-text available
Defining tasks in serious games is important for measuring the benefits of the gaming environment for the player. For this, the contribution of domain experts to the game design is essential. Today, several tools are used to improve the level of communication between domain experts and software system technicians, so as to better define the game requirements according to the target user groups. One such commonly used tool is the UML activity diagram. However, in UML-AD there is no specific notation representing the player and system tasks in the game design in an easily distinguishable way. Accordingly, in this study, an extension of the UML activity diagram is proposed, named activity diagram extended (ADE). Additionally, the impact of ADE on technicians' level of understanding of these diagrams is evaluated experimentally. The results of this study are promising for improving the level of understanding of these diagrams.
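ADE itself is a graphical notation, so the following Python sketch only illustrates the underlying idea under assumed names: each activity node is tagged with its performer so that player tasks and system tasks can be told apart mechanically. The activity names are invented.

```python
# Illustrative sketch only (the actual ADE notation is graphical): each
# activity node carries a performer tag, so player and system tasks can be
# distinguished when a diagram is processed or rendered.

activities = [
    {"name": "choose answer", "performer": "player"},
    {"name": "check answer",  "performer": "system"},
    {"name": "show feedback", "performer": "system"},
]

def tasks_of(performer):
    """Return the names of all activities carried out by one performer."""
    return [a["name"] for a in activities if a["performer"] == performer]

print(tasks_of("player"))  # ['choose answer']
print(tasks_of("system"))  # ['check answer', 'show feedback']
```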
...  In our development practices formalization is a two-step process. The first step is updating the conceptual model, coded as a UML 1.4 class diagram (Booch et al. 2000), using the ArgoUML editor. The Protégé ontology editor is used in the second step for coding the ontology in OWL 2 with an account for DL restrictions (Motik et al. 2012). ...
Article
Full-text available
This paper reports on the use of the OntoElect methodology for evaluating the fitness of an existing ontology to the requirements of the knowledge stakeholders in a domain. It demonstrates that a thorough routine for indirectly eliciting these requirements, ensuring their completeness and correct interpretation, and using them in ontology evaluation is a must for ontology engineering. This also holds if the requirements for ontology refinement are elaborated by very high-profile expert working groups. The approach used in the reported research is based on OntoElect, a methodology for ontology refinement. The workflow of OntoElect contains three phases: feature elicitation, requirements conceptualization, and ontology evaluation. It elicits the set of terms extracted from a saturated collection of documents in the domain. It further refines these terms into the set of required features using information about term significance in the form of numeric scores. Furthermore, it applies conceptualization and formalization activities to these features, yielding their aggregations as ontological fragments interpreted as formalized requirements. Finally, mappings are specified between the elements in the requirements and ontology elements. The scores are used in the mappings to indicate the strength of positive or negative votes regarding the evaluated ontology. The sum of the votes gives the overall numeric fitness measure of the ontology with respect to the domain requirements. The paper presents the use of OntoElect in the use case of evaluating the W3C OWL-Time ontology against the requirements extracted from the proceedings of the TIME symposia series.
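The vote-summing step lends itself to a short worked example. The sketch below, with invented feature names and scores, shows how significance scores and positive or negative mapping votes might combine into the overall fitness measure described above; it is an illustration of the arithmetic, not the OntoElect tooling.

```python
# Hedged sketch of the fitness computation described above: each required
# feature carries a significance score, mappings cast positive or negative
# votes, and the vote sum yields the fitness measure. Values are invented.

features = {          # term significance scores from feature elicitation
    "time interval": 3.2,
    "duration":      2.1,
    "time zone":     1.4,
}
mappings = {          # +1: ontology covers the feature; -1: it conflicts
    "time interval": +1,
    "duration":      +1,
    "time zone":     -1,  # e.g. missing or contradictory in the ontology
}

fitness = sum(score * mappings.get(f, 0) for f, score in features.items())
max_fitness = sum(features.values())
print(f"fitness: {fitness:.1f} of {max_fitness:.1f}")  # fitness: 3.9 of 6.7
```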
...  Consequently, the terms "object model" (as in [3] and [4]) and "object diagram" have completely different meanings: according to the current version of the UML standard, "object diagram" refers to the concrete instances that exist in the system at a given moment, while in [3] and in many other resources "object model" refers to the equivalent of a UML class diagram describing the model in general. To avoid confusion, in this paper we will stick to the standard terminology of UML [5]. ...
Conference Paper
Full-text available
Domain modelling is a crucial part of Enterprise Modelling and considered as a challenge in enterprise engineering education. Pedagogy for this subject is not systematized and teachers or book authors develop the curriculum based on their own experience and understanding of the subject. This leads to a wide diversity of pedagogical methods, learning paths and even drastic differences in the applied terminology. In this paper, we identified and classified learning outcomes from several educational resources on domain modelling according to the revised Bloom’s taxonomy of educational objectives. We identified the similarities and gaps among the resources, such as lack of evaluation-related tasks, as well as the insufficient presence of procedural knowledge related tasks. The examples of most popular tasks are given, along with the directions to the future development of a systematic educational framework and guidelines for domain modelling pedagogy.
...  the Schedulability, Performance and Time (SPT) UML profile [8], the SysML UML profile for systems engineering [9], the UML profile for SoC [10], the MARTE profile (Modeling and Analysis of Real-Time Embedded Systems) [11] and the UML-SystemC profile [12]. ...
...  UML defines a set of notations that can be extended by the designer to support a modeling process dedicated to a specific application domain. UML is currently a standard, adopted by the OMG (Object Management Group). Building on the extension mechanisms that UML provides, the main objective of this work is to propose a set of stereotypes for modeling distributed applications, considering aspects of communication, management, availability, performance and aspects inherent to the implementation. ...
Article
Full-text available
Considering the extension mechanisms provided by the Unified Modeling Language (UML) and aspects relevant to the development of distributed systems, such as implementation, communication, location, administration, availability and performance, this paper provides a profile, or set of stereotypes, to facilitate the design of these applications. A Stereotype Specification Pattern is defined as a generic framework that facilitates the description, handling and use of stereotypes. The proposed profile contributes to the elicitation and specification of non-functional requirements, which are crucial for the construction of quality distributed applications.
...  Various formalizations have been used for information models. In the contemporary era, Entity-Relationship diagrams provide a direct link to relational database schemas (Chen, 1976). The Unified Modeling Language (UML) (Booch et al., 2000) is another frame-based approach which includes a number of features associated with object-orientation. More recently, the Web Ontology Language (OWL) (W3C, 2004, 2012) has its roots in description logics, in which inferences may be drawn from the declared semantics using logical reasoning tools. ...
Conference Paper
Full-text available
Information models are useful for representing concepts and relationships in a domain of discourse. These models are typically used to guide system design and implementation and are also used as documentation. Models are also used to assist with system integration, enabling multiple stakeholders to agree on a common structure and semantics for sharing data. For example, information models developed in Unified Modelling Language (UML) according to ISO 19100 series standards may be used to develop Geography Markup Language schemas which specify how geographic data may be encoded for exchange as XML. In this paper we propose additional uses for UML information models, enabling them to be connected to additional information specifying the semantic content of data, and delivered using Linked Data approaches. We also describe the role of models in enabling transformation and integration of heterogeneous data to a common model. We present two case studies: (i) publishing a suite of related information models using the Water Data Transfer Format (WDTF) schema at the Australian Bureau of Meteorology and (ii) the use of information models for harvesting content in the Spatial Identifier Reference Framework (SIRF). In the WDTF case study, model publication is underpinned by transformation of UML models to Web Ontology Language (OWL) ontologies based on a (draft) ISO standard. The models are published in a Feature Type Catalog (FTC) delivered through a RESTful interface. The FTC is implemented as a Linked Data application in which model elements are identified using URIs, with content negotiation to access HTML or OWL/RDF forms. In the SIRF case study we describe how models and mappings between models are used to support transformation and integration of data, and are then published together with the integrated data using Linked Data approaches.
...  Data modeling methods: data modeling methods formalize data entities and the relationships between entities. There are generally four main data modeling methods: the Barker (1990) methodology, IDEF1X (1993), the Entity-Relationship Model (ERM) (Chen, 1976) and the Unified Modeling Language (UML) (Booch et al., 2000). These are the most commonly utilized methods to describe the data structure and transactions of a modeled system (Tak and Hang, 2002; Law and Tak, 2003; Rönkkö, 2006; Khabbazi et al., 2011). ...
Article
Full-text available
This study proposes an object-oriented model of the quality information system as a module able to be integrated with other production logistics back office systems. Using UML modelling tools such as component and class diagrams, the model addresses the highest-level quality business processes and data structure, as well as all identified provided and required interfaces and the messaging system, in a module-based framework design. The methodology and adopted procedures are explained in detail, which provides a better understanding of the modelling and the possibility of lower-level quality data extensions following the same framework. The model is able to manipulate all quality control data for purchasing, production and remedy operations in a lot-based make-to-order production system within a defined module.
... The UML is a graphical systems engineering tool that specifies systems structure (composition) and behaviour (function). It is often used to define software architectures [261][262][263][264] and has been proposed as an ideal method for integrating biological data into in silico models and meta-models in hierarchical levels of complexity [265][266][267]. Both the UML and subsequent statistical models aim to represent the multisystemic pathogenesis of ALF. ...
Thesis
Full-text available
Acute liver failure (ALF) is a rare but devastating clinical syndrome with multiple causes and a variable course. The mortality rate is high. Orthotopic liver transplantation is the only therapy of proven survival benefit but the limited supply of donor organs, the rapidity of progression and the variable course of ALF limit its use. A need therefore exists for a method to ‘bridge’ patients, that is, provide temporary support, to either the spontaneous regeneration of the innate liver or transplantation. One possibility includes bio-artificial liver support systems (BALSS). This technology is composed of an extracorporeal circulation system incorporating a bioreactor that contains parenchymal liver cells (hepatocytes) to perform the detoxifying, transforming and synthetic functions of a liver. However, the development of a BALSS holds particular challenges. Despite approximately four decades of research, bio-artificial liver (BAL) technology globally remains in a pre-commercial stage. The University of Pretoria (UP) and the Council for Scientific and Industrial Research (CSIR) have developed a BALSS with novel characteristics. These include a computationally optimized radial-flow primary porcine hepatocyte bioreactor perfused with blood plasma, and a perfluorocarbon oxygen carrier which replaces hemoglobin. There are also novel design properties in the circulation system itself. Demonstrating the metabolic and clinical efficacy of a BAL device requires implementing in vitro (cell biology), in vivo (animal) and mathematical modeling studies. These studies are a formal necessity but are inherently ‘models’ of the in vivo human clinical circumstance. That is, they are limited by their experimentally controlled configuration/s. In investigating these, this thesis firstly provides a foundation by reviewing the clinical and biological context of ALF and BAL technology, then presents and evaluates particular studies/models that have been implemented over several years in the course of the UP-CSIR BAL project. For each section, thoughts and recommendations regarding future work that will facilitate the development of BAL technology are discussed in detail. The thesis is concluded with an evaluation of success and the consensus-agreed requirement of continued research and innovation in the field.
...  In the 1990s, UML (Unified Modeling Language), based on object-oriented methods, was created, and nine modeling methods appeared, including the activity diagram showing program flow; through several revisions these were extended, as shown in [Fig. 1] ("UML diagrams for integrated modeling"), to a total of 14 modeling methods comprising seven structure diagrams and seven behavior diagrams [18,19,20]. Entering the 2000s, along with the rapid development of mobile and convergence technologies, the software development environment, including agile programming techniques, has also been changing fundamentally. ...
Article
Methods to design programs which implement IT systems have been developed in various forms, from flowcharts to the UML activity diagram. However, program design tools and methods developed so far have not been efficient compared to program coding tools and methods. In addition, program design methods and tools developed until now have had difficulty supporting bidirectional conversion between program design and coding, and improving development productivity and maintainability. Therefore, in this study, we propose a Convergence Development Method that enables working with wide bandwidth by fusing the program design and coding phases using SOC, supported by a tool named SETL which automates the convergence of design and coding. By using SETL, it is expected that the efficiency gap between the program design and coding phases is reduced, and development productivity and maintainability are increased.
...  Because our aim is to fill the gap between real-world decision-making and decision-model implementation, we use the same formal language during the transcription process, generalisation (next section) and implementation (i.e. the Unified Modelling Language, UML). The Unified Modelling Language (UML) is a standardised object-oriented modelling language in the software engineering field (Booch et al., 2000; Papajorgji and Pardalos, 2006). We also used UML as the formal language of transcription analysis because it provides standard graphical representations for representing knowledge (Milton et al., 1999) (Table 4.3). ...
Thesis
Full-text available
Evolutions of the institutional and environmental contexts are driving the search for alternative cropping systems to reduce water use while maintaining high levels of productivity. This thesis is an account of the long tradition of research on cropping-plan choices at the farm level. It concerns the scope of modelling agricultural systems with an opening to economy. The objective of the research described in this thesis is to produce formalised knowledge on farmers' cropping-plan choices under an uncertain environment (price and weather) by analysing and modelling their decision-making processes. Formalising and modelling decision-making processes is becoming a crucial point in developing decision support systems that go beyond the limitations of formerly developed prescriptive approaches. This thesis contributes to the development of a formalised and integrated methodology to study and model complex decision-making processes. This methodology makes it possible to fill the gap between field surveys and decision-model implementation. The methodology draws upon a theoretical background of decision-making, and consistently combines tools to survey, analyse, model and implement coupled agent and biophysical models. In this thesis, I address the question of uncertainty in two directions. I first analyse the spatio-temporal dynamics of individual farmers' decision-making processes. Then I estimate farmers' aversion to risk by comparing stated and revealed elicitation methods. On the basis of field survey results, I develop a decision model called CRASH. The approach to develop the model stresses explicit formalisation of the decision-making process in its temporal and spatial dimensions, and representation of the domain knowledge through generic concepts that are close to the ones used by decision-makers. The implementation of the developed models is carried out on the RECORD platform as a trailblazing project. The originality lies in the use of dynamic models for both the biophysical and management processes. This research opens new perspectives for developing farm-specific decision support systems that are based on simulating farmers' decision-making processes. Modelling and simulating the cropping-plan decision-making processes should make it possible to design, together with farmers, cropping systems that reconcile the required adaptive capacities and the need to maintain cropping-system productivity.
...  This approach explores the interaction between a product data model that reflects the product design and the process to manufacture this product, represented by a workflow. The authors of [22] present case handling as a paradigm for supporting knowledge-intensive business processes. The authors compare case handling to workflow management and identify four problems. ...
Article
Full-text available
Process-aware information systems (PAIS) supporting knowledge-intensive processes are gaining importance nowadays. The crisis management process is an example of a knowledge-intensive process that is grounded in the vast experience of multiple actors (e.g., city services, volunteers, administration) and their collaboration. Automated crisis management systems have to comply with various norms and regulations; at the same time, they need to constantly deal with uncertainty and adapt the process scenario to the current situation. In this paper, we consider the example of a flood management process implemented as a part of the COS Operation Center (COSOC), a smart city solution that works with knowledge-intensive processes. We examine how the activity-oriented modeling paradigm underlying COSOC supports process flexibility. We propose an alternative way to specify the flood management process based on the state-oriented paradigm and the statecharts formalism, and discuss the advantages and limitations of the two paradigms.
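To make the state-oriented alternative concrete, here is a minimal statechart-style sketch in Python; the states and events are invented for illustration and are not taken from the COSOC flood process.

```python
# Minimal state-oriented sketch in the spirit of the statecharts alternative
# discussed above. States and events are invented; a real flood-management
# model would be far richer.

TRANSITIONS = {
    ("monitoring", "water_level_high"):         "alert",
    ("alert",      "flood_confirmed"):          "response",
    ("alert",      "level_normalized"):         "monitoring",
    ("response",   "situation_under_control"):  "recovery",
    ("recovery",   "closed"):                   "monitoring",
}

def step(state, event):
    """Fire a transition if one is defined; otherwise stay in the current
    state (flexibility: unexpected events are simply ignored)."""
    return TRANSITIONS.get((state, event), state)

state = "monitoring"
for event in ["water_level_high", "flood_confirmed", "situation_under_control"]:
    state = step(state, event)
print(state)  # recovery
```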
...  The generic and recursive nature of the partial database import/export allows a streamlined object-oriented design approach, illustrated by the UML (Unified Modelling Language) diagram in Figure 11 (Booch et al. 2000). The design consists of classes which model the structure of the database in a generic way, i.e. table structures are not pre-programmed but created during runtime. ...
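A minimal sketch of that generic design, with illustrative class names: table structures are built at runtime from metadata rows rather than being pre-programmed, so the same import/export code can handle any schema.

```python
# Sketch of the generic design described above (class names are invented):
# table structures are constructed at runtime from database metadata, so the
# import/export code is independent of any particular schema.

class Column:
    def __init__(self, name, sql_type):
        self.name, self.sql_type = name, sql_type

class Table:
    def __init__(self, name, columns):
        self.name, self.columns = name, columns

def table_from_metadata(name, metadata_rows):
    """Build a Table object from (column_name, type) rows, e.g. as returned
    by an information_schema query at runtime."""
    return Table(name, [Column(n, t) for n, t in metadata_rows])

t = table_from_metadata("spectrum", [("id", "INTEGER"), ("wavelength", "FLOAT")])
print(t.name, [c.name for c in t.columns])  # spectrum ['id', 'wavelength']
```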
Thesis
Full-text available
Change is a perpetual, inherent part of the Earth System and has always influenced the many life forms existing on this planet. The rather recent notion of Global Change is connected with an increased rate of changes, some of which can be largely or partly attributed to anthropogenic activities. Mitigating the negative impacts of Global Change requires a holistic knowledge about the functioning of the Earth System. Remote sensing technology has the potential of observing Earth System features on a global scale with sufficient spatial detail, allowing data stemming from remote sensing systems to be used for the parameterisation of Earth System models. The current limitation of these models in the accurate prediction of future Earth System states is largely caused by the uncertainties of both the initial parameterisation and of the models themselves. Consequently, data products of higher accuracy are required to reduce the uncertainties currently associated with many remotely sensed data. This necessitates a number of measures such as accurate pre-launch sensor calibration and calibration/validation of sensors and data products over all scales of observation during the whole lifetime of systems, essentially tying data to a common reference system and hence rendering data comparable. The paradigm of the Complete Observing System supports the generation of holistic Earth System knowledge by seamlessly integrating in situ, airborne and space-based sensor data. Key to the integrative function of Complete Observing Systems is the ability to locate and share data suitable for a given task within the system. This functionality requires the extensive documentation of the primary datasets with metadata, detailing both provenance and uncertainties. This thesis provides a contribution to Complete Observing Systems by addressing three specific research questions: 1) What are the important metadata of field spectroradiometer data collections, and how can these primary and associated secondary resources be efficiently entered into, stored in and retrieved from a spectral database to ensure long-term usage and enable data sharing? 2) How can spectroradiometer data collections be exchanged between distributed database systems while retaining the full metadata context? and 3) How can an operational, high-accuracy Airborne Prism Experiment (APEX) imaging spectrometer data calibration processor be implemented and subsequently integrated into a generic processing framework? Research addressing the three research questions resulted in the development of specific components, namely: 1) the SPECCHIO spectral database system, offering easy and efficient storage of spectral data described by a rich metadata set and available to the remote sensing community as an online system or on-site installation, 2) a description of the steps required for the extraction of a spectral subset including its full metadata context and its subsequent, non-conflicting import into a target system, together with the corresponding implementation of the concept as a function of the SPECCHIO system, and 3) the provision of an operational data processor for the APEX system, fully integrated into a generic processing framework at VITO and carrying out data segregation and radiometric, geometric and spectral calibration to produce highly accurate, uniformly calibrated data cubes.
This thesis concludes that further research is needed to 1) accomplish the integration of airborne imaging spectrometer data processing and archiving facilities in complete observing systems in order to allow the bridging of scales between ground and space-based data, 2) provide full uncertainty propagation throughout processing and archiving systems, 3) generate new Earth system science products that take advantage of top-end imaging spectrometers and 4) advance the integration of spectral databases in imaging spectrometer data processing systems to allow the automated calibration/validation of continuous remote sensing data with sparse in situ spectral data. To this end, the development of automated quality indicator generation, the provision of generic metadata storage capabilities and work on the standardisation of metadata are the main improvements envisaged for spectral database systems.
...  As mentioned earlier, workflows are essentially a series of functional units, and the dependencies between them define the order in which the units must be executed. Well-known models that have been used as the basis for workflow representation languages include Petri nets [41], directed graphs [6], the Unified Modelling Language (UML) [14] and Business Process ...
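As a small illustration of the directed-graph view of workflows, the sketch below recovers an execution order from declared dependencies using Python's standard-library topological sort; the unit names are invented.

```python
# Sketch of a workflow as a directed graph of functional units: dependencies
# define the execution order, which a topological sort recovers.

from graphlib import TopologicalSorter  # Python 3.9+

workflow = {                      # unit: set of units it depends on
    "load_video":     set(),
    "extract_frames": {"load_video"},
    "detect_motion":  {"extract_frames"},
    "classify":       {"detect_motion", "extract_frames"},
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # ['load_video', 'extract_frames', 'detect_motion', 'classify']
```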
Article
Full-text available
Traditional workflow systems have several drawbacks, e.g. in their inability to rapidly react to changes, to construct workflows automatically (or with user involvement) and to improve performance autonomously (or with user involvement) in an incremental manner according to specified goals. Overcoming these limitations would be highly beneficial for complex domains where such adversities are exhibited. Video processing is one such domain that increasingly requires attention as larger amounts of images and videos are becoming available to persons who are not technically adept in modelling the processes that are involved in constructing complex video processing workflows. Conventional video and image processing systems, on the other hand, are developed by programmers possessing image processing expertise. These systems are tailored to produce highly specialised hand-crafted solutions for very specific tasks, making them rigid and non-modular. The knowledge-based vision community has attempted to produce more modular solutions by incorporating ontologies. However, these have not been maximally utilised to encompass aspects such as application context descriptions (e.g. lighting and clearness effects) and qualitative measures. This thesis aims to tackle some of the research gaps yet to be addressed by the workflow and knowledge-based image processing communities by proposing a novel workflow composition and execution approach within an integrated framework. This framework distinguishes three levels of abstraction via the design, workflow and processing layers. The core technologies that drive the workflow composition mechanism are ontologies and planning. Video processing problems provide a fitting domain for investigating the effectiveness of this integrated method, as tackling such problems has not been fully explored by the workflow, planning and ontological communities despite their combined beneficial traits for confronting this known hard problem. In addition, the pervasiveness of video data has amplified the need for more automated assistance for image processing-naive users, but no adequate support has been provided as of yet. A video and image processing ontology that comprises three sub-ontologies was constructed to capture the goals, video descriptions and capabilities (video and image processing tools). The sub-ontologies are used for representation and inference. In particular, they are used in conjunction with an enhanced Hierarchical Task Network (HTN) domain-independent planner to help with performance-based selection of solution steps based on preconditions, effects and postconditions. The planner, in turn, makes use of process models contained in a process library when deliberating on the steps and then consults the capability ontology to retrieve a suitable tool at each step. Two key features of the planner are the ability to support workflow execution (interleaving planning with execution) and the ability to perform in automatic or semi-automatic (interactive) mode. The first feature is highly desirable for video processing problems because execution of image processing steps yields visual results that are intuitive and verifiable by the human user, as automatic validation is non-trivial. In the semi-automatic mode, the planner is interactive and prompts the user to make a tool selection when there is more than one tool available to perform a task. The user makes the tool selection based on the recommended descriptions provided by the workflow system.
Once planning is complete, the result of applying the tool of their choice is presented to the user textually and visually for verification. This plays a pivotal role in providing the user with control and the ability to make informed decisions. Hence, the planner extends the capabilities of typical planners by guiding the user to construct more optimal solutions. Video processing problems can also be solved in more modular, reusable and adaptable ways as compared to conventional image processing systems. The integrated approach was evaluated on a test set consisting of videos originating from an open sea environment and of varying quality. Experiments to evaluate the efficiency, adaptability to the user's changing needs and user learnability of this approach were conducted on users who did not possess image processing expertise. The findings indicate that using this integrated workflow composition and execution method: 1) provides a speed-up of over 90% in execution time for video classification tasks using fully automatic processing compared to manual methods, without loss of accuracy; 2) is more flexible and adaptable in response to changes in user requests (be it in the task, constraints on the task or descriptions of the video) than modifying existing image processing programs when the domain descriptions are altered; 3) assists the user in selecting optimal solutions by providing recommended descriptions.
...  Among the languages proposed for modeling MASs, we highlight MAS-ML (Multi-Agent System Modeling) [15][16][18]. MAS-ML is a modeling language that performs a conservative extension of UML [1] to enable the modeling of MASs. In particular, the following characteristics of the language can be highlighted: (i) it supports modeling of the main entities of a MAS: agents, organizations and the environment; (ii) in its version 2.0, the language covers reactive, goal-based and utility-based agents; (iii) it enables the definition of roles, fundamental for modeling agents in societies; and (iv) it explicitly introduces into the UML metamodel new concepts related to agent-oriented entities. ...
Conference Paper
Full-text available
Agents based on the Belief Desire Intention (BDI) model include structural and behavioral properties that must be captured and modeled properly. Features present in the MAS-ML 2.0 language for the modeling of cognitive agents facilitated its extension to support the modeling of BDI agents. In this work, the extension process of the MAS-ML language is presented.
...  Structural models (e.g., class diagrams) explain the various relationships that exist among classes, such as aggregation, association, composition, and generalization/specialization. In contrast, behavioral models (e.g., communication and sequence diagrams) are used to represent a sequence of actions in an interaction, explaining how the objects interact to complete their individual actions [4]. Conventional slicing is generally executed using the data and control dependency relationships present among program statements. ...
Article
UML diagrams are vital design and modeling artifacts. These UML models can also be used to create test cases. In this approach, condition slicing is used to create test cases from UML sequence diagrams. Test cases can be planned at the design level of the software development life cycle, but visualizing the system model or architecture is hard due to its bulky and complex structure. This methodology derives test cases from the computed slice using conditional predicates and is beneficial for sequence diagrams containing a large number of messages. The proposed methodology also uses the notion of model-based slicing to compute the slice of the sequence diagram by extracting the desired chunk. Keywords: Testing, Sequence diagram, Model-based slicing.
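Condition slicing over a sequence diagram can be pictured with a small sketch. The representation below, a flat message list where guarded messages carry a condition label, is an assumption made for illustration and is not the paper's model; the slice keeps only the messages relevant to the chosen conditional predicate.

```python
# Hedged sketch of condition-based slicing over a sequence diagram, assuming
# a flat message list where guarded messages carry a condition label.

messages = [
    {"id": 1, "from": "User", "to": "ATM",  "text": "insertCard",   "cond": None},
    {"id": 2, "from": "ATM",  "to": "Bank", "text": "verifyPIN",    "cond": None},
    {"id": 3, "from": "Bank", "to": "ATM",  "text": "reject",       "cond": "pin_invalid"},
    {"id": 4, "from": "Bank", "to": "ATM",  "text": "accept",       "cond": "pin_valid"},
    {"id": 5, "from": "ATM",  "to": "User", "text": "dispenseCash", "cond": "pin_valid"},
]

def slice_on(condition):
    """Keep unguarded messages plus those guarded by the given condition."""
    return [m for m in messages if m["cond"] in (None, condition)]

for m in slice_on("pin_valid"):
    print(m["id"], m["text"])  # 1 insertCard, 2 verifyPIN, 4 accept, 5 dispenseCash
```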
...  Traditional software engineering has been using model-driven development tools for a long time. The Unified Modeling Language (UML) [1] is one of the most effective methodologies for conducting traditional software design. Nevertheless, when we consider more specific engineering fields such as aeronautical or aerospace system design, this tool cannot be used "off the shelf". ...
Conference Paper
Unmanned Aerial Systems (UAS) are currently flying in specific and segregated airspaces, separated from regular aircraft. Nevertheless, to be widely used in the future, UAS will need to be deployed in the same airspace as regular aircraft. However, their unmanned and automated features make this goal very difficult to certify. Indeed, it is necessary to validate the different parts of the UAS (operating system, communication system or even payload depending on their application) in order to be compliant with the whole airspace certification process. This paper deals with new model driven development approaches that are inherited from existing aerospace and aeronautical systems and that could be useful for the certification of UAS design. In this paper we demonstrate how a model driven design can improve UAS system robustness. A case study is introduced and focuses on the main advantages for UAS design environment: modularity and reusability.
...  In the following we show a small ontology for SAW. First, in Section 2.1.1 we present the ontology in the UML language [1]. UML is a graphical language and is thus easier to understand, both for the developer of an ontology and for the reader. ...
Article
Full-text available
Modern military operations are environments that produce very large amounts of complex information. To achieve success commanders must make decisions both quickly and accurately, and timely information is essential for accuracy. A high-level understanding of the military information for an area of interest is called situation assessment (SA). The process of achieving SA is called situation awareness (SAW). While SAW is a human activity performed by military commanders, computer support can help commanders to cope with the large amount and complexity of information involved in SAW. In this paper we introduce an ontology-based approach to SA and SAW based on the DARPA Agent Markup Language (DAML). In particular, we show how relevant symbolic information can be conveyed to a SA system and what can be inferred based upon this input.
Article
Full-text available
The study was conducted to design and implement an electronic voting system for the supreme student council in a state college of Zamboanga del Sur, Philippines. It was explicitly undertaken to realize the following objectives: (1) to automate the election results; (2) to develop a module through which voters can cast their votes easily; (3) to develop a module through which the election officer can manage the candidates; and (4) to provide the necessary printed reports needed by the election officer. The methodology used was rapid application development, to speed up the application development. The system was implemented as a web-based application using the Hypertext Preprocessor (PHP) programming language and a MySQL database. Furthermore, the system was tested by the stakeholders and garnered an above-average rating in terms of system usability. The newly developed voting system helps the office of student affairs and the supreme student council during the election process and canvassing.
Article
Full-text available
Robotic System Specification Language (RSSL) stems from the embodied-agent approach to robotic system design. It enables the specification of both the structure and activities of a multi-robot, multi-agent robotic system. An RSSL specification can be verified and automatically transformed by its compiler into a six-layered Robotic System Hierarchical Petri Net (RSHPN). The RSHPN models the activities and structure of the designed robotic system. The automatically generated RSHPN is loaded into the RSHPN Tool, which models RSHPNs and automatically generates the controller code. This approach was validated on several robotic systems. The use of RSSL and RSHPN facilitates the synthesis of robotic system controllers.
Article
Full-text available
The article proposes a state forecasting method for telecommunications networks (TN) that is based on the analysis of behavioral models observed on users' network devices. The method exploits user behavior, which makes it possible to forecast with more accuracy both the network parameters and the load at various back-ends. The suggested forecasts facilitate implementing reasonable reconfiguration of the TN. The new method is proposed as a further development of the TN state forecasting method presented by the authors before. In this new version, users' behavioral models are involved in the forecasting algorithm. The models belong to a class of time diagrams of device transitions between different states. The novelty of the proposed method is that the resulting TN models enable forecasting device state transitions represented in a device state diagram in the form of a knowledge graph, in particular changes in the loads of different back-ends. The provided case study for a subgroup of network devices demonstrates how their states can be forecasted using behavioral models obtained from log files.
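One simple way to picture a behavioral model driving such forecasts is an empirical transition count learned from a device log; the sketch below, with an invented log, predicts the most likely next state. This is an illustration of the general idea, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): a behavioral model as an
# empirical state-transition count learned from device logs, used to
# forecast the most likely next state of a device.

from collections import Counter, defaultdict

log = ["idle", "streaming", "streaming", "idle", "streaming", "download",
       "idle", "streaming", "download"]

counts = defaultdict(Counter)
for cur, nxt in zip(log, log[1:]):
    counts[cur][nxt] += 1           # count observed state transitions

def forecast(state):
    """Most frequent successor of the given state in the observed log."""
    return counts[state].most_common(1)[0][0]

print(forecast("streaming"))  # 'download'
```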
Article
Full-text available
The purpose of the study is to design and implement an enrolment system as a web-based application intended for a higher education institution in the Zamboanga Peninsula amidst the COVID-19 pandemic. The functionalities of the system are guided by use cases identified during the requirements phase. The existing system encountered several constraints in the enrolment process, especially in detecting conflicts in course schedules and the availability of slots in the courses offered, in handling large amounts of data, and in cases where modifications or errors in the program need to be fixed. The methodology used was prototyping, to accommodate the limited experience of users with computerized systems. The system was implemented using the Hypertext Preprocessor (PHP) programming language and a MySQL database, along with JavaScript, Cascading Style Sheets (CSS), jQuery and Macromedia Dreamweaver as the integrated development environment (IDE). The functionalities of the system received the approval of the school administrators, allowing the program to be used in an actual enrolment setting.
Article
Full-text available
“Doing Business in India: International Perspectives (With Particular Reference to Business Process Outsourcing (BPO) Industry)”, Refereed Proceedings, International Conference on “Resurging India: Myths & Realities”, March 17–18, 2012, Teerthanker Mahaveer University, Moradabad, India, Copyright © 2012, pp. 3–11, ISBN 978-93-82062-04-2, Excel India Publishers, New Delhi. (With James Ondracek, Andy Bertsch, and Matthew Cohen) ABSTRACT: The country of India is one of the fastest growing economies in the world. With beneficial business incentives and a wealth of highly qualified, highly motivated potential employees, India is becoming a hub for economic growth and technological advancement. With so much expansion in India many industries, specifically the contact center and Business Process Outsourcing (BPO) industry, have entered into the global market with vigorous development. Currently India is ranked number one in the world in both the call center and BPO industries, has been able to grow the BPO and IT export sector to more than $47 billion USD, and has captured half of the entire world’s offshore service business. This study provides an overview of business climate, glimpses of socio-cultural, economic, and technological environments, with particular reference to insights pertaining to the business process outsourcing industry. This article is recommended reading for those interested in doing business in India including students of international business.
Chapter
Software development projects are complex. The more complex a project is, the higher are the requirements related to the software development process. The implementation of a process is a great challenge. This, in part, has to do with human factors (acceptance, etc.) as the benefits of a formal development process might not be obvious immediately and it may take a while until the process becomes the lifeblood of a team. A crucial step towards implementing, enacting and enforcing a process is to provide tool support for the many activities the process asks for. Tool support is necessary to guarantee efficiency in the project, to do the housekeeping and to minimize the “overhead” of the process. This chapter describes challenges and options for supporting process models by tools. Furthermore it describes concrete samples and shows how tool chains can be created with commercial tools as well as with open source tools.
Conference Paper
Full-text available
Transition from the UML 1.4/1.5 standard to its 2.0 version has brought modifications extensive enough to reassess some concepts and views. One such view is the isomorphism of interaction diagrams, no longer so obvious after the extension of the scope of available interaction diagrams. In the paper, the author verifies the statement concerning isomorphism of interaction diagrams by studying the possibility of unambiguously transforming a sequence diagram into any of the other interaction diagrams.
Conference Paper
Full-text available
Software processes play an important role in the software industry, as they influence the quality of the product and determine the efficiency of the company that develops these software products. To be used systematically in different projects, software processes need to be disseminated in the organization and continuously evaluated when one wants to understand their quality. The evaluation of a software process maintains and promotes its quality and evolution. However, if these evaluations are based on data directly collected from a process that has been applied to a given development project, process quality problems have already influenced the outcome of the process and possibly the software product. Software process models, commonly specified in a process modeling language (PML), specify in a standardized way the elements of a process and the appropriate interactions between them. In addition to contributing to the understanding, communication and execution of a software process in a company, process models offer an opportunity for processes to be evaluated before their first execution, or even to help identify problems in the processes of ongoing projects. This paper presents a proposal to use the concept of bad smells in software process models with the objective of identifying possible disharmonies in the models. Initially, bad smells of object-oriented code were analyzed and adapted to SPEM (Software & Systems Process Engineering Meta-Model) to generate a catalog. Subsequently, a survey was carried out to validate the definitions, representations and possible impacts of the proposed bad smells, resulting in a validation that presented an overall rate of 86% agreement. It is expected that characterizing bad smells for software processes will enable their applicability in real software development processes.
Chapter
To ensure optimal governance of the state (regions, sectors, specific activities), all management system units (tasks, functions and services, normative environment, institutional framework, budget funding) have to operate in strong cooperation and consistency, which should be defined by corresponding documents. Unfortunately, this logical coherence does not always exist. Documents and information are weakly connected because the sheer complexity of the units objectively hinders implementation of strong linkage and seriously weakens management quality. The situation can be radically improved by applying an ontological methodology to the development of the back office of the management system: strict structuring of objects and their decomposition into elementary logical units to form different hierarchies of information. Each hierarchy displays some aspect of the information. Connecting related units of information from different documents (hierarchies) creates a full graph of relevant information; it enables processing and use of management information in static and dynamic regimes, including the definition and concretization of normative acts, the institutional structure, funding, and the passage and execution of tasks. Analyzing information from different aspects, checking its consistency, and making appropriate decisions on this basis becomes possible, thus minimizing threats to the proper functioning and development of the country. Two case studies illustrate the current use of the proposed principles. The complexity of the informative model requires an appropriate level of IT support. Requirements, as well as short descriptions of the developed IT tools, are included; the tools are oriented toward use by non-IT specialists (civil servants). Directions for further work on improving the model and completing the IT tools are mentioned.
Chapter
The enterprise information systems engineering methodologies do not yet have a theoretical framework. The one and only exception is the data model design technique, which is based on the internal modelling paradigm because it uses the concept of functional dependence in the normalization procedure. The theoretical soundness of a knowledge-based approach to enterprise IS engineering is assured by the use of principles of second-order cybernetics. An enterprise is considered an entirety of self-managed activities correlated by management functional dependencies. New internal modelling views, a control view and a self-managing view, are included for capturing the management transactions, which are the key components of the subject domain knowledge. A normalized systems development life cycle is defined as a required component of knowledge-based IS development.
Chapter
Globalization, internationalization, flexibility, and time-to-market have for several years been the factors that strongly shape companies. A well-functioning IT should help to realize these factors as effectively as possible. The reality, however, is that the development, maintenance, and support of application systems in particular have caused ever-increasing costs over the years. While hardware costs have fallen steadily over recent decades, the costs of application development have risen continuously and now account for more than 70% of total IT costs.
Chapter
In software modelling, it is difficult to properly arrange the modelling of system structure and behaviour, as the traversal between software models usually lacks a clear progression path. Taking an interdisciplinary approach, this paper tackles the problem by borrowing ideas from the successful movie “Architecture 101”. The commonalities between the movie and modelling are studied, and the result is a proposal for multi-modelling. The benefits include more explicit guidance in software development and a more productive progression from model to code.
Article
This paper focuses on compression-based clustering and aims to determine the most suitable combinations of algorithms for different clustering contexts (text, heterogeneous data, Web pages, metadata, and so on) and to establish whether using compression with traditional clustering methods leads to better performance. In this context, we propose an integrated cluster analysis test platform, called EasyClustering, which incorporates two subsystems: a clustering component and a cluster validity expert system, which automatically determines the quality of a clustering solution by computing the FScore value. The experimental results are focused on two main directions: determining the best approach for compression-based clustering in terms of context, compression algorithms, and clustering algorithms, and validating the functionality of the cluster analysis expert system for determining the quality of the clustering solutions. After conducting a set of 324 clustering tests, we concluded that compressing the input when using traditional clustering methods increases the quality of the clustering solutions, leading to results comparable to the NCD. The cluster analysis expert system has proved 100% accurate so far, so we estimate that, even if some slight deviation occurs, it will be minimal.
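The NCD referred to above is, in the standard formulation by Cilibrasi and Vitanyi, the Normalized Compression Distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed size under some real-world compressor. The following is a minimal sketch of how such a pairwise distance matrix could feed a traditional clustering method; the choice of zlib and the toy documents are illustrative assumptions, not the paper's actual setup.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size of data under zlib (a stand-in for any real compressor)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: values near 0 mean the
    compressor finds the two inputs highly similar."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Build a distance matrix that any standard clustering algorithm
# (e.g., hierarchical agglomerative clustering) can consume.
docs = [b"the quick brown fox", b"the quick brown dog", b"lorem ipsum dolor"]
matrix = [[ncd(a, b) for b in docs] for a in docs]
for row in matrix:
    print(["%.3f" % d for d in row])
```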
Conference Paper
This paper presents the details of the OntoElect methodology for ontology engineering. These details comprise: (i) a presentation of the objectives, with emphasis on the problems that arise when the domain knowledge stakeholders' requirements for the developed ontology are elicited; (ii) an elaboration of the ontology engineering workflow and software tools; and (iii) a proposal of formal metrics for the representativeness of the used document corpus, based on saturation, and for the fitness of different ontology elements to those requirements, based on the computation of stakeholder votes. The paper also reports on the set-up and results of our experiment with the document corpus of the ICTERI conference series papers and the ICTERI scope ontology. The available results of this ongoing experiment confirm that the methodological approach of OntoElect is valid.
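One plausible reading of the saturation metric is that a corpus is representative once adding further documents contributes almost no new terminology. The toy sketch below illustrates that idea with a deliberately crude term extractor; the novelty ratio, the slicing scheme, and the threshold interpretation are illustrative assumptions, not OntoElect's actual definitions.

```python
import re
from typing import List, Set

def terms(text: str) -> Set[str]:
    """Crude term extractor: lowercase word tokens of length >= 4.
    A real pipeline would use a proper terminology-extraction tool."""
    return {t for t in re.findall(r"[a-z]+", text.lower()) if len(t) >= 4}

def saturation_curve(corpus_slices: List[str]) -> List[float]:
    """For each new slice, the fraction of its terms not seen before.
    The corpus could be deemed saturated once this novelty ratio
    stays below some small threshold."""
    seen: Set[str] = set()
    curve = []
    for text in corpus_slices:
        t = terms(text)
        curve.append(len(t - seen) / max(len(t), 1))
        seen |= t
    return curve

slices = ["ontology engineering workflow",
          "ontology elicitation and workflow tools",
          "workflow tools for ontology engineering"]
print(saturation_curve(slices))  # novelty trends toward 0 as the corpus saturates
```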
Article
In this article we combine the use of a thematic approach and concept maps to propose a methodological approach for technology courses, in our case, computer networks. The thematic approach offers a good way to increase students' motivation and presents a new way of elaborating a curriculum. The concept maps, which are the principal tool of the assimilation theory, help in the organization of contents, facilitating the process of concept acquisition by learners. These ideas are synthesized in a Web application which can be used as an aid or guide for teachers and learners of computer networks to organize and to improve their educational activities.
Article
Full-text available
This paper presents a technique to detect clones in UML class models. The class metrics (number of attributes, number of operations), attribute names, root node, child nodes, and method names of one class are compared with the corresponding metrics, attribute names, root node, child nodes, and method names of another class. Based on the number of matched attributes, operations, and metrics, a percentage of cloning is calculated. Two classes are declared clones of each other if the matching percentage of the class metrics, class attribute names, and class method names exceeds a specific threshold.
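A minimal sketch of this thresholded matching scheme is shown below. The equal weighting of the three similarity components, the 70% threshold, and the omission of the root/child-node comparison are assumptions made for illustration; the paper defines these details itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UmlClass:
    name: str
    attributes: List[str] = field(default_factory=list)
    methods: List[str] = field(default_factory=list)

def overlap(a: List[str], b: List[str]) -> float:
    """Fraction of matched names relative to the larger list."""
    if not a and not b:
        return 1.0
    return len(set(a) & set(b)) / max(len(a), len(b))

def clone_percentage(c1: UmlClass, c2: UmlClass) -> float:
    """Average of attribute-name, method-name, and metric similarity,
    where the metrics are simply the attribute and operation counts."""
    metric_sim = (int(len(c1.attributes) == len(c2.attributes)) +
                  int(len(c1.methods) == len(c2.methods))) / 2
    return 100 * (overlap(c1.attributes, c2.attributes) +
                  overlap(c1.methods, c2.methods) +
                  metric_sim) / 3

THRESHOLD = 70.0  # assumed; the paper leaves the threshold configurable

a = UmlClass("Order", ["id", "total"], ["addItem", "checkout"])
b = UmlClass("Order2", ["id", "total"], ["addItem", "cancel"])
print(clone_percentage(a, b) >= THRESHOLD)  # True: ~83% match
```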
Article
Full-text available
Monterey Phoenix (MP) has been designed as a framework for system and software architecture modeling and verification, with a focus on modeling the behaviors of the system and of its environment. With the development of more case studies, advantages in using MP for business process modeling and analysis applications are beginning to emerge. Models of business processes aim to capture the high-level operational activities and decision points of an organization, describing processes ranging from product lifecycles to government operations. Businesses and governments seeking to improve their processes may model them to find improvements in schedule and task execution, product quality, risk reduction, and lifecycle/operating costs. MP enables activities to be modeled as events with two basic relations, precedence and inclusion, making it a candidate modeling language for business process analysis. By offering high-level abstractions for interaction behavior modeling and separating component behaviors from the component interactions, MP supports a multidimensional picture of concurrent behaviors, with overlapping threads of process phases and participating actors, including environment behaviors. MP models are executable and may be used to generate an exhaustive set of possible business process scenarios up to a given scope limit.
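To make the scenario-generation idea concrete, the sketch below exhaustively enumerates event orderings that respect a set of precedence pairs, a toy analogue of MP deriving all possible process scenarios up to a scope limit. It models only the precedence relation (inclusion is omitted), and the event names and data structures are illustrative assumptions rather than MP's actual notation.

```python
from itertools import permutations
from typing import List, Set, Tuple

def traces(events: List[str],
           precedes: Set[Tuple[str, str]]) -> List[Tuple[str, ...]]:
    """Enumerate every ordering of the given events that respects
    each (before, after) precedence pair. The scope limit here is
    simply the number of events considered."""
    out = []
    for p in permutations(events):
        idx = {e: i for i, e in enumerate(p)}
        if all(idx[a] < idx[b] for a, b in precedes):
            out.append(p)
    return out

events = ["receive_order", "check_stock", "ship", "bill"]
precedes = {("receive_order", "check_stock"),
            ("check_stock", "ship"),
            ("receive_order", "bill")}
for t in traces(events, precedes):
    print(" -> ".join(t))  # every admissible business process scenario
```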