The Unified Software Development Process: The Complete Guide to the Unified Process from the Original Designers.
... I argue that designing and developing the system using an iterative development technique (MacCormack 2001) known as the Unified Process (Jacobson, Booch, and Rumbaugh 1999) instead of the traditional waterfall model (Royce 1970) enables the designer to tackle the most difficult parts of the project first (Larman 1998). Visually modelling the system and targeting a software solution before considering a hardware implementation enables the system to mature at minimal financial cost (Fraietta 2001). ...
... The development of the Smart Controller was approached using the Unified Process (Jacobson, Booch, and Rumbaugh 1999), whereby the system requirements were broadly outlined and high-risk areas of development were identified and tackled first (Larman 1998). Larman gives an example in which tackling high-risk, high-value issues early is preferable. ...
... The Unified Process (Jacobson, Booch, and Rumbaugh 1999), in contrast to the waterfall lifecycle, combines risk-driven development with an iterative lifecycle (Larman 2002). In both models, system requirements are outlined and documented; however, the first goal of the Unified Process is to identify and tackle the riskiest issues. ...
Many contemporary composers and sound artists are using sensing systems, based on
control voltage to MIDI converters and laptop computers running algorithmic
composition software, to create interactive instruments and responsive environments.
Using an integrated device that encapsulates the entire system for performance can
reduce latency, improve system stability, and reduce setup complexity.
This research addresses the issues of how one can develop such a device, including
the techniques one would use to make the design easily upgradeable as newer
technologies become available, the programming interface that should be employed
for use by artists and composers, the knowledge bases and specialist expert skills that
can be utilised to gain the required information to design and build such devices, and
the low-cost hardware and software development tools appropriate for such a task.
This research resulted in the development of the Smart Controller, a portable
hardware/software device that allows performers to create music using
programmable logic control. The device can be programmed remotely through the
use of a patch editor or Workbench, which is an independent computer application
that simulates and communicates with the hardware. The Smart Controller responds
to input control voltages, Open Sound Control, and MIDI messages, producing
output control voltages, Open Sound Control, and MIDI messages (depending upon
the patch). The Smart Controller is a stand-alone device—a powerful, reliable, and
compact instrument—capable of reducing the number of electronic modules required
in a live performance or sound installation, particularly the requirement for a laptop
computer.
The success of this research was significantly facilitated through the use of the
iterative development technique known as the Unified Process instead of the
traditional Waterfall model; and through the use of the RTEMS real-time operating
system as the underlying scheduling system for the embedded hardware, an operating
system originally designed for guided missile systems.
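To make the patch concept above concrete, here is a minimal, hypothetical sketch of how a patch might route incoming control events (MIDI, OSC, or sampled control voltages) to outputs. The event structure, addresses, and scaling are invented for illustration and are not the Smart Controller's actual programming interface.

```python
# Hypothetical sketch of a "patch" as a set of event-routing rules: an incoming
# control event (MIDI, OSC, or a sampled control voltage) is mapped to one or
# more outgoing events. Names and scaling are illustrative only.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # "midi", "osc", or "cv"
    address: str   # e.g. MIDI controller number or OSC address
    value: float   # normalised 0.0 .. 1.0

def scale_to_cv(event: Event) -> Event:
    """Map any normalised input value onto a 0-5 V control-voltage output."""
    return Event("cv", "out/1", event.value * 5.0)

def echo_as_osc(event: Event) -> Event:
    """Forward the value unchanged to an OSC address."""
    return Event("osc", "/smart/out", event.value)

# A patch: input address -> list of output transformations.
PATCH = {
    "midi:cc/74": [scale_to_cv, echo_as_osc],
    "cv:in/1":    [echo_as_osc],
}

def process(event: Event) -> list[Event]:
    key = f"{event.kind}:{event.address}"
    return [rule(event) for rule in PATCH.get(key, [])]

if __name__ == "__main__":
    incoming = Event("midi", "cc/74", 0.5)
    for out in process(incoming):
        print(out)
```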
... The period of live operation is determined based on the point of live deployment in the unified software development process [23]. We use 6 months as the lower bound to be able to investigate post-launch phases. ...
... The initial choice is not final, as pilot results may lead to the decision to switch to another framework better suited to the business case. This phase also marks the beginning of the Iterative Software Development process [23] marked in Figure 2 that continues throughout later phases. ...
In 2021, enterprise distributed ledger technology (DLT) has evolved beyond the proof-of-concept stage. It is now providing business value to large consortia in several successful and well-documented case studies. Nevertheless, other consortia and initiatives are stuck in early stages of consortium formation or conceptualization. They stand to benefit from lessons learned by successful consortia, but an in-depth comparison has not yet been conducted. Thus, this study performs the first methodological comparison of large DLT consortia that have launched a product. Based on the temporal evolution of these consortia, a lifecycle with 4 stages and 12 sub-phases is developed to provide further guidance for early-stage consortia. The results show how 9 pioneer consortia have successfully integrated novel DLT into existing processes, but also point out challenges faced on the way.
... Decision-makers interact with the BI experts to define the required analysis (i.e. functional requirements [26]). Furthermore, the conceptual design of IoT-based BI applications also involves IoT experts, which distributes the responsibilities for the different parts of such applications. ...
... The three main roles involved in the design of IoT-based BI applications are: (i) Domain experts and decision-makers (or business users), who define the functional requirements (e.g. agronomists and farmers in our case study); (ii) BI experts, who help business users to define the functional requirements of the applications and define the non-functional requirements of the BI (e.g. the DW experts in our case study); and (iii) IoT experts, who define non-functional requirements of the IoT part considering the functional requirements (e.g. the UEB experts in our case study) [3], [7], [26]. ...
In the context of Industry 4.0, the analysis of Internet of Things (IoT) data with Business Intelligence (BI) technologies has acquired high relevance. However, designing and implementing IoT-based BI applications is hard for several reasons. Therefore, we propose a novel conceptual data model based on UML Profiles and Model Driven Architecture (MDA) for modelling and implementing IoT-based BI applications. Our approach provides highly readable data models of IoT, which are also compatible with traditional BI data models. Furthermore, it could help in the implementation process of the IoT subsystem through automatic code generation.
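A minimal sketch of the MDA-style code-generation idea mentioned above: a small conceptual model (a plain dictionary standing in for a UML-profile model) is turned into implementation-level source code. The sensor model and generated class are invented for illustration and are not taken from the paper.

```python
# Sketch of model-to-code generation: a conceptual model of an IoT sensor
# (here just a dictionary) is transformed into Python source code.
SENSOR_MODEL = {
    "name": "SoilMoistureSensor",
    "attributes": [("moisture", "float"), ("timestamp", "str")],
}

def generate_class(model: dict) -> str:
    lines = [f"class {model['name']}:"]
    args = ", ".join(f"{n}: {t}" for n, t in model["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for attr, _ in model["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

print(generate_class(SENSOR_MODEL))
```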
... The Unified Modeling Language (UML) [40,135,240] is a general-purpose, well-known modeling language in the field of classical software engineering. It provides a standard way to visualize the design of the classical software development life cycle. ...
... The software modernization method in classical software engineering has proved to be an effective mechanism that can realize the migration and evolution of software while retaining business knowledge. The solution proposed is systematic and based on existing, well-known standards such as the Unified Modelling Language (UML) [40,135,240] and the Knowledge Discovery Metamodel (KDM) [219]. The solution could reduce the effort of developing new quantum information systems. ...
Quantum software plays a critical role in exploiting the full potential of quantum computing systems. As a result, it is drawing increasing attention recently. This paper defines the term "quantum software engineering" and introduces a quantum software life cycle. Based on these, the paper provides a comprehensive survey of the current state of the art in the field and presents the challenges and opportunities that we face. The survey summarizes the technology available in the various phases of the quantum software life cycle, including quantum software requirements analysis, design, implementation, test, and maintenance. It also covers the crucial issue of quantum software reuse.
... Many software engineering technologies have been created precisely to avoid the delayed issue effect by removing risk as early as possible. Boehm's spiral model [18], Humphrey's PSP [40] and TSP [41], the Unified Software Development Process [46], and agile methods [7] all in part or in whole focus on removing risk early in the development lifecycle. Indeed, this idea is core to the whole history of iterative and incremental product development dating back to "plan-do-study-act" developed at Bell Labs in the 1930's [56] and popularized by W. Edwards Deming [26]. ...
Many practitioners and academics believe in a delayed issue effect (DIE); i.e. the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period 2006–2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for this mismatch between this common rule of thumb and empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where applied), and that a long-held truth in software engineering should not be considered a global truism.
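As an illustration of how the delayed issue effect could be checked on issue-tracker data, here is a small sketch using invented numbers; the phases, delays and efforts are hypothetical, and the statistic (median effort per phase of delay) is only one of many reasonable choices.

```python
# Sketch of looking for a delayed issue effect in issue-tracker data.
# The data below is invented; phases are ordinal (1 = requirements,
# 2 = design, 3 = coding, 4 = testing), effort is person-hours to resolve.
from statistics import median

issues = [
    {"introduced": 1, "resolved": 1, "effort": 3.0},
    {"introduced": 1, "resolved": 3, "effort": 4.0},
    {"introduced": 2, "resolved": 2, "effort": 2.5},
    {"introduced": 2, "resolved": 4, "effort": 3.5},
    {"introduced": 3, "resolved": 4, "effort": 3.0},
]

# Group resolution effort by how many phases the issue lingered.
by_delay: dict[int, list[float]] = {}
for issue in issues:
    delay = issue["resolved"] - issue["introduced"]
    by_delay.setdefault(delay, []).append(issue["effort"])

# A delayed issue effect would show median effort growing with delay.
for delay in sorted(by_delay):
    print(f"delay={delay} phases  median effort={median(by_delay[delay]):.1f} h")
```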
... On the other hand, UML is a more comprehensive modeling language that encompasses various diagram types to represent different aspects of software systems, including structure, behavior, and interactions [27]. UML includes class diagrams, which are similar to ERDs and used to model the static structure of a system by showing classes, their attributes, methods, and relationships. ...
In an era dominated by information technology, the critical discipline of data management remains undervalued compared to the innovations it enables, such as artificial intelligence and social media. The ambiguity surrounding what constitutes data management and its associated activities complicates efforts to explain its importance and ensure data are collected, stored and used in a way that maximizes value and avoids failures. This paper aims to address these shortcomings by presenting a simple framework for understanding data management, referred to as MAGIC. MAGIC encompasses five key activities: Modeling, Acquisition, Governance, Infrastructuring, and Consumption support tasks. By delineating these components, the MAGIC framework provides a clear, accessible approach to data management that can be used for teaching, research and practice.

1. Living in the Age of Magic

In the 1962 science fiction book "Profiles of the Future: An Inquiry into the Limits of the Possible", Arthur C. Clarke formulated his famous Three Laws, of which the third law is the best known: "Any sufficiently advanced technology is indistinguishable from magic." The modern world is increasingly magical, driven by the relentless advances in information technology. Yet the foundations of this magical world remain ill understood, and sometimes even neglected. The modern world is digital. Virtually every aspect of human existence is becoming digitalized or depends in some way on information technology and the information systems built based on it. Just in the last three decades alone, the explosive developments in information technology gave rise to revolutionary changes in the way humans live. Consider some examples. The Internet, which became popular in the 1990s, allowed electronic commerce and distributed information exchange. Leveraging the Internet, social media burst into existence in the 2000s, dramatically changing the way humans socialize, obtain and disseminate information. More recently, artificial intelligence (AI), resulting in such marvels as ChatGPT and driverless cars, has been dubbed "the pinnacle of [human] ingenuity" [17]. AI is estimated to contribute $15 trillion to global GDP by 2030 [43], and could bestow world dominance on the country that leads in AI [18]. With the dramatic expansion of IT, unprecedented demands are placed on computing resources, storage, and bandwidth. Responding to this challenge is quantum computing, which harnesses the properties of the smallest elements of matter (photons, electrons, ions) for information processing and communication. The powers of the quantum particle are so bizarre and unbelievable that they are being called "The God Effect" [10]. Some suggest quantum computing "could be a revolution for humanity bigger than fire, bigger than the wheel" [31]. While new information systems continuously emerge, one thing remains constant, and it is bigger than the entirety of artificial intelligence, quantum computing, social media, online banking, and the Internet. This thing is digital information itself. The essence of information technology and the systems built with it is information or data (used synonymously here). Without digital information neither driverless cars nor YouTube is possible, and quantum computers are but heaps of expensive junk.
... Typically, use cases comprise a task or function title and a sequence of system-user interactions. Use case models are mostly structured procedurally so that within a use case other sub-use cases can be called conditionally or unconditionally (e.g., the prominent format of Unified Modeling Language (UML) use cases in the Unified Process (Jacobson, Booch, & Rumbaugh, 1999)). Some approaches stress the relation to other models in the software development process, such as the link of essential use cases (Constantine & Lockwood, 1999) to view models in graphical user interfaces. ...
User requirements are based on information about user and stakeholder groups, their goals and tasks, and the environments in which the system will be used. This chapter provides an overview of techniques for collecting and analyzing user requirements. It is dedicated to the purpose and outcome of user requirements analyses: What are the established formats for documenting user research insights and user requirements? And what information is required when specifying user requirements? The chapter explores the methods for collecting the required information. User requirements are collected through user research activities, ideally already at early phases of the development process, using different approaches that can be differentiated based on their methodology, the type of data recorded, as well as their setup (laboratory vs. natural environment). The user-centered development of a system always includes a combination of different methods for collecting, analyzing, and documenting user requirements.
... the Unified Process (RUP [7]) and value analysis (see VDI 2800 and DIN EN 16271) support describing the benefit expected by the customer as precisely as possible in the early phases of product development. Uncertainty is then to be reduced step by step to a minimum and change costs lowered [8]. ...
Content: Fuzzy requirements, different solution alternatives, or simulation models of limited validity are examples of inherent uncertainty in product development. This paper presents a model-based approach that complements the industrially established thinking in safety factors with qualitative aspects. Models of information quality help to characterize the uncertainty of development artefacts descriptively. By means of semantic technologies, uncertainty thus becomes genuinely manageable, not in the sense of a calculation but in the sense of a qualitative interpretation. This creates valuable knowledge for the iterative requirements analysis, the evaluation of alternative system architectures, or the reconfiguration of simulations.
... Prescriptive process models fall into two categories [11]: the scope of a life-cycle process model is the complete development process of a software product, as in the Unified Process [72] and in Cleanroom Software Engineering [73], while the scope of an engineering process model is a specific practice within this process. Statistical testing [74] and hybrid cost estimation [75] are examples of engineering process models. ...
There is a plethora of software development practices. Practice adoption by a development team is a challenge by itself. This makes software process improvement very hard for organisations. I believe a key factor in successful practice adoption is proper incentives. Wrong incentives can lead a process improvement effort to failure. I propose to address this problem using game theory. Game theory studies cooperation and conflict. I believe its use can speed the development of effective software processes. I surveyed game-theory applications to software engineering problems, showing the potential of this technique. By using game-theoretic models of software development practices, we can verify if the behaviour at equilibrium converges towards team cooperation. Modern software development is performed by large teams, working multiple iterations over long periods of time. Classic game representations do not scale well to model such scenarios, so abstraction is needed. In this thesis, I propose GTPI (game-theoretic process improvement), a software process improvement approach based on empirical game-theoretic analysis (EGTA) abstractions. EGTA enables the production of software process models of manageable size. I use GTPI to address technical debt, modelling developers that prefer quick and cheap solutions instead of high-quality time-consuming fixes. I have also approached bug prioritisation with GTPI, proposing the assessor-throttling prioritisation process, and developing a tool to support its adoption.
... According to most of the templates provided in the IEEE Recommended Practice for Software Requirements Specifications (IEEE Computer Society, 1998), FRs and NFRs should be stated separately in specification documents. NFRs may also be attached to use cases wherever possible, other than global NFRs (Jacobson et al., 1999; Glinz, 2007). ...
... On the one hand, there is a clearly structured conceptual framework for the modelling and simulation of systems in general (Simulation Modelling), with multiple applications and uses in areas such as industry, the environment, chemistry, electromechanics, etc. [6]; on the other hand, there is a conceptual base around the understanding, analysis, design and monitoring of software development processes, which makes it possible to explore different process models, from that of [12] to the one explained later in this work, the Craig Larman Method [13]. ...
The implementation of any software development process involves the consumption of critical resources. Software engineers cannot experiment with different development processes before starting them in real projects, due to the time that would entail and the number of elements involved, so it is vital to have tools that allow the pre-visualization of the results of executing the software development process and of how environmental variables affect it, making it possible to anticipate under what conditions the software development process will be deployed. This paper presents the modelling and simulation of a software development process using System Dynamics (SD), which allows the graphical representation of the elements intervening in the software process and the incorporation of as many relevant elements as possible. As a software cost estimation reference, the COCOMO estimation model was used, which beyond being reliable has a theoretical-practical foundation. As an ideal, and real, software process system, the Craig Larman Software Process model was chosen, also known as the Larman Method. The simulation model developed here allows one to make some initial estimation of the software process and of its elements' behaviour in the course of the simulation time. This is possible thanks to the observation and study of the system's state variables, empowering one to discern the effect of parameter changes on the general process. This model becomes a tool for supporting Software Project Management teams and enterprises whose business focuses on technological project management.
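For reference, the Basic COCOMO equations that the paper uses as its cost-estimation baseline can be sketched as follows; the coefficients are the standard published values for Basic COCOMO, and the sample input is arbitrary.

```python
# Basic COCOMO effort and duration equations (Boehm 1981).
# Coefficients depend on the project mode.
COEFFICIENTS = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, duration in months) for `kloc` KLOC."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # E = a * KLOC^b
    duration = c * effort ** d      # D = c * E^d
    return effort, duration

if __name__ == "__main__":
    effort, duration = basic_cocomo(32, "organic")
    print(f"effort ~ {effort:.1f} person-months, schedule ~ {duration:.1f} months")
```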
... Now, if the requirement understanding is clear, the framework queries the customer's availability in the project. If the customer is not available, the Unified Process model will be suggested for project development [16]. ...
... (a) End-users' Requirements: Firstly, identify the needs of the application's end-users, since the end-users' needs hold a unique position in the design decisions of any DF application (see Figure 3). Using the functional requirements of the DFARS process, the end-users' requirements are interpreted by employing use case, activity or sequence diagrams. These users' needs (which are depicted as Use-Case components) are constantly reviewed by all users of the DF application to keep up to date with the current trends in digital forensics, as well as adhere to the core needs of the DF application. ...
The requirements to identify the cause of an incident, following the trail of events preceding the incident, as well as proving the consistency of the potential evidence recovered from the alleged incident, ultimately demand a proactive approach towards the design of digital forensics (DF) applications. Success in the use of digital evidence for analysis or validation in digital forensic investigations depends on established processes, scientific methods, guidelines and standards that are used in DF application designs and developments. Adding legal and scientific processes capable of absorbing the constant upgrades and updates in the design of DF applications is a boost to the already existing DF processes and standards. However, there is a need for such processes to be clearly defined, so that the non-technical audience involved in crime investigations and decisions can easily comprehend the DF application designs and development processes. To proactively overcome this challenge, this paper proposes the digital forensic application requirements specifications (DFARS) process. The proposed DFARS process outlines an easy-to-apply design process for designing any DF application. It further demonstrates, in a case scenario, how to apply the DFARS process using the online neighbourhood watch (ONW) system. The ONW system is a DF application that crowd-sources potential digital evidence (PDE). One of the objectives of the ONW system is to increase the volume of available PDE to enhance success in the prosecution of neighbourhood crime. Therefore, using the DFARS process with the ONW system as a case scenario, the results show that the DFARS process ensures an easy application of modifiability, pluggability and reliability features at any point in the life cycle of a DF application. This thereby accommodates the constant upgrades and changes associated with electronic devices, operating systems, hardware and other requirements. It further shows an easy-to-follow process that is understandable to both technical and non-technical audiences in the field of digital forensics.
... The methodology is based on the unified process (UP) [5][6][7] and traditional database design methodology, well known in the software development community. In addition, "the methodology uses models and methods based on the unified modeling language (UML) [8] and the object constraint language (OCL)" [9], which ensures the achievement of the sub goal (1). ...
A generalized model of information protection of a database management system is proposed, which can be used to implement database protection under any database management system. This model development methodology consists of four stages: requirements gathering, database analysis, "multi-level relational logical construction and a specific logical construction. The first three steps define actions for analyzing and developing a secure database, thus creating a generalized and secure database model". Database (DB) protection issues "are critical in ensuring the protection of modern corporate systems. However, most of the work in the field of DB security is aimed primarily at overcoming existing and already known vulnerabilities", implementing basic access models and addressing issues specific to a particular database management system (DBMS). The problem is the lack of a database design methodology that considers security (and, therefore, database protection models) throughout the life cycle, especially at the earliest stages. This makes it difficult to create secure databases. Our goal is to solve this problem by offering a methodology for designing secure databases. Oracle9i Label Security software (OLS9i) [1, 2] "is a component of version 9 of Oracle database management system, which allows to implement multi-level databases" [3]. OLS9i defines the labels assigned to rows and database users. These labels contain information about confidentiality for rows and authorization information for users. OLS9i defines a combined access control mechanism, taking ...
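A minimal sketch of the label-based, multi-level access idea described above; this is not the Oracle Label Security API, and the levels, rows, and clearance check are invented for illustration.

```python
# Illustrative label-based row access control: a user sees a row only if
# their clearance level dominates the row's confidentiality label.
LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SECRET": 2}

rows = [
    {"id": 1, "payload": "price list",     "label": "PUBLIC"},
    {"id": 2, "payload": "contract draft", "label": "CONFIDENTIAL"},
    {"id": 3, "payload": "merger plan",    "label": "SECRET"},
]

def visible_rows(user_clearance: str, table: list) -> list:
    """Filter rows whose label is dominated by the user's clearance."""
    allowed = LEVELS[user_clearance]
    return [r for r in table if LEVELS[r["label"]] <= allowed]

print([r["id"] for r in visible_rows("CONFIDENTIAL", rows)])  # -> [1, 2]
```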
... The Unified Process (UP) [30] was proposed in an attempt to take advantage of the best features of life cycle processes and to include some agile development principles. UP is a generic process framework designed as a structure for the methods and tools of the Unified Modeling Language (UML). ...
Software processes have evolved significantly since software engineers started to follow a disciplined flow of activities to gain quality and productivity in development. Several process models, methodologies, methods and/or software development practices have been proposed, adopted or rejected that differ at the ceremony level. For many years, there have been conflicts over whether to follow a totally traditional approach (“classical”) or to become more agile. Each approach has its strengths and weaknesses, and each has followers and critics. However, the ongoing diversity of software projects and the advancement of technology has led to debates about what kinds of software process approaches are more context-effective. This paper surveys existing traditional and agile processes and discusses their challenges.
... In fact, this notion of program differences is so central to the practice of programming that nearly all software development is done using a version control system [20,38] such as git [15] or mercurial [28], a tool that stores successive versions of given documents-typically source code-and is often structured around changesets, sets of consistent differences between versions accompanied by a commit message, an informal description of the change given by the person who wrote it. ...
Computer programs are rarely written in one fell swoop. Instead, they are written in a series of incremental changes. It is also frequent for software to get updated after its initial release. Such changes can occur for various reasons, such as adding features, fixing bugs, or improving performance. It is therefore important to be able to represent and reason about those changes, making sure that they indeed implement the intended modifications. In practice, program differences are very commonly represented as textual differences between a pair of source files, listing text lines that have been deleted, inserted or modified. This representation, while exact, does not address the semantic implications of those textual changes. Therefore, there is a need for better representations of the semantics of program differences. Our first contribution is an algorithm for the construction of a correlating program, that is, a program interleaving the instructions of two input programs in such a way that it simulates their semantics. Further static analysis can be performed on such correlating programs to compute an over-approximation of the semantic differences between the two input programs. This work draws direct inspiration from an article by Partush and Yahav [32], which describes a correlating program construction algorithm that we show to be unsound on loops that include break or continue statements. To guarantee its soundness, our alternative algorithm is formalized and mechanically checked within the Coq proof assistant. Our second and most important contribution is a formal framework allowing to precisely describe and formally verify semantic changes. This framework, fully formalized in Coq, represents the difference between two programs by a third program called an oracle. Unlike a correlating program, such an oracle is not required to interleave instructions of the programs under comparison, and may "skip" intermediate computation steps. In fact, such an oracle is typically written in a different programming language than the programs it relates, which allows designing correlating oracle languages specific to certain classes of program differences, and capable of relating crashing programs with non-crashing ones. We design such oracle languages to cover a wide range of program differences on a toy imperative language. We also prove that our framework is at least as expressive as Relational Hoare Logic by encoding several variants as correlating oracle languages, proving their soundness in the process.
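As a reminder of the textual-difference representation that the abstract contrasts with semantic differences, here is a minimal example using Python's standard difflib module on two invented versions of a tiny function.

```python
# Minimal example of a textual program difference: two versions of a small
# function compared line by line with difflib (output similar to `git diff`).
import difflib

old = [
    "def area(r):",
    "    return 3.14 * r * r",
]
new = [
    "import math",
    "",
    "def area(r):",
    "    return math.pi * r * r",
]

for line in difflib.unified_diff(old, new, fromfile="area_v1.py",
                                 tofile="area_v2.py", lineterm=""):
    print(line)
```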
... Since traditional software processes like UP [18] were taught in previous lessons, authors used a teaching strategy based on comparison. Thereby, agile method concepts were presented in comparison with traditional methods. ...
... The evolution of software engineering to model-based software engineering [21] coincides with the model-based approach already successfully applied to simulation and systems engineering. The Unified Modeling Language (UML) that was also developed in the 1990s [22] opened new horizons in engineering of software intensive systems. ...
The model-based paradigm has been adopted by a number of disciplines since its introduction in the late 70s. After a brief history of how the model-based approach started in simulation, the merit and the spread of the model-based approach are described. The concept of a model is subsumed in simulation, but in many cases employing just the model-based approach does not involve simulation. The superiority and advantages of the simulation-based approach over the model-based approach are manifold. This article describes the differences and advocates the use of the simulation-based paradigm in advancing any computational discipline.
... There is no general consensus in the community about the concepts of NFP and QoS. In [78], the authors define Non-Functional Properties (NFP) as the element that specifies system properties, such as environmental and implementation constraints, performance, platform dependencies, maintainability, extensibility, and reliability; in short, a requirement that specifies constraints on a functional requirement. In the early phases of system development, ...
Most innovative applications having robotic capabilities, like self-driving cars, are developed from scratch with little reuse of design or code artifacts from previous similar projects. As a result, work is at times duplicated, adding time and economic costs. The absence of integrated tools is the real barrier that exists between early adopters of standardization efforts and the early majority of the research and industrial community. These software-intensive systems are composed of distributed, heterogeneous software components interacting in a highly dynamic, uncertain environment. However, no significant systematic software development process is followed in robotics research. The process of developing robotic software frameworks and tools for designing robotic architectures is expensive both in terms of time and effort, and the absence of a systematic approach may result in ad hoc designs that are not flexible and reusable. Making an architecture meta-framework a point of conformance opens new possibilities for interoperability and knowledge sharing in the architecture and framework communities. We tried to make a step in this direction by proposing a common model and by providing a systematic methodological approach that helps in specifying different aspects of software architecture development and their interplay in a framework.
... A related claim is that most or all projects follow the same sequences of phases or lifecycle [8,67,68] -see above. These texts often make assumptions similar to those of Technical Rationality; for instance, Royce [8], Kruchten [107] and Jacobson et al. [121] all present problem solving and problem framing as loosely coupled. ...
The most profound conflict in software engineering is not between positivist and interpretivist research approaches or Agile and Heavyweight software development methods, but between the Rational and Empirical Design Paradigms. The Rational and Empirical Paradigms are disparate constellations of beliefs about how software is and should be created. The Rational Paradigm remains dominant in software engineering research, standards and curricula despite being contradicted by decades of empirical research. The Rational Paradigm views analysis, design and programming as separate activities despite empirical research showing that they are simultaneous and inextricably interconnected. The Rational Paradigm views developers as executing plans despite empirical research showing that plans are a weak resource for informing situated action. The Rational Paradigm views success in terms of the Project Triangle (scope, time, cost and quality) despite empirical research showing that the project triangle omits critical dimensions of success. The Rational Paradigm assumes that analysts elicit requirements despite empirical research showing that analysts and stakeholders co-construct preferences. The Rational Paradigm views professionals as using software development methods despite empirical research showing that methods are rarely used, very rarely used as intended, and typically weak resources for informing situated action. This article therefore elucidates the Empirical Design Paradigm, an alternative view of software development more consistent with empirical evidence. Embracing the Empirical Paradigm is crucial for retaining scientific legitimacy, solving numerous practical problems and improving software engineering education.
... The development of the proposed domain model was based on a well-known object-oriented software development methodology: the Unified Software Development Process (Jacobson et al. 1999). Specifically, a use case-driven approach is used in the process of defining key system abstractions. ...
Well-structured and organized cadastral records and cadastral maps are a prerequisite for improving land administration services. In recent years, numerous problems and issues associated with cadastral data have been encountered in Serbia, and attempts to overcome these problems have been made. The integration of land registry data with cadastral data containing land use component usually results in inconsistencies in land administration databases. To address this problem, an appropriate domain model has been developed using the Unified Process methodology and considering the Land Administration Domain Model and other ISO 19000 standards. Examples of verifying land administration data integrity in relational and object-oriented data models are presented.
... Modelling business processes with UML, which has established itself as the standard for modelling information systems and is moving towards more consistent specifications of them, requires effort; Marshall (1999), briefly described in Azevedo (2001), stands out here. A systematic approach to using UML for building business process models is presented, based on the framework proposed by Vernadat (1996) for Enterprise Modelling (EM), which is extended using concepts proposed by Jacobson (1999) and Booch, Rumbaugh & Jacobson (2005), especially for use cases. ...
... The Unified Process [Jacobson et al. 1999] has received a great deal of attention from the software development community, among other reasons because its principal authors and sponsors are among the leading methodologists of object orientation: James Rumbaugh, Ivar Jacobson and Grady Booch. ...
... The combination of different components in a virtual representation is necessary to develop an adequate software architecture. Booch et al. provide the following definition: "[...] an architecture is the set of significant decisions about the organization of a software system [...]" [7]. A software architecture assigns functionality to software components and describes them. ...
Today's flexible demands and short product life cycles have led to modular thinking in machinery and plant engineering. A hidden challenge for this sector is the maintenance of plant documentation throughout the entire operating time of the machine components. This paper introduces an architecture for updating plant documentation. The concept is based on a flexible master-slave hierarchy for IT-integrated machine components and aims at detecting physical changes in them. The standard data exchange format AutomationML functions as a decentralized and up-to-date virtual representation of each component, carrying all types and contents of both construction and documentation disciplines.
... It has become common to specify UIs using use cases [10]. Use cases provide a structured approach to designing and documenting system behaviour. They can be read or written by non-software professionals. ...
... There is no general agreement in the community about the concepts of NFP and QoS. In [3], the authors define Non-Functional Properties (NFP) as those that specify system properties, such as environmental and implementation constraints, performance, platform dependencies, maintainability, extensibility, and reliability; in short, a requirement that specifies constraints on a functional requirement. QoS is the ability of a service to provide a quality level matching the different demands of the clients [4]. ...
This book contains the Proceedings of research papers presented at the 1st International Conference of the IEEE Nigeria Computer Chapter (IEEEnigComputConf'16), held between Wednesday, 23rd November, 2016 and Saturday, 26th November, 2016 at the University of Ilorin, Ilorin, Kwara State, Nigeria.
The conference was organized by the IEEE Nigeria Computer Chapter (http://www.ieee.org/go/nigeriacomputerchapter) in collaboration with the Department of Computer Science, Faculty of Communication and Information Sciences, University of Ilorin. The Department of Computer Engineering, Faculty of Engineering and Technology of the same institution also served as a technical co-sponsor.
In all, a total of over sixty (60) papers had been submitted at the time of going to press. Apart from Nigeria, submissions were received from countries such as Malaysia, South Africa and Pakistan. The papers were subjected to a referee process with respect to their actual content and level of originality. The thirty-eight (38) papers which appear in these Proceedings are those that substantially met the set acceptance criteria.
The development of the educational software Analisar addresses the theme of technological development as a form of development (MYRDAL, 1984; BRUM, 1997; MAIA, 2006; REBOUÇAS, 2010), in a context of a lack of technological training in engineering in Brazil (VEJA, 2007; ANPEI, 2009; FINEP, 2009; ESTADÃO, 2010; FNE, 2011; PROTEC, 2011) and worldwide (BERLIN DEUTSCH, 2009; VDI BRASIL, 2010). The school, embedded in the social fabric, cannot ignore these problems. As educational institutions take a position, projects emerge to encourage the study of engineering, such as CITEC Médio. In this sense, this study proposes to design, develop and validate the educational software Analisar, whose objective is to serve as a learning object in the area of Mathematics, more specifically Combinatorics.
Robotic systems are becoming safety-critical systems as they are deployed in unstructured, human-centered environments. These software-intensive systems are composed of distributed, heterogeneous software components interacting in a highly dynamic, uncertain environment. However, no systematic software development process is followed in robotics research. This is a real barrier for system-level performance analysis and reasoning, which are in turn required for scalable benchmarking methods and for reusing existing software. This chapter provides an end-to-end overview of how robotic software systems can be formally specified, from requirement modeling, through solution space exploration and architecture modeling, to the generation of executable code. The process is based on the SafeRobots framework—a model-driven toolchain for designing software for robotics. Several domain-specific modeling languages developed as part of this integrated approach are also discussed.
This article briefly presents the potential need to develop an extension to production studies (here called televisual software studies). This extension encompasses processes related to the management, integration and synchronization of software development that would be carried out by TV producers during the production of television programmes, so that broadcasters can subsequently provide companion apps that synchronize advertisements across screens. It is argued that, although this alternative makes TV production more complex, it may also minimize the risk that audience distraction during their multi-screen experiences reduces the sponsorship of TV programmes.
Advances in information technologies coupled with increased knowledge about genes
and proteins have opened new avenues for studying protein complexes. There is a growing
need to provide structured and integrated knowledge about various proteins for the study
of unknown proteins, the search for new drugs and the application of personalized medicine.
Indeed, proteins are biological molecules that play an essential role in identifying causes
of diseases. Therefore, providing structured knowledge about proteins is one of the most
important and frequently studied issues in biological and medical research, particularly
after the conclusion of the Human Genome Project, which helped answer the question of
whether there is a unique correspondence between genes and generated proteins, opening
new avenues for the study of proteins.
In order to create universal protein knowledge bases, it is particularly important to find structured representations for them, such as ontologies. Therefore, several computational approaches have been proposed to develop ontologies integrating knowledge about proteins. However, these approaches, even if they provide a set of structured vocabularies for the protein domain, do not support dynamicity: they transform static protein sources into static ontologies, or they develop static protein ontologies with a small number of concepts. To address these concerns, the aim of this doctoral project is to dynamically develop a protein ontology that will be exploited by scientists for medical research (i.e. the application of personalized medicine, the discovery of new diseases, the development of new drugs, ...). To do this, it was necessary to:
1. Understand the biological process of protein synthesis from which their structure will be developed. The appropriate computational solution for modelling this process is related to Multi-Agent Systems. Indeed, the collaborative nature of the entities that synthesize the protein is strongly reminiscent of the functioning of a Multi-Agent System.
2. Align the sequences generated by the Multi-Agent System with sequences stored in existing protein sources to allow their annotation. The annotated category builds the class of known proteins. In the absence of annotation, the resulting sequences are categorized as evolved or abnormal proteins; a similarity rate determines which of these two categories applies.
3. Define categories to enable the dynamic development of a protein knowledge base in the form of an ontology. This choice is motivated by the hierarchical nature of proteins. To date, researchers have focused on the development of static protein knowledge bases. Our aim is to propose a dynamic structuring of several types of proteins (known, evolved and abnormal) in the form of a protein ontology that will be exploited by scientists for consultation, research and comparison of proteins, allowing a better understanding of living beings with regard to medical, pharmaceutical and pathological issues. In concrete terms, a platform has been developed to meet these needs.
All these proposals have been experimented with and have shown that the protein sequencing performed by the Multi-Agent System we propose is more efficient than that of biological approaches. Moreover, the dynamicity of our process guarantees the evolution of the developed ontology.
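A small sketch of the similarity-rate categorization described in point 2 of the abstract above, using Python's difflib as a stand-in for a real sequence-alignment tool; the thresholds and sequences are invented for illustration.

```python
# Categorize an aligned sequence as known, evolved, or abnormal depending on
# how closely it matches a reference sequence.
from difflib import SequenceMatcher

def categorise(candidate: str, reference: str,
               known_threshold: float = 0.95,
               evolved_threshold: float = 0.70) -> str:
    rate = SequenceMatcher(None, candidate, reference).ratio()
    if rate >= known_threshold:
        return "known"
    if rate >= evolved_threshold:
        return "evolved"
    return "abnormal"

print(categorise("MKTAYIAKQR", "MKTAYIAKQR"))   # known
print(categorise("MKTAYIGKQR", "MKTAYIAKQR"))   # evolved
print(categorise("GGGGGGGGGG", "MKTAYIAKQR"))   # abnormal
```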
The number of research reports on how to properly teach agile skills in undergraduate courses, in conjunction with technical topics, is a good indicator of how important such skills are in academia and industry nowadays. Such investigations have addressed challenges like how to engage students with agile principles and values without getting distracted by technology, or how to balance theory and practice so that students meet learning objectives through practical experience. This paper intends to contribute to this research topic by describing new strategies for our particular needs for teaching agile in an introductory software engineering course, including better evaluation criteria for agile values and practices, and higher-quality projects. The described strategies include a new arrangement of theoretical, laboratory, and project sessions, as well as a 'Continuous Delivery Pipeline' adapted to our educational context, with very promising results.
This document describes the experience of developing a timestamping application for the Electronic Factoring process in Ecuador, which performs the electronic signing of XML documents with a timestamp. The need for this application originates in an Electronic Factoring platform developed by the company BIGDATA CA, where a mechanism was sought to guarantee the integrity and validity over time of certain documents generated by the system. The UWE methodology was used for its process-oriented approach, its support for requirements elicitation, and its CASE tool support, allowing higher-quality applications to be developed. The use of electronic signatures in conjunction with timestamps gave greater legal validity and integrity to the information contained in the signed documents, so that they can be used in judicial proceedings or protected against alteration. These two mechanisms helped give greater security and agility to the Electronic Factoring process, since the signing process is now done in an entirely online environment rather than manually, as it was being done before.
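A minimal sketch of the underlying idea of binding a document digest to a point in time; the real application uses electronic signatures and trusted timestamps (e.g. from a timestamping authority) rather than this standard-library illustration, and the sample XML is invented.

```python
# Bind the SHA-256 digest of an XML document to a UTC timestamp.
import hashlib
from datetime import datetime, timezone

def digest_with_timestamp(xml_bytes: bytes) -> dict:
    """Return the SHA-256 digest of an XML document plus a UTC timestamp."""
    return {
        "sha256": hashlib.sha256(xml_bytes).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = digest_with_timestamp(b"<invoice><total>100.00</total></invoice>")
print(record)
```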
This chapter presents a background in cognitive processes such as problem-solving and analogical reasoning for considering modelling from an object-oriented perspective within the domain of requirements engineering. The chapter then describes a research project and the findings from a set of four cases which examine professional practice from the perspective of cognitive modelling for object-oriented requirements engineering. In these studies, it was found that the analysts routinely built models in their minds and refined them before committing them to paper or communicating these models to others. The studies also showed that object-oriented analysts depend on analogical reasoning, where they use past experience and abstraction to address problems in requirements specification.
The traditional wisdom for designing database schemas is to use a design tool (typically based on a UML or E-R model) to construct an initial data model for one’s data. When one is satisfied with the result, the tool will automatically construct a collection of 3rd normal form relations for the model. Then applications are coded against this relational schema. When business circumstances change (as they do frequently) one should run the tool again to produce a new data model and a new resulting collection of tables. The new schema is populated from the old schema, and the applications are altered to work on the new schema, using relational views whenever possible to ease the migration. In this way, the database remains in 3rd normal form, which represents a “good” schema, as defined by DBMS researchers. “In the wild”, schemas often change once a quarter or more often, and the traditional wisdom is to repeat the above exercise for each alteration.
In this paper we report that the traditional wisdom appears to be rarely-to-never followed for large, multi-department applications. Instead, DBAs appear to attempt to minimize application maintenance (and hence schema changes) instead of maximizing schema quality. This leads to schemas which quickly diverge from E-R or UML models, and actual database semantics tend to drift farther and farther from 3rd normal form. We term this divergence of reality from 3rd normal form principles database decay. Obviously, this is a very undesirable state of affairs, and should be avoided if possible.
The paper continues with tactics to slow down database decay. We argue that the traditional development methodology, that of coding applications in ODBC or JDBC, is at least partly to blame for decay. Hence, we propose an alternate methodology that should be more resilient to decay.
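A small sketch of the "relational views to ease migration" tactic mentioned above, using SQLite from the Python standard library; the table, columns, and view are invented for illustration and are not drawn from the paper.

```python
# After a schema change, a view with the old table name keeps legacy queries
# working against the new schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- new schema: customer name split into separate columns
    CREATE TABLE customer_v2 (id INTEGER PRIMARY KEY,
                              first_name TEXT, last_name TEXT);
    INSERT INTO customer_v2 VALUES (1, 'Ada', 'Lovelace');

    -- view emulating the old single-column schema for unmodified applications
    CREATE VIEW customer AS
        SELECT id, first_name || ' ' || last_name AS name FROM customer_v2;
""")

print(conn.execute("SELECT name FROM customer WHERE id = 1").fetchone())
```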
In this paper, we present a tool that preserves phase consistency from specifications to the design phase by reverse engineering UML activity diagrams, designed from scenario specifications, back to scenarios to ensure that all of the original scenarios can be recreated. We use a set of action and action-link rules to specify the activity and scenario diagrams in order to provide consistency and rigor. Given an activity diagram depicting a common telecentre process(es), we present an algorithm that follows this set of action and action-link rules to reverse engineer the activity diagram back to its set of scenarios. The validation of this algorithm is achieved when, given a set of activity diagrams, the algorithm is able to recreate the original set of scenarios. Thus, all original specifications, in the form of scenarios, are ensured to be encapsulated within their activity diagram.
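A minimal sketch of the core idea of recovering scenarios from an activity diagram by enumerating start-to-end paths; the diagram and node names are invented, and the real approach constrains this walk with the action and action-link rules described above.

```python
# Enumerate every path from the start node to the end node of a small,
# acyclic activity diagram; each path corresponds to one scenario.
ACTIVITY = {                      # adjacency list of an activity diagram
    "start":   ["request service"],
    "request service": ["check availability"],
    "check availability": ["book slot", "notify unavailable"],
    "book slot": ["end"],
    "notify unavailable": ["end"],
    "end": [],
}

def scenarios(graph: dict, node: str = "start", path: list = None) -> list:
    path = (path or []) + [node]
    if node == "end":
        return [path]
    result = []
    for nxt in graph[node]:
        result.extend(scenarios(graph, nxt, path))
    return result

for s in scenarios(ACTIVITY):
    print(" -> ".join(s))
```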
Object-oriented programming languages support concise navigation of relations represented by references. However, relations are not first-class citizens and bidirectional navigation is not supported. The relational paradigm provides first-class relations, but with bidirectional navigation through verbose queries. We present a systematic analysis of approaches to modeling and navigating relations. By unifying and generalizing the features of these approaches, we developed the design of a data modeling language that features first-class relations, n-ary relations, native multiplicities, bidirectional relations and concise navigation.
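A small sketch of what a first-class, bidirectionally navigable relation could look like in an object-oriented host language; the API is hypothetical and is not the modeling language proposed in the paper.

```python
# A many-to-many relation owned by a single object, navigable from both ends.
from collections import defaultdict

class Relation:
    """A first-class, bidirectional many-to-many relation."""
    def __init__(self, name: str):
        self.name = name
        self._forward = defaultdict(set)   # source -> targets
        self._backward = defaultdict(set)  # target -> sources

    def add(self, source, target):
        self._forward[source].add(target)
        self._backward[target].add(source)

    def targets(self, source):
        return set(self._forward[source])

    def sources(self, target):
        return set(self._backward[target])

enrolled_in = Relation("enrolled_in")
enrolled_in.add("alice", "databases")
enrolled_in.add("bob", "databases")
print(enrolled_in.targets("alice"))      # courses of alice
print(enrolled_in.sources("databases"))  # students enrolled in databases
```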
Software has been a significant part of modern society for a long time. In particular, this paper is concerned with various software development process models. A software process model is a description of the sequence of activities carried out in a software engineering project, and the relative order of these activities. The paper presents some of the development models, namely the waterfall, V-shaped, incremental, RAD, iterative, spiral and agile models. Therefore, the main objective of this paper is to present different models of software development and different aspects of each model, to help developers select a specific model in a specific situation depending on customer demand.
Nowadays, the rapid evolution of needs due to technical innovation, competition, regulation, etc. increasingly leads to describing the field of study with conceptual models, business models, etc. in order to facilitate the evolution of the way computer systems operate. Currently, Model-Driven Engineering (MDE) offers tools that transform these models into applications or information systems which, of course, must evolve like the real systems they are supposed to represent. Generally, the development of an application takes place in several phases that constitute the software development cycle. Several teams of different kinds contribute to the different phases. Participants, experts in the domains under study, produce models reflecting their own perception of the system. Thus, the different perceptions of the participants during the analysis phase of an information system give rise to models representing specific subsystems that must then be brought together to obtain the complete model of the information system under study. The main objective of the thesis is to design and implement, in a software engineering workbench, the mechanisms and transformations that consist, on the one hand, in extracting the Greatest Common Model, a model factorizing the concepts common to several source models, and, on the other hand, in offering designers a methodology for monitoring the evolution of the factorization of the source models. To carry out the factorization, we used Formal Concept Analysis (FCA) and Relational Concept Analysis (RCA), data analysis methods used in Model-Driven Engineering and based on lattice theory. In a set of entities described by characteristics, these two methods extract formal concepts that associate a maximal set of entities with a maximal set of shared characteristics. These formal concepts are structured in a partial order of specialization that gives them a lattice structure. RCA makes it possible to complete the description of entities with relations between entities. The first contribution of the thesis is a method, based on FCA and RCA, for analyzing the evolution of the factorization of a model. This method relies on the ability of FCA and RCA to bring out higher-level thematic abstractions within a model, thus improving the semantics of the models. We show that these methods can also be used to follow the evolution of the analysis process with the stakeholders. We introduce metrics on modelling elements and on concept lattices that serve as a basis for developing recommendations, drawing on an experiment in which we study the evolution of the factorization of the 15 versions of the class model of the SIE-Pesticides information system. These versions were established during the analysis phase in sessions with a varying group of domain experts. The second contribution of the thesis is an in-depth study of the behaviour of RCA on UML models. We show the influence of the structure of the models on execution time, memory consumption, the number of steps and the size of the results through several experiments on the 15 versions of the SIE-Pesticides model.
To this end, we study several configurations (choices of modelling elements and of the relationships between them in the meta-model) and several parameters (whether to use unnamed elements, whether to use navigability). Metrics are introduced to guide the designer in steering the factorization process, and recommendations are made on which configurations and parameter settings to prefer. We also study the models as graphs through several indicators such as density and degree. The last contribution is a new approach for assisting inter-model factorization, whose objective is to gather into a single model all the concepts common to different source models designed by experts with different viewpoints on the system. This single model, which we call the Plus Grand Modèle Commun, captures the knowledge shared by several experts. The factorization is based on Formal Concept Analysis, and the formal concepts are classified using a decision tree. Beyond grouping the common concepts, this analysis produces new abstractions that generalize existing thematic concepts. We apply our approach to the 15 versions of the SIE-Pesticides model. This work is part of a research effort whose objective is to factor out thematic concepts within a single model and to control, through metrics, the profusion of concepts produced by FCA and, above all, by RCA. These contributions and the robustness of the recommendations will soon be validated and consolidated by repeating the experiment on the different models of the staff and student information system project of the Université de Djibouti, whose goal is to bring together all the models of the school and university information systems of the various institutions of the Republic of Djibouti.
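The factorization described above rests on Formal Concept Analysis: from a binary context of entities and their characteristics, it extracts every maximal pairing of an entity set with the characteristics they all share, and orders these pairs into a lattice. As a minimal illustration of that idea only, and not of the thesis's tooling, the following Python sketch naively enumerates the formal concepts of a toy class/attribute context (all names are invented).

```python
from itertools import combinations

# Toy binary context: UML-like classes and the attributes they declare.
# Purely illustrative; not the SIE-Pesticides model.
context = {
    "Pesticide": {"name", "toxicity"},
    "Parcel":    {"name", "area"},
    "Treatment": {"name", "date", "dose"},
}

objects = list(context)
attributes = sorted(set().union(*context.values()))

def common_attributes(objs):
    """Attributes shared by every object in objs (all attributes if objs is empty)."""
    sets = [context[o] for o in objs]
    return set(attributes) if not sets else set.intersection(*sets)

def objects_having(attrs):
    """Objects that possess every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (extent, intent) in which each part determines the other.
concepts = set()
for size in range(len(objects) + 1):
    for objs in combinations(objects, size):
        intent = frozenset(common_attributes(set(objs)))
        extent = frozenset(objects_having(intent))
        concepts.add((extent, intent))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

The brute-force closure over every object subset is only workable for tiny contexts; it is meant to show what a "concept" is, not how FCA or RCA tools compute lattices at scale.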
The design and implementation of an agent-based model of a complex system are usually guided by clear objectives and meet a specific need. Such models are often used to help understand the mechanisms of real systems and, ultimately, to support decision making. The evolution of a theory supported by validation stages through simulation needs to be guided by a methodology, which makes it possible to organize the work and to communicate with the end users so as to achieve the objectives within a reasonable time. We propose a methodology that extends the Unified Process and provides a set of tools to carry out the modelling work successfully. The methodology is iterative and incremental and follows a collaborative approach based on communication with end users on one side and specialists on the other. The communication is also supported by tools and a common ontology. The method's formalism is based on Agent UML combined with GAIA. A practical case is presented with an agent-based model of Rift Valley fever.
Automated machine analysis of natural language requirements poses several challenges. Complex requirements such as functional requirements and use cases are hard to parse and analyze, the language itself is unconstrained, the flow of requirements may be haphazard, and one requirement may contradict another, to name a few. In this paper, we present a lightweight semantic modeling technique that uses natural language processing to filter requirements and create a semi-formal semantic network of requirement sentences. We employ novel techniques for classifying the verbs used in requirements, semantic role labeling, discourse identification, and a few verb entailment and dependency relationships to generate a lightweight semantic network and critique the requirements. We discuss the design of the model and some early results obtained from analyzing real-life industrial requirements.
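To make the general idea concrete, here is a deliberately naive Python sketch of turning "shall" requirement sentences into a small labelled network: a hand-made verb lexicon stands in for the paper's verb classification and semantic role labeling, and all sentences, verb classes, and names are invented for illustration.

```python
from collections import defaultdict

# Toy verb lexicon standing in for a learned verb classification.
VERB_CLASSES = {
    "send": "communication",
    "store": "persistence",
    "display": "presentation",
    "validate": "constraint",
}

requirements = [
    "The controller shall send status messages",
    "The system shall store sensor readings",
    "The user interface shall display alarms",
]

# network[agent] -> list of (labelled edge, theme) pairs.
network = defaultdict(list)

for sentence in requirements:
    words = sentence.lower().split()
    if "shall" not in words:
        continue  # filter out sentences that do not state an obligation
    i = words.index("shall")
    agent = " ".join(words[1:i])   # drop the leading article
    verb = words[i + 1]
    theme = " ".join(words[i + 2:])
    edge = VERB_CLASSES.get(verb, "unclassified")
    network[agent].append((f"{verb} ({edge})", theme))

for agent, edges in network.items():
    for label, theme in edges:
        print(f"{agent} --{label}--> {theme}")
```

A real pipeline would of course rely on parsing, semantic role labeling, and entailment resources rather than word splitting; the sketch only shows the shape of the resulting semantic network.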
System integration is a persistent headache for IT technologists. Properly integrating different components and services into a remote healthcare solution involves several aspects. Moreover, integration cannot be regarded from a technical point of view alone; it has to take into account the deployment scenario, service organization, the educational and business context, and resource sharing with other services. System integration is a crucial activity that must be properly planned. It is based on system and service architecture design, but it must also be informed by the use case and deployment scenario, the functional and technical specifications, and interoperability requirements with other services. Given the variety and complexity of system integration, this chapter addresses only some of the major issues; in particular, the authors focus on a checklist of system integration topics; the interoperability and portability of data, as one of the crucial enablers of integration and of proper deployment of solutions in the healthcare domain; a structured approach to solution deployment; and user interface design as a basic means of engaging medical professionals. Finally, the critical issues are raised by breaking down the lessons learnt.
Software development is an intensively collaborative activity, in which common collaboration issues (task management, resource use, communication, etc.) are aggravated by the fast pace of change, the complexity and interdependency of artifacts, an ever larger volume of context information, the geographical distribution of participants, and so on. Consequently, tool-based support for collaboration is a pressing issue in software engineering. In this thesis, we address collaboration in the context of modeling and enacting development processes. Such processes are traditionally conceived as structures imposed on the development of a software product; however, a sizable proportion of collaboration in software engineering is ad hoc and composed of unplanned activities. To make software processes contribute to collaboration support, especially of the unplanned kind, we focus on their function as repositories of information about the main elements of collaboration and their interactions. Our contribution is, on the one hand, a conceptual model of collaborative development support that can account for popular tools such as version control systems and bug tracking systems. This conceptual model is then applied to software processes, and we define a global approach for exploiting process information for collaboration support, based on the central notions of a query language and an event handling mechanism. On the other hand, we propose a metamodel, CMSPEM (Collaborative Model-Based Software & System Process Engineering Metamodel), which extends SPEM (Software & System Process Engineering Metamodel) with the concepts and relationships necessary for collaboration support. This metamodel is supported by model creation tools (graphical and textual editors) and by a process server that implements an HTTP/REST-based query language and an event subscription and handling framework. Our approach is illustrated and validated, first, by an analysis of development practices inferred from project data of 219 open source projects and, second, by collaboration support utilities (making contextual information available, automating repetitive actions, generating reports on individual contributions) implemented with the CMSPEM process server.
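As a purely hypothetical sketch of what querying such a process server over HTTP/REST and subscribing to events might look like, the Python below uses only the standard library; the base URL, endpoint paths, ports, and payload fields are invented for illustration and are not taken from the CMSPEM tooling.

```python
import json
import urllib.request

# Hypothetical process-server endpoint for one project.
BASE_URL = "http://localhost:8080/processes/demo-project"

def get_json(path):
    """GET a resource from the process server and decode the JSON body."""
    with urllib.request.urlopen(BASE_URL + path) as response:
        return json.load(response)

def post_json(path, payload):
    """POST a JSON payload (e.g. an event subscription) to the server."""
    request = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    # Contextual information: who is working on which task right now?
    assignments = get_json("/tasks?state=in_progress&expand=assignee")
    # Event handling: notify a webhook whenever a work product changes.
    subscription = post_json("/subscriptions", {
        "event": "work_product.updated",
        "callback": "http://localhost:9000/hooks/report",
    })
    print(assignments, subscription)
```

The two calls mirror the thesis's two central notions, a query language for process information and an event subscription mechanism, without presuming the actual resource names or payload schema of the real server.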
Nowadays, the electric vehicle (EV) market is undergoing a rapid expansion and has become of great importance for utility companies such as EDF. In order to fulfill its objectives (demand optimization, pricing, etc.), EDF has to extract and analyze heterogeneous data from EVs and charging spots. To tackle this, we used data warehousing (DW) technology as a basis for business processes (BP). To avoid the garbage-in/garbage-out phenomenon, the data had to be formatted and standardized. We chose to rely on an ontology in order to deal with the heterogeneity of the data sources. Because the construction of an ontology can be a slow process, we proposed a modular and incremental construction of the ontology based on bricks. We based our DW on the ontology, which makes its construction an incremental process as well. To upload data to this particular DW, we defined the ETL (Extract, Transform & Load) process at the semantic level. We then designed recurrent BPs with BPMN (Business Process Model & Notation) specifications to extract the knowledge EDF requires. The assembled DW possesses data and BPs that are both described in a semantic context. We implemented our solution on the OntoDB platform, developed at the ISAE-ENSMA Laboratory of Computer Science and Automatic Control for Systems. The solution allowed us to homogeneously manipulate the ontology, the data, and the BPs through the OntoQL language. Furthermore, we added to the platform the capacity to automatically execute any BP described with BPMN. Ultimately, we were able to provide EDF with a tailor-made platform based on declarative elements adapted to their needs.
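The following Python sketch illustrates the general idea of an ETL step performed at the semantic level, mapping heterogeneous source records onto a shared ontology concept before loading; the ontology brick, field mappings, and sample records are invented and do not reflect the OntoDB/OntoQL implementation.

```python
# One ontology "brick": concept name and its expected properties (with units in the names).
CHARGING_SESSION = {"vehicle_id": str, "energy_kwh": float, "start_time": str}

# Source-specific field mappings (extract + transform step); vendors are fictional.
SOURCE_MAPPINGS = {
    "vendor_a": {"vehicle_id": "car", "energy_kwh": "kwh", "start_time": "ts"},
    "vendor_b": {"vehicle_id": "vid", "energy_kwh": "energy", "start_time": "started_at"},
}

def to_concept(source, record):
    """Transform a raw source record into a ChargingSession concept instance."""
    mapping = SOURCE_MAPPINGS[source]
    return {
        prop: expected_type(record[mapping[prop]])
        for prop, expected_type in CHARGING_SESSION.items()
    }

warehouse = []  # stands in for the ontology-based data warehouse (load step)

warehouse.append(to_concept("vendor_a", {"car": "EV-42", "kwh": "11.5", "ts": "2015-06-01T08:00"}))
warehouse.append(to_concept("vendor_b", {"vid": "EV-7", "energy": 22.0, "started_at": "2015-06-01T09:30"}))
print(warehouse)
```

Because every record is normalized against the same concept definition before loading, queries against the warehouse can be written once in ontology terms rather than per data source, which is the point of defining the ETL at the semantic level.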