Building a Flexible Software Factory Using Partial
Domain Specific Models
Jos Warmer, Anneke Kleppe
Ordina SI&D, The Netherlands
University of Twente, The Netherlands
Abstract. This paper describes some experiences in building a software factory by defining multiple small domain specific languages (DSLs) and having multiple small models per DSL. This is in sharp contrast with traditional approaches using monolithic models, e.g. written in UML. In our approach, models behave like source code to a large extent, leading to an easy way to manage the model(s) of large systems.
1 Introduction
A new trend in software development is to use model driven techniques to develop software systems. Domain specific models (DSMs), domain specific languages (DSLs), and the transformations from the DSMs to code need to be carefully designed to make them really usable.
An obvious observation is that one single model (in a single file or single repository) will not suffice for describing a complete application. Such a model would be too large to handle; it would be unreadable and thus not understandable. Although obvious, this is something that has not been acknowledged in the modelling world. Companies that apply model driven development on a large scale are having problems managing models that are sometimes over 100 MB in size. We therefore argue for building smaller, partial models, each of which is part of a complete model. This is much like the way a code file for a class is part of the complete source code for the application. Each partial model may be written in either the same or a different DSL, thus exploiting the fact that a DSL is designed to solve one specific part of a problem as well as possible.
In this paper we show how partial models can be used to build large, complex applications.
We also show the consequences of this approach on the DSL definitions and their accompany-
ing model-to-code transformations. We will also show how managing models for large appli-
cations is simplified by using partial models.
This paper is based on the industrial experience of one of the authors with building the SMART-Microsoft Software Factory at Ordina, a model driven development software factory using the Microsoft DSL Tools. At the time of writing this software factory included four different DSLs. Typically, several dozen DSMs are created in a project that utilizes this software factory. Although the experience was gained using the Microsoft DSL Tools, the approach can be applied in other environments (e.g. Eclipse GMF).
This paper is structured as follows. Section 2 briefly explains the development process when using a model driven software factory. Section 3 introduces the concept of partial models, and section 4 explains our approach to references between partial models. Section 5 explains the different forms of code generation from partial models. Section 6 discusses other views on modelling, and section 7 concludes.
The author is employed in the GRASLAND project funded by the Dutch NWO.
2 The Software Development Process
The traditional development process, not using models, DSLs, or model transformations, can, somewhat simplified, be described as follows:
1. Decide on the architecture of the application.
2. Design the application.
3. Write the code, compile it, and link it.
4. Run the application.
The model driven software factory process, as introduced in [GSCK04], using DSLs and
model transformations, works in a different way. First, the software factory itself is designed
as follows:
1. Decide on the architecture of the application.
2. Design the DSLs for this architecture.
3. Write the transformations for these DSLs.
The system developer does not need to determine the architecture any more, but starts directly with modelling the application:
1. Model the application.
2. Transform the models.
3. Write additional code (if required).
4. Compile and link the code, and run the application.
This process is often done iteratively, meaning that after running the application in step 4 you go back to step 1 and start modelling the next part of the application. The development of the software factory is also done iteratively, but in the context of this paper that is not relevant. Also note that a software factory is more than just a collection of DSLs; however, this paper focuses on the DSL aspect, and moreover on the first part of the process: how to build a collection of DSLs and their transformations.
3 Developing a Flexible Software Factory
The first step when developing a model driven software factory is to determine the architecture of the applications that you are going to build with the software factory. Is it, for instance, a web-based, administrative application or is it a process control system? The answer to this question determines the architecture. From the architecture we derive which DSLs are to be defined for modelling the application.
The SMART-Microsoft Software Factory targets web-based, administrative applications, of which the architecture is shown in Figure 1. Based on this architecture we have defined four different DSLs, each of which corresponds to a part of the architecture. We recognise the following domains: the Web Scenario DSL for the Presentation layer, the Business Class DSL for the Business classes, the Services DSL for the Service Interface and Business Processes, and the Data Contract DSL for the Data Contract. There is no DSL corresponding to the Data layer, because this layer is completely generated from the Business Class DSL. A developer who wants to build a complete system will use all DSLs together.
The different DSLs are mostly independent; therefore it is possible to use only a subset of the DSLs provided. We can also combine the DSLs in different ways. For example, we are planning to develop a DSL for building Windows user interfaces, which can then be used instead of the current Web Scenario DSL. This allows us to evolve the software factory flexibly.
3.1 Goals for Domain Specific Languages
1. A model is always executable in the sense that every aspect of a model is used to generate code. We do not consider models used purely for design or documentation; these can be built effectively with UML or tools like Visio.
2. A concept only becomes part of a DSL if it is easier or less work to model it than to code it. This keeps the number of concepts small and ensures that the DSL is very productive.
3. Models (or rather, the code generated from the models) are meant to be extended by code.
3.2 Introducing Partial Models
When using the software factory to build an application, a developer determines the number and kind of DSMs that need to be built. One possibility, which we have seen used in several places, is to create one DSM per DSL. This would mean that we have four DSMs for each application. However, for a large application this still does not scale. For instance, if we have one DSM for the complete Web Scenario domain, this will become an incredibly large model for any real application. The model would contain many Web Scenario elements, each of which consists of a set of Web Pages and Actions. A model of such size is not readable, and certainly not understandable.
Working with one large model also introduces many practical problems relating to managing such a model in a multi-user environment. Experience with traditional UML tools has taught us that this problem has not been solved by any of them. Even when a tool allows multiple persons to work simultaneously on a model, the model must usually be divided beforehand into non-overlapping slices, and the developers must be very disciplined while working with the tools.
Fig. 1 The Web Application Service Architecture
The solution to this problem that we present here is to use multiple DSMs per DSL. We call these models partial models, because they do not represent the complete system. Each partial DSM is stand-alone and can be used in isolation. In the case of the Web Scenario DSL, the DSL has been designed such that each DSM contains no more than one Web Scenario. If an application needs, say, twenty Web Scenarios, twenty Web Scenario DSMs will be created. As a direct consequence of this choice, each partial DSM has some unique and useful properties:
- One partial DSM can be created and edited stand-alone by one user.
- The partial DSM is the unit of version control, and when the DSM is stored in a file, ordinary version control systems provide ample possibilities for version control.
Our approach fits very well with the structure of the Microsoft DSL Tools that we have been using, in which one model is stored in one file. Also, in the Microsoft DSL Tools one model is identical to one diagram, and should therefore remain small. In the remainder of this paper all DSMs are partial models; the DSLs are designed to support this.
4 Combining Partial DSMs using References
Allowing partial DSMs has direct consequences for the way that a DSL is defined. One such
consequence is that we need a way to define references between DSMs. This section describes
the ins and outs of references.
4.1 References between Partial DSMs
A model element from a partial DSM may be referenced in another partial DSM, just like classes and their operations may be referenced in a code file. To ensure that a DSM remains a stand-alone artifact, references are always modelled explicitly and are always by name. There are no hard links between different DSMs; otherwise we would end up with one monolithic model again. To accommodate this we have introduced, in each of our DSLs, a metatype Reference to ModelElement for each model element that we want to refer to. This metaclass may be subclassed to create a reference to a particular type of model element. Thus, a model element in a DSM may be of type Reference to BusinessClassDto, holding the name (or path) of a business class in another DSM.
References may link DSMs written in the same DSL, e.g. a Reference to WebScenario in a Web Scenario DSM, or they may link DSMs written in different DSLs, e.g. a Reference to BusinessClassDto in a Web Scenario DSM that refers to a model element in a Data Contract DSM. An example of the first can be found in DSM 1 in Figure 2; an example of the second can be found in DSM 2.
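The by-name reference mechanism described above can be sketched in a few lines. The following Python sketch is purely illustrative: all class names (PartialDSM, ReferenceToBusinessClassDto, and so on) are invented for this example and are not the factory's actual implementation, which is built on the Microsoft DSL Tools metamodel.

```python
# Illustrative sketch: references between partial DSMs hold only a name,
# never an object link, so each DSM file remains a stand-alone artifact.

class ModelElement:
    def __init__(self, name):
        self.name = name

class BusinessClassDto(ModelElement):
    pass

class Reference:
    """Metatype for by-name references; subclassed per target type."""
    target_type = ModelElement

    def __init__(self, name):
        self.name = name  # only the name (or path) of the target is stored

class ReferenceToBusinessClassDto(Reference):
    target_type = BusinessClassDto

class PartialDSM:
    """A stand-alone partial model: its own elements plus outgoing references."""
    def __init__(self, name):
        self.name = name
        self.elements = {}    # name -> ModelElement owned by this DSM
        self.references = []  # outgoing by-name Reference objects

    def add(self, element):
        self.elements[element.name] = element

    def refer_to(self, reference):
        self.references.append(reference)

# A Data Contract DSM owns the element; a Web Scenario DSM refers to it by name.
data_contract = PartialDSM("DataContractDSM")
data_contract.add(BusinessClassDto("CustomerDto"))

web_scenario = PartialDSM("WebScenarioDSM")
web_scenario.refer_to(ReferenceToBusinessClassDto("CustomerDto"))
```

Because a reference carries only a name, the two DSMs can be edited, versioned, and checked in independently; resolving the name is deferred to a separate validation step.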
4.2 Checking References
In a complete application the references within the DSMs should all be valid; e.g. the referred WebScenario in Figure 2 must be defined in another DSM. For this purpose we have developed inter-DSM validation support. With one button, a user can do a cross-check on all references to check whether the referred elements exist. This validation is based on a small run-time component, which is populated from the DSMs in the developer's workspace. This component is similar to a symbol table in a compiler and holds only the minimum information needed for validation purposes.
Note that a single DSM is still valid if a reference does not exist, but the collection of DSMs
is not complete. The DSM with the unresolved reference can still be edited, checked in, and its
model elements can be referred to by other DSMs, etc.
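A minimal sketch of such a validation component follows; the DSM representation (plain dicts) and the names SymbolTable, populate, and validate are assumptions made for this illustration, not the factory's API.

```python
# Sketch of the inter-DSM validation component: a registry, populated from
# all DSMs in the workspace, that acts like a compiler's symbol table and
# reports dangling by-name references.

class SymbolTable:
    """Holds only the minimum needed for validation: definitions and uses."""
    def __init__(self):
        self.defined = set()  # names of elements defined in any DSM
        self.uses = []        # (source DSM name, referred name) pairs

    def populate(self, dsms):
        for dsm in dsms:
            self.defined.update(dsm["elements"])
            for ref in dsm["references"]:
                self.uses.append((dsm["name"], ref))

    def validate(self):
        """The 'one button' cross-check: return all dangling references."""
        return [(src, ref) for src, ref in self.uses
                if ref not in self.defined]

# Two partial DSMs, represented here as plain dicts for brevity.
dsm1 = {"name": "DataContract", "elements": {"CustomerDto"},
        "references": []}
dsm2 = {"name": "WebScenario", "elements": {"OrderScenario"},
        "references": ["CustomerDto", "PaymentScenario"]}

table = SymbolTable()
table.populate([dsm1, dsm2])
dangling = table.validate()  # PaymentScenario is defined nowhere
```

Note that dsm2 is still a valid stand-alone model even though one of its references dangles; only the cross-DSM check reports the incompleteness, matching the behaviour described above.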
4.3 Dealing with Changes in References
A change in the name of a referred model element is allowed, but will leave existing references dangling. This is an inherent feature, following directly from the way the DSLs are designed. Tool support for coping with this kind of change is not within the scope of language definition; instead it should be provided by the IDE. There are various options for dealing with dangling references:
- No support: the inter-DSM validation will result in an error message and the developer has to "repair" the dangling reference.
- Refactoring support: the user may explicitly perform a name change of the referred model element as a refactoring. The IDE will find all references and change them to refer to the new name.
- Automatic support: when the user changes the name of a referred element, all references will change automatically.
Having no support at all does work, but is cumbersome. Automatic support has the problem that the developer does not know where automatic changes take place and might therefore encounter unexpected results. In the Plato model driven environment that we have built in the past, we found that automatic changes also result in the need to re-test the whole system, because the changes were not controlled.
Fig. 2 Example of references between partial models
The best option seems to be refactoring support. Note that in this case renaming a model element works exactly like renaming a class in C# or Java code. Either the user changes the name, which results in dangling references, or the user requests an explicit refactoring and is offered the possibility to review the resulting changes and apply them. In the SMART-Microsoft Software Factory we have chosen the option of explicit refactoring. The run-time component for cross-reference validation holds all the information needed to execute this.
In both automatic and refactoring support the following problem may occur. Especially in large projects, there will be many dozens of DSMs, and each DSM can be edited simultaneously by a different user. To allow for automatic change propagation or refactoring, the user performing the change needs to have change control over all affected DSMs. Because we do not have a model merging feature available in the Microsoft DSL Tools, this problem cannot currently be solved.
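The explicit refactoring option can be sketched on top of the same by-name bookkeeping. This is again an illustrative Python sketch: the dict-based DSM representation and the helper names find_affected and rename are assumptions, and a real implementation would also check that the user has change control over every affected DSM file before applying the rename.

```python
# Sketch of explicit rename refactoring across partial DSMs: first report
# which DSMs are affected (the review step), then rewrite the definition
# and every by-name reference to it.

def find_affected(dsms, old_name):
    """Report which DSMs hold references to old_name, for user review."""
    return [dsm["name"] for dsm in dsms if old_name in dsm["references"]]

def rename(dsms, old_name, new_name):
    """Apply the refactoring: update the definition and all references."""
    for dsm in dsms:
        if old_name in dsm["elements"]:
            dsm["elements"].remove(old_name)
            dsm["elements"].add(new_name)
        dsm["references"] = [new_name if r == old_name else r
                             for r in dsm["references"]]

dsm1 = {"name": "DataContract", "elements": {"CustomerDto"},
        "references": []}
dsm2 = {"name": "WebScenario", "elements": set(),
        "references": ["CustomerDto"]}

affected = find_affected([dsm1, dsm2], "CustomerDto")
rename([dsm1, dsm2], "CustomerDto", "ClientDto")
```

The review step is what distinguishes refactoring support from automatic support: the user sees the affected DSMs before any reference is rewritten.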
5 Code Generation
In this section we explain different forms of code generation from partial models. As our mod-
els are meant to generate code, this is an essential part of the DSL definition. We do not use
our models for documentation purposes only.
5.1 Different Types of Generation
In model driven development [MSUW04, Fra03, KWB03] multiple layers of abstraction are
used. Inhabitants of the lowest layer are called code, inhabitants of all layers above the lowest
are called models. There is no restriction on the number of layers that may be used, as shown
in [GSCK04].
The definition of a DSL includes the code generation for that DSL. Interestingly, it is also possible to generate another model instead of code, thus making use of multiple layers of abstraction. For DSLs defined at a higher level of abstraction, it is often easier to generate a lower level DSM than code, because the generation target itself is at a higher level of abstraction. Therefore we distinguish the following two types of generation.
DSM to Code Generation. The first type of generation is code generation from a DSM. This
is the most common way of generation. Template languages like T4 for Visual Studio or JET
and Velocity for Java are often used for this purpose.
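The template-expansion idea behind languages like T4 can be illustrated with plain string templates. The sketch below is a generic, simplified illustration in Python; the template text and the business-class element shape are invented for the example and are not the factory's actual T4 templates.

```python
# Illustrative DSM-to-code generation in the style of template languages:
# each model element is expanded through a text template into C# source.

CLASS_TEMPLATE = """\
public partial class {name}
{{
{fields}
}}
"""

FIELD_TEMPLATE = "    public {type} {name} {{ get; set; }}"

def generate_class(element):
    """Expand one business-class model element into C# source text."""
    fields = "\n".join(
        FIELD_TEMPLATE.format(type=f["type"], name=f["name"])
        for f in element["fields"])
    return CLASS_TEMPLATE.format(name=element["name"], fields=fields)

# A model element as it might come out of a Business Class DSM.
customer = {"name": "Customer",
            "fields": [{"type": "string", "name": "Name"},
                       {"type": "int", "name": "Age"}]}
code = generate_class(customer)
```

Generating the class as `partial` is what later allows handwritten extensions to live in a separate file, as discussed in section 5.3.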
DSM to DSM Generation. The second type of generation is to generate another model from a DSM. This is possible when a DSM can be completely derived from another (higher level) DSM. Often the generated DSM takes the form of a partial model. The developer can add (by hand) other partial DSMs that refer to the generated DSM, thus extending or completing the generated model.
5.2 Linking Partial Models or Linking Partial Code?
Another distinction that needs to be made is the moment when the partial descriptions of the
application are brought together. There are two possibilities:
1. Link all partial models together to form the complete model. Transform the complete
model into code.
2. Transform a single partial model into (partial) code. Link the generated code.
Within the SMART-Microsoft Software Factory we have chosen option 2. The code is generated immediately (and automatically) whenever a DSM is saved. Our experience is that generating everything in one step from a complete model can become very time consuming, resulting in long waits before the code generation process concludes. Using option 2, we can perform the transformation process incrementally, regenerating only those parts that have been changed. Option 2 also fits much better with our philosophy that we never need a complete model at any point in time.
However, option 2 is not always feasible. When the information in two or more partial models needs to be transformed into a single code file, only option 1 will suffice. In our case we generate C# code, which offers the notion of partial classes; these are used extensively in the SMART-Microsoft Software Factory.
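The incremental behaviour of option 2 can be sketched as a save hook that regenerates only when a DSM's content has actually changed. The content-hashing scheme and the class name IncrementalGenerator are assumptions made for this sketch; the factory itself hooks into the save event of the Microsoft DSL Tools.

```python
# Sketch of per-DSM incremental generation: each partial DSM is transformed
# to its own code artifact, and only DSMs whose content changed since the
# last run are regenerated.

import hashlib

class IncrementalGenerator:
    def __init__(self, transform):
        self.transform = transform  # function: DSM text -> code text
        self.seen = {}              # DSM name -> content hash of last run
        self.output = {}            # DSM name -> generated code

    def on_save(self, name, content):
        """Called whenever a DSM is saved; regenerates only on real change."""
        digest = hashlib.sha256(content.encode()).hexdigest()
        if self.seen.get(name) == digest:
            return False            # unchanged: skip regeneration
        self.seen[name] = digest
        self.output[name] = self.transform(content)
        return True

gen = IncrementalGenerator(lambda dsm: f"// generated from: {dsm}")
first = gen.on_save("OrderScenario", "scenario v1")   # generated
second = gen.on_save("OrderScenario", "scenario v1")  # skipped, no change
third = gen.on_save("OrderScenario", "scenario v2")   # regenerated
```

Because each DSM maps to its own output artifact, a change to one partial model never forces the whole system to be regenerated.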
5.3 Regeneration and Manual Additions
When something, either code or another DSM, is generated from a DSM, we need to be able to perform at least two actions:
- Regenerate whenever the source DSM changes.
- Manually add something to the generated code or generated DSM.
Moreover, we must at any time be able to use the two options independently. That is, when we have manually added code or DSMs, we must still be able to regenerate the generated parts while maintaining the handwritten code or DSMs. This is implemented as follows.
Regeneration of Code. When we generate C# code, partial classes and abstract or virtual methods are used to enable the user to add code without touching the generated file. This allows full regeneration without disturbing the handwritten code. For other types of artifacts the situation is more complex: often handwritten code has to be in the same file as the generated code (e.g. in XML configuration files). The handwritten code is then marked as a guarded block and retained when the file is regenerated.
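The guarded-block mechanism can be sketched as follows. The marker syntax and function names below are illustrative assumptions; real template engines use their own guard-region markers.

```python
# Sketch of guarded blocks for files that must mix generated and handwritten
# text (e.g. XML configuration): handwritten regions between guard markers
# are extracted from the old file and re-inserted after regeneration.

BEGIN, END = "<!-- BEGIN GUARDED -->", "<!-- END GUARDED -->"

def extract_guarded(text):
    """Collect the handwritten guarded regions of the current file, in order."""
    regions, rest = [], text
    while BEGIN in rest:
        _, rest = rest.split(BEGIN, 1)
        body, rest = rest.split(END, 1)
        regions.append(body)
    return regions

def regenerate(generated_parts, old_text):
    """Rebuild the file from freshly generated parts, re-inserting each
    previously handwritten guarded region between consecutive parts."""
    guarded = extract_guarded(old_text)
    guarded += [""] * (len(generated_parts) - 1 - len(guarded))
    out = [generated_parts[0]]
    for part, body in zip(generated_parts[1:], guarded):
        out.append(BEGIN + body + END)
        out.append(part)
    return "".join(out)

old = "<config>" + BEGIN + "<custom/>" + END + "</config>"
new = regenerate(["<config v2>", "</config>"], old)
# the handwritten <custom/> survives regeneration of the surrounding file
```

This is the inverse trade-off of C# partial classes: instead of keeping handwritten code in a separate file, the generator carries it across regenerations of a shared file.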
Regeneration of DSM. When we generate a DSM from a higher level DSM, we use the same approach as when generating C# code. One partial DSM (in one file) is generated, and this file remains untouched by developers. Handwritten additions must be modelled in separate partial DSMs. Reference elements (see section 4.1) may be used in the handwritten DSM to refer to model elements in the generated DSM.
6 Other Views on Modeling
In the UML world the term model is often used rather loosely, both for a diagram and for the underlying collection of interrelated model elements. However, according to the UML language definition there can be only one model, consisting of multiple diagrams, and each diagram is a specific view on part of the underlying model. The UML offers no way to define references between different models; it assumes that you always work with one (large) model.
In the agile modelling community there is a tendency to create small models. However, these models are typically used for documentation and design, and are rarely used for code generation. The difference between this type of model and the ones presented here is that agile models usually cannot be processed automatically to generate working code. Although human readers might recognise references between agile models, tools will not. Also, what is considered to be a set of small models is often in fact a set of diagrams in one UML model.
The partial DSM as described in this paper always constitutes an executable model. Within
a software development project these models have exactly the same status as source code.
They are the source from which the final system is generated. Before completely building the
system all references in both the source code and the (partial) models must be resolved.
7 Conclusion
We have described the development of a model driven software factory using multiple DSLs. The approach takes a non-traditional view of modelling. Instead of having one monolithic model we use many smaller models of many different types. These models are called partial models or partial DSMs. In our approach one partial DSM has the same characteristics as one source code file, which clarifies many things and leads to a different way of thinking about models. The following characteristics of partial DSMs are identical to those of source code files:
- Storing a DSM in a file
- Code generation per DSM
- Version control per DSM
- References between DSMs always by name
- Refactoring in the IDE if requested by the user
- IntelliSense / code completion for Reference elements
While building the SMART-Microsoft Software Factory, we found more and more advantages of our approach. Although not discussed in this paper, each DSL can be used in isolation from the other DSLs. This opens up possibilities to use a subset of the DSLs whenever applicable or, for example, to replace one DSL by another in the software factory. We also see opportunities to reuse both DSLs and DSMs in a relatively easy way.
We view building a software factory using DSLs as an approach to MDA. Although MDA is often related to UML, this is not mandatory; using DSLs fits into this approach as well. We also deal with model-to-model transformations, although we have no fixed number of levels like the PIM, PSM, and Code levels in MDA.
The SMART-Microsoft Software Factory also has strong connections with the idea of developing product lines [CE00]. The software factory is a product line for administrative web applications conforming to a defined architecture. We have ensured flexibility for building other product lines by keeping the DSLs as independent entities. Apart from the DSLs, a software factory also includes other elements, such as a specialized process. This paper focuses on the DSL aspect only.
References
[CE00] Krzysztof Czarnecki and Ulrich W. Eisenecker. Generative Programming: Methods, Tools, and Applications. ACM Press/Addison-Wesley, New York, NY, USA, 2000.
[Fra03] David Frankel. Model Driven Architecture: Applying MDA to Enterprise Computing. John Wiley & Sons, 2003.
[GSCK04] Jack Greenfield, Keith Short, Steve Cook, and Stuart Kent. Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. John Wiley & Sons, 2004.
[KWB03] Anneke G. Kleppe, Jos Warmer, and Wim Bast. MDA Explained: The Model Driven Architecture: Practice and Promise. Addison-Wesley Longman, Boston, MA, USA, 2003.
[MSUW04] Stephen J. Mellor, Kendall Scott, Axel Uhl, and Dirk Weise. MDA Distilled: Principles of Model-Driven Architecture. Addison-Wesley, 2004.
[SMART06] SMART-Microsoft website.
... With the progressive adoption of MDE techniques in the industry [5], [6], existing tools have to increasingly deal with large models, and the scalability of existing technical solutions to store, edit collaboratively, transform, and query models has become a major issue [7], [8]. Large models typically appear in various engineering fields, such as civil engineering [9], automotive industry [10], product lines [11], and can be generated in model-driven reverse engineering processes [12], such as software modernization. ...
... During this step, helper functions that compute the results of these OCL operations are also generated. The elements created in the different steps of the transformation are then merged (8) inside the GremlinScript to produce the output Gremlin Traversal Model (9). ...
Conference Paper
Full-text available
While Model Driven Engineering is gaining more industrial interest, scalability issues when managing large models have become a major problem in current modeling frameworks. Scalable model persistence has been achieved by using NoSQL backends for model storage, but existing modeling framework APIs have not evolved accordingly, limiting NoSQL query performance benefits. In this paper we present the Mogwa¨ıMogwa¨ı, a scalable and efficient model query framework based on a direct translation of OCL queries to Gremlin, a query language supported by several NoSQL databases. Generated Gremlin expressions are computed inside the database itself, bypassing limitations of existing framework APIs and improving overall performance, as confirmed by our experimental results showing an improvement of execution time up to a factor of 20 and a reduction of the memory overhead up to a factor of 75 for large models.
... 2) Linkage Model: As discussed in [16], loosely coupled models are easier to understand and reuse than a single monolithic model. To achieve loose coupling, the linkage model approach can be used. ...
Conference Paper
The influence of Internet of Things (IoT) and connected service-oriented systems in various application domains is increasing. Such multi-domain systems can work in collaboration to provide new functionalities. System development of different domains requires different specific tools that often lack common exchange interfaces. Common exchange interfaces between technical domains are necessary for a holistic architectural design that enables system-wide analysis, e.g. cyber-security. We propose a multi-domain metamodeling framework to create, extend and reuse metamodels of different technical domains in order to generate modeling tools for specific applications. The generated modeling tool enables analysis through all integrated domains. With loosely coupled metamodels, we are able to improve the reusability of the metamodels and manage the explicit connections using the weaving-model. This work also presents three use cases of application specific tools that can be built using the proposed framework: security analysis for industry 4.0 systems, multicore based safety systems and secure development of autonomous driving functions. The framework and a graphical editor to manage metamodel connections are implemented and used to generate modeling tools for the first two use cases. Our results show that a multi-domain system can be described precisely and analyzed across different domains with the specific generated tool. Index Terms-model based systems engineering, model driven engineering, multi-domain metamodeling
... Given that model-driven engineering (MDE) is progressively adopted in the industry [17,23], we believe that the support of prefetching and caching techniques at the modeling level is required to raise the scalability of MDE tools dealing with large models where storing, editing, transforming, and querying operations are major issues [21,32]. These large models typically appear in various engineering fields, such as civil engineering [1], automotive industry [4], product lines [26], and in software maintenance and evolution tasks such as reverse engineering [5]. ...
Full-text available
Caching and prefetching techniques have been used for decades in database engines and file systems to improve the performance of I/O-intensive application. A prefetching algorithm typically benefits from the system’s latencies by loading into main memory elements that will be needed in the future, speeding up data access. While these solutions can bring a significant improvement in terms of execution time, prefetching rules are often defined at the data level, making them hard to understand, maintain, and optimize. In addition, low-level prefetching and caching components are difficult to align with scalable model persistence frameworks because they are unaware of potential optimizations relying on the analysis of metamodel-level information and are less present in NoSQL databases, a common solution to store large models. To overcome this situation, we propose PrefetchML, a framework that executes prefetching and caching strategies over models. Our solution embeds a DSL to configure precisely the prefetching rules to follow and a monitoring component providing insights on how the prefetching execution is working to help designers optimize his performance plans. Our experiments show that PrefetchML is a suitable solution to improve query execution time on top of scalable model persistence frameworks. Tool support is fully available online as an open-source Eclipse plug-in.
... With the progressive adoption of MDE techniques in industry [10], existing model persistence solutions have to address scalability issues to store, query, and transform large and complex models [13]. Indeed, existing modeling frameworks were first designed to handle simple modeling activities, and often rely on XMI-based serialization to store models. ...
Conference Paper
Full-text available
The growing use of Model Driven Engineering (MDE) techniques in industry has emphasized scalability of existing model persistence solutions as a major issue. Specifically, there is a need to store, query, and transform very large models in an efficient way. Several persistence solutions based on relational and NoSQL databases have been proposed to achieve scalability. However, existing solutions often rely on a single data store, which suits a specific modeling activity, but may not be optimized for other use cases. In this article we present NEOEMF, a multi-database model persistence framework able to store very large models in key-value stores, graph databases, and wide column databases. We introduce NEOEMF core features, and present the different data stores and their applications. NEOEMF is open source and available online.
... Currently, there is lack of support for prefetching and caching at the model level. Given that model-driven engineering (MDE) is progressively adopted in the industry [15,21] such support is required to raise the scalability of MDE tools dealing with large models where storing, editing, transforming, and querying operations are major issues [19,28]. These large models typically appear in various engineering fields, such as civil engineering [1], automotive industry [4], product lines [24], and in software maintenance and evolution tasks such as reverse engineering [5]. ...
Conference Paper
Full-text available
Prefetching and caching are well-known techniques integrated in database engines and file systems in order to speed-up data access. They have been studied for decades and have proven their efficiency to improve the performance of I/O intensive applications. Existing solutions do not fit well with scalable model persistence frameworks because the prefetcher operates at the data level, ignoring potential optimizations based on the information available at the metamodel level. Furthermore, prefetching components are common in rela-tional databases but typically missing (or rather limited) in NoSQL databases, a common option for model storage nowadays. To overcome this situation we propose PrefetchML, a framework that executes prefetching and caching strategies over models. Our solution embeds a DSL to precisely configure the prefetching rules to follow. Our experiments show that PrefetchML provides a significant execution time speedup. Tool support is fully available online.
... The approach described in [43] supports the concept of partial classes for generated object-oriented code and protected regions for code that does not support this mechanism. Additionally, Brückmann et al. [44] advocate patterns such as delegation to incorporate manually written code in generated parts. ...
Conference Paper
Full-text available
In many development projects models are core artifacts used to generate concrete implementations from them. However, for many systems it is impossible or not useful to generate the complete software system from models alone. Hence, developers need mechanisms for integrating generated and handwritten code. Applying such mechanisms without considering their effects can cause issues in projects, where model and code artifacts are essential. Thus, a sound approach for the integration of both forms of code is needed. In this paper, we provide an overview of mechanisms for integrating handwritten and generated object-oriented code. To compare these mechanisms, we define and apply a set of criteria. The results are intended to help model-driven development (MDD) tool developers in choosing an appropriate integration mechanism. In this extended version, we additionally discuss essential integration aspects including the protection of generated code and elaborate on how to use action languages to extend generated code.
... The approach described in (Warmer and Kleppe, 2006) supports the concept of partial classes for generated object-oriented code and protected regions for code that does not support this mechanism. Additionally, Brückmann et al. (Brückmann and Gruhn, 2010) advocate patterns such as delegation to incorporate manually written code in generated parts. ...
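The integration mechanisms named in this excerpt (partial classes, protected regions, delegation) all separate generated from handwritten code. One common variant is the generation gap idiom, sketched below with hypothetical class names: the generator owns a base class it may regenerate at will, while handwritten logic lives in a subclass the generator never touches.

```python
# --- "generated" part: rewritten on every generation run ---
class InvoiceBase:
    """Generated base class; safe to regenerate, never edited by hand."""

    def __init__(self, amount_cents: int) -> None:
        self._amount_cents = amount_cents

    @property
    def amount_cents(self) -> int:
        return self._amount_cents

    def taxed_amount_cents(self) -> int:
        # Generated hook: meant to be overridden in the handwritten subclass.
        raise NotImplementedError


# --- "handwritten" part: written once, survives regeneration ---
class Invoice(InvoiceBase):
    def taxed_amount_cents(self) -> int:
        # Manually written business rule (illustrative): add 21% VAT.
        return self.amount_cents + self.amount_cents * 21 // 100


print(Invoice(10_000).taxed_amount_cents())  # 12100
```

C#'s partial classes achieve the same separation within a single class split across two files; protected regions instead mark editable islands inside the generated file itself.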
Conference Paper
Code generation from models is a core activity in model-driven development (MDD). For complex systems it is usually impossible to generate the entire software system from models alone. Thus, MDD requires mechanisms for integrating generated and handwritten code. Applying such mechanisms without considering their effects can cause issues in projects with many model and code artifacts, where a sound integration for generated and handwritten code is necessary. We provide an overview of mechanisms for integrating generated and handwritten code for object-oriented languages. In addition to that, we define and apply criteria to compare these mechanisms. The results are intended to help MDD tool developers in choosing an appropriate integration mechanism.
... Sometimes partitioning models (partial models) is proposed as an improvement to maintainability and understanding. This also adds benefits to model management in multi-user environments [11]. ...
Over the last three decades, an increasing number of languages for designing and developing software have been created. Software developers gain benefits from combining multiple programming languages and paradigms in application development; as a result, the so-called language engineering approach has emerged. It involves Domain Specific Languages (DSLs) and automatic code generation. This paper offers a brief review of the use of DSLs as modeling and programming languages and their tight connection with automatic code generation. The evolution of the developed software product requires evolution of the domain-specific language as well. Some of the risks of abandoning DSLs during development are discussed.
Conference Paper
Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has its challenges. Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss of flexibility. We propose an MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that with the approach less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. However, with MDGD, developers become less familiar with the code, making manual coding more difficult.
Conference Paper
In recent years, considerable effort has been put into the area of Programmable Logic Controllers (PLCs) in terms of performance, reliability, availability, communication, and integration of business functions. After this focus on hardware improvements, interest is increasing in software for the PLC project lifecycle (e.g. simulation, deployment, and requirement traceability). For example, the latest update of the IEC 61131-3 standard includes Object Oriented Programming (OOP), with the possibility to use interfaces and inheritance in order to make software components more reusable. Industry 4.0 aims to develop the smart factory of the future by merging automation and digitalization, resulting in more efficient production methods. A major challenge of Industry 4.0 is to industrialize the production of software. The main purpose of this industrialization is to introduce methods and tools that make software controllable and measurable and that reduce production cost. This can be done, for example, by integrating developer best practices (e.g. Good Automated Manufacturing Practices in the pharmaceutical industry) or certification requirements (e.g. IEC 61508). It enables non-developers to better understand and evaluate the quality (e.g. testability, performance and robustness) of PLC programs. Benefits of a more formal PLC development process include, for instance, increasing the CMMI maturity level, specifying the required quality level depending on the certification area (e.g. PESSRAL for lift control, EN 50128 for railway applications), and continuously comparing the intrinsic quality. In this paper, we present the architecture of a software factory for PLC programs that allows automatic synchronization between documentation and code, non-regression test reporting, and software requirements traceability. We also show how this can be used by PLC programmers and stakeholders to connect quality, productivity and efficiency during the whole PLC project lifecycle.
Conference Paper
The confluence of component based development, model driven development and software product lines forms an approach to application development based on the concept of software factories. This approach promises greater gains in productivity and predictability than those produced by incremental improvements to the current paradigm of object orientation, which have not kept pace with innovation in platform technology. Software factories promise to make application assembly more cost effective through systematic reuse, enabling the formation of supply chains and opening the door to mass customization.
From the Book: For many years, the three of us have been developing software using object oriented techniques. We started with object oriented programming languages, like C++, Smalltalk, and Eiffel. Soon we felt the need to describe our software at a higher level of abstraction. Even before the first object oriented analysis and design methods, like Coad/Yourdon and OMT, were published, we used our own invented bubbles-and-arrows diagrams. This naturally led to questions like "What does this arrow mean?" and "What is the difference between this circle and that rectangle?". We therefore rapidly decided to use the newly emerging methods to design and describe our software. Over the years we found that we were spending more time on designing our models than on writing code. The models helped us to cope with larger and more complex systems. Having a good model of the software available made the process of writing code easier and in many cases even straightforward.

In 1997 some of us got involved in defining the first standard for object oriented modeling, called UML. This was a major milestone that stimulated the use of modeling in the software industry. When the OMG launched its initiative on Model Driven Architecture we felt that this was logically the next step to take. People try to get more and more value from their high level models, and the MDA approach supports these efforts.

At that moment we realized that all these years we had naturally walked the path towards model driven development. Every bit of wisdom we acquired during our struggle with the systems we had to build fitted in with this new idea of how to build software. It caused a feeling similar to an aha moment: "Yes, this is it," the same feeling we had years before when we first encountered the object-oriented way of thinking, and again when we first read the GOF book on design patterns. We feel that MDA could very well be the next major step forward in the way software is being developed. MDA brings the focus of software development to a higher level of abstraction, thereby raising the level of maturity of the IT industry.

We are aware of the fact that the grand vision of MDA, which Richard Soley, the president of the OMG, presents so eloquently, is not yet a reality. However, some parts of MDA can already be used today, while others are under development. With this book we want to give you insight into what MDA means and what you can achieve, both today and in the future.

Anneke Kleppe, Jos Warmer, and Wim Bast
Soest, the Netherlands
January 2003