Figure 7 - uploaded by Jeroen Arnoldus
Source publication
This thesis discusses the notion of Software Templates and their formal inner working mechanism. It explains how grammars can be used to guarantee the syntactic correctness of the output of a template engine and how syntax errors can be found before a template is used. The thesis also shows that the metalanguage of templates does not need to b...
Citations
... The end-user can describe what the generated software needs to contain through various formal languages, such as JSON, XML, YAML, or any other custom data-formatting language. Code generators are "programs that generate other programs" [14] and are a subclass of meta-programs: software components that parse or manipulate other programs (e.g., compilers). ...
... A template is defined in [13] as a generic representation of the output that it wishes to describe (source code). Templates are alternatively defined in [14] as "a non-empty sequence of text fragments and placeholders (including meta-code)". The text is considered the fixed (static) part and is copied identically into the generated files, while the placeholders represent an (as yet) uncompleted part of the text. ...
... The static parts are unaltered source code fragments (text), while the dynamic parts are made up of aggregates of "meta-code" [14]: syntactic placeholders that indicate the existence of text that is to be replaced. This meta-code facilitates the dynamic generation of output in the form of the desired text, since it contains actions and expressions that declare how to replace the placeholders (conditional replacements, execution loops, textual transformations, and other operations). ...
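To make the quoted definition concrete, here is a minimal sketch of a template as a sequence of text fragments and placeholders, where static text is copied verbatim and each placeholder is filled from a data model. The names (Text, Placeholder, render) are illustrative and do not correspond to any cited implementation.

```python
# Minimal sketch of a template as a sequence of fragments and placeholders.
# Names are illustrative, not the API of any cited template engine.
from dataclasses import dataclass

@dataclass
class Text:            # static part: copied verbatim into the output
    value: str

@dataclass
class Placeholder:     # dynamic part: replaced by a value from the model
    name: str

def render(template, model):
    """Copy static fragments unchanged; replace placeholders from the model."""
    out = []
    for part in template:
        if isinstance(part, Text):
            out.append(part.value)
        else:
            out.append(str(model[part.name]))
    return "".join(out)

# "class <placeholder>State { }" expressed as fragments and a placeholder
template = [Text("class "), Placeholder("state"), Text("State { }")]
print(render(template, {"state": "Locked"}))   # -> class LockedState { }
```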
... Arnoldus deals with syntax-safe templates in his PhD thesis [1], which constitute a language obtained by augmenting a specific notation for string templates with the grammar of the target language, providing a proof of concept for the aforementioned paper by Wachsmuth. Using a parser for that grammar, the static parts of an interconnected set of string templates can be verified to be syntactically correct fragments of the output language. ...
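The verification idea can be roughly illustrated as follows, using Python's own parser as a stand-in for a target-language grammar: each placeholder is replaced by a dummy phrase of its declared syntactic category and the result is parsed, so a syntax error in the static parts surfaces before any model is translated. This is only an approximation of the grammar-augmentation approach described in the thesis, and all names and categories are illustrative.

```python
# Rough illustration of syntax-safe templates, using Python's grammar
# (via the ast module) as a stand-in target language.
import ast

# Dummy phrase per syntactic category (illustrative categories only).
DUMMY = {"identifier": "x", "expression": "0", "statement": "pass"}

def check_static_syntax(template_text, placeholder_categories):
    """Return None if the static parts parse, else the SyntaxError."""
    probe = template_text
    for name, category in placeholder_categories.items():
        probe = probe.replace("<:" + name + ":>", DUMMY[category])
    try:
        ast.parse(probe)
        return None
    except SyntaxError as err:
        return err

# A template with a broken static part (missing ':' after the def header).
tmpl = "def <:fname:>()\n    return <:result:>\n"
err = check_static_syntax(tmpl, {"fname": "identifier", "result": "expression"})
print(err)   # reports the syntax error found in the static text
```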
... Theorem 1 of the previous section also shows that, even though arbitrary problem instances are undecidable, we can guarantee the syntactic correctness of the generated language by restricting the allowed set of string templates to subsets and language-invariant transformations of the target language's associated STS (see corollary 2). This provides a formal explanation for the results in Wachsmuth's paper [9] and their subsequent adaptation as part of Arnoldus' PhD thesis [1]. ...
... It remains to be seen whether the proposed restriction of allowing typed dynamic expressions only at specific points in string templates will be acceptable to programmers. If not, an easy workaround for programmers would be a dynamic cast from an unrestricted string into a string with a compatible regular expression, restoring the test-based verification outlined by Arnoldus [1]. ...
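Such a dynamic cast could, for instance, look like the hypothetical helper below, which accepts an unrestricted string only if it matches the regular expression associated with the placeholder's type. The pattern and names are illustrative, not taken from any cited implementation.

```python
# Hypothetical sketch of a dynamic "cast" from an unrestricted string to a
# string constrained by a regular expression (here, roughly the lexical
# shape of a Java identifier). Names and pattern are illustrative.
import re

JAVA_IDENTIFIER = re.compile(r"[A-Za-z_$][A-Za-z0-9_$]*")

def cast_to(pattern, value):
    """Accept the value only if it matches the pattern; otherwise fail early."""
    if pattern.fullmatch(value) is None:
        raise ValueError(f"{value!r} is not accepted by {pattern.pattern}")
    return value

print(cast_to(JAVA_IDENTIFIER, "LockedState"))   # ok
# cast_to(JAVA_IDENTIFIER, "3 + 4")              # would raise ValueError
```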
Many applications of model-based techniques ultimately require a model-to-text transformation to make practical use of the information encoded in meta-model instances. This step requires a code generator that has to be validated in order to ensure that the translation doesn’t alter the semantics of the model. Validation is often test-based, i.e. the code generator is executed on a wide range of inputs in order to verify the correctness of its output. Unfortunately, tests generally only prove the presence of errors, not their absence. This paper identifies the common core of string template implementations that are often used in the description of code generators, deriving a formal model that is suitable for mathematical reasoning. We provide a formal proof of the equivalence in expressiveness between string templates and context free grammars, thereby allowing the application of formal results from language theory. From there, we derive a scheme that would allow the verification of syntactical correctness for generated code before the translation of any model-instance is attempted, at the expense of freedom in the variability of the description.
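The claimed correspondence can be illustrated informally: a set of named templates whose placeholders refer to other templates reads like a context-free grammar, with template names acting as nonterminals and static text as terminal strings. The toy sketch below is only an illustration of this reading, not the paper's formalism.

```python
# Toy illustration of viewing string templates as a context-free grammar:
# template names act as nonterminals, static text as terminals, and
# placeholders as references to other templates. Not the paper's formalism.

TEMPLATES = {
    # Stmt -> "print(" Expr ");"
    "Stmt": ["print(", ("ref", "Expr"), ");"],
    # Expr -> "1" (a single terminal alternative keeps the example tiny)
    "Expr": ["1"],
}

def derive(name):
    """Expand a template/nonterminal into a sentence of the language."""
    out = []
    for part in TEMPLATES[name]:
        if isinstance(part, tuple):          # placeholder: expand recursively
            out.append(derive(part[1]))
        else:                                # static text: terminal string
            out.append(part)
    return "".join(out)

print(derive("Stmt"))   # -> print(1);
```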
... As Arnoldus [1] demonstrated in his PhD thesis, adapting the meta-language to include a syntactic description of the target language yields a powerful tool for statically detecting syntax errors in the output before attempting to translate any source code. However, this requires modifications to the meta-language during parse time (of the meta-program), which is technologically challenging. ...
When looking for solutions to automatically translate from high-level source to target code in heterogeneous programming environments and under budgetary restrictions, one often encounters the problem that affordable compilers with front-end support for a particular source language don’t map to the desired target-language. The goal of this paper is to present an approach for adapting existing compilers in a non-intrusive way that keeps the front-end intact and replaces the back-end with a component that supports the translation into any language with a context-free text-based syntax. This is achieved by introducing a domain-specific language for code generation to the compiler pipeline that offers a programmable interface to access internal representations of parsed source code in its programs. We formulate a set of requirements for this language and show how compiler developers can use the supplied interface in combination with the domain-specific language to adapt the textual output to their needs.
... To verify the usefulness of unparser-complete metalanguages in practice, we have designed a metalanguage for templates and applied it in a number of case studies, including a redesign of a domain-specific language for web information systems, reimplementation of the Java back end of the tree-like data structure manipulation library ApiGen [5], dynamic XHTML generation, and reimplementation of the state-machine-based code generator NunniFSMGen. In the current section we focus on the metalanguage itself, while in Section 8 we present the NunniFSMGen case study; discussion of other case studies can be found in [2]. The metalanguage provides three constructs: match-replace, subtemplate invocation and substitution. The match-replace (Figure 5) is a construct containing a set of match-rules with a tree pattern and an accompanying result string. ...
... This line is responsible for outputting the string "class " followed by the class name, which consists of the value of the variable $state and the word "State"; i.e., the template evaluator first determines the value of the expression between <: and :>, which must yield a string, and then replaces the placeholder in the template with this value. One can show that the substitution can be written as a combination of subtemplates and match-replaces [2] and, hence, it does not add new functionality to the metalanguage. This combination of subtemplates and match-replaces is, however, very verbose and frequently used, so we decided to provide an explicit construct for substitution. ...
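The two constructs can be sketched together in a toy evaluator: match-rules bind pattern variables against a tree, and substitution splices the bound values into the <: ... :> placeholders of the result string. This is an illustrative sketch, not Repleo's implementation, and all names are invented for the example.

```python
# Toy sketch of match-replace plus substitution (illustrative, not Repleo).
# A match-rule pairs a tree pattern with a result string; variables bound
# by the pattern (here $state) fill the <: ... :> placeholders.
import re

def match(pattern, tree, env):
    """Bind $variables in the pattern to subtrees; return env or None."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        env[pattern] = tree
        return env
    if isinstance(pattern, tuple) and isinstance(tree, tuple) \
            and pattern[0] == tree[0] and len(pattern) == len(tree):
        for p, t in zip(pattern[1:], tree[1:]):
            if match(p, t, env) is None:
                return None
        return env
    return env if pattern == tree else None

def substitute(text, env):
    """Replace <: $var :> placeholders with the bound values."""
    return re.sub(r"<:\s*(\$\w+)\s*:>", lambda m: str(env[m.group(1)]), text)

def match_replace(rules, tree):
    for pattern, result in rules:
        env = match(pattern, tree, {})
        if env is not None:
            return substitute(result, env)
    raise ValueError("no rule matched")

rules = [(("state", "$state"), "class <: $state :>State { }")]
print(match_replace(rules, ("state", "Locked")))   # -> class LockedState { }
```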
... The architecture of the reimplemented NunniFSMGen is shown in Figure 10. The templates are evaluated by Repleo [2]. Repleo is a template engine based on the unparser-complete metalanguage as defined in this article. ...
A code generator is a program translating an input model into code. In this paper we focus on template-based code generators in the context of the model view controller architecture (MVC).
The language in which the code generator is written is known as a metalanguage in code generation parlance. The metalanguage should be, on the one hand, expressive enough to be of practical value and, on the other hand, restricted enough to enforce the separation between the view and the model, according to the MVC.
In this paper we advocate the notion of unparser-complete metalanguages as providing the right level of expressivity. An unparser-complete metalanguage is capable of expressing an unparser, a code generator that translates any legal abstract syntax tree into an equivalent sentence of the corresponding context-free language. A metalanguage not able to express an unparser will fail to produce all sentences belonging to the corresponding context-free language. A metalanguage able to express more than an unparser will also be able to implement code violating the model/view separation.
We further show that a metalanguage with the power of a linear deterministic tree-to-string transducer is unparser-complete. Moreover, this metalanguage has been successfully applied in a non-trivial case study where an existing code generator is refactored using templates.
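As a rough illustration of what an unparser does, the following sketch maps every legal abstract syntax tree of a toy expression grammar back to a sentence of the corresponding context-free language. It is a plain tree-to-string walk under assumed node shapes, not the paper's transducer formalism.

```python
# Sketch of an unparser for a toy expression grammar
#   E -> E "+" E | "(" E ")" | number
# Every legal AST maps back to a sentence of the language; the mapping is
# a simple tree-to-string walk (illustrative only).

def unparse(node):
    kind = node[0]
    if kind == "num":                       # leaf: number literal
        return str(node[1])
    if kind == "add":                       # E "+" E
        return unparse(node[1]) + " + " + unparse(node[2])
    if kind == "paren":                     # "(" E ")"
        return "(" + unparse(node[1]) + ")"
    raise ValueError(f"unknown node kind: {kind!r}")

tree = ("add", ("num", 1), ("paren", ("add", ("num", 2), ("num", 3))))
print(unparse(tree))   # -> 1 + (2 + 3)
```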
Simulation languages are not flexibly extensible with respect to supporting new domain-specific concepts with a concise representation appropriate to the problem. This concerns both the concepts of the language and the support of the language by language tools. This thesis develops the new language development approach Discrete-Event Modelling with Extensibility (DMX) for building flexibly extensible simulation languages for domain-specific application fields, allowing both efficient development of the language and efficient execution of models. The focus of the thesis is on discrete-event simulation and a process-oriented description of simulation models. The approach distinguishes base concepts, which belong to the base language, from extension concepts, which are part of extension definitions. It is investigated which base concepts a simulation language must provide so that runtime-efficient execution of process-oriented models is possible. The high runtime efficiency of the execution is demonstrated by designing a novel method for mapping process context switches onto a C++ program. The language extension approach is not limited to simulation languages as base languages and is therefore described in general terms. The approach is based on a syntax extension of a base language that is defined with a metamodel and a context-free grammar. The execution of extension concepts is achieved by a concept reduction to base concepts. The approach places certain requirements on a base language and allows certain kinds of extensions, which are examined in the thesis. The suitability of the approach for developing a complex domain-specific simulation language is demonstrated with a language for state machines.
Nowadays, 90 percent of the innovation in vehicles is enabled by software. Over the past thirty years, different methods have been developed to tackle the increasing complexity and to decrease the development costs of automotive software systems. In the scope of this thesis, automotive architectural modeling and quality evaluation methods have been addressed. According to the ISO 42010 standard, an Architecture Description Language (ADL) and an Architecture Framework (AF) are the key mechanisms used in architecture descriptions. ADLs can exist without respective AFs. However, the successful application of an ADL can depend on the proper definition of an AF, since an AF enables better organization and application of an ADL with a clear separation of concerns. Although automotive ADLs have been developed over the last decade, only in recent years have automotive companies started to take the initiative in defining an architecture framework for automotive systems, e.g., the Architecture Design Framework by Renault. The first draft of the Automotive Architecture Framework (AAF) was already proposed half a decade ago by Broy. The first contribution of this thesis is the definition of an Architecture Framework for Automotive Systems (AFAS), which fills a major gap between existing automotive AFs and ADLs that was identified during the literature review and the evaluation of automotive ADLs.
During the evaluation of automotive ADLs, we identified the lack of a capability to ensure architectural quality. Even though quality models based on the ISO/IEC SQuaRE quality standard have been specified for MATLAB Simulink design models, a quality framework for automotive architectural models has not been defined. Based on a series of structured interviews with architects (from one automotive company) responsible for modeling automotive software at different architectural viewpoints, we identified consistency, modularity, and complexity as the three main pillars of quality for automotive architectures. Modeling hierarchical elements consistently across different architectural viewpoints, and handling data and control complexity, are the key needs of automotive architecture modeling. Therefore, the second contribution of this thesis is the definition and development of a quality evaluation framework for automotive software systems.
Ensuring consistency between the different architectural viewpoints is one of the key issues regarding the architectural quality of automotive systems. Correspondence rules between architectural viewpoints are not formally defined in the scope of the automotive architecture description mechanisms. Therefore, we propose a consistency detection mechanism based on correspondence rules between automotive architectural viewpoints and have developed a prototype tool to perform this consistency checking. The consistency checking approach and the prototype tool were evaluated in the scope of an Adaptive Cruise Control modeling exercise between two separate teams emulating an OEM and an automotive supplier.
To evaluate modularity and complexity, we follow the Goal-Question-Metric (GQM) approach. By conducting a series of interviews with automotive architects and reviewing relevant standards, we have identified complexity and modularity aspects serving as goals in GQM. Then, based on academic and industrial publications, we have identified a series of questions that need to be answered to achieve the aforementioned goals. Automotive architects have again reviewed these questions. Finally, we have defined the metrics required to answer the questions and identified or implemented tools capable of measuring and presenting these metrics. The quality framework has been applied to industrial automotive architectural and design models. The results of the framework application have been evaluated by means of qualitative and quantitative analyses. By applying the framework to three subsequent releases of an architectural model and the corresponding design models, we have observed, for example, that the addition of new functionality or bug fixing in design models often comes at the price of increased complexity at the design level, and sometimes compromises the modularity of the architectural model.
To facilitate the quality evaluation process, the framework applies a visual analytics approach for the visualization of modularity and complexity with the help of the SQuAVisiT toolset. This approach enables early feedback about software quality, making software cheaper and easier to reuse and maintain than with traditional techniques. In addition to the visualizations, a mechanism for clone management based on the Variant Configuration Language (VCL) is developed to manage model clones and variants. The benefits of using VCL as the variability technique include separating the variability concern from the functionality concern. The variability mechanism has been validated by converting a number of clone pairs with a varied set of differences into generic VCL representations.
To summarize, we defined an architecture framework for automotive software systems with a coherent set of viewpoints and views for automotive ADLs. Having a coherent set of architecture viewpoints and views and having analyzed automotive-specific needs for architecture description mechanisms, we identified consistency, modularity, and complexity as the three main quality attributes for automotive software systems. We developed a correspondence-rule-based method for ensuring consistency between different architectural viewpoints and defined metric sets for assessing modularity and complexity as part of the quality framework. The quality framework is also extended with quality visualization and clone detection mechanisms to improve software quality.