Conference Paper

ATLTest: A White-Box Test Generation Approach for ATL Transformations

Abstract

MDE is being applied to the development of increasingly complex systems that require larger model transformations. Since the specification of such transformations is an error-prone task, techniques to guarantee their quality must be provided. Testing is a well-known technique for finding errors in programs, and adopting testing techniques in the model transformation domain would help improve transformation quality. So far, testing of model transformations has focused on black-box techniques. In this paper, we instead provide a white-box test model generation approach for ATL model transformations.
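As a rough illustration of what white-box test generation means here (the rule names, guards, and search below are invented for this sketch and are not the paper's algorithm), the following Python snippet derives test targets from ATL-style rule guards so that every guard is exercised both satisfied and unsatisfied:

```python
# Hypothetical sketch: derive white-box test targets from ATL-style rule guards.
# The rules, guards, and candidate models are illustrative, not from the paper.

rules = {
    "Member2Male": lambda m: not m["isFemale"],
    "Member2Female": lambda m: m["isFemale"],
    "Adult2Worker": lambda m: m["age"] >= 18,
}

# A tiny candidate input space to search over.
candidates = [
    {"isFemale": f, "age": a}
    for f in (False, True)
    for a in (0, 17, 18, 65)
]

# White-box coverage goal: each guard evaluated to True and to False
# by at least one generated input model.
test_suite = []
for name, guard in rules.items():
    for wanted in (True, False):
        model = next(m for m in candidates if bool(guard(m)) == wanted)
        test_suite.append((name, wanted, model))

for name, wanted, model in test_suite:
    print(f"{name}: guard evaluates to {wanted} for {model}")
```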
... In the field of white-box testing, Khan [20] and others define white-box testing as a test case design method that uses the control structure of the procedural design to derive test cases; they describe the working process of white-box testing techniques and some commonly used techniques. González et al. [21] provide a model generation method for white-box testing of ATL model transformations. Brenden et al. [22] propose a new method called Meta-learning for Compositionality (MLC) that evaluates the combinatorial generalization ability of large models. ...
... They describe the working process of white-box testing techniques and introduce some commonly used methods. González et al. [21] provide a model generation method for white-box testing of ATL model transformations. Brenden et al. [22] propose a new method called Meta-learning for Compositionality, which performs tests related to the combinatorial generalization ability of large models. ...
Preprint
Full-text available
People want AI systems that do what they say and are reliable, trustworthy, and explainable. We propose a DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) artificial consciousness white-box evaluation standard and method for AI systems. We categorize AI system output resources into deterministic and uncertain resources, the latter including incomplete, inconsistent, and imprecise data. We then map these resources to the DIKWP framework for testing. For deterministic resources, we evaluate their transformation into different resource types based on purpose. For uncertain resources, we evaluate their potential conversion into other deterministic resources in a purpose-driven manner. We construct an AI diagnostic scenario using a 25-dimensional (5x5) framework to evaluate both deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method effectively assesses the cognitive capabilities of AI systems and demonstrates a certain level of interpretability, thus contributing to the improvement and evaluation of AI systems.
Preprint
Full-text available
AI systems should do what they say and be reliable, trustworthy, and explainable. In this paper we propose a DIKWP artificial consciousness white-box evaluation standard and method that divides an AI system's output resources into deterministic resources and uncertain resources (incomplete, inconsistent, and imprecise) and maps them to data, information, knowledge, wisdom, and purpose (DIKWP) for testing. For the former, purpose-driven checks detect whether a given type of resource transformation occurs; for the latter, purpose-driven checks detect whether the resources can be converted into other deterministic resources or processed and output as DIKWP resources. We construct an AI medical consultation scenario as an example and test it, using 25 dimensions (5*5) to process and evaluate the deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method can effectively assess the cognitive capabilities of AI systems, demonstrates a certain degree of interpretability, and can support the improvement and evaluation of AI systems.
Preprint
Full-text available
AI systems should do what they say and be reliable, trustworthy, and explainable. In this paper we propose a DIKWP artificial consciousness white-box evaluation standard and method that divides an AI system's output resources into deterministic resources and uncertain resources (incomplete, inconsistent, and imprecise) and maps them to data, information, knowledge, wisdom, and purpose (DIKWP) for testing. For the former, purpose-driven checks detect whether a given type of resource transformation occurs; for the latter, purpose-driven checks detect whether the resources can be converted into other deterministic resources (translator's note: or into DIKWP uncertain resources or output). We construct an AI medical consultation scenario as an example and test it, using 25 dimensions (5*5) to process and evaluate the deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method can effectively assess the cognitive capabilities of AI systems, demonstrates a certain degree of interpretability, and can support the improvement and evaluation of AI systems.
Article
Model transformation is a core mechanism for Model-Driven Engineering (MDE). Writing complex programs such as model transformations (MT) is error-prone, and efficient testing techniques are required for their quality assurance. There are several challenges when it comes to testing MT, including the automatic generation of suitable input test models and the construction of test oracles based on verification properties. Many approaches to generating input models ensure a certain level of coverage of the source meta-model and of some input/output model constraints. Furthermore, most transformation testing techniques are tailored to specific implementation languages or quality properties, which makes them difficult to reuse across languages. The diversity of languages and verification properties raises the need for a black-box testing framework for MT that is independent of the transformation implementation language and supports systematic verification of quality properties. In this paper, we clarify the basic elements of such a framework and how to apply it to systematically test MT. The main tasks of the model transformation testing process, including test design, test execution and evaluation, are defined and realized within this integrated framework.
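The framework described above separates test design, execution, and evaluation behind a language-independent interface. A minimal Python sketch of that separation, with hypothetical interfaces and a toy transformation that are not taken from the article, could look like this:

```python
# Minimal sketch of a transformation-language-independent testing loop.
# The interfaces, toy transformation, and property are illustrative only.
from typing import Callable, Iterable

Model = dict  # stand-in for whatever model representation a real tool uses

def run_tests(
    inputs: Iterable[Model],
    transform: Callable[[Model], Model],                     # black box: any MT language behind it
    properties: Iterable[Callable[[Model, Model], bool]],    # verification properties as oracles
) -> list[str]:
    """Execute the transformation on each input and evaluate every property."""
    failures = []
    props = list(properties)
    for i, src in enumerate(inputs):
        tgt = transform(src)              # test execution
        for prop in props:                # test evaluation
            if not prop(src, tgt):
                failures.append(f"input #{i}: {prop.__name__} violated")
    return failures

# Toy transformation and property, purely illustrative.
def upper_name(src: Model) -> Model:
    return {"name": src["name"].upper()}

def name_is_uppercased(src: Model, tgt: Model) -> bool:
    return tgt["name"] == src["name"].upper()

print(run_tests([{"name": "a"}, {"name": "bc"}], upper_name, [name_is_uppercased]))
```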
Article
Full-text available
The increasing popularity of MDE results in the creation of larger models and model transformations, turning the specification of MDE artefacts into an error-prone task. Therefore, mechanisms to ensure quality and absence of errors in models are needed to assure the reliability of the MDE-based development process. Formal methods have proven their worth in the verification of software and hardware systems. However, the adoption of formal methods as a valid alternative to ensure model correctness is hampered by the inherent complexity of the problem. To circumvent this complexity, it is common to impose limitations such as restricting the types of constructs that can appear in the model, or turning the verification process from automatic into user-assisted. Since we consider these limitations to be counterproductive for the adoption of formal methods, in this paper we present EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.
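The underlying technique is bounded verification: the metamodel's structural constraints are encoded as a constraint satisfaction problem and a finite instance is searched for within given bounds. The sketch below is a deliberately simplified, hand-rolled illustration of that idea; it does not use EMFtoCSP or constraint logic programming, and the two-class metamodel is invented:

```python
# Simplified bounded-satisfiability check for an invented two-class metamodel:
# each Department must have between 1 and 3 Employees, and every Employee
# belongs to exactly one Department. We search for an instance within bounds.
from itertools import product

MAX_DEPTS, MAX_EMPS = 2, 4

def find_instance():
    for n_depts in range(1, MAX_DEPTS + 1):
        for n_emps in range(1, MAX_EMPS + 1):
            # assignment[e] = department index of employee e (exactly-one constraint)
            for assignment in product(range(n_depts), repeat=n_emps):
                sizes = [assignment.count(d) for d in range(n_depts)]
                if all(1 <= s <= 3 for s in sizes):   # multiplicity 1..3 on each Department
                    return n_depts, n_emps, assignment
    return None  # unsatisfiable within the given bounds

print(find_instance())
```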
Article
Full-text available
Model transformations play a critical role in Model Driven Engineering, and thus rigorous techniques for testing model transformations are needed. This paper identifies and discusses important issues that must be tackled to define sound and practical techniques for testing transformations.
Article
Full-text available
Model transformation is a core mechanism for model-driven engineering (MDE). Writing complex model transformations is error-prone, and efficient testing techniques are required as for any complex program development. Testing a model transformation is typically performed by checking the results of the transformation applied to a set of input models. While it is fairly easy to provide some input models, it is difficult to qualify the relevance of these models for testing. In this paper, we propose a set of rules and a framework to assess the quality of given input models for testing a given transformation. Furthermore, the framework identifies missing model elements in input models and assists the user in improving these models.
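One concrete way to realize such assessment rules is a coverage report: determine which metamodel elements the transformation depends on are never instantiated by the given input models. The following sketch shows that idea with an invented metamodel footprint and hard-coded instance counts; it is not the framework from the article:

```python
# Sketch: report metamodel classes relevant to the transformation that no
# input test model instantiates. Metamodel and models are illustrative only.
relevant_classes = {"Family", "Member", "Address"}   # footprint of the transformation

input_models = [
    {"Family": 1, "Member": 3},   # instances per metamodel class in model A
    {"Family": 2, "Member": 5},   # model B
]

covered = {cls for m in input_models for cls, count in m.items() if count > 0}
missing = relevant_classes - covered

print("covered classes:", sorted(covered))
print("missing (consider adding instances of):", sorted(missing))
```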
Conference Paper
Testing model transformations poses several challenges, among them the automatic generation of appropriate input test models and the specification of oracle functions. Most approaches to the generation of input models ensure a certain level of source meta-model coverage, whereas the oracle functions are frequently defined using query or graph languages. Both tasks are usually performed independently regardless of their common purpose, and sometimes there is a gap between the properties exhibited by the generated input models and those demanded of the transformations (as given by the oracles). Recently, we proposed a formal specification language for the declarative formulation of transformation properties (invariants, pre- and postconditions), from which we generated partial oracle functions that facilitate testing of the transformations. Here we extend the usage of our specification language to the automated generation of input test models by constraint solving. The testing process becomes more intentional because the generated models ensure a certain coverage of the interesting properties of the transformation. Moreover, we use the same specification to consistently derive both the input test models and the oracle functions.
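The central idea is that a single declarative property can serve both as a constraint during input model generation and as a (partial) oracle afterwards. A hedged Python sketch of this dual use, with invented properties, a toy transformation, and brute-force enumeration standing in for constraint solving, is shown below:

```python
# Sketch: one declarative pre-/postcondition pair drives both input generation
# (by constraint filtering) and the oracle. All names and data are invented.
import itertools

def precondition(src):            # constrains generated input models
    return all(isinstance(p["age"], int) and p["age"] >= 0 for p in src)

def postcondition(src, tgt):      # partial oracle over (input, output) pairs
    adults_in = sum(1 for p in src if p["age"] >= 18)
    return len(tgt["adults"]) == adults_in

# Naive generation by enumeration + filtering (stand-in for constraint solving).
ages = [0, 17, 18, 40]
inputs = [
    [{"age": a}, {"age": b}]
    for a, b in itertools.product(ages, repeat=2)
    if precondition([{"age": a}, {"age": b}])
]

def transform(src):               # toy transformation under test
    return {"adults": [p for p in src if p["age"] >= 18]}

print(all(postcondition(s, transform(s)) for s in inputs))
```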
Article
We present UML-CASTING, a prototype tool for computer-assisted generation of test suites from UML models. The concept underlying UML-CASTING is that a large part of the test cases corresponding to a given test strategy can be obtained automatically from the text of the specification. A test strategy consists of a set of decomposition rules that are applied to the expressions in a specification in order to reveal potentially interesting test cases. In the case of UML-CASTING, the decomposition is applied to class and state diagrams in order to yield an intermediate representation (also in the shape of a state machine) from which tests (here, method invocations) can be extracted. Expressions are written in a subset of OCL2. The user can then specify two kinds of goals: either a coverage percentage to achieve or a specific sequence of method calls to be executed. In either case, UML-CASTING uses constraint-solving techniques to produce test cases that achieve these goals.
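The decomposition idea can be pictured as rewriting a specification expression into sub-cases, each of which becomes a test target that is then solved for. The sketch below applies one hypothetical decomposition rule (splitting a disjunctive guard) and finds a witness for each target by brute force; it is not the tool's actual rule set or solver:

```python
# Sketch of a decomposition rule: the OR-guard "x < 0 or x > 100" is split into
# two test targets, and each target is satisfied by brute-force "solving".
# The guard, the rule, and the search domain are illustrative.
targets = [
    ("x < 0",   lambda x: x < 0),
    ("x > 100", lambda x: x > 100),
]

domain = range(-10, 200)
for label, target in targets:
    witness = next((x for x in domain if target(x)), None)
    print(f"target {label!r}: test input x = {witness}")
```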
Article
Model transformations are core to MDE, and a key requirement for any model transformation is that it is validated. In this paper we develop an approach to testing model transformations based on white-box coverage measures of the transformations. To demonstrate the use of this approach we apply it to some examples from the ATL metamodel zoo.
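A simple instance of a white-box coverage measure is rule coverage: the fraction of transformation rules that fire at least once on the test suite. The sketch below computes it from invented execution traces; the rule names are hypothetical and this is only one of the measures such an approach might use:

```python
# Sketch: rule coverage of an ATL-like transformation from execution traces.
# Rule names and traces are invented for illustration.
all_rules = {"Member2Male", "Member2Female", "Family2Register"}

# Which rules fired for each input test model (e.g. obtained from a trace log).
traces = [
    {"Member2Male", "Family2Register"},
    {"Member2Male"},
]

fired = set().union(*traces)
coverage = len(fired & all_rules) / len(all_rules)
print(f"rule coverage: {coverage:.0%}, never fired: {sorted(all_rules - fired)}")
```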
Conference Paper
Model transformation (MT) testing is gaining interest as the size and complexity of MTs grow. In general it is very difficult and expensive (time- and computational-complexity-wise) to fully validate the correctness of an MT. This paper presents an MT testing approach based on the concept of Tract, which is a generalization of the concept of Model Transformation Contract. A Tract defines a set of constraints on the source and target metamodels, a set of source-target constraints, and a tract test suite, i.e., a collection of source models satisfying the source constraints. We automatically generate input test suite models, which are then transformed into output models by the transformation under test, and the results are checked with the USE tool (UML-based Specification Environment) against the constraints defined for the transformation. We show the different kinds of tests that can be conducted over an MT using this automated process, and the kinds of problems it can help uncover.
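In code, a tract can be viewed as a bundle of source constraints, source-target constraints, and a test suite of source models. The sketch below models that bundle with plain Python predicates over toy models; the constraints and transformation are invented, and the actual approach expresses them as OCL constraints checked with the USE tool rather than Python functions:

```python
# Sketch of the Tract idea: source constraints + source-target constraints
# + a test suite of source models. Everything here is an invented toy example.
from dataclasses import dataclass, field
from typing import Callable

Model = dict

@dataclass
class Tract:
    source_constraints: list[Callable[[Model], bool]]
    source_target_constraints: list[Callable[[Model, Model], bool]]
    test_suite: list[Model] = field(default_factory=list)

    def check(self, transform: Callable[[Model], Model]) -> list[str]:
        problems = []
        for i, src in enumerate(self.test_suite):
            if not all(c(src) for c in self.source_constraints):
                problems.append(f"model #{i} violates a source constraint")
                continue
            tgt = transform(src)
            if not all(c(src, tgt) for c in self.source_target_constraints):
                problems.append(f"model #{i}: source-target constraint violated")
        return problems

# Toy transformation: families to person lists.
tract = Tract(
    source_constraints=[lambda s: len(s["members"]) >= 1],
    source_target_constraints=[lambda s, t: len(t["persons"]) == len(s["members"])],
    test_suite=[{"members": ["a", "b"]}, {"members": ["c"]}],
)
print(tract.check(lambda s: {"persons": list(s["members"])}))
```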