Chapter

A Multi-Model Reviewing Approach for Production Systems Engineering Models


Abstract

Background. Production Systems Engineering (PSE) models, which describe plants, represent different views of several engineering disciplines (such as mechanical, electrical and software engineering) and may contain tens of thousands of instance elements, such as concepts, attributes and relationships. Validating these models requires an integrated multi-model view and the domain expertise of human experts related to individual views. Unfortunately, the heterogeneity of disciplines, tools, and data formats makes it hard to provide a technology-independent multi-model view. Aim. In this paper, we aim at improving the Multi-Model Reviewing (MMR) capabilities of domain experts based on selected model visualisation methods and mechanisms. Method. We (a) derive requirements for graph-based visualisation to facilitate reviewing multi-disciplinary models; (b) introduce the MMR approach to visualise engineering models for review as hierarchical and linked structures; (c) design an MMR software prototype; and (d) evaluate the prototype based on tasks derived from real-world PSE use cases. For evaluation purposes we compare the capabilities of the MMR prototype and a text-based model editor. Results. The MMR prototype enabled performing the evaluation tasks in most cases considerably faster than the standard text-based model editor. Conclusion. The promising results of the MMR approach in the evaluation context warrant empirical studies with a wider range of domain experts and use cases on the usability and usefulness of the MMR approach in practice.


... Engineering Graph Visualisation and Analysis. Text- or tree-based engineering networks and their dependencies are hard to understand without visualisation [21]. The visualisation of an MDEG requires capabilities to encode the type and supporting meta-information of links between elements of the graph. ...
Conference Paper
Industry 4.0 envisions adaptive production systems, i.e., Cyber-Physical Production Systems (CPPSs), to manufacture products from a product line. Product-Process-Resource modeling represents the essential aspects of a CPPS. However, due to discipline-specific models, e.g., mechanical, electrical, and automation models, it is often unclear how to integrate the proprietary data into an integrated model due to missing common understanding. This paper investigates (i) how to integrate local engineering views with Common Concepts (CCs), using them as a defined taxonomy for modeling a network of engineering concepts; and (ii) how to build an engineering network graph for visualisation and analysis considering discipline-specific needs. We motivate a method to support CPPS engineering organisations in integrating their heterogeneous data using CCs. This builds the basis for defining multi-domain engineering graphs for visualisation and analysis. In this paper, we present a research agenda discussing open issues and expected results.
Article
Climate change, increasing emissions and rising global temperatures have gradually affected the way we think about the future of our planet. Urban areas possess significant potential for reducing the energy consumption of the overall energy system. In recent years, there has been an increasing number of research initiatives related to Urban Building Energy Modelling (UBEM) that focus on simulation processes and validation techniques. Although input data are crucial for the modelling process as well as for the validity of the results, the availability of input data and associated data formats has not been analysed in detail. This paper closes the identified knowledge gap by presenting a taxonomic analysis of key UBEM components, including: input data formats, simulation tools, simulation results and validation techniques. This paper concludes that over ∼95% of the studies analysed were not reproducible due to the absence of information relating to key aspects of the respective methodologies, such as data sources and simulation workflows. This paper also qualifies how weak levels of interoperability, with respect to input and output data, are present in all phases of UBEM.
Chapter
Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside of a smart factory. Due to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across the frameworks, producing, thus, interoperability conflicts among them. Semantic-based approaches that rely on ontologies and knowledge graphs, have been proposed to represent standards, known relations among them, as well as their classification according to existing frameworks. Albeit informative, the structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues. Thus, graph-based analytical methods able to exploit knowledge encoded by these approaches, are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards. We use knowledge graph embeddings to automatically create these communities exploiting the meaning of the existing relationships. In particular, we focus on the identification of similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately.
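The embedding-based relatedness described above can be illustrated with a toy TransE-style score, where a triple (head, relation, tail) is considered plausible when the head vector translated by the relation vector lands near the tail vector. This is a minimal sketch; the 3-dimensional embeddings and the standards named below are made-up illustrations, not data from the paper's knowledge graph of I4.0 standards.

```python
# Toy sketch of TransE-style triple scoring for standards relatedness.
# Small distance ||h + r - t|| means the triple is scored as plausible.

def transe_distance(h, r, t):
    """L2 distance between the translated head (h + r) and the tail t."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Hypothetical 3-d embeddings for two standards and a 'relatedTo' relation.
emb = {
    "OPC-UA": [0.9, 0.1, 0.0],
    "AutomationML": [1.0, 0.3, 0.1],
    "relatedTo": [0.1, 0.2, 0.1],
}
d = transe_distance(emb["OPC-UA"], emb["relatedTo"], emb["AutomationML"])
print(round(d, 3))  # 0.0 -> triple scored as highly plausible
```

In the paper's setting, clustering such distances over all pairs yields communities of similar standards; here the numbers are chosen so the example triple scores as plausible.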
Article
Software code review, i.e., the practice of having third-party team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that the formal code inspections of the past tend to improve the quality of software delivered by students and small teams. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process qualitatively, little research quantitatively explores the relationship between properties of the modern code review process and software quality. Hence, in this paper, we study the relationship between software quality and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, and (2) code review participation, i.e., the degree of reviewer involvement in the code review process. Through a case study of the Qt, VTK, and ITK projects, we find that both code review coverage and participation share a significant link with software quality. Low code review coverage and participation are estimated to produce components with up to two and five additional post-release defects respectively. Our results empirically confirm the intuition that poorly reviewed code has a negative impact on software quality in large systems using modern reviewing tools.
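The two metrics defined above reduce to simple proportions over a project's change history: coverage is the share of changes that were reviewed, and participation can be approximated by reviewer involvement per change. The change records and field names below are illustrative assumptions, not data from the Qt, VTK, or ITK case study.

```python
# Illustrative computation of code review coverage and a simple
# participation proxy (reviewer comments per change) over made-up records.

changes = [
    {"id": 1, "reviewed": True, "reviewer_comments": 4},
    {"id": 2, "reviewed": True, "reviewer_comments": 0},
    {"id": 3, "reviewed": False, "reviewer_comments": 0},
    {"id": 4, "reviewed": True, "reviewer_comments": 2},
]

# Coverage: proportion of changes that have been code reviewed.
coverage = sum(c["reviewed"] for c in changes) / len(changes)
# Participation proxy: average reviewer comments per change.
participation = sum(c["reviewer_comments"] for c in changes) / len(changes)
print(coverage, participation)  # 0.75 1.5
```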
Article
This chapter has demonstrated an elegant way to visually represent ontological data. We have described how the Cluster Map visualization can use ontologies to create expressive information visualizations, with the attractive property that classes and objects that are semantically related are also spatially close in the visualization. Another key aspect of the visualization is that it focuses on visualizing instances rather than ontological models, thereby making it very useful for information retrieval purposes. A number of applications developed in the past few years have been described that prominently incorporate the Cluster Map visualization. Based on these descriptions, we could distinguish a number of generic information retrieval tasks that are well supported by the visualization. These applications prove the usability and usefulness of the Cluster Map in real-life scenarios. Furthermore, these applications show the applicability of the visualization in Semantic Web-based environments, where lightweight ontologies are playing a crucial role in organizing and accessing heterogeneous and decentralized information sources.
Article
A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together. We realize this as follows. We assume that the hierarchy is shown via a standard tree visualization method. Next, we bend each adjacency edge, modeled as a B-spline curve, toward the polyline defined by the path via the inclusion edges from one node to another. This hierarchical bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree visualization techniques. We illustrate our technique by providing example visualizations and discuss the results based on an informal evaluation provided by potential users of such visualizations.
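The routing step described above can be sketched in a few lines: for each adjacency edge, collect the hierarchy path between its endpoints (up to the lowest common ancestor and back down), then relax those control points toward the straight endpoint line with a bundling strength. This is a simplified sketch of the idea, assuming a parent map and a node-position layout; the names, the blending scheme, and the strength value are illustrative, and a renderer would still fit a B-spline through the resulting control points.

```python
# Sketch of hierarchical edge bundling: route an adjacency edge along the
# tree path between its endpoints, then blend the control points toward
# the straight line with a bundling strength beta (beta=1: full bundling).

def tree_path(parent, a, b):
    """Control path a -> ... -> LCA -> ... -> b through the hierarchy."""
    up_a = [a]
    while a in parent:
        a = parent[a]
        up_a.append(a)
    up_b = [b]
    while b in parent:
        b = parent[b]
        up_b.append(b)
    lca = next(n for n in up_a if n in set(up_b))
    return up_a[:up_a.index(lca) + 1] + list(reversed(up_b[:up_b.index(lca)]))

def bundle(points, pos, beta=0.8):
    """Blend each path control point toward the straight endpoint line."""
    p0, pn = pos[points[0]], pos[points[-1]]
    n = len(points) - 1
    out = []
    for i, name in enumerate(points):
        x, y = pos[name]
        t = i / n  # parameter along the straight line
        sx = p0[0] + t * (pn[0] - p0[0])
        sy = p0[1] + t * (pn[1] - p0[1])
        out.append((beta * x + (1 - beta) * sx, beta * y + (1 - beta) * sy))
    return out  # feed these as B-spline control points to a renderer

parent = {"a": "p", "b": "p", "p": "root", "c": "root"}
print(tree_path(parent, "a", "c"))  # ['a', 'p', 'root', 'c']
```

Lower values of beta straighten the edges, trading bundling (less clutter) against positional accuracy, which mirrors the strength parameter discussed in the original technique.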
Article
Introduction The Keystroke-Level Model (KLM), proposed by Card, Moran, & Newell (1983), predicts task execution time from a specified design and specific task scenario. Basically, you list the sequence of keystroke-level actions the user must perform to accomplish a task, and then add up the times required by the actions. It is not necessary to have an implemented or mocked-up design; the KLM requires only that the user interface be specified in enough detail to dictate the sequence of actions required to perform the tasks of interest. The actions are termed keystroke level if they are at the level of actions like pressing keys, moving the mouse, pressing buttons, and so forth, as opposed to actions like "log onto system" which is much more abstract. The KLM requires that you describe how the user would do the task in terms of actions at this keystroke level. The basic actions are called operators, in the sense of operators in the Model Human Processor discussion. There is a standard
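The "add up the times" prediction described above is easy to make concrete. The sketch below uses the commonly cited KLM operator times; the specific task encoding and helper names are illustrative assumptions, not taken from the chapter.

```python
# Minimal Keystroke-Level Model (KLM) estimator: encode a task as a
# sequence of keystroke-level operators and sum their standard times.

# Commonly cited KLM operator times in seconds (K assumes a skilled typist).
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press on the keyboard
    "P": 1.10,  # point with the mouse to a target on the screen
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for the next action
}

def klm_estimate(operators):
    """Predicted task execution time: the sum of the operator times."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example task: move hand to mouse, think, point at a menu item, click.
task = ["H", "M", "P", "B", "B"]
print(round(klm_estimate(task), 2))  # 3.05
```

Note that no implemented interface is needed, as the abstract states: the operator sequence alone determines the prediction.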
Conference Paper
Background. In Production Systems Engineering (PSE), the planning of production systems involves domain experts from various domains, such as mechanical, electrical and software engineering, collaborating and modeling their specific views on the system. These models, describing entire plants, can reach a large size (up to several GBs) with complex relationships and dependencies. Due to the size, ambiguous semantics and diverging views, consistency of data and awareness of changes are challenging to track. Aim. In this paper we explore visualization mechanisms for a model inspection tool to support consistency checking and the awareness of changes in multi-disciplinary PSE environments, as well as more efficient handling of AutomationML (AML) files. Method. We explore various visualization capabilities that are suitable for the hierarchical structures common in PSE and identify requirements for a model-inspection tool for PSE purposes based on workshops with our company partner. A proof-of-concept software prototype is developed based on the elicited requirements. Results. We evaluate the effectiveness of our Information Visualisation (InfoVis) approach in comparison to a standard modeling tool in PSE, the AutomationML Editor. The evaluation showed promising results for handling large-scale engineering models based on AML for the selected scenarios, but also areas for future improvement, such as more advanced capabilities. Conclusion. Although InfoVis was found useful in the evaluation context, an in-depth analysis with domain experts from industry regarding usability and features remains for future work.
Conference Paper
Multi-disciplinary data exchange still poses many challenges: heterogeneous data sources, diverging data views, and lack of communication lead to defects, late resolution of errors, and mismatches over the project lifecycle. Semantic approaches such as ontologies are a viable solution for deriving common concepts between disciplines to limit negative effects in their collaboration. However, the application of these semantic approaches is still quite limited due to their inherent complexity. The purpose of this paper is to discuss the concept of local glossaries as a step towards a Common Concept Glossary (CCG) method: a tool-supported method to enable and simplify the creation of common concepts derived from local glossaries that are built by discipline-specific workgroups. The local concepts can aid collaboration by making changes visible and by enabling traceability and maintainability of common models across different disciplines.
Chapter
In the parallel engineering of industrial production systems, domain experts from several disciplines need to exchange data efficiently to prevent the divergence of local engineering models. However, the data synchronization is hard (a) as it may be unclear what data consumers need and (b) due to the heterogeneity of local engineering artifacts. In this paper, we introduce use cases and a process for efficient Engineering Data Exchange (EDEx) that guides the definition and semantic mapping of data elements for exchange and facilitates the frequent synchronization between domain experts. We identify main elements of an EDEx information system to automate the EDEx process. We evaluate the effectiveness and effort of the EDEx process and concepts in a feasibility case study with requirements and data from real-world use cases at a large production system engineering company. The domain experts found the EDEx process more effective and the EDEx operation more efficient than the traditional point-to-point process, and providing insight for advanced analyses.
Conference Paper
Code reviewing is well recognized as a valuable software engineering practice for improving software quality. Today a large variety of tools exist that support code reviewing and are widely adopted in open source and commercial software projects. They commonly support developers in manually inspecting code changes, providing feedback on and discussing these code changes, as well as tracking the review history. As source code is usually text-based, code reviewing tools also only support text-based artifacts. Hence, code changes are visualized textually and review comments are attached to text passages. This renders them unsuitable for reviewing graphical models, which are visualized graphically in diagrams instead of textually and hence require graphical change visualizations as well as annotation capabilities on the diagram level. Consequently, developers currently have to switch back and forth between code reviewing tools and comparison tools for graphical models to relate reviewer comments to model changes. Furthermore, adding and discussing reviewer comments on the diagram level is simply not possible. To improve this situation, we propose a set of coordinated visualizations of reviewing-relevant information for graphical models including model changes, diagram changes, review comments, and review history. The proposed visualizations have been implemented in a prototype tool called Mervin supporting the reviewing of graphical UML models developed with Eclipse Papyrus. Using this prototype, the proposed visualizations have been evaluated in a user study concerning effectiveness. The evaluation results show that the proposed visualizations can improve the review process of graphical models in terms of issue detection.
Article
Software and systems engineering combine artifacts from diverse domains and tools. This article explores how incremental consistency checking is able to automatically and continuously detect inconsistencies among these artifacts.
Book
This book discusses challenges and solutions for the required information processing and management within the context of multi-disciplinary engineering of production systems. The authors consider methods, architectures, and technologies applicable in use cases according to the viewpoints of product engineering and production system engineering, and regarding the triangle of (1) product to be produced by a (2) production process executed on (3) a production system resource. With this book industrial production systems engineering researchers will get a better understanding of the challenges and requirements of multi-disciplinary engineering that will guide them in future research and development activities. Engineers and managers from engineering domains will be able to get a better understanding of the benefits and limitations of applicable methods, architectures, and technologies for selected use cases. IT researchers will be enabled to identify research issues related to the development of new methods, architectures, and technologies for multi-disciplinary engineering, pushing forward the current state of the art.
Conference Paper
Data heterogeneity and proprietary interfaces present a major challenge for big data analytics. The data generated from a multitude of sources has to be aggregated and integrated first before being evaluated. To overcome this, an automated integration of this data and its provisioning via defined interfaces in a generic data format could greatly reduce the effort for an efficient collection and preparation of data for data analysis in automated production systems. Besides, the sharing of specific data with customers and suppliers, as well as near real-time processing of data can boost the information gain from analysis. Existing approaches for automatic data integration lack the fulfillment of all these requirements. On this basis, a flexible architecture is proposed, which simplifies data integration, handling and sharing of data over organizational borders. Special focus is put on the ability to process near real-time data which is common in the field of automated production systems. An evaluation with technical experts from the field of automation was carried out by adapting the generic concept for specific use cases. The evaluation showed that the proposed architecture could overcome the disadvantages of current systems and reduce the effort spent on data integration. Therefore, the proposed architecture can be an enabler for automated data analysis of distributed data from sources with heterogeneous data formats in automated production systems.
Chapter
The world of production systems is at a turning point. The growing importance of customer requirements and the increasing speed of technological progress have led production system owners to expand the flexibility of production systems with respect to product portfolio and resource utilisation (Terkaj et al. 2009). However, this expanded flexibility does not come for free. New procedures and methods for the design and use of production systems have proven necessary, as envisioned by the Industry 4.0 initiative (Kagermann et al. 2013; Jasperneite 2012).
Conference Paper
The systematic management of the various models in automated production systems engineering is a major challenge as, due to the multi-disciplinary nature, different stakeholders provide their potentially heterogeneous but partially overlapping descriptions of the system. This leads to the need of ensuring inter-model consistency by managing occurring inconsistencies. In previous work, existing approaches have been surveyed; however, none is providing a comprehensive approach for dealing with inconsistencies in multi-disciplinary systems engineering. Therefore, we propose in this paper a comprehensive but at the same time light-weight approach for specifying and managing inter-model consistency. In particular, inter-model consistency is explicitly modelled and, hence, allows to detect, represent, and manage inconsistencies. Concerning the latter, we provide an interactive and collaborative approach to automate the consistency maintenance during model evolution, which is especially needed in multi-disciplinary projects.
Conference Paper
The parallel engineering of cyber-physical production systems (CPPSs), an important type of sCPS, needs the collaboration of several engineering disciplines, which use a wide range of heterogeneous tools and data sources. Systems engineers and process participants need an overview on the engineering artifact versions and their relationships on the project level. Unfortunately, the version management of engineering artifacts typically is conducted on the file level and does not support the versioning of engineering model elements. Recently, the AutomationML standard has been introduced to facilitate the data exchange between systems engineering tools. In this paper, we discuss the challenges of versioning related model views during the engineering of CPPSs to achieve a mechatronic view on the engineering artifacts. Based on real-world examples, we discuss (a) the strengths and limitations of best-practice approaches in CPPS engineering and (b) how software engineering contributions can provide the foundation for effectively addressing the challenge of versioning engineering model elements to provide a mechatronic view. From this analysis, we derive research issues for future work.
Conference Paper
Modeling engineering knowledge explicitly and representing it by means of standardized modeling languages and in machine-understandable form enables advanced engineering processes in industrial and factory automation. This affects positively both process and product quality. In this paper we explore how the AutomationML format, an emerging data exchange standard, that supports the Industry 4.0 vision, can be represented by means of two established modeling approaches – Model-Driven Engineering (MDE) and Semantic Web. We report observed differences w.r.t. resulting model features and model creation process and, additionally, present the application possibilities of the developed models for engineering process improvement in a production system engineering context.
Book
This book provides guidelines for practicing design science in the fields of information systems and software engineering research. A design process usually iterates over two activities: first designing an artifact that improves something for stakeholders and subsequently empirically investigating the performance of that artifact in its context. This validation in context is a key feature of the book - since an artifact is designed for a context, it should also be validated in this context.
Conference Paper
Capturing traceability information among artifacts allows for assuring product quality in many ways such as tracking functional and non-functional requirements, performing system validation and impact analysis. Although literature provides many techniques to model traceability, existing solutions are either tailored to specific domains (e.g., Ecore modeling languages), or not complete enough (e.g., lack support to specify traceability link semantics). This paper examines the current traceability models and identifies the drawbacks that prevent from capturing some traceability information of heterogeneous artifacts. In this context, heterogeneous artifacts refer to artifacts that come from widely different modelling notations (e.g., UML, Simulink, natural language text, source code). Additionally, the paper proposes traceability model requirements that are necessary to build a generic traceability model. We argue that the proposed requirements are sufficient to build a traceability model oblivious of the heterogeneity of the models which elements need to be traced. We also argue that our proposed requirements can be adopted to create a generic traceability model that provides flexibility and can accommodate new ways of characterizing and imposing constraints on trace links or systems artifacts. The proposed requirements incorporate the ideas from many existing solutions in literature, in an attempt to be as complete as possible.
Conference Paper
This contribution discusses the well-known paradox of today’s standardization approaches for semantic models in automation engineering and proposes an evolutionary concept to resolve it. In this context, an overview of past and current standardization activities is presented which points out the existing barriers. Instead of aiming for a completely standardized neutral data model, the proposed approach actively supports data exchange with mixed neutral and proprietary data models. Hence, it utilizes the existing heterogeneity and the maturity of proprietary data models. In this context, the authors present a concept of maturity levels, which form milestones towards a stepwise evolution of a semantic standardization. Finally, this paper describes how this approach is technically implemented in AutomationML and provides examples to illustrate the concept.
Conference Paper
This contribution presents the basic architecture of the neutral data format AutomationML developed by the companies Daimler, ABB, Siemens, Rockwell, Kuka, Zuhlke, netAllied and the universities of Magdeburg and Karlsruhe. AutomationML serves for the data exchange between manufacturing engineering tools and therefore supports the interoperability between them. It covers information about the plant structure (topology and geometry) and the plant behaviour (logic and kinematics). The first version of AutomationML has been presented at the Hannover fair in 2008.
Conference Paper
Within the engineering of automated systems, different engineering disciplines are involved. Typically intermediate results from one discipline are handed over to another discipline. These results are refined throughout domain specific activities, and then handed over to other disciplines, incl. the originating one. This results in hidden dependencies between the involved disciplines, the planning assumptions as well as results, and the technical artefacts. This paper shows a method, proven in the engineering of automated plants in the metal industry, to gain explicit knowledge about the technical dependencies within the engineering of automated systems. Therefore the typical characteristics of the engineering process are described first, followed by a description how to capture the engineering process and a systematic approach to make these dependencies visible.
Book
Information Visualization is a relatively young field that is acquiring more and more consensus in both academic and industrial environments. This concise introduction to the subject explores the use of computer-supported interactive graphical representations to explain data and amplify cognition. Written in a lively, yet rigorous, style the book explores ways of communicating ideas or facts about data, and shows how to validate hypotheses, and facilitate the discovery of new facts via exploration. The concepts outlined in the book are illustrated in a simple and thorough manner, building a reference for those situations in which graphic representation of information, generated and assisted by the use of computer tools, can help in visualizing ideas, data and concepts. With suggestions for setting communications systems based on, or availing of, graphic representations, this textbook illustrates cases, situations, tools and methods which help make the graphic representations of information effective and efficient.
Article
The authors explain how to perform software inspections to locate defects. They present metrics for inspection and examples of its effectiveness. The authors contend, on the basis of their experiences and those reported in the literature, that inspections can detect and eliminate faults more cheaply than testing.
Bordeleau, F., Liebel, G., Raschke, A., Stieglbauer, G., Tichy, M.: Challenges and Research Directions for Successfully Applying MBE Tools in Practice. In: MODELS (Satellite Events), pp. 338-343 (2017)
Drath, R.: Datenaustausch in der Anlagenplanung mit AutomationML: Integration von CAEX, PLCopen XML und COLLADA. Springer-Verlag (2009)
Lüder, A., Pauly, J.L., Kirchheim, K., Rinker, F., Biffl, S.: Migration to AutomationML based Tool Chains - incrementally overcoming Engineering Network Challenges. In: 5th AutomationML User Conference (2018), https://www.automationml.org/o.red/uploads/dateien/1548668540-17 Lueder Migration-ToolChains Paper.pdf
Rinker, F., Waltersdorfer, L., Schüller, M., Winkler, D.: Information Visualization in Production Systems Engineering. Tech. Report CDL-SQI 2019-15, TU Wien (Jun 2019), http://qse.ifs.tuwien.ac.at/wp-content/uploads/CDL-SQI-2019-15.pdf
Rivas, A., Grangel-González, I., Collarana, D., Lehmann, J., Vidal, M.E.: Unveiling Relations in the Industry 4.0 Standards Landscape based on Knowledge Graph Embeddings. arXiv preprint arXiv:2006.04556 (2020)