Lab

Computing in Engineering


About the lab

The Chair of Computing in Engineering researches new scientific foundations for the systematic generalization of planning, calculation and simulation methods in civil engineering on the basis of innovative information and communication technology. The main focus is on the control of complex model interactions, the development of emergent and adaptive software concepts, holistic planning and simulation systems, and the formalization of expert and empirical knowledge.

Featured research (46)

Construction Digital Twins (CDTs) are pivotal in the AECO industry; however, many recent publications offer non-standardized, monolithic frameworks. This research suggests adopting modular CDTs based on the standardized Information Container for linked Document Delivery (ICDD) to promote modularity and reusability, and proposes a way to couple multiple ICDDs in a System-of-Systems approach. To this end, the ICDD data structure is analyzed and extended towards a hierarchy of containers that can communicate horizontally and vertically. Standardized REST and SPARQL interfaces are used for horizontal communication, while an extended data structure supports the vertical integration of CDT modules within ICDD containers, subdividing them by spatial, process, or life cycle criteria based on the standardized link-type specializations provided by the ICDD standard. Finally, this paper presents a proof of concept for CDT module integration within ICDD, addressing the need for modular and standardized frameworks for deploying CDTs, based on the use case of bridge condition assessment that integrates inspection data with live data from structural health monitoring.
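The container hierarchy described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the container identifiers and link-type names are hypothetical placeholders, and the real horizontal communication would go through REST/SPARQL endpoints rather than in-memory references.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a CDT module hierarchy. Container IDs and link-type
# strings below are assumptions, not taken from the ICDD standard's ontology.
@dataclass
class CDTModule:
    container_id: str
    children: list = field(default_factory=list)   # vertical integration
    peers: list = field(default_factory=list)      # horizontal coupling

    def add_child(self, child: "CDTModule") -> None:
        """Vertically integrate a sub-container (e.g. a spatial subdivision)."""
        self.children.append(child)

    def link_peer(self, peer: "CDTModule") -> None:
        """Horizontally couple two containers, as REST/SPARQL peers would be."""
        self.peers.append(peer)
        peer.peers.append(self)

    def all_containers(self):
        """Traverse the container hierarchy depth-first."""
        yield self
        for child in self.children:
            yield from child.all_containers()

# Bridge condition assessment use case: one parent CDT, two coupled modules.
bridge = CDTModule("bridge-cdt")
inspection = CDTModule("inspection-cdt")
monitoring = CDTModule("shm-cdt")
bridge.add_child(inspection)
bridge.add_child(monitoring)
inspection.link_peer(monitoring)

print([c.container_id for c in bridge.all_containers()])
```

The point of the sketch is the shape of the System-of-Systems: vertical parent–child containment plus horizontal peer links between sibling containers.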
The efficiency of Tunnel Boring Machines (TBMs), and of mechanized tunneling projects in general, is a crucial factor during a project's planning phase. It is well established that delays translate into increased costs, which is especially significant in tunneling construction due to the typically large scale of these projects. Productivity is a key performance indicator for tunneling projects and can have a significant impact on project timelines and costs. A comprehensive examination of TBM productivity and its broader application within mechanized tunneling projects shows that the quantification and prediction of failures and downtimes are important aspects. By analyzing TBM sensor data, we can break down how each step in the project contributes to overall downtime and identify the aspects causing delays. Downtime analysis can also reveal connections or correlations between these issues. This paper presents a comprehensive approach to analyzing TBM downtime codes from two main aspects: duration and occurrence frequency. The study classifies error codes and identifies dependencies between failure sources. Furthermore, it explores correlations between TBM sensor readings and the occurrence of failures as a step toward the early detection of downtimes using machine learning techniques. The implementation of different anomaly detection methods demonstrates the dependency between machine failures and outliers in sensor readings, justifying the use of anomaly detection algorithms for early failure detection, potentially early enough to avoid major breakdowns. The approach is applied to a use case, and the results show the ability to detect downtime in the main power and the cutting wheel after conducting principal component analysis on a specific set of sensor readings and applying different anomaly detection techniques. This approach showcases the benefits of deploying artificial intelligence to optimize mechanized tunneling processes by leveraging TBM data analysis.
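The combination of principal component analysis and anomaly detection mentioned above can be sketched as follows. This is a generic illustration under assumed synthetic data, not the paper's pipeline: a PCA model is fitted on readings from normal operation, and readings with a large reconstruction error are flagged as potential failure precursors.

```python
import numpy as np

# Synthetic sensor data (assumption, purely for illustration): 200 readings
# from normal operation and 5 readings around a simulated failure.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 5))    # 5 hypothetical sensor channels
faults = rng.normal(10.0, 1.0, size=(5, 5))     # strong deviation on all channels
readings = np.vstack([normal, faults])

# Fit PCA on the normal period only: center, take the top-2 components via SVD.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:2]

# Anomaly score = reconstruction error after projecting onto the PCA subspace.
X = readings - mean
reconstructed = X @ components.T @ components
errors = np.linalg.norm(X - reconstructed, axis=1)

# Flag readings whose error exceeds mean + 3 std of the normal-period errors.
threshold = errors[:200].mean() + 3 * errors[:200].std()
anomalies = np.where(errors > threshold)[0]
print(f"{len(anomalies)} anomalous readings flagged")
```

Fitting on the normal period only is the key design choice: the model learns the correlation structure of healthy operation, so readings that break that structure stand out even before a downtime code is logged.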
The AECO (Architecture, Engineering, Construction, and Operations) industry is increasingly benefiting from advanced information management, particularly through the use of Building Information Modeling (BIM) methods. BIM integrates geometric and alphanumeric data, such as dimensions and spatial relationships, into digital building models. These elements are subject to specific constraints defined by industry standards and guidelines. The problem with the constraints embedded in regulatory documents is that they are mostly available only in non-machine-readable formats, which hinders the exchange and use of this data across the building's lifecycle. Currently, the required rules and requirements are extracted from regulatory documents manually by experts and integrated, manually and labor-intensively, into the corresponding BIM processes. Many research projects are therefore addressing how the textual components of regulatory documents can be analyzed automatically to extract rules and requirements, using Natural Language Processing or other approaches. In addition to this textual information, many standards also contain figures and illustrations that explain the information described in the text in more detail or contain additional knowledge. Converting this graphical content into machine-readable information is just as challenging as analyzing the textual components. This paper develops an approach for extracting information related to building information requirements from figures contained in standards using two state-of-the-art Multimodal Large Language Models (MLLMs), Kosmos-2 and GPT-4. An MLLM is an AI system that can process text, images, audio, and video, unlike traditional models that are limited to text; trained on large data sets, MLLMs recognize complex patterns across different types of data. With the help of the two MLLMs, the images and their corresponding texts and captions from the standards are first analyzed, and the extracted information is then transferred into a structured data format to make it efficiently usable within BIM processes.
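The final step, transferring extracted information into a structured data format, can be sketched as below. The MLLM call itself is omitted; the response text, the regular expression, and the field names are hypothetical assumptions, meant only to show how a free-text figure description might be turned into machine-readable requirement records.

```python
import json
import re

# Hypothetical MLLM output describing a figure from a standard (assumption).
mllm_response = (
    "The figure shows a stair detail. "
    "Minimum headroom: 2.00 m. Maximum riser height: 0.19 m."
)

# Parse labeled dimension constraints of the form "Minimum <property>: <value> <unit>".
pattern = re.compile(r"(Minimum|Maximum)\s+([\w\s]+?):\s*([\d.]+)\s*(m|cm|mm)")
requirements = [
    {
        "constraint": kind.lower(),        # "minimum" or "maximum"
        "property": prop.strip().lower(),  # e.g. "headroom"
        "value": float(value),
        "unit": unit,
    }
    for kind, prop, value, unit in pattern.findall(mllm_response)
]

print(json.dumps(requirements, indent=2))
```

Records in this shape could then be mapped onto BIM model-checking rules, which is the usability goal the abstract describes.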
Efficient maintenance planning and streamlined inspection for bridges are essential to prevent catastrophic structural failures. Digital Bridge Management Systems (BMS) have the potential to streamline these tasks. However, their effectiveness relies heavily on the availability of accurate digital bridge models, which are currently challenging and costly to create, limiting the widespread adoption of BMS. This study addresses this issue by proposing a computer vision-based process for generating bridge superstructure models from pixel-based construction drawings. We introduce an automatic pipeline that utilizes a deep learning-based symbol pose estimation approach based on Keypoint R-CNN to organize drawing views spatially, implementing parts of the proposed process. By extending the keypoint-based detection approach to simultaneously process multiple object classes with a variable number of keypoints, a single instance of Keypoint R-CNN can be trained for all identified symbols. We conducted an empirical analysis to determine evaluation parameters for the symbol pose estimation approach, assessing the method's performance and improving the trained model's comparability. Our findings demonstrate promising steps towards efficient bridge modeling, ultimately facilitating maintenance planning and management.
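One common way to let a single keypoint detector handle classes with different keypoint counts is to pad every annotation to the maximum count and mark padded points as invisible. The sketch below illustrates that idea; the symbol classes, keypoint counts, and padding scheme are assumptions for illustration, not necessarily the paper's exact design.

```python
import numpy as np

# Hypothetical per-class keypoint counts for drawing symbols (assumption).
symbol_keypoints = {
    "bearing": 2,
    "section_marker": 3,
    "elevation_arrow": 1,
}
max_kp = max(symbol_keypoints.values())

def pad_keypoints(points):
    """Return a (max_kp, 3) array of [x, y, visibility]; padding rows get v=0."""
    out = np.zeros((max_kp, 3), dtype=np.float32)
    for i, (x, y) in enumerate(points):
        out[i] = (x, y, 1.0)  # visibility 1 = labeled and visible
    return out

# A "bearing" symbol with 2 annotated keypoints, padded to the global maximum.
kp = pad_keypoints([(10.0, 20.0), (30.0, 40.0)])
print(kp.shape, kp[:, 2].tolist())
```

With a fixed keypoint dimension and visibility flags, one model head can be trained jointly over all symbol classes, and padded points simply contribute no loss.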
Although precast concrete construction offers more efficient construction processes, it still raises environmental concerns, mainly due to the carbon footprint associated with cement production. Life Cycle Assessment (LCA) is essential for systematically assessing the environmental impacts of products, processes, or activities throughout their life cycle. A significant challenge in LCA is the collection of accurate resource consumption data, which is essential for quantifying environmental impacts, developing sustainability strategies, and enabling reliable and transparent assessment. Digital Twins (DTs) support LCA by virtually mapping the entire life cycle of products, services, or processes, and can thus enable accurate calculation of carbon emissions in different life cycle phases. By integrating data on energy consumption, transportation, production, and other relevant factors, the carbon footprint of precast concrete products can be determined. This work aims to create a data basis for selected aspects of evaluating the resource consumption of concrete modules during production, transport, and assembly. For this purpose, concepts are developed to integrate static life cycle inventory data from Life Cycle Inventory (LCI) databases with data collected during production into a DT, creating a hybrid approach that merges real-time data with established LCI databases. The Asset Administration Shell (AAS) serves as the platform for the DT, and existing LCI databases are integrated via Linked Data (LD) and semantic mapping. A prototypical implementation is sketched, and a case study for a precast module is presented.
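The core arithmetic of the hybrid approach is simple: multiply quantities collected in real time by static emission factors and sum the result. The sketch below shows that calculation; the factor values and quantity keys are placeholders, not real LCI data, and the real-time quantities would in practice be read from the module's AAS submodels.

```python
# Static emission factors, normally drawn from an LCI database
# (values are illustrative placeholders, in kg CO2e per unit).
static_lci_factors = {
    "cement_kg": 0.9,
    "transport_tkm": 0.1,
    "electricity_kwh": 0.4,
}

# Quantities collected in real time during production, transport, and assembly
# (hypothetical values for one precast module).
realtime_quantities = {
    "cement_kg": 1200.0,
    "transport_tkm": 85.0,
    "electricity_kwh": 310.0,
}

# Footprint = sum over activities of (measured quantity x emission factor).
footprint = sum(
    static_lci_factors[key] * qty for key, qty in realtime_quantities.items()
)
print(round(footprint, 1))  # total kg CO2e for this module
```

The value of the DT lies upstream of this sum: it keeps the measured quantities accurate and traceable per life cycle phase, which is exactly the data-collection challenge the abstract identifies.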

Lab head

Markus König

Members (18)

Philipp Hagedorn
  • Ruhr University Bochum
Phillip Schönfelder
  • Ruhr University Bochum
Xuling Ye
  • Ruhr University Bochum
Sven Zentgraf
  • Ruhr University Bochum
Benedikt Faltin
  • Ruhr University Bochum
Simon Kosse
  • Ruhr University Bochum
Alessandro Bruttini
  • University of Florence
Patrick Herbers
  • Ruhr University Bochum
Jonas Maibaum
  • Not confirmed yet

Alumni (9)

Elham Mahmoudi
  • Ruhr University Bochum
Karlheinz Lehner
  • Ruhr University Bochum
Kristina Doycheva
  • Fraunhofer Institute for Transportation and Infrastructure Systems