T. Fischer, K. De Biswas, J.J. Ham, R. Naka, W.X. Huang, Beyond Codes and Pixels: Proceedings of
the 17th International Conference on Computer-Aided Architectural Design Research in Asia, 163–172.
©2012, Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong
CUSTOM DIGITAL WORKFLOWS
A new framework for design analysis integration
BIANCA TOTH
Queensland University of Technology, Australia
bianca.toth@qut.edu.au
STEFAN BOEYKENS
KU Leuven, Belgium
ANDRE CHASZAR
Delft University of Technology (TUD), Netherlands
PATRICK JANSSEN
National University of Singapore (NUS), Singapore
and
RUDI STOUFFS
TUD and NUS
Abstract. Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that are contradictory to this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for the development thereof as a flexible, visual, collaborative, scalable and open system.
Keywords. Visual dataflow modelling; design processes; interoperability; simulation integration; cloud-based systems.
1. Introduction
There is a clear and urgent need for better information exchange strategies
to address the persistent lack of interoperability and integration in building
design, analysis and construction. Recognising that most design teams use a
variety of software applications and platforms, the question that remains to be
answered is: How can we develop tools and technology that support designers
in creating their own design processes, rather than having to adapt their processes to suit the tools’ rigid requirements?
The key idea we present in this paper is that bottom-up, user-controlled and process-oriented approaches to linking design and analysis applications are more appropriate than current top-down, standards-based and model-oriented strategies, because they provide degrees of flexibility critical to the process(es) of design. This idea comes from discussions raised at the “Open Systems and Methods for Collaborative BEM (Building Environment Modelling)” workshop held at the CAAD Futures 2011 Conference in Liège, Belgium, in early July 2011. Here, we continue the ‘open systems’ dialogue with a conceptual framework for bringing this idea into practical application, aiming to reduce current obstructions to collaborative design. We propose an open framework for integration in which numerous small and specialised procedural tools are developed, adapted and linked ad hoc to meet the needs of individual design projects and project teams. These modular components encapsulate individual tasks that aid information exchange between domain-specific software by (semi-)automating the typically tedious and non-value-adding tasks associated with matching and mapping data across different schemas. A cloud-based platform enables project- and user-specific workflows to be created, shared and managed on distributed resources, via web interfaces that allow users to interact with workflows graphically. This, in combination with the elimination of file format and mapping language restrictions, ensures maximum flexibility.
Drawing on research into scientific workflows, we describe system requirements to guide future development of the proposed framework. We present a system that dispenses with an ontological premise for integration, and discuss the benefits and challenges that such a system presents for design practice and outcomes. Although no implementation of the framework has yet been created or tested, we are in the process of assembling a team of researchers and practitioners interested in pursuing our proposal. We are confident that the approach described in this framework will lend itself well to coping with the frequently changing pace and focus of design projects, as well as the varying priorities of their many stakeholders.
2. System architecture
Similar to the AEC industry, increasing complexity in scientific research and practice has led to a proliferation of specialised computational tools, each developed by different people, at different times, to support different problem-solving tasks. Across these tools, underlying data structures exhibit a high degree of heterogeneity, akin to that observed in building software. To manage this heterogeneity, and achieve the integration required to generate solutions, information must be matched and mapped across a succession of different schemas, applications and platforms (Bellahsene et al. 2011). Scientific workflows enable these information exchanges to take place quickly, reliably and flexibly, by “combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem” (Altıntaş 2011, pp. 9–10).
Scientic workow systems enable the composition and execution of these
complex task sequences on distributed computing resources (Deelman et al.
2009). These systems exhibit a common reference architecture, illustrated in
Figure 1, and typically consist of a graphical user interface (GUI) for authoring
workows (which can also be edited textually), along with a workow engine
that handles invocation of the applications required to run the solution (Curcin
and Ghanem 2008). The workow engine supports integration between appli-
cations by engaging a combination of data-ow and control-ow constructs
to handle the execution and management of tasks. Data-ow constructs estab-
lish information dependencies between tasks, and ensure that data-producing
procedures are complete before data-consuming ones begin (Deelman et al.
2009). Control-ow constructs support more complex workow interactions,
such as loops and conditionals, and also coordinate the execution of tasks on
remote resources (Deelman et al. 2009). Typically, control-ow constructs are
overlays on the data-ow graph, either as
separate nodes or layers.
Today, numerous workflow systems with different purposes and functionality exist. Some provide sophisticated interfaces and graphics, like the data visualisation application Vistrails (Callahan et al. 2006), while other more generic workflow systems, such as YAWL, are less visual but offer high-level process abstractions that can be applied to a range of usage scenarios (Curcin and Ghanem 2008). The LONI Pipeline is designed specifically to build up data processing streams for neuroimaging tools (Rex et al. 2003), while Kepler provides advanced control algorithms for actor-oriented modelling of complex physical, biological and chemical processes (Curcin and Ghanem 2008). Each system acts to accelerate and streamline the problem-solving process; however, individual capabilities vary greatly due to differences in workflow representation, data flow and control flow.
Figure 1. Workflow system architecture.
Implementation strategies for each of these three aspects of workflow are the product of the specific requirements and technologies needed for the individual field or purpose for which a system is developed. In the following subsections we discuss strategies for workflow representation, data flow and control flow in relation to the needs of the AEC industry, aiming to capitalise on recent advances in cloud computing.
2.1. WORKFLOW REPRESENTATION
Workow representation is critical for specifying tasks and dependencies.
Nearly all workow systems mentioned are visual programming tools in that
they allow processes to be described graphically using some form of ‘pipes-
and-lters’ logic. While not strictly workow systems, programs like Grass-
hopper and GenerativeComponents abstract underlying CAD systems to
offer similar functionality to designers for composing parametric-associative
models. Each ‘lter’ encapsulates some data processing task, represented by
a node, while a ‘pipe’ passes data (and in some instances control information)
between two lters, represented by a connecting wire. A workow is depicted
by a network of nodes and wires to be congured and recongured graphically
by users as required. From a user perspective, these nodes can act as a black
box to perform a given function without the need for extensive or expert pro-
gramming, although programming can empower the end user considerably.
Adopting this ‘pipes-and-filters’ architecture, our framework posits three node types: process, input/output (IO) and control. Process nodes encapsulate data analysis and transformation procedures, while the latter two node types provide functionality related to workflow initiation, execution and completion. Process nodes have a number of (typed) input and output ports for receiving and transmitting data, as well as variables that can be set by the user to guide task execution. They can be further classified into tool nodes and mapper nodes. Tool nodes wrap existing applications to make their data and functionality accessible to the workflow, while mapper nodes apply transformation procedures to data sets to map the output from one tool node to the input of another. Figure 2 shows an example network in which a Maya modelling node is connected via a series of mapper nodes (denoted by ‘M’) to EnergyPlus and Radiance simulation nodes. The Maya node encapsulates a procedure that starts Maya, loads a specified model, and then generates a model instance by applying the defined parameter values. The resulting geometric output undergoes two separate transformations that map it into both EnergyPlus- and Radiance-compatible formats. The simulation nodes then read in this transformed data, run their respective simulations, and generate output data in the form of simulation results.
Figure 2. Exemplar workflow.
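To make the node abstraction concrete, the following Python sketch mimics the wiring of Figure 2. The class names, port structure and placeholder procedures are purely illustrative assumptions, since no implementation of the framework exists yet.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class ProcessNode:
    """A node with input/output data ports and user-settable variables."""
    name: str
    procedure: Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, Any]]
    variables: Dict[str, Any] = field(default_factory=dict)

    def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Execute the encapsulated procedure with the wired-in inputs.
        return self.procedure(inputs, self.variables)


# Tool node: wraps an external application (reduced here to a placeholder).
maya = ProcessNode(
    name="Maya",
    procedure=lambda ins, settings: {
        "geometry": f"instance of {ins['model']} with {settings}"},
    variables={"floor_count": 12},
)

# Mapper node: transforms one tool's output into another tool's input format.
to_energyplus = ProcessNode(
    name="M: geometry -> IDF",
    procedure=lambda ins, settings: {"idf": f"IDF derived from {ins['geometry']}"},
)

# Wiring as in Figure 2: Maya -> mapper -> EnergyPlus-compatible input.
geometry = maya.run({"model": "tower.ma"})
print(to_energyplus.run(geometry)["idf"])
```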
IO nodes act as data sources and sinks for the workflow. Input nodes specify input files and control parameters, which both provide data to the tool nodes and govern what data is extracted from those input files. Taking the example in Figure 2, the Maya input node allows the user to specify not only the model to be used as the data source, but also the particular types of geometry contained within it, while the EnergyPlus input node might simply link the appropriate weather file. Output nodes contain the workflow results, here those of the EnergyPlus and Radiance simulations. They can be linked to a number of visualisation tools to display results, and users are able to define data ‘mashups’ in order to customise their visualisations without having to understand the coding of the underlying processes.
Control nodes apply constraints to the workflow, like conditionals and loops, which manipulate the local order of execution of nodes further along in the network. For example, an if–then node can force execution of different branches in a workflow, while a repeat node can force repeated execution of a network branch. Global control is also possible, but is defined at workflow level, rather than task level, as discussed in Section 2.3.
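A control node of this kind could, for illustration, be sketched as a higher-order wrapper around a network branch. The function names and the representation of a branch as a callable are assumptions made for the sake of the example.

```python
from typing import Any, Callable, Dict

# A branch of the network, reduced to a callable from data to data.
Branch = Callable[[Dict[str, Any]], Dict[str, Any]]


def if_then(condition: Callable[[Dict[str, Any]], bool],
            then_branch: Branch, else_branch: Branch) -> Branch:
    """Force execution of one of two downstream branches."""
    return lambda data: then_branch(data) if condition(data) else else_branch(data)


def repeat(branch: Branch, times: int) -> Branch:
    """Force repeated execution of a network branch, feeding results forward."""
    def run(data: Dict[str, Any]) -> Dict[str, Any]:
        for _ in range(times):
            data = branch(data)
        return data
    return run


# Example: re-run a (placeholder) refinement branch three times.
def refine(data: Dict[str, Any]) -> Dict[str, Any]:
    return {"value": data["value"] * 0.5}


print(repeat(refine, 3)({"value": 8.0}))   # {'value': 1.0}
```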
Users congure nodes and their dependencies using a workow inter-
face. Since we are describing a platform that operates in the cloud, this inter-
face would be a web application able to access distributed cloud services.
We propose a HTML5-based GUI that provides drag-and-drop functionality
for placing nodes on the workow canvas, which are then wired together by
the user, similar to dening a model in Grasshopper. We also propose inter-
face functionality to aid users in managing workow complexity. Graphi-
cal nesting allows clusters of nodes to be collapsed into composite nodes,
168 B. TOTH, S. BOEYKENS, A. CHASZAR, P. JANSSEN AND R. STOUFFS
facilitating modularisation of the workow to improve its legibility (Davis et
al. 2011). Provenance information retrieval and querying enables workow
history to be reviewed, so that the decision-making process can be tracked
(Deelman et al. 2009).
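One way such a web interface and the workflow engine might share a workflow description is as a simple serialised graph of nodes and wires. The field names below are illustrative assumptions rather than a defined schema.

```python
import json

# Nodes, wires and node variables as the GUI might persist them; field names
# (id, type, variables, from, to) are placeholders, not a defined schema.
workflow = {
    "nodes": [
        {"id": "maya",    "type": "tool",   "variables": {"floor_count": 12}},
        {"id": "map_idf", "type": "mapper", "variables": {}},
        {"id": "eplus",   "type": "tool",   "variables": {"weather": "example.epw"}},
    ],
    "wires": [
        {"from": "maya.geometry", "to": "map_idf.geometry"},
        {"from": "map_idf.idf",   "to": "eplus.model"},
    ],
}

# The same document can be versioned for provenance and re-opened in the GUI.
print(json.dumps(workflow, indent=2))
```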
2.2. DATA FLOW
Interoperability is a critical issue when linking applications from different domains. Scientific workflow systems deal with this in a number of ways, ranging from an ontological approach, where a common file format is imposed on data exchanges, to an open-world approach, where the user resolves data format issues as needed, a process known as ‘shimming’ (Altıntaş 2011). In the AEC industry, the prevailing solution to this issue is Building Information Modelling (BIM), which tends toward the all-encompassing ontological end of the spectrum. This is a top-down approach, reliant on the IFC data model and its continuous extension to cater for all possible usage scenarios. Pragmatic information exchange is assumed to evolve into process-oriented model views, where only filtered subsets of the model are exchanged (Eastman et al. 2011).
Rather than reading from and writing to a common representational structure, we propose that tools be coupled more effectively through procedures that allow direct data transfer, as advocated by building simulationists (Hensen et al. 2004), and that only the needed data be transferred, rather than entire models (Augenbroe et al. 2003). While this approach is vulnerable to version changes in wrapped tools, the sharing and reuse of interoperability solutions would be a mitigating factor. Furthermore, the proposed system does not disregard BIM, but suggests that IFC exchange be part of the workflow process, integrated into these custom data flows rather than forcing the whole system to adhere to its ontology. A good example of such an approach is found in the “GeometryGym” suite of tools, which enables parametric models generated in Grasshopper to be linked both to BIM workflows, through components that generate IFC objects, and to structural simulations (Mirtschin 2011).
Considering all data to be exchanged as files, we may allow any other data formats alongside IFC, such as CAD data in the form of DXF, IGES or comparable proprietary formats, XML-formatted data, other structured data or text, or even plain text. This list is deliberately open-ended; while the absence of any format restrictions may complicate the exchange of data amongst numerous tools, it avoids significant limitations of both an ontology-specific data representation (such as IFC) and a domain-independent general-purpose data format. Choosing the latter would undoubtedly result in situations where translations from highly customised data representations to the general-purpose format and back pose significant risks of information loss. The only necessary restriction that we envision is that data formats are identified and their assumptions described, such that any mapper node may reasonably rely on these assumptions when reading in data. This restriction does not limit flexibility concerning data formats, as a new format may always be identified and described.
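As an illustration of this single restriction, a format could be registered with an identifier and a short description of its assumptions, which mapper nodes consult before reading data. The descriptor fields and the registry itself are assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class FormatDescriptor:
    """Identification of a data format and the assumptions it carries."""
    identifier: str          # e.g. "dxf", "ifc-2x3", "idf-7.2"
    media_type: str          # how the file is encoded
    assumptions: str         # units, coordinate system, schema subset, ...


FORMAT_REGISTRY: Dict[str, FormatDescriptor] = {}


def register_format(descriptor: FormatDescriptor) -> None:
    FORMAT_REGISTRY[descriptor.identifier] = descriptor


register_format(FormatDescriptor(
    identifier="idf-7.2",
    media_type="text/plain",
    assumptions="metres, surfaces as planar polygons, one zone per space",
))

# A mapper node can check that incoming data is in a format whose assumptions
# it knows how to handle before attempting a transformation.
assert "idf-7.2" in FORMAT_REGISTRY
```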
The selection of files as the medium for data exchange is prompted as much by the elimination of any data format restrictions as by the choice of a cloud-based platform. However, the use of distributed data files over a centralised data model may introduce data redundancy and inconsistency, e.g. when different workflow branches drawing on the same input converge, combining data from different but overlapping models. Such redundancy, admittedly, is inherent to the bottom-up approach to integration that we are advocating. Eliminating this redundancy, however, would not only greatly reduce the freedom and flexibility of designers to create their own workflow processes from any selection of tools, it would also seriously hamper the ability to define and explore unconventional design spaces.
2.3. CONTROL FLOW
As discussed in Section 2.1, control nodes provide localised ways of manipulating the workflow. To provide the desired level of flexibility, the framework also needs to offer different types of global control flow. Many existing workflow systems are restricted to simple flow mechanisms that generate topological orderings, where each node will execute only after all its predecessor nodes have executed. A key limitation of this approach is that the network must be a directed acyclic graph (DAG), so networks with loops are not supported. Networks with loops, however, are clearly desirable in certain situations, such as optimisation procedures. To support loops and other node execution patterns, such as triggering nodes iteratively or periodically, different high-level control mechanisms are required. In addition to providing workflow execution functionality, these mechanisms are needed to support distributed computing, by triggering nodes to work in parallel with other nodes, as well as executing synchronously or asynchronously. To ensure maximum flexibility, the user should be able to apply different control flow mechanisms to different parts of the network. This could be achieved by assigning a control mechanism to a composite node, which would further open up the possibility of nesting control flow.
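For illustration, the simplest global control mechanism, topological ordering over a DAG, might look as follows. The node names repeat the Figure 2 example, and this scheduler would have to be replaced by a different mechanism wherever loops or iterative execution are assigned to a composite node.

```python
# A sketch of topological ordering: each node runs only after all of its
# predecessor nodes have executed. Node names and structure are assumptions.
from graphlib import TopologicalSorter

# node -> set of predecessor nodes whose outputs it consumes
dependencies = {
    "maya": set(),
    "map_idf": {"maya"},
    "map_rad": {"maya"},
    "energyplus": {"map_idf"},
    "radiance": {"map_rad"},
}

for node in TopologicalSorter(dependencies).static_order():
    # Nodes with no mutual dependency (e.g. the two mappers) could also be
    # dispatched in parallel on distributed resources.
    print("execute", node)
```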
3. System implementation
Implementing our framework in the cloud ensures its scalability, efficiency and reliability, as node execution can be distributed over multiple computers in the network. Process nodes are therefore defined so that both their data and procedure can reside in the cloud. Input and output data is saved in files, and a (distributed) repository is used to manage these files. The node procedure is saved as an executable task (which may be written in any language) that reads and writes files, and a task scheduler is used to manage the execution of these tasks in the cloud.
Files resulting from one process node may be stored local to the execution of the node’s task, awaiting retrieval by other process nodes. Storing files ensures a trace of all workflow output is maintained for later perusal. Files may also be copied to different locations and their copies managed within the repository. Similar to version-control systems, local file copies in combination with a local copy of the repository guarantee access to all outputs even when the user is disconnected from the cloud. File management within a repository also eliminates the need for exchanging files directly between process tasks. When a task creates output files, it registers their location (URL) together with some minimal metadata (origin, time of creation, format, etc.), for which the Dublin Core (http://dublincore.org), extended where necessary, may serve as a template. In return, it receives tokens corresponding to the various files (typically unique IDs assigned by the repository), which are passed to the task scheduler to be forwarded on to the next process node’s task. The receiving task then queries the repository for the metadata and the file(s).
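The registration protocol might, for example, be sketched as follows. The repository interface and metadata keys are assumptions, loosely modelled on Dublin Core element names.

```python
# A sketch of the file-registration protocol: a task registers an output file
# with minimal metadata and receives a token (unique ID) that the scheduler
# forwards to the next task, which then queries the repository.
import uuid
from datetime import datetime, timezone
from typing import Dict


class Repository:
    def __init__(self) -> None:
        self._entries: Dict[str, Dict[str, str]] = {}

    def register(self, url: str, metadata: Dict[str, str]) -> str:
        token = str(uuid.uuid4())                     # unique ID for the file
        self._entries[token] = {"url": url, **metadata}
        return token

    def lookup(self, token: str) -> Dict[str, str]:
        return self._entries[token]


repo = Repository()
token = repo.register(
    "https://example.org/runs/42/model.idf",          # hypothetical location
    {
        "creator": "maya-node",
        "date": datetime.now(timezone.utc).isoformat(),
        "format": "idf-7.2",
    },
)
# The scheduler passes `token` to the next process node's task, which retrieves
# the metadata and file location before reading the data.
print(repo.lookup(token)["url"])
```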
4. Discussion
The question of interoperability has been vexing the AEC industry for decades. The usual response is to impose standards for data formatting and to construct monolithic design-analysis systems that internalise, and thus opaquely subsume, representation problems. This inflicts severe limitations on the ways in which information, and therefore designs themselves, can be described. To overcome these limitations, we are proposing that the bugaboo of BIM research and development, namely communication that occurs directly between domain-specific applications rather than via a common standard, is the preferred and even necessary arrangement. Although more work may be needed to create the multitude of possible data converters required to support such communication, this is better than the over-constraint of the design process caused by the use of standardised representations and processes that are only applicable within a relatively small region of ‘design space’.
Research and development of the proposed system is an ongoing effort from the Open Systems group; however, success will ultimately depend on a community of users and developers, able and willing to create, share and maintain process nodes to support various design and analysis activities. This could potentially result in a vast collection of process nodes and an endless range of options. To aid the designer in choosing appropriately, it is important not only that a description of the functionality of each node is available, but also that the designer is able to ensure that nodes ‘fit’ other nodes in order to compose a valid workflow. Assertions must therefore be specified on node outputs, and assumptions specified for expected inputs, so that automatic checking of node compatibility is possible.
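Such compatibility checking could, for instance, take the following form, in which each output port asserts the format it produces and each input port states the format it assumes. The structures and the simple equality test are illustrative assumptions; a real check would need richer descriptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OutputPort:
    node: str
    asserts_format: str      # assertion about what this port produces


@dataclass(frozen=True)
class InputPort:
    node: str
    assumes_format: str      # assumption about what this port accepts


def compatible(source: OutputPort, target: InputPort) -> bool:
    """A wire is valid only if the asserted and assumed formats match."""
    return source.asserts_format == target.assumes_format


maya_geometry = OutputPort(node="maya", asserts_format="obj-geometry")
idf_mapper_in = InputPort(node="map_idf", assumes_format="obj-geometry")

assert compatible(maya_geometry, idf_mapper_in)
```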
Workow design is likely to be an incremental process in which a number
of nodes are combined into a partial workow, tested by the designer, and then
further developed and extended. Besides automatic checking of the mutual
tness of adjacent nodes, the designer will need to check whether the (partial)
workow is behaving as expected and producing appropriate results. When
the results are not as expected, the designer will need to debug the workow
by tracing back execution, which can be assisted by displaying intermediate,
as well nal, results. In the context of a vast collection of process nodes and
choice of alternative ways to achieve the same or similar result, user-friendli-
ness and knowledge-based support, the two main concerns of designers when
using analysis software (Attia et al. 2009), will become crucial. Issues of accu-
racy, uncertainty and risk may also be of signicant concern. Macdonald et al.
(1999) propose the introduction of uncertainty considerations in simulations
to provide meaningful feedback to the user and to improve condence through
risk assessment. While these should be addressed within individual software
tools, the proposed system should also introduce this functionality into the
workow environment itself.
5. Conclusion
This paper represents an ongoing effort to address limitations in process and technology that presently obstruct design collaboration. In it we argued the need for a user-controlled and process-oriented approach to integration and interoperability, and discussed how a cloud-based workflow system can support more flexible and distributed design processes. We examined the features and functionality needed to abstract computing and data resources to make tools and technologies more accessible to users, both as individuals and as members of design teams. As well as benefiting design practice, we envisage the proposed system as a platform for researchers to share their work and increase the impact of their individual efforts through integration with other research. The system requirements that we have established will ensure that the proposed integration platform is developed to be flexible, visual, collaborative, scalable and open.
Acknowledgements
The authors would like to acknowledge and thank the participants of the “Open Systems and
Methods for Collaborative BEM (Building Environment Modelling)” workshop held at the
CAAD Futures 2011 Conference in Liège, Belgium, 4 July 2011, and of the LinkedIn Group
sharing the same name, for their contributions to the discussions leading to the ideas presented
and described in this paper. We invite interested parties to contribute to the development of
these ideas and to join the discussions in the LinkedIn Group.
References
Altıntaş, İ.: 2011, Collaborative Provenance for Workflow-driven Science and Engineering, PhD Thesis, University of Amsterdam, Amsterdam.
Attia, S., Beltrán, L., De Herde, A. and Hensen, J.: 2009, Architect friendly. A comparison of
ten different building performance simulation tools, 11th IBPSA Conference, Glasgow,
204–211.
Augenbroe, G., deWilde, P., Moon, H., Malkawi, A., Brahme, R. and Choudhary, R.: 2003, The
Design Analysis Integration (DAI) initiative, 8th IBPSA Conference, Eindhoven, 79–86.
Bellahsene, Z., Bonifati, A., Duchateau, F. and Velegrakis, Y.: 2011, On evaluating schema
matching and mapping, in Bellahsene, Z., Bonifati, A. and Rahm, E. (eds.), Schema
Matching and Mapping, Springer, Berlin and Heidelberg, 253–291.
Callahan, S., Freire, J., Santos, E., Scheidegger, C., Silva, C. and Vo, H.: 2006, Vistrails:
visualization meets data management, SIGMOD 2006, Chicago, 745–747.
Curcin, V. and Ghanem, M.: 2008, Scientific workflow systems - can one size fit all? CIBEC 2008, Cairo, 1–9.
Davis, D., Burry, J. and Burry, M.: 2011, Untangling parametric schemata: enhancing collaboration through modular programming, in Leclercq, P. et al. (eds.), Proc. 14th CAAD Futures Conference, Liège, 55–68.
Deelman, E., Gannon, D., Shields, M. and Taylor, I.: 2009, Workflows and e-science: an overview of workflow system features and capabilities, Future Generation Computer Systems, 25, 528–540.
Eastman, C., Teicholz, P., Sacks, R. and Liston, K.: 2011, BIM Handbook - A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors, 2nd ed., John Wiley and Sons, Hoboken.
Hensen, J., Djunaedy, E., Radošević, M. and Yahiaoui, A.: 2004, Building performance
simulation for better design: some issues and solutions, PLEA 2004, vol. 2, Eindhoven,
1185–1190.
MacDonald, I., Clarke, J. and Strachan, P.: 1999, Assessing uncertainty in building simulation,
6th IBPSA Conference, vol. II, Kyoto, 683–690.
Mirtschin, J.: 2011, Engaging generative BIM workflows, Collaborative Design of Lightweight Structures, LSAA 2011, Sydney, 1–8.
Rex, D., Ma, J. and Toga, A.: 2003, The LONI pipeline processing environment, Neuroimage,
19(3), 1033–1048.