Custom Digital Workflows: A New Framework for Design Analysis Integration
Bianca Toth, Patrick Janssen, Rudi Stouffs, Andre Chaszar and Stefan Boeykens
Abstract
Flexible information exchange is critical to successful design-analysis
integration, but current top-down, standards-based and model-oriented
strategies impose restrictions that contradict this flexibility. In this
article we present a bottom-up, user-controlled and process-oriented
approach to linking design and analysis applications that is more
responsive to the varied needs of designers and design teams. Drawing
on research into scientific workflows, we present a framework for
integration that capitalises on advances in cloud computing to connect
discrete tools via flexible and distributed process networks. We then
discuss how a shared mapping process that is flexible and user friendly
supports non-programmers in creating these custom connections.
Adopting a services-oriented system architecture, we propose a web-
based platform that enables data, semantics and models to be shared
on the fly. We then discuss potential challenges and opportunities for its
development as a flexible, visual, collaborative, scalable and open
system.
1. INTRODUCTION
There is a clear and urgent need for better information exchange strategies
to address the persistent lack of interoperability and integration in building
design, analysis and construction. The continuing and active discourse
amongst AEC practitioners and researchers alike highlights ongoing
limitations in both process and technology that commonly challenge design
collaboration. Recognising as a basic premise that most design teams will,
whether by choice or necessity, use a variety of software applications and
platforms, the question that remains to be answered is: How can we
develop tools and technology that support designers in creating their own
design processes, rather than having to adapt their processes to suit rigid
tool requirements?
The key idea we present in this article is that bottom-up, user-controlled
and process-oriented approaches to linking design and analysis applications
are more appropriate than current top-down, standards-based and model-
oriented strategies, because they provide degrees of flexibility critical to the
process(es) of design. This idea originally comes from discussions raised at
the “Open Systems and Methods for Collaborative BEM (Building
Environment Modelling)” workshop held at the CAAD Futures 2011
Conference in Liège, Belgium. Here, we continue the ‘open systems’ dialogue
with a conceptual framework for bringing this idea into practical application,
aiming to reduce current obstructions to collaborative design.
We propose an open framework for integration where numerous small
and specialised procedural tools are developed, adapted and linked ad-hoc,
to meet the needs of individual design projects and project teams. These
modular components encapsulate individual tasks that aid information
exchange between software tools by (semi-)automating typically tedious and
non value-adding tasks associated with preparing data for transformation
between different data schemas. Flexible and user-friendly mapping
procedures enable custom connections to be easily created and shared
between schema descriptions adhering to the same underlying data model.
A cloud-based platform allows various design and analysis tools to be strung
together in this manner, through web interfaces that let users interact with
workflows graphically. Project- and user-specific workflows can be created,
shared and managed on distributed resources, which, in combination with
the elimination of file format and programming language restrictions,
promotes maximum flexibility.
Drawing on research into scientific workflows, we describe system
requirements to guide future development of the proposed framework. In
contrast with the predominating push for ever more monolithic and
standardised tools, models and workflows, we dispense with the premise of
a multi-domain schema for integration to present instead a system that is
flexible, distributed and modular. Discarding assumptions of homogeneity in
the format of design and analysis data, we then discuss the benefits and
challenges that such a system presents for design practice and outcomes.
An initial demonstration of one possible approach to the shared
mapping procedure has already been undertaken [1], and we have assembled
a collective of researchers and practitioners interested in pursuing our
proposal. We are confident that the approach described in this framework
will lend itself well to coping with the frequently changing pace and focus of
design projects, as well as the varying priorities of their many stakeholders.
2. SYSTEM ARCHITECTURE
As in the AEC sector, increasing complexity in scientific research and
practice has led to a proliferation of specialised computational tools, each
developed by different people, at different times, to support different
problem-solving tasks. Across these tools, underlying data structures exhibit
a high degree of heterogeneity, akin to that observed in building software. To
manage this heterogeneity and achieve the integration required to generate
solutions, information must be matched and mapped across a succession of
different data schemas, applications and platforms [2]. Scientific workflows
enable these information exchanges to take place quickly, reliably and
flexibly, by “combining data and processes into a configurable, structured set
of steps that implement semi-automated computational solutions of a
scientific problem” [3].
Scientific workflow systems enable the composition and execution of
these complex task sequences on distributed computing resources [4].
These systems exhibit a common reference architecture, illustrated in
Figure 1, and typically consist of a graphical user interface (GUI) for
authoring workflows (which can also be edited textually), along with a
workflow engine that handles invocation of the applications required to run
the solution [5]. The GUI abstracts the usage requirements of underlying
resources to critical process parameters, making computational tools and
technologies more accessible to users. The workflow engine supports
integration between applications by engaging a combination of data-flow and
control-flow constructs to handle the execution and management of tasks.
Data-flow constructs establish information dependencies between tasks, and
ensure that data-producing procedures are complete before data-consuming
ones begin [4]. Control-flow constructs support more complex workflow
interactions, such as loops and conditionals, and also coordinate the
execution of tasks on remote resources [5]. Typically, control-flow
constructs are overlays on the data-flow graph, either as separate nodes or
layers.
Figure 1: Workflow system architecture.
Today, numerous workflow systems with different purposes and functionality
exist. Some provide sophisticated interfaces and graphics, like the data
visualisation application Vistrails [6], while other more generic workflow
systems, such as YAWL, are less visual but offer high-level process
abstractions that can be applied to a range of usage scenarios [5]. The LONI
Pipeline is designed specifically to build up data processing streams for
neuro-imaging tools [7], while Kepler provides advanced control algorithms
for actor-oriented modelling of complex physical, biological and chemical
processes [5]. Each system acts to accelerate and streamline the workflow
process; however, their individual capabilities vary greatly due to differences
in workflow representation, data flow and control flow. The implementation
strategies employed for each of these three workflow aspects reflect the
specific requirements and technologies of the individual field or purpose for
which a system is developed.
Within the AEC industry, systems developed for the purpose of
Multidisciplinary Design Optimisation (MDO) offer comparable workflow
functionality to those mentioned above. ModelCenter [8], by Phoenix
Integration, is a commercial software package that integrates and manages
the flow of data between modelling and simulation applications, to generate
trade-off evaluations of design and performance constraints for decision
support. Purpose-built software wrappers expose the data and functionality
of design and analysis applications within the ModelCenter environment,
enabling these tools to be linked together into workflows. DesignLink [9], a
platform developed by Arup and freely available under their collaborative
licensing agreement, employs a different mechanism to achieve similar
workflow functionality, supporting interoperability through the combination
of an extensible XML-based file format and software-specific plug-ins that
support the import and export of this format. While these MDO
environments facilitate integration of diverse design and analysis tools, their
top-down approach to interoperability is subject to certain limitations. Use
of a common data schema to structure information exchange, regardless of
whether it is proprietary or an open standard, imposes restrictions on how
designs can be described and thus explored [10]; restrictions that are only
lessened, not overcome, when the schema is extensible. Further complicating
matters is the lack of support for non-programmers in creating custom
software connections, given that software development expertise is relatively
uncommon amongst architects and engineers [11]. In response to these
observed limitations, we pursue a more flexible and user-friendly approach to
information exchange that reconceptualises the workflow process to
support the full scope of design experience, aiming to improve design-
analysis integration across a range of applications broader than simply MDO –
including open-ended design explorations, not only optimisations. Our work
differs from other related research in its bottom-up, rather than top-down,
approach to integration, which eliminates the common data schema as a
prerequisite for information exchange, allowing designers freedom to create
custom digital workflows unfettered by standardisation constraints.
In the following subsections we discuss strategies for workflow
representation, data flow and control flow in relation to the needs of the
AEC industry, to outline requirements for a system that will support our
bottom-up approach to information exchange. We aim to identify the
functionality needed to develop a flexible, open and intuitive platform for
collaborative design-analysis integration, building on capabilities exhibited in
scientific workflow systems and further capitalising on recent advances in
cloud computing.
2.1.Workflow representation
Workflow representation is critical for specifying tasks and dependencies.
Nearly all workflow systems mentioned are visual programming tools in
that they allow processes to be described graphically using some form of
‘pipes-and-filters’ logic. While not strictly workflow systems, programs like
Grasshopper, GenerativeComponents and Houdini abstract underlying CAD
systems to offer similar functionality to designers for composing
parametric-associative models, albeit performed within a single application.
Each ‘filter’ encapsulates some data processing task, and is represented by a
node, while a ‘pipe’ passes data (and in some instances control information)
between two filters, and is represented by a connecting wire. A workflow is
depicted by a network of nodes and wires to be configured and
reconfigured graphically by users as required. From a user perspective, these
nodes can act as a black box to perform a given function without the need
for extensive or expert programming, although programming can empower
the end user considerably, if the contents of the box are exposed and
editable.
Adopting this ‘pipes-and-filters’ architecture, our framework posits three
node types: process, input/output (IO) and control. Process nodes
encapsulate data analysis and transformation procedures; while the latter
two node types provide functionality related to workflow initiation,
execution and completion. An overview of these different node types, and
their respective sub-nodes, is seen in Table 1.
Table 1: Overview of workflow nodes.
Process nodes have a number of (typed) input and output ports for
receiving and transmitting data, as well as variables that can be set by the
user to guide task execution. They can be further classified into ‘tool’ nodes
and ‘mapper’ nodes. Tool nodes wrap existing applications to make their
data and functionality accessible to the workflow, while mapper nodes apply
transformation procedures to data sets to map the output from one tool
node to the input of another. Ideally, these mappings are easily defined and
able to be custom created (or edited) by users with limited programming
skills (discussed in more detail in Section 3). However, the need for more
complex data transformations is inevitable, and where programming is
unavoidable, the resulting nodes are differentiated as scripted mappers.
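To make this node structure more concrete, the sketch below outlines how process nodes with typed ports and user-set variables might be modelled; all class, function and port names are illustrative placeholders of our own, not part of any existing implementation.

    # Minimal sketch of the proposed process-node structure; all names here are
    # illustrative placeholders, not part of any existing implementation.
    from typing import Any, Callable, Dict

    class ProcessNode:
        """A workflow task with typed input/output ports and user-set variables."""
        def __init__(self, name: str, inputs=None, outputs=None, variables=None):
            self.name = name
            self.inputs = inputs or {}        # port name -> declared data schema/type
            self.outputs = outputs or {}      # port name -> declared data schema/type
            self.variables = variables or {}  # parameters set by the user to guide execution

        def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
            raise NotImplementedError

    class ToolNode(ProcessNode):
        """Wraps an existing application, exposing its data and functionality."""
        def __init__(self, name, command: Callable[..., Dict[str, Any]], **kw):
            super().__init__(name, **kw)
            self.command = command  # callable that invokes the wrapped tool

        def run(self, data):
            return self.command(**data, **self.variables)

    class MapperNode(ProcessNode):
        """Transforms one tool's output data set into another tool's input schema."""
        def __init__(self, name, transform: Callable[[Dict[str, Any]], Dict[str, Any]], **kw):
            super().__init__(name, **kw)
            self.transform = transform

        def run(self, data):
            return self.transform(data)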
Figure 2 shows an example network in which a Maya modelling node is
connected via a series of mapper nodes (denoted by ‘M’) to Radiance and
EnergyPlus simulation nodes. The Maya node encapsulates a procedure that
starts Maya, loads a specified model, and then generates a model instance by
applying the defined parameter values. The resulting geometric output
undergoes two separate transformations that map it into both Radiance-
and EnergyPlus-compatible formats, the latter via a scripted mapper that
pre-processes geometry to ensure its compatibility with complex
EnergyPlus requirements. The simulation nodes then read in this
transformed data, run their respective simulations, and generate output data
in the form of simulation results.
Figure 2: Example of a workflow with a parametric CAD system linked to Radiance and EnergyPlus via mapper nodes (M) – end users contribute white components, node developers build the grey components, and black components are existing tools that are wrapped.
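By way of illustration only, the Figure 2 network might be assembled along the following lines, reusing the node sketch above; the wrapper and mapper functions are hypothetical stand-ins rather than real Maya, Radiance or EnergyPlus interfaces.

    # Hypothetical wiring of the Figure 2 network, reusing the node sketch above.
    # run_maya, to_radiance and to_energyplus are placeholder callables standing
    # in for a Maya tool wrapper and the two mapping procedures.
    def run_maya(model_path, **params):
        # A real wrapper would start Maya, load the model and generate an
        # instance by applying the parameter values; here we simply echo them.
        return {"geometry": {"source": model_path, "params": params}}

    def to_radiance(data):
        # Simple mapper: reshape geometry into a Radiance-oriented structure.
        return {"scene": data["geometry"]}

    def to_energyplus(data):
        # Scripted mapper: a real one would pre-process geometry to satisfy
        # EnergyPlus zoning and surface requirements.
        return {"zones": data["geometry"]}

    maya = ToolNode("Maya", run_maya, variables={"width": 12.0, "height": 3.5})
    m_rad = MapperNode("M: Maya to Radiance", to_radiance)
    m_ep = MapperNode("M: Maya to EnergyPlus", to_energyplus)

    # Wires record data dependencies; a workflow engine would use them to
    # schedule execution (see Section 2.3). Here we run the chain by hand.
    wires = [(maya, m_rad), (maya, m_ep)]
    geometry = maya.run({"model_path": "tower.ma"})
    radiance_input = m_rad.run(geometry)
    energyplus_input = m_ep.run(geometry)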
IO nodes act as data sources and sinks for the workflow, as well as
providing data visualisation capabilities. Input nodes provide data by
specifying input files, which feed the tool nodes, and control parameters,
which govern what data is extracted from those files. It is
important to note that input data is referenced into the workflow, not
imported, to ensure that the workflow remains dynamic and can be re-
evaluated when changes to this data occur; these changes are, however,
controlled to avoid conflicts arising while the workflow is running. Taking
the example in Figure 2, the Maya input node allows the user to specify not
only the origin of the model to be used for the data source, but also
particular types of geometry and other data contained within it, while the
EnergyPlus input node simply links the appropriate weather file. The output
node contains the workflow results, here of the EnergyPlus and Radiance
simulations. These nodes enable results to be visualised in various ways, with
users able to define data ‘mashups’ in order to customise their visualisations
without having to understand the coding of the underlying processes.
Control nodes apply constraints to the workflow, like conditionals and
loops, which manipulate the local order of execution of nodes further along
in the network. For example, an if-then node can force execution of
different branches in a workflow in response to a given condition, while a
repeat node can force repeated execution of a network branch a particular
number of times. Global control is also possible, but is defined at workflow
level, rather than task level, as discussed in Section 2.3.
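A minimal sketch of such local control constructs is given below; the node names and the representation of a branch as a list of callable steps are assumptions made purely for illustration.

    # Illustrative sketch of local control constructs; the node names and the
    # representation of a branch as a list of callable steps are assumptions.
    from typing import Any, Callable, Dict, List

    Step = Callable[[Dict[str, Any]], Dict[str, Any]]  # e.g. the run() method of a node

    class IfThenNode:
        """Forces execution of one of two branches depending on a condition."""
        def __init__(self, condition: Callable[[Dict[str, Any]], bool],
                     then_branch: List[Step], else_branch: List[Step]):
            self.condition = condition
            self.then_branch = then_branch
            self.else_branch = else_branch

        def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
            branch = self.then_branch if self.condition(data) else self.else_branch
            for step in branch:
                data = step(data)
            return data

    class RepeatNode:
        """Forces repeated execution of a branch a given number of times."""
        def __init__(self, branch: List[Step], times: int):
            self.branch = branch
            self.times = times

        def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
            for _ in range(self.times):
                for step in self.branch:
                    data = step(data)
            return data

    doubler = RepeatNode([lambda d: {"x": d["x"] * 2}], times=3)
    print(doubler.run({"x": 1}))   # {'x': 8}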
Users configure nodes and their dependencies using a workflow
interface. Since we are describing a platform that operates in the cloud, this
interface must be a web application able to access distributed cloud
services. We propose a browser-based GUI that employs HTML5 to ensure
its usability in standalone applications, or even smartphone apps. This GUI
provides drag-and-drop functionality for placing nodes on the workflow
canvas, which are then wired together by the user, similar to defining a
model in Grasshopper, or in the various scientific workflow platforms
mentioned above.
We also propose several interface features to aid users in managing
workflow complexity. Graphical nesting allows clusters of nodes to be
collapsed into composite nodes, facilitating modularisation of the workflow
to improve its legibility [12], as well as enhancing reusability through the
sharing of these clusters amongst workflows. Provenance information
retrieval and querying enables workflow history to be reviewed, so that the
decision-making process can be tracked [4]. Further functionality includes
the ability to download the source code for any non-proprietary scripted
node, modify it and then republish it. This is crucial for ensuring the
platform’s flexibility and openness; however, it does require a clear
understanding of how data is passed through the system, discussed in more
detail below.
2.2. Data flow
Interoperability is a critical issue when linking applications from different
domains. Scientific workflow systems deal with this in a number of ways,
ranging from an ontological approach, where a common data schema is
imposed on data exchanges, to an open-world approach, where the user
resolves data compatibility issues as needed - a process known as ‘shimming’
[3]. In the AEC industry, a common approach to this issue is Building
Information Modelling (BIM), which tends toward the all-encompassing
ontological end of the spectrum. This is a top-down approach, reliant on the
IFC schema and its continuous extension to cater for all possible usage
scenarios. There, pragmatic information exchange is assumed to evolve into
discipline-oriented model views, where only filtered subsets of the model
are exchanged [13]. However, IFC – like any standardised ontology – has
significant epistemological, practical and technical limitations [10, 14, 15].
With a focus on data specification, rather than on implementation of this
specification, the data content involved in IFC exchange is often ambiguous
and implemented inconsistently between software [15]. There is also a lack
of agreement concerning IFC model view definitions, which often results in
file exchanges involving large, cumbersome models unable to resolve
differences in data structure and content reliably between applications [14].
Furthermore, rather than actively supporting variability in design processes,
usage scenarios are limited to those that are accommodated by the current
IFC implementation (or risky ‘hacks’ thereof).
Instead of reading and writing to a common data schema, we therefore
propose tools be coupled more effectively through procedures that allow
direct data transfer between schemas. As advocated by proponents of
design-analysis integration, this transfer only needs to involve the required
data rather than the entire building description [16]. The advantage of this
run-time coupling of different applications, Hensen et al. argue, is that it
“supports the exchange of information during a simulation as opposed to
... before the simulation” [17]. While this approach is vulnerable to version
changes in wrapped tools, the sharing and reuse of interoperability
solutions is a mitigating factor. Furthermore, the proposed system does not
disregard BIM, but suggests IFC exchange be part of the workflow process,
integrated into these custom data flows rather than forcing the whole
system to adhere to IFC ontology. A good example of such an approach is
found in “GeometryGym”, a suite of tools that enable parametric
models generated in Grasshopper to be linked both to BIM workflows,
through components that generate IFC objects, and to structural
simulations [18].
The absence of any schema restrictions circumvents the limitations of
a multi-domain schema like IFC, where exchanges between tools and the
underlying data schema pose risks of significant information loss and
inappropriate process constraint. The only necessary constraint we
envision is that domain-specific input and output data schemas are
described and their assumptions identified, such that users may reasonably
rely on these descriptions when creating and editing workflows. This
constraint does not limit flexibility, as a new schema may always be
defined.
Data can be stored in a digital repository, or a key-value database, a
selection that is prompted as much by the elimination of any data schema
restrictions as by the choice of a cloud-based platform. The advantages of
these are that they do not impose a fixed schema for data description, are
highly scalable, and information access can be very fast. The key for a piece
of data is a unique ID, and its value can be a file, a class, a graph or any
other data type. It should be noted that the use of discrete packages of data
over a centralised process model may introduce some redundancy and
inconsistency; for example, when different workflow branches drawing on
the same input converge, combining data from different but overlapping
models. Such redundancy, admittedly, is inherent to the bottom-up approach
to integration that we are advocating. Eliminating this redundancy, however,
would not only greatly reduce the freedom and flexibility of designers to
create their own workflow processes from any selection of tools, it would
also seriously hamper the ability to define and explore unconventional
design spaces.
2.3. Control flow
As discussed in Section 2.1, control nodes provide localised ways of
manipulating the workflow. To provide the desired level of flexibility, the
framework also needs to offer different types of global, or high-level, control
flow. Many existing workflow systems are restricted to simple flow
mechanisms, where nodes are processed one at a time, in a pre-calculated
order. Such systems generate topological orderings, where each node in the
sequence will execute only after all its predecessor nodes have executed. A
key limitation of this approach though is that the network must be a
directed acyclic graph (DAG) to generate topological orderings, and as a
result networks with loops are not supported.
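For instance, such a pre-calculated ordering can be obtained with a standard topological sort (Kahn's algorithm); the sketch below is generic and not tied to any particular workflow system.

    # Sketch of a pre-calculated execution order for a directed acyclic workflow
    # graph, using Kahn's algorithm; a network containing a loop is rejected.
    from collections import deque

    def topological_order(nodes, wires):
        """nodes: list of node names; wires: (producer, consumer) pairs."""
        indegree = {n: 0 for n in nodes}
        successors = {n: [] for n in nodes}
        for producer, consumer in wires:
            successors[producer].append(consumer)
            indegree[consumer] += 1
        ready = deque(n for n in nodes if indegree[n] == 0)
        order = []
        while ready:
            node = ready.popleft()
            order.append(node)
            for succ in successors[node]:
                indegree[succ] -= 1
                if indegree[succ] == 0:
                    ready.append(succ)
        if len(order) != len(nodes):
            raise ValueError("network contains a loop: no simple data-flow ordering exists")
        return order

    print(topological_order(
        ["Maya", "M1", "M2", "Radiance", "EnergyPlus"],
        [("Maya", "M1"), ("Maya", "M2"), ("M1", "Radiance"), ("M2", "EnergyPlus")]))
    # ['Maya', 'M1', 'M2', 'Radiance', 'EnergyPlus']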
Networks with loops, however, are clearly desirable in certain situations,
such as optimisation routines. To support loops and other node execution
patterns, such as triggering nodes iteratively or periodically, different high-
level control mechanisms are required. In addition to providing workflow
execution functionality, these mechanisms are needed to support distributed
computing, by triggering nodes to work in parallel with other nodes, as well
as executing synchronously or asynchronously.
To ensure maximum flexibility, the user should be able to apply different
control flow mechanisms to different parts of the network. This could be
achieved by assigning a control mechanism to a composite node, which
would further open the possibility of nesting control flow.
3. CUSTOM DATA MAPPING
In order to create custom workflows, information must be matched and
mapped across a series of different applications and schemas. This may
involve deriving new data as well as disregarding data that is superfluous in
the target format. An explicit model describing these transformations is
therefore needed to relate constructs in the source representation to
constructs in the target representation [2, 19].
In a workflow system that dispenses with file format and programming
language restrictions, like ours, these transformations are a critical issue for
usability. To support a bottom-up, user-controlled and process-oriented
approach to linking design and analysis tools, a common mapping approach
is needed that allows users with limited programming skills to easily create
and share custom mappings, so that data compatibility issues can be
resolved collaboratively on the fly. For this to be achieved, the mapping
approach must be both flexible, so that users can apply the same mapping
approach to any type of data, and user-friendly, to the extent that it supports
non-programmers in the process of creating and debugging mappers. This
approach must therefore be semantically agnostic, employing data models
that support the organisation of information at an abstract level, in terms of
the data structure, data constraints and data operations. By making the
output schema of one node and input schema of another adhere to the
same data model, a mapping between two tools may then be more easily
defined.
3.1.A flexible mapping approach
The mapping process involves two distinct levels of data representation: the
data model versus the data schema. A data model allows data to be
organised using generic constructs that are domain independent and able to
describe a broad range of information types. It is a means of defining a data
structure that is very flexible but relies on human interpretation of semantic
meaning. A data schema describes information using semantic constructs
related to a particular domain. While the range of information types
supported is typically narrower, this level of representation allows
automated interpretation of semantic meaning. We propose a mapping
approach that engages both of these representation levels in order to
ensure that it is flexible and capable of automatically generating
transformation procedures from abstract mapping definitions.
Data schemas can be defined on top of a data model by specifying
additional constraints for the underlying generic constructs. One example is
the different XML schemas defined on top of the XML data model using the
XML Schema DDL. In this manner, data in any format can be made to
adhere to the same data model, easing the mapping definition process by
ensuring commonality in the way that information is represented between
schemas. Figure 3 illustrates this layered approach (option c) in comparison
with other AEC strategies for interoperability. We can see here that its
advantage lies in providing a consistent method for defining tool
connections, which is more cohesive than linking individual tools via file
translation (option a), and less restrictive than a universal data schema
(option b).
Figure 3: Interoperability approaches. Left: Linking individual tools via file translation. Middle: Linking tools via a shared data schema. Right: Linking tools via a shared mapping process.
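As a toy illustration of this layered approach, a domain-specific schema can be expressed as constraints over a very simple generic data model (here, entities as plain attribute-value dictionaries); the schema contents and attribute names are invented for the example.

    # Toy sketch: a domain-specific schema expressed as constraints on top of a
    # generic data model (entities as plain attribute-value dictionaries).
    # The schema contents and attribute names are invented for illustration.
    ZONE_SCHEMA = {
        "required": {"name": str, "floor_area": float, "volume": float},
        "optional": {"occupancy": float},
    }

    def conforms(entity: dict, schema: dict) -> bool:
        """Check that a generic entity satisfies the schema's constraints."""
        for attr, typ in schema["required"].items():
            if attr not in entity or not isinstance(entity[attr], typ):
                return False
        for attr, typ in schema["optional"].items():
            if attr in entity and not isinstance(entity[attr], typ):
                return False
        return True

    zone = {"name": "Z1", "floor_area": 42.0, "volume": 126.0}
    print(conforms(zone, ZONE_SCHEMA))   # True: same generic model, extra constraints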
Mappings typically need to manipulate both data structure and content
to generate the required exchange procedure. This is achieved by specifying
relationships between domain-specific constructs in the output schema and
those in the input schema. In the modelling and simulation of design
problems, the resulting relationship networks are often highly complex, and
cannot be defined through a simple declaration of semantically-equivalent
constructs between the two schemas. To minimise the need for scripted
mappers in creating such networks, the mapping approach must allow
complex relationships to be defined in a highly flexible and intuitive manner.
We propose to support this flexibility by enabling schema relationships to
be described through transformation rules, using languages designed
specifically for querying and manipulating data models. In keeping with the
data models that they are used to traverse, these languages are highly
generic semantically, and offer the flexibility needed to negotiate
relationships between disparate design and analysis schemas. It should be
noted, however, that each is unique to a particular type of data model. In a
further paper, we have considered several data models and their respective
query languages from the point of view of applicability and ease of use [1].
Since adherence to a common data model is only a prerequisite for nodes
in any individual mapping, to ensure maximum flexibility, we propose to
dispense with data model restrictions at the system level.
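A minimal sketch of such rule-based mapping is given below, using the same simple entity-attribute model as above; the rule format is invented for illustration and is not any particular query language.

    # Minimal sketch: declarative transformation rules that map a source schema
    # onto a target schema over the same generic entity-attribute data model.
    # The rule format (target path -> expression over the source) is illustrative
    # only and does not correspond to any particular query language.
    RULES = {
        "Zone.name":       lambda src: src["Space"]["id"],
        "Zone.floor_area": lambda src: src["Space"]["area_m2"],
        "Zone.volume":     lambda src: src["Space"]["area_m2"] * src["Space"]["height_m"],
    }

    def apply_rules(source: dict, rules: dict) -> dict:
        """Generate the target data set by evaluating each rule against the source."""
        target = {}
        for target_path, rule in rules.items():
            entity, attr = target_path.split(".")
            target.setdefault(entity, {})[attr] = rule(source)
        return target

    source = {"Space": {"id": "S-01", "area_m2": 42.0, "height_m": 3.0}}
    print(apply_rules(source, RULES))
    # {'Zone': {'name': 'S-01', 'floor_area': 42.0, 'volume': 126.0}}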
3.2.A user-friendly mapping approach
Similar to other workflow systems, to assist the designer in creating these
custom mappings, we propose a mapping interface that supports visual
definition of transformation rules. In our system, the input and output data
schemas of each tool and scripted mapper node are represented as a
graphical abstraction based on a data model (such as a property graph).
Using graphical constructs that represent high-level transformations,
designers are able to visually define relationships between the output and
input schemas of different nodes to generate a full specification of the data
exchange procedure [2]. Based on this user-defined information, a mapping
procedure that transforms the source data set into the target data set can
then be generated automatically. Ideally, this interface would employ
advanced algorithms that offer guidance to the designer in matching data
across schemas, so that mappings can be created in less time, with less
effort and fewer mistakes [2]. An example of this can be seen with elastic
lists [20], where the selection of one object for mapping places logical
limitations on the properties that can be selected thereafter. This would
allow users to create transformation procedures without any programming,
although the interface would also enable direct editing of the
transformation script so that procedures could be enhanced for maximum
flexibility.
For tools structuring their input and output schemas using this same
data model, the resulting commonality would allow these tools to be easily
shared. Users could download diverse tools (and possibly mappers)
developed by different groups from a shared online repository, and then
string these together into their own unique customised workflows. The
users’ task would then be to define (i.e. create or edit) mappers, where
necessary, that generate data sets adhering to the input schemas of the
selected tools.
4. SYSTEM IMPLEMENTATION
Implementing our framework in the cloud promotes its scalability, efficiency
and reliability, as workflow execution can be distributed over multiple
computers in the network. Nodes may execute either as local tasks, if their
related design or analysis program resides on the local machine, or as cloud
applications, if their data-handling procedures are not linked to a locally-
housed program, and they are able to swap between modes of execution as
required. The node procedure, which may be written in any language, is
saved as an executable task that reads and writes data. A task scheduler,
housed on a server, is used to manage the execution of these tasks on
distributed resources, and the user controls the execution process through
a web-based workflow GUI that communicates with this scheduler. If the
system needs to be run locally, such as when a network connection cannot
be established, a server is simply started on the local machine.
Process nodes read and write data to and from a (distributed)
repository. Storing data in a repository ensures that a trace of all workflow
output is maintained for later perusal. The repository may be mirrored
locally, guaranteeing access to all outputs even when the user is
disconnected from the cloud. Similar to version-control systems used in
software development, such as Subversion or Git, users work on a local
copy and then merge or commit their changes to the shared repository
once the workflow has been executed. Data management within a
repository also eliminates the need for exchanging data directly between
process tasks. When a task creates output data, this is stored in the
repository together with some minimal metadata - origin, time of creation,
format, etc. - for which the Dublin Core [21], extended where necessary,
may serve as a template. In return, it receives a unique ID, which is then
passed to the task scheduler to be forwarded on to the next process
node’s task. The receiving task then queries the repository for the relevant
data.
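The following sketch illustrates this handoff; the repository interface is an assumption of ours, with the metadata fields loosely following Dublin Core element names.

    # Illustrative sketch of the task/repository handoff described above; the
    # repository interface is an assumption, with metadata fields loosely
    # following Dublin Core element names.
    import datetime
    import uuid

    class Repository:
        """A minimal key-value store: unique ID -> (data, metadata)."""
        def __init__(self):
            self._store = {}

        def put(self, data, fmt, creator):
            key = str(uuid.uuid4())
            metadata = {
                "creator": creator,                              # origin of the data
                "date": datetime.datetime.utcnow().isoformat(),  # time of creation
                "format": fmt,                                   # e.g. 'rad', 'idf', 'json'
            }
            self._store[key] = (data, metadata)
            return key   # the unique ID handed to the task scheduler

        def get(self, key):
            return self._store[key]

    repo = Repository()
    result_id = repo.put({"annual_kwh": 85000}, fmt="json", creator="EnergyPlus node")
    # ...the scheduler forwards result_id to the next node, which then queries:
    data, meta = repo.get(result_id)
    print(meta["creator"], data["annual_kwh"])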
Literature on scientific workflows also suggests several other desirable
system features [3], including:
• reusable features that are both generic and customisable to support a
wide range of tasks in specific ways;
• seamless and dynamic subscription to and publishing of data and
processes in repositories;
• execution monitoring, failure recovery and interactive user guidance of
workflows.
This functionality will be incorporated into the future development of
the system, as well as steer the implementation of features already
discussed.
5. DISCUSSION
The question of interoperability has long vexed the AEC industry.
While computational systems for design-analysis integration, for example
PACE [22], SEMPER [23] and P3 [24], have been the subject of research for
decades and instrumental in establishing a foundation for current integration
efforts, they have largely not been adopted in practice. Their inability to
support the ad hoc workflows experienced on real projects [25] is a
problem similarly observed in integration practices today, both of which can
be attributed to implementation of an interoperability approach based on a
common data schema. By imposing inflexible standards for data
representation and exchange, restrictions are placed on the software and
processes that can be supported, and severe limitations are inflicted on the
ways in which designs can be described and explored. To overcome these
limitations, we are proposing that direct communication between different
data schemas facilitated by an open mapping approach - rather than
communication via a multi-domain schema - is the preferred and even
necessary arrangement. Although more work may be needed to create the
many tool wrappers and scripted mappers required to support such
communication, this is more desirable than over-constraint of design
processes caused by the use of standardised representations and protocols
that are only applicable within a relatively small region of ‘design space’.
These nodes do not need to be created in advance, but can be developed
or adapted by designers as the need arises, and shared amongst the user
community. This promotes greater responsiveness to user needs than when
software development is left in the hands of firms motivated by commercial
considerations.
Research and development of the proposed system is an ongoing effort
from the Open Systems group; however, success will ultimately depend on a
community of users and developers, able and willing to adapt this system to
their own workflows, capitalising on tools and processes they already have
in place. We do not aim to replace their toolsets but rather provide a
framework to embed current tools in more cohesive, shareable and
customisable digital workflows. As observed within the Grasshopper user
community, the ability to influence and effect software extensions, and the
benefits of combining various functional add-ons, encourage users to
contribute to system development. Consultants could also showcase their
skills by sharing select process nodes that they develop in-house, as is
similarly achieved by the GeometryGym and Karamba plug-ins for
Grasshopper, establishing themselves as experts to prospective clients and
co-collaborators who require specialised workflows for unique design
projects and problems. These user developments could potentially result in
a large collection of process nodes, which may compete in offering
alternative ways of achieving almost identical outcomes, and a considerable
range of options available to designers. To aid designers in choosing
appropriately, it is important not only that a description of the functionality
of each node is available, but also that the designer is able to ensure that
nodes ‘fit’ other nodes in order to compose a valid workflow. Formal input
and output schemas for tool nodes should also be made readily available, to
assist users in determining which subsets of these schemas are relevant to
their particular design scenario. Assertions should also be specified on data
outputs and assumptions specified for expected inputs so that automatic
checking of node compatibility is possible, and, where compatibility is lacking,
a new or modified node can be substituted.
Workflow design is likely to be an incremental process in which a
number of nodes are combined into a partial workflow, tested by the
designer, then further developed and extended. Besides automatic checking
of the mutual fitness of adjacent nodes, the designer will need to check
whether the (partial) workflow is behaving as expected and producing
appropriate results. When results are not as expected, the designer will
need to debug the workflow by tracing back execution, which can be
assisted by displaying intermediate, as well as final, results.
In the context of a large collection of process nodes and choice of
alternative ways to achieve the same or similar result, user-friendliness and
knowledge-based support, the two main concerns of designers when using
analysis software [26], will become crucial. These concerns will also need to
be explored in terms of the scalability of the user-defined mapping approach
when working with large data sets and more complex data schemas (like
IFC). If the increased complexity makes it difficult to construct custom
mappers, then additional data management and schema management tools
will be required – tools which could also assist in workflow debugging and
schema formalisation. Issues of accuracy, uncertainty and risk may also be of
significant concern. MacDonald et al. propose the introduction of uncertainty
considerations in simulations to provide meaningful feedback to the user
and to improve confidence through risk assessment [27]. While these can be
addressed within individual software tools, the proposed system should also
introduce this functionality into the workflow environment itself.
A final issue is security and trust. In a collaborative and distributed
environment, how can we guarantee that data is secure and processes can
be trusted? Data encryption and certification of process nodes will be
prerequisites, in combination with a secure workflow platform, to ensure
the trust of users.
6. CONCLUSION
This article summarises an ongoing effort to address limitations in process
and technology that currently obstruct design collaboration. In it we argued
the need for a user-controlled and process-oriented approach to integration
and interoperability, and discussed how a cloud-based workflow system can
support more flexible and distributed design processes. We examined the
features and functionality needed to abstract computing and data resources
to make tools and technologies more accessible to users, both as individuals
and as members of design teams. As well as benefiting design practice, we
envisage the proposed system as a platform for researchers to share their
work and increase the impact of their individual efforts through integration
with other research. The system requirements that we have identified will
help to ensure that the proposed design-analysis integration platform, when
developed, is flexible, visual, collaborative, scalable and open.
ACKNOWLEDGEMENTS
The authors would like to acknowledge and thank the participants of the
“Open Systems and Methods for Collaborative BEM (Building Environment
Modelling)” workshop held at the CAAD Futures 2011 Conference in
Liège, Belgium, 4 July 2011, and of the LinkedIn Group sharing the same
name, for their contributions to the discussions leading to the ideas
presented and described in this paper. We invite interested parties to
contribute to the development of these ideas and to join the discussions in
the LinkedIn Group.
REFERENCES
1. Janssen, P., Stouffs, R., Chaszar, A., Boeykens, S. and Toth, B., Data Transformations in Custom Digital Workflows: Property Graphs as a Data Model for User-Defined Mappings, Intelligent Computing in Engineering Conference - ICE2012, Munich, 2012.
2. Bellahsene, Z., Bonifati, A., Duchateau, F. and Velegrakis, Y., On Evaluating Schema Matching and Mapping, in: Bellahsene, Z., Bonifati, A. and Rahm, E., eds., Schema Matching and Mapping, Springer, Berlin and Heidelberg, 2011, 253-291.
3. Altıntaş, İ., Collaborative Provenance for Workflow-driven Science and Engineering, PhD Thesis, University of Amsterdam, 2011.
4. Deelman, E., Gannon, D., Shields, M. and Taylor, I., Workflows and e-Science: An Overview of Workflow System Features and Capabilities, Future Generation Computer Systems, 2008, 25, 528-540.
5. Curcin, V. and Ghanem, M., Scientific Workflow Systems - Can One Size Fit All?, CIBEC 2008, Cairo, 2008, 1-9.
6. Callahan, S., Freire, J., Santos, E., Scheidegger, C., Silva, C. and Vo, H., Vistrails: Visualization Meets Data Management, SIGMOD 2006, Chicago, 2006, 745-747.
7. Rex, D., Ma, J. and Toga, A., The LONI Pipeline Processing Environment, Neuroimage, 2003, 19(3), 1033-1048.
8. http://www.phoenix-int.com/software/phx-modelcenter.php [30-10-2012]
9. http://arupforge.arup.com/wiki/index.php?title=DesignLink_SDK [30-10-2012]
10. Bowker, G. and Star, S., Sorting Things Out: Classification and Its Consequences, MIT Press, Cambridge, 1999.
11. Flager, F., Welle, B., Bansal, P., Soremekun, G. and Haymaker, J., Multidisciplinary Process Integration and Design Optimization of a Classroom Building, ITcon, 2009, 14, 595-612.
12. Davis, D., Burry, J. and Burry, M., Untangling Parametric Schemata: Enhancing Collaboration Through Modular Programming, CAAD Futures 2011, Liège, 2011, 55-68.
13. Eastman, C., Teicholz, P., Sacks, R. and Liston, K., BIM Handbook - A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors, 2nd edn., John Wiley and Sons, Hoboken, 2011.
14. Kiviniemi, A., Ten Years of IFC Development - Why We Are Not There Yet, Proceedings CIB-W78, Montreal, 2006.
15. Pazlar, T. and Turk, Z., Interoperability in Practice: Geometric Data Exchange Using the IFC Standard, ITcon, 2008, 13, 362-380.
16. Augenbroe, G., de Wilde, P., Moon, H., Malkawi, A., Brahme, R. and Choudhary, R., The Design Analysis Integration (DAI) Initiative, 8th IBPSA Conference, Eindhoven, 2003, 79-86.
17. Hensen, J., Djunaedy, E., Radošević, M. and Yahiaoui, A., Building Performance Simulation for Better Design: Some Issues and Solutions, PLEA 2004, Eindhoven, 2004, vol. 2, 1185-1190.
18. Mirtschin, J., Engaging Generative BIM Workflows, Collaborative Design of Lightweight Structures - LSAA 2011, Sydney, 2011, 1-8.
19. Kilian, A., Design Innovation Through Constraint Modeling, International Journal of Architectural Computing, 2006, 4(1), 87-105.
20. http://well-formed-data.net/experiments/elastic_lists/ [30-10-2012]
21. http://dublincore.org [30-10-2012]
22. Maver, T., PACE 1: Computer Aided Design Appraisal, Architects Journal, 1971, July, 207-214.
23. Mahdavi, A., Mathew, P., Lee, S., Brahme, R., Kumar, S., Liu, G., Ries, R. and Wong, N., On the Structure and Elements of SEMPER, ACADIA '96, Tucson, 1996, 71-84.
24. Kalay, Y., P3: Computational Environment to Support Design Collaboration, Automation in Construction, 1998, 8(1), 37-48.
25. Holzer, D., Optioneering in Collaborative Design Practice, International Journal of Architectural Computing, 2010, 8(2), 165-182.
26. Attia, S., Beltrán, L., De Herde, A. and Hensen, J., "Architect Friendly": A Comparison of Ten Different Building Performance Simulation Tools, 11th IBPSA Conference, Glasgow, 2009, 204-211.
27. MacDonald, I., Clarke, J. and Strachan, P., Assessing Uncertainty in Building Simulation, 6th IBPSA Conference, Kyoto, 1999, vol. II, 683-690.
Bianca Toth 1, Patrick Janssen 2, Rudi Stouffs 2,3, Andre Chaszar 3
and Stefan Boeykens 4
1 Queensland University of Technology
School of Design, 2 George Street, D Block, Level 5
Brisbane QLD 4000 Australia
bianca.toth@qut.edu.au
2 National University of Singapore
Department of Architecture, 4 Architecture Drive
Singapore 117566
patrick.janssen@nus.edu.sg, stouffs@nus.edu.sg
3 Delft University of Technology
Faculty of Architecture, Julianalaan 134
2628 BL Delft, The Netherlands
a.t.chaszar@tudelft.nl, r.m.f.stouffs@tudelft.nl
4 KU Leuven
Department of Architecture, Urban Design and Regional Planning
Kasteelpark Arenberg 1 - box 2431
3001 Heverlee, Belgium
Stefan.Boeykens@asro.kuleuven.be
... However, in current AEC practices, we often encounter distributed interoperability across all layers where data is exchanged (Fig.1a), a centralized data schema across all disciplines (Fig. 1b), or federated interoperability where tools share an ontology that tools can relate to and refine (Fig. 1c). In the following subsections, we discuss distributed, centralized, and federated interoperability paradigms (adapted from [27]) in general, and provide an AEC application example of such interoperability. ...
... Using decentralized (distributed or federated) data interoperability over a centralized process model may introduce redundancy and inconsistency; for example, when different workflow branches drawing on the same input converge, combining data from different but overlapping models [27]. However, a centralized interoperability paradigm contradicts the multidisciplinary nature of building industry. ...
... However, a centralized interoperability paradigm contradicts the multidisciplinary nature of building industry. A centralized workflow reduces the freedom and flexibility of designers to create their workflow processes from any selection of tools, and it hinders the ability to define and explore unconventional design spaces [27]. Therefore, a common data schema, seems not applicable for co-designing buildings. ...
Conference Paper
Full-text available
In the building industry, disciplines have specific requirements for capturing, storing and representing information. As a result, one physical object yields several disciplinary representations, allowing designers to describe design options discipline-specifically and thus explore design differently. In a co-design approach, data from disciplines must be integrated and interoperable. But in current practice, interoperability problems between different software often hinders data integration. This paper presents a review and evaluation of interoperability paradigms in centralized, decentralized, and federated data, and it gives AEC application examples for each approach. It discusses data schemas (e.g. IFC) and interoperability tools (e.g. Speckle, BHoM). It also relates the interoperability tools to semantic web standards that are used to share information. It argues the advantages of federated data interoperability and suggests using design tools that employ such a mechanism. Furthermore, this paper suggests developing modular disciplinary ontologies to support collaborative design tools. Such federated interoperability, supported by modular disciplinary ontologies, can provide a solid ground to share and exchange data flexibly.
... Thereby, it imposes restrictions on how designs can be described and thus explored and shared (Plume and Mitchell, 2007;Patacas et al., 2014). (Toth et al., 2012;Negendahl, 2015) Toth et al. (2012) suggest another method of achieving interoperable data integration by linking tools through a shared mapping process (FIG.b). In this approach, tools do not share the same schema, however, data of one tool are automatically transformed to the target data set of another tool through a custom data-mapping interface that includes visual definitions of transformation rules. ...
... An advanced algorithm will offer guidance to the users in matching data across schemas, so that mapping can be created in less time, with less errors and fewer mistakes (Fagin et al., 2009). Although this approach provides a framework to embed current tools in a more cohesive, shareable and customisable digital workflow, it requires great effort to script data-maps and transformation rules for the various tools (Toth et al., 2012). ...
... In those cases, the maximum number of converters required is n(n-1)/2. Although this is a flexible approach, it relies on human interpretation of semantic meaning (Toth et al., 2012). ...
Article
Full-text available
The objectives of this study were to develop, test and summarise lessons learned using two different methods for data integration between autonomous software packages for design models, cost calculations and cost databases with regard to generating life-cycle costing analysis. The two developed methods followed the principles of compatibility and interoperability and were tested in three test cases: a simplistic design model, a university building model and a private company’s office building model. The compatible method entailed an MS Excel tool while the interoperable method followed a more automated procedure through a visual programming environment. Both methods were, however, facing several obstacles with regard to data integration across autonomous software packages and automated procedures for calculation of life-cycle cost which in turn left plenty of manual work and made the results prone to human errors.
... 2014) and IFC (BuildingSMART, 2013) standards), sought improvement, when teams are working with separated models. These formats have (yet) not succeeded in solving the many interoperability challenges between design tools and BPS tools, in a way that makes performance evaluations in design tools feasible (Toth et al., 2012). Even though the concept of a common reference model makes sense in all stages of building design, the early stages are often detached from any form of building information model. ...
... As a consequence, collaboration within centralized models has been considered time consuming and in some cases counterproductive in terms of design exploration. Using common data schema like IFC and gbXML to structure information exchange, regardless of whether it is proprietary or an open standard, imposes restrictions on how designs can be described and thus explored (Toth et al., 2012). Models translated from a shared building information model 14 are as precise as the database or schema allows it to be. ...
... As long as the VPL is able to interpret the data, the VPL will not distinguish between data types. Toth et al. (2012) sees the elimination of the common data schema as a prerequisite for information exchange, allowing design freedom to create custom digital workflows unfettered by standardization constraints while distancing itself from the central model method. Toth et al. (2012) further argues that the VPL does open up for exchanging data with IFC and other open formats, and the VPL can so to speak act as a gateway to a common data schema. ...
Thesis
Full-text available
In the wake of uncompromising requirements on building performance and the current emphasis on building energy consumption and indoor environment, designing buildings has become an increasingly difficult task. However, building performance analyses, including those of building energy consumption and indoor environment, are generally conducted late in the design process. As a result, building performance evaluations are omitted in the early design where changes are least expensive. Consequence based design is a framework intended for the early design stage. It involves interdisciplinary expertise that secures validity and quality assurance with a simulationist while sustaining autonomous control of building design with the building designer. Consequence based design is defined by the specific use of integrated dynamic models. These models include the parametric capabilities of a visual programming tool, the building analyses features of a building performance simulation tool and the modelling and visualisation features of a design tool. The framework is established to enhance awareness of building performance in the early stages of building design, in the aim to create High-Performance Buildings. The project relies on various advancements in the area of integrated dynamic models. It also relies on the application and test of the approach in practice to evaluate the Consequence based design and the use of integrated dynamic models. As a result, the Consequence based design approach has been applied in five case studies. All case studies concern building design projects performed in collaboration with Grontmij and various Danish architectural studios. Different types of integrated dynamic models have been implemented and tested for the individual projects. The findings from each project were used to alter and define new ways to implement integrated dynamic models for the following project. In parallel, seven different developments of new methods, tools and algorithms have been performed to support the application of the approach. The developments concern: Decision diagrams – to clarify goals and the ability to visualize any relevant building performance. AHP – the use of Analytic Hierarchy Process to clarify differences between solutions on both qualitative and quantitative evaluations. Termite – the implementation of the BPS tool solver Be10 as a plugin for Grasshopper that enables live feedback of entire building energy consumption. HQSS – a quasi-steady-state BPS tool solver dedicated for fast thermal analyses in Grasshopper. Moth – an agent-based optimization algorithm implemented in Grasshopper that attempts to combine qualitative and quantitative evaluations during optimization. Sentient models – a method to listen to user behaviour in Grasshopper and decrease the space of solutions. Surrogate models – a test of machine learning methods to speed up any BPS feedback through surrogate models with Grasshopper. This thesis demonstrates how integrated dynamic models may include building performance feedbacks, specifically feedbacks regarding energy consumption and indoor environment in the aim to create High-Performance Buildings. It further demonstrates the inclusion of quality defined performances un-associated with High-Performance Buildings. The thesis discusses ways integrated dynamic models affect the design process and collaboration between building designers and simulationists. 
Within the limits of applying the Consequence based design approach to five case studies, documented through interviews, surveys and project-related documentation derived from internal reports and similar sources, this thesis concludes that integrated dynamic models for these particular case studies can improve the speed of multiple, parallel performance evaluations, reduce working hours for the simulationists, and are likely to support the goal of creating High-Performance Buildings.
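As a rough illustration of the surrogate-model idea listed among the developments above (a sketch only; `annual_energy_kwh` is a hypothetical stand-in for a slow BPS run, and the choice of a Gaussian process regressor is an assumption, not the method used in the thesis):

```python
# Sketch: train a fast surrogate on a handful of expensive simulation runs,
# then use it for near-instant design feedback. The "simulation" here is a
# hypothetical analytic stand-in for a real BPS solver.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def annual_energy_kwh(window_ratio, insulation_mm):
    """Hypothetical stand-in for a slow building performance simulation."""
    return 120.0 - 60.0 * insulation_mm / 300.0 + 40.0 * (window_ratio - 0.3) ** 2

# Sample a small design of experiments and run the "expensive" simulation.
rng = np.random.default_rng(0)
X = rng.uniform([0.1, 50.0], [0.9, 300.0], size=(20, 2))
y = np.array([annual_energy_kwh(w, t) for w, t in X])

# Fit the surrogate and query it for a new design almost instantly.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
candidate = np.array([[0.45, 220.0]])  # window ratio, insulation thickness
print("predicted kWh/m2a:", surrogate.predict(candidate)[0])
```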
... Still, potential reductions in control stem from issues of system reliability, viruses, etc. (vs. physical damage to analog representations) and from questions about the applicability of approaches adopted from manufacturing to more open design processes, although these concerns can be mitigated to some extent by recourse to more open design space exploration tools, 'custom digital workflows', and approaches to interoperability [5,6,7,8], as discussed further below. Given these points, design representation by digital means seems, on balance, to come out on the side of greater control. ...
... These tools can be used in the conventional mode of confirmatory analysis, but, if handled properly, they can also provide useful input during the earlier conceptual phases of design, when decisions rest on relative merits rather than great precision [10]. The tools for carrying out such analyses are not always geared to non-experts, however, so their use may entail significant collaborative effort [1,4,7] (see below) or else risk recourse to shaky assumptions that yield results which can distort design decisions. Another associated risk is information overload, "losing sight of the forest for the trees": a possible excess of choices and consequent disorientation in balancing conflicting requirements. ...
... Optimisation builds upon the analysis and simulation discussed above, with additional power and control exercised through multiple iterations of goal-directed design revisions, greatly increasing the number of design variations that can be examined. Work on 'custom digital workflows' seeks to enable the linking of possibly disparate software packages, giving more control over the choice of software (for example, those with which the design team is more familiar and confident) and over how design data are processed [6,7]. Yet not all aspects of optimisation remain firmly under the architects' control: for example, the choices of optimisation algorithm types and their parameters, which often enough need expert input rather than acceptance of defaults. ...
Conference Paper
Issues of control and command in architecture are considered here via reflections on recent and current research projects concerning digital technologies. The projects’ topics cover a range of scales and approaches, from the planning and design of urban ensembles to the detailing of panels for constructing freeform building envelopes. Additional topics on this spectrum include methods to support open-ended design explorations, goal-driven optimisations, participatory design and the internet-of-things. In each of these, the possibilities and methods for controlling the design process and the resulting artifacts and systems are addressed in different ways, which in turn influence the roles of architects in different ways. Overall we see that while digital technologies do indeed enhance architects’ control in some cases, some applications require sharing control with others, while still others may result in loss of control, either to other parties (due to transferability of skills, for example) or altogether (due to complexity and feedbacks). Awareness of these different possibilities may aid better use of the technologies.
... However, the fragmented nature of the AEC industry contrasts with the monolithic approach of IFC [1]. A common data schema imposes restrictions on how buildings can be represented and thus explored [7]. In contrast to IFC, the open-source platform Building Habitat object Model (BHoM) [8] does not rely on a single "neutral format" but instead provides adapters to map data among many AEC concepts across existing languages and platforms; it allows multiple discrete disciplinary representations of buildings and building elements, with an open and extendable data dictionary. ...
Conference Paper
Full-text available
Architecture, engineering, and construction (AEC) projects require multidisciplinary solutions; therefore, one physical asset results in several disciplinary representations. Interoperability problems between different software often hinder disciplinary data integration, which prevents the recognition of violated design constraints until it is too late. The open-source platform Building Habitat object Model (BHoM) integrates many AEC concepts across existing languages and platforms, allowing multiple discrete disciplinary representations of buildings. The Linked Building Data Community Group of the World Wide Web Consortium uses Semantic Web standards to store and share data. This paper explores and presents methods to recast BHoM's object model as a knowledge graph using Semantic Web standards. It analyzes BHoM in an ontological context by elaborating on its approach to object-oriented programming and comparing it to the Semantic Web standards. Furthermore, it presents BHoM RDF prototypes as a potential solution to the interoperability problem of multidisciplinary design in AEC. It exemplifies BHoM RDF prototypes on a small demonstrator project. When combined with Semantic Web standards, the BHoM framework can increase the use of knowledge graphs in the AEC industry, improving data interoperability and assisting design decisions through reasoning.
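The following minimal Python sketch suggests what recasting a disciplinary object as RDF triples could look like using rdflib; the `ex:` namespace, the Wall_01 resource and its properties are purely illustrative and do not reproduce the BHoM ontology described in the paper:

```python
# Sketch: representing a building element as a small knowledge graph.
# Namespace, class and property names are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/aec#")
g = Graph()
g.bind("ex", EX)

wall = EX["Wall_01"]
g.add((wall, RDF.type, EX.Wall))
g.add((wall, EX.height, Literal(3.0, datatype=XSD.double)))
g.add((wall, EX.uValue, Literal(0.18, datatype=XSD.double)))
g.add((wall, EX.boundsSpace, EX["Space_02"]))

# Serialise so other disciplinary tools (or a reasoner) can consume it.
print(g.serialize(format="turtle"))
```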
... Achieving data integration from IFC to CityGML addresses the wider scope introduced by the concept of interoperability (Toth et al. 2012). Data formats generally address specific types of description: enhanced outdoor detailing (CityGML) or indoor detailing (IFC). ...
Article
Digital building modelling faces issues related to inconsistent integration and interoperability, in particular for BIM-GIS convergence. A consistent data conversion approach must consider both semantic and spatial geometric modelling, and conversion therefore risks a loss of spatial geometric information. This paper simplifies the semantic aspect of the conversion and concentrates on the accuracy of the spatial geometric representation, to avoid conflicting 3D spatial mismatches. To solve the geometry, the original 3D spatial specifications described in the IFC data schema are considered for all objects present in the model. The method is explored with a main BIM-IFC model, and additionally tested with two other models.
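As a rough sketch of the kind of geometry extraction such a conversion relies on (assuming the ifcopenshell library and a hypothetical file path; this is not the method of the cited paper), one might pull triangulated geometry out of an IFC model before re-expressing it in a CityGML-oriented structure:

```python
# Sketch: read an IFC model and extract triangle meshes for each wall,
# as a starting point for mapping towards CityGML geometry.
# The file name is hypothetical; this does not reproduce the cited method.
import ifcopenshell
import ifcopenshell.geom

model = ifcopenshell.open("building.ifc")
settings = ifcopenshell.geom.settings()

for wall in model.by_type("IfcWall"):
    shape = ifcopenshell.geom.create_shape(settings, wall)
    verts = shape.geometry.verts   # flat list: x0, y0, z0, x1, y1, z1, ...
    faces = shape.geometry.faces   # flat list of vertex indices, 3 per triangle
    print(wall.GlobalId, len(verts) // 3, "vertices,", len(faces) // 3, "triangles")
```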
... The role of the middleware component is to facilitate communication and coordination [30]. In the proposed framework, this entails implementing functions to manage data and control [31]. Data management involves exchanging data to and from the neutral BIM model, external data, domain models, performance evaluation, and optimization. ...
Article
Full-text available
Virtual design tools and methods can aid in creating decision bases, but it is a challenge to balance all the trade-offs between different disciplines in building design. Optimization methods are at hand, but the question is how to connect and coordinate the updating of the domain models of each discipline and centralize the product definition into one source instead of having several unconnected product definitions. Building information modelling (BIM) features the idea of centralizing the product definition in a BIM model and creating interoperability between models from different domains, and previous research reports on applications in a number of fields within construction. Recent research features BIM-based optimization, but the question remains of how to design a BIM-based process using neutral file formats to enable multidisciplinary optimization of life-cycle energy and cost. This paper proposes a framework for neutral BIM-based multidisciplinary optimization. The framework consists of (1) a centralized master model, from which different discipline-specific domain models are generated and evaluated; and (2) an optimization algorithm controlling the optimization loop. Based on the proposed framework, a prototype was developed and used in a case study of a Swedish multifamily residential building to test the framework's applicability in generating and optimizing multiple models based on the BIM model. The prototype was developed to enhance the building's sustainability performance by optimizing the trade-off between the building's life-cycle energy (LCE) and life-cycle cost (LCC) when choosing materials for the envelope. The results of the case study demonstrated the applicability of the framework and prototype in optimizing the trade-off between conflicting objectives, such as LCE and LCC, during the design process.
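To illustrate the kind of trade-off search such an optimization loop coordinates (a simplified sketch; the envelope options and their LCE/LCC values are hypothetical placeholders, not the prototype described in the paper), a brute-force Pareto filter over envelope material choices might look like this:

```python
# Sketch: enumerate envelope options, evaluate two conflicting objectives,
# and keep the non-dominated (Pareto-optimal) trade-offs.
# The options and their values are hypothetical placeholders.

options = [
    {"name": "mineral wool 200", "lce": 820.0, "lcc": 1450.0},  # MJ/m2, EUR/m2
    {"name": "mineral wool 300", "lce": 780.0, "lcc": 1520.0},
    {"name": "EPS 250",          "lce": 860.0, "lcc": 1380.0},
    {"name": "wood fibre 300",   "lce": 760.0, "lcc": 1610.0},
]

def dominates(a, b):
    """True if a is at least as good as b on both objectives and better on one."""
    return (a["lce"] <= b["lce"] and a["lcc"] <= b["lcc"]
            and (a["lce"] < b["lce"] or a["lcc"] < b["lcc"]))

pareto = [o for o in options
          if not any(dominates(other, o) for other in options if other is not o)]

for o in pareto:
    print(f'{o["name"]}: LCE={o["lce"]} LCC={o["lcc"]}')
```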
... Instead, they develop customized digital workflows to manage design, prefabrication and assembly. These workflows illustrate the "user-controlled and process-oriented approach to integration and interoperability" proposed by Toth, Janssen, Stouffs, Chaszar, and Boeykens (2012). ...
Article
Full-text available
This paper examines Parametric Design (PD) in contemporary architectural practice. It considers three case studies: The Future of Us pavilion, the Louvre Abu Dhabi and the Morpheus Hotel. The case studies illustrate how, compared to non-parametrically and older parametrically designed projects, PD is employed to generate, document and fabricate designs with a greater level of detail and differentiation, often at the level of individual building components. We argue that such differentiation cannot be achieved with conventional Building Information Modelling and without customizing existing software. We compare the case studies' PD approaches (object-oriented programming, functional programming, visual programming and distributed visual programming), as well as decomposition, algorithms and data structures, as crucial factors for the practical viability of complex parametric models and as key aspects of PD thinking.
... How can users interact and interoperate with such a robust environment? Envisioning a framework in which the design environment supports interaction through a data and cooperation strategy for exchanging information between particular parts of the workflow [1, 3], we propose integrating urban simulation into CAD as one looping interactive process. The system would provide results outside BIM-based design environments and become a fully serviceable, extendable and scalable parametric modelling interface with direct (and interactive) user input and control in a fully customized environment. ...
Thesis
Full-text available
In the digital age of architecture, parametric design plays an essential role in the generative process of architectural design. Its most important benefits are the visual representation of the design process, user interaction during the generation of the parametric model using visual digital code, the sequential and logical representation of design steps, and the possibility of adapting the design model to the customer's requirements and the designer's intentions, generating alternative design solutions for the same model. A parametric typology is an approach that combines typological design and computational parametric techniques to produce a variety of designs belonging to a specific building type, facilitating customization in housing design. Parametric typological design allows diversity, variation, and individualisation just as easily as mass-standardized production. This research investigates the customization of housing components to satisfy customers' needs. It reviews many previous studies that adopted the parametric typological approach in architectural design, ranging from general to specific studies. Accordingly, the research problem is identified as the lack of a clear conception of a customizable parametric typological approach to be applied in the design of single-family housing within a local residential complex in Mosul city. The research aim is to apply the parametric typological approach to generate customizable single-family housing designs in response to customers' choices. To achieve this aim, a theoretical framework for customization using the parametric typological approach was built. The framework includes three components that define the stages of parametric typological design and their sequence, the implementation of customization in parametric typological design, and the computerized parametric techniques used in applying customization. Some of the framework's components were then applied to generate new typological designs belonging to a local single-family housing project, to meet various customer requirements. The results show the efficiency of applying customization in the parametric typological design process. The proposed algorithm proved effective in generating a wide range of design alternatives: from six original design prototypes of the Ain Aliraq Housing project in Mosul, the study generated 96 varied prototype plans for the ground floor, 1920 prototype plans for the first floor, and 069 three-dimensional facades. All design alternatives were generated from a single parametric model implemented in Grasshopper for Rhino without the use of ready-made software. The user interface presented in this study simplifies the design process for the customer: the process of selecting parameters is sequential, with the possibility of backtracking.
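The combinatorial scale reported above comes from enumerating parameter choices per prototype; a minimal sketch of that mechanism (the parameters and option counts below are invented for illustration and do not reproduce the thesis model) could be:

```python
# Sketch: how a handful of prototypes and per-parameter options multiply
# into a large catalogue of design alternatives. Parameter names and
# option counts are invented for illustration.
from itertools import product

prototypes = ["A", "B", "C", "D", "E", "F"]          # six base plans
parameters = {
    "bedrooms":      [2, 3, 4],
    "kitchen_type":  ["open", "closed"],
    "facade_module": ["flat", "recessed", "balcony", "screened"],
}

alternatives = [
    {"prototype": p, **dict(zip(parameters, combo))}
    for p in prototypes
    for combo in product(*parameters.values())
]

print(len(alternatives), "alternatives")   # 6 * 3 * 2 * 4 = 144
print(alternatives[0])
```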
Article
Full-text available
A wide range of scientifically validated Building Performance Simulation (BPS) tools is available internationally. The users of those tools are mainly researchers, physicists and experts who value empirical validation, analytical verification and calibration of uncertainty as defined by, e.g., BESTEST. However, the literature and comparative surveys indicate that most architects who use BPS tools in design practice are much more concerned with (1) the Usability and Information Management (UIM) of the interface and (2) the Integration of an Intelligent design Knowledge-Base (IIKB). These two issues are the main factors for identifying a building simulation program as "Architect Friendly". Now, with the advancement of BPS tools and the recent announcements of direct links between BIM or non-BIM modeling tools and BPS tools, it is important to compare the existing programs. Based on an online survey, this paper presents the results of comparing ten major BPS tools: ECOTECT, HEED, Energy 10, Design Builder, eQUEST, DOE-2, Green Building Studio, IES VE, EnergyPlus and the EnergyPlus-SketchUp Plugin (OpenStudio). With 249 valid responses, the survey ranked the tools in three classes and revealed that architects value the IIKB above the UIM of the interface. Finally, the paper summarizes the key findings and underlines the major requirements for future improvement and development of BPS tools, mainly from an architectural perspective.
Article
Full-text available
The exchange of information between the different organisations and individuals involved in the different stages of a building's life cycle has always been an important but difficult task. A vast number of participants with different views of the same physical structure have to interact and exchange information throughout the whole building life cycle. In order to remedy the current problems, in particular in CAD data exchange, the product modelling and information exchange standards community has developed several high-level representations of buildings (known as Building Information Models, BIMs) to enable a more coherent exchange of data. Recently the Industry Foundation Classes (IFC), with a considerable number of software implementations, have emerged as the leading candidate solution. But soon after the first implementations, doubts were raised as to whether claimed IFC specification compliance by a software product ensures a sufficient level of interoperability in practical data exchange. In the presented research work, the interoperability performance of three widely used IFC-compatible architectural design applications has been evaluated. Tests with file-based geometry exchange confirmed our expectation that the IFC interfaces did not work as intended. The tests, demonstrated through illustrative (simple and complex) examples, revealed several cases of information distortion and/or information loss at both the entity and attribute level. Unsatisfactory model handling proved to be characteristic of all the tested exchange scenarios. Our conclusion is that in the future more effort should be put into IFC interface development.
Article
Full-text available
Architecture, Engineering, and Construction (AEC) professionals typically generate and analyze very few design alternatives during the conceptual stage of a project. One primary cause is limitations in the processes and software tools used by the AEC industry. The aerospace industry has overcome similar limitations by using Process Integration and Design Optimization (PIDO) software to support Multidisciplinary Design Optimization (MDO), resulting in a significant reduction to design cycle time as well as improved product performance. This paper describes a test application of PIDO to an AEC case study: the MDO of a classroom building for structural and energy performance. We demonstrate how PIDO can enable orders of magnitude improvement in the number of design cycles typically achieved in practice, and assess PIDO's potential to improve AEC MDO processes and products.
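A toy sketch of the process-integration idea (hypothetical evaluation functions; this is not the PIDO software used in the case study) is to wrap the disciplinary analyses as functions and sweep many design alternatives automatically, optionally in parallel:

```python
# Sketch: automate many design cycles by wrapping disciplinary analyses
# as functions and evaluating alternatives in parallel.
# Both evaluators are hypothetical stand-ins for real analysis tools.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def structural_mass(span, storeys):
    return 55.0 * span * storeys               # hypothetical kg/m2 proxy

def energy_use(span, storeys):
    return 95.0 + 2.0 * storeys - 0.5 * span   # hypothetical kWh/m2a proxy

def evaluate(design):
    span, storeys = design
    return design, structural_mass(span, storeys), energy_use(span, storeys)

if __name__ == "__main__":
    designs = list(product([6.0, 7.5, 9.0], [2, 3, 4, 5]))
    with ProcessPoolExecutor() as pool:
        for design, mass, energy in pool.map(evaluate, designs):
            print(design, round(mass, 1), round(energy, 1))
```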
Conference Paper
Full-text available
This paper describes the use of property graphs for mapping data between AEC software tools, which are not linked by common data formats and/or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined by way of supporting the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
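A minimal sketch of such a directed, attributed, multi-relational graph in Python (using networkx as an assumed stand-in; the node names, attributes and relation labels are illustrative, not the paper's mapping language):

```python
# Sketch: a directed, attributed, multi-relational graph used as an
# intermediate structure for mapping design data between tools.
# Node names, attributes and relation labels are illustrative only.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("wall_01", element="wall", height=3.0, u_value=0.18)
g.add_node("space_02", element="space", area=24.5)
g.add_node("window_07", element="window", area=2.1)

g.add_edge("wall_01", "space_02", key="bounds", relation="bounds")
g.add_edge("window_07", "wall_01", key="hosted_by", relation="hosted_by")

# A simple transformation pass: collect, per space, the bounding surfaces
# in the shape a downstream (hypothetical) analysis tool expects.
zones = {}
for src, dst, data in g.edges(data=True):
    if data["relation"] == "bounds":
        zones.setdefault(dst, []).append({**g.nodes[src], "id": src})

print(zones)
```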
Article
Full-text available
To provide substantial improvements in indoor environment and energy consumption levels, there is a need to treat a building with its associated systems as a complete entity, not as the sum of a number of separate systems. Building performance simulation is ideal for this. This paper discusses some issues that hinder the routine use of simulation in building design. In particular, the paper discusses the issues of quality assurance, the relatively slow pace of software development, and the limited use (and usability) of building performance simulation over the total life cycle of a building. Possible solutions are also discussed, with reference to our current research.
Conference Paper
Full-text available
The premise underlying this work is that introducing uncertainty considerations into simulation will facilitate risk assessment and that this, in turn, will help to improve designer confidence in simulation. Sources of uncertainty abound in building simulation and must be factored into the solution process. These sources have been identified and useful techniques for quantifying the effects of uncertainties are presented. Two approaches are described: the use of traditional statistical methods, and the use of alternative arithmetical methods. The theory behind these methods and implementation of appropriate methods into an existing simulation program is described.
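A minimal sketch of the traditional statistical (Monte Carlo) approach mentioned above, applied to a toy steady-state heat-loss formula with assumed input distributions (this is not the implementation described in the paper):

```python
# Sketch: Monte Carlo propagation of input uncertainty through a toy
# steady-state heat-loss model. Distributions and values are assumed.
import random
import statistics

random.seed(1)

def heat_loss_w(u_value, area, ach, volume, delta_t=20.0):
    transmission = u_value * area * delta_t
    ventilation = 0.33 * ach * volume * delta_t   # 0.33 Wh/(m3K) for air
    return transmission + ventilation

samples = []
for _ in range(10_000):
    u = random.gauss(0.20, 0.03)     # W/(m2K), uncertain workmanship
    ach = random.gauss(0.6, 0.15)    # air changes per hour, uncertain tightness
    samples.append(heat_loss_w(u, area=180.0, ach=ach, volume=450.0))

samples.sort()
print("mean  :", round(statistics.mean(samples), 1), "W")
print("5-95% :", round(samples[500], 1), "-", round(samples[9500], 1), "W")
```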
Article
With the growth of scientific knowledge and the rising number of studies that draw on information from different scientific disciplines, the complexity of systematic scientific research has increased considerably. To answer the 'big' scientific questions, scientists use computer-based methodologies that are adapted almost daily. Developments in computer science and technology have produced a collection of tools designed to make the scientific process more efficient and faster. These tools simplify the creation and execution of computational tasks and are known as 'scientific workflows'. Ilkay Altintas presents four key contributions in the area of collaborative workflows, termed collaborative provenance. The study's principal contributions to capturing collaborative provenance information lead to the development of computer systems that cooperate interoperably and increase the reuse of workflow results.
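To make the notion of capturing provenance in a scientific workflow concrete, here is a minimal sketch (purely illustrative; it does not represent the workflow systems discussed in the cited thesis) in which each step's inputs, outputs and timestamp are recorded as the workflow runs:

```python
# Sketch: a tiny linear workflow that records provenance (inputs, outputs,
# timestamps) for each step so results can be traced and reused.
# Step functions are illustrative placeholders.
from datetime import datetime, timezone

def run_workflow(steps, data):
    provenance = []
    for name, func in steps:
        result = func(data)
        provenance.append({
            "step": name,
            "input": data,
            "output": result,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        data = result
    return data, provenance

steps = [
    ("clean",     lambda xs: [x for x in xs if x is not None]),
    ("normalise", lambda xs: [x / max(xs) for x in xs]),
    ("summarise", lambda xs: sum(xs) / len(xs)),
]

result, trace = run_workflow(steps, [3.0, None, 9.0, 6.0])
print("result:", result)
for record in trace:
    print(record["step"], "->", record["output"])
```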