Making in Mixed Reality. Holographic design, fabrication, assembly and analysis of woven steel structures

Gwyllim Jahn, Fologram
Cameron Newnham, Fologram
Nicholas van den Berg, Fologram
Matthew Beanland, Fologram

Figure 1: Woven Steel Pavilion, CAADRIA 2018 Workshop
ABSTRACT
The construction industry's reliance on two-dimensional documentation results in inefficiency, inconsistency, waste, human error, increased cost and the impracticality of architectural experimentation with novel form, structure, material or fabrication approaches. We describe a software platform that enables designers to create interactive holographic instructions that translate design models into intelligent processes rather than static drawings. A prototypical project to design and construct a pavilion from bent mild steel tube illustrates the use of this software to develop applications assisting with the design, fabrication, assembly and analysis of the structure. We further demonstrate that fabrication within mixed reality environments can enable unskilled construction teams to assemble complex structures in short time frames and with minimal errors, and outline possibilities for further improvements.
INTRODUCTION
The construction industry's reliance on two-dimensional documentation results in inefficiency, inconsistency, waste, human error, increased cost and the impracticality of architectural form that is difficult to describe orthographically. Despite advances in prefabrication systems, robotic fabrication and even building-scale additive manufacturing, the construction industry resists innovations in automation due to unanticipated events that inevitably result from inconsistencies between digital design and as-built site conditions. Improving the ability of design models to respond and adapt to the requirements of construction teams during assembly and fabrication would offer significant efficiency improvements and cost and risk reductions within the construction industry. Replacing often incomplete, redundant or inadequate drawn instructions with constant, contextual and unambiguous descriptions of design intent would reduce the risk associated with architectural experimentation with form, structure, material or fabrication approaches (Boud et al., 1999).
As construction sites embrace building information modelling, construction workers are required to process more information from more domains in order to complete their tasks (Côté et al., 2014). This requires expertise in following chalk marks, reading drawings, interpreting schedules, checking a 3D model on a screen and working with accountability and productivity tools. As design changes are made, significant delays result from reprocessing this information and prevent construction workers from focusing on fabrication tasks. Simplifying the tools and environments in which fabrication instructions, assembly processes, task clarification and verification take place would dramatically reduce the requirements for expertise and limit delays in both routine and non-standard construction. Interactive mixed reality environments simplify fabrication tasks by providing constant, contextual and unambiguous descriptions of design intent to fabrication teams. Creating interactive holographic instructions enables designers to translate design models into intelligent processes rather than static drawings, with significant opportunities and implications for architectural design and production.
Mixed Reality
Milgram defines mixed reality as a continuum of virtual reality technologies in which real and virtual objects are combined in a single display (Milgram and Kishino, 1994). Virtual reality (VR) and augmented reality (AR) lie at opposite ends of this spectrum, with VR describing technologies that place the user in a completely computer-generated virtual world and AR referring to systems that preserve the user's awareness of, and ability to interact with, their immediate physical context by compositing the real world and computer-generated models in a blended 3D space. The idea of using AR to visualize designs or instructions in situ and assist with manufacturing tasks has existed since the technology's conception (Figure 2) (Caudell and Mizell, 1992).
Augmented reality applications have recently become ubiquitous with the release of software development kits for mobile platforms such as ARCore and ARKit. Research has demonstrated that augmented reality can improve task comprehension and lead to faster implementation with fewer assembly errors (Funk et al., 2017), and a proliferation of recent literature demonstrates the use of mobile augmented reality for design visualisation and construction review tasks (Ren, Liu and Ruan, 2017). However, difficulties registering virtual and physical objects using a mobile video display prohibit their application on tasks that require precise spatial positioning or hands-free operation. As such, there is a need to better understand whether the capabilities of current mixed reality head-mounted displays such as the Microsoft HoloLens are sufficient to address the challenges of performing otherwise complex fabrication tasks within mixed reality environments (Behzadan, Dong and Kamat, 2015).
HoloLens
The HoloLens is an optical head-mounted display (HMD) that composites virtual content with the user's field of view by rendering to a transparent stereoscopic waveguide display. The illusion of virtual content appearing fixed in place (registered to the physical environment) is achieved through a combination of 3D scan data created with an infrared depth camera and "inside-out tracking" (Kress and Cummings, 2017) using feature pixels from RGB video. These cameras detect user gestures and hand location, and a 6DOF sensor on the device captures head position and orientation to infer the wearer's gaze. Users interact with virtual content using a combination of gestures, gaze and voice commands, and can develop software applications using the Unity 3D game engine. In order to begin to explore these challenges there is also a need to develop software environments suited to building applications focused on architecture and design.
Objectives
We describe a platform that utilizes sensor data from the HoloLens to build mixed reality applications within Grasshopper, a parametric modelling environment for McNeel & Associates' Rhinoceros 3D. This platform enables us to procedurally generate interactive holographic instructions that allow complex assembly tasks (part selection, verification, display of assembly instructions and verification information) to be performed within a single shared mixed reality environment. The utility of this platform is tested by developing several applications to assist with the construction of a pavilion from welded mild steel pipe. Mixed reality applications are used to overlay analogue tools with holographic guides to improve the fabrication accuracy of complex parts, locate these parts within a larger assembly and measure deviation from digital models using marker tracking in order to compare the performance of our approach to alternative fabrication methods. We determine the usefulness of this environment by working with completely unskilled construction teams that have no existing task expertise and evaluate the capacity of these teams to accurately fabricate and assemble complex structures during a three-day design-build workshop. This project serves as an incremental step towards understanding the design implications of architectural fabrication within mixed reality environments, and explores the ideas of digital guides and adaptive, on-demand fabrication through this prototypical project.
BACKGROUND
Extensive research has been conducted to identify applications of mixed reality within the architecture, engineering and construction industry (Chi, Kang and Wang, 2013), including locating 2D drawings within corresponding 3D environments (Côté et al., 2014), improving registration of digital models to as-built designs (Georgel et al., 2007), filtering redundant data in increasingly large building information models (Chu, Matthews and Love, 2018) and precisely locating and checking parts in space (Yabuki, 2007). The first application using mixed reality for in-situ architectural construction tasks dates back to the 1990s with Webster's system for assembling space frames from audio, text and graphical instructions (Webster et al., 1996).
Marker-based systems have been designed for assembling stacked timber structures from unique parts (Abe et al., 2017) using head-mounted see-through mobile displays. While these systems provide some feedback to the user as to the correct type and placement of parts, the researchers note results of up to 50mm deviation from digital models. Systems using real-time object tracking (Sandy and Buchli, 2018) have been demonstrated to significantly reduce deviation error, while Fazel et al. have proposed a system that locates parts using edge-detection algorithms to enable non-uniform masonry structures to be assembled with accuracy and construction time comparable to fully automated robotic processes (2018). Relatively limited exploration has been made in the utilization of mixed reality to assist with non-standard fabrication tasks. The FreeD, developed at MIT, is one example of a hardware and software system that uses a heads-up display to visualize an in-place digital 3D model for subtractive fabrication with a hand-held CNC mill (Zoran et al., 2014).

These projects show that mixed reality environments improve the fabrication time and precision of non-uniform structures when compared to traditional construction processes, and in some cases are comparable to fully automated robotic processes. If these capabilities are to be implemented by practitioners or within industry, systems will need to be developed for consumer devices and CAD packages in order to eliminate requirements for custom hardware and software solutions and for assembly from standardized or repeated elements.
METHODS
We have developed a software platform that enables designers to rapidly prototype mixed reality applications directly within industry-standard CAD tools, removing the requirement for programming and application development expertise and enabling the improvisation of task-specific applications for design and construction. This platform exchanges near real-time spatial and geometric information bidirectionally between the Microsoft HoloLens and McNeel Rhinoceros 3D and Grasshopper.
Mixed reality applications in Rhino and Grasshopper
Our platform consists of a Windows Mixed Reality application running on the HoloLens and developed in Unity (a game development engine supported by Microsoft for mixed reality development), a Windows desktop application running within the Rhino and Grasshopper environments, and a server application facilitating local network communications between the two over WiFi. Geometry, text and selection data from the Rhino and Grasshopper document are streamed to the HoloLens. Rendering is performed on the headset rather than remotely computed. This removes the requirement for a powerful computer to perform the rendering and enables fast refresh rates which persist even if the network connection is lost, though it imposes limitations on the number of polygons that can be displayed in mixed reality.
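To illustrate the shape of this client-server exchange, the following minimal sketch (not Fologram's actual wire protocol; the JSON message format, host address and port are assumptions) serializes a Rhino mesh into vertex and face lists and pushes it to a headset client over a local TCP socket from a GhPython component:

```python
# Hypothetical sketch of the streaming step: flatten a Rhino mesh into a
# compact JSON message and send it to a headset client on the local network.
# The message schema, host and port are illustrative assumptions.
import json
import socket
import Rhino.Geometry as rg

def mesh_to_message(mesh, object_id):
    """Flatten a Rhino mesh into vertex and face lists for transport."""
    mesh.Faces.ConvertQuadsToTriangles()  # headset renderers expect triangles
    vertices = [[v.X, v.Y, v.Z] for v in mesh.Vertices]
    faces = [[f.A, f.B, f.C] for f in mesh.Faces]
    return json.dumps({"id": object_id, "vertices": vertices, "faces": faces})

def send_message(message, host="192.168.0.10", port=9000):
    """Send one geometry update; the headset keeps rendering its last
    received state if the connection drops, matching the local-rendering
    behaviour described above."""
    s = socket.create_connection((host, port), timeout=1.0)
    try:
        s.sendall(message.encode("utf-8") + b"\n")
    finally:
        s.close()
```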
The HoloLens application continuously streams the device position and orientation, which is used to infer the wearer's gaze (Figure 3). The 3D spatial mesh that the HoloLens creates from long-range infrared depth scan data can be requested and saved by Grasshopper components. Detected gestures such as the presence of a wearer's hand, clicks or drags all trigger events that are registered by Grasshopper components. Vuforia, a computer vision package, is used for marker detection, which can also be streamed as position and orientation data in Grasshopper. By streaming sensor data from the HoloLens to Grasshopper, we create a sandbox environment which allows sensor event data to customize the modes of interactivity and display of a parametric model using standard Grasshopper components. Hence the same gesture can be repurposed for different applications.
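As a concrete, hypothetical example of this repurposing, the sketch below shows how a GhPython component might combine two streamed inputs, a gaze ray and a tap event, to select whichever part mesh the wearer is looking at. The input names (`gaze_origin`, `gaze_direction`, `tap`, `part_meshes`) are assumptions standing in for the streamed HoloLens data, not the platform's actual component interface:

```python
# Sketch of gesture repurposing: intersect the streamed gaze ray with part
# meshes and treat a tap as "select whatever is gazed at".
import Rhino.Geometry as rg

def gazed_part(gaze_origin, gaze_direction, part_meshes):
    """Return the index of the nearest part hit by the gaze ray, or None."""
    ray = rg.Ray3d(gaze_origin, gaze_direction)
    best_index, best_t = None, float("inf")
    for i, mesh in enumerate(part_meshes):
        t = rg.Intersect.Intersection.MeshRay(mesh, ray)  # t < 0 means no hit
        if 0.0 <= t < best_t:
            best_index, best_t = i, t
    return best_index

# A tap gesture selects the gazed part; in another application the same
# event could instead toggle display states or step through bends.
if tap:
    selected = gazed_part(gaze_origin, gaze_direction, part_meshes)
```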
Mixed reality fabrication
In order to test the effectiveness of our platform in assisting with complex fabrication tasks, we developed a pavilion design consisting of a collection of interwoven steel rods. The form of each rod was derived through several filleted bends with linear sections that serve to define parallel joints. The design was derived by fitting curves to a graph following a series of goals that attempt to satisfy aesthetic criteria, structural performance and fabrication constraints (Figure 4). While the algorithm produces results that could be built with some modification during assembly, several mixed reality applications were developed to create a scan of the construction site and identify problematic joints and geometry using a mixed reality markup application.
Visualization of 3D models in context and at scale is useful for understanding and modifying complex geometries that are difficult to read when projected to a 2D screen. By interpolating a curve through the recorded positions of a user's hand, geometry can be created that effectively allows a user to draw directly within 3D space and the Rhino document simultaneously (Figure 5). We utilized this mixed reality application to create markup notes directly within the Rhino model that can be addressed by a collaborator working within the CAD environment at a desktop PC.
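A minimal sketch of this markup logic, assuming `hand_points` holds the streamed Point3d samples of one stroke, could thin out jittery samples before fitting an interpolated curve for baking into the Rhino document:

```python
# Sketch of the markup tool: fit a curve through recorded hand positions.
import Rhino.Geometry as rg

def stroke_to_curve(hand_points, tolerance=2.0):
    """Drop near-duplicate samples, then fit a degree-3 interpolated curve."""
    filtered = [hand_points[0]]
    for p in hand_points[1:]:
        if p.DistanceTo(filtered[-1]) > tolerance:  # skip jitter
            filtered.append(p)
    if len(filtered) < 2:
        return None
    return rg.Curve.CreateInterpolatedCurve(filtered, 3)
```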
A 3D scan of the working environment was built by traversing the physical space on foot and periodically requesting 3D mesh data made available on the HoloLens in Grasshopper over WiFi (Figure 6). While the resolution of the scan is coarse (with the length of produced mesh edges being up to 250mm), we found it sufficient to ensure that the fabricated pavilion could be moved through the construction site and that no collisions would take place during the bending process.
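A simple clearance test against this coarse scan might look like the following sketch; the function name, inputs and tolerances are illustrative, and because scan edges can reach 250mm a generous safety margin is applied around the pipe centerline:

```python
# Sketch of a clearance check: sample a pipe centerline and verify it stays
# clear of the coarse room-scan mesh requested from the HoloLens.
import Rhino.Geometry as rg

def clears_scan(pipe_curve, scan_mesh, pipe_radius=8.0, margin=50.0, samples=100):
    """Return True if every sampled centerline point keeps its distance
    from the scan mesh above the pipe radius plus a safety margin."""
    for t in pipe_curve.DivideByCount(samples, True):
        p = pipe_curve.PointAt(t)
        q = scan_mesh.ClosestPoint(p)
        if p.DistanceTo(q) < pipe_radius + margin:
            return False  # too close to a scanned obstacle
    return True
```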
Holographic bending
After completing the digital design, an application was built in Grasshopper that allowed users to iterate over each part in the model, unroll the polyline geometry, orient the geometry to a physical workspace, make minor adjustments to this model to ensure alignment with any variations in the physical material (e.g. stretched, kinked or bent pipe) and calibrate the hologram to the tolerances of the bending arm. Within the mixed reality environment, a fabricator could control each of these elements of the Grasshopper model through a graphical user interface consisting of buttons for displaying and flipping parts, stepping through bends or iterating over parts in the assembly (Figure 6). A toggle to display all states of the bending process was used to visually ensure that there were no collisions with the physical workspace. The fabricator could flip the bend order if required in order to resolve any detected collisions.
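Each unrolled part reduces to a schedule of feed lengths, pipe rotations and bend angles per bend, the length-rotation-angle (LRA) convention common in tube bending. A sketch of deriving such a schedule from the kinked (pre-fillet) centerline, under the assumption that `vertices` is the ordered list of Point3d bend points, is:

```python
# Sketch: derive (feed length, rotation, bend angle) per bend from the
# kinked centerline vertices, following the manual tube-bending LRA idea.
import math
import Rhino.Geometry as rg

def bend_schedule(vertices):
    schedule = []
    prev_normal = None
    for i in range(1, len(vertices) - 1):
        a = vertices[i] - vertices[i - 1]   # incoming feed direction
        b = vertices[i + 1] - vertices[i]   # outgoing direction after the bend
        feed = a.Length
        bend = rg.Vector3d.VectorAngle(a, b)        # bend angle in radians
        normal = rg.Vector3d.CrossProduct(a, b)     # normal of the bend plane
        if not normal.Unitize():
            continue                                 # collinear: no bend here
        if prev_normal is None:
            rotation = 0.0                           # first bend sets the datum
        else:
            # rotation of the bend plane about the feed axis since last bend
            rotation = rg.Vector3d.VectorAngle(
                prev_normal, normal, rg.Plane(vertices[i], a))
        prev_normal = normal
        schedule.append((feed, math.degrees(rotation), math.degrees(bend)))
    return schedule
```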
During bending, the fabricator aligned the physical pipe with the hologram by eye to ensure the segment length and pipe rotation were correct before beginning the bend. The green overlays showing each stage of the bend (Figure 7) allowed the fabricator to account for twisting of the pipe during bending. Once the pipe aligned with the last of these states, the bend was complete. The fabricator then used the bend selection buttons in the interactive holographic model to proceed to the next bend, and the process repeated. On completion of a part, the bending team passed the pipe to the assembly team where it was placed in the structure.
Upon receiving the part from the bending team, the assembler wearing the HoloLens assisted and directed the assembly team to interweave the part into the correct location (Figure 8) and ensure it did not deviate significantly from the holographic representation. As with the previous application, the assembler could interact with holographic buttons to view either highly task-specific information (only the current part being placed) or all of the parts in the completed structure. The assembler was responsible for double-checking the accuracy of parts, ensuring that the joint locations were correct, and visually inspecting that the placement of the current part did not compromise the position of future parts. Assembly of multiple components could occur in parallel by connecting multiple headsets to the same co-located holographic model, though we did not rely on this extensively during the workshop.
Digitization
A cuboid image marker (Figure 9) was tracked using Vuforia and its position and orientation streamed to Grasshopper. Once the user placed the physical marker at a desired sample point, a 'tap' gesture was performed to register either the start or end of a linear segment of the sampled part. Repeating this process created a series of extended lines which could be intersected and filleted to create an accurate representation of the physical object (Figure 10). Once a pipe had been digitized, the user clicked a holographic 'bake' button to add the curve to a list of processed geometry and begin drawing a separate shape (Figure 11), allowing a continuous process without physically returning to the computer. Once the object had been completely sampled, Galapagos (an evolutionary solver for Grasshopper) was used to fit the digitized model to the target digital model so that we could measure the deviation.
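A sketch of the reconstruction step, assuming `samples` is the flat list of tapped Point3d positions with two taps per straight segment, pairs samples into lines and intersects consecutive lines to recover the kinked centerline, which can subsequently be filleted at the bend radius:

```python
# Sketch: recover the centerline from marker taps by intersecting the
# extended lines through each pair of segment samples.
import Rhino.Geometry as rg

def reconstruct_centerline(samples):
    lines = [rg.Line(samples[i], samples[i + 1])
             for i in range(0, len(samples) - 1, 2)]
    points = [lines[0].From]
    for la, lb in zip(lines, lines[1:]):
        ok, ta, tb = rg.Intersect.Intersection.LineLine(la, lb)
        if not ok:
            continue  # parallel segments: skip, or flag for re-sampling
        pa, pb = la.PointAt(ta), lb.PointAt(tb)
        # midpoint of the closest approach tolerates sampling noise
        points.append(pa + (pb - pa) * 0.5)
    points.append(lines[-1].To)
    return rg.Polyline(points)
```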
RESULTS AND REFLECTION
Woven steel fabrication
The pavilion structure consisted of 92 unique parts and 560 individual bends and was fabricated and assembled over a period of three days by two teams of two people without prior experience with any of the tools or fabrication techniques required for the project. Each team consisted of a fabricator wearing the HoloLens and following holographic instructions, and an assistant performing tasks as directed. In order to transport the pavilion to site, it was assembled in four discrete parts following a linear construction sequence. On completion of each part, connection points were digitized to measure deviation from the digital model and adjust the geometry of subsequent parts to minimize error in the joint.
Deviation and adaptive fabrication
For the digitized part (Figure 13), the digital design model and the digitized physical model differ by at most 46mm, with an average of 20mm across all parts. Given the structure was fabricated from 16mm pipe, this deviation could often be attributed to human error in placing a joint on a different side of the pipe to the digital model, which is considered an acceptable error as it may facilitate faster fabrication time without compromising the design language of the structure. Larger deviation can be attributed to human error during bending and part assembly, deflection in the physical model due to self-weight and contortion, error in the digital model resulting in self-intersecting parts that could not be accurately reproduced with physical material, holographic drift from inside-out device tracking, and deliberate adjustments to the model during assembly (Figure 12).
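These figures correspond to closest-point distances between the digitized and design centerlines after best-fit alignment (performed with Galapagos over rigid-body transform parameters, as described above). A sketch of the deviation measurement itself, assuming the two curves are already aligned, is:

```python
# Sketch: sample the digitized centerline and measure closest-point
# distance to the design curve, reporting maximum and mean deviation.
import Rhino.Geometry as rg

def deviation(digitized, design, samples=200):
    dists = []
    for t in digitized.DivideByCount(samples, True):
        p = digitized.PointAt(t)
        ok, u = design.ClosestPoint(p)
        if ok:
            dists.append(p.DistanceTo(design.PointAt(u)))
    return max(dists), sum(dists) / len(dists)
```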
The fabrication team deliberately deviated from the digital model in order to reduce the construction time of the pavilion. Often a slight error in the placement of a part could be accommodated by adjusting the joints with subsequent parts, or by manually bending subsequent parts to re-align with the geometry of the structure. Alternatively, the part could be removed and re-bent with more care and precision at the cost of increased construction time. As parts are fabricated on demand using an interactive holographic model, similar adaptations could be made by digitizing the incorrect part and adjusting the geometry of connecting parts before fabrication. This enables the adaptive construction process to be extended to larger-scale structures and materials where manual adjustment of parts is not feasible.
Causes of error
Inaccuracies in segment lengths between bends result in significant error, as lengths cannot be adapted during assembly and instead lead to accumulative errors across the structure. However, we did not observe this as a common cause of error in the Woven Steel pavilion, as registering the correct length of pipe using a holographic guide is a simple task working from only 1D information and is not affected by occlusion of physical material by holographic information. Inaccurate bend angles also result in errors during assembly but can be accounted for due to some flexibility in the material. We observe that using a 3D holographic guide to accurately reproduce bend angles requires significant skill, as even small errors in bend angles accumulate across the part and result in deviations from the digital model. During assembly, parts can be placed on the 'wrong side' of a joint or at the wrong length along a joint, resulting in minor errors. These were largely accommodated through adaptive assembly.
Comparison to automation
The authors have some experience working with robotic rod bending, and in comparison we identify some key differences. While robotic bending processes are typically used for the mass-customization of parts with minor variations made within the tolerances of the robotic arm, holographic fabrication enables all parts to be entirely unique without dramatically increasing the difficulty of setup, production or assembly. As the parts are also produced on demand, no labelling or part identification is required. Subsequently, there are fewer constraints on the design model, which does not need to account for collision with moving robots or machinery. The direct visualization of the design model for assembly allows for the design of unique joint locations or types without the need for additional specification or documentation. While manual fabrication limits the scale of material, it offers advantages in reducing safety requirements, hardware expertise and equipment costs in comparison to robotic or CNC processes. The tooling is inherently mobile due to the small size and weight of the bending equipment, and minimal expertise is required to fabricate parts or assemble the structure.
Further work
The accuracy and speed of our holographic fabrication approach could be improved by providing greater feedback to the user. Coloring holographic instructions based on the proximity of placed physical material to the digital model could be achieved using marker tracking and would provide immediate feedback regarding the accuracy of a bend or the placement of parts. Digitization of the structure during assembly would also improve the accuracy of the structure by adjusting digital models and holographic instructions for parts following in the assembly sequence. We recognize the need to measure and, where possible, reduce the drift caused by inside-out tracking systems, and in future work intend to compare results from more robust industry-standard 3D scanning processes to those of the holographic digitization method described in this paper.
Whilst we identify some advantages of mixed reality fabrication environments in comparison to robotic processes, we recognize the advantages of a hybrid approach. Future developments of this research will work with CNC and robotic processes to combine faster and more accurate part fabrication with the dexterity and intuition of human assembly teams. Automated part fabrication will also assist in scaling up the process and facilitate working with materials too rigid to bend by hand.
CONCLUSION
Fabrication within mixed reality environments enables construction teams to assemble structures consisting entirely of complex joints between unique parts without the need to follow 2D instructions or work from 2.5D setout points. While we did not conduct a comparative study of fabrication time using traditional documentation against fabrication time using holographic instructions, it is not unreasonable to conclude that using drawings and templates for 560 bend angles, feed measurements and rotations would be excessively tedious and time consuming. We demonstrate that traditional drawn instructions can be replaced with holographic applications that describe task-specific information to fabricators in context and as needed, reducing the requirement to switch attention from machine and material tasks to drawings or screens and instead combining these two mediums in a mixed reality experience. This improves the capability of fabricators to develop skills and expertise in complex fabrication tasks, as demonstrated by the construction of a pavilion by construction teams with no prior experience with the material and fabrication processes. We demonstrate that working from holographic guides enables fabrication teams to make adjustments to the design as necessary during fabrication in order to reduce construction time and complexity and reduce the risk of task failure and accumulative error.
This research further demonstrates that current-generation head-mounted displays such as the HoloLens provide sufficient registration of holograms to physical environments to use these as a reference for the accuracy of an assembly or fabrication task, with most deviation from digital models deriving from a lack of assembly experience rather than drift in the holographic reference model. We measure this using a novel method of digitizing the as-built pavilion with marker tracking that creates an accurate model from only a limited number of sample points, providing immediate feedback to designers without the requirement for 3D scanning setups and mesh reconstruction software.
ACKNOWLEDGEMENTS
This work was produced with students during an invited workshop at the 2018 CAADRIA Conference. We thank the participants and organizers for their confidence and support.
REFERENCES
Abe, U-ichi, Kensuke Hotta, Akito Hotta, Yosuke Takami, Hikaru Ikeda, and Yasushi Ikeda. 2017. "Digital Construction: Demonstration of Interactive Assembly Using Smart Discrete Papers with RFID and AR Codes." In Protocols, Flows, and Glitches: Proceedings of the 22nd CAADRIA Conference, 75-84.

Behzadan, Amir, Suyang Dong, and Vineet Kamat. 2015. "Augmented Reality Visualization: A Review of Civil Infrastructure System Applications." Advanced Engineering Informatics.

Boud, A. C., D. J. Haniff, Chris Baber, and S. J. Steiner. 1999. "Virtual Reality and Augmented Reality as a Training Tool for Assembly Tasks." In Proceedings of the IEEE International Conference on Information Visualization, 32-36.

Caudell, T. P., and D. W. Mizell. 1992. "Augmented Reality: An Application of Heads-Up Display Technology to Manual Manufacturing Processes." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, vol. 2, 659-669.

Chi, Hung-Lin, Shih-Chung Kang, and Xiangyu Wang. 2013. "Research Trends and Opportunities of Augmented Reality Applications in Architecture, Engineering, and Construction." Automation in Construction 33: 116-122.

Chu, Michael, Jane Matthews, and Peter E. D. Love. 2018. "Integrating Mobile Building Information Modelling and Augmented Reality Systems: An Experimental Study." Automation in Construction 85: 305-316.

Côté, S., M. Beauvais, A. Girard-Vallée, and R. Snyder. 2014. "A Live Augmented Reality Tool for Facilitating Interpretation of 2D Construction Drawings." In Augmented and Virtual Reality: AVR 2014, Lecture Notes in Computer Science, vol. 8853, edited by L. De Paolis and A. Mongelli. Cham: Springer.

Funk, Markus, Andreas Bächler, Liane Bächler, Thomas Kosch, Thomas Heidenreich, and Albrecht Schmidt. 2017. "Working with Augmented Reality? A Long-Term Analysis of In-Situ Instructions at the Assembly Workplace." In Proceedings of PETRA.

Georgel, P., P. Schroeder, S. Benhimane, S. Hinterstoisser, Mirko Appel, and Nassir Navab. 2007. "An Industrial Augmented Reality Solution for Discrepancy Check." In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), 111-115.

Kress, Bernard C., and William J. Cummings. 2017. "11-1: Invited Paper: Towards the Ultimate Mixed Reality Experience: HoloLens Display Architecture Choices." SID Symposium Digest of Technical Papers 48 (1): 127-131.

Milgram, Paul, and Fumio Kishino. 1994. "A Taxonomy of Mixed Reality Visual Displays." IEICE Transactions on Information Systems E77-D (12): 1321-1329.

Ren, Jiang, Yingying Liu, and Zhicheng Ruan. 2017. "Architecture in an Age of Augmented Reality: Applications and Practices for Mobile Intelligence BIM-Based AR in the Entire Lifecycle." DEStech Transactions on Computer Science and Engineering.

Sandy, T., and J. Buchli. 2018. "Object-Based Visual-Inertial Tracking for Additive Fabrication." IEEE Robotics and Automation Letters.

Webster, A., S. Feiner, B. MacIntyre, W. Massie, and T. Krueger. 1996. "Augmented Reality in Architectural Construction, Inspection and Renovation." In Proceedings of the ASCE Third Congress on Computing in Civil Engineering.

Yabuki, N. 2007. "Cooperative Reinforcing Bar Arrangement and Checking by Using Augmented Reality." Berlin: Springer.

Zoran, Amit, et al. 2014. "The Hybrid Artisans: A Case Study in Smart Tools." ACM Transactions on Computer-Human Interaction 21 (3): 1-29.
Figure 14: Welding an assembled part of the structure
Figure 15: A participant teaching a colleague the bending application
Figure 16: A participant instructing in the assembly of part of the structure

IMAGE CREDITS
Figure 2: © T. P. Caudell and D. W. Mizell 1992
All other drawings and images by the authors.
Gwyllim Jahn is the co-founder of Fologram and on sabbatical as a Lecturer in Architecture at RMIT. He develops design research in the fields of mixed reality environments, autonomous robotic fabrication, behavioural design systems and creative applications of machine learning. His work has been published in leading computational design conferences and journals including IJAC, ACADIA and RobArch, and he has given talks, presentations and workshops at international institutions including MIT, Stuttgart ICD, Cooper Union, UCL, UTS, Tongji and Tsinghua University.

Cameron Newnham is the co-founder and CTO of Fologram. His experience lies in the creation of novel tools for designing and fabricating complex geometric systems, including code libraries for mixed reality interfaces, 3D printing and robotic fabrication. Cameron has experience as a computational designer in internationally renowned and award-winning architectural practices, academic experience as an Associate Lecturer and Industry Fellow at RMIT University and Melbourne University, and has led numerous international design and build workshops in Shanghai, New York, Paris, Boston, Sydney, and Melbourne.

Nicholas van den Berg is the co-founder of Fologram and a technology research and development consultant.

Matthew Beanland is completing his Masters of Architecture thesis at RMIT University.
Figure 17: A small chunk of the pavilion with image marker used to calibrate digital and physical models during digitization
We present an approach to combining digital fabrication and craft, demonstrating a hybrid interaction paradigm where human and machine work in synergy. The FreeD is a hand-held digital milling device, monitored by a computer while preserving the makers freedom to manipulate the work in many creative ways. Relying on a pre-designed 3D model, the computer gets into action only when the milling bit risks the objects integrity, preventing damage by slowing down the spindle speed, while the rest of the time it allows complete gestural freedom. We present the technology and explore several interaction methodologies for carving. In addition, we present a user study that reveals how synergetic cooperation between human and machine preserves the expressiveness of manual practice. This quality of the hybrid territory evolves into design personalization. We conclude on the creative potential of open-ended procedures within this hybrid interactive territory of manual smart tools and devices.