Making in Mixed Reality
1 Woven Steel Pavilion, CAADRIA 2018 Workshop
Gwyllim Jahn
Fologram
Cameron Newnham
Fologram
Nicholas van den Berg
Fologram
Matthew Beanland
Fologram
Holographic design, fabrication, assembly and analysis of woven
steel structures
ABSTRACT
The construction industry’s reliance on two-dimensional documentation results in
inefficiency, inconsistency, waste, human error, increased cost and the impracticality
of architectural experimentation with novel form, structure, material or fabrication
approaches. We describe a software platform that enables designers to create interactive
holographic instructions that translate design models into intelligent processes rather
than static drawings. A prototypical project to design and construct a pavilion from bent
mild steel tube illustrates the use of this software to develop applications assisting with the
design, fabrication, assembly and analysis of the structure. We further demonstrate that
fabrication within mixed reality environments can enable unskilled construction teams to
assemble complex structures in short time frames and with minimal errors, and outline
possibilities for further improvements.
INTRODUCTION
The construction industry’s reliance on two-dimensional documentation results in inefficiency, inconsistency, waste, human error, increased cost and the impracticality of architectural form that is difficult to describe orthographically. Despite advances in prefabrication systems, robotic fabrication and even building-scale additive manufacturing, the construction industry resists innovations in automation due to unanticipated events that inevitably result from inconsistencies between digital designs and as-built site conditions. Improving the ability of design models to respond and adapt to the requirements of construction teams during assembly and fabrication would offer significant efficiency improvements and cost and risk reductions within the construction industry. Replacing often incomplete, redundant or inadequate drawn instructions with constant, contextual and unambiguous descriptions of design intent would reduce the risk associated with architectural experimentation with form, structure, material or fabrication approaches (Boud et al. 1999).
As construction sites embrace building information modelling, construction workers are required to process more information from more domains in order to complete their tasks (Côté et al. 2014). This requires expertise in following chalk marks, reading drawings, interpreting schedules, checking a 3D model on a screen and working with accountability and productivity tools. As design changes are made, significant delays result from reprocessing this information and prevent construction workers from focusing on fabrication tasks. Simplifying the tools and environments in which fabrication instructions, assembly processes, task clarification and verification take place would dramatically reduce the requirements for expertise and limit delays in both routine and non-standard construction. Interactive mixed reality environments simplify fabrication tasks by providing constant, contextual and unambiguous descriptions of design intent to fabrication teams. Creating interactive holographic instructions enables designers to translate design models into intelligent processes rather than static drawings, with significant opportunities and implications for architectural design and production.
Mixed Reality
Milgram defines mixed reality as a continuum of virtual reality technologies in which real and virtual objects are combined in a single display (Milgram and Kishino 1994). Virtual reality (VR) and augmented reality (AR) lie at opposite ends of this spectrum, with VR describing technologies that place the user in a completely computer-generated virtual world and AR referring to systems that preserve the user’s awareness of, and ability to interact with, their immediate physical context by compositing the real world and computer-generated models in a blended 3D space. The idea of using AR to visualize designs or instructions in situ and assist with manufacturing tasks has existed since the technology’s conception (Figure 2) (Caudell and Mizell 1992).
Augmented reality applications have recently become ubiquitous with the release of software development kits for mobile platforms such as ARCore and ARKit. Research has demonstrated that augmented reality can improve task comprehension and lead to faster implementation with fewer assembly errors (Funk et al. 2017), and a proliferation of recent literature demonstrates the use of mobile augmented reality for design visualization and construction review tasks (Ren, Liu and Ruan 2017). However, difficulties registering virtual and physical objects using a mobile video HMD prohibit their application to tasks that require precise spatial positioning or hands-free operation. As such, there is a need to better understand whether the capabilities of current mixed reality head-mounted displays such as the Microsoft HoloLens are sufficient to address the challenges of performing otherwise complex fabrication tasks within mixed reality environments (Behzadan, Dong and Kamat 2015).
HoloLens
The HoloLens is an optical head-mounted display (HMD) that composites virtual content with the user’s field of view by rendering to a transparent stereoscopic waveguide display. The illusion of virtual content appearing fixed in place (registered to the physical environment) is achieved through a combination of 3D scan data created with an infrared depth camera and “inside-out tracking” (Kress and Cummings 2017) using feature pixels from RGB video. These cameras detect user gestures and hand location, and a 6DOF sensor on the device captures head position and orientation to infer the wearer’s gaze. Users interact with virtual content using a combination of gestures, gaze and voice commands, and developers can build software applications using the Unity 3D game engine. In order to begin to explore these challenges, there is also a need to develop software environments suited to building applications focused on architecture and design.
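As a simple illustration of how gaze can be inferred from a streamed head pose, the following minimal Python sketch rotates an assumed forward vector by the device orientation; the (w, x, y, z) quaternion convention and function names are our own, not the HoloLens API.

```python
# Minimal sketch: inferring a gaze ray from a streamed head pose.
# Assumes position as (x, y, z) and orientation as a unit quaternion
# (w, x, y, z); conventions are illustrative, not the device API.

def rotate_by_quaternion(v, q):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # v' = v + w*t + (q_vec x t), where t = 2 * (q_vec x v)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def gaze_ray(head_position, head_orientation):
    """Return (origin, direction) of the inferred gaze ray."""
    return head_position, rotate_by_quaternion((0.0, 0.0, 1.0), head_orientation)

# e.g. an identity quaternion leaves the forward vector unchanged:
# gaze_ray((0, 1.7, 0), (1, 0, 0, 0)) -> ((0, 1.7, 0), (0.0, 0.0, 1.0))
```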
Objectives
We describe a platform that utilizes sensor data from the HoloLens to build mixed reality applications within Grasshopper, a parametric modelling environment for McNeel & Associates’ Rhinoceros 3D. This platform enables us to procedurally generate interactive holographic instructions that allow complex assembly tasks (part selection, verification, and the display of assembly instructions and verification information) to be performed within a single shared mixed reality environment. The utility of this platform is tested by developing several applications to assist with the construction of a pavilion from welded mild steel pipe. Mixed reality applications are used to overlay analogue tools with holographic guides to improve the fabrication accuracy of complex parts, to locate these parts within a larger assembly, and to measure deviation from digital models using marker tracking in order to compare the performance of our approach to alternative fabrication methods. We determine the usefulness of this environment by working with completely unskilled construction teams that have no existing task expertise and evaluate the capacity of these teams to accurately fabricate and assemble complex structures during a three-day design-build workshop. This project serves as an incremental step towards understanding the design implications of architectural fabrication within mixed reality environments, and explores the ideas of digital guides and adaptive, on-demand fabrication through a prototypical project.
BACKGROUND
Extensive research has been conducted to identify applications of mixed reality within the architecture, engineering and construction industry (Chi, Kang and Wang 2013), including locating 2D drawings within corresponding 3D environments (Côté et al. 2014), improving registration of digital models to as-built designs (Georgel et al. 2007), filtering redundant data in increasingly large building information models (Chu, Matthews and Love 2018) and precisely locating and checking parts in space (Yabuki 2007). The first application using mixed reality for in-situ architectural construction tasks dates back to the 1990s with Webster’s system for assembling space frames from audio, text and graphical instructions (Webster et al. 1996).
Marker-based systems have been designed for assembling stacked timber structures from unique parts (Abe et al. 2017) using head-mounted see-through mobile displays. While these systems provide some feedback to the user as to the correct type and placement of parts, the researchers note deviations of up to 50 mm from digital models. Systems using real-time object tracking (Sandy and Buchli 2018) have been demonstrated to significantly reduce deviation error, while Fazel et al. have proposed a system that locates parts using edge-detection algorithms to enable non-uniform masonry structures to be assembled with accuracy and construction time comparable to fully automated robotic processes (2018). Relatively limited exploration has been made of the use of mixed reality to assist with non-standard fabrication tasks. The FreeD, developed at MIT, is one example of a hardware and software system that uses a heads-up display to visualize an in-place digital 3D model for subtractive fabrication with a hand-held CNC mill (Zoran et al. 2014).
These projects show that mixed reality environments improve the fabrication time and precision of non-uniform structures when compared to traditional construction processes, and in some cases are comparable to fully automated robotic processes. If these capabilities are to be implemented by practitioners or within industry, systems will need to be developed for consumer devices and CAD packages to eliminate the requirements for custom hardware and software solutions and for assembly from standardized or repeated elements.
METHODS
We have developed a software platform that enables designers to rapidly prototype mixed reality applications directly within industry-standard CAD tools, removing the requirement for programming and application development expertise and enabling the improvisation of task-specific applications for design and construction. This platform streams near real-time spatial and geometric information bidirectionally between the Microsoft HoloLens and McNeel Rhinoceros 3D and Grasshopper.
Mixed reality applications in Rhino and Grasshopper
Our platform consists of a Windows Mixed Reality application running on the HoloLens and developed in Unity (a game development engine supported by Microsoft for mixed reality development), a Windows desktop application running within the Rhino and Grasshopper environments, and a server application facilitating local network communications between the two over WiFi. Geometry, text and selection data from the Rhino and Grasshopper document are streamed to the HoloLens. Rendering is performed on the headset rather than remotely computed. This removes the requirement for a powerful computer to perform the rendering and enables fast refresh rates that persist even if the network connection is lost, though it imposes limitations on the number of polygons that can be displayed in mixed reality.
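A minimal sketch of what streaming geometry under a polygon budget might look like is given below; the JSON message schema, field names and budget value are our assumptions for illustration, not the platform’s actual wire format.

```python
import json

# Illustrative per-scene polygon budget; the real device limit depends
# on scene complexity and is not documented here.
POLYGON_BUDGET = 60000

def mesh_message(vertices, faces, object_id):
    """Pack a mesh as a JSON message for the headset, refusing meshes
    that would exceed the assumed on-device rendering budget."""
    if len(faces) > POLYGON_BUDGET:
        raise ValueError("mesh exceeds headset polygon budget")
    return json.dumps({
        "type": "mesh",
        "id": object_id,
        "vertices": [list(v) for v in vertices],  # [x, y, z] triples
        "faces": [list(f) for f in faces],        # vertex index triples
    })

# e.g. a single triangle:
# msg = mesh_message([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)], "part-01")
```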
The HoloLens application continuously streams the device position and orientation, which are used to infer the wearer’s gaze (Figure 3). The 3D spatial mesh that the HoloLens creates from long-range infrared depth scan data can be requested and saved by Grasshopper components. Detected gestures, such as the presence of a wearer’s hand, clicks or drags, all trigger events that are registered by Grasshopper components. Vuforia, a computer vision package, is used for marker detection, and detected marker positions and orientations can also be streamed to Grasshopper. By streaming sensor data from the HoloLens to Grasshopper, we create a sandbox environment that allows sensor event data to customize the modes of interactivity and display of a parametric model using standard Grasshopper components. Hence the same gesture can be repurposed for different applications.
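The sandbox idea can be illustrated with a short sketch in which the same gesture event is routed to whatever behaviour the active application binds to it; the event names and payloads here are hypothetical, not the platform’s actual API.

```python
# Hypothetical event names and payloads illustrating how one gesture
# is repurposed per application.

class GestureRouter:
    """Route streamed gesture events to application-defined handlers."""

    def __init__(self):
        self.handlers = {}

    def on(self, gesture, handler):
        """Bind a callback to a gesture name, e.g. 'tap' or 'drag'."""
        self.handlers.setdefault(gesture, []).append(handler)

    def dispatch(self, gesture, payload):
        """Fire all handlers bound to an incoming gesture event."""
        for handler in self.handlers.get(gesture, []):
            handler(payload)

router = GestureRouter()
# In the markup application a tap records a hand position; in the
# bending application the same tap advances to the next bend.
router.on("tap", lambda p: print("record point at", p["hand_position"]))
router.dispatch("tap", {"hand_position": (0.2, 1.1, 0.4)})
```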
Mixed reality fabrication
In order to test the effectiveness of our platform in assisting with complex fabrication tasks, we developed a pavilion design consisting of a collection of interwoven steel rods. The form of each rod was derived through several filleted bends with linear sections that serve to define parallel joints. The design was derived by fitting curves to a graph following a series of goals that attempt to satisfy aesthetic criteria, structural performance and fabrication constraints (Figure 4). While the algorithm produces results that could be built with some modification during assembly, several mixed reality applications were developed to scan the construction site and to identify problematic joints and geometry using a mixed reality markup application.
Visualization of 3D models in context and at scale is useful for understanding and modifying complex geometries that are difficult to read when projected to a 2D screen. By interpolating a curve through the recorded positions of a user’s hand, geometry can be created that effectively allows a user to draw directly within 3D space and the Rhino document simultaneously (Figure 5). We utilized this mixed reality application to create markup notes directly within the Rhino model that can be addressed by a collaborator working within the CAD environment at a desktop PC.
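As a minimal sketch of this markup tool, assuming execution inside Rhino’s Python interpreter (e.g. a GhPython component), the streamed hand positions can be interpolated into a document curve; the function name and sample threshold are our own.

```python
import rhinoscriptsyntax as rs

def bake_markup(hand_positions, min_points=3):
    """Interpolate a curve through streamed hand positions and add it
    to the open Rhino document as a markup note."""
    if len(hand_positions) < min_points:
        return None  # too few samples to draw a meaningful curve
    return rs.AddInterpCurve(hand_positions, degree=3)
```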
A 3D scan of the working environment was built by traversing the physical space on foot and periodically requesting the 3D mesh data made available by the HoloLens to Grasshopper over WiFi (Figure 6). While the resolution of the scan is coarse (with the length of produced mesh edges being up to 250 mm), we found it sufficient to ensure that the fabricated pavilion could be moved through the construction site and that no collisions would take place during the bending process.
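The resolution check described above could be approximated with a plain-Python measurement of the scan’s longest mesh edge against the 250 mm figure; the mesh representation (vertex triples and face index lists) is an assumption for illustration.

```python
import math

def max_edge_length(vertices, faces):
    """Longest edge in a mesh given vertex triples and face index lists."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    longest = 0.0
    for face in faces:
        for i in range(len(face)):
            a = vertices[face[i]]
            b = vertices[face[(i + 1) % len(face)]]
            longest = max(longest, dist(a, b))
    return longest

# e.g. flag a scan coarser than the ~250 mm resolution reported above:
# if max_edge_length(verts, faces) > 250.0: request a new scan pass
```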
Holographic bending
After completing the digital design, an application was built in Grasshopper that allowed users to iterate over each part in the model, unroll the polyline geometry, orient the geometry to a physical workspace, make minor adjustments to this model to ensure alignment with any variations in the physical material (e.g. stretched, kinked or bent pipe), and calibrate the hologram to the tolerances of the bending arm. Within the mixed reality environment, a fabricator could control each of these elements of the Grasshopper model through a graphical user interface consisting of buttons for displaying and flipping parts, stepping through bends or iterating over parts in the assembly (Figure 6). A toggle displaying all states of the bending process was used to visually ensure that there were no collisions with the physical workspace. The fabricator could flip the bend order if required in order to resolve any detected collisions.
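The “unrolling” step can be sketched as deriving a feed length, bend angle and pipe rotation for each interior vertex of a part’s polyline. The plain-Python reconstruction below is our own illustration of that idea, not Fologram’s implementation; it assumes no collinear segments and reports unsigned rotations.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def unit(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def bend_schedule(polyline):
    """Return (feed_length, bend_angle, rotation) for each interior
    vertex of a part's polyline; angles in degrees."""
    schedule, prev_normal = [], None
    for i in range(1, len(polyline) - 1):
        a, b, c = polyline[i - 1], polyline[i], polyline[i + 1]
        d1, d2 = unit(sub(b, a)), unit(sub(c, b))
        feed = math.dist(a, b)  # straight feed before this bend
        bend = math.degrees(math.acos(max(-1.0, min(1.0, dot(d1, d2)))))
        normal = unit(cross(d1, d2))  # plane of this bend
        if prev_normal is None:
            rotation = 0.0  # first bend defines the reference plane
        else:
            cosr = max(-1.0, min(1.0, dot(prev_normal, normal)))
            rotation = math.degrees(math.acos(cosr))
        schedule.append((feed, bend, rotation))
        prev_normal = normal
    return schedule

# e.g. a right-angle elbow in the XY plane:
# bend_schedule([(0, 0, 0), (1, 0, 0), (1, 1, 0)]) -> [(1.0, 90.0, 0.0)]
```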
During bending, the fabricator aligned the physical pipe with the hologram by eye to ensure the segment length and pipe rotation were correct before beginning the bend. The green overlays showing each stage of the bend (Figure 7) allowed the fabricator to account for twisting of the pipe during bending. Once the pipe aligned with the last of these states, the bend was complete. The fabricator then used the bend selection buttons in the interactive holographic model to proceed to the next bend, and the process repeated. On completion of a part, the bending team passed the pipe to the assembly team, where it was placed in the structure.
Upon receiving the part from the bending team, the assembler wearing the HoloLens assisted and directed the assembly team in interweaving the part into the correct location (Figure 8) and ensuring it did not deviate significantly from the holographic representation. As with the previous application, the assembler could interact with holographic buttons to view either highly task-specific information (only the current part being placed) or all of the parts in the completed structure. The assembler was responsible for double-checking the accuracy of parts, ensuring that the joint locations were correct, and visually inspecting that the placement of the current part did not compromise the position of future parts. Assembly of multiple components could occur in parallel by connecting multiple headsets to the same co-located holographic model, though we did not rely on this extensively during the workshop.
Digitization
A cuboid image marker (Figure 9) was tracked using Vuforia, and its position and orientation were streamed to Grasshopper. Once the user placed the physical marker at a desired sample point, a ‘tap’ gesture was performed to register either the start or end of a linear segment of the sampled part. Repeating this process created a series of extended lines that could be intersected and filleted to create an accurate representation of the physical object (Figure 10). Once a pipe had been digitized, the user clicked a holographic ‘bake’ button to add the curve to a list of processed geometry and begin drawing a separate shape (Figure 11), allowing a continuous process without physically returning to the computer. Once the object had been completely sampled, Galapagos (an evolutionary solver for Grasshopper) was used to fit the digitized model to the target digital model so that deviation could be measured.
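Because two sampled 3D lines rarely intersect exactly, the corner they share can be approximated as the midpoint of the closest points between them. The sketch below is a standard closest-point construction written for illustration; the function names are our own.

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def corner_from_lines(p1, d1, p2, d2, tol=1e-9):
    """Approximate the corner shared by two sampled lines (point p,
    direction d each) as the midpoint of their closest points.
    Returns None when the lines are near-parallel."""
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < tol:
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, s))  # closest point on line 1
    q2 = add(p2, scale(d2, t))  # closest point on line 2
    return scale(add(q1, q2), 0.5)
```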
RESULTS AND REFLECTION
Woven steel fabrication
The pavilion structure consisted of 92 unique parts and 560 individual bends, and was fabricated and assembled over a period of three days by two teams of two people without prior experience with any of the tools or fabrication techniques required for the project. Each team consisted of a fabricator wearing the HoloLens and following holographic instructions, and an assistant performing tasks as directed. In order to transport the pavilion to site, it was assembled in four discrete parts following a linear construction sequence. On completion of each part, connection points were digitized to measure deviation from the digital model and adjust the geometry of subsequent parts to minimize error in the joint.
Deviation and adaptive fabrication
For the digitized part (Figure 13), the digital design model and the digitized physical model differ by at most 46 mm, with an average of 20 mm across all parts. Given that the structure was fabricated from 16 mm pipe, this deviation could often be attributed to human error in placing a joint on a different side of the pipe to the digital model, which is considered an acceptable error as it may facilitate faster fabrication without compromising the design language of the structure. Larger deviations can be attributed to human error during bending and part assembly, deflection in the physical model due to self-weight and contortion, error in the digital model resulting in self-intersecting parts that could not be accurately reproduced with physical material, holographic drift from inside-out device tracking, and deliberate adjustments to the model during assembly (Figure 12).
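Measurements like the 46 mm maximum and 20 mm average above amount to closest-point queries between sampled points on the digitized part and the design geometry; a minimal plain-Python version, with the design curve simplified to a polyline, is sketched below.

```python
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else sum(x * y for x, y in zip(ap, ab)) / denom
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return [ai + t * x for ai, x in zip(a, ab)]

def deviation_stats(samples, design_polyline):
    """Return (max, mean) distance from sample points to the polyline."""
    dists = [
        min(math.dist(p, closest_point_on_segment(p, a, b))
            for a, b in zip(design_polyline, design_polyline[1:]))
        for p in samples
    ]
    return max(dists), sum(dists) / len(dists)
```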
The fabrication team deliberately deviated from the digital model in order to reduce the construction time of the pavilion. Often a slight error in the placement of a part could be accommodated by adjusting the joints with subsequent parts, or by manually bending subsequent parts to re-align with the geometry of the structure. Alternatively, the part could be removed and re-bent with more care and precision at the cost of increased construction time. As parts are fabricated on demand using an interactive holographic model, similar adaptations could be made by digitizing the incorrect part and adjusting the geometry of connecting parts before fabrication. This enables the adaptive construction process to be extended to larger-scale structures and materials where manual adjustment of parts is not feasible.
Causes of error
Inaccuracies in segment lengths between bends result in significant error, as lengths cannot be adapted during assembly and instead lead to cumulative errors across the structure. However, we did not observe this as a common cause of error in the Woven Steel pavilion, as registering the correct length of pipe using a holographic guide is a simple task working from only 1D information and is not affected by occlusion of physical material by holographic information. Inaccurate bend angles also result in errors during assembly but can be accounted for due to some flexibility in the material. We observe that using a 3D holographic guide to accurately reproduce bend angles requires significant skill, as even small errors in bend angles accumulate across the part and result in deviations from the digital model. During assembly, parts can be placed on the ‘wrong side’ of a joint or at the wrong length along a joint, resulting in minor errors. These were largely accommodated through adaptive assembly.
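The way small angular errors compound across a part can be illustrated with a toy 2D simulation; the segment lengths, bend angles and noise magnitude below are illustrative parameters, not measurements from the workshop.

```python
import math
import random

def endpoint_error(n_bends=6, segment=500.0, bend=30.0, noise_deg=1.0):
    """Distance (model units) between the ideal and a noisy end point
    of a multi-bend part traced in 2D."""
    def trace(noise):
        x = y = heading = 0.0
        for _ in range(n_bends):
            x += segment * math.cos(heading)
            y += segment * math.sin(heading)
            heading += math.radians(bend + random.uniform(-noise, noise))
        return x, y
    ix, iy = trace(0.0)        # ideal part
    nx, ny = trace(noise_deg)  # part with per-bend angular noise
    return math.hypot(nx - ix, ny - iy)

errors = [endpoint_error() for _ in range(1000)]
print("mean end-point deviation: %.1f" % (sum(errors) / len(errors)))
```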
Comparison to automation
The authors have some experience working with robotic rod bending and, in comparison, we identify some key differences. While robotic bending processes are typically used for the mass-customization of parts with minor variations made within the tolerances of the robotic arm, holographic fabrication enables all parts to be entirely unique without dramatically increasing the difficulty of setup, production or assembly. As the parts are also produced on demand, no labelling or part identification is required. Consequently, there are fewer constraints on the design model, which does not need to account for collision with moving robots or machinery. The direct visualization of the design model for assembly allows for the design of unique joint locations or types without the need for additional specification or documentation. While manual fabrication limits the scale of material, it offers advantages in reducing safety requirements, hardware expertise and equipment costs in comparison to robotic or CNC processes. The tooling is inherently mobile due to the small size and weight of the bending equipment, and minimal expertise is required to fabricate parts or assemble the structure.
Further work
The accuracy and speed of our holographic fabrication approach could be improved by providing greater feedback to the user. Coloring holographic instructions according to the proximity of tracked physical material to its digital counterpart could be achieved using marker tracking, and would provide immediate feedback regarding the accuracy of a bend or the placement of parts. Digitization of the structure during assembly would also improve the accuracy of the structure by adjusting the digital models and holographic instructions for parts following in the assembly sequence. We recognize the need to measure and, where possible, reduce the drift caused by inside-out tracking systems, and in future work we intend to compare results from more robust, industry-standard 3D scanning processes to those of the holographic digitization method described in this paper.
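A minimal version of the proposed proximity feedback maps a measured deviation to a green-to-red display colour; the 0-50 mm range below is an assumed working tolerance for illustration.

```python
def deviation_color(deviation_mm, worst_mm=50.0):
    """Map deviation to an (r, g, b) colour: green when aligned,
    red at or beyond worst_mm."""
    t = max(0.0, min(1.0, deviation_mm / worst_mm))
    return (int(255 * t), int(255 * (1.0 - t)), 0)

# e.g. deviation_color(5.0) is mostly green; deviation_color(45.0) is red
```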
Whilst we identify some advantages of mixed reality fabrication environments in comparison to robotic processes, we recognize the advantages of a hybrid approach. Future developments of this research will work with CNC and robotic processes to combine faster and more accurate part fabrication with the dexterity and intuition of human assembly teams. Automated part fabrication will also assist in scaling up the process and facilitate working with materials too rigid to bend by hand.
CONCLUSION
Fabrication within mixed reality environments enables construction teams to assemble structures consisting entirely of complex joints between unique parts without the need to follow 2D instructions or work from 2.5D setout points. While we did not conduct a comparative study of fabrication time using traditional documentation against fabrication time using holographic instructions, it is not unreasonable to conclude that using drawings and templates for 560 bend angles, feed measurements and rotations would be excessively tedious and time-consuming.
We demonstrate that traditional drawn instructions can be replaced with holographic applications that describe task-specific information to fabricators in context and as needed, reducing the requirement to switch attention from machine and material tasks to drawings or screens and instead combining these two mediums in a mixed reality experience. This improves the capability of fabricators to develop skills and expertise in complex fabrication tasks, as demonstrated by the construction of a pavilion by construction teams with no prior experience with the material and fabrication processes. We demonstrate that working from holographic guides enables fabrication teams to make adjustments to the design as necessary during fabrication in order to reduce construction time and complexity and reduce the risk of task failure and cumulative error.
This research further demonstrates that current-generation head-mounted displays such as the HoloLens provide sufficient registration of holograms to physical environments to serve as a reference for the accuracy of an assembly or fabrication task, with most deviation from digital models deriving from a lack of assembly experience rather than drift in the holographic reference model. We measure this by digitizing the as-built pavilion with a novel marker-tracking method that creates an accurate model from only a limited number of sample points, providing immediate feedback to designers without the requirement for 3D scanning setups and mesh reconstruction software.
ACKNOWLEDGEMENTS
This work was produced with students during an invited workshop at the 2018 CAADRIA Conference. We thank the participants and organizers for their confidence and support.
REFERENCES
Abe, U-ichi, Kensuke Hotta, Akito Hotta, Yosuke Takami, Hikaru Ikeda, and Yasushi Ikeda. 2017. "Digital Construction: Demonstration of Interactive Assembly Using Smart Discrete Papers with RFID and AR Codes." In Protocols, Flows, and Glitches: Proceedings of the 22nd CAADRIA Conference, 75-84.

Behzadan, Amir, Suyang Dong, and Vineet Kamat. 2015. "Augmented Reality Visualization: A Review of Civil Infrastructure System Applications." Advanced Engineering Informatics.

Boud, A. C., D. J. Haniff, Chris Baber, and S. J. Steiner. 1999. "Virtual Reality and Augmented Reality as a Training Tool for Assembly Tasks." In Proceedings of the IEEE International Conference on Information Visualization, 32-36.

Caudell, T. P., and D. W. Mizell. 1992. "Augmented Reality: An Application of Heads-Up Display Technology to Manual Manufacturing Processes." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, vol. 2, 659-69. Kauai, HI.

Chi, Hung-Lin, Shih-Chung Kang, and Xiangyu Wang. 2013. "Research Trends and Opportunities of Augmented Reality Applications in Architecture, Engineering, and Construction." Automation in Construction 33: 116-22.

Chu, Michael, Jane Matthews, and Peter E. D. Love. 2018. "Integrating Mobile Building Information Modelling and Augmented Reality Systems: An Experimental Study." Automation in Construction 85: 305-16.

Côté, S., M. Beauvais, A. Girard-Vallée, and R. Snyder. 2014. "A Live Augmented Reality Tool for Facilitating Interpretation of 2D Construction Drawings." In Augmented and Virtual Reality: AVR 2014, Lecture Notes in Computer Science, vol. 8853, edited by L. De Paolis and A. Mongelli. Cham: Springer.

Funk, Markus, Andreas Bächler, Liane Bächler, Thomas Kosch, Thomas Heidenreich, and Albrecht Schmidt. 2017. "Working with Augmented Reality? A Long-Term Analysis of In-Situ Instructions at the Assembly Workplace." In Proceedings of PETRA.

Georgel, P., P. Schroeder, S. Benhimane, S. Hinterstoisser, Mirko Appel, and Nassir Navab. 2007. "An Industrial Augmented Reality Solution for Discrepancy Check." In ISMAR 2007: Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 111-15.

Kress, Bernard C., and William J. Cummings. 2017. "11-1: Invited Paper: Towards the Ultimate Mixed Reality Experience: HoloLens Display Architecture Choices." SID Symposium Digest of Technical Papers 48 (1): 127-31.

Milgram, Paul, and Fumio Kishino. 1994. "A Taxonomy of Mixed Reality Visual Displays." IEICE Transactions on Information Systems E77-D (12): 1321-29.

Ren, Jiang, Yingying Liu, and Zhicheng Ruan. 2017. "Architecture in an Age of Augmented Reality: Applications and Practices for Mobile Intelligence BIM-Based AR in the Entire Lifecycle." DEStech Transactions on Computer Science and Engineering.

Sandy, T., and J. Buchli. 2018. "Object-Based Visual-Inertial Tracking for Additive Fabrication." IEEE Robotics and Automation Letters.

Yabuki. 2007. "Cooperative Reinforcing Bar Arrangement and Checking by Using Augmented Reality." Berlin: Springer.

Zoran, Amit, et al. 2014. "The Hybrid Artisans: A Case Study in Smart Tools." ACM Transactions on Computer-Human Interaction 21 (3): 1-29.
IMAGE CREDITS
14 Welding an assembled part of the structure
15 A participant teaching a colleague the bending application
16 A participant instructing in the assembly of part of the structure
17 A small chunk of the pavilion with the image marker used to calibrate digital and physical models during digitization
Figure 2: © T. P. Caudell and D. W. Mizell 1992
All other drawings and images by the authors.
Gwyllim Jahn is the co-founder of Fologram and on sabbatical as a Lecturer in Architecture at RMIT. He develops design research in the fields of mixed reality environments, autonomous robotic fabrication, behavioural design systems and creative applications of machine learning. His work has been published in leading computational design conferences and journals including IJAC, ACADIA and RobArch, and he has given talks, presentations and workshops at international institutions including MIT, Stuttgart ICD, Cooper Union, UCL, UTS, Tongji and Tsinghua University.
Cameron Newnham is the co-founder and CTO of Fologram. His experience lies in the creation of novel tools for designing and fabricating complex geometric systems, including code libraries for mixed reality interfaces, 3D printing and robotic fabrication. Cameron has experience as a computational designer in internationally renowned and award-winning architectural practices, academic experience as an Associate Lecturer and Industry Fellow at RMIT University and Melbourne University, and has led numerous international design and build workshops in Shanghai, New York, Paris, Boston, Sydney, and Melbourne.
Nicholas van den Berg is the co-founder of Fologram and a technology research and development consultant.
Matthew Beanland is completing his Master of Architecture thesis at RMIT University.