Pop Up Factory: Collaborative Design in Mixed Reality
Interactive live installation for the makeCity festival, 2018 Berlin
Giovanni Betti1, Saqib Aziz2, Gili Ron3
1,2,3HENN GmbH
1,2{Giovanni.Betti|Saqib.Aziz}@henn.com 3giliron.space@gmail.com
This paper examines a novel, integrated and collaborative approach to design and fabrication, enabled through Mixed Reality. In a bespoke fabrication process, the design is controlled and altered by users in holographic space, through a custom, multimodal interface. Users' input is live-streamed and channelled to a 3D modelling environment, on-demand robotic fabrication and AR-guided assembly. The holographic interface aims to promote man-machine collaboration: a bespoke pipeline translates hand gestures and audio into CAD models and numeric fabrication instructions, enabling non-professional participants to engage with a plethora of novel technology. The feasibility of Mixed Reality for architectural workflows was tested through an interactive installation for the makeCity Berlin 2018 festival, where participants experimented with on-demand design, fabrication and AR-guided assembly. This article discusses the technical measures taken as well as the potential of holographic interfaces for collaborative design and on-site fabrication.
Keywords: Holographic Interface, Augmented Reality, Multimodal Interface,
Collaborative Design, Robotic Fabrication, On-Site Fabrication
INTRODUCTION
In this paper we argue that AR can improve man-machine collaboration in architecture making. Holographic, multimodal interfaces simplify access to cutting-edge technology for every user: human operators can edit digital information for CAD/CAM through intuitive interfaces. Leveraging the human capability to communicate via speech and hand gestures gives non-professionals easy access to 3D modelling and robotic fabrication. We examined the potential of AR representation and multimodal interfaces for including non-professionals in architectural design and making, for human-machine collaboration and for multi-participant design. This approach was tested through a fully integrated cycle of holographic 3D modelling, on-demand robotic machining and AR-guided assembly that resulted in a collaborative installation for the makeCity festival 2018 in Berlin.
Simulation - VIRTUAL AND AUGMENTED REALITY 2 - Volume 3 - eCAADe 37 / SIGraDi 23 | 115

BACKGROUND
The 4th industrial revolution marks the fusion of novel automation with advanced means of communication and increased connectivity (Schwab, 2016). Cyber-physical production, the joint work of human-human, machine-human and machine-machine agents, is the foundation for the increased productivity on which Industry 4.0 rests (Schuh et al., 2014). In the
architecture field this is seen in the adoption of tools originating from various industries: fabrication with automotive-industry robotic arms, and visualization techniques originating in the gaming industry, such as AR and VR goggles. Synthesizing the two brings about precise, on-demand CAD/CAM fabrication on-site.
Studies in robotic fabrication mark it as a prominent tool of the future work-site. Automation brings about increased process control, speed and precision of execution, and increased load-bearing capacities (Bonwetsch, Gramazio and Kohler, 2012). It enables continuous work in-situ as well as an increase in workers' safety (Keating, 2017). On-site automated fabrication using robotic arms has been studied for its potential in combined fabrication with several agents (Parascho et al., 2017). The latest design research and application in robot-human fabrication at building scale is seen in the collaborative project DFAB House, of NCCR Digital Fabrication and ETH Zurich (ETH Zurich, NCCR Digital Fabrication, 2018).
Augmented Reality is advantageous in showing contextual information regarding design and fabrication. It allows hands-free access to information, serving as a guide for fabrication (Zhou et al., 2011), increasing product quality and reducing product costs (Brettel et al., 2014). In architecture, on-site visualization during the 3D modelling design phase promotes context-aware design, adaptation to site conditions (Zlatanova, 2001) and increased design performance (Popovski, 2014). Visualization is a key factor in collaborative design and public participation (Allen, 2011). Previous experiments in using AR interfaces for fabrication have been tested for professional and non-professional use (Weichel, 2014), executed at small scale (Do, 2010), and taught a skill in repeating a predefined design (Billinghurst, 2015). AR and VR have long been identified for their simulative advantages, training human agents in working with digital fabrication tools (Lin, 2002): the results of improper operation can be simulated without incurring the associated costs in terms of human injury and equipment repair.
Multimodal interfaces make use of natural human communication capabilities: speech, gestures and touch. They are advantageous in translating long-learnt motoric skills into machine-related skills in a short time (Arshad et al., 2017). Multimodal interfaces have been in use in the gaming industry, tracking motion and sound; inputs are tracked using sensors (Nintendo Wii; Microsoft Kinect) or translated through remote controls (HTC Vive; Samsung Gear VR; Oculus Rift). The translation of human motoric skills into machine code is seen in digital art-making (Hacmun, 2018) and in medical practice, allowing remote support from experts (Bodner, 2005). Human-body "outputs" translated into machine code, such as heart-beats, neuronal activity and sound, have been used as artistic generators in art installations. The latter, an exploration in sound-driven pattern-making, serves as the basis for the installation discussed here (Betti, 2018).
A fully operational AR-based, multimodal system such as the one prototyped here is yet to be developed and tested at scale, but it seems to belong to a rapidly approaching future.
The use of a holographic, multimodal interface is tested for its possible impacts on in-situ design, fabrication and assembly. The workflow demonstrated here aims at the inclusion of non-professionals in a collaborative design and fabrication process. For this purpose, machine inputs are customized to fit every user. In sum, the Pop-Up Factory system is composed of hardware, software and human agents.
METHODS
The Pop-Up Factory installation explores digital design, fabrication and assembly on-site using multiple technologies, controlled via AR. The hardware components used are a Microsoft HoloLens AR headset, an ABB 1200 robotic arm augmented with a custom-made hot-wire cutting end-effector, and a microphone. The software components used are Rhinoceros
for 3D modelling, Grasshopper and Python for programming, and various Grasshopper plug-ins, among them Fologram and HAL Robotics. We will discuss these in the following sections.

Figure 1: The physical components of the Pop-Up Factory installation.
Figure 2: The physical and holographic components of the Pop-Up Factory installation.
1) Holographic Interface: a bespoke AR user interface was designed for the design and fabrication enterprise. The installation consists of a workflow in three stages: design of the brick tower (CAD), fabrication of bricks (CAM) and manual assembly. The AR interface is used to control software and automated hardware, allows navigation between the different CAD/CAM stages and serves as a guide in the assembly sequence. The interface exhibits six applications, 'Action Buttons', each displaying an identifying icon:
1. Eye / Hand icon: shifts between two visualization modes: built portion of the tower (hand); design of the overall tower (eye).
2. Control Points: controls the paths that determine the tower's growth patterns and overall form.
3. Audio Recording icon: starts and pauses an audio recording, used to texture the brick's exterior face.
4. Ruler icon: defines the portion of the design still available for modification ('above the line') or already decided ('below the line').
5. Send Fabrication Data: initiates the wire-cutting tool-path to the robot.
6. Show Assembly Instructions.
Hand gestures, Air Tap and Drag, are registered with the headset's sensors, activating the interface. The Augmented Reality goggles display the proposed design on top of the structure.
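The six 'Action Buttons' described above amount to a small event-driven state machine. Since the installation's source code is not published, the following Python sketch is purely illustrative: the class name, mode names and job log are assumptions, showing only how such button events could be routed to the different pipeline stages.

```python
from enum import Enum, auto

class Mode(Enum):
    VIEW_BUILT = auto()    # Hand icon: show only the built portion
    VIEW_DESIGN = auto()   # Eye icon: show the overall tower design

class PopUpFactoryUI:
    """Hypothetical dispatcher for the six holographic 'Action Buttons'."""

    def __init__(self):
        self.mode = Mode.VIEW_DESIGN
        self.recording = False      # Audio Recording button state
        self.sent_jobs = []         # wire-cutting jobs handed to the robot

    def toggle_view(self):
        """Eye / Hand icon: flip between the two visualization modes."""
        self.mode = (Mode.VIEW_BUILT if self.mode is Mode.VIEW_DESIGN
                     else Mode.VIEW_DESIGN)

    def toggle_recording(self):
        """Audio Recording icon: start or pause the texture recording."""
        self.recording = not self.recording

    def send_fabrication_data(self, brick_ids):
        """Send Fabrication Data: queue a cutting job (stub for the robot link)."""
        self.sent_jobs.append(tuple(brick_ids))

ui = PopUpFactoryUI()
ui.toggle_view()                       # eye -> hand
ui.toggle_recording()                  # start recording
ui.send_fabrication_data(range(1, 6))  # one series of five bricks
```

In the real installation the fabrication stub would stream a tool-path to the robot controller; here it only records the request, which is enough to exercise the interface logic.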
2) Architecture: the AR interface is used in the making of an architectural-scale brick installation. The design emerges from the assembly of individual bricks. Its gradual growth is subject to user input, with the following limitations:
1. The tower's dimensions respond to site conditions.
2. The base-to-canopy ratio is limited to ensure structural stability.
3. The tower's aggregation is limited by human reach, stopping at 1.9 m.
4. Each component's dimensions fit a human palm (approx. 15 x 10 x 5 cm).
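Constraints like these lend themselves to simple guard checks in the design pipeline. The sketch below is minimal and uses assumed numbers: the 5 cm course height follows from the palm-sized component above, while the maximum base-to-canopy ratio of 2.0 is invented for illustration only.

```python
MAX_HEIGHT_CM = 190   # aggregation stops at human reach, 1.9 m
BRICK_H_CM = 5        # component height; fits a human palm (15 x 10 x 5 cm)

def within_reach(n_courses):
    """True while the stacked brick courses stay at or below 1.9 m."""
    return n_courses * BRICK_H_CM <= MAX_HEIGHT_CM

def stable(base_width_m, canopy_width_m, max_ratio=2.0):
    """Limit the base-to-canopy ratio to ensure structural stability."""
    return canopy_width_m / base_width_m <= max_ratio
```

A user edit that violated either check would simply be rejected before reaching fabrication, e.g. `within_reach(39)` is `False` because 39 courses would exceed 1.9 m.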
3) 3D modelling: digital design typically requires previous experience with 3D modelling environments. To include non-professional users in a collaborative design effort, a new, intuitive AR-based interface was developed. The new design pipeline makes use of sound inputs and hand gestures to generate and modify the tower's design. Holographic-interface 3D modelling takes place at two scales and is performed as follows: the design of the tower is attained with hand gestures, shifting control points; the design of the tower's façade pattern is performed through the processing of audio signals. Sound samples are divided into channels: tone, pitch and volume. Each façade element receives a unique pattern, specific to each user (Betti et al., 2018). A bespoke 3D-modelling pipeline discretizes the global form into bricks. The bricks differ in surface texturing and in the geometry of their contact surfaces (responsive to the tower's overall geometry) to generate the tower's curvature. User inputs are registered in the CAD/CAM environment and updated in real time with any new data.
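The paper does not publish its signal-processing details, so the following Python sketch only illustrates the idea of splitting a sound sample into the three channels named above, using simple stand-ins: RMS amplitude for volume, zero-crossing rate for pitch, and peak amplitude for tone, mapped to an assumed relief depth and groove spacing.

```python
import math

def audio_channels(samples, sample_rate=44100):
    """Split a mono sample into rough (volume, pitch, tone) channels."""
    n = len(samples)
    volume = math.sqrt(sum(s * s for s in samples) / n)        # RMS amplitude
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    pitch = crossings * sample_rate / (2 * n)                  # rough Hz estimate
    tone = max(abs(s) for s in samples)                        # peak amplitude
    return volume, pitch, tone

def face_pattern(volume, pitch, max_depth_cm=1.0):
    """Map channels to an engraving: depth from volume, spacing from pitch."""
    depth = max_depth_cm * min(1.0, volume)
    spacing_cm = max(0.2, 100.0 / max(pitch, 1.0))  # higher pitch, finer grooves
    return depth, spacing_cm

# A 440 Hz test tone, 0.1 s long, as a stand-in for a visitor's recording:
tone_440 = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]
vol, pitch, peak = audio_channels(tone_440)
```

For the pure 440 Hz tone the estimates land close to the expected values (RMS near 0.707, pitch near 440 Hz), which is all this sketch is meant to demonstrate; the installation's actual mapping onto brick faces is more elaborate.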
4) Robotic Fabrication: on-demand fabrication, using a robotic arm placed in-situ, is controlled through the interface. A bespoke pipeline translates 3D modelling information into robotic machine code. The tool-path was designed to execute each brick's joinery and façade details with minimal leftover material; each tool-path generates five bricks, cut from black foam.
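Cutting five bricks from one block with minimal leftover can be sketched as a one-dimensional cut plan. All dimensions below (block length, kerf width) are assumptions for illustration; the installation's actual tool-paths were generated with HAL and also carved the joinery and façade relief.

```python
BLOCK_LEN_CM = 80.0    # assumed foam block length
BRICK_LEN_CM = 15.0    # brick length, from the palm-sized component
KERF_CM = 0.5          # assumed material destroyed by each hot-wire pass

def cut_positions(n_bricks=5):
    """X positions (cm) of the straight hot-wire passes along the block."""
    positions, x = [], 0.0
    for _ in range(n_bricks):
        x += BRICK_LEN_CM
        positions.append(round(x, 2))
        x += KERF_CM                 # account for the wire's kerf
    return positions

def leftover_cm(n_bricks=5):
    """Scrap length remaining after cutting one series of bricks."""
    return BLOCK_LEN_CM - n_bricks * (BRICK_LEN_CM + KERF_CM)
```

Under these assumed numbers a series of five bricks consumes 77.5 cm of an 80 cm block, leaving 2.5 cm of scrap; minimizing this remainder is what the sentence above means by minimal leftover material.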
5) Assembly: users assemble the bricks manually. Assembly instructions are shown in AR and update as the structure grows. Once a series of five bricks is cut, animated arrows indicate their orientation, location and assembly sequence. Shifting between visualization modes allows users to evaluate the ongoing production process and to inspect the individual bricks and the overall structure.
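The AR guidance can be thought of as pairing each newly cut brick with a target placement and a step number. The minimal data model below is hypothetical; the brick ids, positions and rotations are invented to show the pairing, not taken from the installation.

```python
def assembly_steps(brick_ids, placements):
    """Pair each brick with its target placement, in assembly order."""
    return [(step, brick, pos, rot_deg)
            for step, (brick, (pos, rot_deg))
            in enumerate(zip(brick_ids, placements), start=1)]

# One freshly cut series of five bricks and their (x, y, z) targets in metres,
# with a rotation in degrees for each brick (all values illustrative):
series = ["B21", "B22", "B23", "B24", "B25"]
targets = [((0.00, 0.0, 1.00), 0), ((0.15, 0.0, 1.00), 12),
           ((0.30, 0.0, 1.00), 24), ((0.00, 0.0, 1.05), 6),
           ((0.15, 0.0, 1.05), 18)]
steps = assembly_steps(series, targets)
```

Each tuple would drive one animated arrow in the headset; as the structure grows, the list is regenerated from the updated CAD model.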
Figure 3: Icons of the holographic interface and their activation.
RESULTS
The installation lasted two days and was visited by the general public and colleagues, ranging in age and design experience. The following workflow was exercised: the installation space accommodated three production-related areas, designated for 3D design, numeric fabrication and assembly. A graphic tutorial exhibited on-site introduced visitors to the different holographic design modes and their related 'Action Buttons'. Participants were first introduced to AR space and operating gestures: wearing the AR goggles and practicing the Air Tap and Drag commands and interface navigation. Taking part in collaborative design, participants explored digital design and fabrication using the 'Action Buttons'. Each user in turn added to the global design of the structure, fabricating and assembling a series of five bricks.
Crowd participation was somewhat limited: while people were keen to try on the AR goggles and experience the interface and installation in AR, fewer took part in the design itself. We attribute this mainly to the limited time available for each visitor to interact with the installation.
Human-machine collaboration. Most interface 'Action Buttons' were reported as easy to use. Hand gestures shifting 'Control Points' were sometimes not registered. This could be due to their location at the very top of the holographic space, missed by the sensors, and to reading errors caused by sun glare coming from the windows of the installation space.
The AR interface was extremely successful in enabling a productive interaction with the robotic arm, allowing continuous fabrication and no physical interaction between participants and robot.

Figure 4: The various interaction elements of the holographic interface.
Figure 5: Two of the innumerable different possible configurations of the installation.
Figure 6: Overview of the installation through the holographic interface.
The hands-free, mobile interface eased the free-form assembly of multiple discrete elements, which would otherwise be extremely arduous. Visual feedback enabled design in context and accurate fabrication. The sequenced design and fabrication process realized for non-professionals was clear and easy to follow.
Human-human collaboration. The designed work sequence had a large effect on the nature of the collaborative design practiced. Though the installation relied on multiple design inputs, its outcome was bound by design definitions made beforehand. These prerequisites were aimed at fast fabrication, minimal waste and keeping to the installation timeline. Brick dimensions, for example, were predefined, allowing individual input only on the brick's 'façade' and joinery detail. Another example of user input being bound to the installation's predefined logic is that the tower could only be designed collaboratively, through aggregation.
Another interesting aspect of the participatory discourse that took place was the sequential nature of participation. Most participants wanted to contribute to the design only once. This outcome differs from what is usually seen in participatory design, where participants are encouraged to debate and alter the design until the final result is agreed upon by the majority of the group.
Our initial hypothesis was thus only partially verified. We still believe that AR can play a role in promoting collaborative design.
Figure 7: Human-machine collaboration.
Figure 8: AR-guided assembly.
DISCUSSION
1. This paper outlines a novel, integrated and collaborative approach to design and fabrication, enabled through Augmented Reality, robotic fabrication and a multimodal interface. In this installation, the custom notational input normally used for 3D modelling was replaced by intuitive input. We contribute to the emerging field of production research by providing simplified access to innovative design and fabrication tools. Hand gestures and speech were used for multimodal interaction with CAD/CAM in a large-scale, multi-participant effort. Non-professionals easily followed visual cues in AR and utilized their existing communication skills to control 3D modelling and robotic fabrication with little training time. In doing so, the focus in architecture-making shifts from forming the design itself to forming a platform for others to explore design. The public, now at the focus of architecture-making, participates in a creative decision-making effort and in manufacturing using cutting-edge technology. For the architectural field this marks a potential shift towards public participation in practice and discourse. Through active participation, the public gains a better understanding of the impact of the design and has the freedom to alter it according to their wishes.
2. Recent innovations in the optics industry accelerate simulative abilities, enabling 3D representation in-situ. Augmented Reality gives access to plentiful, varied information. It is advantageous for experiencing a proposed design in the context of its site or end-user, for teaching skills to non-professionals and for quality control. In this installation, the AR interface was used for 3D design on-site, providing context to the structure; for remote and simple control over automated fabrication tools; for instructions enabling precise execution in manual fabrication; and as a means of assessing the assembled structure. We believe AR, as a generative and critical tool, has the potential to bridge gaps between plan and execution, bringing about adapted design.
3. The Digital Turn (Carpo, 2013) and parametric practice embodied a change in workflow. The use of AR leads a change in design representation, moving from 2D drawings to 3D elements. New ways of communicating architecture might have great consequences for the built environment, in the form of new geometries and of new practitioners in the field interacting with advanced technology.
REFERENCES
Allen, M, Regenbrecht, H and Abbott, M 2011 ’Smart-
Phone Augmented Reality for Public Participation in
Urban Planning’, OZCHI ’11, pp. 11-20
Arshad, SZ 2017 ’Human-in-the-Loop Machine Learning
with Intelligent Multimodal Interfaces’, Proceedings
of the ICML2017 Workshop: Human in the Loop Ma-
chine Learning, Sydney, Australia
Betti, G 2018 ’Communication Landscapes’, Robotic Fab-
rication in Architecture, Art and Design, pp. 74-84
Billinghurst, M 2015, 'A Survey of Augmented Reality', Foundations and Trends in Human-Computer Interaction, 8, pp. 74-272
Bodner, J 2005, ’The Da Vinci Robotic System for General
Surgical Applications: a Critical Interim Appraisal’,
Swiss Medical Weekly, 135, pp. 674-679
Bonwetsch, T, Gramazio, F and Kohler, M 2012, R-O-B: Towards a Bespoke Building Process, John Wiley and Sons
Carpo, M 2013, ’The Digital Turn in Architecture 1992 -
2012’, AD, 1, pp. 8-15
Brettel, M, Friederichsen, N, Keller, M and Rosenberg, M 2014, 'How Virtualization, Decentralization and Network Building Change the Manufacturing Landscape', World Academy of Science, Engineering and Technology, 8, pp. 37-44
Parascho, S, Gandia, A, Mirjan, A, Gramazio, F and Kohler, M 2017 'Cooperative Fabrication of Spatial Metal Structures', Fabricate 2017: Rethinking Design and Construction, pp. 24-30
Hacmun, I, Regev, D and Solomon, R 2018, 'The Principles of Art Therapy in Virtual Reality', Frontiers in Psychology, 9, pp. 1-8
Weichel, C, Lau, M, Kim, D, Villar, N and Gellersen, H 2014 'MixFab: a Mixed-Reality Environment for Personal Fabrication', Proceedings of CHI '14
Do, TV and Lee, JW 2010, '3DARModeler: a 3D Modeling System in Augmented Reality Environment', World Academy of Science, Engineering and Technology, 39, pp. 538-547
Keating, SJ, Leland, JC, Cai, L and Oxman, N 2017, 'Toward Site-Specific and Self-Sufficient Robotic Fabrication on Architectural Scales', Science Robotics, 2, pp. 1-15
Popovski, F 2014, 'Generating 3D Model in Virtual Reality and Analyzing its Performance', International Journal of Computer Science & Information Technology (IJCSIT), 6, pp. 123-128
Schuh, G, Potente, T, Wesch-Potente, C, Weber, AR and Prote, JP 2014 'Collaboration Mechanisms to Increase Productivity in the Context of Industrie 4.0', Procedia CIRP 19, Robust Manufacturing Conference (RoMaC 2014), pp. 51-56
Schwab, K 2016, The Fourth Industrial Revolution, Crown
Publishing Group
Lin, F, Ye, L, Duffy, VG and Su, CJ 2002, 'Developing Virtual Environments for Industrial Training', Information Sciences, 1, pp. 153-170
Zhou, J 2011 'Applying Spatial Augmented Reality to Facilitate In-Situ Support', Proceedings of VRCAI 2011, Hong Kong, China, pp. 195-200
Zlatanova, S 2001 ’3D Modelling for Augmented Reality’,
Dynamic and Multi-dimensional GIS, Bangkok, Thai-
land
[1] https://dfabhouse.ch/
... This SDK is used in the literature because of its simplicity of use, which is another important factor [14]. It enables navigation between the various CAD/CAM phases and acts as a reference during the assembly step [15]. The Fologram SDK recognizes user gestures, screen taps, device position, it has a customizable interface on mobile phones and HoloLens, and the Fologram SDK offers the opportunity to interact with Grasshopper in the AR environment [11]. ...
... These holographic instructions make it feasible to create anything precisely and quickly. It assists unskilled laborers in the assembly of complex structures [13][14][15]. Collaborative and creative production is possible [14]. ...
Article
Full-text available
Technology is employed in the fields of architecture, engineering, and construction (AEC) for characteristics like producing visual representations and offering assistance during the building phase. Both users and creators of these tools are able to immediately take advantage of the technology's potential as well as create a variety of workarounds for its drawbacks. Both viewpoints will be looked at in this study with regard to mobile extended reality SDKs (software development kit). By excluding the articles that did not provide the relevant information, this research concentrates solely on the papers that discuss the technological aspects of the SDK that were used, the opportunities the SDK offers, and/or the flaws of the SDK. The study's main objective is to compare the technological contributions made by the SDKs employed in the scope of the examined literature to the AEC disciplines and to the contexts in which such contributions are made. Through applications in literature research, the study aims to highlight the contributions of mobile extended reality SDKs to the fields of architecture, engineering, and construction. An entry-level developer can use the SDKs in accordance with his work by using the comparison diagrams, produced in this study, to see the relationships and comparisons between them, as well as to build a framework for what uses should be made in which domains. The technological capabilities and constraints of SDKs have an impact on how research is designed. Making relationality diagrams on the SDK to use and the effects it will have throughout the research phase is also crucial. As a result of the research, SDKs permit flexible uses in a variety of sectors, and their use also financially and logistically supports literature studies.
... With the installation Pop-Up Factory, realised for the festival Make. City Berlin in 2018 [12], we aimed at creating a richer design and assembly experience. To allow for more complex and nuanced interactions between NEROs and robots, we decided to explore the use of Augmented Reality (AR) devices such as the Microsoft HoloLens. ...
Chapter
Full-text available
The introduction of robotic construction methods in the building industry holds great promise to increase the stagnating productivity of the construction industry and reduce its current carbon footprint, in considerable part caused by waste and error in construction. A promising aspect is the implementation of integrated CAD/CAM processes where the design intent -expressed in CAD- is directly translated in machine instructions (CAM) without loss of information or need for intermediate translation and refactoring. As the introduction of robotics will hardly spell the disappearance of human workers, a key challenge will be orchestrating human-machine collaboration in and around the construction site or fabrication plant. Our contribution presents explorations in this field. Key to our approach is the investigation of the above-mentioned questions in conjunction with Mixed Reality (MR) interfaces to give access to both human and machine workers to the same dynamic CAD model and assembly instructions. This paper describes our work in this field through two artistic installations and ongoing research on additive manufacturing enabled construction. We probe principles for precise, customised, efficient, almost waste-free and just-in-time productions in the construction sector.
... There are also studies where mixed-reality tools and industrial robots were used together in robotic-fabrication applications. In a robotic wire-cutting-application study, the Styrofoam pieces were produced using an industrial robot and they were knitted using the mixed-reality device [14]. In a study of knitting wooden sticks, an industrial robot was used to notch the joints of wooden sticks, and the mixed-reality device was used during the knitting of wooden sticks [15]. ...
Article
Full-text available
In this study, a method, in which parametric design and robotic fabrication are combined into one unified framework, and integrated within a mixed reality environment, where designers can interact with design and fabrication alternatives, and manage this process in collaboration with other designers, is proposed. To achieve this goal, the digital twin of both design and robotic fabrication steps was created within a mixed-reality environment. The proposed method was tested on a design product, which was defined with the shape-grammar method using parametric-modeling tools. In this framework, designers can interact with both design and robotic-fabrication parameters, and subsequent steps are generated instantly. Robotic fabrication can continue uninterrupted with human–robot collaboration. This study contributes to improving design and fabrication possibilities such as mass-customization, and shortens the process from design to production. The user experience and augmented spatial feedback provided by mixed reality are richer than the interaction with the computer screen. Since the whole process from parametric design to robotic fabrication can be controlled by parameters with hand gestures, the perception of reality is richer. The digital twin of parametric design and robotic fabrication is superimposed as holographic content by adding it on top of real-world images. Designers can interact with both design and fabrication processes both physically and virtually and can collaborate with other designers.
... ere are many kinds of applications where many formats of files have been produced, and these files are scattered in the minds of project personnel, which requires a compatible platform. e files of the whole project are stored in the project directory according to certain rules, which can be accessed by project personnel with project authority, thus improving the For files about co-design of cave dwellings on the platform, fuzzy search can be performed by file name, or advanced search can be performed by file attribute [17] which is convenient for users to find the required files quickly when they are unfamiliar with the project folder structure. ...
Article
Full-text available
Collaborative design is an important link in building construction at present, while the communication and cooperation among disciplines need further regulation. In order to maintain the cost-effectiveness and design quality of building engineering in the competitive environment and obtain higher economic benefits, the design team must ensure the collaborative design quality of building engineering. Based on the analysis of collaboration theory and collaboration mechanism, a collaborative design platform for caves in Henan province is constructed under the current cloud environment. In addition, the requirements and nonfunctional requirements of the platform are analyzed, and the overall structure with each part is designed specifically, hoping to improve the quality of collaborative design effectively.
Article
Full-text available
This research presents an innovative approach that integrated gesture recognition into a Mixed Reality (MR) interface for human–machine collaboration in the quality control, fabrication, and assembly of the Unlog Tower . MR platforms enable users to interact with three-dimensional holographic instructions during the assembly and fabrication of highly custom and parametric architectural constructions without the necessity of two-dimensional drawings. Previous MR fabrication projects have primarily relied on digital menus and custom buttons within the interface for user interaction between virtual and physical environments. Despite this approach being widely adopted, it is limited in its ability to allow for direct human interaction with physical objects to modify fabrication instructions within the virtual environment. The research integrates user interactions with physical objects through real-time gesture recognition as input to modify, update, or generate new digital information. This integration facilitates reciprocal stimuli between the physical and virtual environments, wherein the digital environment is generative of the user’s tactile interaction with physical objects. Thereby providing user with direct, seamless feedback during the fabrication process. Through this method, the research has developed and presents three distinct Gesture-Based Mixed Reality (GBMR) workflows: object localization, object identification, and object calibration. These workflows utilize gesture recognition to enhance the interaction between virtual and physical environments, allowing for precise localization of objects, intuitive identification processes, and accurate calibrations. The results of these methods are demonstrated through a comprehensive case study: the construction of the Unlog Tower , a 36’ tall robotically fabricated timber structure.
Thesis
Full-text available
This research posits design methods and implications for architectural fabrication in mixed reality. The notion of spatialising fabrication instructions in mixed reality to guide processes of formation by hand is a novel contribution to the discourse on digital fabrication and craft. The benefits of mixed reality environments for visualising unbuilt designs, or for assembling designs agnostic to methods of construction, have been well established in the literature. However, the impact that these new affordances will have on architectural design thinking and production represents a gap in disciplinary knowledge that this research seeks to address. By defining the capacities, limitations and affordances of subjective interpretation of digital fabrication instructions in mixed reality, this research proposes a framework for thinking about new design conventions and opportunities in designing for mixed reality fabrication. Several case study projects provide practical evidence for this framework by demonstrating the impact of mixed reality on enabling creative exploration and application of traditional craft skills within digital design-to-production processes, as well as on improving performance in conventional fabrication processes such as non-linear bricklaying. A reflection on these projects speculates on the broader implications of the adoption of mixed reality fabrication by the manufacturing and construction industries, the extent to which traditional craft practices may be reinterpreted in mixed reality, and the possibilities of an expanded design space for architecture and art.
Article
With increased prefabrication in the construction industry, fabrication workers are tasked with assembling more complicated assemblies with tighter tolerances. However, the existing measurement tools and processes have not changed to accommodate this shift. The lack of advanced measurement tools and up-to-date processes increases the risk of late detection of geometric errors. To reduce these risks, three-dimensional (3D) quality control systems leveraging scan-vs-BIM methods can be adopted as part of the fabrication process. However, these systems have not yet been widely adopted by fabrication shops, because: (1) fabrication shops often do not have 3D models corresponding to shop drawings; and (2) the cost of integrating accurate 3D scanning equipment into fabrication workflows is assumed to be too high. To remove the first barrier, this article develops a framework for creating 3D digital templates for inspecting received parts. The framework is used to develop a library of 600 3D models of piping parts. The library is leveraged to deploy a 3D quality control system that was then tested in an industrial-scale case study. The results of the case study are used to develop a discrete event simulation model. The simulation results and a subsequent cost-benefit analysis show that investment in integrating scan-vs-3D-model quality control systems can yield significant cost savings, with a payback period of less than two years.
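The payback-period claim above rests on comparing up-front investment against accumulated savings. A minimal sketch of that calculation is below; the dollar figures are purely hypothetical and are not taken from the study:

```python
def payback_period(investment, yearly_savings):
    """Years until cumulative savings cover the up-front investment.

    yearly_savings: a sequence of per-year savings (may vary year to year).
    Returns a fractional year, or None if the investment is never recovered.
    """
    remaining = investment
    for year, saving in enumerate(yearly_savings, start=1):
        if saving >= remaining:
            # Recovered partway through this year; interpolate linearly.
            return year - 1 + remaining / saving
        remaining -= saving
    return None

# Hypothetical figures: a $150k scanning setup whose savings ramp up
# as the shop integrates it into its workflow.
print(payback_period(150_000, [60_000, 110_000, 120_000]))  # under 2 years
```

A discrete event simulation, as in the study, would produce the yearly savings stream; the payback arithmetic on top of it is this simple.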
Chapter
Architectural education is historically rooted in Vitruvius' teachings of durability, utility, and beauty - adopted from his text "Ten Books on Architecture" - which still hold relevance to modern architectural theory. Throughout the evolution of architecture, the need for feasibility has grown, and the divide between architecture and built construction has raised questions about vision versus the reality of what gets constructed. The rapid development of architecture is rooted in its synthesis of materials and methods through technology. From construction techniques such as rebar within reinforced concrete to the formal generation of iconic architecture using computational design, architecture is the manifestation of cultural aspirations. Visionary architecture has expanded its mediums from discrete imagery to immersive digital experiences, mobilized by growing innovations in visualization techniques and tools. While illustrative visualization techniques have begun to blur the line between reality and fiction, it is the responsibility of the architect to use these tools to bring ideas to built reality. Whereas ideas traditionally reached reality only after mandated licensure or the completion of a degree, contemporary pedagogy makes this possible well before those milestones. Architecture students today have access to emerging technologies that are instrumental in the design, visualization, detailed fabrication, and delivery of a project well before the completion of their undergraduate degrees. This paper presents a case study at Canada's largest accredited architectural program demonstrating the range of accessible technologies with which students can turn their design ideas into built reality using parametric modelling, virtual reality, augmented reality, and digital fabrication. The authors posit that this accessible agency provides confidence and comfort in closing the divide between architecture and construction.
Chapter
Full-text available
This paper proposes the TRAC (Teaching-based Robotic Arm Construction) system, which aims to make the robot-assisted bricklaying process intuitive. The goal of this research is to integrate the design-to-build process and help designers implement robot-assisted fabrication. The paper introduces a two-stage workflow: 1. a teaching-based design method that simulates bricklaying motion in VR to guide an intuitive design process; 2. automatic translation of the design result into a robotic-arm bricklaying working path. Finally, the system is verified in a campus workshop using a KUKA robot. The resulting brick wall demonstrates a seamless workflow from intuitive teaching-based design to robot-assisted construction.
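The translation step described above - turning a designed wall into an ordered sequence of robot placement targets - can be sketched as a running-bond coordinate generator. The dimensions and function names below are illustrative assumptions, not the TRAC system's actual API:

```python
def brick_placements(rows, bricks_per_row, length=0.24, height=0.06):
    """Generate (x, z) placement targets for a running-bond wall, bottom-up.

    Odd rows are offset by half a brick length, as a mason would lay them.
    The ordered list doubles as a simple robot working path: the arm visits
    each target in sequence, from the bottom row upward.
    """
    path = []
    for row in range(rows):
        offset = (length / 2) if row % 2 else 0.0
        for i in range(bricks_per_row):
            path.append((offset + i * length, row * height))
    return path

path = brick_placements(rows=3, bricks_per_row=4)
print(len(path))   # 12 placement targets
print(path[4])     # first brick of the offset second row: (0.12, 0.06)
```

A real pipeline would convert each target into a full robot pose (position plus gripper orientation, approach and retract moves) in the controller's own command format.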
Article
Full-text available
In recent years, the field of virtual reality (VR) has shown tremendous advancements and is utilized in entertainment, scientific research, social networks, and artistic creation, and numerous approaches employ VR for psychotherapy. While the use of VR in psychotherapy has been widely discussed, little attention has been given to the potential of this new medium for art therapy. Artistic expression in VR is a novel medium which offers unique possibilities, extending beyond classical expressive art mediums. Creation in VR includes options such as three-dimensional painting, an immersive creative experience, dynamic scaling, and embodied expression. In this perspective paper, we present the potentials and challenges of VR for art therapy and outline basic principles for its implementation. We focus on the novel qualities offered by this creative medium (the virtual environment, virtual materials, and unreal characteristics) and on the core aspects of VR (such as presence, immersivity, point of view, and perspective) for the practice of art therapy.
Conference Paper
Full-text available
Machines have the ability to manipulate material cooperatively, enabling them to materialise structures that could not otherwise be realised individually. Operating with more than one (mechanical) arm allows for the exploitation of assembly processes by performing material manipulations on a shared fabrication task. The work presented here is an investigation of such cooperative robotic construction, wherein two industrial robots assemble a spatial metal structure consisting of discrete steel tubes.
Article
Full-text available
Contemporary construction techniques are slow, labor-intensive, dangerous, expensive, and constrained to primarily rectilinear forms, often resulting in homogenous structures built using materials sourced from centralized factories. To begin to address these issues, we present the Digital Construction Platform (DCP), an automated construction system capable of customized on-site fabrication of architectural-scale structures using real-time environmental data for process control. The system consists of a compound arm system composed of hydraulic and electric robotic arms carried on a tracked mobile platform. An additive manufacturing technique for constructing insulated formwork with gradient properties from dynamic mixing was developed and implemented with the DCP. As a case study, a 14.6-m-diameter, 3.7-m-tall open dome formwork structure was successfully additively manufactured on site with a fabrication time under 13.5 hours. The DCP system was characterized and evaluated in comparison with traditional construction techniques and existing large-scale digital construction research projects. Benefits in safety, quality, customization, speed, cost, and functionality were identified and reported upon. Early exploratory steps toward self-sufficiency—including photovoltaic charging and the sourcing and use of local materials—are discussed along with proposed future applications for autonomous construction.
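As a rough illustration of the geometry behind depositing a dome formwork layer by layer, each horizontal ring's radius follows from the spherical-cap equation. The sketch below uses the paper's reported dimensions (14.6 m diameter, 3.7 m height) only as example inputs; it is not the DCP's actual process-control code:

```python
import math

def dome_ring_radii(base_diameter, height, layer_height):
    """Radii of successive horizontal deposition rings for a spherical-cap dome.

    The cap's parent-sphere radius follows from base radius a and cap height h:
    R = (a**2 + h**2) / (2 * h). A ring at elevation z then has radius
    sqrt(R**2 - (z + R - h)**2), shrinking from a at the base to 0 at the apex.
    """
    a, h = base_diameter / 2, height
    sphere_r = (a**2 + h**2) / (2 * h)
    n_layers = round(h / layer_height)  # rings below the apex
    return [math.sqrt(sphere_r**2 - (i * layer_height + sphere_r - h)**2)
            for i in range(n_layers)]

rings = dome_ring_radii(base_diameter=14.6, height=3.7, layer_height=0.1)
print(len(rings), round(rings[0], 2))  # bottom ring radius equals 7.3 m
```

The monotonically shrinking ring radii are what let an additive process build such a formwork without support material, each layer slightly overhanging the one below.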
Article
Full-text available
This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s, there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.
Article
Full-text available
We investigate smartphone-based augmented reality architecture as a tool for aiding public participation in urban planning. A smartphone prototype system was developed which showed 3D virtual representations of proposed architectural designs visualised on top of existing real-world architecture, with an appropriate interface to accommodate user actions and basic feedback. Members of the public participated in a user study in which they used the prototype system as part of a simulated urban planning event. The prototype system demonstrated a new application of augmented reality architecture and an accessible way for members of the public to participate in urban planning projects.
Article
Full-text available
The recently introduced robotic surgical systems were developed to overcome the limitations of conventional minimally invasive surgery. We analyse the impact of the da Vinci robotic system on general surgery. The da Vinci operating robot is a telemanipulation system consisting of a surgical arm cart, a master console and a conventional monitor cart. Since its purchase in June 2001, 128 patients have undergone surgery using the da Vinci robot in our department. The mean age of the 78 female and 50 male patients was 52 (range 18-78) years. The procedures included 29 cholecystectomies, 16 partial fundoplications, 16 extended thymectomies, 14 colonic interventions, 10 splenectomies, 10 bariatric procedures, 7 hernioplasties, 6 oesophageal interventions, 5 adrenalectomies, 5 lower lobectomies, 4 neurinomectomies and 6 others. 122 of 128 procedures (95%) were completed successfully with the da Vinci robot. Open conversion proved necessary in 4 patients due to surgical problems, and two other procedures were completed by conventional laparoscopy due to technical errors of the robot system. 30-day mortality was 0%, one redo operation was necessary, and two minor complications not requiring surgical re-intervention occurred. The resection margins of all tumour specimens were histologically tumour-free. Various general surgical procedures have proved feasible and safe when performed with the da Vinci robot. The advantage of the system is best seen in tiny areas that are difficult to access and when dissecting delicate, vulnerable anatomical structures. However, in view of longer operating times, higher costs and the lack of adequate instruments, robotic surgery does not at present represent a general alternative to conventional minimally invasive surgery.
Article
This paper describes a 3D modeling system in an Augmented Reality environment, named 3DARModeler. It can be considered a simple version of 3D Studio Max with the functions necessary for a modeling system, such as creating objects, applying textures, adding animation, estimating real light sources and casting shadows. The 3DARModeler introduces convenient and effective human-computer interaction for building 3D models by combining the traditional input method (mouse/keyboard) with a tangible input method (markers). It has the ability to align a new virtual object with the existing parts of a model. The 3DARModeler targets non-technical users; as such, they do not need much knowledge of computer graphics and modeling techniques. All they have to do is select basic objects, customize their attributes, and put them together to build a 3D model in a simple and intuitive way, as if they were doing so in the real world. Using the hierarchical modeling technique, users are able to group several basic objects and manage them as a unified, complex object. The system can also connect with other 3D systems by importing and exporting VRML/3ds Max files. A speech recognition module is included in the system to provide flexible user interfaces.
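The hierarchical modeling technique mentioned above - grouping basic objects so they behave as one compound object - reduces to composing parent and child transforms down a scene graph. A minimal sketch with translation-only transforms (the class and names are illustrative, not the 3DARModeler's actual API):

```python
class Node:
    """A scene-graph node: a local offset plus optional child nodes."""

    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name, self.offset, self.children = name, offset, []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, origin=(0.0, 0.0, 0.0)):
        """Accumulate offsets down the hierarchy, yielding world positions."""
        pos = tuple(o + d for o, d in zip(origin, self.offset))
        yield self.name, pos
        for child in self.children:
            yield from child.world_positions(pos)

# Group two boxes under a table: moving the table moves both boxes,
# because their world positions are derived from the parent's offset.
table = Node("table", offset=(1.0, 0.0, 0.0))
table.add(Node("box_a", offset=(0.2, 0.5, 0.0)))
table.add(Node("box_b", offset=(-0.2, 0.5, 0.0)))
print(dict(table.world_positions()))
```

A full modeler would store rotation and scale as well (typically as a 4x4 matrix per node), but the grouping logic is the same: child transforms compose with the parent's.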
Article
This paper presents a virtual environment of a real model. It describes the analyses involved in creating and visualizing the virtual environment in Quest3D and presents an analysis of the system's real-time performance. We describe the advantages and disadvantages of interaction in the virtual environment and offer a critical analysis of rendering speed and quality on different machines.
Article
Personal fabrication machines, such as 3D printers and laser cutters, are becoming increasingly ubiquitous. However, designing objects for fabrication still requires 3D modeling skills, rendering such technologies inaccessible to a wide user group. In this paper, we introduce MixFab, a mixed-reality environment for personal fabrication that lowers the barrier for users to engage in personal fabrication. Users design objects in an immersive augmented reality environment, interact with virtual objects in a direct gestural manner, and can introduce existing physical objects effortlessly into their designs. We describe the design and implementation of MixFab and the user-defined gesture study that informed this design, show artifacts designed with the system, and describe a user study evaluating the system's prototype.