Digital Architecture
Rafal Kopiec
BUIL1074 Technical
Architecture Dissertation
Session 2017-18
to be presented to the Department of Architecture
and Landscape at the University of Greenwich as
part of the BA (Hons) Architecture course
By mapping computer-generated objects onto physical, tactile placeholders, an intersection between the digital and physical realms is formed. When viewed through a device capable of displaying this overlay, the object and the architectural environment can assume abilities that are physically impossible, yet perfectly viable in a digital environment.

To test and develop my understanding of the technology, I have designed a film studio containing a series of spaces that simplify and highlight the key stages of the process of creating a film. Through a digital “augmenting reality” layer, I aim to make the standard film creation process more spatial - and so inspire maximum creative potential within people.

This is the essence of spatial computing.
“A Digital Frontier”
Abstract
You are here
“A digital frontier”
Site
Prologue
The Realms
Case Studies
Reskinning Objects
Systematic Trackers
World Tracking
AR Anchor
Epilogue
“A digital frontier”

The idea is simple.

By creating a set of links between the digital and physical world, a space retains the benefits of being a physical space while also gaining a multitude of advantages that would normally be restricted to digital experiences. Through this, I want to unlock maximum creative potential in the users of a space.

In other words, by mapping computer-generated objects onto physical, tactile placeholders, an intersection between the digital and physical realms is formed. When viewed through a device capable of displaying this overlay, the architectural environment can assume attributes that are physically impossible, yet perfectly viable in a digital environment. To test and develop my understanding of Augmented Reality, I have designed a film studio formed of a series of spaces that simplify and highlight the key stages of the process of creating a film. As the building contains a large variety of spaces that could also be present in other buildings, the process has given me a range of spaces and workflows on which I can test how the design element of AR might interweave with the design of physical architecture.

One of the many side effects of making use of augmented reality is reduced electrical energy consumption. Many discrete electronic devices and appliances can now be synthesised digitally, coexisting in the physical realm with the real-world footprint of a few electrons on a microchip and a series of emitted photons directed at a user’s eye. Because of this, I chose to design the host building to make use of as much of the available environmental energy as possible - harvesting heat and light energy from the sun as well as from the heat exchangers on neighbouring rooftops.

Amid frustration that the majority of spaces are *still* not being designed to embrace the incoming Mixed Reality technological revolution, I have decided to forgo developing the building’s standard environmental measures in favour of pursuing first-hand research and development with Digital Architecture. This is a project that is readily accessible to me, as I have the means and resources to test, develop and deploy. As the project is software-based (rather than, as in traditional architectural technological investigations, hardware-based), I have the opportunity to create 1:1 scale experiences. This is a monumental advantage, as I will be able to deploy products based on my research to clients relatively soon - these 1:1 scale experiences *are* examples of finished products, not merely scaled representations.

So, this research document contains a range of experiments I conducted that test the limits of mobile augmented reality technology - something that I believe will become very useful to the future of architecture.
Site Constraints

The site I chose is right at the boundary of Unit 5’s chosen area: Clerkenwell. Within this area I have decided to celebrate the ancient River Fleet, which has now been converted into a sewer and runs directly beneath Farringdon Road to Blackfriars Bridge. Holborn Viaduct (designed by William Haywood) marks a significant development in the river’s history, and is the largest standing monument to something that has been forgotten by many. So, I decided to build as close to it as possible. I have chosen the south-west corner to improve, as it is currently the site of an ‘out-of-place’ 1960s-era block, currently home to Escrow Global.
Each of the four corner pieces of the viaduct is a historic facade, and so the corner piece on the chosen site must be preserved. Each one also has an integrated staircase enabling pedestrians to change street levels. My design is centred around an expansion of the staircase.

As the building is designed to be used by those who work for YouTube, it will help propagate the technological revolution in London and the immediate area, given that the Amazon HQ is on the other side of the

Designing for this location allowed for uncommon massing, as the building interfaces with two different streets at an acute angle, while being separated by 9 metres in height.
Site Utilisation - Site Map 1:400
Map labels: Holborn Viaduct; Residential & Filmmakers’ access; Daylit Cinema & Public Lift; New Public; Large Cargo; Farringdon Road

Building Organisation: Aqueduct; Daylit Cinema; Mapping Room; Audio Design; Visual Design; Accommodation; Server Room; Long Term Storage

Initial Area of Detailed Technical Application [Editing Suite]
Prologue

I first came across mobile augmented reality technology in early 2017, when I realised the building I was designing could be better represented if it was shown in the medium it was designed in: three dimensions. I realised that, while often effective, plans, elevations & sections could only get me so far in showing my thinking, as translating three dimensions into two often proved difficult. In the age before CAD, this would never have been a problem, as buildings were designed exclusively with these abstracted views. So, the buildings that were designed were “optimised” for the plan, elevation & section. Of course, this isn’t to say that buildings designed this way are in any way inferior; however, they are restricted by the nature of their method of representation: a two-dimensional projection.
As it turned out, other architects were facing similar issues. Often, what was a functional three-dimensional computer model had to be simplified and adjusted in order for the plan, section and elevation to convey the functionality of the building. I soon found their solution - augmented reality. The company Augment, Inc. had pioneered a system in which a 3D virtual model could be mapped onto a tracked printed image as seen through a device’s camera - thus creating a “hologram”. This for me was a spectacle to behold - anything I designed in a CAD program could almost instantly exist in the real world alongside me, at almost no cost! My tutors at the time, on the other hand, weren’t impressed; they saw it as unnecessary. It took a few months for me to understand their perspective: while it was an evolutionary step in architectural representation, it did not really inform the design of my project - and so was not, in their eyes, of real value.
And so, I was essentially at square one with this technology, but I knew I had to continue to develop my understanding of its potential future use cases. The reason was simple. I drew up a simple 3D model of a spacecraft and then proceeded to show it as a hologram to almost every person I came across. To do this, I used any high-contrast printout I could find around me when I wanted to demonstrate, and, with the help of Augment Inc’s Augment app, used that as a tracker for the virtual hologram that was “projected” onto it. By letting the user hold both my iPhone (the AR portal) and the tracker, they could feel as if they had direct control over the virtual miniature spacecraft - moving it as if it were real.

To my enjoyment, the response was always the same, no matter who the user was - elderly, young, artistic, scientific. The moment they realised they had been fooled into thinking the virtual object was real, a smile quickly spread across each of their faces.

The experience of holding a virtual object in the user’s hands proved to be ethereal, and as I had designed the miniature to hover 10 cm over the programmed tracker, I had created an object with impossible physical qualities. While at that moment it was not much more than a gimmick, the process of letting others experience something they had never come into contact with before had sparked my interest, and I continued my pursuit.
The Realms

There exists a spectrum of mixed realities. It is necessary to create spectrum categories, as this helps to classify the types of experiences people may take part in, giving them an idea of what they can expect. I’ve decided to name them, in order of digital intervention, as follows:

PR - Physical Reality
This is by far the largest realm, and one that every one of us has experienced. The environment is built up of physical & tactile components; a composition of atoms. The Digital Reality is accessed through an array of devices that serve as small windows into the realm; an inconvenient interface.

MRA - Mixed Reality A
The Physical Reality is the dominant environment, and small digital interventions are made. This allows for spatial computing, where digital processes can take physical forms. This is also the premise of Case Study 2.

MRT - Mixed Reality Transition
This is my area of interest. By applying both MRA and MRB in various configurations in an architecture wherever necessary, a space becomes highly versatile and incredibly customisable.

MRB - Mixed Reality B
In this combination, the digitally synthesised reality is the dominant environment. This environment is still designed to coexist with physical placeholders; however, they cannot be detected with human vision; they can only serve the other senses. Tricking the sense of sight is easy if what is seen can also be confirmed to be real by the other senses. Case Study 1 is built on this principle.

DR - Digital Reality
More commonly referred to as Virtual Reality (VR), Digital Reality represents the complete synthesis of an environment with complete disregard for the Physical Reality of the viewer. This realm carries every benefit the digital realm has to offer, while limiting the viewer to a small traversable “safe area”. This is the reality that all current consumer headsets use. Due to the lack of integration with the Physical Reality (PR), the experiences aren’t as exciting, as they only stimulate one sense completely: sight.
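The realm spectrum above can be summarised as an ordered Swift type. This is only an illustrative sketch of the chapter's taxonomy - the case names and the ordering by "digital intervention" are my own labels, not an established API:

```swift
// The five realms, ordered by degree of digital intervention.
// Names follow the classification above; this is a sketch, not a standard.
enum Realm: Int, CaseIterable, Comparable {
    case physicalReality         // PR  - atoms only
    case mixedRealityA           // MRA - physical dominant, small digital interventions
    case mixedRealityTransition  // MRT - MRA and MRB applied wherever necessary
    case mixedRealityB           // MRB - digital dominant, physical placeholders
    case digitalReality          // DR  - fully synthesised (VR)

    static func < (lhs: Realm, rhs: Realm) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

// MRT, the area of interest, sits midway along the spectrum.
let spectrum = Realm.allCases.sorted()
```

Making the ordering explicit lets an experience be described, in code, as moving toward DR as its digital share grows.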
Case Study One - THE VOID

THE VOID is an exciting startup project from Salt Lake City, Utah. I came across this example relatively recently, in January 2018, and so it is a project that did not spark my interest in this idea, but rather confirmed my belief in the proposed scheme. The project is an excellent example of the digital realm working in concert with the physical. The experience consists of a large space filled with monochromatic walls, surfaces, and a miscellany of other architectural elements, onto which a virtual environment is projected through head-mounted displays (HMDs). Unlike an augmenting-reality headset, where the virtual objects are superimposed onto the observer’s true environment, the startup makes use of its proprietary Rapture HMD to completely immerse the viewer in the synthesised environment.

While this is suitable for this application of the technology (transporting the viewer to an imaginary environment to take part in games), I have decided not to follow this exact strategy, as it promotes escapism; a notion where the user takes part in the experience because they believe it is better than the real, physical realm they exist in. Instead, I have chosen to learn from their application, and use the technology to blend a variety of digital enhancements into the realm we do live in - hopefully making this demographic realise that there is no need to escape. Of course, in this particular use case, escapism does have merit.

THE VOID has successfully proved that there are merits to mixing PR and DR: since its inception (2015), it has set up profitable experiences in many different cities around the world.
Various augmentations transform a bare-bones environment - an architectural reskinning
One of the reprogrammable locations at THE VOID
Case Study Two - MS HoloLens

Microsoft’s HoloLens project is built on another strategy that I have learned from, but have not followed directly in my own work. The HoloLens is designed to be used within the MRA realm, with some customers extending the platform to integrate it with some physical objects too - giving it MRT qualities. This is closer to my design strategy, as I am working to enhance the built environment with digital modifications. The key differentiator between this example and Case Study 1 is, apart from it giving the real world priority over the digital, the method of interaction with the architecture. With HoloLens, users do witness seemingly lifelike three-dimensional virtual objects appear in their environment, but these have very little relation to the PR the viewer is located in. And so, the illusion that the objects are real breaks down very quickly - especially as the viewer is expected to control the environment via mid-air gestures and speech. So, this approach is most useful for showing objects that don’t need to be directly manipulated or interacted with, and can remain as a simple holographic augmentation.
In fact, one of the best use cases of this technology is in the design stage - especially for architects. Greg Lynn is one such architect who has become involved with augmented reality, making use of the HoloLens to animate the flow of people and visualise space at human scale from his 3D-printed models. While I respect the care he has taken in working with AR, he is an architect who was taught before the rapid advancement of digital devices. Because of that, his thinking and application don’t truly make use of the technology - his method could easily have been substituted with a traditional two-dimensional projector, at a fraction of the cost and with the added benefit of not having to wear a headset in order to view the augmentations. Of course, a projector cannot show what it is like to be inside an unbuilt building - but a virtual reality headset can. The difference between that and the HoloLens is a tiny, yet immensely powerful feature: the optically-transparent display that overlays virtual content onto the real world.

In a 2016 interview with Amy Frearson of Dezeen, he mentioned that AR will not change the design language of buildings, but will only affect the methods of communicating design through construction. While that might have been a valid statement to make if the innovation curve were flat, it isn’t - so bold statements like that have little value. In fact, augmented reality technology will be at its peak when the design language of a building is optimised to make use of a digital augmentation layer - which is the motive of my experiments.
Top: Microsoft imagining a way the technology could be used. Innovative, yet clunky and unnatural. “Air-tapping” is infinitely more cumbersome than touching an object - or even using a mouse.
Right: Mr. Lynn awkwardly handling a hologram by pinching a spot in the middle of his sightline. Yes, that’s the “air-tap” - an unenchanting interface.
MRT - Reskinning Objects

In both case studies I have found issues which could be resolved if a project were to combine the two ventures into one experience. By grounding the experience firmly in physical reality (à la HoloLens), escapism isn’t promoted - the technology is used to improve our current reality. The HoloLens is a device designed to be used as a retrofit in an existing space; if a space is purpose-built for the interface, as in the case of THE VOID, using the technology is very intuitive.

Furthering this development and understanding, I set about creating augmented reality enhancements to three products belonging to three separate beverage companies (Innocent, Monster Energy & RedBull). The main motivation behind this experiment was to test the limits of the tracker-based app, Augment. The three products each had unique packaging qualities, ranging in what I presumed to be orders of difficulty. In order to keep the tests fair, I decided to anchor the AR projections to the logos of the products.
Innocent’s smoothie bottle proved to be the easiest to work with as a tracker - its logo had a lot of contrast, it was printed on a matte surface, and the surface was flat. I realised that these three conditions were optimal for a device to calculate the position of an anchor. When it came to testing the program on a Monster Energy can, the curved surface proved difficult for the device to track. Given that the app had been programmed to recognise a 2D representation of a portion of the can (the logo viewed along its normal, straight on to the camera), the device would do just that - making the previously instantaneous image recognition a fantasy.
Then, when testing the same setup on the logo of a RedBull can, the device behaved unpredictably. The highly reflective surface, coupled with the fact that the surface of the programmed tracker wasn’t flat, meant that AR tracker recognition proved impossible.

So, through this process I found that for a virtual model to be correctly mapped to a physical object (for object reskinning, for example), the AR anchor had to be a matte, flat surface with plenty of contrast. Testing the three objects together also revealed something new - each anchor had to be unique, as each virtual model had to have a unique AR tracker assigned to it. So, in the future, best practice would be to have some form of systematic method of detailing trackers, given that there may be many used simultaneously in the same space.
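The one-tracker-one-model requirement can be sketched as a small registry that rejects duplicate tracker names up front. The type and the tracker/model names below are hypothetical illustrations, not part of the Augment app:

```swift
// A sketch of systematic tracker management: every tracker name maps to
// exactly one virtual model, so duplicates are refused at registration.
struct TrackerRegistry {
    private var models: [String: String] = [:] // tracker name -> model asset

    // Returns false (and changes nothing) if the tracker name is taken.
    mutating func register(tracker: String, model: String) -> Bool {
        guard models[tracker] == nil else { return false }
        models[tracker] = model
        return true
    }

    func model(for tracker: String) -> String? { models[tracker] }
}

var registry = TrackerRegistry()
let first = registry.register(tracker: "innocent-logo", model: "smoothie.scn")
let duplicate = registry.register(tracker: "innocent-logo", model: "other.scn")
```

With many trackers in one space, a scheme like this makes a clash visible the moment a second model is assigned to an existing anchor.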
Maximum potential AR tracker
Limited size due to
The larger the AR tracker, the more accurate the AR anchor for 3D content is in the real world
Using the RedBull logo on the can as an AR tracker/anchor proved to be unreliable, as the reflective surface of the aluminium created enough deviation from the programmed image that the device could not recognise the surface.
A small step into the future of small objects. It was necessary to practice on real-world objects first, as this also tests real-world materials that are already mass-produced. Re-skinning could easily completely change the facade of an object or building - but I wanted to simulate adding value to designs through the 3rd dimension.
Systematic Trackers for AR Anchors

Outdoor markers
Indoor markers
Minimum θ necessary for a tracker to be recognised by the device at a comfortable distance
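The minimum-θ idea in the marker diagrams reduces to simple trigonometry: a flat tracker of width w viewed head-on from distance d subtends an angle of 2·atan(w / 2d). A quick Swift check - the marker sizes and distances below are illustrative, not measured values from the diagrams:

```swift
import Foundation

// Angular size (in degrees) of a flat tracker of width w (metres)
// viewed head-on from distance d (metres): theta = 2 * atan(w / (2 * d)).
func angularSizeDegrees(width w: Double, distance d: Double) -> Double {
    return 2 * atan(w / (2 * d)) * 180 / .pi
}

// An A4-sized indoor marker (0.21 m wide) seen from 2 m, and a 1 m
// outdoor marker seen from 10 m, subtend roughly the same angle -
// which is why outdoor markers need to be much larger.
let indoor = angularSizeDegrees(width: 0.21, distance: 2.0)
let outdoor = angularSizeDegrees(width: 1.0, distance: 10.0)
```

So a systematic marker scheme can fix one minimum θ and scale the physical marker with its intended viewing distance.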
// ViewController.swift
// Digital Architecture
// “A Digital Frontier”
// Created by Rafal Kopiec on 09/02/2018.
// Copyright © 2018 Digital Architecture, Rafal Kopiec. All rights reserved.
// Headset app version

import UIKit
import SceneKit
import ARKit
import GLKit // Needed for the GLKQuaternion helpers used below

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet weak var sceneViewLeft: ARSCNView!
    @IBOutlet weak var sceneViewRight: ARSCNView!
    @IBOutlet weak var imageViewLeft: UIImageView!
    @IBOutlet weak var imageViewRight: UIImageView!

    let eyeCamera: SCNCamera = SCNCamera()

    // Parameters
    let interpupilaryDistance = 0.066 // Distance between the two pupils, in metres
    let viewBackgroundColor: UIColor = .black // Assumed value - missing in the original listing
    let eyeFOV = 60; var cameraImageScale = 3.478 // Calculation based on iPhone 7

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        // Create a new scene
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene
        UIApplication.shared.isIdleTimerDisabled = true

        // Scene setup - the main view only drives tracking; the eye views render
        sceneView.isHidden = true
        self.view.backgroundColor = viewBackgroundColor

        // Set up Left-Eye SceneView
        sceneViewLeft.scene = scene
        sceneViewLeft.showsStatistics = sceneView.showsStatistics
        sceneViewLeft.isPlaying = true

        // Set up Right-Eye SceneView
        sceneViewRight.scene = scene
        sceneViewRight.showsStatistics = sceneView.showsStatistics
        sceneViewRight.isPlaying = true

        eyeCamera.zNear = 0.001
        eyeCamera.fieldOfView = CGFloat(eyeFOV)

        // Setup ImageViews - for rendering the camera image behind each eye
        self.imageViewLeft.clipsToBounds = true
        self.imageViewLeft.contentMode = .scaleAspectFill // Assumed - missing in the original listing
        self.imageViewRight.clipsToBounds = true
        self.imageViewRight.contentMode = .scaleAspectFill
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            self.updateFrame()
        }
    }

    func updateFrame() {
        let pointOfView: SCNNode = SCNNode()
        pointOfView.transform = (sceneView.pointOfView?.transform)!
        pointOfView.scale = (sceneView.pointOfView?.scale)!
        // Create POV from Camera = eyeCamera
        pointOfView.camera = eyeCamera

        // Set PointOfView for SceneView-LeftEye
        sceneViewLeft.pointOfView = pointOfView

        // Clone pointOfView for Right-Eye SceneView
        let pointOfView2: SCNNode = (sceneViewLeft.pointOfView?.clone())!

        // Determine Adjusted Position for Right Eye
        let orientation: SCNQuaternion = pointOfView.orientation
        let orientationQuaternion: GLKQuaternion = GLKQuaternionMake(orientation.x, orientation.y, orientation.z, orientation.w)
        let eyePos: GLKVector3 = GLKVector3Make(1.0, 0.0, 0.0)
        let rotatedEyePos: GLKVector3 = GLKQuaternionRotateVector3(orientationQuaternion, eyePos)
        let rotatedEyePosSCNV: SCNVector3 = SCNVector3Make(rotatedEyePos.x, rotatedEyePos.y, rotatedEyePos.z)
        let mag: Float = Float(interpupilaryDistance)
        pointOfView2.position.x += rotatedEyePosSCNV.x * mag
        pointOfView2.position.y += rotatedEyePosSCNV.y * mag
        pointOfView2.position.z += rotatedEyePosSCNV.z * mag

        // Set PointOfView for SceneView-RightEye
        sceneViewRight.pointOfView = pointOfView2

        // Clear Original Camera-Image so the scene composites over the passthrough
        sceneViewLeft.scene.background.contents = UIColor.clear
    }
}
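The right-eye placement in updateFrame() boils down to one operation: rotate the unit "right" vector by the head's orientation quaternion, then scale it by the interpupillary distance. A platform-free sketch of just that step (plain Swift, no SceneKit or GLKit), useful for sanity-checking the maths in isolation:

```swift
// Minimal quaternion-rotation sketch of the eye-offset calculation.
// The struct and function names are mine; the 0.066 m value is the
// interpupillary distance from the view controller above.
struct Quat { var x, y, z, w: Float }

// Rotates vector v by unit quaternion q: v' = v + 2w(q x v) + 2 q x (q x v).
func rotate(_ v: (Float, Float, Float), by q: Quat) -> (Float, Float, Float) {
    let (vx, vy, vz) = v
    let tx = 2 * (q.y * vz - q.z * vy)
    let ty = 2 * (q.z * vx - q.x * vz)
    let tz = 2 * (q.x * vy - q.y * vx)
    return (vx + q.w * tx + (q.y * tz - q.z * ty),
            vy + q.w * ty + (q.z * tx - q.x * tz),
            vz + q.w * tz + (q.x * ty - q.y * tx))
}

let ipd: Float = 0.066                       // metres, as in the listing
let identity = Quat(x: 0, y: 0, z: 0, w: 1)  // head looking straight ahead
let offset = rotate((1, 0, 0), by: identity) // world-space "right" for this pose
let eyeShift = (offset.0 * ipd, offset.1 * ipd, offset.2 * ipd)
```

With the head level, the right eye lands 6.6 cm to the right of the left; any head tilt rotates that offset with it, which is exactly what keeps the stereo pair consistent.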
ARKit 1.0 - World Tracking
Having conducted the described series of experiments, I began to understand that the very principle on which the app was built was a barrier to its broad use within architectural spaces. The reliance upon an AR anchor was the app’s only strength, but also its greatest weakness: as soon as the tracker was hidden from the host device’s view, tracking was interrupted. As soon as this occurred, the illusion that the virtual objects were real was immediately broken, as they were no longer synchronised with the real-world environment. This would prove to be a problem, as I was planning to modify a physical space with digital interventions.

As if on cue, I found that Apple was working on a modern alternative to this, providing world-based tracking. While this did not support “tracker tracking” at the time, it proved to be a viable reason for me to learn how to make use of the technology for my needs. ARKit, as it’s called, is a set of algorithms that makes use of a multitude of device sensors - data from the camera is supported by real-time information from a gyroscope, compass and accelerometer. As the device tracks the real world and not a predefined marker, the virtual models that it places as holograms are at 1:1 scale. I realised this could be an exciting tool for architectural representation, as users could “step into” a virtual environment overlaid onto the real world.
As my intention was to modify the physical world with digital interventions, this proved to be quite a complex task, as there wasn’t a tracker I could specify that would create a link between the digital and physical environments. To get around this, as I was testing the technology in a room I had regular access to, I modelled a 3D wireframe version of it and always started world tracking from a pre-planned location in the room - such that the starting locations in the real room and the wireframe model were synchronised. Combining this with a feature I added that let the device be used as a head-mounted camera-passthrough display, I created an obstacle course filled with virtual objects that looked as though they were fixed onto the real-world floor. I then asked my test subject to stand at one end of the room and walk towards me, at the other end of the room.

Though the physical path was in fact clear of any obstructions, the subject cautiously navigated the course as he made progress towards me - squeezing through 0.5 m openings and using a real-world chair to help him climb over a virtual fence 1 m tall. This was great progress in my eyes, as it proved to me that digital interventions, even if they were not made to look highly realistic, prompted instinctual human responses. Testing the augmented environment myself, I experienced similar sensations when facing a virtual wall as I do facing a physical wall; it’s euphoric to find that instincts can be fooled so well.
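The alignment trick described above - always starting world tracking from the same pre-planned spot - means the session origin and the wireframe room model's origin coincide, so room coordinates convert to ARKit world coordinates by a fixed offset. A plain-Swift sketch of that bookkeeping; the positions are illustrative, not taken from the actual room model:

```swift
// Positions in metres. World origin = device pose at session start.
struct Point3 { var x, y, z: Float }

// Where the device is held when tracking starts - the pre-planned
// location, expressed in the room model's coordinate system (assumed).
let sessionOriginInRoom = Point3(x: 2.0, y: 1.5, z: 4.0)

// Converts a point in room-model coordinates to world coordinates,
// assuming the two coordinate systems share their axes.
func worldPosition(ofRoomPoint p: Point3) -> Point3 {
    Point3(x: p.x - sessionOriginInRoom.x,
           y: p.y - sessionOriginInRoom.y,
           z: p.z - sessionOriginInRoom.z)
}

// A virtual fence modelled at (2.0, 0, 2.0) in the room ends up 2 m in
// front of, and 1.5 m below, the device's starting pose.
let fence = worldPosition(ofRoomPoint: Point3(x: 2.0, y: 0, z: 2.0))
```

The fragility of this scheme - one careless starting position and every obstacle drifts - is precisely what the image-anchor approach in the next section removes.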
An expedition of the multi-realm
Retrofitting an iPhone as a holographic headset - a low cost HoloLens
// ViewController.swift
// Digital Architecture
// “A Digital Frontier”
// Created by Rafal Kopiec on 14/03/2018.
// Copyright © 2018 Digital Architecture, Rafal Kopiec. All rights reserved.

import UIKit
import SceneKit
import ARKit
import SpriteKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    var selectedProductName: String?

    var session: ARSession {
        return sceneView.session
    }

    // Serial queue for scene updates. The label suffix is an assumption -
    // it was cut off in the original listing.
    let updateQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".serialSceneKitQueue")

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = false
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        resetTracking()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        UIApplication.shared.isIdleTimerDisabled = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the view’s session
        session.pause()
    }

    func resetTracking() {
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("AR Reference photos aren't loaded up")
        }
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    // Let the good stuff begin
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage
        updateQueue.async {
            // Digitising the reference image plane
            let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 0
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)

            let scene = SCNScene(named: "art.scnassets/ship.scn")
            let portalNode = scene!.rootNode.childNode(withName: "ship", recursively: false)!
            node.addChildNode(portalNode)

            // New SKVideoNode with chosen local video file name.
            let playerNode = SKVideoNode(fileNamed: "DJI.MOV")

            // New SKScene instance, hosting the video node
            let spriteKitScene = SKScene(size: CGSize(width: 1080, height: 1920))
            spriteKitScene.scaleMode = .aspectFit

            // Centre the video node in the SKScene.
            playerNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
            spriteKitScene.addChild(playerNode)

            let material = SCNMaterial()
            material.diffuse.contents = spriteKitScene
            material.isDoubleSided = false // Render the video on the front face only
            // Flip the texture vertically - SpriteKit and SceneKit use opposite Y axes
            material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
            plane.firstMaterial = material
            scene?.rootNode.childNode(withName: "tvscreen1", recursively: false)?.geometry?.materials = [material]

            playerNode.play()
        }
        self.sceneView.isPlaying = true
    }

    func session(_ session: ARSession, didFailWithError error: Error) {}
    func sessionWasInterrupted(_ session: ARSession) {}
    func sessionInterruptionEnded(_ session: ARSession) {}
}
ARKit 1.5 - AR Anchor

After a few weeks of exploring the limits of Apple’s version of the technology, I found that image recognition was a critical component of the experience that was not possible with this implementation - it wasn’t possible to create a link between the physical and digital environments. I seem to have started learning how to develop with ARKit at the right time, as in February 2018 Apple released a beta update that enabled this exact feature I needed. Now, using what I had learned conducting my experiments with both the tracker-based Augment app and my more recent World Tracking experiments, I could combine the best of both into one experience.

This implementation scouted for the location and orientation of the AR Anchor three times per second, and filled in the gaps using the device’s motion sensors. This meant that, in practice, the device would only need to “see” the preconfigured tracker once per session. This became really useful, as I realised that once the device knew the location of the tracker, the digital environment was perfectly calibrated with the physical. This synchronisation continued even after the tracker was out of the device’s field of view! So, to integrate this technology into an architecture, the setup process would be very simple - the users of the building (wearing augmenting-reality glasses) would just have to look at the tracker for a few moments, and then they would be free to explore and use the rest of the building, as the software would have already oriented itself to the surroundings.

In order to illustrate this to my tutors, I built a basic model that consisted of a thin section through part of the film studio schematic I designed - the composition room. When complete, this room would be filled with elements of augmented reality - an alternative to the environmentally expensive yet standard computer screens normally found in a film editing room. As the AR interventions are currently at the design stage, I could not implement this just yet. Instead, I chose to drop in the most basic of components in order to illustrate what the space is for - and then I attached the remainder of the building as a 3D model, in a similar fashion to Greg Lynn’s AR work.

Out of all of my experiments, this proved to be the most exciting - there was a direct dialogue between the physical and the digital. The intersection, the transition, the interaction - that’s what I’ll now be exploiting in the design aspect of the project.
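The one-shot calibration described above can be sketched in plain Swift: once the tracker's pose is known, every virtual element's tracker-relative offset becomes a fixed world position, and the tracker never needs to be seen again. Translation-only for clarity (a real anchor also carries rotation), and all names and positions here are hypothetical illustrations:

```swift
// Positions in metres; world origin = ARKit session origin.
struct V3 { var x, y, z: Float }

func +(a: V3, b: V3) -> V3 { V3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z) }

// World position of the tracker, reported once by the device.
let trackerWorld = V3(x: 1.0, y: 0.9, z: -2.0)

// Virtual elements of the composition room, defined relative to the tracker.
let offsets = ["editTimeline": V3(x: 0, y: 0.4, z: 0),
               "monitorWall":  V3(x: -1.2, y: 0.4, z: 0)]

// Calibrate once; the resulting world positions stay valid even after
// the tracker leaves the device's field of view.
let calibrated = offsets.mapValues { trackerWorld + $0 }
```

Because the device keeps tracking the world after the first sighting, this table never needs recomputing within a session - which is what makes the "glance at the tracker, then roam the building" setup workable.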
1:1 scale tracker
One of the advantages of mixing realities - a large model can be portable
A simplification of holograms that are in development - an editing dreamatorium
Epilogue

From my work up until this point, and from research on the two case studies, I can estimate that it will be worthwhile merging the physical and digital realms into an MRT, as a seemingly infinite supply of opportunities is unleashed by doing so.

From my experiments, it seems that mobile-powered augmented reality is a good choice for development, as it is accessible to many - and will be even more accessible with future devices. Eventually, I suspect, AR glasses will be as ubiquitous as reading glasses are today - but for project development, an iPhone is a suitable device.

Within this document I’ve chosen to include only the most varied & complete of the 100+ individual tests I have conducted. Due to the instantaneous nature of digital development, I did not take the time to document every step I have taken. However, I am confident that the tests I have chosen to feature here will give a broad but concentrated overview of the investigations.