Towards AR-assisted visualisation and guidance for imaging of dental decay
Yaxuan Zhou1,2, Paul Yoo3, Yingru Feng3, Aditya Sankar3, Alireza Sadr4, Eric J. Seibel2✉
1 Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
2 Human Photonics Lab, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
3 Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
4 School of Dentistry, University of Washington, Seattle, WA 98195, USA
✉E-mail: eseibel@uw.edu
Published in Healthcare Technology Letters, 2019, Vol. 6, Iss. 6, pp. 243–248; doi: 10.1049/htl.2019.0082. Received on 19th September 2019; accepted on 2nd October 2019. This is an open access article published by the IET under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/).
Untreated dental decay is the most prevalent dental problem in the world, affecting up to 2.4 billion people and leading to a significant
economic and social burden. Early detection can greatly mitigate irreversible effects of dental decay, avoiding the need for expensive
restorative treatment that forever disrupts the enamel protective layer of teeth. However, two key challenges exist that make early decay
management difficult: unreliable detection and lack of quantitative monitoring during treatment. New optically based imaging through the
enamel provides the dentist with a safe means to detect, locate, and monitor the healing process. This work explores the use of an augmented
reality (AR) headset to improve the workflow of early decay therapy and monitoring. The proposed workflow includes two novel AR-
enabled features: (i) in situ visualisation of pre-operative optically based dental images and (ii) augmented guidance for repetitive imaging
during therapy monitoring. The workflow is designed to minimise distraction, mitigate hand–eye coordination problems, and help guide
monitoring of early decay during therapy in both clinical and mobile environments. The results from quantitative evaluations as well as a formative qualitative user study uncover the potential of the proposed system and indicate that AR can serve as a promising tool in tooth decay management.
1. Introduction: Oral health problems have remained a major public health challenge worldwide over the past 30 years, imposing an economic and social burden [1–3]. Among these, untreated dental decay is the most prevalent issue and is closely linked to socio-economic disparities [4,5]. As shown in Fig. 1, the traditional dental care pattern for
dental decay management consists of routine examination in
clinics, non-destructive treatments for detected early decays and
destructive treatments for irreversible decays. There are three
limitations to this pattern. First, visual or tactile examination and the current gold-standard x-ray radiography cannot reliably and promptly detect interproximal and occlusal lesions [6], which are the most common types of dental decay. Second, medicinal therapy and instructed cleaning are performed by patients at home without supervision, and patients must revisit the dental clinic for follow-up, which limits timely monitoring of decay and often allows further progression into irreversible decay. Lastly, treatments for irreversible lesions, such as the drill-and-fill procedure, root canal treatment, and even dental implants, are all destructive, painful, expensive, and time-consuming. These limitations need
to be solved to develop an ideal dental care procedure for decay
management, also shown in Fig. 1. If early-stage lesions can
be detected reliably, patients can be prescribed medicinal therapies and instructed/directed cleaning over time outside the dental clinic [3,7,8]. Also, if the current clinic-revisit-based monitoring of decay can be supplemented by monitoring at a community health centre or even the patient's home, with data shared with dentists, then timely interventions can be made with fewer clinic visits and less burden on both dentists and patients [3,9]. Early decay can then be detected and healed in time, avoiding destructive and costly procedures. Continued research into such ideal management of tooth decay is needed [3].
To move towards this ideal pattern, there have been significant
strides towards developing reliable, sensitive and low-cost imaging
modalities to diagnose early decays [10,11]. Three-dimensional
(3D) imaging modalities such as cone-beam computed tomography
(CBCT) and optical coherence tomography (OCT) are reliable and
sensitive but usually require long imaging time on expensive
clinical systems. Clinicians typically perform 3D imaging
pre-operatively and use the 3D image for planning and intra-
operative reference. For intra-operative imaging and also remote
monitoring, clinicians also need a 2D imaging modality, e.g. the
scanning fibre endoscope (SFE).
Along with the development of imaging modalities, the ease of
use for dental imaging needs to be improved in general.
Acquiring high-quality images from a desired perspective usually requires expert manipulation of the instrument. For example, to effectively monitor the condition of a carious lesion with the SFE, users need to image the decay from the same perspective every time, which is difficult without assistance [12]. Also, using previous images for navigation requires hand–eye coordination: clinicians must divert their attention to the display monitor while manually positioning the scope, additionally compensating for the patient's movement. This is particularly challenging in the dental field, as the patient's jaw is only fixed manually and patients are typically not under local anaesthesia during dental procedures. These challenges lead to a lengthy learning curve for providing treatment accurately [13,14]. Moreover, resource-limited areas may lack budgets for well-trained personnel.
In this work, we utilise an augmented reality (AR) head-mounted
display (HMD) to develop a platform for visualising dental images
from multiple modalities. We also use the HMD as a guidance tool
for positioning of an imaging probe during repetitive monitoring of
dental lesions and their treatments. We built a prototype system
using the Magic Leap One AR headset and two dental imaging modalities, OCT and infrared SFE. The key contributions of our
work are (i) the design and development of a novel end-to-end
system for multi-modal dental image visualisation, (ii) a technique
for guided image capture using SFE, and (iii) quantitative evalua-
tions as well as a user study to evaluate the usefulness, usability
and limitations of our system and identify areas for future work.
To the best of the authors' knowledge, this is the first pilot study to develop an HMD-based AR environment for visualisation and guidance in optically monitoring the status of dental lesions. Continued advances in AR devices and dental imaging modalities, as well as systems that combine these two technologies, will together push traditional dental practice towards this ideal future.
2. Related work: Near-infrared (NIR) optical imaging has been shown to have the potential to detect early-stage dental decay more reliably [15,16]. In NIR reflectance images, dental decay appears brighter than the surrounding sound areas due to its increased scattering coefficient [17]. OCT is a 3D volumetric imaging technique and has been used for NIR imaging of dental decay [18]. Fig. 2a shows a prototype OCT system imaging an extracted human tooth and a slice of the 3D OCT scan, in which two interproximal dental lesions appear as bright spots. OCT systems are expected to be expensive when introduced to dental clinics, and a complete 3D scan currently takes at least several minutes on prototype systems. Also, the OCT probe is bulky and requires expert manipulation to acquire high-quality scans. OCT is therefore best suited as the pre-operative imaging modality used in clinics.
The SFE is a 2D imaging technique with the advantages of a miniature probe tip and expected low cost. Many SFE prototypes have been used for real-time NIR dental imaging in previous work [19–21]. Fig. 2b shows an SFE imaging an extracted human tooth and the resulting SFE image, in which the white patterns on both sides of the tooth indicate two interproximal dental lesions. In the figure, the SFE images from the biting surface of the tooth, but since NIR light penetrates around 3 mm below the surface [20], interproximal dental lesions under the surface also show up in the image. This is very helpful for dental decay hidden between neighbouring teeth and not directly accessible to the operator. Due to these advantages, the SFE is well suited for quick intra-operative screening and long-term monitoring.
AR technology has been introduced into research on dental implant placement [22–26], oral and maxillofacial surgery [14,27–29], orthodontics [30], and dental education [31,32]. In previous work, AR has assisted clinicians by displaying and registering virtual models in the operating field, thus reducing the difficulty of hand–eye coordination. However, there is as yet no study aimed at assisting dental imaging modalities in the detection and monitoring of dental decay [33]. Among available AR devices, HMDs have the advantage of compactness and intuitiveness compared to handheld or armature-mounted AR devices. For this study, we chose the Magic Leap One [34] AR headset as the hardware platform. Magic Leap One also includes a hand-held controller with a home button, a bumper, a trigger, and a touchpad.
3. Methods: The proposed workflow and corresponding technical
components are described in Fig. 3. During the initial appointment
in dental clinics with high resource availability, a pre-operative
3D raw image is acquired and transferred onto the AR headset; dentists can then examine the 3D image in the AR environment intra-operatively and make a diagnosis based on the observed position,
dimension and severity of dental decays. During this process, the
dentist can translate, rotate, and scale the 3D image at will to
view it from an optimal viewing angle based on their preference
and experience. The dentist can also adjust display parameters
including intensity, opacity, and contrast threshold to optimise
decay visibility and also account for varying external lighting
conditions. Furthermore, they can examine the image by slicing
through the 3D structure to accurately locate the decay.
For long-term monitoring, the dentist can select the desired angle
of view for future repetitive 2D imaging. A virtual model of the tooth and imaging instrument, with registered spatial relationships,
is generated and stored. During the monitoring phase, 2D imaging
can be performed regularly within or outside of a clinical setting,
using the virtual model as guidance. In order to reproduce the ref-
erence image, the operator aligns the position of the selected tooth
and the imaging probe with respect to the virtual model so that the
same desired view angle is preserved. The imaging probe can be aligned manually or via tracking-based alignment. 2D images are then transferred into the AR environment and fused
Fig. 2 Demonstration of two NIR dental imaging modalities and their images
a An OCT probe imaging an extracted human tooth; a slice of the 3D OCT scan, where the bright patterns indicate demineralised regions of enamel (dental lesion)
b An SFE probe imaging an extracted human tooth; SFE image, where the bright patterns (marked by arrows) indicate high optical reflectance from dental decay regions
Fig. 1 Comparison of traditional and ideal dental care patterns for tooth decay management. Blue text indicates areas under active development; purple text indicates how our work supports the new approach to healing dental decay
Fig. 3 Diagram of workflow and corresponding technical components
with the 3D image and all previous 2D images for comparison. The
operator or remote dentist can change the desired angle of view
according to updated 2D images throughout the period of monitor-
ing. After 2D SFE images are acquired, they are fused with the 3D image and transferred to a dentist, with computer-aided image analysis, for interpretation. By comparing historical images with the present one, the dentist can determine whether the dental decay is healing or progressing under the current prescription and adjust the prescription accordingly (such as the frequency and dose of medicine application, and/or the time of the next dental visit). We prototyped a software system based on this prin-
ciple using Unity [35] (version 2019.1.0f1) with Magic Leap
Lumin SDK [34].
3.1. AR-assisted visualisation of pre-operative 3D image: In our
pilot study, a pre-operative 3D image of the tooth is acquired
using a pre-commercial 1310 nm swept-source OCT system (Yoshida Dental Mfg., Tokyo, Japan) with a 110 nm band and a 50 kHz scan rate. The OCT 3D scan is taken from the occlusal view with an imaging range of 10 × 10 × 8 mm³ and an axial imaging resolution of 11 µm. The raw data from the OCT imaging system is
first converted into point cloud data and downsampled to reduce
the data size without losing useful features. The point intensities
are then rescaled to increase the dynamic range. The point cloud
data is then rendered as a 3D volumetric object using an
open-source Unity package for volumetric rendering [36].
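As an illustration of this preprocessing, the sketch below converts a raw intensity volume into a downsampled, rescaled point cloud. It is a minimal sketch in Python (the actual system runs in Unity), and the stride and percentile bounds are illustrative assumptions rather than the authors' exact parameters.

```python
import numpy as np

def oct_volume_to_pointcloud(volume, stride=4, low_pct=5.0, high_pct=99.5):
    """Convert a raw OCT intensity volume into a downsampled,
    intensity-rescaled point cloud of shape (N, 4): x, y, z, intensity."""
    # Downsample by striding each axis, reducing data size while
    # keeping the coarse features (lesions span many voxels).
    small = volume[::stride, ::stride, ::stride].astype(np.float32)

    # Rescale intensities to stretch the dynamic range: clip to robust
    # percentile bounds, then map linearly to [0, 1].
    lo, hi = np.percentile(small, [low_pct, high_pct])
    small = np.clip((small - lo) / (hi - lo + 1e-8), 0.0, 1.0)

    # Emit one point per remaining voxel with its rescaled intensity.
    xs, ys, zs = np.indices(small.shape)
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel(), small.ravel()], axis=1)

# Example with a synthetic volume standing in for the raw OCT data.
cloud = oct_volume_to_pointcloud(np.random.rand(128, 128, 100))
print(cloud.shape)  # (25600, 4)
```

Striding trades spatial resolution for data size and rendering speed, while percentile clipping prevents a few saturated voxels from compressing the dynamic range of the lesion contrast.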
Slicing through three orthogonal directions is implemented to
allow users to inspect inner structures of the tooth. By examining
cross-section slices, dentists can comprehensively inspect the loca-
tion and size of dental lesions. More importantly, dentists can find
out how deep the dental decay has progressed into the dental
enamel layer, which determines whether a drill-and-fill procedure is needed or medicinal treatment with long-term monitoring should be prescribed. Since the visualisation needs to accom-
modate different lighting conditions and user preferences, adjust-
ment of three display parameters is provided. Users can adjust the intensity value to change the overall brightness of the volumetric display. They can also adjust the threshold value for saturation, hiding areas with low contrast. The opacity value can be adjusted to determine the transparency of the volume. Appropriate opacity values allow the user to see the surface structure of the tooth as well as inner features like dental decay or a crack without having to inspect every slice, thus providing an initial, intuitive sense of the existence, position, and structure of these features.
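A minimal sketch of how these three controls might act on each rendered point is shown below, assuming normalised intensities and a simple linear transfer function; the parameter names and mapping are assumptions, not the actual shader used by the volume-rendering package [36].

```python
import numpy as np

def apply_display_params(intensity, positions, gain=1.0, threshold=0.2,
                         opacity=0.5, slice_min=(0.0, 0.0, 0.0),
                         slice_max=(1.0, 1.0, 1.0)):
    """Map normalised point intensities (0..1) to RGBA for display.

    gain      -- overall brightness of the volumetric display
    threshold -- saturation cutoff; points below it are hidden
    opacity   -- global transparency of the volume
    slice_*   -- normalised slab bounds for slicing along the three axes
    """
    v = np.clip(intensity * gain, 0.0, 1.0)
    visible = v >= threshold                      # hide low-contrast areas
    # Orthogonal slicing: cull points outside the current slab bounds.
    inside = np.all((positions >= slice_min) & (positions <= slice_max), axis=1)
    alpha = np.where(visible & inside, v * opacity, 0.0)
    grey = np.where(visible & inside, v, 0.0)
    return np.stack([grey, grey, grey, alpha], axis=-1)  # one RGBA per point
```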
Slicing and display adjustment are implemented as sliders on a
panel. The controller is used to select and adjust the sliders. The panel and the pre-operative 3D image can be selected by aiming the controller at them and holding down the trigger; they can then be moved by physically translating or rotating the controller. When the panel or the image is selected, users can also rescale them by pressing the left side of the touchpad to shrink them and the right side to enlarge them. See the video in the supplementary material for an interaction demo.
3.2. AR-assisted guidance for 2D imaging: Guidance for 2D
imaging is necessary not only because it helps non-dentist personnel take 2D images at the desired view angles, but also because it guarantees that the field of view and perspective of the 2D images remain constant during repetitive imaging, so that the series of images can be quantitatively compared. After dentists spot decay
on the OCT 3D image, they can designate the desired view angle
to take 2D images so that the decay can be detected by 2D
images. In the view angle selection mode, a virtual cone shape is attached to the end of the controller, corresponding to the view frustum of the endoscope. Since the NIR SFE has a disc-shaped field
of view which grows larger when the target is further away from
the probe, a cone can be used to represent the field of view of
SFE. The user can aim the cone at the OCT 3D image and adjust
the area covered by the cone, as shown in Fig. 4a. The user presses the bumper to indicate that the desired view angle is chosen, and a virtual reference model, consisting of the 3D tooth surface model registered with an SFE probe model according to the indicated view angle, is generated for future guidance. The 3D
tooth surface model is acquired by an intra-oral scanner (3Shape
TRIOS 3, 3Shape, Copenhagen, Denmark).
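The cone geometry follows directly from this disc-shaped field of view: the imaged disc radius grows linearly with working distance. A small sketch, in which the half-angle value is purely illustrative and not the SFE's specified view angle:

```python
import math

def sfe_fov_disc_radius(distance_mm, half_angle_deg=30.0):
    """Radius (mm) of the disc-shaped SFE field of view at a given
    working distance, modelling the view frustum as a cone."""
    return distance_mm * math.tan(math.radians(half_angle_deg))

# The virtual cone attached to the controller is scaled the same way:
# at a 5 mm working distance, a 30 deg half-angle images a ~2.9 mm disc.
print(round(sfe_fov_disc_radius(5.0), 2))  # 2.89
```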
In this pilot study, we strive to keep the system and workflow as
concise as possible, so we are not using any fiducial-point-based
tracking, which requires an additional tracker. Furthermore, the alignment between the virtual tooth model and the real tooth is done manually by the user. Since the virtual tooth model is the
3D surface structure scan from the same tooth, the user can
shrink the model to the same size as the tooth and align them.
The next step is to use the reference model to guide 2D imaging, where the user needs to align the real probe with the virtual probe model.
Aligning the SFE probe to the virtual model is more difficult since the probe is of a smaller scale. Therefore, we designed two virtual SFE probe models, a cylinder model and a tri-colour-plane model, as shown in Figs. 4b and c.
Besides manual alignment, there are also two tracking-based
methods supported by hardware systems on Magic Leap One.
The first method is based on the image-tracking API provided by
Magic Leap [37]. The front-view camera and depth camera on
the headset can be used for tracking the spatial position and rotation
of a flat image. The target image is printed at a size of 3.4 × 3.2 cm² and attached to the SFE probe. Then the tracked
position and rotation of the target image can be transformed to
the position and rotation of the probe, assuming the offset
between the probe and target image remains rigid and unchanged.
The second method is based on the electromagnetic 6-DoF spatial
tracking of the control handle [38]. By rigidly fixing the SFE probe to the control handle, the tracked position and rotation of the
controller can be transformed into the position and rotation of the
probe. Once the probe is being tracked, a red cylinder virtual
model is shown to indicate the tracked position and rotation.
Then the user needs to align the red cylinder virtual model (the
tracked position and rotation of the real probe) with the virtual
probe model (desired position and rotation for positioning the
real probe).
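Both tracking methods reduce to the same computation: composing the tracked pose (of the printed image or the controller) with a fixed rigid offset to obtain the probe-tip pose. A sketch using 4×4 homogeneous transforms, where the offset values are placeholders for a real calibration:

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def probe_pose(world_T_target, target_T_probe):
    """Compose the tracked target pose (printed image or controller) with
    the fixed target-to-probe offset to get the probe-tip pose in the world."""
    return world_T_target @ target_T_probe

# Placeholder calibration: probe tip 40 mm along the target's z-axis.
target_T_probe = pose_matrix(np.eye(3), [0.0, 0.0, 0.040])
# Pose reported by the tracking API (identity rotation here for simplicity).
world_T_target = pose_matrix(np.eye(3), [0.10, 0.05, 0.30])
world_T_probe = probe_pose(world_T_target, target_T_probe)
print(world_T_probe[:3, 3])  # probe tip position: [0.1  0.05 0.34]
```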
3.3. Data transfer and image fusion: The 2D SFE images are
transferred from the instrument to the AR headset via a web
server. A polling-based scheme downloads newly acquired
images onto the headset, over HTTP. 2D SFE images and the 3D
OCT image can then be registered according to the view angles
with which the SFE images were taken. As shown in Fig. 5, an occlusal-view SFE image is registered with the OCT 3D image.
With the image fusion, users can interpret and compare images
from multiple modalities and also inspect the condition of decays
during monitoring of therapy.
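A minimal sketch of such a polling client is shown below, written in Python for brevity (the headset application itself runs in Unity); the server address and endpoint layout are hypothetical, not the actual server API.

```python
import time
import requests  # third-party HTTP client

SERVER = "http://sfe-server.local:8000"  # hypothetical SFE web server address
seen = set()

def poll_new_images(interval_s=1.0):
    """Repeatedly ask the server which images exist and download any that
    have not been fetched yet -- a simple polling scheme over HTTP."""
    while True:
        # Hypothetical endpoint returning a JSON list of image filenames.
        names = requests.get(f"{SERVER}/images").json()
        for name in names:
            if name not in seen:
                data = requests.get(f"{SERVER}/images/{name}").content
                with open(name, "wb") as f:
                    f.write(data)
                seen.add(name)  # hand the new image to the AR scene for fusion
        time.sleep(interval_s)
```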
Fig. 4 Design of view selection and probe models
a Using the cone model to select the desired angular view for consistent 2D imaging
b The tri-colour-plane model for probe alignment
c The cylinder model for probe alignment
4. Evaluation
4.1. Experiments: To measure the augmentation quality, we set up a 3D grid coordinate system as shown in Fig. 6a. The grid paper has 1 mm fine grids, 5 mm medium grids and 1 cm large grids. Once the hologram is manually aligned with the object, the observer uses a sharp pointer to localise the position of a certain point on the hologram and then measures the distance between the corresponding points on the real object and the hologram. Jitter and perceived drift of the hologram are quantified by the translation distance measured on the grid paper.
To measure the alignment performance, we also measure the
end-to-end accuracy quantified by keypoint displacement in
acquired SFE images. We choose to image a USAF resolution
test chart as shown in Fig. 6b, to simplify the accurate extraction
of keypoints in SFE images. Ten key points are selected on the
test chart. The user first aligns the SFE probe in front of the test chart at the desired viewpoint and takes one image. Then, after putting the SFE probe down for a while, the user realigns the SFE probe with or without guidance and takes another SFE image, attempting to replicate the same viewpoint as in the first image. Three guidance approaches are used in turn to guide the repositioning of the SFE probe: 'without any guidance' means that the user aligns the probe only according to their memory of the desired probe position, without referring to the real-time SFE video; 'with AR guidance' means that the user aligns the probe with the AR hint of the desired probe position; 'with video guidance' means that the user aligns the probe by referring to the real-time SFE video and comparing it with the reference image. The three guidance approaches are used in random order over ten runs to avoid training bias. The time taken to realign the probe to the desired position is
recorded. The $x$ and $y$ positions of the $i$th keypoint are measured in pixels in the reference image and the repetitive image as $(p^{\mathrm{ref}}_{x_i}, p^{\mathrm{ref}}_{y_i})$ and $(p^{\mathrm{rep}}_{x_i}, p^{\mathrm{rep}}_{y_i})$. The overall keypoint displacement $D$ of the repetitive image is then calculated according to

$$D = \frac{1}{10} \sum_{i=1}^{10} \sqrt{\left(p^{\mathrm{rep}}_{x_i} - p^{\mathrm{ref}}_{x_i}\right)^2 + \left(p^{\mathrm{rep}}_{y_i} - p^{\mathrm{ref}}_{y_i}\right)^2}$$
Over the ten runs, the mean and standard deviation of $D$ are quantified and used to evaluate the three guidance approaches, along with the time taken.
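For concreteness, $D$ can be computed directly from the two sets of keypoint coordinates. A worked sketch with synthetic values, in which every keypoint is shifted by (3, 4) pixels so that $D = 5$:

```python
import numpy as np

def keypoint_displacement(ref_pts, rep_pts):
    """Mean Euclidean displacement D (pixels) over the keypoints between
    the reference image and the repeated image."""
    ref = np.asarray(ref_pts, dtype=float)  # shape (10, 2): (x_i, y_i)
    rep = np.asarray(rep_pts, dtype=float)
    return float(np.mean(np.linalg.norm(rep - ref, axis=1)))

# Synthetic check: shifting every keypoint by (3, 4) px gives D = 5 px.
ref = np.tile([100.0, 100.0], (10, 1))
rep = ref + np.array([3.0, 4.0])
print(keypoint_displacement(ref, rep))  # 5.0
```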
4.2. User study: We conducted a user study to gather user feedback on this prototype. We used a dentoform model with an extracted
human tooth installed on it, as shown in Fig. 6c. The extracted
human tooth has two artificial dental lesions on its interproximal
surfaces. OCT 3D image, occlusal-view SFE 2D image as well as
3D surface shape scan were acquired from this sample, as shown
in Fig. 7.
Six subjects were recruited and asked to conduct the tasks with
the system, to walk through the workflow. Among the six subjects,
three self-reported as dental students or clinicians, while the other
three were general users without specialised dental knowledge.
All users were new to this AR system and the workflow. The protocol that subjects were asked to perform using the Magic Leap One was as follows: (i) examine the 3D OCT image in the headset by slicing and adjusting display parameters; (ii) use the cone to select the desired view angle; (iii) manually align the virtual model with the real tooth; (iv) align the SFE probe with the virtual probe model and compare the two virtual probe models. Manual alignment, image-tracking-based alignment, and controller-tracking-based alignment were also compared.
After the tasks were completed, the users were asked to fill out a
questionnaire anonymously. See the supplementary material for the questionnaire template.
5. Results and discussion: In the quantitative measurements,
we measured the augmentation quality between the hologram and the object manually aligned together. We noticed that the augmentation quality is influenced by jitter, perceived drift, and latency, which degrade perception as well as the accuracy and efficiency of the alignment procedure. Jitter is the continuous shaking of the hologram; we measured jitter within 1 mm, which is at the edge of our acceptable range considering that the tooth has a dimension of around 10 mm. Perceived drift occurs when the observer moves around a hologram and the perceived position of the hologram drifts away; we measured the perceived drift within 5 mm when the observer takes two orthogonal viewpoints. The perceived drift limits users from observing from multiple viewpoints to align the probe with the hologram. However, considering that users are not able to move around freely when aligning the probe, the perceived drift may be less critical for our prototype. Latency is the time lag of
hologram update when the user moves their head and is
determined by the distance of head movement. The measured latency is within 2 s when head motion is within the
Fig. 6 Experiment setup
a 3D grid coordinate system for measuring augmentation accuracy between hologram and object
b USAF resolution test chart for measuring end-to-end accuracy during probe repositioning. Ten key points are selected from square corners marked by red dots
c Dentoform model with an extracted human tooth installed on top. There are two artificial dental decays on the interproximal surfaces, marked by the two red arrows
Fig. 5 Fusion of OCT 3D image and SFE 2D images
Fig. 7 Extracted human tooth with artificial interproximal decays
a Photograph of the extracted human tooth with two artificial interproximal lesions
b One slice of the OCT 3D image of the tooth
c NIR occlusal-view SFE image
d 3D surface shape scan of the tooth. Note that in (b) and (c), the blue frame indicates an artificial dental decay deep into the dentin, the orange frame indicates an artificial dental decay less than halfway into the enamel, and the green circle indicates a natural dental decay in the groove under the biting surface
general range needed for performing the imaging procedure. We
also measured the accuracy of image-tracking-based alignment and controller-tracking-based alignment. The image-tracking-based alignment suffers from the limited capability of the front-facing camera: the image tracking has an error of up to 4 mm and may lose the target when the printed target image moves fast. Furthermore, when the background environment is cluttered, the image tracking may recognise the wrong target. It is recommended that image tracking be used in a well-lit space, avoiding black or very uniform surfaces as well as reflective surfaces like mirrors or glass. The controller-tracking-based alignment suffers from hologram drift when the electromagnetic sensor is rotated or moved close to conducting surfaces. All that being said, the current image-tracking- and controller-tracking-based alignment approaches suffer from instability and accuracy issues and need improvement, either in hardware or in the tracking scheme design. So far, manual alignment appears more robust in terms of accuracy and efficiency.
The end-to-end accuracy and efficiency of manual alignment are quantified by the keypoint displacement between the acquired reference SFE image and the repetitive SFE image, each 400 × 400 pixels. As shown in Table 1, AR guidance offers better repositioning accuracy than no guidance and faster repositioning than using the real-time SFE video for guidance. By transferring the real-time SFE video to the AR headset and placing it near the operating field, we may further improve the accuracy and efficiency of our prototype.
In the user study, the average time taken to train each subject to general proficiency with the system (i.e. familiar with the interaction techniques and able to use them to accomplish the workflow) was 15 min, which is quite fast considering their unfamiliarity with AR devices. Afterwards, all subjects were able to accomplish
the protocol. During prototyping and quantitative evaluation, we identified the following factors that may influence the workflow and therefore included qualitative questions regarding their effects: (i) the latency, which may impede the accuracy and efficiency of aligning the tooth and probe with the virtual models due to their small scale, and (ii) the available field of view of the headset. For the Magic Leap One, the width and height of the AR field of view are currently the largest on the market, and the interface design also avoids borders of frames to mitigate the sense of the limited field of view. However, when the user is too close to the virtual objects, the virtual objects are cut off by a clipping plane. This limits users to working from a distance of about 37 cm from the virtual objects, which means users may have to keep their arms extended away from their body during the alignment tasks. Five subjects felt the latency
was noticeable but it did not impede their workflow, while one
dental clinician felt the latency of the headset was an impediment.
Five subjects reported that the limits of the AR field of view within
the headset were unnoticeable, while only one general user thought the clipping plane of the headset caused discomfort/distraction during the workflow.
As for feedback on the workflow, all three dental personnel thought the AR-assisted visualisation of OCT is an improvement over a standard screen display, in the sense of allowing flexible movement in space while preserving the same information as the standard display. The two dental clinicians who are familiar with OCT images were able to localise the position of both artificial interproximal lesions (decay) and even the natural decay in the groove. The dental student was not familiar with OCT images and so was not able to do this. They did comment that the rendering speed of the OCT image may become a problem when more 3D scans need to be acquired. All three dental personnel and one general user thought the AR-assisted guidance for SFE 2D imaging is easier than no guidance, while the two other general users thought it was more difficult. These two general users commented that the manual alignment of the virtual tooth model and the real tooth is complicated for one major reason: depth perception does not work well when accurately aligning a virtual object with a real object. This is caused by an inherent issue called occlusion leak, which has also been reported for other AR devices like the HoloLens [39], and there is ongoing research on solving this issue [40]. The image tracking and controller tracking also sometimes suffer from instability. The choice between manual alignment and tracking-based alignment methods seems to be up to personal preference. In terms of the virtual probe model, all three general users preferred the tri-colour-plane model, while the three dental personnel had varying preferences. It is therefore advantageous to have both virtual probe models available and provide an interface to switch between the two.
This first-ever prototype showed both clinical potential and
technical limitations in our study, which we believe will be a
useful reference for future research. First, the AR display can relieve clinicians and general users from constantly switching views between the patient and a computer screen and the consequent hand–eye coordination problem. Importantly, the AR display preserves the required information in the composite images. Second, this system can assist in the adoption of multiple dental imaging modalities into clinical use, such as safe and informative infrared optical imaging. Since images from multiple modalities can be integrated into the system to provide supplementary information for clinicians, this shortens the learning curve for clinicians using these new imaging modalities and also improves the reliability and sensitivity of dental decay quantification.
Notably, the prototype can be easily generalised to other dental imaging modalities available in clinics, such as CBCT and NIR and fluorescence dental cameras. Also, most of these imaging modalities, along with intra-oral scanners, are common in dental clinics. The SFE we use in this study is not commercial but is expected to be a low-cost NIR imaging modality. The other addition is the AR headset, which continues to get cheaper. Thus, our prototype is both generalisable and cost-effective. Lastly, the proposed solution can assist repetitive imaging of dental decay for therapy monitoring, which is the core of the ideal dental care protocol for tooth decay management that maintains the integrity of teeth. There are definite limitations in our prototype
reported above. Some limitations stem from the inherent restrictions
of the Magic Leap One hardware, such as jitter, perceived
drift, latency, occlusion leak and limited FOV. We believe
that the rapid progress of AR HMD products will help resolve
these limitations. Other limitations stem from our own software and workflow designs, such as the inaccuracy of manual alignment, which may be resolved by an improved tracking mechanism design. See the supplementary material for a video demo of our system in use.
6. Conclusion: In this work, we proposed an AR-assisted
visualisation and guidance system for imaging of dental
decay. We introduced a novel workflow, implemented as a software application on the Magic Leap One AR headset. We evaluated the multimodal system and workflow through quantitative measurements as well as a pilot user study, recognising that the prototype can be generalised to other, more conventional dental imaging modalities, such as 3D CBCT and 2D oral cameras.

Table 1 Comparison of different imaging guidance approaches

Imaging guidance approach | Keypoint displacement, px | Time taken, s
without any guidance | 83 ± 10 | 3
with AR guidance | 31 ± 11 | 10
with video guidance | 7 ± 2 | 20

Thus, with the addition of an AR headset and
a low-cost 2D imaging modality like SFE, our prototype can be
adapted into dental clinics and rural community health centres.
7. Funding and declaration of interests: Financial support was provided by the US NSF PFI:BIC 1631146 award and VerAvanti Inc. Equipment support was provided by the NIH/NIDCR R21DE025356 grant and Yoshida Dental Mfg. Corp. A.S. was supported by the University of Washington (UW) Reality Lab,
Facebook, Google, and Huawei. The authors have no personal conflicts of interest outside the UW. UW receives license fees and funding from Magic Leap Inc., and VerAvanti has licensed SFE patents from UW for medical applications.
8. References
[1] Kassebaum N.J., Bernabé E., Dahiya M., ET AL.: 'Global burden of untreated caries: a systematic review and metaregression', J. Dent. Res., 2015, 94, (5), pp. 650–658
[2] Kassebaum N.J., Smith A.G.C., Bernabé E., ET AL.: ‘Global, regional,
and national prevalence, incidence, and disability-adjusted life years
for oral conditions for 195 countries, 1990–2015: a systematic
analysis for the global burden of diseases, injuries, and risk
factors’,J. Dent. Res., 2017, 96, (4), pp. 380–387
[3] Featherstone J.D., Fontana M., Wolff M.: ‘Novel anticaries and
remineralization agents: future research needs’,J. Dent. Res., 2018,
97, (2), pp. 125–127
[4] Rozier R.G., White B.A., Slade G.D.: ‘Global burden of untreated
caries: a systematic review and metaregression’,J. Dent. Res.,
2017, 81, (8), pp. 97–106
[5] Gupta N., Vujicic M., Yarbrough C., ET AL.: ‘Disparities in untreated
caries among children and adults in the U.S., 2011–2014’,J. Dent.
Res., 2018, 18, (1), p. 30
[6] Shah N., Bansal N., Logani A.: ‘Recent advances in imaging tech-
nologies in dentistry’,World J. Radiol., 2014, 6, (10), pp. 794–807
[7] Gardner G., Xu Z., Lee A., ET AL.: ‘Effects of mHealth applications on
pediatric dentists’fluoride varnish protocols’. IADR/AADR/CADR,
Vancouver, BC, Canada, 2019, 3183697
[8] Savas S., Kucukyilmaz E., Celik E.U.: ‘Effects of remineralization
agents on artificial carious lesions’,Pediatr. Dent., 2016, 38, (7),
pp. 511–518
[9] Fontana M., Eckert G.J., Keels M.A., ET AL.: ‘Fluoride use in health
care settings: association with children’s caries risk’,Adv. Dent.
Res., 2018, 29, (1), pp. 24–34
[10] Karlsson L.: ‘Caries detection methods based on changes in optical
properties between healthy and carious tissue’,Int. J. Dent., 2010,
270729, pp. 1–9
[11] Javed F., Romanos G.E.: ‘A comprehensive review of various laser-
based systems used in early detection of dental caries’,Stoma. Edu.
J., 2015, 2, (2), pp. 106–111
[12] Zhou Y., Jiang Y., Kim A.S., ET AL.: ‘Developing laser-based therapy
monitoring of early caries in pediatric dental settings’. Proc. SPIE
10044, Lasers in Dentistry XXIII, San Francisco, CA, USA, 2017,
p. 100440D
[13] Breedveld P., Stassen H.G., Meijer D.W., ET AL.: ‘Manipulation in
laparoscopic surgery: overview of impeding effects and supporting
aids’,J. Laparoendosc Adv. Surg. Tech. A., 1999, 9, (6), pp. 469–480
[14] Bosc R., Fitoussi A., Hersant B., ET AL.: ‘Intraoperative augmented
reality with heads-up displays in maxillofacial surgery: a systematic
review of the literature and a classification of relevant technologies’,
Int. J. Oral Maxillofac Surg., 2019, 48, (1), pp. 132–139
[15] Chung S., Fried D., Staninec M., ET AL.: ‘Multispectral near-IR reflect-
ance and transillumination imaging of teeth’,Biomed. Opt. Exp.,
2011, 2, (10), pp. 2804–2814
[16] Fried W.A., Fried D., Chan K.H., ET AL.: ‘High contrast reflectance
imaging of simulated lesions on tooth occlusal surfaces at near-IR
wavelengths’,Lasers Surg. Med., 2013, 45, (8), pp. 533–541
[17] Darling C.L., Huynh G., Fried D.: ‘Light scattering properties of
natural and artificially demineralized dental enamel at 1310nm’,
J. Biomed. Opt., 2006, 11, (3), pp. 1–11
[18] Machoy M., Seeliger J., Szyszka-Sommerfeld L., ET AL.: ‘The use of
optical coherence tomography in dental diagnostics: a state-of-the-art
review’,J. Healthc. Eng., 2017, 2017, p. 7560645
[19] Zhang L., Kim A.S., Ridge J.S., ET AL.: ‘Trimodal detection of early
childhood caries using laser light scanning and fluorescence spectro-
scopy: clinical prototype’,J. Biomed. Opt., 2013, 18, (11), p. 111412
[20] Zhou Y., Lee R., Finkleman S., ET AL.: ‘Near-infrared multispectral
endoscopic imaging of deep artificial interproximal lesions in
extracted teeth’,Lasers Surg. Med., 2019, 51, (5), pp. 459–465
[21] Lee R., Zhou Y., Finkleman S., ET AL.: ‘Near-infrared imaging of arti-
ficial enamel caries lesions with a scanning fiber endoscope’,Sensors,
2019, 19, (6), p. 1419
[22] Jiang J., Huang Z., Qian W., ET AL.: ‘Registration technology of aug-
mented reality in oral medicine: a review’,IEEE. Access., 2019, 7,
pp. 53566–53584
[23] Katic D., Spengler P., Bodenstedt S., ET AL.: 'A system for context-aware intraoperative augmented reality in dental implant surgery', Int. J. Comput. Assist. Radiol. Surg., 2015, 10, (1), pp. 101–108
[24] Lin Y.-K., Yau H.-T., Wang I.-C., ET AL.: ‘A novel dental implant
guided surgery based on integration of surgical template and augmen-
ted reality’,Clin. Implant Dentistry Rel. Res., 2015, 17, (3),
pp. 543–553
[25] Song T., Yang C., Dianat O., ET AL.: ‘Endodontic guided treatment
using augmented reality on a head-mounted display system’,
Healthcare Technol. Lett., 2018, 5, (5), pp. 201–207
[26] Ma L.F., Jiang W., Zhang B., ET AL.: ‘Augmented reality surgical navi-
gation with accurate CBCT-patient registration for dental implant
placement’,Med. Biol. Eng. Comput., 2019, 57, (1), pp. 47–57
[27] Won Y.-J., Kang S.-H.: ‘Application of augmented reality for inferior
alveolar nerve block anesthesia: A technical note’,J. Dental
Anesthesia Pain Med., 2017, 17, (2), pp. 129–134
[28] Bijar A., Rohan P.Y., Perrier P., ET AL.: 'Atlas-based automatic generation of subject-specific finite element tongue meshes', Ann. Biomed. Eng., 2016, 44, (1), pp. 16–34
[29] Wang J., Suenaga H., Yang L., ET AL.: ‘Video see-through augmented
reality for oral and maxillofacial surgery’,Int. J. Med. Robot.
Comput. Assist. Surg., 2017, 13, (2), p. e1754
[30] Aichert A., Wein W., Ladikos A., ET AL.: ‘Image-based tracking of the
teeth for orthodontic augmented reality’. Proc. 15th Int. Conf.
Medical Image Computing and Computer-Assisted Intervention
(MICCAI), Nice, France, 2012, pp. 601–608
[31] Onishi K., Mizushino K., Noborio H., ET AL.: ‘Haptic AR dental simu-
lator using Z-buffer for object deformation’. Universal Access in
Human-Computer Interaction. Aging and Assistive Environments,
Heraklion, Crete, Greece, 2014, pp. 342–348
[32] Wang D.X., Tong H., Shi Y.J., ET AL.: ‘Interactive haptic simulation of
tooth extraction by a constraint-based haptic rendering approach’.
Proc. IEEE Int. Conf. Robotics and Automation (ICRA), Seattle,
Washington, USA, 2015, pp. 26–30
[33] Farronato M., Maspero C., Lanteri V., ET AL.: ‘Current state of the art
in the use of augmented reality in dentistry: a systematic review of the
literature’,BMC Oral Health, 2019, 19, (135), pp. 1–15
[34] Magic leap one AR headset. Available at https://www.magicleap.
com/magic-leap-one, accessed: 2019-07-15
[35] Unity real-time development platform. Available at https://unity.com/
, accessed: 2019-07-15
[36] Unity package for volume rendering. Available at https://github.com/
mattatz/unity-volume-rendering, accessed: 2019-08-28
[37] Magic leap one AR headset image tracking API. Available at https://
creator.magicleap.com/learn/guides/sdk-example-image-tracking,
accessed: 2019-07-15
[38] Magic leap one AR headset controller tracking API. Available
at https://creator.magicleap.com/learn/guides/control-6dof, accessed:
2019-07-15
[39] El-Hariri H., Pandey P., Hodgson A.J., ET AL.: ‘Augmented reality
visualisation for orthopaedic surgical guidance with pre- and
intra-operative multimodal image data fusion’,Healthcare Technol.
Lett., 2018, 5, (5), pp. 189–193
[40] Itoh Y., Hamasaki T., Sugimoto M.: ‘Occlusion leak compensation
for optical see-through displays using a single-layer transmissive
spatial light modulator’,IEEE Trans. Visualization Comput.
Graphics, 2017, 23, (11), pp. 2463–2473