Towards AR-assisted Visualization and Guidance for Imaging of Dental Decay
Yaxuan Zhou1,3, Paul Yoo2, Yingru Feng2, Aditya Sankar2, Alireza Sadr4 and Eric J. Seibel3
1Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
2Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
3Human Photonics Lab, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
4School of Dentistry, University of Washington, Seattle, WA 98195, USA
E-mail: eseibel@uw.edu
Published in Healthcare Technology Letters; Received on xxx; Revised on xxx.
Untreated dental decay is the most prevalent dental problem in the world, affecting up to 2.4 billion people and leading to significant economic
and social burden. Early detection can greatly mitigate irreversible effects of dental decay, avoiding the need for expensive restorative treatment
that forever disrupts the enamel protective layer of teeth. However, two key challenges exist that make early decay management difficult:
unreliable detection and a lack of quantitative monitoring during treatment. New optically-based imaging through the enamel provides the
dentist a safe means to detect, locate, and monitor the healing process. This work explores the use of an Augmented Reality (AR) headset
to improve the workflow of early decay therapy and monitoring. The proposed workflow includes two novel AR-enabled features: 1) in-situ
visualization of pre-operative optically-based dental images and 2) augmented guidance for repetitive imaging during therapy monitoring.
The workflow is designed to minimize distraction, mitigate hand-eye coordination problems, and help guide monitoring of early decay during
therapy in both clinical and mobile environments. The results from quantitative evaluations as well as a formative qualitative user study
uncover the potential of our system and indicate that AR can serve as a promising tool in tooth decay management.
1. Introduction: Oral health problems have remained a major public health challenge worldwide over the past 30 years, leading to economic and social burden[1, 2, 3]. Among them, untreated dental decay is the most prevalent issue and is linked to socio-economic disparities[4, 5]. As shown in Fig.1, the traditional dental care pattern for dental decay management consists of routine examination in clinics, non-destructive treatments for detected early decays, and destructive treatments for irreversible decays. There are three limitations in this pattern. Firstly, visual or tactile examination and the current gold-standard x-ray radiography cannot reliably detect interproximal and occlusal lesions in a timely manner[6], and these are the most common types of dental decay. Secondly, medicinal therapy and instructed cleaning are performed by patients at home without supervision, and patients need to revisit the dental clinic, which limits timely monitoring of decay and often allows further progression into irreversible decay. Lastly, the treatments for irreversible lesions, such as the drill-and-fill procedure, root canal treatment, and even dental implants, are all destructive, painful, expensive, and time-consuming. These limitations need to be solved to develop an ideal dental care procedure for decay management, also shown in Fig.1. If early-stage lesions can be detected reliably, patients can be prescribed medicinal therapies and instructed/directed cleaning over time outside the dental clinic[7, 3, 8]. Also, if the current monitoring based on clinic revisits can be supplemented by monitoring at a community health center or even the patient's home, with data shared with dentists, then timely interventions can be made with fewer clinic visits and less burden on both dentists and patients[3, 9]. Early decays can then be detected and healed in time, avoiding destructive and costly procedures. Continued research into such an ideal management of tooth decay is needed[3].
To move towards this ideal pattern, there have been significant
strides towards developing reliable, sensitive and low-cost imaging
modalities to diagnose early decays[10, 11]. 3D imaging modalities
such as cone-beam computed tomography (CBCT) and optical
coherence tomography (OCT) are reliable and sensitive but usually require long imaging times on expensive clinical systems. Clinicians
typically perform 3D imaging pre-operatively and use the 3D image
for planning and intra-operative reference. For intra-operative
Figure 1 Comparison of traditional and ideal dental care patterns for tooth
decay management. Blue text marks areas under active development; purple text indicates how our work supports the new approach to healing dental decay.
imaging and also remote monitoring, clinicians also need a 2D
imaging modality, e.g. the scanning fiber endoscope (SFE).
Along with the development of imaging modalities, the ease
of use for dental imaging needs to be improved in general.
Acquiring high-quality images from a desired perspective usually
requires expert manipulation of the instrument. For example, to
effectively monitor the condition of a carious lesion with SFE,
users need to image the decay from the same perspective every
time, which is difficult without any assistance[14]. Also, using
the previous images for navigation requires hand-eye coordination.
Clinicians need to divert their attention to the display monitor
while manually positioning the scope, additionally compensating
Healthcare Technology Letters, pp. 1–6
© The Institution of Engineering and Technology 2013
This article has been accepted for publication in a future issue of this journal, but has not been fully edited.
Content may change prior to final publication in an issue of the journal. To cite the paper please use the doi provided on the Digital Library page.
Figure 2 (a) An OCT probe imaging an extracted human tooth; a slice of
the 3D OCT scan, where the bright patterns indicate demineralized regions
of enamel (dental lesion). (b) An SFE probe imaging an extracted human
tooth; SFE image, where the bright patterns (marked by arrows) indicate
high optical reflectance from dental decay regions.
for the patient's movement. This is particularly challenging in the dental field, as the patient's jaw is only fixed manually and patients are typically not under local anesthesia during dental procedures. These challenges lead to a lengthy learning curve for providing treatment accurately[15, 16]. Moreover, resource-limited areas may lack the budget for well-trained personnel.
In this work, we utilize an AR head-mounted display (HMD)
to develop a platform for visualizing dental images from multiple
modalities. We also use the HMD as a guidance tool for positioning
of an imaging probe during repetitive monitoring of dental lesions
and their treatments. We built a prototype system using the Magic Leap One AR headset and two dental imaging modalities: OCT and the infrared SFE. The
key contributions of our work are 1) the design and development
of a novel end-to-end system for multi-modal dental image
visualization, 2) a technique for guided image capture using SFE,
and 3) quantitative evaluations as well as a user study to evaluate
the usefulness, usability and limitations of our system and identify
areas for future work.
To the best of the authors' knowledge, this is the first pilot study to develop an HMD-based AR environment for visualization and guidance in optically monitoring the status of dental lesions. Continued advances in AR devices, dental imaging modalities, and systems that combine these two technologies will together push traditional dental practice toward this ideal future.
2. Related Work: Near-infrared (NIR) optical imaging has been shown to have the potential to detect early-stage dental decay more reliably[18, 19]. In NIR reflection images, dental decay appears brighter than surrounding sound areas due to an increased scattering coefficient[20]. OCT is a 3D volumetric imaging technique and has
been used for NIR imaging of dental decay[21]. Fig.2(a) shows a
prototype OCT system imaging an extracted human tooth and a
slice of the 3D OCT scan where two interproximal dental lesions
appear as bright spots. OCT systems are expected to be expensive when introduced into dental clinics, and a complete 3D scan currently takes at least several minutes on prototype systems. Also, the OCT probe is bulky and requires expert manipulation to acquire high-quality scans. Thus OCT is better suited as the pre-operative imaging modality used in clinics. The SFE is a 2D imaging technique with the advantages of a miniature probe tip and expected low cost, and prototypes have been used for real-time NIR dental imaging in previous works[12, 13, 22]. Fig.2(b) shows SFE imaging
an extracted human tooth and the SFE image where the white
patterns on both sides of tooth indicate two interproximal dental
lesions. In the figure, the SFE images from the biting surface of the tooth, but since NIR light penetrates around 3 mm below the surface, the interproximal dental lesions under the surface also show up in the image. This is very helpful for dental decay that is hidden between neighboring teeth and not accessible to the operator. Owing to these advantages, the SFE is well-suited for quick intraoperative screening and long-term monitoring.
AR technology has been introduced into research on dental implants[17, 23, 24, 25, 26], oral and maxillofacial surgery[16, 27, 28, 29], orthodontics[30], and dental education[31, 32]. In previous work, AR has assisted clinicians by displaying and registering virtual models in the operating field, thereby reducing the difficulty of hand-eye coordination. However, there is as yet no study aimed at assisting dental imaging modalities for
detection and monitoring of dental decay[33]. Among all available
AR devices, head-mounted displays (HMDs) have the advantages of compactness and intuitiveness (compared to handheld or armature-mounted AR devices). For this study, we chose the Magic Leap One[34] AR headset as the hardware platform. The Magic Leap One also includes a hand-held controller with a home button, a bumper, a trigger, and a touchpad.
3. Methods: The proposed workflow and corresponding technical
components are described in Fig.3. During the initial appointment
in dental clinics with high resource availability, a pre-operative
3D raw image is acquired and transferred onto the AR headset; dentists can then examine the 3D image in the AR environment intra-operatively and make a diagnosis based on the observed position, dimension, and severity of dental decay. During this process, the dentist can translate, rotate, and scale the 3D image at will to view it from an optimal angle based on their preference and experience. The dentist can also adjust display parameters, including intensity, opacity, and contrast threshold, to optimize decay visibility and account for varying external lighting conditions. Furthermore, they can examine the image by slicing through the 3D structure to accurately locate the decay.
For long-term monitoring, the dentist can select the desired angle
of view for future repetitive 2D imaging. A virtual model of the tooth and imaging instrument, with registered spatial relationships, is then generated and stored. During the monitoring phase, 2D imaging can be performed regularly within or outside a clinical setting, using the virtual model as guidance. To reproduce the reference image, the operator aligns the selected tooth and the imaging probe with the virtual model so that the same desired view angle is preserved. Alignment of the imaging probe can be done manually or via tracking. 2D
images are then transferred into the AR environment and fused with the 3D image and all previous 2D images for comparison. The operator or remote dentist can change the desired angle of view according to updated 2D images throughout the monitoring period. After 2D SFE images are acquired, they are fused with the 3D image and transferred to a dentist, with computer-aided image analysis, for interpretation. By comparing the historical images to the present ones, the dentist can determine whether the dental decay is healing or progressing under the current prescription and adjust the prescription accordingly (such as the frequency
Figure 3. Diagram of workflow and corresponding technical components.
and dose of medicine application, and/or the time of the next dental visit). We prototyped a software system based on this principle using Unity[35] (version 2019.1.0f1) with the Magic Leap Lumin SDK[34].
3.1. AR-assisted visualization of pre-operative 3D image: In our
pilot study, a pre-operative 3D image of the tooth is acquired using a
pre-commercial 1310nm swept source OCT (Yoshida Dental Mfg.,
Tokyo, Japan) with a 110 nm bandwidth and a 50 kHz scan rate. The OCT 3D scan is taken from the occlusal view with an imaging range of 10 × 10 × 8 mm³ and an axial imaging resolution of 11 µm. The raw data from the OCT imaging system are first converted into point cloud data and downsampled to reduce the data size without losing useful features. The point intensities are then rescaled to increase the dynamic range. The point cloud is then rendered as a 3D volumetric object using an open-source Unity package for volumetric rendering[36].
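The preprocessing above can be sketched as follows. This is a minimal illustration only: the (x, y, z, intensity) array layout, the stride-based downsampling, and the percentile rescaling are our assumptions, not the exact pipeline used in the prototype.

```python
import numpy as np

def preprocess_oct_points(points, stride=4, low_pct=5, high_pct=99):
    """Downsample an (N, 4) array of (x, y, z, intensity) OCT points
    and rescale intensities to [0, 1] to widen the dynamic range."""
    sub = points[::stride]                       # simple stride-based downsampling
    xyz, inten = sub[:, :3], sub[:, 3]
    lo, hi = np.percentile(inten, [low_pct, high_pct])
    inten = np.clip((inten - lo) / (hi - lo + 1e-9), 0.0, 1.0)  # rescale intensities
    return np.column_stack([xyz, inten])

# toy volume: 1000 random points inside a 10 x 10 x 8 mm box
rng = np.random.default_rng(0)
raw = np.column_stack([rng.uniform([0, 0, 0], [10, 10, 8], (1000, 3)),
                       rng.uniform(0, 4095, 1000)])   # 12-bit-style intensities
cloud = preprocess_oct_points(raw)
```

The stride and percentile values would in practice be tuned so that lesion features survive the downsampling.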
Slicing through three orthogonal directions is implemented to
allow users to inspect inner structures of the tooth. By examining
cross-section slices, dentists can comprehensively inspect the
location and size of dental lesions. More importantly, dentists
can find out how deep the dental decay has progressed into
the dental enamel layer, which would determine whether a drill-
and-fill procedure is needed or medicine treatment should be
prescribed with long-term monitoring. Since the visualization needs
to accommodate different lighting conditions and user preferences,
adjustment of three display parameters is provided. Users can adjust
intensity value to adjust the overall brightness of the volumetric
display. They can also adjust the threshold value for saturation,
hiding areas that have low contrast. The opacity value can be adjusted to determine the transparency of the volume. Appropriate opacity values allow the user to see the surface structure of the tooth as well as inner features such as dental decay or a crack without having to inspect every slice, thus providing an initial and intuitive sense of the existence, position, and structure of these features. Slicing and display adjustment are implemented as sliders on a panel. The controller is used to select and adjust the sliders. The panel and the pre-operative 3D image can be selected by aiming the controller at them, holding down the trigger, and physically translating or rotating the controller. When the panel or the image is selected, users can also rescale it by pressing one side of the touchpad to shrink and the other side to enlarge. See the video in the supplementary material for the interaction demo.
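The three display parameters can be thought of as a simple per-voxel transfer function. The sketch below is our own simplified model of such a mapping, not the actual shader in the Unity volumetric-rendering package: `intensity` scales brightness, `threshold` hides low-contrast voxels, and `opacity` sets transparency.

```python
import numpy as np

def transfer_function(voxels, intensity=1.0, threshold=0.2, opacity=0.5):
    """Map normalized voxel values in [0, 1] to (brightness, alpha) pairs.
    Voxels below `threshold` are hidden entirely (alpha = 0)."""
    brightness = np.clip(voxels * intensity, 0.0, 1.0)
    alpha = np.where(voxels >= threshold, opacity, 0.0)  # hide low-contrast voxels
    return brightness, alpha

vox = np.array([0.05, 0.3, 0.9])          # three illustrative voxel values
b, a = transfer_function(vox, intensity=1.5, threshold=0.2, opacity=0.6)
```

With these settings the faint voxel (0.05) becomes fully transparent, while the brighter voxels keep a constant, user-chosen opacity.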
3.2. AR-assisted guidance for 2D imaging: Guidance for 2D
imaging is necessary not only in that it helps non-dentist personnel
to take 2D images at desired view angles, but also in that it guarantees that the field of view and perspective of the 2D images remain constant during repetitive imaging, so that the series of images can be quantitatively compared. After dentists spot decay on the OCT
3D image, they can designate the desired view angle to take 2D
images so that the decay can be detected by 2D images. In the view
angle selection mode, a virtual cone shape is attached to the end
of controller, corresponding to the view frustum of the endoscope.
Since the NIR SFE has a disc-shaped field of view that grows larger as the target moves further from the probe, a cone can be used to represent the field of view of the SFE. The user can aim the cone at the OCT 3D image and adjust the area covered by the cone, as shown in Fig.4(a). The user then presses the bumper to indicate that the desired view angle is chosen, and a virtual reference model, consisting of the 3D tooth surface model registered with the SFE probe model according to the indicated view angle, is generated for future
guidance. The 3D tooth surface model is acquired by an intra-oral
scanner (3Shape TRIOS 3, 3Shape, Copenhagen, Denmark).
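The cone is a natural stand-in for such a field of view because the imaging disc's radius grows linearly with distance from the probe tip. A small sketch of that geometry follows; the half-angle value here is illustrative, not the SFE's actual optical specification.

```python
import math

def fov_disc_radius(distance_mm, half_angle_deg=25.0):
    """Radius (mm) of the disc-shaped field of view at a given working
    distance, for a cone with the given half-angle."""
    return distance_mm * math.tan(math.radians(half_angle_deg))

# the disc grows linearly with distance, so doubling the working
# distance doubles the imaged radius
r1 = fov_disc_radius(5.0)    # radius at 5 mm working distance
r2 = fov_disc_radius(10.0)   # radius at 10 mm working distance
```

This linear relationship is what lets a single cone, anchored at the virtual probe tip, communicate both the view direction and the imaged area to the user.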
In this pilot study, we strive to keep the system and workflow as simple as possible, so we do not use any fiducial-point-based tracking, which would require an additional tracker. Furthermore, the alignment between the virtual tooth model and the real tooth
Figure 4 (a) Use cone model to select desired angular view for consistent
2D imaging. (b) The tri-color-plane model for probe alignment. (c) The
cylinder model for probe alignment.
Figure 5. Fusion of OCT 3D image and SFE 2D images.
is done manually by the user. Since the virtual tooth model is a 3D surface scan of the same tooth, the user can shrink the model to the same size as the tooth and align them. The next step is to use the reference model for guidance of 2D imaging, where the user needs to align the virtual probe model. The alignment of the SFE probe to the virtual model is made more difficult by the probe's small scale. We therefore designed two virtual SFE probe models, a cylinder model and a tri-color-plane model, as shown in Fig.4(b,c).
Besides manual alignment, there are also two tracking-based methods supported by the hardware on the Magic Leap One. The first is based on the image-tracking API provided by Magic Leap[37]: the front-view camera and depth camera on the headset can be used to track the spatial position and rotation of a flat
image. The target image is printed at a size of 3.4 × 3.2 cm² and attached to the SFE probe. The tracked position and rotation of the target image can then be transformed into the position and rotation of the probe, assuming the offset between the probe and the target image remains rigid and unchanged. The second method is based on the electromagnetic 6-DoF spatial tracking of the control handle[38]. By fixing the SFE probe to the control handle, the tracked position and rotation of the controller can be transformed into the position and rotation of the probe. Once the probe is being tracked, a red cylinder virtual model is shown to indicate the tracked position and rotation. The user then needs to align the red cylinder model (the tracked position and rotation of the real probe) with the virtual probe model (the desired position and rotation for the real probe).
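Both tracking methods reduce to the same rigid-transform chain: the headset tracks a marker (printed image or controller), and a fixed calibrated offset maps the marker pose to the probe pose. A minimal homogeneous-matrix sketch, with illustrative values in place of a real probe-to-marker calibration:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# world <- marker pose reported by the headset's tracker (illustrative)
R_marker = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
T_world_marker = pose(R_marker, [100.0, 50.0, 30.0])

# marker <- probe: fixed rigid offset from calibration (illustrative: 20 mm along -z)
T_marker_probe = pose(np.eye(3), [0.0, 0.0, -20.0])

# world <- probe follows by composition, assuming the offset stays rigid
T_world_probe = T_world_marker @ T_marker_probe
probe_position = T_world_probe[:3, 3]
</```

If the marker-to-probe mount flexes, this rigid-offset assumption breaks, which is one reason tracked alignment can drift relative to manual alignment.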
3.3. Data transfer and image fusion: The 2D SFE images are
transferred from the instrument to the AR headset via a web server.
A polling-based scheme downloads newly acquired images onto the headset over HTTP. The 2D SFE images and the 3D OCT image can then be registered according to the view angles at which the SFE images were taken. As shown in Fig.5, an occlusal-view SFE image
is registered with the OCT 3D image. With the image fusion, users
can interpret and compare images from multiple modalities and also
inspect the condition of decays during monitoring of therapy.
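The polling loop can be factored so the HTTP fetch is injectable, which also makes it testable offline. This is a sketch of the idea only; the endpoint format and the image-listing protocol are assumptions, not the prototype's actual server interface.

```python
def poll_new_images(list_images, seen):
    """One polling step: `list_images()` returns the server's current
    image names (e.g. via an HTTP GET in the real system); return the
    ones not yet seen and update the `seen` set in place."""
    current = list_images()
    new = [name for name in current if name not in seen]
    seen.update(new)
    return new

# offline stand-in for the HTTP listing endpoint
server = ["sfe_001.png", "sfe_002.png"]
seen = set()
first = poll_new_images(lambda: list(server), seen)    # both images are new
server.append("sfe_003.png")
second = poll_new_images(lambda: list(server), seen)   # only the new arrival
```

In the headset app, each name returned by a polling step would trigger a download of that image and its fusion with the 3D volume.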
Figure 6 Experiment setup: (a) 3D coordinate grid for measuring augmentation accuracy between hologram and object. (b) USAF resolution test chart for measuring end-to-end accuracy during probe repositioning. Ten keypoints are selected from square corners marked by red dots. (c)
Dentoform model with an extracted human tooth installed on top. There
are two artificial dental decays on the interproximal surfaces marked by the
two red arrows.
4. Evaluation:
4.1. Experiments: To measure the augmentation quality, we set up a 3D coordinate grid as shown in Fig.6(a). The grid paper has 1 mm fine grids, 5 mm medium grids, and 1 cm large grids. Once the hologram is manually aligned with the object, the observer uses a sharp pointer to localize the position of a chosen point on the hologram and then measures the distance between the corresponding points on the real object and the hologram. Jitter and perceived drift of the hologram are quantified by the translation distance measured on the grid paper.
To measure the alignment performance, we also measure the end-to-end accuracy, quantified by keypoint displacement in the acquired SFE images. We image a USAF resolution test chart, as shown in Fig.6(b), to simplify the accurate extraction of keypoints from the SFE images. Ten keypoints are selected on the test chart. The
user first aligns the SFE probe in front of the test chart in a desired
viewpoint and takes one image. Then, after putting the SFE probe down for a while, the user realigns the SFE probe with or without guidance and takes another SFE image, attempting to replicate
the same viewpoint as in the first image. Three guidance approaches
are used in turn to guide the repositioning of the SFE probe: "without any guidance" means the user aligns the probe only from memory of the desired probe position, without referring to the real-time SFE video; "with AR guidance" means the user aligns the probe using the AR hint of the desired probe position; and "with video guidance" means the user aligns the probe by referring to the real-time SFE video and comparing it with the reference image. The three guidance approaches are used in random
order for ten runs to avoid training bias. The time it takes to realign the probe to the desired position is recorded. The x and y positions of the i-th keypoint are measured in pixels in the reference and repeated images as $(p^{ref}_{x,i}, p^{ref}_{y,i})$ and $(p^{rep}_{x,i}, p^{rep}_{y,i})$. The overall keypoint displacement $D$ of the repeated image is then calculated as

$D = \frac{1}{10} \sum_{i=1}^{10} \sqrt{(p^{rep}_{x,i} - p^{ref}_{x,i})^2 + (p^{rep}_{y,i} - p^{ref}_{y,i})^2}$.

Over the ten runs, the mean and standard deviation of $D$ are computed and used to compare the three guidance approaches, along with the time taken.
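The displacement metric can be computed directly from the two keypoint sets; a short sketch with synthetic coordinates (the keypoints here are made up for illustration):

```python
import numpy as np

def keypoint_displacement(ref, rep):
    """Mean Euclidean displacement (pixels) between matched keypoints
    of the reference and repeated SFE images."""
    ref, rep = np.asarray(ref, float), np.asarray(rep, float)
    return float(np.mean(np.linalg.norm(rep - ref, axis=1)))

# synthetic example: every keypoint shifted by (3, 4) px, so D = 5
ref = np.array([[10, 10], [50, 80], [120, 40], [200, 200]])
rep = ref + np.array([3, 4])
D = keypoint_displacement(ref, rep)
```

Running this over the ten repeated images per guidance condition yields the mean and standard deviation reported in Table 1.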
4.2. User Study: We conducted a user study to gather user feedback on this prototype. We used a dentoform model with an extracted human tooth installed on it, as shown in Fig.6(c). The extracted
human tooth has two artificial dental lesions on its interproximal
surfaces. OCT 3D image, occlusal-view SFE 2D image as well as
3D surface shape scan were acquired from this sample, as shown in
Fig.7.
Six subjects were recruited and asked to conduct the tasks with
the system, to walk through the workflow. Among the six subjects,
three self-reported as dental students or clinicians, while the other
three were general users without specialized dental knowledge. All
users were new to this AR system and the workflow. The protocol
that subjects were asked to perform using the Magic Leap One was
as follows: (i) Examine the 3D OCT image in the headset by slicing
Figure 7 (a) Photo of the extracted human tooth with two artificial
interproximal lesions. (b) One slice of OCT 3D image of the tooth. (c) NIR
occlusal-view SFE image. (d) 3D surface shape scan of the tooth. Note that
in (b) and (c), the blue frame indicates an artificial dental decay deep into
the dentin, the orange frame indicates an artificial dental decay less than
halfway into the enamel, and the green circle indicates a natural dental
decay in the groove under the biting surface.
and adjusting display parameters. (ii) Use the cone to select desired
view angle. (iii) Manually align the virtual model with the real
tooth. (iv) Align the SFE probe with the virtual probe model and
compare the two virtual probe models. Manual alignment, image-tracking-based alignment, and controller-tracking-based alignment were also compared.
After the tasks were completed, the users were asked to fill out
a questionnaire anonymously. See the supplementary material for the questionnaire template.
5. Results and Discussion: In the quantitative measurements, we
measured the augmentation quality between hologram and objects
manually aligned together. We noticed that the augmentation quality is influenced by jitter, perceived drift, and latency, which degrade perception as well as the accuracy and efficiency of the alignment procedure. Jitter is the continuous shaking of the hologram. We measured jitter within a range of 1 mm, which is at the edge of our acceptable range considering that the tooth has a dimension of around 10 mm. Perceived drift means that as the observer moves around a hologram, its perceived position drifts away. We measured perceived drift within a range of 5 mm when the observer takes two orthogonal viewpoints. Perceived drift limits users from observing from multiple viewpoints to align the probe with the hologram, but considering that users cannot freely move around while aligning the probe, perceived drift may be less critical for our prototype. Latency is the time lag of the hologram update when the user moves their head and is determined by the distance of head movement. The measured latency is within 2 seconds when head motion is within the general range needed for performing
the imaging procedure. We also measured the accuracy of image-
tracking-based alignment and controller-tracking-based alignment.
The image-tracking-based alignment suffers from the limited capability of the front-facing camera: image tracking has an error of up to 4 mm and may lose the target when the printed target image moves quickly. Furthermore, when the background environment is cluttered, image tracking may recognize the wrong target. The controller-tracking-based alignment suffers from hologram drift when the electromagnetic sensor is rotated or moved close to conducting surfaces. In summary, the current image-tracking- and controller-tracking-based alignment approaches suffer from instability and accuracy issues and need improvement, either in hardware or in the tracking scheme design. So far, manual alignment appears more robust in terms of accuracy and efficiency.
The end-to-end accuracy and efficiency of manual alignment are quantified by the keypoint displacement between the acquired reference SFE image and the repeated SFE image, each 400 × 400 pixels.
As shown in Table 1, AR guidance offers better repositioning accuracy than no guidance, and faster repositioning than using the real-time SFE video for guidance. By transferring the real-time SFE video to the
AR headset and placing it near the operating field, we may further
improve the accuracy and efficiency of our prototype.
Table 1 Comparison of different imaging guidance approaches.

Imaging guidance approach    Keypoint displacement (px)    Time taken (s)
without any guidance         83 ± 10                       3
with AR guidance             31 ± 11                       10
with video guidance          7 ± 2                         20
In the user study, the average time it took to train each subject to general proficiency with the system (i.e., familiar with the interaction techniques and able to use them to accomplish the workflow) was 15 min, which is quite fast considering their unfamiliarity with AR devices. Afterwards, all subjects were able
to accomplish the protocol. During the process of prototyping
and quantitative evaluation, we identified the following factors that may influence the workflow and therefore included qualitative questions regarding their effects. The factors are 1) latency, which may impede the accuracy and efficiency of aligning the tooth and probe with the virtual models due to their small scale, and 2) the available field of view of the headset. For the Magic Leap One, the width and height of the AR field of view are currently the largest on the market, and the interface design also avoids frame borders to mitigate the sense of a limited field of view. However, when the user is too close to the virtual objects, they are cut off by a clipping plane. This limits users to working from a distance of about 37 cm from the virtual objects, meaning users may have to keep their arms extended away from their body during the alignment tasks.
Five subjects felt the latency was noticeable but did not impede their workflow, while one dental clinician felt the latency of the headset was an impediment. Five subjects reported that the limits of the AR field of view within the headset were unnoticeable, while only one general user thought the clipping plane of the headset caused discomfort or distraction during the workflow.
As for feedback on the workflow, all three dental personnel thought the AR-assisted visualization of OCT is an improvement over a standard screen display, in that it allows flexible movement in space while preserving the same information as the standard display. Two dental clinicians who are familiar with OCT images
were able to localize the position of both artificial interproximal
lesions (decay) and even the natural decay in the groove. The
other subject, a dental student, was not familiar with OCT images and so was unable to do this. They also commented that the rendering speed of the OCT image may be a problem when more 3D scans need to be acquired. All three dental personnel and one general user thought AR-assisted guidance for SFE 2D imaging is easier than no guidance, while the two other general users thought it was more difficult. These two general users commented that the manual alignment of the virtual tooth model and the real tooth is complicated for one major reason: depth perception does not work well when trying to accurately align a virtual object with a real object. This is caused by an inherent issue called occlusion leak, which has also been reported for other AR devices such as the HoloLens[39], and there is ongoing research on solving it[40]. The image tracking
and controller tracking sometimes also suffer from instability.
The choice of manual alignment versus tracking-based alignment
methods seem to be up to personal preference. In terms of choice
of virtual probe model, all three general users prefer the tri-color-
plane model, while three dental personnel have various preference.
Therefore it’s advantageous to have both virtual probe models
available and provide an interface to switch between the two.
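Tracking-based alignment ultimately reduces to estimating a rigid transform between corresponding points on the tracked target and the virtual model. As an illustrative sketch only (not the Magic Leap tracking API and not our implementation), the standard Kabsch/Procrustes solution in NumPy looks like this:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimising
    sum ||R @ src_i + t - dst_i||^2 (Kabsch algorithm).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given at least three non-collinear correspondences (e.g. fiducial corners reported by image tracking), `R` and `t` place the virtual tooth model on the real tooth without any manual manipulation; the instability reported above would enter through noise in the tracked `dst` points.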
This first-of-its-kind prototype showed both clinical potential
and technical limitations in our study, which we believe will be
a useful reference for future research. Firstly, the AR display
can relieve clinicians or general users from constantly switching
views between the patient and the computer screen, and from the
consequent hand-eye coordination problem, while still preserving
the required information in the composite images. Secondly, the
system can assist in adapting multiple dental imaging modalities,
such as safe and informative infrared optical imaging, into clinical
use. Because images from multiple modalities can be integrated
into the system to provide supplementary information, the system
eases clinicians' learning curve with these new imaging modalities
and improves the reliability and sensitivity of dental decay
quantification. Notably, the prototype can easily be generalized to
other dental imaging modalities available in clinics, such as CBCT
and NIR and fluorescence dental cameras; most of these modalities,
along with intra-oral scanners, are already common in dental
clinics. The SFE used in this study is not commercial but is
expected to be a low-cost NIR imaging modality, and the only other
addition, the AR headset, continues to get cheaper. Thus, our
prototype is both generalizable and cost-effective. Lastly, the
proposed solution supports repetitive imaging of dental decay for
therapy monitoring, which is at the core of an ideal tooth decay
management protocol that maintains the integrity of teeth. There
are definite limitations in our prototype, reported above. Some
stem from inherent restrictions of the Magic Leap One hardware,
such as jitter, perceived drift, latency, occlusion leak and limited
FOV; we believe the rapid progress of AR HMD products will help
resolve these. Others stem from our own software and workflow
designs, such as the inaccuracy of manual alignment. See the
supplementary material for a video demo of our system in use.
6. Conclusion: In this work we proposed an AR-assisted
visualization and guidance system for imaging of dental decay.
We introduced a novel workflow, implemented as a software
application on the Magic Leap One AR headset, and evaluated
the multimodal system and workflow through quantitative
measurements as well as a pilot user study, recognizing that the
prototype generalizes to other, more conventional dental imaging
modalities such as 3D CBCT and 2D oral cameras. Thus, with
the addition of an AR headset and a low-cost 2D imaging modality
such as the SFE, our prototype can be adapted for dental clinics
and rural community health centers.
7. Funding and Declaration of Interests: Financial support was
provided by US NSF PFI:BIC 1631146 award and VerAvanti Inc.
Equipment support was provided by NIH/NIDCR R21DE025356
grant and Yoshida Dental Mfg. Corp. A.S. was supported by the
UW Reality Lab, Facebook, Google, and Huawei. Authors have no
personal conflicts of interest outside the University of Washington
(UW). UW receives license and funding from Magic Leap Inc., and
VerAvanti has licensed SFE patents from UW for medical applications.
8. References
[1] Kassebaum, N.J., Bernabé, E., Dahiya, M., ET AL.:
’Global burden of untreated caries: a systematic review and
metaregression’, J Dent Res., 2015, 94(5), pp. 650-658
[2] Kassebaum, N.J., Smith, A.G.C., Bernabé, E., ET
AL.:'Global, Regional, and National Prevalence, Incidence,
and Disability-Adjusted Life Years for Oral Conditions for
195 Countries, 1990-2015: A Systematic Analysis for the
Global Burden of Diseases, Injuries, and Risk Factors',
J Dent Res, 2017, 96(4), pp. 380-387
[3] Featherstone, J.D., Fontana, M., Wolff, M.: ’Novel
Anticaries and Remineralization Agents: Future Research
Needs’, J Dent Res, 2018, 97(2), pp. 125-127
[4] Rozier, R.G., White, B.A., Slade, G.D.:’Global burden of
untreated caries: a systematic review and metaregression’, J
Dent Res, 2017, 81(8), pp.97-106
Healthcare Technology Letters © The Institution of Engineering and Technology 2012
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. To cite the paper please use the DOI provided on the Digital Library page.
[5] Gupta, N., Vujicic, M., Yarbrough, C., ET AL.:’Disparities
in untreated caries among children and adults in the U.S.,
2011-2014’, J Dent Res, 2018, 18(1), pp.30
[6] Shah, N., Bansal, N., Logani, A.:’Recent advances in
imaging technologies in dentistry’, World J Radiol, 2014,
6(10), pp.794-807.
[7] Gardner, G., Xu, Z., Lee, A., ET AL.:’Effects of mHealth
Applications on Pediatric Dentists’ Fluoride Varnish
Protocols’, IADR/AADR/CADR, 2019, 3183697
[8] Savas, S., Kucukyilmaz, E., Celik, E. U.:’Effects of
Remineralization Agents on Artificial Carious Lesions’,
Pediatric Dentistry, 2016, 38(7), pp.511-518.
[9] Fontana, M., Eckert, G.J., Keels, M.A., ET AL. :’Fluoride
Use in Health Care Settings: Association with Children’s
Caries Risk’, Adv Dent Res., 2018, 29(1), pp.24-34.
[10] Karlsson, L.:’Caries detection methods based on changes in
optical properties between healthy and carious tissue’, Int J
Dent, 2010, 270729, pp.1-9.
[11] Javed, F., Romanos, G.E.:’A comprehensive review of
various laser-based systems used in early detection of dental
caries’, Stoma Edu J, 2015, 2(2), pp.106-111.
[12] Zhang, L., Kim, A.S., Ridge, J.S., ET AL. :’Trimodal
detection of early childhood caries using laser light
scanning and fluorescence spectroscopy: clinical prototype’,
J Biomed Opt., 2013, 18 (11), pp.111412.
[13] Zhou, Y., Lee, R., Finkleman, S., ET AL.:’Near-infrared
multispectral endoscopic imaging of deep artificial
interproximal lesions in extracted teeth’, Lasers in Surgery
and Medicine, 2019, 51(5), pp.459-465.
[14] Zhou, Y., Jiang, Y., Kim, A.S.,ET AL. :’ Developing laser-
based therapy monitoring of early caries in pediatric dental
settings’, Proc. SPIE 10044, Lasers in Dentistry XXIII,
2017, pp.100440D.
[15] Breedveld, P., Stassen, H.G., Meijer, D.W., ET AL.
:’Manipulation in laparoscopic surgery: overview of
impeding effects and supporting aids’, J Laparoendosc Adv
Surg Tech A., 1999, 9(6), pp.469-480.
[16] Bosc, R., Fitoussi, A., Hersant, B., ET AL.:’Intraoperative
augmented reality with heads-up displays in maxillofacial
surgery: a systematic review of the literature and
a classification of relevant technologies’, Int J Oral
Maxillofac Surg., 2019, 48(1), pp.132-139.
[17] Jiang, J., Huang, Z., Qian, W., ET AL.:’Registration
Technology of Augmented Reality in Oral Medicine: A
Review’, IEEE Access, 2019, 7, pp.53566-53584.
[18] Chung, S., Fried, D., Staninec, M., ET AL.:’Multispectral
near-IR reflectance and transillumination imaging of teeth’,
Biomed Opt Exp, 2011, 2(10), pp.2804-2814.
[19] Fried, W.A., Fried, D., Chan, K.H., ET AL.:’High contrast
reflectance imaging of simulated lesions on tooth occlusal
surfaces at near-IR wavelengths’, Lasers Surg Med, 2013,
45(8), pp.533-541.
[20] Darling, C.L., Huynh, G., Fried, D.:’ Light scattering
properties of natural and artificially demineralized dental
enamel at 1310nm’, J Biomed Opt, 2006, 11 (3), pp.1-11.
[21] Machoy, M., Seeliger, J., Szyszka-Sommerfeld, L., ET
AL.:’The Use of Optical Coherence Tomography in Dental
Diagnostics: A State-of-the-Art Review’, J Healthc Eng.,
2017, 7560645.
[22] Lee, R., Zhou, Y., Finkleman S., ET AL.:’Near-Infrared
Imaging of Artificial Enamel Caries Lesions with a
Scanning Fiber Endoscope’, Sensors, 2019, 19(6).
[23] Katić, D., ET AL.:'A system for context-aware
intraoperative augmented reality in dental implant
surgery', Int. J. Comput. Assist. Radiol. Surg., 2015,
10(1), pp.101-108
[24] Lin, Y.-K., Yau, H.-T., Wang, I.-C., ET AL.:’A novel dental
implant guided surgery based on integration of surgical
template and augmented reality’, Clin. Implant Dentistry
Rel. Res., 2015, 17(3), pp.543-553.
[25] Song, T., Yang, C., Dianat, O., ET AL.:'Endodontic
guided treatment using augmented reality on a head-
mounted display system', Healthcare Technology Letters,
2018, 5(5), pp.201-207
[26] Ma, L. F., ET AL. :’Augmented reality surgical navigation
with accurate CBCT-patient registration for dental implant
placement’, Med. Biol. Eng. Comput., 2019, 57(1), pp.47-
57
[27] Won, Y.-J., Kang, S.-H.:’Application of augmented reality
for inferior alveolar nerve block anesthesia: A technical
note’, J. Dental Anesthesia Pain Med., 2017, 17(2), pp.129-
134
[28] Bijar, A., Rohan, P. Y., Perrier, P., ET AL. :’Atlas-based auto-
matic generation of subject-specific finite element tongue
meshes’, Ann. Biomed. Eng., 2016, 44(1), pp.16-34
[29] Wang,J., Suenaga, H., Yang, L., ET AL. :’Video see-through
augmented reality for oral and maxillofacial surgery’, Int. J.
Med. Robot. Comput. Assist. Surg., 2017, 13(2), pp.e1754
[30] Aichert, A., Wein, W., Ladikos, A., ET AL. :’Image-based
tracking of the teeth for orthodontic augmented reality’,
Proc. 15th Int. Conf. Med. Image Comput. Comput.-Assist.
Intervent. (MICCAI), 2012, pp.601-608.
[31] Onishi, K., Mizushino, K., Noborio, H., ET AL.:’Haptic
AR dental simulator using Z-buffer for object deformation’,
Universal Access in Human-Computer Interaction. Aging
and Assistive Environments, 2014, pp.342-348.
[32] Wang, D. X., Tong, H., Shi, Y. J., ET AL.:’Interactive haptic
simulation of tooth extraction by a constraint-based haptic
rendering approach’, Proc. IEEE Int. Conf. Robot. Autom.
(ICRA), 2015, pp.26-30.
[33] Farronato, M., Maspero, C., Lanteri V., ET AL.:’Current
state of the art in the use of augmented reality in dentistry:
a systematic review of the literature’, BMC Oral Health,
2019, 19(135), pp.1-15.
[34] Magic Leap One AR headset: https://www.magicleap.com/magic-leap-one,
Accessed: 2019-07-15.
[35] Unity Real-Time Development Platform: https://unity.com/,
Accessed: 2019-07-15.
[36] Unity Package for Volume Rendering:
https://github.com/mattatz/unity-volume-rendering,
Accessed: 2019-08-28.
[37] Magic Leap One AR headset Image Tracking API:
https://creator.magicleap.com/learn/guides/sdk-example-image-tracking,
Accessed: 2019-07-15.
[38] Magic Leap One AR headset Controller Tracking API:
https://creator.magicleap.com/learn/guides/control-6dof,
Accessed: 2019-07-15.
[39] El-Hariri, H., Pandey, P., Hodgson, A. J., ET AL.
:’Augmented reality visualisation for orthopaedic surgical
guidance with pre- and intra-operative multimodal image
data fusion’, Healthcare Technology Letters, 2018, 5(5),
pp.189-193.
[40] Itoh, Y., Hamasaki, T., Sugimoto, M.:’Occlusion Leak
Compensation for Optical See-Through Displays Using a
Single-Layer Transmissive Spatial Light Modulator’, IEEE
Transactions on Visualization and Computer Graphics,
2017, 23(11), pp.2463-2473.
Although the term augmented reality appears increasingly in published studies, the real-time, image-guided (so-called ‘hands-free’ and ‘heads-up’) surgery techniques are often confused with other virtual imaging procedures. A systematic review of the literature was conducted to classify augmented reality applications in the fields of maxillofacial surgery. Publications containing the terms ‘augmented reality’ ‘hybrid reality’ and ‘surgery’ were sought through a search of three medical databases, covering the years 1995–2018. Thirteen publications containing enough usable data to perform a comparative analysis of methods used and results obtained were identified. Five out of 13 described a method based on a hands-free and heads-up augmented reality approach using smart glasses or a headset combined with tracking. Most of the publications reported a minimum error of less than 1 mm between the virtual model and the patient. Augmented reality during surgery may be classified into four categories: heads-up guided surgery (type I) with tracking (Ia) or without tracking (Ib); guided surgery using a semi-transparent screen (type II); guided surgery based on the digital projection of images onto the patient (type III); and guided surgery based on the transfer of digital data to a monitor display (type IV). © 2018 International Association of Oral and Maxillofacial Surgeons