Trends in Food Science & Technology 124 (2022) 182–194
https://doi.org/10.1016/j.tifs.2022.04.021
Received 11 February 2022; Received in revised form 6 April 2022; Accepted 18 April 2022; Available online 21 April 2022
0924-2244/© 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Augmented/mixed reality technologies for food: A review
Jackey J.K. Chai a, Carol O'Sullivan a, Aoife A. Gowen b, Brendan Rooney c, Jun-Li Xu b,*

a School of Computer Science and Statistics, Trinity College Dublin, College Green, Dublin 1, Ireland
b School of Biosystems and Food Engineering, University College Dublin, Belfield, Dublin 4, Ireland
c School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland

* Corresponding author. E-mail address: junli.xu@ucd.ie (J.-L. Xu).
ARTICLE INFO
Keywords:
Augmented reality
Mixed reality
Food industry
Nutrition
Sensory science
ABSTRACT
Background: The topic of food is broad and global, thereby representing an influential sector of the economy. Motivated by the advent of Industry 4.0, massive potential exists to implement cutting-edge technologies in the food industry. Recent years have seen a growing interest towards the applications of augmented/mixed reality (AR/MR) in the food sector.
Scope and approach: An extensive search of online journals focusing on Scopus was conducted using terms including 'augmented reality', 'mixed reality' and 'food' in the search fields of Title, Abstract, and Keywords. Full paper reading was implemented and ineligible articles (i.e., non-English-language, review articles, not peer-reviewed and without full paper) were removed.
Key findings and conclusions: Our systematic search resulted in 111 eligible articles, eight of which related to MR technology. There is an overall increasing trend in the number of publications appearing annually since the first relevant publication in 2010. Analysing these publications demonstrates the multidisciplinary nature of this technology, which is closely linked to machine learning, computer vision, the Internet of Things (IoT), and artificial intelligence. Our findings also revealed that AR/MR technology is mainly applied in the following areas: dietary assessment, food nutrition and traceability, food sensory science, retail food chain applications, food education and learning, and precision farming. Furthermore, we highlight the limitations and analytical challenges that hinder the application of AR/MR to food-related research, such as the lack of reliable wireless connection and the difficulty in recognizing food objects in a complex environment, while also describing future research needs and directions.
1. Introduction
Our current food supply chain processes from 'farm to fork' have evolved over centuries into a global system of immense size and complexity (Floros et al., 2010). This vast system involves farming, storage, food processing and manufacturing, distribution, retailing, food service, food monitoring, and consumption. Today, we are able to provide a variety of food products that are largely safe, delicious, full of nutrients, affordable, abundant, and more readily accessible than ever before. This would not be possible without modern food science and technology, which integrates knowledge from chemistry, physics, biology, microbiology, materials science, nutrition, toxicology, computer science, and many more disciplines to solve challenging problems, such as monitoring and controlling food safety, quality and manufacturing processes in a sustainable way.
Due to this immense size and complexity, massive potential exists to implement new technologies in the food sector (Suprem et al., 2013; Antonucci et al., 2019). The food-tech industry is expected to grow exponentially in the coming years, and the intersection between augmented reality (AR) technology and food is already becoming noticeable. In AR, computer-generated virtual information is superimposed on the physical world (Casari et al., 2021). Although originally used in the gaming and entertainment industries, AR technology is now experiencing significant growth in wider fields such as healthcare, military operations, manufacturing, maintenance, education, marketing, and several others (Parida et al., 2021). Since AR opens up the possibility to supplement the real world with rich virtual information, the food industry is now endeavouring to capitalise on this technique for competitive gain (Crofton et al., 2019). The benefits of this technology to the food industry are regularly published in academic research. For example, the visualization capability of AR surpasses human limitations and can enable humans to better monitor the nutrition of purchased food products.
In this respect, Naritomi and Yanai (2020) proposed an innovative system to estimate the calorie content of a meal using AR glasses, thereby enabling the user to make better food choices and form healthy eating habits. To support the user in a virtual and interactive way, AR technology can also present real-time health warnings about potential food intolerances, allergens, or other critical ingredients (Todorović et al., 2019). In addition, it offers the food industry new opportunities for exploring the sensory perceptions of food, which is the key element in the design and development of new food products (Wang et al., 2021). To this end, Ueda et al. (2020) studied the relationship between the luminance distribution of food images and the taste/flavour experience using an AR technique. The authors found that manipulating the luminance distribution influenced the taste/flavour of the food (i.e., cake and tomato ketchup). Further studies (Nakano et al., 2019; Nakano et al., 2021) manipulated gustation by changing the texture, colour, and appearance of food using AR technology. Manipulating food dynamically is advantageous for sensory science because the food becomes more realistic and appealing. Previous research has therefore demonstrated that food science and AR represent a potentially useful combination.
Mixed reality (MR) fuses the virtual and physical environments, thus enabling virtual objects to interact with the physical world (Holz et al., 2011). The motivation behind developing MR systems, in general, is not only to 'see' the virtual/digital overlay or object, as in AR systems, but also to physically interact with and/or manipulate it (Maas & Hughes, 2020). It is this capability of interaction between virtual objects and the real world that distinguishes MR from AR. However, many researchers/scientists treat MR as a synonym for AR; indeed, we are still far from a shared understanding of the term MR (Lungu et al., 2021). Despite this, research and development on MR have progressed and intensified over the past decade, with applications emerging in the food area. In one example study, researchers applied MR to create an immersive eating context within which users could co-eat with their remote friends, thereby improving their eating experience with more social engagement (Korsgaard et al., 2020).
The benets of AR/MR to food industry stakeholders (e.g., food
suppliers, retail/food service) have received increasing attention and
recognition. Although development costs are still high, the integration
of AR/MR in food chains is seemingly compelling. It is believed that the
use of AR/MR can revolutionize food-related research because of the
ability to dynamically increase access to knowledge/experiences that
might not be available from existing technologies. This review aims to
investigate the potential of AR/MR in the food sector, focusing mainly
on the latest advances and applications. After an overview of the theo-
retical foundations of AR/MR technology, we will conduct a compre-
hensive review of published research related to the application of AR/
MR in the food sector. The latest developments from the reviewed
literature will be discussed in detail to advance knowledge and under-
standing of the technology. Finally, we will summarize the limitations of
current practice and offer future perspectives to expand the use of AR/
MR within the food sector.
2. Overview of AR/MR technologies
The denition of AR/MR is widely accepted based on a spectrum that
spans reality and virtuality (Milgram & Kishino, 1994), as seen in Fig. 1.
Reality is comprised of physical objects existing in the real world
(Fig. 1A), while Virtual Reality (VR) comprises a virtual environment
with no relation to the users real surrounding environment (Fig. 1F),
such as a completely computer-generated world or footage captured by a
360camera elsewhere. Augmented Reality (AR) refers to extra infor-
mation and/or virtual objects overlaid onto a real scene (Fig. 1B and C),
while Augmented Virtuality (AV) refers to enhancing a
computer-generated scene with objects from the surrounding environ-
ment (Fig. 1D and E). Both AR and AV sit within the spectrum of MR.
Currently, there is no unied description of the term Mixed Reality
(MR), which confuses many researchers and leads to situations where
the names MR and AR are used interchangeably (Lungu et al., 2021). On
the other hand, VR exists outside the spectrum of MR, at the opposite
end from AR. VR is dened as the complete immersion of a user ‘in a
virtual world without seeing the real world. That is, VR represents a
type of human-computer interaction where one usually interacts with a
computer-generated environment that is presented through a stereo-
scopic head-mounted display (Crofton et al., 2019).
The earliest technological development in AR dates back several decades, to Ivan Sutherland's work on an AR headset system in 1968 (Rejeb et al., 2021). In the early 1990s, Boeing researchers developed the first industrial application of AR, for assembling electrical cables in an aircraft (Thomas & David, 1992). Since then, AR development has been largely confined to research laboratories because of technical challenges in hardware and software (Crofton et al., 2019).

Fig. 1. Virtuality continuum. Adapted from Milgram and Kishino (1994).
2.1. Visual rendering
For AR applications, the starting point is the creation of virtual content to be superimposed onto the real world. A range of software development kits (SDKs), application programming interfaces (APIs), game engines (e.g., Unity, Unreal), and other commercially available software packages can be used to realize AR functions. To visualize 2D data, libraries such as CrowdOptic, OpenCV, and VitalVideo are available. For 3D data, commonly used libraries include VTK, CTK, ITK, IGSTK, and OpenCV (Lungu et al., 2021).
Display and visualization hardware technologies for presenting AR data to the end-user have been developed and have received timely attention. The display and visualization of AR information are realized on two platforms: (1) AR-enabled head-mounted displays (HMDs), i.e., devices worn on the head or as part of a helmet; and (2) handheld displays with cameras, such as smartphones and tablets. HMDs provide the user with an egocentric viewpoint, while allowing for freedom of head movement and, potentially, full-body mobility. Handheld displays are extremely popular with consumers, yet they provide an arguably lower-quality experience. AR applications on handheld devices are expected to continue to grow in the near future with the continuous improvement of SDKs such as ARCore and ARKit for Android and iOS devices respectively, as well as new sensors such as LiDAR being integrated into newer devices, allowing them to understand the real environment even better.
Generally speaking, display types can be broadly classified into three categories: Video See-Through (VST)/Digital Pass-Through, Optical See-Through (OST), and projection-based (Badiali et al., 2020). In the VST configuration (see Fig. 2A), a video camera (normally a stereoscopic camera) captures the physical scene, the virtually generated image is overlaid on the captured image with the help of a Holographic Processing Unit (HPU, as named by Microsoft), and the combined image is shown to the user via standard digital display(s). VST displays can be found in HMDs, handheld devices, and standard monitors. In the case of OST displays (see Fig. 2B), the virtually generated image is optically combined with the real view of the world, thus allowing the user to view the physical world directly, superimposed with virtual objects. This is achieved with some kind of optical combiner, such as a prism or the more advanced waveguide technology used in most modern OST HMDs. OST mode preserves the direct view of the world, thereby allowing a better sense of presence, and there is no perspective conversion in viewpoint and field of view, as with VST systems. Today, OST HMDs are considered the cutting-edge technology for AR research, and a series of commercial headsets are now available (e.g., HoloLens 2, Magic Leap One, Avegant, Meta Two, Lumus DK Vision) following the success of the Microsoft HoloLens 1 (Cutolo et al., 2020).

Fig. 2. The configuration of Video See-Through (A) and Optical See-Through (B) as applied in a wearable HMD.
For comparison, a VST AR display system uses a camera to capture the real scene and superimposes the virtual data onto the captured frames, which is much easier to implement than overlaying the virtual image directly onto the real world, as in OST systems. On the other hand, reality preservation allows the user to maintain an unchanged and almost natural visual experience of the physical world while interacting with objects in the peripersonal space, which gives OST an advantage over VST systems (Cutolo et al., 2019). Projection-based display, as the name suggests, works by projecting the virtual content on top of the real scene, where it can be viewed directly by multiple users; i.e., virtually generated images can be projected onto physical objects with the aid of light projectors, or embedded directly in the environment with flat panel displays (Raskar et al., 1999). The advantage of projection-based MR visualization is that it is suitable for large operative field overlays, with a greater field of view than other displays. However, projection-based MR display lacks depth perception and therefore often employs 3D perspective techniques when designing the virtual content in order to create the illusion of depth.
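To make the VST principle concrete, the following minimal sketch (assuming Python with OpenCV and a standard webcam; the calorie label is a hypothetical placeholder for rendered virtual content, not taken from any reviewed system) captures real frames and alpha-blends a virtual layer on top of them:

```python
import cv2
import numpy as np

# Minimal video-see-through (VST) compositing loop: capture a
# real-world frame, then alpha-blend a virtual overlay on top.
cap = cv2.VideoCapture(0)  # default webcam

def render_virtual_layer(h, w):
    """Stand-in for a real renderer: returns an RGBA image whose
    alpha channel marks where virtual content exists."""
    layer = np.zeros((h, w, 4), dtype=np.uint8)
    cv2.putText(layer, "Calories: 250 kcal", (30, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0, 255), 2)
    return layer

while True:
    ok, frame = cap.read()
    if not ok:
        break
    overlay = render_virtual_layer(*frame.shape[:2])
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
    # Composite: virtual pixels replace real pixels where alpha > 0.
    frame = (frame * (1 - alpha) + overlay[:, :, :3] * alpha).astype(np.uint8)
    cv2.imshow("VST preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

An OST system, by contrast, never touches the camera image: the combiner optics perform this blend physically, which is why OST is harder to engineer but preserves the direct view of the world.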
2.2. Tracking
Marker-based and marker-less tracking are both used for tracking purposes in AR applications. The marker-based solution involves the use of an artificial image (e.g., a barcode) to trigger an augmentation (Maas & Hughes, 2020). The affordable fiducial marker, i.e., an object placed in the field of view of an imaging system for use as a point of reference, is widely used for positional tracking. Because of its distinct shape, the marker can be easily recognized, thereby reducing post-processing time. Although the marker-based solution is well developed and easy to implement, the fact that a physical marker must be attached to the scene might hinder its potential applications within the food industry. As technology keeps evolving, marker-less tracking solutions are emerging and developing. Instead of using a physical marker, marker-less AR exploits device features such as GPS, cameras, and accelerometers to recognise location or position (Crofton et al., 2019). Ease of use is the major advantage of marker-less AR, since there is no need to incorporate a marker. However, it usually imposes a heavier computational burden than marker-based tracking, since it requires computer vision algorithms to extract useful features for tracking from the captured camera frames (Lungu et al., 2021).
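As one concrete and widely used instance of marker-based tracking (chosen here for illustration, not tied to any specific study reviewed above), OpenCV's ArUco fiducial markers can be detected in each frame to anchor virtual content; a minimal Python sketch, assuming OpenCV 4.7 or later and a hypothetical input image:

```python
import cv2

# Detect ArUco fiducial markers in a frame; the returned corner
# coordinates are the reference points on which virtual content
# is anchored in marker-based AR.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

frame = cv2.imread("scene.jpg")  # hypothetical captured frame
corners, ids, _rejected = detector.detectMarkers(frame)

if ids is not None:
    # Visualize detections; with camera calibration, these corners
    # also yield the marker's 6-DoF pose for registration.
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    print("Detected marker IDs:", ids.ravel().tolist())
```

The distinct, high-contrast pattern is what makes detection cheap relative to the feature-extraction pipelines that marker-less tracking requires.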
2.3. Registration
Registration, which refers to the accurate alignment of the virtual content with the real scene, is another fundamental procedure in AR applications. The illusion that the virtual content coexists with the real scene is greatly compromised without proper registration. A typical registration comprises four procedures (Behzadan et al., 2015): (1) positioning the user's viewing volume in the world coordinate system; (2) positioning virtual content in the world coordinate system; (3) deciding the shape of the viewing volume; and (4) transforming virtual content from the world coordinate system to the eye/camera coordinate system. Optimal AR visualization requires the real and virtual content to be seamlessly fused in all dimensions. However, when the relative depth of the real and virtual objects is not well handled, incorrect occlusion occurs (Lepetit & Berger, 2000): instead of blending with real objects in the scene, the graphical entities appear to 'float' in front of the physical background. This is more likely to occur when the relative depth between the virtual and real objects changes dynamically over time.
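Step (4) above is a rigid coordinate transform. As a minimal illustration (a NumPy sketch assuming the camera pose, rotation R and position t, is already known from the tracking stage; the numbers are invented), a virtual point is mapped from world to camera coordinates as p_cam = R(p_world − t):

```python
import numpy as np

def world_to_camera(p_world, R, t):
    """Map a world-space point into camera/eye coordinates.
    R: 3x3 world-to-camera rotation; t: camera position in world
    coordinates (both assumed to come from the tracking stage)."""
    return R @ (np.asarray(p_world, float) - np.asarray(t, float))

# Invented example: camera sits 2 m along the world z-axis,
# axis-aligned; a virtual object is placed at x = 0.5 m.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(world_to_camera([0.5, 0.0, 0.0], R, t))  # -> [ 0.5  0.  -2. ]
```

Correct occlusion then reduces to comparing such camera-space depths of virtual content against the depth of the real scene at the same pixels.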
3. Literature review methodology
The recent advancements in AR/MR technology have produced a broad array of literature from different fields, with food-related applications as one example. Although the number of publications on the use of AR/MR in the food industry has kept increasing in the past few years, the question of which major food areas can benefit from AR/MR technology remains to be answered. Food researchers and practitioners would also be intrigued to understand the limitations of current practice and possible future development directions. The following literature review helps to fill this research gap and provide some answers.
3.1. Search strategy and exclusion criteria
To investigate state-of-the-art AR research applications in the food industry, we conducted an extensive search of online journals focusing on Scopus (www.scopus.com) results. The search was performed using the two terms 'augmented reality' and 'food' in the search fields of Title, Abstract, and Keywords. For mixed reality, the search terms applied were 'mixed reality' and 'food'. The search was conducted in July 2021 without applying a time filter (i.e., there was no specific time range). The inclusion criterion was simply publications that applied AR/MR technology to food-related research. We further adopted the following exclusion criteria: (1) publications in languages other than English; (2) review articles; (3) literature that was not peer-reviewed; (4) abstracts without a full paper. Team-based consensus coding was used to address borderline cases. Mendeley (www.mendeley.com) was used as the reference management software.
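In Scopus advanced-search syntax, the two searches described above would take roughly the following form (a reconstruction for illustration only; the exact query strings used are not reported in the text):

```
TITLE-ABS-KEY ( "augmented reality" AND food )
TITLE-ABS-KEY ( "mixed reality" AND food )
```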
3.2. Research sample and analysis strategy
The initial search for AR applications led to 246 documents. After applying the exclusion criteria elaborated in Section 3.1 through full-paper reading, 103 publications were retained. A qualitative narrative synthesis was used to create an overview of the selected papers, with detailed discussions presented in the subsequent sections (Sections 4 and 5).
In addition, a more detailed analysis was carried out on the subset of results focused on MR applications. The search results demonstrated that there is very little scholarly research focused on MR applications alone. Initially, the literature search led to 31 documents. Some unqualified articles were eliminated based on the same exclusion criteria. Since both AR and MR involve blending virtual data into the physical world, researchers were sometimes unable to discriminate between AR and MR; duplicate studies that already appeared in the AR selection were therefore also removed. Finally, only eight documents were retained for the subset of MR applications; these are fully discussed in Section 6.
4. Results from AR literature review
4.1. Analysis of terms/phrases
A word cloud strategy was implemented to capture the dominant terms/phrases from the selected research articles. This is a visualization method that highlights the most frequent words appearing in a given body of text by making the size of each word proportional to its frequency; all the words are then arranged in a cluster or cloud. We first manually sorted all words (from either titles or keywords) in descending order of frequency and then input this list to Wordclouds.com (developed by Zygomatic) to generate the cloud. Results obtained from titles and keywords are displayed in Fig. 3A and B, respectively. As can be seen, the most frequently appearing words in the titles of the selected publications include augmented, reality, food, technology, applications, estimation, system, and mobile. Results from keywords are more intuitive because phrases were used instead of single words. Keywords such as augmented reality, mobile application, virtual reality, human-food interaction, human-computer interaction, health, Internet of Things, and food predominate. A closer inspection implies that contemporary AR research topics target the following areas: nutrition, food calorie estimation, precision farming, gustatory display, education, food consumption, portion estimation, mobile health, and consumer behaviours, suggesting that AR applications span a variety of subjects such as food nutrition, precision farming, education and training, and sensory science. In Section 5 we further elaborate on the latest AR applications in different food-related fields. Another important aspect to notice is that AR is closely linked to other new analytical/technological tools including machine learning, image processing, image recognition, the Internet of Things, 3D visualization, object detection, computer graphics, deep learning, cross-modal integration, and artificial intelligence. Indeed, AR research is multidisciplinary by nature, integrating knowledge from diverse disciplines. These results show that the word cloud helps to grasp the key research topics from the many articles selected for discussion.
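The frequency ranking behind such a cloud is straightforward to reproduce; a minimal Python sketch (the two titles below are invented stand-ins for the 103 selected articles):

```python
from collections import Counter
import re

# Count word occurrences across titles (or keyword lists) and rank
# them in descending order of frequency, as done before generating
# the word cloud.
titles = [
    "Augmented reality food calorie estimation on mobile devices",
    "A mobile augmented reality system for food traceability",
]

words = Counter()
for title in titles:
    words.update(re.findall(r"[a-z]+", title.lower()))

for word, freq in words.most_common(10):
    print(f"{freq:3d}  {word}")
```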
Fig. 3C displays the co-occurrence network visualization of content based on the keywords of the selected articles. In this figure, each circle represents a keyword, with its size proportional to its frequency of occurrence in publications. Terms that co-occur frequently are positioned close to each other. Nine significant clusters were identified, each shown in a single colour. The primary cluster (green) is associated with augmented reality, image processing, framework, android and industry 4.0, suggesting these technologies are closely linked to AR. A red cluster representing food calorie estimation, food image recognition, machine learning, and human-centered computing, situated a great distance from the majority, is also observable.
4.2. Analysis of authors
Using the same method as introduced in Bouzembrak et al. (2019), Fig. 3D was created to visualize the authors' co-occurrence network across all papers using VOSviewer. Each circle represents one author, and its size relates to the number of publications; authors with fewer than three publications have been filtered out. Based on bibliographic coupling, the distance between authors indicates how closely they are related to each other. As seen, most AR developments for food-related research have been conducted in Japan, followed by Austria and the UK. The network visualization also indicates a close link between researchers from the University of Tokyo (green cluster), the Nara Institute of Science and Technology (blue cluster) and The University of Electro-Communications (yellow cluster) in Japan.

Fig. 3. Word clouds computed from titles (A) and keywords (B) of the selected 103 articles. Network visualization of the content (C) and the authors (D). Colour codes for (D): University of Tokyo, Japan (green); Nara Institute of Science and Technology, Japan (blue); The University of Electro-Communications, Japan (yellow); Salzburg University of Applied Sciences, Austria (red); University of Oxford, United Kingdom (purple). (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
4.3. Analysis of publications by year
Fig. 4A shows the distribution of the published studies by year. Although AR is a well-established technology, the first application of AR in food-related research appeared in 2010. We divide the entire time range into three clusters: 2010–2012, 2013–2017, and 2018–2020. With slight fluctuations, the number of studies per year is similar within each cluster, and an overall increasing trend in the number of publications per year can be seen when comparing the clusters rather than individual years. From 2017 onwards, there is a significant rise in the annual number of publications, with a peak in 2019. This suggests that 2017 marked the start of substantial interest among food researchers, scholars, and practitioners, leading to more research endeavours and engagement in the following years. As Rejeb et al. (2021) pointed out, this growing attention is related to the shift towards Industry 4.0, which refers to a new phase of the industrial revolution that focuses heavily on interconnectivity, automation, machine learning, and real-time data. In this sense, the new era of the food industry, aided by Industry 4.0, is expected to advance the applications of AR technology.
4.4. Analysis of application areas and devices used
From the previous section, we already know that the existing literature spans a variety of applications in the food sector. To explore the distribution of different application areas, this work classifies the selected 103 articles into 10 application categories, which are fully discussed in the following section. We also found that the reported devices vary widely from study to study, including smartphones, tablets, computers, projectors, HoloLens, Google Glass, magic mirrors and various LCD displays. In this work, we classify all devices into three categories: HMD, Handheld, and Stationary. As defined in Section 2.1, HMDs include all devices worn on the head or as part of a helmet, i.e., HoloLens and other AR glasses, while handheld devices include smartphones and tablets. The remaining devices fall into the Stationary category because they remain immobile during use. Fig. 4B illustrates the distribution of the selected publications in terms of application areas and the devices used. As seen, most reported studies (approximately 50%) applied handheld devices, probably because they are convenient to use and affordable. HMDs are used in 26 of the 103 selected articles. Stationary devices are in the minority compared to HMDs and handhelds, accounting for around 12.6% of all studies, while the remaining 12.6% of studies do not specify a device. With respect to application areas, the greatest concentration of AR development so far has been in domains such as dietary assessment, food nutrition visualization and food traceability, food sensory science (augmenting sensory perception and changing eating behaviour), retail food chain applications, and food education and learning. Handheld devices dominate in several main application areas, i.e., dietary assessment, food nutrition visualization and food traceability, retail food chain applications, food-related training, and food education and learning. This is partly because these applications are designed for a large number of individuals, so ease of access, convenience of use, and cost are the primary factors to consider. In contrast, HMDs are mostly used in food sensory science and precision farming. Although more expensive, HMDs offer a truly immersive experience and can be used hands-free.

Fig. 4. Year-wise distribution of the number of published articles (A), and the distribution of studies by application areas and the devices used (B). The inset pie chart displays the proportion of each device type used in the selected 103 articles.
5. Discussion of current AR applications
This review groups the observed applications into seven themes: (1) applications in dietary assessment, food nutrition and traceability; (2) applications in food sensory science; (3) retail food chain applications; (4) enhancing the cooking experience; (5) food-related training; (6) food education and learning; and (7) food production and precision farming. A major theme to emerge from the findings is the use of AR for applications in dietary assessment, food nutrition and traceability. In the following sections, an overview of each theme is presented.
5.1. Applications in dietary assessment, food nutrition and traceability
5.1.1. Dietary assessment
Dietary assessment is critically important for determining the nutritional status of individuals and underpinning the understanding of diet-disease relationships. Nevertheless, accurate measurement of dietary intake has long been recognized as a research challenge because it relies on self-reporting methods that are subjective and impossible to validate. To improve accuracy, 2D food photographs are often used in dietary assessment methods, helping individuals describe the types and amounts of food they have eaten. As a new frontier in the field of food nutrition, AR can track the features embedded within an image (Tan et al., 2018) and overlay virtual information, normally a 3D model, onto the physical world. In this sense, instead of using printed or digital food photographs, 3D models rendered in AR enhance the user's ability to visualize the food concerned, including its dimensions and volume (Mokmin, 2020; Yang et al., 2019). Our review found an early work (Domhardt et al., 2015) that developed a mobile AR app to estimate carbohydrate intake, which is beneficial for patients with diabetes. A reference marker had to be placed in front of the food, and both were captured by the mobile camera to generate a virtual mesh with the shape of the real food. The user was asked to redraw the shape of the mesh by touching the screen so that it fitted the exact volume of the real food. Based on a selected food type and the 3D mesh, an estimate of carbohydrate content could be obtained. Results from eight patients showed that the error was reduced by at least 6 g of carbohydrate in 44% of the estimations. A more recent study (Lam et al., 2020) estimated food portion size using 3D food models. A set of 3D food models with different portion sizes was first generated; users were then required to overlay the 3D food models on the real food displayed in front of them and select the model (portion size) best matching the actual size of the real food, from which the portion size was determined.
5.1.2. Food nutrition visualization
Due to a lack of nutritional knowledge during the selection or purchase of food products, consumers may find it difficult to make healthier eating choices. Displaying and visualizing nutrition information prior to purchase helps consumers make better choices and form healthy eating habits (Barreiro-Hurlé et al., 2010). Hence, food nutrition monitoring and visualization could be a promising approach to slowing the increasing prevalence of obesity (Jiang et al., 2018). A wearable, scalable, and user-friendly system for spotlighting food nutrition would be advantageous and suitable for daily use in real life, such as at a grocery shop, and the advancement of AR technology fulfils the needs of such applications. A wearable AR-based food nutrition monitoring and visualization system that could be applied in real-life scenarios is shown in Fig. 5. It comprises three main components: object recognition, nutrition information retrieval, and visualization, integrating knowledge from computer vision, machine learning and AR techniques. First, the food image is captured and the food is detected based on the extracted features; object recognition aims to obtain information on the food type. It is also crucial to measure the actual size of the food product in order to predict its nutritional value, and the previous section elaborated several AR-based strategies for portion estimation. The system then fetches the relevant nutritional information on the targeted food from an online database. Finally, the system tracks the specific food and displays the nutrient facts conveniently beside the real-world object.

Fig. 5. Schematic of an AR/MR application for displaying nutrition information.
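The three components shown in Fig. 5 connect as follows; a minimal sketch in which the recognizer and the nutrition database are invented stand-ins (not from any cited system), included only to show how the stages compose:

```python
# Sketch of the three-stage pipeline: (1) recognize the food and its
# portion, (2) fetch nutrition facts, (3) anchor the facts beside the
# tracked object. All names and values here are hypothetical.
NUTRITION_DB = {  # per 100 g, illustrative values only
    "apple": {"kcal": 52, "sugar_g": 10.4},
    "banana": {"kcal": 89, "sugar_g": 12.2},
}

def recognize_food(frame):
    """Stage 1 stand-in: a real system would run a detector/classifier
    here and use AR depth/size cues to estimate the portion."""
    return {"kind": "apple", "portion_g": 150, "bbox": (40, 40, 200, 200)}

def fetch_nutrition(kind, portion_g):
    """Stage 2: scale database facts to the estimated portion."""
    per100 = NUTRITION_DB[kind]
    return {k: round(v * portion_g / 100, 1) for k, v in per100.items()}

def process_frame(frame):
    """Stage 3 would render `facts` next to `bbox` in the AR view;
    here we just print them."""
    food = recognize_food(frame)
    facts = fetch_nutrition(food["kind"], food["portion_g"])
    print(f"{food['kind']} ({food['portion_g']} g): {facts}")

process_frame(frame=None)  # frame unused by the stand-in recognizer
```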
Our review found several studies that explored the potential of AR technology in this regard. For example, Naritomi and Yanai (2020) proposed 'CalorieCaptorGlass', which estimated the calorie content of food based on its actual size using image recognition and AR/MR glasses (i.e., HoloLens). In detail, they first used a deep convolutional neural network to detect, classify, and segment the food, generating a mask. Afterwards, the actual size/area of the food was measured using the camera projection matrix, from which the calories were estimated using a regression equation relating food area to calories. In addition, Plecher et al. (2020) developed an iOS application called 'TrackSugAR' to visualize sugar amounts in foods using AR and to track daily sugar consumption. Detailed information about the meals and food was required as input for tracking. They created two methods for visualizing sugar: the first determined the sugar content of a food by scanning the label and displaying animated virtual sugar cubes 'flying' out of the product, while the second adopted a gamified approach in which the user was asked to toss virtual sugar cubes into the food product. The elements of pleasure and playfulness brought by AR were expected not only to enhance the eating experience but also to decrease sugar consumption.
5.1.3. Food traceability
Recent changes in business requirements, health regulations and technology have triggered the need for more advanced and intelligent food packaging functions that can detect, sense, record, communicate and trace, facilitating the extension of shelf life and the enhancement of food safety and quality, while also providing necessary information to customers. Food traceability is of great importance in protecting consumers, exposing relevant information and precisely identifying the provenance of a food product. Mobile-based AR is becoming a popular technical enabler for food packaging, overlaying ubiquitous digital information (such as nutrition facts) onto the physical world. One application enables customers to use a mobile phone camera to obtain a reality extension containing detailed information on the origin and content of the food product (Todorović et al., 2019). In another example, arising from growing concerns about verifying the halal status of food products, Arshad et al. (2017) employed optical character recognition and AR technologies to help users identify the halal status of food products (e.g., meat). The recruited users found this application easy to use and useful in reassuring consumers of the integrity of halal status.
5.2. Applications in food sensory science
5.2.1. Augmenting sensory perception
Flavours perceived as tastes are not determined exclusively by pure gustatory stimuli; instead, they are influenced by the integration and interaction of multiple senses, including hearing, vision (colour, opacity, texture, and shape), olfaction, gustation, and tactile perception (Prescott, 2015; Ranasinghe et al., 2019). Consumers are often exposed to a multisensory environment when they consume food, which makes it important to identify the factors that influence their flavour experience (Wang et al., 2019). The results are essential for those working on understanding human flavour perception, as well as those working on the design of healthier food products. To promote research in sensory science, Ueda et al. (2020) investigated how the luminance distribution of food images influences the perceived visual texture and the taste/flavour experience using an AR technique. Dynamic image processing was applied to modify the luminance distribution of the food image in real time and present it to participants by means of an HMD to simulate actual eating situations. Assessing the effects on Baumkuchen (a German baked cake) and tomato ketchup, the results demonstrated that manipulating the luminance distribution affected not only the expected taste/flavour of the food (e.g., expected moistness, wateriness, and deliciousness), but also the perceived taste properties.
It is believed that manipulating food perception through cross-modal illusions on taste sensations is beneficial for regaining appetite, leading to satisfaction during eating. The visual appearance of food and drink is linked to freshness, thereby triggering the desire to eat (Fujimoto, 2018). Indeed, prior to food consumption, the visual appearance of the food helps to set expectations concerning taste, flavour, and palatability, ultimately exerting a significant influence over its acceptance and thereafter its consumption (Piqueras-Fiszman & Spence, 2015). AR can be used to modify the appearance of food while keeping the food itself intact. To this end, AR has been proposed to manipulate gustation using a vision-induced approach involving changes to the texture, colour, and appearance of the food. Researchers (Nakano et al., 2019; Nakano et al., 2021) introduced a gustatory manipulation interface using generative adversarial network (GAN)-based real-time image-to-image translation. An RGB image of the original food in a bowl is first acquired by a front camera of the video see-through HMD. This image is then sent to the server module, which translates it into another food image and returns it to the client module, enabling the processed image to be overlaid onto the actual food via the HMD. Since the user eats while viewing the changed appearance of the food through AR, a cross-modal effect is produced through visual modulation, changing the user's gustatory experience. For example, plain steamed rice (the actual image) is translated into a processed image of rice and curry, presenting the user with the sensation of eating curry and rice. According to the authors, this system flexibly supported multiple types of target food, enabling its appearance to change dynamically and interactively in correspondence with the deformation of the original food. More importantly, the user's hand and chopsticks were not occluded by the food, making the scene realistic.
In addition, the eating context or environment can influence consumers' food liking and taste experiences, which has interested researchers for decades (Q. Wang et al., 2015). Immersive technologies such as VR and AR have been recognized as promising methodological tools in the field of food consumption studies from different perspectives. AR enables an environment or any object to be augmented or modified by adding digital sensations to real environments. An earlier study (Korsgaard et al., 2017a) allowed users to eat real food in a virtual park environment through an HMD. Two cameras mounted on the HMD allowed for video-based stereoscopic see-through when the participant's head orientation pointed toward the food, while the virtual park environment was displayed when the participant turned elsewhere. More recently, a virtual eating environment was created in a room in which seamless 3D immersions could be projected on the walls, ceiling, and floor to surround the person inside and provide illusions of alternative realities (Pennanen et al., 2020). The authors found that the virtual eating environment generated a more positive emotional response, raising consumers' rating of a healthy snack compared to an unhealthy snack consumed in a plain, non-immersive environment.
5.2.2. Changing eating behaviour
Humans estimate their fullness via indirect cues such as elevated blood-glucose levels, distension of the stomach, and the perceived amount of food consumed. This kind of estimation is not always accurate because of the influence of the surroundings. Previous research in psychology and economics reported a series of environmental factors that can affect eating behaviours, such as plate or packaging size, type of food, and eating context or environment (Rolls et al., 2002; Bell & Pliner, 2003). This suggests that AR could be a potential tool for altering eating satisfaction by manipulating environmental factors. An earlier work (Narumi, 2016) proposed the use of HMD-based AR to visualize food portions and transform them to appear larger than their actual sizes. The results revealed that altering the users' visual perception of food size was linked to food consumption: specifically, an augmented size larger than the actual size led to less food consumption. A similar study was carried out by Suzuki et al. (2014), who designed an interactive system to implicitly influence the satisfaction of drinking a beverage and to control beverage consumption by creating a volume perception illusion using the AR technique. As seen in Fig. 6A, the developed system consists of a laptop computer, a video see-through HMD, and a magnetic position tracker (Polhemus). To acquire the relative positions, one of the source coils of the Polhemus was attached to the cup and the other was integrated into the web camera. The image of the cup was captured and sent to the laptop, which returned a composed image; this was then superimposed on the image of the actual cup via the HMD, creating the illusion of a cup with a different shape and size. Their study further concluded that a visually lengthened cup contributed to more consumption, while a visually shortened cup was related to significantly smaller intake. Therefore, Suzuki et al. (2014) proposed that the developed system could be used to decrease total beverage consumption as a means of staying healthy and controlling the balance of nutritional quantities. As shown in Fig. 6B, with the integration of a refractometer, the cup first identified the sugar content of the beverage, triggering a deformation of the shape and size of the cup; for instance, the cup could be lengthened when the sugar content was considered low, and vice versa. Moreover, a tabletop system called 'CalibraTable' (Sakurai et al., 2015) was proposed to interactively change the apparent food volume by projecting virtual dishes around the food. The idea was based on the hypothesis that the ratio of the size of the food to that of the virtual dish plays a part in affecting the amount of food intake. Their results demonstrated that the size of the virtual dish can be used to unconsciously control the amount of food consumed, without loss of palatability or satisfaction. Another benefit of the developed system is its ease of implementation, with no need for a wearable device.

Fig. 6. Illusion cup: interactive control of beverage consumption based on an illusion of volume perception. Modified from Suzuki et al. (2014).
5.3. Retail applications
Interactive AR in the retail food chain has gained growing interest and is increasingly used by food retailers in both physical stores and online shops to enhance the customer experience. For example, Yim and Yoo (2020) reported that electronic digital menus with varying interactive features generated greater enjoyment and encouraged customers to order more within a shorter time in restaurants. A typical example is the compelling Domino's Pizza ordering application, which allows customers to streamline and customize the ordering process by selecting toppings and bases, bringing the pizza to life in front of them. Similarly, some restaurants have applied marker-based AR retail apps that provide an immersive, real-time, digitized retail customer experience (Chiu et al., 2021). Another study (Petit et al., 2021) examined how consumers' intentions to purchase food change depending on the visualization mode (3D vs. AR). As a popular visualization mode, 3D refers to an approach with interactive features such as zooming in and out or 360° rotation, allowing buyers to assess the product from different perspectives against a neutral background. In this work (Petit et al., 2021), served dishes (a salad and a burger) were presented on a smart device app in 3D mode and in AR mode, where the dish was visualised as superimposed on the participants' physical environment in the tablet camera view. Results showed that, compared to 3D mode, AR visualization of a served food increased purchase intention by eliciting mental simulation of the eating process.
5.4. Enhancing the cooking experience
Nowadays, preparing food, cooking, serving and other gastronomic processes have evolved with new trends blending state-of-the-art technologies such as the Internet of Things (IoT), AR/MR, and even robotics. It is anticipated that rapid future growth in the global kitchen market (Ergun et al., 2020) will orient research towards service operations that are more efficient, easy to use, remote-controllable, time-saving, and cost-saving in gastronomic processes. AR has been increasingly recognized as an ideal virtual assistant for providing demonstrations during cooking, thanks to its visualization and interactive functions. A recent review paper compared various display methods (e.g., images with text, videos, and 3D animation) for cooking instructions and investigated how each of these AR applications affects the understanding of complicated products and their applications (Hasada et al., 2019). In an interesting study (Ergun et al., 2020), researchers developed an innovative approach using a virtual assistant to facilitate an interactive food preparation and training experience. The user wore AR glasses and listened to detailed steps and instructions, following the recipe and cooking procedures step by step. During cooking, Microsoft's HoloLens was used as the AR tool, presenting 3D visualizations on the real cooker and pan.
5.5. Food-related training
AR is seen as a promising medium for industrial training (Martinetti et al., 2017). By enabling food industry employees to immerse themselves more deeply in their working environment, employees are more likely to grasp the required skills and become more willing to learn (Beck et al., 2016). From the perspective of food handlers, AR supports rigorous and proper food training, which is key to preventing food contamination. As reported by Clark et al. (2018), the use of AR HMDs introduced an innovative and effective approach to food safety training in the food industry. Albayrak et al. (2019) developed a training program for fast-food restaurant employees using AR glasses; by gamifying, personalizing, and shortening the training process, such a program can not only increase employee satisfaction and service quality, but also benefit the business financially by shortening, or possibly fully removing, one-on-one training sessions given by senior employees. Vignali et al. (2018) designed an AR system to enhance the safety of employees performing maintenance tasks on a food processing machine. First, a mobile device (tablet or smartphone) was used to frame markers attached to the machine. The relevant augmented information was then displayed directly on the screen of the device, showing details of the maintenance task through videos and text descriptions. Useful information for employees (e.g., warnings and alerts) was also included to ensure the task could be completed correctly and safely. The developed AR strategy can replace traditional paper-based training and make training more interactive.
5.6. Food education and learning
As identied by many studies, the success of education programs
depends largely on studentsmotivation. The emerging AR technique,
which allows users to see a supplemented reality through superimposed
virtual objects over the real world, has gained interest from researchers
and it has been implemented in diverse educational scenarios (Fran-
co-Mariscal, 2018). Combining virtual information (e.g., sound, image,
video) with the physical and tangible reality that surrounds us enriches
our perception of reality, therefore stimulating motivation for learning
(Chen et al., 2016). Garzon et al. (2020) developed an AR-based
educational application to foster sustainable agriculture in the context
of aquaponics. When users focused the camera of their mobile devices on
the trigger image (marker), it displayed virtual information related to
specic topics. Another study applied mobile AR technology to improve
the learning of nutrition knowledge (Chanlin & Chan, 2018). The
developed AR system allowed the student to scan food images, received
information on nutrient content, and recorded daily nutrient intake.
The choice of carbohydrate intake is of paramount importance, especially for people who follow a diet in which carbohydrate intake is limited, as in the case of diseases such as diabetes (Gutiérrez et al., 2019). Knowing how to calculate the number of carbohydrate choices a meal contains can help to appropriately control the disease (Calle-Bustos et al., 2017). Rollo et al. (2017) investigated to what extent an AR portion size app (ServAR) helped in estimating standard servings of carbohydrates. The app works by showing virtual carbohydrate servings on a real dish using an iPad Mini. In this study, 90 participants were randomized into (1) no information/aid (control group); (2) verbal information on standard servings; or (3) the use of ServAR. Results obtained across nine food types showed that participants aided by the AR technique estimated portions significantly more accurately. Juan et al. (2019) presented an AR app to help interpret the nutritional information about carbohydrates in packaged foods. Users were first guided by the app to locate the area where the nutritional information appears; help information for interpreting the nutritional label then appeared on the screen. The study further assessed the effectiveness of the app regarding learning outcomes, usability, and perceived satisfaction. An analysis of pre-knowledge and post-knowledge questionnaires from 40 participants showed a statistically significant increase in users' knowledge about carbohydrates after using the AR app.
5.7. Food production and precision farming
Precision farming involves the integration of information technology and sensor devices to provide useful knowledge for farmers, improve efficiency and decrease managerial costs (Caria et al., 2019). In recent years, research on using AR in precision farming has emerged, and AR has been found to be a key catalyst for modernizing agriculture by enhancing efficiency and productivity in the management of farming activities. Santana-Fernández et al. (2010) introduced wearable AR technology in an assisted guidance system for tractors: using AR glasses, farmers could visualize, in real time, the parts of the field that had already been treated while the tractor was operating. Subsequently, Kaizu and Choi (2012) upgraded the tractor navigation system to enable night-time farming using AR technology. Previous studies have also applied the AR technique to identifying plants (Katsaros & Keramopoulos, 2017), weeds (Vidal & Vidal, 2010), and pests (Nigam et al., 2011). More recently, Huuskonen and Oksanen (2018) proposed a novel soil sampling method based on the combination of drone imaging and AR techniques. The locations for soil samples were first determined from a soil map produced by drone imaging after ploughing; users were then guided to the sample points with the aid of AR glasses. Another study (Xi et al., 2019) explored the potential of AR to help prawn farmers optimise daily operations. In detail, the authors proposed an AR virtual workspace that allows a farmer to see an overview of water conditions across the entire farm using immersive AR headsets such as the HoloLens, enabling users to instantly locate questionable ponds from aggregated real-time data overlaid on a farm hologram. A head-up-display interface showing details of the focused pond can be triggered by hand or eye-gaze. The proposed system also supports a shared AR experience in which multiple co-located users can inspect the same data set.
Due to their scalability and environmental friendliness, IoT technologies have come to be considered indispensable in precision farming. Traditional methods for visualizing agricultural IoT data are largely textual and off-site, so the surrounding physical context is often not displayed; interpreting IoT data without the necessary physical context is not intuitive (J. M. Huang et al., 2015), which motivates the development of interactive IoT data visualization. With AR, IoT data can be superimposed onto a physical crop in real time, enabling farmers to interact with the data directly in the real-world environment, which is expected to improve monitoring tasks and reduce planting operation costs. Phupattanasilp and Tong (2019) introduced AR as a support for IoT data visualization in crop monitoring, called AR-IoT, which superimposes IoT data directly onto real-world objects and enhances object interaction. The multi-camera imaging platform of the IoT was connected to the internet and integrated into the system to measure crop coordinates precisely, while multiple sensors collected useful additional data (e.g., moisture content).
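To make the AR-IoT idea concrete, the following sketch projects the 3D coordinates of crop-mounted sensors into a camera frame and annotates each crop with its latest reading. The camera intrinsics, pose, crop positions, and moisture values are all illustrative assumptions, not values from the cited system:

```python
import numpy as np
import cv2

# Hypothetical camera calibration (intrinsics) and pose (extrinsics).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)                      # assume no lens distortion
rvec = np.zeros(3)                      # camera looking down the z-axis
tvec = np.array([0.0, 0.0, 5.0])        # crops ~5 m in front of the camera

# Hypothetical IoT payload: crop positions (m, world frame) and sensor readings.
crops = np.array([[-1.0, 0.2, 0.0], [0.0, 0.1, 0.3], [1.2, 0.0, -0.2]])
moisture = [34.5, 41.2, 28.9]           # e.g., soil moisture (%)

frame = np.zeros((480, 640, 3), np.uint8)   # stand-in for a live camera frame
pts, _ = cv2.projectPoints(crops, rvec, tvec, K, dist)

# Superimpose each reading at the crop's image location.
for (x, y), m in zip(pts.reshape(-1, 2).astype(int), moisture):
    cv2.circle(frame, (x, y), 6, (0, 255, 0), -1)
    cv2.putText(frame, f"{m:.1f}%", (x + 10, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 1)
cv2.imwrite("ar_iot_overlay.png", frame)
```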
6. Discussion of current MR applications
Table 1 lists the eight selected documents concerning food-related applications of MR. The earliest research on this topic dates from 2017, reflecting the fact that MR is a recent technological term that has emerged only in the last few years. Several applications appear in the table, although most studies focused on using MR to improve the eating experience. It is also notable that the hardware reported in these MR systems consists exclusively of wearable electronic devices.
A recent study (Low et al., 2021) examined the influence of different contexts on consumers' emotional responses to food products using MR technology. Participants (n = 120) were asked to evaluate two tea-break snack products (i.e., a biscuit and a chocolate slice) across three environmental contexts: (1) sitting in the sensory booth; (2) in a real-life café; and (3) an evoked context of the same café, experienced while physically sitting in the sensory booth. The evoked context was achieved using a HoloLens device. Their results showed that MR technology induced the same level of emotional responses as the real-life café, suggesting its usefulness for assessing ecologically valid consumer responses.

Another study (Fujii et al., 2020) developed an MR system for co-eating with a robot to enrich the dining experience. A real robot was connected to an HMD (i.e., HoloLens), enabling the robot to handle the food models seen in the HMD. The participant wearing the HMD could thus watch the robot's eating process: reaching out for the virtual food, bringing it to its mouth, and the virtual food disappearing. Experimental results from 29 participants showed that users expressed more enjoyment and satiation when dining with the robot than when merely talking with it. A similar study (Korsgaard et al., 2020) investigated the potential of MR systems to enhance the eating experience and manipulate food intake among older adults. The immersive eating context was created with an Oculus Rift CV1 HMD presenting a virtual environment in the form of a living room, while still allowing users to see their own hands and the food on the table, facilitating the eating process. The MR system then brought meal partners (i.e., friends of the participants) into this virtual living room, presenting them as semi-transparent, white, genderless avatars. Through a microphone integrated into the HMD, meal partners could hear each other and converse during eating, and the rotation and translation of the HMD allowed them to see each other's head movements. Participants (n = 30) found that the MR-aided technique enhanced their eating experience with more engagement in social interactions, leading to positive mood changes. An earlier study (Korsgaard et al., 2019) used MR technology to create virtual eating environments optimized to ensure a pleasant atmosphere for older adults; wearing an HMD, participants could experience two virtual environments: an empty kitchen and a park with the sounds of birds singing.

Another application of MR (Naritomi et al., 2019), called FoodChangeLens, involved transforming food categories to make meals more enjoyable. In this study, a convolutional neural network converted the actual food image into another type selected by the user, and a HoloLens then overlaid the transformed food images on the real food objects. Finally, Fuchs et al. (2019) applied an MR headset-mediated intervention at a vending machine by displaying nutritional properties. The MR system was reported to affect beverage purchasing choices, improving the nutritional quality of the selected products compared to a control group.
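FoodChangeLens itself is not reproduced here; the sketch below only outlines the inference step of a generic image-to-image food translation network of the kind described, with a tiny untrained `Generator` standing in for the trained CNN:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Untrained stand-in for a trained food-to-food translation CNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

gen = Generator().eval()                  # a real system would load trained weights
rgb = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for a captured food image in [-1, 1]
with torch.no_grad():
    converted = gen(rgb)                  # translated food image, to be overlaid in AR
print(converted.shape)                    # torch.Size([1, 3, 256, 256])
```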
7. Current challenges
This section explains the challenges that AR/MR must overcome to reach its full potential in the food sector. Starting with the technical challenges, a live (wireless) connection is required to transfer information, which is usually achieved over a local network to a router. The obvious disadvantage is that a reliable and consistent AR/MR experience depends strongly on the coverage and speed of the connection (Academic, 2017).
To date, there have been substantial difficulties in precisely recognizing food types and identifying their locations and depths in uncontrolled environments (Zhou et al., 2019). For food recognition, images with a high illumination level and a pure white background are more likely to be correctly identified, while low illumination and overcomplicated backgrounds usually result in a lower recognition rate. In addition, multiple food products appearing and aggregating in the same image pose further challenges for recognition. Inferior identification also occurs when the orientation of the food differs from the images in the database, or when the food is partially occluded. There is an enormous variety of food items, some of which are difficult to discriminate even for humans. Better recognition algorithms incorporating more features are therefore urgently needed for future development.
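Pending such algorithms, one pragmatic safeguard is to reject low-confidence predictions at inference time. A minimal sketch (using torchvision >= 0.13; a generic ImageNet-pretrained ResNet stands in for a dedicated food classifier, and the 0.6 threshold and input file are arbitrary assumptions):

```python
import torch
from torchvision import models
from PIL import Image

# Generic ImageNet-pretrained model as a stand-in for a dedicated food classifier.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("plate.jpg").convert("RGB")  # hypothetical input photo
with torch.no_grad():
    probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
conf, idx = probs.max(dim=1)

# Reject low-confidence predictions, as expected under poor illumination,
# cluttered backgrounds, unusual orientations, or partial occlusion.
if conf.item() < 0.6:
    print("Uncertain - ask the user, or fall back to a fiducial marker")
else:
    print(f"{weights.meta['categories'][idx.item()]} ({conf.item():.2f})")
```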
For AR/MR applications, it is of paramount importance to measure the position and orientation of food objects in the real world. To achieve this, methods based on visual measurement using computer vision technologies have been extensively investigated. Still, identifying objects from images alone in a complex and dynamic world remains challenging, especially for real-time processing. Fiducial markers are therefore used to make the recognition task easier. A retroreflector, an optical marker that reflects light back in the direction of the light source, is widely used to achieve stable object recognition. Nevertheless, most available retroreflectors are made from glass or plastic, making them difficult to apply to food products (Oku et al., 2018). An edible retroreflector made from transparent foodstuff could be a good solution for applying AR to foods. Earlier studies reported edible AR markers patterned on the solid surfaces of cookies (Narumi et al., 2011), and Oku et al. (2018) later developed an edible retroreflector made from a transparent foodstuff, Japanese agar. Intensified research on this topic is expected to broaden AR applications in the food sector.
Rendering virtual objects is considered one of the most crucial yet hardest tasks in AR. It refers to the process by which 3D virtual objects are projected onto the physical world, blending virtual elements with the real environment. Indeed, achieving a perfect overlay in real time and with the correct perspective presents a huge challenge. While the technology is advancing rapidly, AR/MR still cannot create a close-to-photorealistic experience, especially during interaction with food items or humans. In addition, better interactivity requires real-time object tracking; in particular, tracking multiple moving objects such as hands and utensils remains a challenge that requires sophisticated computer vision methods.
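Continuing the marker sketch above, a correct-perspective overlay reduces to projecting the virtual object's vertices through the estimated pose. This wireframe example assumes `rvec`, `tvec`, `K`, `dist`, and `frame` from the previous snippet:

```python
import numpy as np
import cv2

# Assumes rvec, tvec, K, dist, and frame from the marker-pose sketch above.
s = 0.03  # cube edge length in metres (assumption)
cube = np.float32([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0],
                   [0, 0, -s], [s, 0, -s], [s, s, -s], [0, s, -s]])

# Project the 3D vertices into the image with the pose recovered by solvePnP.
pts, _ = cv2.projectPoints(cube, rvec, tvec, K, dist)
pts = pts.reshape(-1, 2).astype(int)

# Draw base, top, and vertical edges; a renderer would rasterize a full model.
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0),
             (4, 5), (5, 6), (6, 7), (7, 4),
             (0, 4), (1, 5), (2, 6), (3, 7)]:
    cv2.line(frame, tuple(pts[i]), tuple(pts[j]), (0, 255, 255), 2)
cv2.imwrite("overlay.png", frame)
```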
Smartphone-based AR systems are affordable and widely available, yet they require the user to hold the phone throughout the whole process, and a certain degree of distortion may occur with the smartphone camera. An HMD frees the user's hands during use, but it is typically expensive, and in food-related research participants are asked to consume food while wearing a heavy and bulky HMD.
Table 1
Literature review of published articles applying MR technology for food.

Research topic                              Device            Reference
Emotional response towards food             HoloLens          Low et al. (2021)
Enhanced eating experience                  HoloLens          Fujii et al. (2020)
Enhanced eating experience                  Oculus Rift CV1   Korsgaard et al. (2020)
Food category transformation                HoloLens          Naritomi et al. (2019)
Enhanced eating experience                  Oculus Rift CV1   Korsgaard et al. (2019)
Help make healthy food-related purchases    HoloLens          Fuchs et al. (2019)
Enhanced eating experience                  AR glasses        Kanak et al. (2018)
Enhanced eating experience                  HMD               Korsgaard et al. (2017b)
In such applications, comfort during food consumption presents another challenge. AR eyewear could be the way forward, provided it is designed to be comfortable, visually appealing, affordable, lightweight, and of superior quality.
Ethical issues regarding the use of AR in retail applications are also worth considering. One challenge relates to information overload: unwanted information can make it more difficult for consumers to concentrate, because the message the seller intends to deliver is not always relevant or interesting to the customer. Moreover, while AR data are often crucial to researchers or sellers, they give rise to privacy issues; for example, consumers/participants might not be aware that an interactive AR application is able to record their responses or access their location information.
8. Future perspectives
Bearing in mind that AR/MR systems are still under development, applications in the food sector remain relatively new, and more research is required to explore their usability, validity, and potential. Although promising results have been reported in the literature, several key aspects merit further investigation. With respect to studies focusing on food sensory research, the incorporation of multiple sensors to capture behavioural and biometric data has the potential to advance future research (Wang et al., 2021). A range of sensors, such as electroencephalography and galvanic skin response devices, are available on the market and could be used to measure multiple indices, including brain activity, skin conductance, and body temperature, enriching the analysis of participants' responses to augmented food products and environments. There is no doubt that close collaboration between diverse disciplines would bring this research field to the next level, integrating knowledge from computer science, computer vision, data science, psychology, food sensory science, and marketing science. As Velasco and Obrist (2020) point out, multidisciplinary collaboration is desirable, contributing to a more accelerated and impactful advancement of this research field, from studying the multisensory experience to manipulating food choice.
Technological advancement has facilitated the development of 3D images of the internal structures of food. Displaying such 3D imaging data on a 2D flat-panel display (e.g., a PC screen) is suboptimal due to the lack of depth perception (Wheeler et al., 2018). As innovative visualization technologies, AR/MR make it possible to present and visualize 3D images in the physical world, and integrating them into the food industry will pave the way for new research directions targeting the 3D visualization of internal structures in a food product (Crofton et al., 2019). Combining imaging techniques for generating the 3D structure of food products (e.g., X-ray micro-computed tomography) with the sophisticated volume rendering and visualization capabilities of AR/MR makes it possible to 'step inside' a complex food product, providing an innovative evaluation of the internal food structure that is not feasible with existing technologies. Such an immersive and engaging tool would enable researchers to inspect and evaluate the internal structures of food, leading to the advancement of new food research methodologies.
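One plausible pipeline (an assumption, not a method from the cited studies) is to extract an iso-surface mesh from the micro-CT volume and export it in a format AR engines can import. A minimal Python sketch with a synthetic volume standing in for real CT data:

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a reconstructed micro-CT volume of a food product:
# a sphere with internal voids, loosely mimicking a bread crumb structure.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = ((x**2 + y**2 + z**2) < 28**2).astype(float)
rng = np.random.default_rng(1)
for cx, cy, cz in rng.integers(-20, 20, (15, 3)):          # carve air cells
    volume[((x - cx)**2 + (y - cy)**2 + (z - cz)**2) < 36] = 0.0

# Extract the iso-surface separating matrix from air (marching cubes).
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)

# Export as Wavefront OBJ; most AR engines (e.g., Unity) can import this.
with open("food_structure.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for a, b, c in faces + 1:                               # OBJ is 1-indexed
        f.write(f"f {a} {b} {c}\n")
print(f"{len(verts)} vertices, {len(faces)} faces")
```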
A range of real-time and non-invasive imaging techniques have been developed in recent years to study different aspects of food properties, with hyperspectral imaging being one of the most popular tools (Xu et al., 2016). Hyperspectral imaging (HSI) or multispectral imaging (MSI), as an emerging optical imaging technique, integrates spectral signatures with spatial information by generating a 3D dataset, enabling non-contact, non-destructive, non-ionizing, and label-free measurements (Xu et al., 2015). The combination of HSI and AR has been demonstrated in fields including medicine (J. Huang et al., 2020), remote sensing (Zhang et al., 2019), and geoscience (Engelke et al., 2019). For instance, Urade et al. (2021) developed an approach to overlay HSI images onto the operative field using AR technology, providing an effective surgical navigation tool; the HSI system was designed to correctly identify the demarcation line and quantify surface liver oxygenation. Another study (J. Huang et al., 2020) superimposed HSI images and segmentation results of a brain tumor phantom onto the real scene using the HoloLens AR headset. Interaction between the HSI images and the real world was available in terms of repositioning, rotating, and changing visibility through hand or voice controls, producing an easy-to-use and multifaceted visualization of brain tissue in real time. The integration of HSI and AR should open up opportunities for innovative visualization within the food industry. For example, HSI-AR would enable monitoring and visualizing temporal changes in the chemical/physical properties of a food product during processing, on-site and in real time, facilitating an improved understanding of dynamic food quality changes. This could revolutionize the current implementation of process analytical technology (PAT) in food manufacturing through continuous monitoring in the real world, leading to the desired end-product quality. HSI-AR could also be a good tool to identify and display the impact of ingredient addition or applied treatments on food chemical/physical properties: HSI captures the chemical/physical changes as prediction images, while AR superimposes the resulting image on the physical object. This direct visualization benefits not only food scientists, through a better understanding of the complex chemistry/biochemistry of food systems, but also food science students, through faster acquisition of the required knowledge.
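As a toy illustration of this HSI-AR idea, the sketch below converts a synthetic hypercube into a per-pixel prediction image using a hypothetical regression vector and alpha-blends the heatmap over the co-registered scene image; an AR display would then register this overlay onto the physical object:

```python
import numpy as np
import cv2

# Hypothetical hypercube: 120 x 160 pixels x 50 spectral bands.
rng = np.random.default_rng(2)
cube = rng.random((120, 160, 50)).astype(np.float32)
rgb = (rng.random((120, 160, 3)) * 255).astype(np.uint8)   # co-registered scene

# Hypothetical regression coefficients (e.g., from a PLS moisture model).
beta = rng.normal(size=50).astype(np.float32)
pred = cube @ beta                                          # per-pixel prediction map

# Normalize to 8-bit, colour-map, and blend over the scene image.
norm = cv2.normalize(pred, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heat = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
overlay = cv2.addWeighted(rgb, 0.6, heat, 0.4, 0)
cv2.imwrite("hsi_ar_overlay.png", overlay)
```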
9. Conclusions
To the best of our knowledge, this paper provides the first review of the emerging augmented reality and mixed reality technologies for food-related applications and research. It has highlighted the potential benefits of AR/MR within the food industry (such as monitoring food nutrition). The reviewed literature shows that these technologies are promising for experimentation, development, and innovation, and that the unique fusion of food and AR/MR can go beyond the limitations of conventional food-related research. Despite many advantages, challenges remain that impede widespread adoption, as outlined in this review. One interesting future direction is to integrate AR/MR with other emerging, state-of-the-art techniques, such as hyperspectral imaging, which will pave the way for new research and applications in the food sector. The future progress of this field demands a highly interdisciplinary approach combining knowledge from diverse fields such as food science, chemistry, biotechnology, psychology, computer science, and computer vision. As the hardware and software continue to evolve, the availability, affordability, and accessibility of the key elements of AR/MR technologies will ultimately be achieved, opening a new chapter for food-related AR/MR research.
Acknowledgments
This work was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224.
References
Academic, W. (2017). Augmented reality for food marketers and consumers. https://doi.org/10.3920/978-90-8686-842-1

Albayrak, M. S., Oner, A., Atakli, I. M., & Ekenel, H. K. (2019). Personalized training in fast-food restaurants using augmented reality glasses. In Proceedings - 2019 international symposium on educational technology, ISET 2019 (pp. 129–133). https://doi.org/10.1109/ISET.2019.00035

Antonucci, F., Figorilli, S., Costa, C., Pallottino, F., Raso, L., & Menesatti, P. (2019). A review on blockchain applications in the agri-food sector. Journal of the Science of Food and Agriculture, 99(14), 6129–6138. https://doi.org/10.1002/jsfa.9912

Arshad, H., Obeidy, W. K., & Abidin, R. Z. (2017). An interactive application for halal products identification based on augmented reality. International Journal of Advanced Science, Engineering and Information Technology, 139–145. https://media.neliti.com/media/publications/109858-EN-an-interactive-application-for-halal-pro.pdf

Badiali, G., Cercenelli, L., Battaglia, S., Marcelli, E., Marchetti, C., Ferrari, V., & Cutolo, F. (2020). Review on augmented reality in oral and cranio-maxillofacial surgery: Toward surgery-specific head-up displays. IEEE Access, 8, 59015–59028. https://doi.org/10.1109/ACCESS.2020.2973298

Barreiro-Hurlé, J., Gracia, A., & de-Magistris, T. (2010). Does nutrition information on food products lead to healthier food choices? Food Policy, 35(3), 221–229. https://doi.org/10.1016/J.FOODPOL.2009.12.006

Beck, D. E., Crandall, P. G., O'Bryan, C. A., & Shabatura, J. C. (2016). Taking food safety to the next level—an augmented reality solution. Journal of Foodservice Business Research, 19(4), 382–395. https://doi.org/10.1080/15378020.2016.1185872

Behzadan, A. H., Dong, S., & Kamat, V. R. (2015). Augmented reality visualization: A review of civil infrastructure system applications. Advanced Engineering Informatics, 29(2), 252–267. https://doi.org/10.1016/J.AEI.2015.03.005

Bell, R., & Pliner, P. L. (2003). Time to eat: The relationship between the number of people eating and meal duration in three lunch settings. Appetite, 41(2), 215–218. https://doi.org/10.1016/S0195-6663(03)00109-0

Bouzembrak, Y., Klüche, M., Gavai, A., & Marvin, H. J. P. (2019). Internet of Things in food safety: Literature review and a bibliometric analysis. Trends in Food Science & Technology, 94, 54–64. https://doi.org/10.1016/j.tifs.2019.11.002

Calle-Bustos, A. M., Juan, M. C., García-García, I., & Abad, F. (2017). An augmented reality game to support therapeutic education for children with diabetes. PLoS One, 12(9), 1–23. https://doi.org/10.1371/journal.pone.0184645

Caria, M., Sara, G., Todde, G., Polese, M., & Pazzona, A. (2019). Exploring smart glasses for augmented reality: A valuable and integrative tool in precision livestock farming. Animals, 9(11), 1–16. https://doi.org/10.3390/ani9110903

Casari, F. A., Navab, N., Hruby, L. A., Kriechling, P., Nakamura, R., Tori, R., Nunes, F. de L. dos S., Queiroz, M. C., Fürnstahl, P., & Farshad, M. (2021). Augmented reality in orthopedic surgery is emerging from proof of concept towards clinical studies: A literature review explaining the technology and current state of the art. Current Reviews in Musculoskeletal Medicine, 14(2), 192–203. https://doi.org/10.1007/S12178-021-09699-3
Chanlin, L. J., & Chan, K. C. (2018). Augmented reality applied in dietary monitoring. Libri - International Journal of Libraries and Information Services, 68(2), 137–147. https://doi.org/10.1515/LIBRI-2017-0024/HTML

Chen, C. H., Chou, Y. Y., & Huang, C. Y. (2016). An augmented-reality-based concept map to support mobile learning for science. Asia-Pacific Education Researcher, 25(4), 567–578. https://doi.org/10.1007/S40299-016-0284-3

Chiu, C. L., Ho, H. C., Yu, T., Liu, Y., & Mo, Y. (2021). Exploring information technology success of Augmented Reality Retail Applications in retail food chain. Journal of Retailing and Consumer Services, 61, Article 102561. https://doi.org/10.1016/j.jretconser.2021.102561

Clark, J., Crandall, P., & Shabatura, J. (2018). Wearable technology effects on training outcomes of restaurant food handlers. Journal of Food Protection, 81(8), 1220–1226. https://doi.org/10.4315/0362-028X.JFP-18-033

Crofton, E. C., Botinestean, C., Fenelon, M., & Gallagher, E. (2019). Potential applications for virtual and augmented reality technologies in sensory science. Innovative Food Science & Emerging Technologies, 56, Article 102178. https://doi.org/10.1016/j.ifset.2019.102178

Cutolo, F., Cattari, N., Fontana, U., & Ferrari, V. (2020). Optical see-through head-mounted displays with short focal distance: Conditions for mitigating parallax-related registration error. Frontiers in Robotics and AI, 7, Article 572001. https://doi.org/10.3389/FROBT.2020.572001

Cutolo, F., Fontana, U., Cattari, N., & Ferrari, V. (2019). Off-line camera-based calibration for optical see-through head-mounted displays. Applied Sciences, 10(1), Article 193. https://doi.org/10.3390/APP10010193

Domhardt, M., Tiefengrabner, M., Dinic, R., Fotschl, U., Oostingh, G. J., Stutz, T., Stechemesser, L., Weitgasser, R., & Ginzinger, S. W. (2015). Training of carbohydrate estimation for people with diabetes using mobile augmented reality. Journal of Diabetes Science and Technology, 9(3), 516–524. https://doi.org/10.1177/1932296815578880

Engelke, U., Rogers, C., Klump, J., & Lau, I. (2019). HypAR: Situated mineralogy exploration in augmented reality. In Proceedings - VRCAI 2019: 17th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry. https://doi.org/10.1145/3359997.3365715

Ergun, S., Karadeniz, A. M., Tanriseven, S., & Simsek, I. Y. (2020). AR-supported induction cooker AR-SI: One step before the food robot. In Proceedings of the 2020 IEEE international conference on human-machine systems, ICHMS 2020. https://doi.org/10.1109/ICHMS49158.2020.9209362

Floros, J. D., Newsome, R., Fisher, W., Barbosa-Cánovas, G. V., Chen, H., Dunne, C. P., German, J. B., Hall, R. L., Heldman, D. R., Karwe, M. V., Knabel, S. J., Labuza, T. P., Lund, D. B., Newell-McGloughlin, M., Robinson, J. L., Sebranek, J. G., Shewfelt, R. L., Tracy, W. F., Weaver, C. M., & Ziegler, G. R. (2010). Feeding the world today and tomorrow: The importance of food science and technology. Comprehensive Reviews in Food Science and Food Safety, 9(5), 572–599. https://doi.org/10.1111/j.1541-4337.2010.00127.x

Franco-Mariscal, A. J. (2018). Discovering the chemical elements in food. Journal of Chemical Education, 95(3), 403–409. https://doi.org/10.1021/acs.jchemed.7b00218

Fuchs, K., Grundmann, T., Haldimann, M., & Fleisch, E. (2019). Impact of mixed reality food labels on product selection: Insights from a user study using headset-mediated food labels at a vending machine. In MADiMa 2019 - Proceedings of the 5th international workshop on multimedia assisted dietary management, co-located with MM 2019 (pp. 7–15). https://doi.org/10.1145/3347448.3357167

Fujii, A., Kochigami, K., Kitagawa, S., Okada, K., & Inaba, M. (2020). Development and evaluation of mixed reality co-eating system: Sharing the behavior of eating food with a robot could improve our dining experience. In 29th IEEE international conference on robot and human interactive communication, RO-MAN 2020 (pp. 357–362). https://doi.org/10.1109/RO-MAN47096.2020.9223518
Fujimoto, Y. (2018). Projection mapping for enhancing the perceived deliciousness of food. IEEE Access, 6, 59975–59985. https://doi.org/10.1109/ACCESS.2018.2875775

Garzon, J., Baldiris, S., Acevedo, J., & Pavon, J. (2020). Augmented reality-based application to foster sustainable agriculture in the context of aquaponics. In Proceedings - IEEE 20th international conference on advanced learning technologies, ICALT 2020 (pp. 316–318). https://doi.org/10.1109/ICALT49669.2020.00101

Gutiérrez, F., Htun, N. N., Charleer, S., De Croon, R., & Verbert, K. (2019). Designing augmented reality applications for personal health decision-making. In Proceedings of the 52nd Hawaii international conference on system sciences (Vol. 6, pp. 1738–1747). https://doi.org/10.24251/hicss.2019.212

Hasada, H., Zhang, J., Yamamoto, K., Ryskeldiev, B., & Ochiai, Y. (2019). AR cooking: Comparing display methods for the instructions of cookwares on AR goggles. In Lecture Notes in Computer Science, 11570 LNCS (pp. 127–140). https://doi.org/10.1007/978-3-030-22649-7_11

Holz, T., Campbell, A. G., O'Hare, G. M. P., Stafford, J. W., Martin, A., & Dragone, M. (2011). MiRA—mixed reality agents. International Journal of Human-Computer Studies, 69(4), 251–268. https://doi.org/10.1016/J.IJHCS.2010.10.001

Huang, J., Halicek, M., Shahedi, M., & Fei, B. (2020). Augmented reality visualization of hyperspectral imaging classifications for image-guided brain tumor resection. https://doi.org/10.1117/12.2549041

Huang, J. M., Ong, S.-K., & Nee, A. Y. (2015). Real-time finite element structural analysis in augmented reality. Advances in Engineering Software, 87, 43–56. https://www.sciencedirect.com/science/article/pii/S0965997815000733

Huuskonen, J., & Oksanen, T. (2018). Soil sampling with drones and augmented reality in precision agriculture. Computers and Electronics in Agriculture, 154, 25–35. https://doi.org/10.1016/j.compag.2018.08.039

Jiang, H., Starkman, J., Liu, M., & Huang, M. (2018). Food nutrition visualization on Google Glass: Design tradeoff and field evaluation. IEEE Consumer Electronics Magazine, 7(3), 21–31. https://doi.org/10.1109/MCE.2018.2797740

Juan, M.-C., Charco, J. L., García-García, I., & Mollá, R. (2019). An augmented reality app to learn to interpret the nutritional information on labels of real packaged foods. Frontiers in Computer Science, 1. https://doi.org/10.3389/fcomp.2019.00001

Kaizu, Y., & Choi, J. (2012). Development of a tractor navigation system using augmented reality. Engineering in Agriculture, Environment and Food, 5(3), 96–101. https://www.sciencedirect.com/science/article/pii/S1881836612800218

Kanak, A., Polat, O., & Erg, R. K. (2018). Akıllı bir Yemek Sahnesi Deneyimi [An intelligent dining scene experience], 5–7.

Katsaros, A., & Keramopoulos, E. (2017). FarmAR, a farmer's augmented reality application based on semantic web. In Computer networks and social media conference (SEEDA-CECNSM) (pp. 1–6). https://ieeexplore.ieee.org/abstract/document/8088230/
Korsgaard, D., Bjørner, T., & Nilsson, N. C. (2019). Where would you like to eat? A formative evaluation of mixed-reality solitary meals in virtual environments for older adults with mobility impairments who live alone. Food Research International, 117, 30–39. https://doi.org/10.1016/j.foodres.2018.02.051

Korsgaard, D., Bjørner, T., Bruun-Pedersen, J. R., Sorensen, P. K., & Perez-Cueto, F. J. A. (2020). Eating together while being apart: A pilot study on the effects of mixed-reality conversations and virtual environments on older eaters' solitary meal experience and food intake. In Proceedings - 2020 IEEE conference on virtual reality and 3D user interfaces, VRW 2020 (pp. 365–370). https://doi.org/10.1109/VRW50115.2020.00079

Korsgaard, D., Nilsson, N. C., & Bjørner, T. (2017a). Immersive eating: Evaluating the use of head-mounted displays for mixed reality meal sessions. In 2017 IEEE 3rd workshop on everyday virtual reality, WEVR 2017 (pp. 8–11). https://doi.org/10.1109/WEVR.2017.7957709

Korsgaard, D., Nilsson, N. C., & Bjørner, T. (2017b). Immersive eating: Evaluating the use of head-mounted displays for mixed reality meal sessions. In 2017 IEEE 3rd workshop on everyday virtual reality, WEVR 2017. https://doi.org/10.1109/WEVR.2017.7957709

Lam, M. C., Suwadi, N. A., Mohd Zainul Arien, A. H., Poh, B. K., Sai, N. S., & Wong, J. E. (2020). An evaluation of a virtual atlas of portion sizes (VAPS) mobile augmented reality for portion size estimation. Virtual Reality. https://doi.org/10.1007/s10055-020-00484-0

Lepetit, V., & Berger, M. O. (2000). Semi-automatic method for resolving occlusion in augmented reality. In Proceedings of the IEEE computer society conference on computer vision and pattern recognition (Vol. 2, pp. 225–230). https://doi.org/10.1109/CVPR.2000.854794

Low, J. Y. Q., Lin, V. H. F., Jun Yeon, L., & Hort, J. (2021). Considering the application of a mixed reality context and consumer segmentation when evaluating emotional response to tea break snacks. Food Quality and Preference, 88, Article 104113. https://doi.org/10.1016/j.foodqual.2020.104113

Lungu, A. J., Swinkels, W., Claesen, L., Tu, P., Egger, J., & Chen, X. (2021). A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: An extension to different kinds of surgery. Expert Review of Medical Devices, 18(1), 47–62. https://doi.org/10.1080/17434440.2021.1860750

Maas, M. J., & Hughes, J. M. (2020). Virtual, augmented and mixed reality in K–12 education: A review of the literature. Technology, Pedagogy and Education, 29(2), 231–249. https://doi.org/10.1080/1475939X.2020.1737210

Martinetti, A., Rajabalinejad, M., & Van Dongen, L. (2017). Shaping the future maintenance operations: Reflections on the adoptions of augmented reality through problems and opportunities. Procedia CIRP, 59, 14–17. https://doi.org/10.1016/J.PROCIR.2016.10.130

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12), 1321–1329. https://search.ieice.org/bin/summary.php?id=e77-d_12_1321

Mokmin, N. A. M. A. (2020). Augmented reality information for food (ARIF): Design and development. In ACM international conference proceeding series (pp. 193–196). https://doi.org/10.1145/3439147.3439162

Nakano, K., Horita, D., Kawai, N., Isoyama, N., Sakata, N., Kiyokawa, K., Yanai, K., & Narumi, T. (2021). A study on persistence of GAN-based vision-induced gustatory manipulation. Electronics (Switzerland), 10(10), 1–24. https://doi.org/10.3390/electronics10101157
Nakano, K., Horita, D., Sakata, N., Kiyokawa, K., Yanai, K., & Narumi, T. (2019). DeepTaste: Augmented reality gustatory manipulation with GAN-based real-time food-to-food translation. In Proceedings - 2019 IEEE international symposium on mixed and augmented reality, ISMAR 2019 (pp. 212–223). https://doi.org/10.1109/ISMAR.2019.000-1

Naritomi, S., Tanno, R., Ege, T., & Yanai, K. (2019). FoodChangeLens: CNN-based food transformation on HoloLens. In Proceedings - 2018 IEEE international conference on artificial intelligence and virtual reality, AIVR 2018 (pp. 197–199). https://doi.org/10.1109/AIVR.2018.00046

Naritomi, S., & Yanai, K. (2020). CalorieCaptorGlass: Food calorie estimation based on actual size using HoloLens and deep learning. In Proceedings - 2020 IEEE conference on virtual reality and 3D user interfaces, VRW 2020 (pp. 819–820). https://doi.org/10.1109/VRW50115.2020.00260

Narumi, T. (2016). Multi-sensorial virtual reality and augmented human food interaction. In MHFI 2016 - 1st workshop on multi-sensorial approaches to human-food interaction. https://doi.org/10.1145/3007577.3007587

Narumi, T., Nishizaka, S., Kajinami, T., Tanikawa, T., & Hirose, M. (2011). Augmented reality flavors: Gustatory display based on edible marker and cross-modal interaction. In Conference on human factors in computing systems - proceedings (pp. 93–102). https://doi.org/10.1145/1978942.1978957

Nigam, A., Kabra, P., & Doke, P. (2011). Augmented reality in agriculture. In 2011 IEEE 7th international conference on wireless and mobile computing, networking and communications (WiMob) (pp. 445–448). https://ieeexplore.ieee.org/abstract/document/6085361/

Oku, H., Uji, T., Zhang, Y., & Shibahara, K. (2018). Edible fiducial marker made of edible retroreflector. Computers & Graphics, 77, 156–165. https://doi.org/10.1016/j.cag.2018.10.002

Parida, K., Bark, H., & Lee, P. S. (2021). Emerging thermal technology enabled augmented reality. Advanced Functional Materials, Article 2007952. https://doi.org/10.1002/adfm.202007952

Pennanen, K., Närväinen, J., Vanhatalo, S., Raisamo, R., & Sozer, N. (2020). Effect of virtual eating environment on consumers' evaluations of healthy and unhealthy snacks. Food Quality and Preference, 82, Article 103871. https://doi.org/10.1016/j.foodqual.2020.103871

Petit, O., Javornik, A., & Velasco, C. (2021). We eat first with our (digital) eyes: Enhancing mental simulation of eating experiences via visual-enabling technologies. Journal of Retailing. https://doi.org/10.1016/j.jretai.2021.04.003

Phupattanasilp, P., & Tong, S. R. (2019). Augmented reality in the integrative internet of things (AR-IoT): Application for precision farming. Sustainability, 11(9). https://doi.org/10.3390/su11092658

Piqueras-Fiszman, B., & Spence, C. (2015). Sensory expectations based on product-extrinsic food cues: An interdisciplinary review of the empirical evidence and theoretical accounts. Food Quality and Preference, 40, 165–179.

Plecher, D. A., Eichhorn, C., Steinmetz, C., & Klinker, G. (2020). TrackSugAR. In International conference on human-computer interaction (pp. 442–459). https://doi.org/10.1007/978-3-030-49904-4

Prescott, J. (2015). Multisensory processes in flavour perception and their influence on food choice. Current Opinion in Food Science, 47–52. https://www.sciencedirect.com/science/article/pii/S221479931500048X
Ranasinghe, N., Tolley, D., Nguyen, T. N. T., Yan, L., Chew, B., & Do, E. Y. L. (2019). Augmented flavours: Modulation of flavour experiences through electric taste augmentation. Food Research International, 117, 60–68. https://doi.org/10.1016/j.foodres.2018.05.030

Raskar, R., Welch, G., & Chen, W. C. (1999). Table-top spatially-augmented reality: Bringing physical models to life with projected imagery. In Proceedings - 2nd IEEE and ACM international workshop on augmented reality, IWAR 1999 (pp. 64–71). https://doi.org/10.1109/IWAR.1999.803807

Rejeb, A., Rejeb, K., & Keogh, J. G. (2021). Enablers of augmented reality in the food supply chain: A systematic literature review. Journal of Foodservice Business Research, 1–30. https://doi.org/10.1080/15378020.2020.1859973

Rollo, M. E., Bucher, T., Smith, S., & C. (2017). The effect of an augmented reality aid on error associated with serving food. Journal of Nutrition & Intermediary Metabolism, 8, 90.

Rolls, B. J., Morris, E. L., & Roe, L. S. (2002). Portion size of food affects energy intake in normal-weight and overweight men and women. The American Journal of Clinical Nutrition, 76(6), 1207–1213. https://doi.org/10.1093/AJCN/76.6.1207

Sakurai, S., Narumi, T., Ban, Y., Tanikawa, T., & Hirose, M. (2015). CalibraTable: Tabletop system for influencing eating behavior (pp. 1–3). https://doi.org/10.1145/2818466.2818483

Santana-Fernández, J., Gómez-Gil, J., & Del-Pozo-San-Cirilo, L. (2010). Design and implementation of a GPS guidance system for agricultural tractors using Augmented Reality technology. Sensors, 10(11), 10435–10447. https://doi.org/10.3390/s101110435

Suprem, A., Mahalik, N., & Kim, K. (2013). A review on application of technology systems, standards and interfaces for agriculture and food sector. Computer Standards & Interfaces, 35(4), 355–364. https://doi.org/10.1016/J.CSI.2012.09.002

Suzuki, E., Narumi, T., Sakurai, S., Tanikawa, T., & Hirose, M. (2014). Illusion cup: Interactive controlling of beverage consumption based on an illusion of volume perception. In ACM international conference proceeding series. https://doi.org/10.1145/2582051.2582092

Tan, S., & Arshad, H. (2018). An efficient and robust mobile augmented reality application. International Journal on Advanced Science, Engineering and Information Technology, 8(4–2). https://doi.org/10.18517/ijaseit.8.4-2.6810

Thomas, P. C., & David, W. M. (1992). Augmented reality: An application of heads-up display technology to manual manufacturing processes. In Hawaii international conference on system sciences (pp. 659–669).

Todorović, V., Milić, N., & Lazarević, M. (2019). Augmented reality in food production traceability - use case. In EUROCON 2019 - 18th international conference on smart technologies (pp. 1–5). https://doi.org/10.1109/EUROCON.2019.8861734

Ueda, J., Spence, C., & Okajima, K. (2020). Effects of varying the standard deviation of the luminance on the appearance of food, flavour expectations, and taste/flavour perception. Scientific Reports, 10(1), 1–12. https://doi.org/10.1038/s41598-020-73189-8

Urade, T., Felli, E., Barberio, M., Al-Taher, M., Felli, E., Goffin, L., Agnus, V., Ettorre, G. M., Marescaux, J., Mutter, D., & Diana, M. (2021). Hyperspectral enhanced reality (HYPER) for anatomical liver resection. Surgical Endoscopy, 35(4), 1844–1850. https://doi.org/10.1007/s00464-020-07586-5
Velasco, C., & Obrist, M. (2020). Multisensory experiences: Where the senses meet technology. Oxford University Press. https://books.google.com/books?id=Of7-DwAAQBAJ

Vidal, N. R., & Vidal, R. A. (2010). Augmented reality systems for weed economic thresholds applications. Planta Daninha, 28, 449–454. https://www.scielo.br/j/pd/a/mmYhK5Py984tNtz5bSnqRxQ/

Vignali, G., Bertolini, M., Bottani, E., Di Donato, L., Ferraro, A., & Longo, F. (2018). Design and testing of an augmented reality solution to enhance operator safety in the food industry. International Journal of Food Engineering, 14(2), 1–16. https://doi.org/10.1515/ijfe-2017-0122

Wang, Q. J., Barbosa Escobar, F., Alves Da Mota, P., & Velasco, C. (2021). Getting started with virtual reality for sensory and consumer science: Current practices and future perspectives. Food Research International, 145, Article 110410. https://doi.org/10.1016/j.foodres.2021.110410

Wang, Q. J., & Spence, C. (2015). Assessing the influence of the multisensory atmosphere on the taste of vodka. Beverages, 1(3), 204–217. https://doi.org/10.3390/beverages1030204

Wang, Q. J., Mielby, L. A., Thybo, A. K., Bertelsen, A. S., Kidmose, U., Spence, C., & Byrne, D. V. (2019). Sweeter together? Assessing the combined influence of product-related and contextual factors on perceived sweetness of fruit beverages. Journal of Sensory Studies, 34(3), 1–11. https://doi.org/10.1111/joss.12492

Wheeler, G., Shujie, D., Nicolas, T., Pushparajah, K., Schnabel, J. A., Simpson, J. M., & Gomez, A. (2018). Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Healthcare Technology Letters, 5(5), 148–153. https://ieeexplore.ieee.org/document/8527762/

Xi, M., Adcock, M., & McCollouch, J. (2019). An end-to-end augmented reality solution to support aquaculture farmers with data collection, storage, and analysis. In Proceedings - VRCAI 2019: 17th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry. https://doi.org/10.1145/3359997.3365721

Xu, J.-L., Riccioli, C., & Sun, D.-W. (2015). An overview on nondestructive spectroscopic techniques for lipid and lipid oxidation analysis in fish and fish products. Comprehensive Reviews in Food Science and Food Safety, 14(4). https://doi.org/10.1111/1541-4337.12138

Xu, J.-L., Riccioli, C., & Sun, D.-W. (2016). Development of an alternative technique for rapid and accurate determination of fish caloric density based on hyperspectral imaging. Journal of Food Engineering, 190. https://doi.org/10.1016/j.jfoodeng.2016.06.007

Yang, Y., Jia, W., Bucher, T., Zhang, H., & Sun, M. (2019). Image-based food portion size estimation using a smartphone without a fiducial marker. Public Health Nutrition, 22(7), 1180–1192. https://doi.org/10.1017/S136898001800054X

Yim, M. Y. C., & Yoo, C. Y. (2020). Are digital menus really better than traditional menus? The mediating role of consumption visions and menu enjoyment. Journal of Interactive Marketing, 50, 65–80. https://doi.org/10.1016/j.intmar.2020.01.001

Zhang, Y., Yue, P., Zhang, G., Guan, T., Lv, M., & Zhong, D. (2019). Augmented reality mapping of rock mass discontinuities and rockfall susceptibility based on unmanned aerial vehicle photogrammetry. Remote Sensing, 11(11). https://doi.org/10.3390/rs11111311

Zhou, L., Zhang, C., Liu, F., Qiu, Z., & He, Y. (2019). Application of deep learning in food: A review. Comprehensive Reviews in Food Science and Food Safety, 18(6), 1793–1811. https://doi.org/10.1111/1541-4337.12492