

Networking Architecture and Key Technologies for
Human Digital Twin in Personalized Healthcare:
A Comprehensive Survey
Jiayuan Chen, Changyan Yi, Member, IEEE, Samuel D. Okegbile, Member, IEEE, Jun Cai, Senior Member, IEEE,
and Xuemin (Sherman) Shen, Fellow, IEEE
Abstract—Digital twin (DT), a promising technique for digitally and accurately representing actual physical entities, has attracted explosive interest from both academia and industry. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict its future behavior, and refine its control and optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For its adoption in human-centric systems, a novel concept, called human digital twin (HDT), has thus been proposed. Particularly, HDT allows an in silico representation of an individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolutions. These capabilities prompt the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery and rehabilitation, and hence significantly alleviate the heavy burden on the traditional healthcare system. Despite this large potential, however, HDT faces substantial research challenges in different aspects, and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and the conventional DT, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, comprising the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision-making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.
Index Terms—Human digital twin, personalized healthcare, artificial intelligence, reinforcement learning, federated learning, networking architecture, life-cycle data management, pervasive sensing, on-body communications, tactile Internet, semantic communications, multi-access edge computing, edge-cloud collaboration, blockchain, Metaverse.
J. Chen and C. Yi are with the College of Computer Science and Technol-
ogy, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu,
211106, China. (E-mail: {jiayuan.chen, changyan.yi}
S. D. Okegbile and J. Cai are with the Network Intelligence and Inno-
vation Laboratory (NI2Lab), Department of Electrical and Computer En-
gineering, Concordia University, Montreal QC H3G 1M8, Canada. (Email:
{samuel.okegbile, jun.cai}
X. Shen is with the Department of Electrical and Computer Engineer-
ing, University of Waterloo, Waterloo, ON N2L 3G1, Canada. (Email:
(Corresponding authors: Changyan Yi and Jun Cai.)
A. Background and Motivation
THE inherent shortage of healthcare resources has posed ongoing challenges to the traditional healthcare system. For example, the COVID-19 pandemic, since January 2020, has resulted in surges of demand for medical facilities, such as ventilators, extracorporeal membrane oxygenation (ECMO) and testing machines, exceeding the capacity of the traditional system and leaving tens of millions of people infected or dead [1]. Meanwhile, the cost burden on the traditional healthcare system is rapidly growing in most regions worldwide. For instance, the Centers for Medicare & Medicaid Services predicts that U.S. healthcare spending will grow at a rate 1.1% faster than the annual gross domestic product (GDP) and is expected to increase from 17.7% of GDP in 2018 to 19.7% (reaching $6.2 trillion) by 2028 [2]. Despite healthcare expenditures being projected to increase at such a substantial rate, the traditional healthcare system produces no better (and indeed sometimes worse) outcomes. A recent analysis estimated that about one-quarter of total healthcare spending in the U.S. (between $760 billion and $935 billion annually) is wasteful, mainly attributable to ineffective and inefficient treatments [3].
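As a quick sanity check on these figures, compounding the 17.7% GDP share of 2018 by the cited 1.1% annual excess growth for ten years indeed lands near the projected 19.7%. The sketch below is a back-of-the-envelope verification only, not CMS's actual projection model:

```python
# Back-of-the-envelope check of the CMS projection cited above: if
# healthcare spending grows ~1.1% faster than GDP each year, its GDP
# share compounds by roughly (1 + 0.011) annually.
def project_share(initial_share: float, excess_growth: float, years: int) -> float:
    """Project healthcare's share of GDP under a constant excess growth rate."""
    return initial_share * (1.0 + excess_growth) ** years

# 17.7% of GDP in 2018, projected forward 10 years to 2028.
share_2028 = project_share(0.177, 0.011, 10)
```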
To this end, the relentless proliferation of disruptive information technologies, such as 5G, big data and artificial intelligence (AI), has opened rich opportunities for the realization of highly efficient personalized healthcare (PH) services. PH is an innovative approach that seeks to offer preventive care and targeted treatments for each individual patient based on his/her unique medical records, genes and values. In other words, PH can provide a more precise, one-size-fits-one treatment approach, eliminating the unnecessary side effects and high cost of the one-size-fits-all approach widely adopted in the traditional healthcare system. Furthermore, given individualized data-driven information, PH can also help medical personnel make better decisions and facilitate breakthroughs for many difficult-to-treat rare diseases, thereby improving the quality of life of people around the world [4].
Unsurprisingly, AI plays a prominent role in the implementation of PH. As one of the most popular research hotspots, AI has penetrated all aspects of people's lives thanks to its strong capability in data analysis. AI can fuel the digitization and intelligentization of all industries, particularly healthcare, where it emulates human cognition in the analysis of complicated medical data using complex algorithms and software. To guarantee superior PH services, AI requires the accurate learning and construction of many high-performance and personalized feature models for individuals, based on massive high-quality individual training datasets [5]. Nevertheless, this process of acquiring datasets from a single person (particularly for supervised learning, which requires manually labeling these datasets) is significantly costly, error-prone and time-consuming. Furthermore, existing AI-driven efforts mostly provide solutions through regression-based and classification-based approaches for predicting real-valued and discrete-valued attributes, respectively; such solutions are therefore limited in scope to specific diseases, diagnoses or a small subset of the population [6].
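To make the distinction concrete, the toy sketch below contrasts the two approaches on synthetic data: a least-squares regressor predicts a real-valued attribute (a hypothetical blood-pressure-vs-age relation), while a simple threshold on its output yields a discrete-valued label. The features, coefficients and thresholds here are illustrative assumptions, not clinical models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-individual dataset: a single feature (age) plus a bias term.
age = rng.uniform(20, 80, size=200)
X = np.column_stack([age, np.ones_like(age)])

# Regression-based: predict a real-valued attribute (here a made-up
# systolic blood pressure that rises linearly with age, plus noise).
bp = 0.5 * age + 90.0 + rng.normal(0.0, 5.0, size=200)
coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
bp_pred = X @ coef

# Classification-based: predict a discrete-valued attribute (a made-up
# hypertension label) via a simple threshold on the regression output.
label_pred = (bp_pred >= 130.0).astype(int)
```

Even this trivial pair of models needs 200 labeled samples from one person, which illustrates why per-individual data acquisition is the bottleneck noted above.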
To overcome the limitations of current AI-driven solutions for PH, digital twin (DT) is envisioned as a promising paradigm to adopt. Specifically, DT can be seen as an exceptionally vivid testbed for replicating the condition, function and operation of each individual, with the ability to run an unlimited number of virtual “what if” simulations without harming the human body. With this feature, AI models integrated into DT could be trained much more efficiently with massive and diverse individual synthetic datasets generated by the DT (besides those collected from the physical world). In turn, DT can utilize a myriad of high-performance and coupled AI models to help capture a more comprehensive and high-fidelity abstraction of the extremely complex human body system. Additionally, since the conditions of the human body system are ever-changing (e.g., with aging, behavioral and environmental changes), AI models integrated into DT can be dynamically validated and updated to precisely abstract the real status of the human body system, thus becoming much more self-adaptive.
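The interplay described above can be sketched in a few lines: a toy virtual-twin model synthesizes cheap training data for AI models, and is itself nudged toward fresh real-world readings so that it tracks the ever-changing body. The heart-rate model and its update rule are purely illustrative assumptions:

```python
import random

class VirtualTwinHR:
    """Toy virtual-twin model of resting heart rate (illustrative only)."""

    def __init__(self, baseline: float):
        self.baseline = baseline

    def synthesize(self, n: int, noise: float = 2.0) -> list:
        # "What if" simulation: generate synthetic readings to train AI
        # models cheaply, without touching the physical person.
        return [self.baseline + random.gauss(0.0, noise) for _ in range(n)]

    def update(self, real_readings: list) -> None:
        # Dynamic validation/update: nudge the model toward fresh
        # real-world data so the twin stays synchronized with the body.
        avg = sum(real_readings) / len(real_readings)
        self.baseline = 0.9 * self.baseline + 0.1 * avg

random.seed(1)
twin = VirtualTwinHR(baseline=70.0)
synthetic = twin.synthesize(100)   # cheap training data for AI models
twin.update([75.0] * 10)           # the body changed; the twin adapts
```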
B. Demystifying Human Digital Twin
The concept of DT was proposed in 2003 and referred
to the digital replica of a physical entity [7]. DT is the
convergence of several cutting-edge technologies, such as
big data and AI, 5G/6G and Internet of things (IoT), data
visualization and extended reality (XR)1, communication and
computation technologies, blockchain and cybersecurity. DT
is at the forefront of the Industry 4.0 revolution, and is being
widely implemented in diverse areas, e.g., manufacturing [9],
city transportation [10], smart construction [11], and smart
wireless systems [12]. These applications are mainly related
to non-living physical entities. When adopted in human-
centric applications and systems, a new concept, called human
digital twin (HDT), has emerged, which allows an in silico
representation of any individual with the ability to dynamically
reflect molecular status, physiological status, emotional and
psychological status, as well as lifestyle evolutions [13].
HDT is expected to play an essential role in shaping the
future of healthcare systems. It is, therefore, unsurprising that
HDT is now receiving wide attention in many healthcare
1XR is defined as an umbrella term that encompasses virtual reality (VR),
augmented reality (AR), mixed reality (MR) [8].
industries owing to its capabilities to improve PH. To mention
a few, we briefly discuss some benefits of HDT.
• HDT can facilitate continuous monitoring of individual health status with the ability to predict viral infections and possible corresponding immune responses, thus allowing rapid and proactive interventions.
• HDT can improve the efficiency of clinical trials by ensuring that trials are carried out on virtual replicas of human beings, thus promoting an efficient pharmaceutical industry procedure while supporting safer drugs and accelerating the vaccine development process.
• HDT can facilitate the selection of therapeutic plans by recommending safe and patient-specific medical therapies based on the unique genetic profile of each individual. With this, HDT can prevent harmful side effects and improve medical outcomes while saving cost.
• Through the use of current biomarkers, HDT can facilitate early identification of genomic and epigenomic events in disease progression, such as carcinogenesis, thus allowing earlier detection of illness.
• HDT is capable of predicting the future health condition of each individual, thereby allowing proper activation of efficient preventive measures.
• HDT can reduce health inequalities through telemedicine, as patients can remotely access PH irrespective of their geographic locations.
• HDT can improve the precision of diagnosis. With HDT, digitized sensations such as pain and anxiety are converted into a form that can be observed by medical personnel.
While HDT development is still in its infancy, many med-
tech giants, such as Siemens, Philips and IBM, are currently
exploring the possibility of facilitating the commercialization
of HDT by relying on their massive databases and strong
financial strengths. In Table I, we provide a glance at the current industrial progress on HDT.
C. Related Work
As an underlay of HDT, DT continues to attract considerable attention. In [23], Barricelli et al. provided the state-of-the-art
definitions of DT including its fundamental characteristics as
well as its common application areas. The survey carried out
in [24] focused on introducing the concept of DT in wireless
systems for addressing issues such as security, privacy and air
interface design. In [25], the authors reviewed the framework
of DT in the view of industrial IoT applications.
Following [23]–[25], subsequent efforts delved into more
comprehensive and specific enabling technologies for DTs.
For example, Rasheed et al. in [26] enumerated the com-
mon challenges in DTs and surveyed the corresponding en-
abling technologies, such as digital platforms, cryptography,
blockchain, big data, data privacy and security, data compres-
sion, 5G and IoT for real-time communication. Alcaraz et al.
in [27] abstracted the architecture of DT into four layers,
i.e., data dissemination and acquisition, data management
and synchronization, data modelling and additional services,
data visualization and accessibility, and further explored the
enabling technologies that can be used to achieve each of
TABLE I
A GLANCE AT CURRENT INDUSTRIAL PROGRESS ON HDT (Company/Project: Product/Service)

Siemens Healthineers [14]: The 3D Digital Heart Twin solution is capable of making definitive and timely diagnoses while facilitating simulations of surgical procedures and treatment trials. It is one of the first full-fledged ward management DTs.
Philips [15]: Heart Model is a personalized heart DT.
European CompBioMed project [16]: The project focuses on the use and development of computational methods for biomedical applications, including the virtual human.
The SIMULIA Living Heart project [17]: Released by Dassault Systèmes, the SIMULIA Living Heart DT model translates electrical impulses into mechanical contractions and is the first digital organ representation that possesses the same functionality as the physical organ.
IBM [18]: The IBM DT simulates the body's biochemical processes through AI techniques to detect cancerous cells in any previously obtained health data.
Digitwins [19]: Digitwins aims to develop a revolutionary approach for healthcare systems through detailed modelling processes to facilitate simulations of numerous treatments without subjecting patients to any form of harm.
General Electric [20]: An application called “Predix” was developed by General Electric in 2016 to create DTs for patients; “Predix” is responsible for running data analytics and monitoring.
Sim & Cure [21]: Sim & Cure developed a DT that can assist surgeons in choosing appropriate endovascular implants that can optimize aneurysm repair.
Swedish DT Consortium (SDTC) [4], [22]: The SDTC strategy for PH is based on: i) constructing multiple HDT copies of any single patient; ii) computationally treating each of these HDTs with different medications to identify the best-performing medication; and iii) treating the patient with this best-performing medication.
these architectural layers. Khan et al. [28] provided a brief
overview of key technologies for Industry 4.0, including
IoT, big data, AI, and DT (which is the confluence of the
technologies mentioned above). Additionally, this work briefly
introduced tools for the construction of DT, such as tools for
DT modelling and tools for data management in DT.
However, these surveys mainly focused on the implementation of DT in industries, which we hereafter call the conventional DT. The design requirements of HDT and the
conventional DT are significantly different in many aspects,
especially when considering HDT from the PH perspective.
Hence, the enabling technologies of the conventional DT
described in the existing surveys may not be directly adopted
in HDT. Several recent studies have started to discuss en-
abling technologies for HDT solutions in PH applications.
For instance, Ferdousi et al. in [29] presented a very high-level overview of HDT design requirements while highlighting
the differences between HDT and conventional DTs. The
authors briefly discussed some of the underlying technologies
used in the development of a HDT. They provided a use-
case scenario where a DT of a patient with a mental issue
was created to monitor and predict stress levels. Similarly,
El Saddik et al. [30] elaborated on the architecture (con-
sisting of data source, AI-inference engine and multimodal
interaction (MMI)) and design requirements of HDT. The
authors carried out an in-depth investigation of five cutting-edge technologies to support the proposed HDT in [31].
These five technologies include big data technology with
huge amount of data collected using IoT devices and social
networks, AI algorithms to extract information, cybersecurity
technology for securing the collected personal data, MMI
technology for interfacing the real and virtual twin, and quality
of experience (QoE) based communications for providing high
performing networks. Okegbile et al. in [32] provided an
insight into HDT for PH services. The authors presented
architectural frameworks as well as key design requirements
(including model conceptualization, data representation model,
scalable AI-driven analytics, model scalability and reliability,
and security) of HDT and investigated the key technologies
(such as connectivity, data collection, data processing, digital
modelling, AI solutions for decision making and cloud-edge
computing for storage and computation) with various chal-
lenges to suggest future research directions. Lin et al. [33]
conducted an extensive literature review on HDT, analyzing
enabling technologies and establishing typical frameworks.
Particularly, they focused on the sensing/perception technology
and two modelling technologies of human body or organ and
human behavior.
Although the aforementioned surveys and magazine articles have discussed various aspects of HDT and the conventional DT (as summarized in Table II), none of them offered a networking architecture enabling HDT in PH applications or delved into the key technologies supporting such an architecture. However, the practical implementation and operation of HDT obviously rely heavily on its networking architecture. This motivates us to compose this survey, which
particularly discusses the networking architecture of HDT
in PH applications. Through surveying the key technologies
enabling the networking architecture, our survey provides
critical insights and useful guidelines for the readers to better
understand how to realize HDT for PH applications and
discover the open issues on this topic.
D. Contributions
We carry out a comprehensive survey on the networking
architecture and key technologies for HDT in PH applications.
Our survey can provide not only critical insights and useful
guidelines for readers to have a better understanding of the net-
working architecture of HDT for PH applications, but also the
key technologies for enabling such networking architecture.
The contributions of this survey, which together present a bold, forward-looking vision of HDT, are summarized as follows:
TABLE II
COMPARISON WITH RELATED SURVEYS (✓ = covered, ✗ = not covered)

Reference | Targeted | Networking Architecture | Data Acquisition | Communication | Computation | Data Management | Data Analysis
Khan et al. [24] 2022 | DT | ✗ | ✗ | ✓ | ✓ | ✓ | ✗
Tao et al. [25] 2019 | DT | ✗ | ✗ | ✗ | ✗ | ✓ | ✗
Rasheed et al. [26] 2020 | DT | ✗ | ✓ | ✓ | ✓ | ✓ | ✓
Alcaraz et al. [27] 2022 | DT | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Khan et al. [28] 2022 | DT | ✗ | ✓ | ✓ | ✓ | ✓ | ✓
Lin et al. [33] 2023 | HDT | ✗ | ✓ | ✗ | ✗ | ✗ | ✗
This work | HDT | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
• We start this survey by conducting a comprehensive exploration of HDT. We first identify the differences between HDT and the conventional DT. Then, the universal framework of HDT is summarized from the existing literature. Based on this, the essence of HDT, i.e., a hyper-realistic and hyper-intelligent testbed, is presented. Following this, the design requirements and challenges associated with deploying HDT in PH applications are discussed. To realize HDT in PH applications, a networking architecture is imperative, and thereby we provide an overview of a five-layered networking architecture of HDT and its enabling key technologies. Through all these, readers can gain an in-depth view of this topic.
• Since it is widely recognized that HDT in PH applications requires sophisticated and high-quality data from multiple sources, we survey the enabling technologies in the data acquisition layer of HDT, which include wearable and implantable biomedical devices, social network sensing and electronic health records. Our survey in this section aims to provide general readers with a holistic view of the data sources of HDT, and to stimulate innovative research in acquiring more comprehensive, fine-grained and accurate data for HDT.
• We identify that communication in HDT can be categorized into two tiers. The first tier is on-body communication, where the surveyed technologies include Bluetooth low energy, ZigBee and molecular communication, allowing resource-constrained data acquisition equipment to transmit data around the human body. The second tier is beyond-body communication, which typically transmits massive and multimodal data between the physical and digital spaces, requiring ultra-high reliability, ultra-low latency and ultra-low round-trip times. We then survey key communication solutions that will be fundamental to realizing these characteristics, focusing particularly on next-generation communication, such as the tactile Internet and semantic communication. By this, readers are able to understand how existing and future communication systems can potentially contribute to HDT.
• We discuss key computation challenges in executing time-sensitive and computation-intensive HDT tasks on resource-constrained devices. We survey solutions from two classes of existing works: the first leverages the multi-access edge computing paradigm, and the second studies the edge-cloud collaboration paradigm. Our survey in this section aims to show readers how HDT in PH applications can be realized ubiquitously and in a timely manner.
• We identify the design requirements and desirable characteristics of the data management layer in HDT. Then, we survey data management for HDT from three perspectives, namely, data pre-processing, data storage, and data security and privacy. Our survey in this section can help readers thoroughly learn the data management procedure of HDT, and stimulate researchers to develop more powerful data management schemes that satisfy the stringent design requirements of HDT.
• We explore how AI can aid data analysis and decision making for HDT in PH applications. We expound on how AI can support HDT in personalized diagnosis, personalized prescription, personalized surgery and personalized rehabilitation. Based on this, readers can understand what fuels the intelligence of HDT, and may be motivated to conduct more out-of-the-box research on data analysis and decision making for HDT.
• We outline several research directions to encourage future studies in this area. Our survey can serve as an initial step toward a holistic and insightful investigation of the networking architecture and key enabling technologies for HDT in PH applications, helping researchers quickly grasp this area.
The structure of this survey is visualized as shown in Fig.
1. For convenience, Table III lists all common abbreviations.
A. Differences Between HDT and Conventional DT
HDT focuses on virtual replicas of human beings and pos-
sess unique characteristics compared to the conventional DT
(which is mostly applied in industries where physical entities
are usually machines). First, human beings are living entities,
Structure of the Survey:
I. Introduction
II. A Comprehensive Exploration of HDT: A. Differences Between HDT and Conventional DT; B. The Universal Framework of HDT; C. HDT: A Hyper-Realistic and Hyper-Intelligent Testbed; D. Design Requirements and Challenges; E. Overview of Networking Architecture and Key Technologies for HDT
III. Key Technologies for Data Acquisition Layer: A. Pervasive Sensing; B. Electronic Health Record; C. Lessons Learned
IV. Key Technologies for Communication Layer: A. On-Body Communication; B. Beyond-Body Communication; C. Lessons Learned
V. Key Technologies for Computation Layer: A. Multi-Access Edge Computing; B. Edge-Cloud Collaboration; C. Lessons Learned
VI. Key Technologies for Data Management Layer: A. Data Pre-Processing; B. Data Storage; C. Data Security and Privacy; D. Lessons Learned
VII. Key Technologies for Data Analysis and Decision Making Layer: A. Diagnosis; B. Prescription; C. Surgery; D. Rehabilitation; E. Lessons Learned
VIII. Future Research: A. Federated HDT in the Cloud/Edge Network; B. Mobile HDT; C. Intelligent Blockchain for HDT; D. Green HDT; E. Full-Fledged Explainable AI for HDT; F. Generalized AI for HDT; G. Metaverse
IX. Conclusion
Fig. 1. The organization of this survey.
and the most significant difference between human beings
and machines is emotion and psychology [29], [33]. External
factors or physiological states can affect individual emotions
and psychology, which in turn could affect individual physio-
logical states. Second, human’s external behaviours depend on
individual subjective consciousness, while internal behaviours
(e.g., blood flow, the progress of diseases, activities of organs
and tissues) are known to generally result from multi-source
and complex factors. On the contrary, the behavioural rules of
machines are similar and predetermined. As a result, humans
are particularly complicated systems with more uncertainty
compared to machines, and the abstract process of humans is
significantly more difficult than machines [29], [33].
On top of this, ethical considerations are another unique feature of HDT [29], [33], [34]. For example, HDT may potentially lead to healthcare inequality between developed and developing countries. Additionally, since HDT can reflect a patient's real-time and accurate health status, whether the patient should have the authority to access this actual health status upon being diagnosed with a certain disorder needs to be considered at the ethical level.
Furthermore, human entities’ data are more heterogeneous
and unstructured. As a result, multi-source and sophisticated
data are required to provide a high-fidelity digital representation model of any human entity. Aside from physiological data, the commonly unstructured environmental and social media data are important when abstracting human virtual twins because of the high correlation that exists between humans and such external data [29], [33]. Lastly, unlike machines, humans are mobile agents. This human mobility poses several challenges to the design of HDT, making the migration and placement problems of HDT important issues to address [32].

TABLE III
COMMON ABBREVIATIONS (Abbreviation: Full Form)

DT: Digital twin
HDT: Human digital twin
PT: Physical twin
VT: Virtual twin
PH: Personalized healthcare
EHR: Electronic health record
IoT: Internet of things
IoMT: Internet of medical things
ACC: Accelerometer
ECG: Electrocardiogram
EEG: Electroencephalography
PPG: Photoplethysmography
EMG: Electromyography
CT: Computed tomography
MRI: Magnetic resonance imaging
WBAN: Wireless body area network
BLE: Bluetooth low energy
MC: Molecular communication
TI: Tactile Internet
ML: Machine learning
VR: Virtual reality
AR: Augmented reality
MR: Mixed reality
XR: Extended reality
MEC: Multi-access edge computing
AI: Artificial intelligence
DL: Deep learning
FL: Federated learning
SVM: Support vector machine
CNN: Convolutional neural network
DNN: Deep neural network
GAN: Generative adversarial network
GNN: Graph neural network
KNN: k-nearest neighbors
RL: Reinforcement learning
DRL: Deep reinforcement learning
LSTM: Long short-term memory
The distinctions between HDT and the conventional DT are
summarized in Table IV.
B. The Universal Framework of HDT
HDT is capable of transforming the current healthcare sec-
tor. As shown in Fig. 2, we summarize the universal framework
of HDT from [29], [32], [33], [35]–[37], which consists of
six fundamental components, i.e., data acquisition; digital
modelling and virtualization; communication; computation;
data management; and data analysis and decision making.
1) Data Acquisition: Since HDT is a data-driven model, a reliable data acquisition process is vital. HDT requires both physiological and psychological data of each physical twin (PT), possibly acquired through multiple sources, to establish a high-fidelity digital representation, namely the virtual twin (VT). Specifically, these data include medical data from medical institutions, such as electronic health records (EHRs) containing biomedical examinations and medical images; physiological data (e.g., heart rate, blood pressure and the concentration of biomarkers) obtained through smart personal medical devices; and data from users' social media, such as posts, comments or messages on Facebook, Instagram and Twitter, which can be used to estimate emotions and psychological states.
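As a minimal illustration, the multi-source data described above might be fused into a per-PT record along the following lines. This is a hypothetical schema sketch, not a standard HDT data format; all field names and values are made up:

```python
from dataclasses import dataclass, field

# Hypothetical schema sketch (not a standard HDT format): one PT
# snapshot fusing the three source types named above.
@dataclass
class TwinSnapshot:
    patient_id: str
    ehr: dict = field(default_factory=dict)            # e.g., lab results, imaging refs
    physiological: dict = field(default_factory=dict)  # e.g., device readings
    social_signals: dict = field(default_factory=dict) # e.g., inferred mood scores

snap = TwinSnapshot(
    patient_id="pt-001",
    ehr={"hba1c": 5.9},
    physiological={"heart_rate": 72, "systolic_bp": 118},
    social_signals={"estimated_stress": 0.3},
)
```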
2) Digital Modelling and Virtualization: In HDT, a VT,
built on in silico, is a virtual replica of its corresponding PT
located in the physical environment and co-evolutes with such
a PT via reliable connections. By adopting various digitization
technologies, physical geometries, properties, behaviours and
rules of each PT are digitized holistically to create high-fidelity
VT. Such VT depends on real-world data from the physical
world to formulate human real-time status. After the digital
modelling process, authorized users such as PT, caregivers,
relatives and medical personnel can access the VT through
interaction technologies, such as tangible XR and hologram,
to have immersive interactive experiences with VT.
3) Communication and Computation: Communication and computation are significant for HDT. Communication schemes facilitate real-time connectivity among PTs and VTs to ensure synchronization; these connectivity schemes include PT-VT, VT-PT, PT-PT and VT-VT connectivity modelling. Similarly, computation schemes are required in HDT for the execution of various tasks, where data must be properly extracted, processed, securely transmitted and executed through AI-driven techniques.
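A minimal sketch of the PT-to-VT synchronization such communication schemes must support is shown below; the timestamp-based update rule is an illustrative assumption, not a prescribed HDT protocol:

```python
# Timestamped updates stream from the physical twin; the virtual twin
# applies only messages newer than its current state, discarding stale
# or out-of-order ones so that PT and VT stay synchronized.
class VirtualTwinState:
    def __init__(self):
        self.state = {}
        self.last_ts = {}

    def apply(self, ts: float, key: str, value) -> bool:
        if ts <= self.last_ts.get(key, float("-inf")):
            return False  # stale update, ignore
        self.state[key] = value
        self.last_ts[key] = ts
        return True

vt = VirtualTwinState()
vt.apply(1.0, "heart_rate", 70)
vt.apply(0.5, "heart_rate", 68)  # arrived late, ignored
vt.apply(2.0, "heart_rate", 74)
```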
4) Data Management: Since data are obtained through multiple sources including related PTs and VTs, the size of data in HDT is massive. These data are generally heterogeneous, multi-scale, multi-source and noisy. Therefore, HDT requires efficient and effective data management frameworks to ensure the construction and evolution of VTs.
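As a toy example of what such a framework must do at its lowest level, the sketch below imputes missing readings and min-max normalizes a noisy series; real HDT pipelines are of course far more sophisticated:

```python
# Impute missing readings with the series mean, then min-max normalize
# so multi-source series become comparable (illustrative only).
def clean_and_normalize(readings: list) -> list:
    present = [r for r in readings if r is not None]
    mean = sum(present) / len(present)
    filled = [r if r is not None else mean for r in readings]
    lo, hi = min(filled), max(filled)
    if hi == lo:
        return [0.0] * len(filled)
    return [(r - lo) / (hi - lo) for r in filled]

# A heart-rate series with a dropped sample and a sensor glitch (140).
normalized = clean_and_normalize([72, None, 80, 68, 140])
```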
5) Data Analysis and Decision Making: This component
enables HDT to provide reliable data-driven analytics, with
the ability to accurately extract underlying information and
knowledge from any received massive data, thereby enhancing
PH services.
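A trivial stand-in for such analytics is sketched below: readings that deviate strongly from a PT's personal baseline are flagged for attention. The statistics and threshold are illustrative assumptions, not the AI models surveyed later:

```python
import statistics

# Flag readings that deviate strongly from a PT's personal baseline; a
# toy stand-in for the richer AI-driven analytics surveyed later.
def flag_anomalies(readings: list, threshold: float = 2.0) -> list:
    mu = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Five normal heart-rate readings followed by a suspicious spike.
alerts = flag_anomalies([70, 72, 71, 69, 73, 110])
```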
C. HDT: A Hyper-Realistic and Hyper-Intelligent Testbed
1) Profoundly Immersive Experience: In HDT, a high-
fidelity VT is built using powerful digital modelling and
virtualization techniques with real-time information collected
from the corresponding PT. Based on this, users accessing
a VT with immersive equipment can obtain a profoundly
immersive experience. For example, doctors can utilize the VT
of a patient to plan a personalized surgical procedure
before the actual surgery. In this scenario, doctors with VR
and tactile equipment, among others, interact with the VT,
and all the human sensations, such as haptics (e.g., the sense of
touching skin) and vision (e.g., the flow of blood), can be fed back
vividly and promptly to the doctors' equipment.
2) Interaction-Driven Optimization: HDT is expected to be
a hyper-intelligent human body testbed for optimizing physical
fitness. For example, a doctor could use HDT to prescribe
medication by testing various potential prescriptions on the
patient’s VT, which would save costs and result in a more
TABLE: Key feature differences between conventional DT and HDT.

Emotion and Psychology. Conventional DT: lacks emotions and psychology. HDT: human beings are living entities, so their emotions and psychology can be affected by external factors or physiological states.

Behavioural Rules. Conventional DT: the behavioural rules of similar machines are almost predetermined and similar. HDT: human external behaviours depend on individual subjective consciousness and internal behaviours, which are affected by multi-source and complex factors.

Ethical Consideration. Conventional DT: limited or no ethical considerations. HDT: HDT can lead to healthcare inequality between developed and developing countries; besides, it is not clear whether patients should be authorized to access some information about their own health conditions.

Data Complexity. Conventional DT: mostly structured and homogeneous. HDT: mostly heterogeneous and unstructured; correlation also exists between each individual and external data such as environmental and social media data.

Mobility. Conventional DT: mostly fixed with no or limited mobility. HDT: highly mobile with very complicated mobility patterns.
Fig. 2. The universal framework of HDT.
personalized prescription. The virtual prognosis, accelerated by
powerful computing, will produce results much more quickly,
and potentially more accurately, than trials in the physical world.
The results are fed back to the doctors, who then
refine the prescription based on this feedback until an
optimal prescription is achieved. In addition, the VT, with real-
time information collected from its corresponding PT, can
analyze or even predict the PT's health status and provide
timely healthcare recommendations for the PT.
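The refine-until-optimal loop described above can be sketched as a search over candidate doses against a toy VT response model; the dose-response function and all numbers here are illustrative assumptions, not real pharmacology:

```python
def vt_response(dose_mg):
    """Toy efficacy model standing in for a patient's VT: benefit rises
    with dose, side effects rise past a threshold (numbers invented)."""
    benefit = min(dose_mg / 50.0, 1.0)
    side_effect = max(0.0, (dose_mg - 60.0) / 40.0)
    return benefit - side_effect

def optimize_prescription(candidates):
    """Test each candidate dose on the VT and keep the best response,
    mimicking the doctor's test-feedback-refine loop."""
    return max(candidates, key=vt_response)
```

A real HDT would replace `vt_response` with a calibrated, patient-specific model, but the feedback loop structure would be the same.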
Overall, as one of the major features of HDT, feedback
commonly carries massive multimodal information, whether
in profoundly immersive experience or interaction-driven op-
timization scenarios. Achieving an ultra-low round-trip time
between the physical and digital spaces requires highly effi-
cient network resource optimization. These requirements are
discussed in detail in the next subsection.
D. Design Requirements and Challenges
It is worth noting that HDT is a far more complex system
than conventional DTs, with many correlated components.
Especially when considering specific use cases of HDT in
various PH applications, such as real-time healthcare moni-
toring, personalized diagnosis and personalized prescription,
it is obvious that such application scenarios pose a number
of stringent design requirements and challenges for HDT, as
discussed below.
1) Sophisticated and High-Quality Data: Data are essential
for HDT. The data for HDT should be sophisticated, meaning
large-scale, real-time, multi-source, multi-modal and possessing
deep value. Specifically, HDT needs massive data gathered
from multiple sources to build a high-fidelity VT of a human.
The multi-source and multi-modal data involve not only human
data but also environmental data, which mutually support,
supplement and correct each other to provide more accurate
information for the construction of HDT and to satisfy
different HDT requirements.
Based on this, the VT in the digital space needs real-time
data to update itself in a timely manner and stay synchronized
with the PT. Besides, HDT needs data with deep value, from
which it can gain deep insights for providing accurate and
forward-looking feedback. However, missing or inaccurate
data pose a serious risk to the evolution of HDT and can
lead to misleading information and suggestions from VTs.
Such scenarios are undesirable in HDT systems, since the
outcome can be disastrous, undermining the essence of HDT.
Therefore, to ensure that each VT is an accurate replica of its
counterpart PT, high-quality and noise-free data must also be
shared effectively within each PT-VT pair for appropriate
model evolution and subsequent decision making.
Although some routine healthcare data (such as step counts,
heart rate and body temperature) and medical data (e.g.,
medical images) can be easily captured by common sensing
devices, such as accelerometers, gyroscopes and pressure
sensors, or through manual processes, many physical states
remain difficult to capture. For instance, irritation measured
through skin rubbing counts and polyphagia measured through
food intake counts, which are two essential health insights for
predicting the risk of diabetes [38], are hard to capture
directly. Therefore, specific biomedical sensors must be
designed to retrieve this information. Additionally, since
different data in HDT are inherently collected at different
frequencies and from multiple sources, asynchronous data
acquisition and multimodal data fusion must be addressed
when establishing HDT.
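One simple way to handle such asynchronous, multi-rate streams is to resample every modality onto a common timeline with a zero-order hold; the sketch below assumes each source provides timestamped (time, value) pairs:

```python
def zero_order_hold(streams, step_s, horizon_s):
    """Fuse asynchronously sampled streams onto one shared timeline by
    holding each modality's most recent reading (a simple alignment
    strategy; real systems may interpolate or learn the fusion)."""
    n = int(horizon_s / step_s)
    fused = []
    for i in range(n + 1):
        t = i * step_s
        row = {}
        for name, samples in streams.items():
            past = [v for ts, v in samples if ts <= t]
            row[name] = past[-1] if past else None  # None until first reading
        fused.append((t, row))
    return fused
```

For instance, a 1 Hz heart-rate stream and a much slower temperature stream would both appear in every fused row, each carrying its latest known value.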
2) Extreme Ultra-Reliable and Low-Latency Communica-
tion (xURLLC): PTs and VTs will generate and exchange
high-volume and multidimensional data to maintain synchro-
nization. This synchronization is expected to be supported
by xURLLC, with a data transmission rate of 100 Gbps,
reliability of 99.99999% and latency of 1 ms [39].
Specifically, when adopting HDT in time-critical healthcare
applications, e.g., remote surgery [40], xURLLC is required
to support real-time updates of the VT as well as timely
reception of feedback from the VT to facilitate timely
decision optimization at the paired PT. Furthermore, impor-
tant interaction technologies such as tangible XR [41] (a
technology that combines XR and the tactile internet to transmit
not only large-capacity content, such as video and three-
dimensional computer graphics, but also haptic signals, such
as the feeling of touch) also require the support of xURLLC
[42]. However, current networking technologies, i.e., fifth-gen-
eration (5G) mobile networks, characterized by ultra-reliable
and low-latency communications (URLLC), cannot meet these
stringent requirements. Therefore, future communication tech-
nologies are necessary to provide xURLLC services for
HDT applications. Nevertheless, xURLLC still suffers from
high signaling overhead (e.g., clock synchronization
and handover costs), which may considerably deteriorate
overall network efficiency, motivating the further
integration of deterministic networking technologies, e.g., time-
sensitive networking (TSN) [43].
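To get a feel for these numbers, the sketch below estimates how many transmission attempts a simple retransmission scheme would need to reach seven-nines reliability, and whether those attempts fit within a 1 ms budget; the per-attempt overhead value is an illustrative assumption:

```python
import math

def attempts_for_reliability(p_success, target=0.9999999):
    """Minimum independent transmission attempts so that the residual
    failure probability drops to at most 1 - target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_success))

def fits_latency_budget(bits, rate_bps, attempts, per_attempt_overhead_s=1e-4):
    """Check whether all attempts (air time plus an assumed per-attempt
    signaling overhead) complete within the 1 ms xURLLC budget."""
    return attempts * (bits / rate_bps + per_attempt_overhead_s) <= 1e-3
```

With a 99% per-attempt success rate, four attempts suffice for seven-nines reliability; at 100 Gbps the air time is negligible, so the signaling overhead (the motivation for TSN-style determinism above) dominates the budget.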
3) Ultra-Low Round-Trip Time (RTT): In specific appli-
cations, e.g., surgery simulation and massage simulation, an
ultra-low RTT is imperative to support immersive interactions.
For example, one of the most significant challenges in haptic
communication is achieving an RTT of 1 ms [44]. RTT is
greatly affected by queuing and processing delays at inter-
mediate nodes, as well as by packet transmission time. However,
the packets transmitted between PTs and VTs are typically large,
involving massive multimodal information (e.g., text, audio,
video, image and haptic data) to achieve a profoundly immersive
experience, which poses a great challenge for achieving ultra-low
RTT. Migrating the VT to the vicinity of users (e.g., the
corresponding PT or doctors) can be a promising solution.
Nevertheless, this triggers other issues, such as the deployment
of the migrated VT and timely synchronization updates between
the VT and its distant PT. Overall, guaranteeing ultra-low RTT
is essential in HDT applications, and novel network traffic
scheduling schemes for HDT should be developed with careful
consideration of this metric.
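The delay components listed above can be combined into a back-of-the-envelope RTT estimate; the hop counts and per-hop delays below are illustrative assumptions, not measured values:

```python
def round_trip_time(bits, rate_bps, hops, queue_s, proc_s):
    """One-way delay = per-hop (transmission + queuing + processing);
    RTT doubles it. Propagation delay is ignored, which is roughly
    reasonable only for short edge paths."""
    one_way = hops * (bits / rate_bps + queue_s + proc_s)
    return 2 * one_way
```

Under these assumptions, a 1 MB multimodal frame over three 10 Gbps hops blows the 1 ms haptic budget, while a 1 KB haptic-only packet fits, which is why migrating VTs closer to users and scheduling large frames carefully both matter.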
4) Data Privacy, Security and Integrity: Healthcare-related
data with individual private information/metadata (e.g., name,
gender, address) are privacy-sensitive and have no or limited
tolerance for privacy leakage. Appropriate mechanisms must
be developed to ensure that such data are not intercepted
or modified by unauthorized users while being stored and
transmitted over the network. In other words, key technologies
should provide secure and reliable communications among
PTs and VTs with sufficient data storage privacy. Authenti-
cation of data sources is also a necessity to ensure that all
sources are reliable, while fake data are detected and removed
through reliable security measures to guarantee integrity.
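As one minimal example of protecting integrity and authenticating a data source, a PT-VT pair sharing a session key could tag each record with an HMAC; the key and record layout below are hypothetical:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"pt-vt-session-key"  # hypothetical pre-shared session key

def sign_record(record):
    """Attach an HMAC-SHA256 tag so the receiving VT can verify both
    the origin and the integrity of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_record(record, tag):
    """Constant-time comparison rejects forged or modified records."""
    return hmac.compare_digest(sign_record(record), tag)
```

This only covers integrity and origin authentication; confidentiality in transit and at rest would additionally require encryption and key management, as the surrounding text notes.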
5) Data Storage: HDT relies on multi-source real-time data
from the physical world. Each HDT application can generate
up to a few gigabytes of data in a single day. Storage of
such massive data may be required for VT update process and
analytics. Therefore, key technologies are required to provide
appropriate mechanisms to store the huge amount of data.
6) Advanced Computing Power: In HDT, tasks such as
synchronization, model evolution and analytics are expected
to be time-sensitive and computation-intensive. To ensure a
real-time computation process, massive computation resources
are required by HDT to keep high-fidelity. This prompts HDT
to be deployed on the network side instead of local devices
(which are commonly resource-limited). Furthermore, this
paradigm requires optimal computation resource scheduling
mechanisms on the network side to guarantee the efficient
resource utilization.
7) AI-Driven Analytics: AI is a core technology enabling
HDT to deliver PH services. AI-
driven analytical models provide insights at different scales
for HDT using real-time and historical collected data, thereby
providing decisions or predictions to individuals and updating
the VT model. Additionally, AI can support HDT from all
aspects (e.g., intelligent computation, intelligent communication
networks and intelligent data management). However, one
limitation of current AI solutions is that they rely on black-box
models, which may be inappropriate for problems requiring
explicit explanations (e.g., clinical applications). Therefore,
the interpretability of AI is an imperative issue on which the
implementability of HDT heavily depends. Moreover, ever-
changing human conditions compel AI models to dynamically
validate their effectiveness and update themselves to more
precisely abstract the real status of the human body. However,
this process is highly complex and requires strong computation
capability.

Fig. 3. The networking architecture of HDT.
E. Overview of Networking Architecture and Key Technologies
for HDT
In summary, a networking architecture is imperative to
enable the realization of HDT for PH applications. As shown
in Fig. 3, it is expected that the networking architecture
consists of five layers according to the end-to-end data stream
processing procedure. The data are first collected by the data
acquisition layer, and are transmitted to the data management
layer through the communication layer for pre-processing,
storing and sharing, and then by the support of the com-
putation layer, they are processed by the data analysis and
decision making layer for serving powerful applications (e.g.,
diagnosis, prescription, surgery and rehabilitation). In return,
the feedback of the VT in the digital space to its corresponding
PT is transmitted through all these five layers reversely to the
physical space.
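The end-to-end flow through these layers can be caricatured as a chain of functions, one per layer (the computation layer is implicit as the place where these functions execute); all data and thresholds are illustrative:

```python
# Caricature of the five-layer data stream; each layer is one function.
def acquire():
    """Data acquisition layer: raw heart-rate packets from sensors."""
    return [{"hr": 70}, {"hr": 71}, {"hr": 300}]  # 300 is sensor noise

def communicate(packets):
    """Communication layer: transport to the network side (no-op here)."""
    return list(packets)

def manage(packets):
    """Data management layer: pre-process by dropping implausible readings."""
    return [p for p in packets if 30 <= p["hr"] <= 220]

def analyze(records):
    """Data analysis and decision making layer: derive a PH decision."""
    avg = sum(r["hr"] for r in records) / len(records)
    return {"avg_hr": avg, "alert": avg > 100}

def hdt_pipeline():
    """Runs the stages in order; feedback would traverse them in reverse."""
    return analyze(manage(communicate(acquire())))
```

Each real layer is of course far richer (distributed, multi-node, bidirectional), but the composition order mirrors the data stream described above.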
Specifically, since HDT is an extremely complex data-
driven system, it needs multiple and heterogeneous nodes
to collaboratively acquire sophisticated and high-quality data
in the data acquisition layer (similar to the structure of
sensor networks). Then, for supporting the heterogeneous data
transmission requirements in HDT, multiple heterogeneous
communication paradigms are needed in the communication
layer, which includes communication protocols on and beyond
human bodies. Since PTs in the physical environment are
mobile, while requiring fast-responsive and powerful computing
services to support time-sensitive and computation-intensive
tasks in HDT (e.g., model evolution, real-time rendering), a
multi-node collaborative computing paradigm with frequent
information exchange is required to provide ubiquitously
advanced computing power in the computation layer.
Furthermore, considering that the data generated in HDT (e.g.,
data from the data acquisition layer and feedback data from VTs)
are typically large-scale, multi-modal, multi-source and noisy,
multiple servers are required to collaboratively pre-process and
store the data to provide a robust data management service in
the data management layer (similar to the structure of data
center networks). On top of this, data security and privacy
can also be offered by such a paradigm through multi-node
collaborative authentication, among others. To simultaneously
realize HDT's different PH services (e.g., synchronization of
each PT-VT pair, health monitoring and diagnosis), multiple
heterogeneous data analysis methods are expected to play
vital roles in the data
analysis and decision making layer. Overall, the networking
architecture of HDT is a complex and sophisticated system
that requires collaboration across multiple layers and nodes.
To comprehensively implement such a networking architec-
ture, a variety of key end-to-end technologies are required, as
shown in Fig. 4. Particularly, in the data acquisition layer,
sensing devices such as wearable biomedical devices and
implantable biomedical devices can be adopted to enable
pervasive sensing. In addition, social networks and electronic
health records can also serve as data sources to abstract the
physiology and psychological status of any PT. In the data
management layer, data cleaning, data reduction and data fu-
sion technologies can be used to pre-process the data generated
in HDT, before actual utilization and potential storage. Big
data storage frameworks (e.g., the Hadoop Distributed File
System, HBase and OpenStack Swift) can be tailored to robustly store
HDT data. To guarantee data security and privacy, existing
tools of cybersecurity, privacy-preserving mechanisms and dis-
tributed ledger technology can be applied. In the data analysis
and decision making layer, powerful AI algorithms, such as
supervised learning, unsupervised learning and reinforcement
learning (RL), can be employed to facilitate the HDT-enabled
PH applications including personalized diagnosis, personal-
ized prescription, personalized surgery and personalized re-
habilitation. In the communication layer, the communication
paradigms of HDT must be carefully tailored through modi-
fications of existing and emerging communication techniques,
such as Bluetooth, ZigBee, molecular communication, tactile
Internet and semantic communication. The computation layer,
in turn, can be established by integrating novel computation
paradigms, including multi-access edge computing and edge-
cloud collaboration.
Collecting human physiological data for building HDT
can be enabled by wearable and implantable devices.

Fig. 4. An overview of key technologies for implementing the networking architecture of HDT.

Smart wearable devices with biomedical sensors (e.g., smart watches,
smart socks and smart garments) are developing rapidly in
recent years. These wearable devices offer an exciting oppor-
tunity for measuring human physiological signals in a nonin-
trusive and real-time manner by leveraging flexible electronic
packaging and semiconductor technology [45]. Nevertheless,
smart wearable devices are limited to monitoring only specific
types of physiological parameters that are readily accessible
from outside the human body (e.g., body temperature, heart
rate and step number). Smart in-body biomedical devices with
implantable biomedical sensors that are placed directly inside
human bodies promise an entirely new realm of applications.
Besides, the development of nanotechnology and its appli-
cation in medicine, through nanomaterials and nanodevices,
enables in-body biomedical devices to have diverse clinical
applications, such as biomarker concentration monitoring, net-
work tomography inference of human tissues and monitoring
of oxygen levels in the surrounding tissues.
In addition to physiological data, the high-fidelity digital
model of humans also needs psychological data. Software-
based soft sensors collect data mainly from social networks,
such as Instagram, Twitter and Facebook, where humans
sometimes post information about their feelings and emotions
[31]. Aside from data obtained through various sensing
devices (from pervasive sensing and social network sensing),
electronic health records (EHRs), which record individual
health-related data generated by various medical institutions,
are another important source of data for HDT applications. All
different data acquisition approaches for HDT are visually
illustrated in Fig. 6.
In this section, we provide a review of data acquisition
solutions for HDT. First, we discuss wearable biomedical
sensing, implantable biomedical sensing and social network
sensing in Section III-A, which allow HDT to collect real-
time physiological and psychological information of PTs. Be-
sides, EHRs are another crucial data source accurately reflecting
PTs' health information, which can be used to build the
prototype of VTs and improve diagnosis accuracy and patient
outcomes, as reviewed in Section III-B. Finally, in Section
III-C, we provide a brief summary of the reviewed papers
and discuss some open issues that should be considered
in data acquisition for HDT. The roadmap of Section III is
illustrated in Fig. 5.
Fig. 5. The roadmap of Section III.
Fig. 6. An illustration of data acquisition approaches for HDT.
A. Pervasive Sensing
In PH applications, there exists high natural variability
that can be explained by the innate differences among
human bodies, such as in disease progression or response to
medical treatments. These parameters need to be captured
by the biomedical sensors implemented in HDT to build a
personalized digital representation of a PT and prevent false
positives. Therefore, in this section, we delve into Internet
of medical things (IoMT) devices, which consist of wearable
biomedical devices and implantable biomedical devices embedded
with powerful sensors, as well as social networks, all of which
are often leveraged in HDT to ensure pervasive sensing.
1) Wearable Biomedical Sensing: Wearable biomedical de-
vices developed so far have been designed for different func-
tions and positions on human bodies. These wearable devices
can be classified into three categories, i.e., head, torso and limb
wearable biomedical devices. Wearable biomedical devices
equipped in those human body parts can sample diverse
physiological data which is required by the construction of
HDT. We survey them in the following.
a) Head biomedical devices: As the uppermost part of the
human body, the head contains various important organs,
such as the eyes, nose, ears and mouth. Corresponding
wearable health-tracking biomedical sensors designed for
the head include smart glasses, contact lenses, helmets,
hearing aids, earrings, etc.
Smart glasses function as wearable microcomputers
embedded with many biosensors, such as gyroscopes,
accelerometers (ACC) and pressure sensors.
Representative smart glasses include Google Glass,
JINS MEME (Jin-Co Ltd., Japan) and Recon Jet
(Recon Instruments; Vancouver, BC). Smart glasses
are a versatile platform that can assist in weight
management by detecting and recording the eating
and drinking habits of individuals [46], provide
individuals with real-time electrocardiogram (ECG)
monitoring [47], and give feedback (e.g.,
healthcare recommendations) to help users (espe-
cially people with Parkinson's disease) with self-management
[48], etc.
Smart contact lenses may, in terms of architecture,
consist of multiple biomedical sensors (e.g.,
capacitive, strain, microfluidic, channel electrochem-
ical, fluorescent and holographic biomedical sensors)
[49]. They are similar to implants but can be worn or
removed easily by users. Smart contact lenses can
be used for physiological monitoring by continuously
detecting glucose levels in tears, and for tracking the
progression of glaucoma by continuously
measuring the curvature of the eye lens [50].
Smart helmets are commonly embedded with biosen-
sors such as infrared temperature and heart rate
biomedical sensors. A smart helmet can be used as
an alternative for monitoring parameters commonly
obtained by other smart wearable devices, e.g., body
temperature and heart rate, via sensors located inside
and around the helmet [51]. In addition to these,
helmets worn on the head can also be employed
for detecting brain activities. For instance, SmartCap
Technologies in Australia developed a smart helmet,
called SmartCap, working as a fatigue tracking sys-
tem, which possesses the ability of measuring brain-
wave signals to alert the potential risk of microsleeps,
i.e., unintended or uncontrolled momentary sleeps
usually in a duration of 5-10 seconds [52]. Besides,
electroencephalographic (EEG) headsets can monitor
brain EEG signals to measure mental activity,
such as tracking a user's confusion state to assess
or quantify the user's focus level [53].
Smart wearable devices worn on ears are usually in
the form of hearing aids, earphones and earbuds.
These devices possess the ability to monitor phys-
iological parameters such as ECG signals, breathing
rate, pondus hydrogenii and lactate values of sweat,
using biomedical sensors like amperometric and po-
tentiometric biomedical sensors [54], [55].
Some wearables can also be worn inside the mouth.
Examples of those devices are smart mouth guard
(MG)-type wearable devices which can monitor teeth
clenching with embedded force sensors [56]. An-
other example is DentiTrac, introduced by Braebon
Medical Corporation in Canada, which is a miniature
oral device integrated on an oral appliance, used for
monitoring patients’ sleep apnea and adherence [57].
b) Torso biomedical devices: The torso is the central part of
the human body, where many vital organs are located.
Common examples of wearable devices placed on the
torso are smart clothing, belts and underwear.
The realization of smart clothing significantly de-
pends on smart textile technologies, where the health
monitoring sensors are completely embedded in the
fabric. Such clothes can offer comfort and smart
healthcare to their users. For instance, a smart jacket
integrated with a health monitoring system was de-
veloped in [58] to monitor the pulse rate and EEG in
the human body. A low-power, wearable real-time
ECG monitoring system, integrated into a smart T-
shirt that can also monitor the thermal status of an athlete,
was developed in [59].
Skin patches/tattoos with biomedical sensors also
have great potential for continuously monitoring
vital physiological signals to improve the healthcare
quality of patients. Several key applications of smart
patches/tattoos have been demonstrated in ECG mon-
itoring [60], pulse rate monitoring [61], biomarker
measurements in sweat [62] and continuous glucose
monitoring [63].
A typical application scenario of smart belts em-
bedded with sensors such as inertial and bend
sensors is the monitoring of shoulder and trunk
posture [64]. A smart belt can also be designed for
monitoring physiological signals such as real-time
respiratory signs monitoring [65] and detection of
respiratory rate, body movement, in-and-out-of-bed
activity and snoring events during sleep [66].
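As an illustration of how such a belt signal might be turned into a respiratory rate, the toy estimator below counts rising crossings of the signal mean; a real system would use more robust filtering and peak detection:

```python
def breaths_per_minute(signal, sample_hz):
    """Estimate respiratory rate from a belt stretch signal by counting
    rising crossings of the signal mean (illustrative method only).
    `signal` is a list of evenly spaced stretch samples."""
    mean = sum(signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < mean <= b)
    duration_min = len(signal) / sample_hz / 60.0
    return crossings / duration_min
```

Each breathing cycle produces one rising crossing of the mean, so dividing the crossing count by the recording duration yields breaths per minute (up to edge effects at the start and end of the window).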
c) Limb biomedical devices: The four limbs of the human
body are the main executors of activities. Wearable
devices worn on limbs are mostly accessories, such as
smart bracelets, watches, armbands, rings and wristbands.
These devices can monitor physiological parameters
while posing little or no interference to the users' normal
activities.
Smart armbands are wearable devices with embedded
sensors such as photoplethysmography (PPG), ACC
and ECG sensors, and are usually worn on the upper
arms to facilitate seamless health monitoring with
maximum comfort to the wearer. Application exam-
ples include continuous estimation of the respiration
rate [67], measurement of blood pressure [68] and
ECG signals [69], [70].
A smartwatch with versatile sensors, such as ECG,
PPG and oxygen saturation (SpO2) sensors, is a
small smartphone-like device usually worn on the
wrist to assist users in monitoring their health.
Smart jewelry, such as bracelets and rings, is worn
on the upper limbs. Relying on built-in sensors,
smart jewelry can serve as a cost-effective means
of PH monitoring, covering daily wrist activity
recognition [71], assisting urination recognition
[72], detection of gait characteristics [73] and early
detection of COVID-19 [74].
Smart pants with embedded sensors (e.g., inertial
sensors, textile pressure sensors) are worn on the
lower limbs. Their application scenarios include de-
tection of physical movements of patients in reha-
bilitation, as well as athletes’ customized training
programmes [75], [76]. Smart pants are also used
for measuring the pressure between some key muscle
groups of the lower limbs and skin to protect the keen
joint during exercise [77].
The recent development of conductive fibres and
elastomers motivates the realization of embedded
sensor fabrics such as smart socks. Smart socks can
measure the plantar pressure distribution to assist
in the recognition of movement patterns (e.g., gait
analysis) [78]. They can also be used to track daily
physiological status, such as temperature [79] and
electromyography (EMG) signals [80].
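A plantar-pressure step counter of the kind used in gait analysis can be sketched with simple threshold hysteresis; the threshold and sample values are illustrative:

```python
def count_steps(pressure, threshold):
    """Count steps as rising crossings of a heel-pressure threshold,
    with hysteresis so one sustained press counts as one step."""
    steps = 0
    loaded = False  # whether the foot is currently bearing weight
    for p in pressure:
        if not loaded and p >= threshold:
            steps += 1
            loaded = True
        elif loaded and p < threshold:
            loaded = False
    return steps
```

Real gait analysis would look at the full pressure distribution across the sole and its timing, but this load/unload cycle is the basic signal being exploited.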
2) Implantable Biomedical Sensing: Advances in nan-
otechnology continue to facilitate the growth of implantable
biomedical sensors, such as implantable nanobiosensors. Un-
like wearable devices, implantable biomedical sensors are
generally deployed inside human bodies, for instance, on
organs or in the bloodstream, to perform more powerful tasks
ranging from precision drug delivery and precision sensing to
microprocedures in otherwise inaccessible organs of the body [81].
In this subsection, we focus on the abilities of implantable
biomedical sensors to facilitate precision monitoring and ac-
tivity measurements such as disease biomarker detection, vital
signals monitoring and detection of cell network topology.
a) Biomarker detection: Biomarkers are often released into
the blood by certain diseases, such as cancers and dia-
betes. For instance, isopropanol (IPA) is the biomarker
for both types of diabetes [82], while α-Fetoprotein
is the common biomarker for hepatocellular carcinoma
(HCC) [83]. Continuous monitoring of biomarkers in
real-time can significantly advance precision medicine
and is a more effective method compared to conventional
blood tests, where the concentration of biomarkers in
any sample taken is usually very low, especially for
chronic diseases in their early stages. When adopted, the
biomarker detection technique can aid the detection of
the cancer biomarkers concentration in the blood vessels.
By moving the nanobiosensors along the blood ves-
sels of the cardiovascular system, the biomarkers around
the cancer cells can be detected, thereby facilitating the
early diagnosis of cancer [84]–[86]. It can also facilitate
continuous measurement of biomarker concentration re-
leased by bacterial cells, thereby estimating the number
of infectious bacteria while deducing the progress of the
infection for early detection of infectious diseases [87].
The biomarker detection can also enhance continuous
monitoring of endothelial cells shedding in arteries as
an early sign of heart attacks [88].
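As a toy version of such concentration-to-count estimation, one can assume a steady state in which per-cell biomarker release balances clearance; all parameters below are illustrative placeholders, not measured biological constants:

```python
def estimate_cell_count(conc_molar, release_rate_per_cell, clearance_rate):
    """Steady-state balance: cells x release = concentration x clearance,
    so cell count ~= concentration x clearance / per-cell release rate.
    Units and magnitudes here are purely illustrative."""
    return conc_molar * clearance_rate / release_rate_per_cell
```

Continuous in-body measurement of `conc_molar` is what lets such a model track infection progression over time, in contrast to sparse blood tests.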
b) Vital signals monitoring: Aside from biomarker con-
centrations, implantable nanobiosensors are also useful
for continuous monitoring of vital signals including lo-
cal temperature, cardiovascular system, musculoskeletal
system and pondus hydrogenii within the central nervous
of the human body. Treatments of traumatic brain injury
(TBI) can lead to increased intracranial pressure (ICP),
thereby interfering with vital functions. As a result, the
ICP must be constantly monitored, for instance through
implantable nanobiosensors [89]. Moreover, intracranial
temperature (ICT) monitoring provides another vital signal,
since ICT is associated with changes related to the volume of
air inside the skull. Changes in ICT within the range of
35-40 °C can be monitored by biodegradable intracra-
nial nanobiosensors [90], [91]. Besides ICP and ICT,
implantable nanobiosensors are also used to monitor
intracranial electrical activity, an important signal
when managing neural disorders such as Parkinson's
disease, Alzheimer's disease, depression and chronic pain
[90]. Furthermore, implantable blood flow monitoring
bio-compatible sensors have been developed to wrap
around the blood vessels and provide continuous in-
formation about the vessel patency [90]. The healing
process of musculoskeletal injuries requires continuous
real-time measurement of physiological pressures exerted
on soft tissues (e.g., tendons and muscles). To achieve
this, implantable biosensors such as piezoelectric and
capacitive sensors have been developed to monitor strain
and pressure on soft tissues [92], [93].
c) Inferring the network topology of cells: Advanced implantable nanobiosensor-based techniques can precisely map cellular connections and allow in-vivo characterization of their activities, thereby assisting the modelling of VT while improving the efficiency of diagnosing disorders. Indeed, the communications between implantable
nanobiosensors rely on the use of tissues as communi-
cation channels through the molecular communications
paradigm. This makes it possible to exchange signals back and forth between nanobiosensors. These signals can be
measured at different points of the tissue to infer the
actual network topology of cells. Similarly, the in-vivo
cellular activities measurement can be achieved through
implantable bio-compatible nanobiosensors. These sen-
sors interface with organs by establishing connections
among individual cells to measure cellular signals at
intracellular, intercellular and extracellular levels [94]. In
[95], the authors proposed a topology inference technique
for human brain cortex neuronal networks based on
network tomography theory. An implantable nanobiosensor technology was used to achieve high-resolution and
high-precision brain neuron network mapping as well as
characterization of neuronal activities.
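As a toy illustration of this idea, the sketch below infers which nodes share a signalling pathway by thresholding pairwise correlations of measurement traces taken at different points; the coupled-source model and the 0.5 threshold are assumptions for illustration, not the tomography method of [95].

```python
# Toy topology inference: nodes whose measured traces correlate strongly
# are declared connected. Traces are synthetic (node 0 drives nodes 1 and
# 2; node 3 is independent background noise).
import math
import random

random.seed(0)

def correlate(x, y):
    """Pearson correlation of two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

src = [random.gauss(0, 1) for _ in range(500)]
traces = [
    src,
    [s + random.gauss(0, 0.3) for s in src],        # coupled to node 0
    [0.8 * s + random.gauss(0, 0.3) for s in src],  # coupled to node 0
    [random.gauss(0, 1) for _ in range(500)],       # independent node
]

# Declare an edge when |correlation| exceeds a chosen threshold.
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(correlate(traces[i], traces[j])) > 0.5]
print(edges)
```

The inferred edge set recovers the shared pathway among nodes 0-2 while leaving the independent node isolated.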
3) Social Networks Sensing: To maintain a typical digital
counterpart (i.e., the VT) with high fidelity, both physiological
states and emotions of the corresponding human (i.e., PT)
must be synchronized in real-time. While EEG signals, usually
obtained through hard sensors or devices, are common data
sources for human emotions recognition [96], [97], social
networks or platforms can similarly play a crucial role in the
detection of human emotions [31], [98], [99].
Nowadays, information such as the thoughts, mental states and moments of individuals is sometimes available on their
respective social network platforms, which can contribute to
the amount of psychology-related data being generated every
second. For instance, people often share their thoughts and
feelings regarding the COVID-19 pandemic or other common
diseases through their social network platforms. This data can
be processed in real-time to comprehend a person's current psychological state through sentiment analysis and emotion detection [99]. By leveraging the COVID-19 pandemic-
related data generated through social network platforms, the
authors in [100]–[102] carried out sentiment analysis and
emotion detection of Twitter users to limit the possibility of
various mental health issues such as depression. This effort
further justifies the importance of social networks sensing to
maintain an accurate synchronization of VT in HDT.
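A minimal sketch of how such posts could feed an emotion-sensing pipeline, using a tiny hand-made sentiment lexicon; real systems in [99]-[102] rely on trained models, and the word lists and posts below are purely illustrative.

```python
# Toy lexicon-based sentiment scorer for social-network posts.
# The lexicon and the example posts are illustrative placeholders.
POSITIVE = {"hopeful", "recovering", "grateful", "better", "calm"}
NEGATIVE = {"anxious", "depressed", "tired", "worse", "lonely"}

def sentiment_score(post):
    """Return a score in [-1, 1]: (positive - negative) / matched words."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "Feeling anxious and tired after another week of lockdown",
    "Grateful to be recovering, things are getting better",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```

Scores near -1 flag posts that a VT could use as evidence of deteriorating emotional state, triggering closer monitoring.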
B. Electronic Health Record
EHRs are real-time, patient-centred records consisting of medical imaging, including computed tomography (CT) scans, X-rays, magnetic resonance imaging (MRI) and ultrasound. They also contain medical and treatment histories, allergies, diagnoses, etc. [103]. EHRs can be adopted in HDT to improve
diagnosis accuracy and patient outcomes.
1) Prototype Building: EHR data (e.g., medical imaging)
can be used to build a prototype VT of a PT (e.g., human,
tissues, organs). For instance, a modelling approach for a
patient-specific coronary artery (a VT of a coronary artery)
was proposed in [104]. The arterial models were developed
based on patient-specific medical imaging (i.e., coronary optical coherence tomography (OCT) and angiography) acquired during pre-treatment. Tai et al. in [105] reconstructed
a 3D patient-specific human lung VT model based on CT
images. Ahmadian et al. in [106] created a VT of the hu-
man vertebra by relying on a deep convolutional generative
adversarial network (DCGAN), which was trained through a
set of quantitative micro-computed tomography (micro-QCT)
images of the trabecular bone. Gillette et al. in [107] proposed
a framework for the generation of the cardiac VT of human
electrophysiology. Their solution aimed at providing a digital
replica of the human heart using clinically-attained MRI.
2) Improving the Diagnosis Accuracy and Patient Out-
comes: The use of EHR data when adopting HDT can
improve the diagnosis accuracy and patient outcomes. For
instance, Allen et al. in [108] used a VT model to forecast the
progression of relevant clinical measurements in the patient
at risk of ischemic stroke based on EHR data. The results
show that this VT model can accurately forecast the disease
progression, thereby allowing for tailored treatment to improve
patient outcomes. Guo et al. in [109] predicted disease onset
information based on the patient’s EHR data to guide disease
prevention and treatment personalization.
C. Lessons Learned
Data acquisition in HDT requires massive diverse devices
to collect physiological and psychological data for precisely
mapping the PT to its digital representation, i.e., VT. More
specifically, wearable biomedical devices, including head,
torso, and limb biomedical devices, are used to measure human
physiological signals, while implantable biomedical devices
are implanted inside the body to collect physiological data.
Social networks can also serve as soft sensors to gather
psychology-related data, and electronic health records (EHR)
can be another source of data for HDT, providing information
on the patient's treatment histories, allergies, diagnoses, and so on.
While the diverse data sources mentioned above can be
utilized to dynamically map a real-time human body into the
digital space, there are several open issues that need to
be considered. First, while human body sensing technologies
are continuously advancing, they are still unable to collect ultra-fine-grained data of the human body. Second, to
dynamically map the human body, real-time data collection
is required. However, it is not feasible to expect humans to wear biomedical devices constantly, and the battery capacity of these devices, whether wearable or implantable, is limited. Third, integrating such heterogeneous data into a unified model that represents a human body poses a significant challenge. Fourth, the management and storage
of such massive data is also challenging. Finally, security
and privacy issues of those data have to be well addressed
by taking into account ethics and moralities, particularly for
healthcare-related data that are highly sensitive.
Communications in HDT can be categorized into two tiers,
i.e., on-body and beyond-body communications. On-body
communication refers to short-range communications around
the human body, typically including communications among
body sensors and communications between body sensors and
the gateway (e.g., smartphone). Beyond-body communication
focuses on communications between gateways and remote
servers that host HDT (e.g., cloud servers and data centers).
Fig. 7. The communication architecture of HDT: on-body communication connects body sensors (e.g., visual and glucose sensors, including via molecular communication) to a gateway, while beyond-body communication links the gateway through the tactile Internet/semantic communication, base station and core network to the data center.
The communication architecture of HDT is demonstrated in
Fig. 7 and analyzed in the following subsections.
In this section, we first review on-body communication
techniques for collecting physiological and psychological in-
formation from PTs to gateways, including Bluetooth low-
energy, ZigBee and molecular communication in Section IV-A.
Second, in Section IV-B, we discuss beyond-body communication, where the data transmitted between the physical and digital worlds typically comprises massive multimodal payloads for synchronization updates and immersive experiences, among others. This can be enabled by the tactile Internet, which not only transmits regular information (e.g., text, images and video) but also feeds back human sensations (e.g., haptic feelings). Additionally, the explosive growth of data in HDT
and limited bandwidth necessitates a paradigm shift away
from the conventional focus of classical information theory.
In response, we review semantic communication solutions
for beyond-body communication of HDT, which can serve to
alleviate the spectrum scarcity for HDT applications. Finally,
we summarize the reviewed papers and discuss some open issues that should be considered in Section IV-C.
A. On-Body Communication
The HDT on-body communication is generally realized
through an essential component, called the wireless body area
network (WBAN) [110]–[112], where on-body or in-body sensors are connected and responsible for transmitting the collected data to the gateway. The structure
of on-body communication can be divided into two types. In
the first type, sensors directly communicate with the gateway,
forming a star topology, as shown in Fig. 8 (a). In the second
type, sensors are connected to a body’s central processor in the
first level for preprocessing to potentially reduce the amount
of raw data while saving energy, and then the preprocessed
data are forwarded to a gateway in the second level, thus forming a two-level communication topology, as shown in Fig. 8 (b).
Fig. 8. The topology of on-body communication: (a) star topology with direct biosensor-gateway links; (b) two-level topology with a central processor between the biosensors and the gateway.
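The data-reduction role of the central processor in the two-level topology can be sketched as windowed averaging of raw sensor samples before forwarding; the window size and the temperature trace below are illustrative assumptions, not a prescribed WBAN scheme.

```python
# Sketch of the central processor's preprocessing step: aggregate raw
# samples into windowed means so fewer values reach the gateway.
def preprocess(samples, window=10):
    """Average every `window` raw samples into one forwarded value."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]

raw = [36.5 + 0.01 * i for i in range(100)]   # e.g., a skin-temperature trace
forwarded = preprocess(raw)

print(len(raw), "raw samples ->", len(forwarded), "forwarded values")
```

With a window of 10 the uplink load drops by an order of magnitude, which is exactly the energy saving the two-level topology targets.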
Since body sensors are typically low-power, resource-
constrained and low bit-rate, they require an energy-efficient
and low-range wireless link. Table V lists some human body physiological signals together with their corresponding data rates, which defines baseline requirements for wireless connectivity. Given these requirements, the majority of existing implementations in healthcare applications rely on Bluetooth low energy (BLE) or ZigBee to wirelessly transmit the data collected by the sensors to a gateway.

TABLE V. Human body physiological signals and their data requirements.
Signal                | Data Range       | Data Rate    | Resolution (bits) | Frequency (Hz)
Glucose Concentration | 0-20 mM          | 480-1600 bps | 12-16             | 0-50
Blood Flow            | 1-300 ml/s       | 480 bps      | 12                | 40
ECG                   | 0.5-4 mV         | 6-48 kbps    | 12-16             | 0-1000
Blood pH              | 6.8-7.8 pH units | 48 bps       | 12                | 4
Pulse Rate            | 0-150 BPM        | 48 bps       | 12                | 4
Respiratory Rate      | 2-50 breaths/min | 240 bps      | 12                | 0.1-20
Blood Pressure        | 10-400 mmHg      | 1.2 kbps     | 12                | 0-100
Pathogen Detection    | 0-1              | 2.4-160 bps  | 12                | -
Blood Temperature     | 32-40 °C         | 2.4-120 bps  | 12                | 0-1
Blood CRP             | 0-8 mg/l         | 2.4 bps      | 12                | -

1) Bluetooth Low Energy: BLE has many attractive features for on-body communication, such as low power consumption, low rate and short range [114]. It operates in the 2.4 GHz frequency band, while the time needed for connection
setup and data transfer is less than 3 ms. Furthermore, BLE offers a data rate of up to 1 Mbps, which makes it a suitable choice for on-body communication. BLE consumes 90% less energy than traditional Bluetooth due to its low duty cycle.
Moreover, BLE has a much lower synchronization time of
a few milliseconds, which makes it particularly valuable for
latency-critical devices used in healthcare applications [114].
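As a rough sanity check, the sketch below sums the upper data rates listed in Table V and compares the aggregate against BLE's nominal 1 Mbps air rate; the 30% effective-goodput factor is an assumption for illustration.

```python
# Back-of-the-envelope WBAN link budget: aggregate the upper data rates
# from Table V and compare against BLE's nominal air rate.
signal_rates_bps = {
    "glucose": 1600, "blood_flow": 480, "ecg": 48_000, "blood_ph": 48,
    "pulse": 48, "respiration": 240, "blood_pressure": 1200,
    "pathogen": 160, "temperature": 120, "crp": 2.4,
}

aggregate = sum(signal_rates_bps.values())
ble_effective = 1_000_000 * 0.3   # assume ~30% of 1 Mbps usable as goodput

print(f"aggregate sensor load: {aggregate / 1000:.1f} kbps")
print(f"fits on one BLE link: {aggregate < ble_effective}")
```

Even with every sensor at its maximum rate, the aggregate load stays far below the assumed effective BLE capacity, supporting the star-topology designs above.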
BLE, however, does not support multicast communication,
which may be important for some HDT applications. Ad-
ditionally, since it only provides single-hop star topology,
it cannot be applied in multilevel hierarchical architectures
[115]. In addition, BLE has an inherent security flaw due to its weak pairing protocol [116], which can be exploited by attackers.
2) ZigBee: It is developed atop the IEEE 802.15.4 standard
for low-power, short-range and low-rate data connectivity
[115]. Unlike BLE, ZigBee supports various network topolo-
gies and a huge number of sensors, making it a more robust
solution. Furthermore, ZigBee not only offers three security modes to prevent unauthorized access to data by attackers, but is also capable of supporting multicast [115]. However, ZigBee shares
the frequency bands with other types of radio technologies
(e.g., WiFi and Bluetooth), and therefore, suffers from unin-
tentional interference. Additionally, ZigBee communications
are vulnerable to radio jamming attacks due to the openness
of the wireless medium. When a malicious device emits a high-
power jamming signal, all ZigBee devices in its proximity will
be unable to work [117]. Therefore, its architecture requires a
significant upgrade to be suitable for adoption in HDT.
Since conveying information using electrical or electromagnetic waves is infeasible at such small scales [118], radio technologies (e.g., BLE and ZigBee) cannot be used inside the human body, which drives the emergence of molecular communication (MC).
3) Molecular Communication: MC is a bio-inspired com-
munication method with the ability to mimic the communica-
tion mechanism of living cells [119]. As shown in Fig. 9, MC
relies on the use of molecules for the transmissions and recep-
tions of information. Specifically, a transmitter releases small
particles, called information particles, which are typically a
few nanometres to a few micrometres in size. For example, molecules or lipid vesicles released into an aqueous medium (e.g., blood) propagate freely until they arrive at a receiver, which then detects and decodes the information encoded in them.
Fig. 9. An illustration of molecular communication.
Despite its numerous advantages, it is not
clear how MC can establish interfaces for interconnecting
human bodies and the external environment. Such interfaces
are expected to possess the ability to convert chemical (or
molecular) signals into equivalents (e.g., electrical and optical
signals) acceptable by conventional communication mecha-
nisms [120]. Besides this interface issue, multiple-input and
multiple-output (MIMO) MC may be required to ensure real-
time health parameters detection in HDT, while guaranteeing
the protection of data security [84].
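A common analytical model for such a diffusion-based MC channel is the 3-D Green's function of free diffusion, sketched below; the molecule count, diffusion coefficient and link distance are illustrative values, not parameters taken from the cited works.

```python
# Diffusion-based MC channel sketch: an impulsive release of N molecules
# spreads by Brownian motion, and the concentration seen at a receiver a
# distance r away follows the 3-D diffusion Green's function.
import math

def concentration(N, D, r, t):
    """Molecules per m^3 at distance r (m), time t (s) after release."""
    return N / (4 * math.pi * D * t) ** 1.5 * math.exp(-r**2 / (4 * D * t))

N = 1e6      # molecules released (illustrative)
D = 1e-9     # diffusion coefficient of a blood-like medium, m^2/s
r = 10e-6    # 10 micrometre transmitter-receiver distance

# Setting d(concentration)/dt = 0 gives the peak arrival time r^2 / (6D).
t_peak = r**2 / (6 * D)
print(f"peak arrives after {t_peak * 1e3:.2f} ms")
print(f"peak concentration: {concentration(N, D, r, t_peak):.3e} /m^3")
```

The slow, distance-dependent peak arrival illustrates why MC suits short in-body hops rather than the delay-sensitive beyond-body links discussed next.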
It is worth noting that, other wireless technologies, such as
narrowband IoT (NB-IoT) and IPv6 over low-power wireless
personal area networks (6LoWPAN) may also be alternatives
for on-body communication in HDT. A review of these tech-
nologies can be found in [81].
B. Beyond-Body Communication
HDT should be supported by the bidirectional real-time
synchronization between any PT-VT pair to ensure high fi-
delity of VT. This synchronization is, however, data-driven and
delay-sensitive. Furthermore, data captured through sensing
devices in the physical world are often complex, massive, heterogeneous, multiscale and noisy. In addition
to real-time synchronization, interacting with VTs involves
more complex information that needs to be transmitted be-
tween the PT and users in the physical world. Multimodal
information, such as 3D virtual items, text, images, haptic
feedback, smells, among others, needs to be transmitted in
HDT under various applications to enhance the immersive
experience. These specific characteristics place a significant
burden on current communication networks. For instance, a
massive number of PTs simultaneously interact with their
respective VTs in the same area, which poses great challenges
to bandwidth-limited communication to support the delivery
of high-resolution contents [121]. To address these issues, we
provide a review of cutting-edge communication solutions that
can enable beyond-body communication for HDT.
1) Tactile Internet: HDT-enabled healthcare applications
require not only 360° visual and auditory content for immersive experiences but also haptic interactions. For example,
in the virtual simulation of liver surgery for optimal surgery
planning, visual and haptic feedback from the digital liver
is essential for accurate evaluations of complex intrahepatic
anatomical structures [122]. However, such feedback requires xURLLC service to ensure the timeliness necessary to facilitate efficient decision making in the physical environment. Delayed feedback may cause serious consequences, even patient deaths [123]. Fortunately, tactile Internet (TI) is a promising
solution, which facilitates fast multimodal interactions with
multisensory information [124]. Specifically, it can effectively
transmit audio-visual-haptic feedback in real-time between
real and virtual environments [125]. This will ensure that
PTs are immersed in virtual space in a holistic and multi-
sensory way. Beyond HDT, such a breakthrough in audio-
visual-haptic feedback transmission will also change the way
humans communicate worldwide.
Although TI has the potential to revolutionize the future
of wireless communication, it is still far from being deployed
on a large scale because of two major barriers. First, it is
still difficult to establish a consensus on the performance of
the TI, especially for large-scale implementations, owing to
the lack of a TI testbed. Second, the overall progress of TI
has been severely impeded due to the asynchronous efforts
among different disciplines of TI. To address these critical
issues, Gokhale et al. developed a common testbed, called
tactile Internet eXtensible testbed (TIXT), for TI applications
[125]. To present the TIXT proof of concept, two realistic use cases belonging to two different classes, namely human operator-machine teleoperation in a virtual environment and in a physical environment, were demonstrated. In [126], Polachan et al.
designed and implemented a tactile cyber-physical systems
(TCPS) testbed, called TCPSbed, to provide rapid prototyp-
ing and evaluation of TCPS applications. Different from the
previous testbed without assessments, TCPSbed includes tools
for the characterization of latency and control performance.
TI is expected to enable a paradigm shift from traditional
content-oriented communication to control-oriented communi-
cation [127]. However, such a paradigm has stringent require-
ments in terms of high reliability and sub-millisecond latency
which pose daunting challenges for resource allocation in
networks. Therefore, Gholipoor et al. investigated joint radio resource allocation and network function virtualization (NFV) resource allocation in a heterogeneous network [128]. The
authors jointly considered queuing delays, transmission delays
as well as delays resulting from the execution of the virtual
network function. Following this, the authors formulated a
resource allocation problem to minimize the total cost function subject to guaranteeing the end-to-end delay of each tactile user, and proposed two heuristic algorithms to
solve the problem. In addition, since cellular networks are
resource constrained, accommodating haptic users along with
existing non-haptic users becomes a hard scheduling problem.
Therefore, Samanta et al. proposed an efficient latency-aware
uplink resource allocation scheme to satisfy the end-to-end
delay requirements of haptic users in a long-term evolution
(LTE)-based cellular network [129].
In summary, TI is already paving the way for HDT, where
PTs and VTs are connected via the extremely reliable and
responsive networks of the TI to enable real-time interactions.
2) Semantic Communication: The explosive growth of data expected in HDT, together with the well-known bandwidth limitations of wireless communications, indicates the necessity of moving beyond solutions based on classical Shannon theory. Semantic communication is a possible solution for HDT applications and is being considered
a disruptive technology with the ability to eliminate the
limitations of the conventional data-oriented communication
paradigm. Unlike traditional approaches, where channels with near-infinite capacities are required to ensure real-time traffic,
semantic-oriented communication-based paradigms allow in-
formation to be transmitted at the semantic level, rather
than bit sequences [130]. By integrating machine learning
(ML) algorithms, knowledge representation and reasoning
tools, semantic-oriented communication-based paradigms can
facilitate semantic recognition, knowledge modelling and co-
ordination [131]. Generally, semantic communication extracts
“meanings” of any transmitted information at the transmitter
through the use of ML algorithms and encoding the extracted
features with source knowledge base (KB). Then this semantic
information is transmitted to the intended receiver and is successfully “interpreted” by the receiver using a matched
knowledge base between such a transmitter-receiver pair and
ML algorithms [130], [132].
As shown in Fig. 10, semantic communication mainly
consists of three components [130]:
-Semantic transmitter (encoder): This component is respon-
sible for the extraction and identification of the semantic
features of each raw message. It also performs message
compression as well as the removal of irrelevant information.
After this, it encodes the obtained features into symbols
(bits) for transmission.
-Semantic receiver (decoder): Semantic receiver decodes and
infers semantic features in a format or structure that is
understandable to the target user.
-Semantic noise: Semantic noise usually interferes with
semantic information during transmission and may result
in misunderstanding or misperception of the semantic information at the intended receiver. It commonly appears
in the procedures of semantic encoding, transmission and decoding.
Fig. 10. Semantic communication for HDT. Human body information is semantically and channel encoded at the semantic transmitter using a source knowledge base (KB), transmitted as bits over the physical channel subject to semantic noise, then channel and semantically decoded at the semantic receiver with a matched destination KB to restore the semantic features driving the HDT virtual response.
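The transmitter-KB-receiver loop of Fig. 10 can be caricatured in a few lines: semantic encoding keeps only the terms found in a shared knowledge base, and the receiver re-expands them. The KB entries and the message below are invented for illustration; real systems extract learned features rather than keywords.

```python
# Toy semantic communication pipeline: transmit only KB terms (the
# "meaning"), and let the receiver re-expand them with its matched KB.
SHARED_KB = {
    "fever": "elevated body temperature",
    "tachycardia": "heart rate above 100 BPM",
    "fatigue": "persistent tiredness",
}

def semantic_encode(message):
    """Keep only KB terms: semantic features, not the full bit sequence."""
    return [w for w in message.lower().split() if w in SHARED_KB]

def semantic_decode(features):
    """Receiver re-expands the features using its matched KB."""
    return "; ".join(f"{f}: {SHARED_KB[f]}" for f in features)

msg = "patient reports fever and fatigue since yesterday evening"
features = semantic_encode(msg)
print("transmitted:", features)
print("interpreted:", semantic_decode(features))
print(f"compression: {len(' '.join(features))}/{len(msg)} chars")
```

Only the two KB terms cross the channel, illustrating both the bandwidth saving and why an eavesdropper without the matched KB learns little from the intercepted symbols.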
Semantic communication is recognized as a promising tech-
nology to support the wide proliferation of intelligent devices,
such as XR devices and smartphones, in HDT with specific
requirements of huge radio resources, time-sensitivity of trans-
mitted data, low latency and high accuracy. We highlight two potential benefits that semantic communication can bring to HDT:
-Alleviating the burden on data transmission in HDT: Take
the application of XR in HDT as an example. In the scenario
of telemedicine through HDT, VTs of doctors and patients
are presented in a digital environment through XR, generating massive amounts of data in various forms (e.g., text, audio, images, video and haptic) to be transmitted. To
guarantee the ideal immersive experience for users, the end-
to-end latency and data rate requirements have to be strictly
met. In the semantic communication paradigm, the data can
be extracted semantically first. This allows XR devices to
transmit the information concerned by the XR server for
operation after understanding and filtering out the irrelevant
information to save bandwidth and reduce computing latency
at the XR server. Meanwhile, the XR server can also extract
semantic information, ignoring irrelevant details in the face
of bandwidth constraints, and thereby reducing downlink
pressure. Moreover, as the number of bits actually transmitted decreases, all intelligent devices in HDT can work in a more energy-efficient manner.
-Promoting the data security and privacy in data transmis-
sion: By pre-processing the source data in semantic com-
munication, the communication parties (e.g., PT-VT pairs)
only exchange semantic information extracted according to
the communication tasks, instead of the complete source
data, which can enhance the security of the network to a
large extent. Moreover, communication parties in semantic
communication are required to share their KBs to infer
the semantic information. This prevents eavesdroppers from interpreting valid information from the intercepted data without obtaining the specific KBs from the communication parties.
This is of great significance for the privacy-sensitive health data transmitted in HDT.
The limited computation and storage capabilities of these
intelligent devices, however, restrict the local implementation
of complex and energy-intensive ML algorithms (e.g., deep neural network (DNN) algorithms) and the training of the semantic encoder, semantic decoder, channel encoder and channel decoder.
One way to eliminate this issue is to simplify the structure
of neural networks by performing model compression through
the adoption of network sparsification and quantization. Xie et
al. proposed a lightweight semantic communication system to
support the transmission of low-complexity text in IoT [133].
The presented approach removes redundant nodes and weights
from the semantic communication model by adopting neural network pruning techniques, thus reducing the required computational resources at IoT devices. With this, IoT devices can take advantage of semantic communication, thereby
lowering the required bandwidth when transmitting to the
cloud/edge. Besides, federated learning (FL) and distributed
learning are other learning-based techniques that can facilitate
efficient training of any ML-based semantic communication
system. The authors in [134] proposed an FL-enabled scheme
to support the information semantics of audio signals. The pre-
sented FL-enabled solution allows a joint training of federated
semantic communication models among IoT devices without
sharing sensitive information.
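A minimal sketch of the magnitude-based pruning idea behind such lightweight systems, applied to a toy weight matrix; the matrix and sparsity level are assumptions, and real pruning as in [133] operates on trained DNN layers rather than a hand-written list.

```python
# Magnitude-based pruning sketch: zero out the smallest-magnitude
# fraction of weights so a resource-limited device stores and computes
# with a sparser model.
def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = sorted(abs(w) for row in weights for w in row)
    threshold = flat[int(sparsity * len(flat))]
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

W = [[0.9, -0.05, 0.4], [-0.02, 0.7, -0.1]]
Wp = prune(W, sparsity=0.5)
kept = sum(w != 0.0 for row in Wp for w in row)
print(Wp)
print(f"kept {kept}/6 weights")
```

Half the weights vanish while the dominant ones survive, which is the trade that lets a pruned semantic encoder fit on an IoT-class device.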
However, the lack of appropriate performance metrics is
a critical issue in semantic communication systems. Unlike
the traditional communication methods, which often focus on minimizing the bit error rate (BER) and symbol error rate (SER) to ensure that more bits can be transmitted with fewer
communication resources, semantic communications are more
complicated. The performance metrics of semantic commu-
nication are diverse and depend on the type of messages. In
[135], sentence similarity was proposed as a suitable metric
to measure the semantic error of any transmitted sentence.
Similarly, a peak signal-to-noise ratio (PSNR) was used to
evaluate the performance of an image semantic communica-
tion system [136]. Indeed, the design guidance for semantic
communication is still provided by the Shannon theory. With
such a guide, semantic information can be encoded into bit
streams, and then transformed into physical signals before
being transmitted via communication channels. As a result,
various existing traditional signal processing schemes can
support semantic communications, while advanced wireless
communication technologies are expected to enable more
efficient semantic communication systems [130].
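For image-type messages, the PSNR metric mentioned above can be computed directly from pixel differences, as in the sketch below; the 4-pixel "images" are illustrative.

```python
# PSNR as a performance metric for image semantic communication:
# compare the reconstructed image with the original at pixel level.
import math

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB between two pixel sequences."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(max_val**2 / mse)

orig = [52, 120, 200, 33]
recv = [50, 121, 198, 35]   # pixels after semantic encode/transmit/decode
print(f"PSNR: {psnr(orig, recv):.1f} dB")
```

Higher PSNR means the semantic pipeline preserved the image despite transmitting far fewer bits than the raw pixels would require.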
C. Lessons Learned
Communication in HDT involves both on-body and beyond-
body communications. On-body communication utilizes wire-
less technologies such as BLE, ZigBee, and MC to transmit
data around the human body, enabled by WBAN. Beyond-
body communication connects the PT and its corresponding
VT, utilizing cutting-edge technologies such as TI and seman-
tic communication to transmit the massive multimodal data.
While these communication technologies can enable the
communication layer in HDT, there are several open issues remaining. First, since multiple wireless communication
technologies may be implemented simultaneously, interfer-
ence among them should be addressed to ensure reliable
communications. Second, efficient optimization of communication resources is crucial to enable effective interaction between PT and VT. Resource optimization should
aim to balance the use of available resources and minimize
the delay in data transmission. Third, ensuring the security
and privacy of healthcare-related data during transmission is
critical. This involves implementing appropriate encryption
mechanisms and secure communication protocols to prevent
unauthorized access to sensitive patient data. Additionally,
different measures should be taken to ensure the integrity of
the data, such as digital signatures and secure timestamps.
In summary, addressing these communication-related issues
is essential to ensure the successful implementation of HDT.
Future research should focus on developing innovative solu-
tions for interference management, resource optimization, and
security and privacy protection to enable reliable and secure