A Hierarchical Distributed Fog Computing Architecture for
Big Data Analysis in Smart Cities
Bo Tang
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
Zhen Chen
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
Gerald Hefferman
Warren Alpert Medical School, Brown University
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
Tao Wei
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
Haibo He
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
Qing Yang
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island
The ubiquitous deployment of various kinds of sensors in smart cities requires a new computing paradigm to support Internet of Things (IoT) services and applications, and big data analysis. Fog Computing, which extends Cloud Computing to the edge of the network, fits this need. In this paper, we present a hierarchical distributed Fog Computing architecture to support the integration of massive numbers of infrastructure components and services in future smart cities. To secure future communities, it is necessary to build large-scale, geospatial sensing networks, perform big data analysis, identify anomalous and hazardous events, and offer optimal responses in real time. We analyze case studies using a smart pipeline monitoring system based on fiber optic sensors and sequential learning algorithms to detect events threatening pipeline safety. A working prototype was constructed to experimentally evaluate event detection performance in the recognition of 12 distinct events. The experimental results demonstrate the feasibility of the system's city-wide implementation in the future.
CCS Concepts
• Computer systems organization → Distributed architectures; • Computing methodologies → Parallel computing methodologies; Machine learning; • Security and privacy → Network security;
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ASE BD&SI 2015, October 07-09, 2015, Kaohsiung, Taiwan
© 2015 ACM. ISBN 978-1-4503-3735-9/15/10 ... $15.00
DOI: 10.1145/2818869.2818898
Keywords: Fog computing; smart city; big data analysis; distributed computing architecture; pipeline safety monitoring
In the past decade, the concept of the Smart City has drawn great interest in both the science and engineering fields as a means to overcome the challenges associated with rapidly growing urbanization. A smart city is an urbanized area where multiple sectors cooperate to achieve sustainable outcomes through the analysis of contextual, real-time information. Smart cities reduce traffic congestion and energy waste, while allocating stressed resources more efficiently and improving quality of life. Smart city technologies are projected to become massive economic engines in the coming decades, and are expected to be worth a cumulative 1.565 trillion dollars by 2020, and 3.3 trillion dollars by 2025. Today, companies are actively vying for a central role in the smart city ecosystem, creating an expanding number of technologies and employment opportunities. Already, IBM, Intel, GE, and many other companies have initiated projects to integrate their products and services into a smart city framework [1]. Hundreds of millions of jobs will be created to facilitate this smart city conversion; in June 2014, Intel and the city of San Jose, CA began collaborating on a project implementing Intel's Smart City Demonstration Platform, installing a network of air quality and climate sensors which alone fostered 25,000 new high-tech jobs in San Jose [2].
While rapid urbanization provides numerous opportunities, building smart cities presents many challenges, such as large-scale geospatially distributed sensing networks, big data analysis, and machine-to-machine communication. Currently, the "pay-as-you-go" Cloud Computing paradigm is widely used in enterprises to address the emerging challenges of big data analysis because of its scalable and distributed data management scheme. However, data centers in the Cloud face great challenges from the exploding volume of big data and from the additional requirements of location awareness and low latency at the edge of the network that smart cities impose. Fog Computing, recently proposed by Cisco, extends the Cloud Computing paradigm to run geo-distributed applications throughout the network [6]. In contrast to the Cloud, the Fog not only runs latency-sensitive applications at the edge of the network, but also performs latency-tolerant tasks efficiently at powerful computing nodes in the intermediate layers of the network. At the top of the Fog, Cloud Computing with data centers can still be used for deep analytics.
In this paper, we introduce a hierarchical distributed Fog Computing architecture for big data analysis in smart cities. Because the big data generated by massive numbers of sensors is naturally geo-distributed, we distribute intelligence at the edge of a layered Fog computing network. The computing nodes at each layer run latency-sensitive applications and provide quick control loops to ensure the safety of critical infrastructure components. Using smart pipeline monitoring as a use case, we implemented a prototypical 4-layer Fog-based computing paradigm to demonstrate the effectiveness and feasibility of the system's city-wide implementation in the future.
2.1 Computing and Communication Architecture for Smart Cities
The new challenges of big data analysis posed by smart cities demand that researchers investigate and develop novel, high-performance computing architectures. The rise of Cloud Computing and Cloud Storage in industry provides a solution to support dynamic scalability in many smart city applications, such as large-scale data management for smart houses [16], smart lighting [7], and video surveillance [10], as well as intensive business and academic computing tasks in educational institutions [20]. However, the deployment of massive numbers of sensors in future smart cities requires location awareness and low latency, which are lacking in current commercial Cloud Computing models. In [6], a Fog Computing platform is developed that extends the Cloud Computing paradigm to the edge of the machine-to-machine network to support the Internet of Things. The work described in this paper develops this Fog Computing concept further; the new paradigm is described in detail in the following sections.
2.2 Smart Computing Technologies in Smart Cities
In addition to large-scale data storage, the "smartness" of infrastructure in future smart cities requires intelligent data analysis for smart monitoring and actuation to achieve automated decision making, thereby ensuring the reliability of infrastructure components and the safety of public health. Such "smartness" in smart cities derives from the employment of many advanced artificial intelligence algorithms, or combinations of several of them, including density distribution modeling [18], supervised and unsupervised machine learning algorithms [11] [17], and sequential data learning [19], to name a few. The wide use of heterogeneous sensors leads to another challenge: extracting useful information from a complex sensing environment at different spatial and temporal resolutions [13]. Current state-of-the-art methods usually approach this problem in layers: they first apply supervised learning algorithms to identify pre-defined patterns and unsupervised learning algorithms to detect data anomalies; then, sequential learning methods with spatial-temporal association are employed to infer local activities or predefined events. Complex city-wide spatial and longer-term temporal activities or behaviors can be further detected at a higher layer [13]. It is worth noting that the hierarchical architecture proposed in this paper is well suited to such distributed deployment of artificial intelligence algorithms across multiple layers.
The big data in smart cities exhibits a new characteristic: geo-distribution [5]. This new dimension of big data requires that data be processed near the sensors at the edge, instead of at the data centers of the traditional Cloud Computing paradigm. Low-latency responses are necessary to protect the safety of critical infrastructure components. Fog Computing, which extends Cloud Computing to the edge of the network, is a suitable paradigm: because data is processed at the edge, quick control loops become feasible.
The proposed 4-layer Fog computing architecture is shown in Fig. 1. At the edge of the network, layer 4, is the sensing network, which contains numerous sensory nodes. These sensors are non-invasive, highly reliable, and low cost; thus, they can be widely distributed across various public infrastructure components to monitor their condition over time. Note that these geospatially distributed sensors generate massive sensing data streams, which must be processed as a coherent whole.
The nodes at the edge forward the raw data to the next layer, layer 3, which is comprised of many low-power, high-performance computing nodes, or edge devices. Each edge device is connected to and responsible for a local group of sensors, usually covering a neighborhood or a small community, and performs data analysis in a timely manner. The output of each edge device has two parts: the first is a report of the data-processing results to an intermediate computing node at the next layer up, while the second is simple, quick feedback control to the local infrastructure in response to isolated, small-scale threats to the monitored infrastructure components.
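The two outputs of an edge device can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the threshold value, the feature names, and the control command are all hypothetical.

```python
# Illustrative edge-device step: produce a compact report for layer 2 and,
# when a reading crosses a (hypothetical) safety threshold, an immediate
# local "reflex" action that does not wait for the upper layers.

def edge_device_step(readings, alarm_temp=80.0):
    """Process one batch of raw temperature readings from local sensors.

    Returns (report, action): `report` goes to the intermediate computing
    node at layer 2; `action` is a local control command, or None.
    """
    peak = max(readings)
    report = {"mean": sum(readings) / len(readings), "peak": peak}
    # Localized reflex decision, analogous to shutting down a gas supply
    # when a segment shows signs of fire or leakage.
    action = "shut_off_gas_supply" if peak > alarm_temp else None
    return report, action
```

The point of the split is that the cheap threshold check runs locally with minimal latency, while the summarized report supports slower, larger-scale analysis upstream.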
Layer 2 consists of a number of intermediate computing nodes, each of which is connected to a group of edge devices at layer 3 and associates spatial and temporal data to identify potentially hazardous events. Meanwhile, it responds quickly to control the infrastructure when hazardous events are detected. The quick feedback control provided at layers 2 and 3 acts as a localized "reflex" decision to avoid potential damage [15]. For example, if one segment of a gas pipeline is experiencing a leak or a fire, these computing nodes will detect the threat and shut down the gas supply to the affected area. Meanwhile, the data analysis results at these two layers are reported to the top layer for large-scale, long-term behavior analysis and condition monitoring.
The top layer is a Cloud Computing data center, providing city-wide monitoring and centralized control. Complex, long-term, city-wide behavior analyses can also be performed at this layer, such as large-scale event detection, long-term pattern recognition, and relationship modeling, to support dynamic decision making. This allows municipalities to perform city-wide response and resource management in the case of a natural disaster or a large-scale service interruption.

Figure 1: The 4-layer Fog computing architecture in smart cities, in which scale- and latency-sensitive applications run near the edge.

In summary, the 4-layer Fog Computing architecture supports quick responses at the neighborhood-wide, community-wide, and city-wide levels, providing high computing performance and intelligence in future smart cities.
In this section, we present the implementation of the 4-layer Fog Computing architecture for smart pipeline monitoring. Pipelines play an important role in resource and energy supply and are essential infrastructure components in cities. However, several threats endanger pipeline integrity, such as aging and sudden environmental changes. These threats lead to corrosion, leakage, and failure of pipelines, with serious economic and ecological consequences [3][4]. We show that the hierarchical Fog Computing architecture is suitable for accurate, real-time monitoring of city-wide pipelines and provides quick responses when predefined threats and hazardous events are detected.
4.1 Layer 4: Fiber Optic Sensing Networks
At layer 4, optical fibers are used as sensors to measure the temperature along the pipeline. An optical frequency domain reflectometry (OFDR) system is applied to measure discontinuities in the regular optical fibers [12]. With the continuous-sweep method, the Rayleigh scatter (about -80 dB) as a function of length along the fiber under test can be obtained via the Fourier transform. With the time-domain filter and cross-correlation method, the frequency patterns extracted at particular locations can be used to detect ambient physical changes, such as strain, stress, and temperature [8]. For a detailed description of the OFDR interrogation system, we refer interested readers to our previous work [9] [14].
4.2 Layer 3: Edge Device for Feature Extraction
Layer 3 is composed of parallelized small computing nodes, or edge devices. Each edge device usually performs two computing tasks: the first is to identify potential threat patterns in the incoming sensor data streams using machine learning algorithms, and the second is to extract features for further analysis at the upper layer. Considering a region governed by one edge device with a total length of hundreds of meters, the millions of temperature sensing points in our high-resolution sensing network produce massive data streams at a high data rate. Instead of transmitting the raw sensor data to layer 2, each edge device therefore forwards only the extracted features, reducing the communication load between the edge devices and the intermediate computing nodes; the raw sensor data is then discarded.
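The data-reduction effect of this step can be sketched as follows; the particular features (per-window mean, min, max, and net change) are an assumption for illustration, not the paper's actual feature set.

```python
# Minimal sketch of windowed feature extraction at an edge device: the raw
# stream is split into fixed-size windows, and only a small feature vector
# per window is kept for transmission to layer 2.

def window_features(samples):
    """Condense one window of raw samples into a small feature vector."""
    mean = sum(samples) / len(samples)
    return [mean, min(samples), max(samples), samples[-1] - samples[0]]

def compress_stream(stream, window=100):
    """Split a raw stream into windows and keep only per-window features."""
    return [window_features(stream[i:i + window])
            for i in range(0, len(stream), window)]

raw = [20.0 + 0.01 * i for i in range(1000)]   # 1000 raw temperature samples
feats = compress_stream(raw)                   # 10 windows of 4 features each
reduction = len(raw) / (len(feats) * 4)        # 25x fewer values transmitted
```

With these assumed window and feature sizes, only 1/25th of the raw values cross the network; the raw window can then be discarded, as described above.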
4.3 Layer 2: Intermediate Computing Node for Event Recognition
The intermediate computing nodes at layer 2 are connected to tens to hundreds of edge devices, governing the community-level sensors. The data streams from these edge devices represent measurements at various locations. The key is to associate the spatial and temporal data and to identify potentially hazardous events.

Assume an intermediate computing node connects n edge devices, and let the m×1 vector s_i(t) denote the features output by the i-th edge device at time t. Since the sensors are static, the features output by each edge device carry geospatial information. After receiving the features from all n edge devices, we combine these n feature vectors into an mn×1 feature vector x(t). Hence, from time 1 to time t, the intermediate computing node receives the data sequence X = {x(1), ..., x(t)}, and the task of event recognition at this layer is to recognize the event pattern given this observed sequence. We apply a hidden Markov model (HMM) to model the spatial-temporal pattern of each event in a probabilistic manner. Specifically, at the learning stage, we apply the Baum-Welch algorithm to estimate the model parameters, and at the evaluation stage, we use the maximum a posteriori (MAP) rule to make classifications.
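The evaluation stage can be sketched as follows: given one HMM per event, score an observation sequence with the forward algorithm in log space and pick the event maximizing log-likelihood plus log-prior. For brevity this sketch uses scalar Gaussian emissions and hand-set parameters rather than Baum-Welch estimates; all model parameters below are illustrative.

```python
import math

def gauss_logpdf(x, mean, var):
    """Log-density of a scalar Gaussian N(x; mean, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def logsumexp(terms):
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def forward_loglik(obs, pi, A, means, variances):
    """log p(obs | HMM) via the forward algorithm, computed in log space."""
    Q = len(pi)
    alpha = [math.log(pi[q]) + gauss_logpdf(obs[0], means[q], variances[q])
             for q in range(Q)]
    for x in obs[1:]:
        alpha = [logsumexp([alpha[p] + math.log(A[p][q]) for p in range(Q)])
                 + gauss_logpdf(x, means[q], variances[q])
                 for q in range(Q)]
    return logsumexp(alpha)

def map_classify(obs, models, priors):
    """MAP rule: the event whose HMM, weighted by its prior, best explains obs."""
    scores = {e: forward_loglik(obs, *m) + math.log(priors[e])
              for e, m in models.items()}
    return max(scores, key=scores.get)

# Two illustrative 2-state event models with hand-set parameters.
pi, A = [0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]
models = {
    "stable":  (pi, A, [0.0, 1.0], [1.0, 1.0]),   # low-valued features
    "heating": (pi, A, [5.0, 6.0], [1.0, 1.0]),   # elevated features
}
priors = {"stable": 0.5, "heating": 0.5}
```

In the paper's setting, each of the 12 events gets its own HMM trained by Baum-Welch, and the same forward-scoring step supports online prediction at each time frame t.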
4.4 Layer 1: Cloud for Data Management
The top layer resides in the data centers of the Cloud, which collect data and information from each intermediate computing node at layer 2. We build the Cloud using open-source Hadoop, taking advantage of the power of clusters for high-performance computing and storage. At this layer, very large-scale (city-wide) and latency-tolerant (spanning years) computing tasks are performed, such as long-term natural disaster detection and prediction.
5.1 Sensor Data Collection
In our experiments, we built a prototype pipeline monitoring system. The layout of the pipeline structure is shown in Fig. 2. The optical fiber sensors were distributed along the pipeline so that its temperature could be measured. Real-time data was collected from the fiber sensor network along the prototypical pipeline system with a temporal resolution of 1 second and a spatial resolution of 0.01 meters.
Figure 2: The layout of the prototype pipeline system.
We simulated 12 events around the pipeline and collected the sensor data of pipeline temperature. Each event includes a heating and a cooling process: a heat source was placed nearby, blowing hot air towards the pipeline system. In each experiment, 100 frames of sensing data were gathered; during the first 10 frames the system remained stable, during frames 11 to 40 the heat source was on, and during frames 41 to 100 it was off.
5.2 Spatial-Temporal Event Recognition
We trained an HMM for each event. Each HMM has Q hidden states, and the observation probability distribution is modeled by a Gaussian mixture model (GMM) with K Gaussian components. We performed 10-fold cross validation to evaluate the recognition performance; all reported results below are averaged over 10 runs. For each sequential test sample, we ran online prediction, i.e., at time frame t, a decision was made based on the currently and previously observed sequence x(0:t).

The online recognition performance with different numbers of hidden states is shown in Fig. 3, when K = 2 Gaussian components are used in the GMM, and the performance with different numbers of GMM components is given in Fig. 4, when Q = 2 hidden states are used. The results in Fig. 3 and Fig. 4 illustrate that using more hidden states and Gaussian components in the HMM increases inference performance, due to the growing capacity of the HMMs. However, more complex HMMs need more training data for parameter estimation, and their computational complexity is higher. The results also show that we obtain more than 90% accuracy in classifying the 12 events by the end of the heating process (at the 40th frame).
Figure 3: The online inference performance with different numbers of hidden states in each HMM (Q = 2, 3, 4), when two-component GMMs are used (K = 2).
Figure 4: The online inference performance with different numbers of Gaussian components in the observation distribution of each HMM (K = 1, 2, 3), when two hidden states are used (Q = 2).
5.3 Discussion
The Fog Computing architecture has significant advantages over the Cloud Computing architecture for smart city monitoring. First, the distributed computing and storage nodes of Fog Computing are ideally suited to support the massive numbers of sensors distributed throughout a city to monitor infrastructure and environmental parameters. If Cloud Computing alone were used for this task, huge amounts of data would need to be transmitted to data centers, necessitating massive communication bandwidth and power consumption. Specifically, suppose that we use the current sensing setup with 1 cm spatial resolution and 0.5 s temporal resolution, and that each edge device covers 10 meters of pipeline while each intermediate computing node connects 5 edge devices. Considering a total pipeline length L ranging from 10 km to 50 km, we compare the size of data that must be sent to the Cloud per second in Fig. 5(a) for the following three cases: our current Fog Computing architecture with layers 2 and 3; a Fog Computing architecture with only layer 3, moving the computing tasks of layer 2 to the Cloud; and the traditional Cloud Computing architecture, in which the computing tasks of both layers 2 and 3 are executed in the Cloud. To clearly illustrate the difference among these three architectures, we plot the log values of data size. The results in Fig. 5(a) show that with Fog Computing, the data transmitted is about 0.02% of the total size, significantly reducing transmission bandwidth and power consumption.

Figure 5: The comparisons of the amount of data transmitted to the Cloud and the response time for hazardous events within three different architectures: (a) the amount of data sent to the Cloud per second; (b) the response time for hazardous events, when the Internet bandwidth is 1 Mb/s; (c) the response time for hazardous events with different Internet bandwidths. Log values are used on the y-axis to clearly illustrate the comparisons.
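The bandwidth argument can be reproduced with a back-of-envelope calculation. The resolutions and coverage figures come from the text; the 4-byte sample size and the 4-value feature report per edge device are assumptions, so the resulting reduction factor differs from the paper's reported 0.02%, which reflects its actual feature set.

```python
# Back-of-envelope data-rate estimate for the setup described above.
# Assumptions beyond the text: each reading occupies 4 bytes, and each edge
# device forwards a 4-value feature vector once per 0.5 s interval.

SENSING_POINTS_PER_M = 100    # 1 cm spatial resolution
SAMPLES_PER_SEC = 2           # one reading per 0.5 s per sensing point
BYTES_PER_VALUE = 4
EDGE_COVERAGE_M = 10.0        # pipeline length covered by one edge device

def raw_rate_bytes_per_sec(pipeline_length_m):
    """Raw sensor data rate if everything were shipped to the Cloud."""
    points = pipeline_length_m * SENSING_POINTS_PER_M
    return points * SAMPLES_PER_SEC * BYTES_PER_VALUE

def fog_rate_bytes_per_sec(pipeline_length_m, features_per_report=4):
    """Data rate if each edge device forwards only a small feature vector."""
    n_edge = pipeline_length_m / EDGE_COVERAGE_M
    return n_edge * SAMPLES_PER_SEC * features_per_report * BYTES_PER_VALUE

L = 10_000.0  # 10 km of pipeline
raw = raw_rate_bytes_per_sec(L)   # 8 MB/s of raw readings
fog = fog_rate_bytes_per_sec(L)   # 32 kB/s of features
ratio = fog / raw                 # 0.4% of the raw volume, under these assumptions
```

Even with these conservative assumptions, edge-side feature extraction cuts the upstream traffic by more than two orders of magnitude, which is the effect Fig. 5(a) quantifies for the actual system.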
Second, Fog Computing supports real-time interactions. Because of the high burden of data transmission, Cloud Computing fails to provide real-time control. To quantify the response time for hazardous events under the above three computing architectures, we assume that the execution speed of each computing node is 1 GIPS, and we omit memory access time to simplify the analysis. The comparison of response times for the three architectures is shown in Fig. 5(b), when the Internet bandwidth connecting to the Cloud is 1 Mb/s. The response time is clearly dominated by data transmission in Cloud Computing. Fig. 5(c) also shows the response time when different Internet bandwidths are considered.

As shown in Fig. 1, different levels of response latency can be provided in Fog Computing, in contrast to the batch processing of Cloud Computing. These results illustrate that Fog Computing addresses the big data analysis challenge by distributing computing tasks to the edge devices and computing nodes at the edge of the network, thus offering optimal responses to changes in the city environment.
In this paper, we introduce a hierarchical Fog Computing architecture for big data analysis in smart cities. In contrast to the Cloud, Fog Computing parallelizes data processing at the edge of the network, which satisfies the requirements of location awareness and low latency. The multi-layer Fog computing architecture supports quick responses at the neighborhood-wide, community-wide, and city-wide levels, providing high computing performance and intelligence in future smart cities. We further enhance the "smartness" of city infrastructure by employing advanced machine learning algorithms across all system layers. To verify the effectiveness of this architecture, we implemented a prototypical system for smart pipeline monitoring. A sequential learning method, the hidden Markov model, was successfully used for hazardous event detection to monitor pipeline safety. The observed performance of the hierarchical Fog Computing architecture indicates its substantial potential as a method of future smart city monitoring and control.
The authors are grateful to the anonymous reviewers for
providing comments and suggestions that improved the qual-
ity of the paper. This research is supported in part by NSF
grants CCF-1439011 and CCF-1421823. Any opinions, find-
ings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily
reflect the views of the National Science Foundation.
[1] Smart Cities.
en/smarter cities/overview/. Accessed: 2015-07-26.
[2] Smart Cities USA.
Accessed: 2015-07-26.
[3] R. Alzbutas, T. Iešmantas, M. Povilaitis, and
J. Vitkutė. Risk and uncertainty analysis of gas
pipeline failure and gas combustion consequence.
Stochastic Environmental Research and Risk
Assessment, 28(6):1431–1446, 2014.
[4] B. Anifowose, D. Lawler, D. Horst, and L. Chapman.
Evaluating interdiction of oil pipelines at river
crossings using environmental impact assessments.
Area, 46(1):4–17, 2014.
[5] F. Bonomi, R. Milito, P. Natarajan, and J. Zhu. Fog
computing: A platform for internet of things and
analytics. In Big Data and Internet of Things: A
Roadmap for Smart Environments, pages 169–186.
[6] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli. Fog
computing and its role in the internet of things. In
Proceedings of the first edition of the MCC workshop
on Mobile cloud computing, pages 13–16, 2012.
[7] M. Castro, A. J. Jara, and A. F. Skarmeta. Smart
lighting solutions for smart cities. In International
Conference on Advanced Information Networking and
Applications Workshops (WAINA), pages 1374–1379,
[8] Z. Chen, G. Hefferman, and T. Wei. Multiplexed oil
level meter using a thin core fiber cladding mode
exciter. IEEE Photonics Technology Letters, (99):1–1,
[9] Z. Chen, Y. Zeng, G. Hefferman, and Y. Sun. Fiberid:
molecular-level secret for identification of things. In
IEEE International Workshop on Information
Forensics and Security (WIFS), pages 84–88, Dec
[10] S. Dey, A. Chakraborty, S. Naskar, and P. Misra.
Smart city surveillance: Leveraging benefits of cloud
data stores. In IEEE Conference on Local Computer
Networks Workshops, pages 868 – 876.
[11] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern
classification. John Wiley & Sons, 2012.
[12] M. Froggatt and J. Moore. High-spatial-resolution
distributed strain measurement in optical fiber with
Rayleigh scatter. Applied Optics, 37(10):1735–1740,
[13] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami.
Internet of Things (IoT): A vision, architectural
elements, and future directions. Future Generation
Computer Systems, 29(7):1645–1660, 2013.
[14] G. Hefferman, Z. Chen, L. Yuan, and T. Wei.
Phase-shifted terahertz fiber bragg grating for strain
sensing with large dynamic range. IEEE Photonics
Technology Letters, 27(15):1649–1652, Aug 2015.
[15] J. Kane, B. Tang, Z. Chen, J. Yan, T. Wei, H. He, and
Q. Yang. Reflex-Tree: A biologically inspired parallel
architecture for future smart cities. In International
Conference on Parallel Processing (ICPP), 2015.
[16] K. Su, J. Li, and H. Fu. Smart city and the
applications. In International Conference on
Electronics, Communications and Control (ICECC),
pages 1028–1031, 2011.
[17] B. Tang and H. He. ENN: Extended nearest neighbor
method for multivariate pattern classification. IEEE
Computational Intelligence Magazine (CIM),
10(3):52–60, 2015.
[18] B. Tang, H. He, Q. Ding, and S. Kay. A parametric
classification rule based on the exponentially
embedded family. IEEE Transactions on Neural
Networks and Learning Systems, 26(2):367–377, Feb
[19] B. Tang, S. Khokhar, and R. Gupta. Turn prediction
at generalized intersections. In IEEE Intelligent
Vehicles Symposium (IV), pages 1399–1404, 2015.
[20] S. Yamamoto, S. Matsumoto, and M. Nakamura.
Using cloud technologies for large-scale house data in
smart city. In IEEE 4th International Conference on
Cloud Computing Technology and Science, pages
141–148, 2012.
... e analysis and training process of deep learning is placed on the cloud, and the generated model is executed on the edge gateway, realizing the combination of cloud and edge cloud. Tang et al. [7] proposed a big data analysis framework based on smart city, which has good results on processing geographically widely distributed data. Crown et al. designed a lightweight plant disease identification model based on edge computing, deployed a lightweight deep neural convolutional network in an embedded system with limited computing power, and then realized a high-precision lightweight end model using quantitative blood river and simulation learning [8]. ...
Full-text available
Due to the development of information technology, the new type of smart campus integrates teaching, scientific research, office services, and many other applications, and online education information resources show a blowout growth. Art and painting teaching courses in the cloud era also face a series of digital challenges. In view of the development of digital teaching mode, art and painting course is different from other courses and has higher requirements for the transmission of audio and video. How to transmit the audio and video generated during interactive teaching efficiently and with low latency is a challenge. In order to realize the needs of art painting course teaching, based on the characteristics of edge cloud and mobile auxiliary equipment, build the “double cloud” mode of art painting teaching auxiliary system, through the cloud edge deployed different layers and design adaptive code switching mechanism, realize the art painting teaching and remote teaching video flow, help to realize the teaching interaction between teachers and students and instant feedback, and help to improve the art painting course teaching effect.
... Edge computing provides services near data sources, giving it a huge advantage in many mobile and Internet of ings applications. Tang et al. [4] propose a big data analysis framework based on smart city, which has good results on processing geographically widely distributed data. Wayne State University has built an edge computing platform on Geni Racks and has deployed three applications: real-time 3D campus map, vehicle status detection, and Internet of vehicles simulation. ...
Full-text available
Due to the development of information technology, the new type of smart campus integrated teaching, scientific research, office services, and many other applications, online education information resources show a blowout growth. However, huge amounts of data to the campus network carrying capacity put forward high requirements. With the help of edge computing and cloud computing platform and network, the teaching, scientific research, office services, and many other applications of big data integration mining analysis build intelligent management, teaching, scientific research, and life mode, to improve the utilization of education information resources. Improving the school management level and education quality is imperative. The research purpose of this paper is to study the integrated processing method of educational information resources based on edge computing. The research of this paper enriches the relevant optimization theory of edge computing on the one hand and provides reference for the practical integrated processing of practical educational information resources on the other hand, which has the dual significance of theory and practice.
... • Four Layers [130]: (1) IoT, (2) Although there are variations in the number of layers in these proposed architectures, we can see that IoT and Cloud layers are present in all of them. Thus, the variations can be seen as different ways of structuring the Fog Layer. ...
Full-text available
Fog computing is a paradigm that brings computational resources and services to the network edge in the vicinity of user devices, lowering latency and connecting with cloud computing resources. Unlike cloud computing, fog resources are based on constrained and heterogeneous nodes whose connectivity can be unstable. In this complex scenario, there is a need to define and implement orchestration processes to ensure that applications and services can be provided, considering the settled agreements. Although some publications have dealt with orchestration in fog computing, there are still some diverse definitions and functional intersection with other areas, such as resource management and monitoring. This article presents a systematic review of the literature with focus on orchestration in fog computing. A generic architecture of fog orchestration is presented, created from the consolidation of the analyzed proposals, bringing to light the essential functionalities addressed in the literature. This work also highlights the main challenges and open research questions.
... Usually, tasks needing real-time processing are executed on edge nodes while tasks needing big data analytics are sent over to the cloud [364]. Hierarchical load distribution between edge and cloud resources makes fog computing more successful. ...
The fog paradigm extends cloud capabilities to the edge of the network. Fog-computing-based real-time applications (online gaming, 5G, Healthcare 4.0, industrial IoT, autonomous vehicles, virtual reality, augmented reality, and many more) are growing at a very fast pace. Resources at the fog layer are limited compared to the cloud, which leads to resource-constraint problems, so edge resources must be utilized efficiently to meet the growing demand from the large number of IoT devices. Considerable work has addressed the efficient utilization of edge resources. This paper provides a systematic review of the fog resource management literature from 2016 to 2021. The reviewed approaches are divided into nine categories: resource scheduling, application placement, load balancing, resource allocation, resource estimation, task offloading, resource provisioning, resource discovery, and resource orchestration. These approaches are further subclassified by the technology used, QoS factors, and data-driven strategies. A comparative analysis of existing articles is provided based on technology, tools, application area, and QoS factors. Future research prospects are then discussed in the context of QoS factors, techniques/algorithms, tools, applications, mobility support, heterogeneity, AI-based methods, distributed networks, hierarchical networks, and security. A systematic literature review of existing survey papers is also included, and key findings are highlighted in the conclusion.
Renewable energy and climate change are two of the most important and difficult issues facing the world today. The state of the art in these areas is changing rapidly, with new techniques and theories coming online seemingly every day, so it is important for scientists, engineers, and other professionals working in these areas to stay abreast of developments, advances, and practical applications; this volume is an outstanding reference and tool for that purpose. The paradigm in renewable energy and climate change shifts constantly. In today's international and competitive environment, lean and green practices are important determinants of increased performance. Corresponding production philosophies and techniques help companies reduce manufacturing lead times and costs, improve on-time delivery and quality, and at the same time become more ecological by reducing material use and waste and by recycling and reusing. These lean and green activities enhance productivity, lower carbon footprints, and improve consumer satisfaction, which in turn makes firms competitive and sustainable. These topics are all covered in this comprehensive, practical, groundbreaking volume, written and edited by a team of experts in the field.
With the development of urbanization, artificial intelligence, communication technology, and the Internet of Things, cities have evolved from traditional structures into a new ecology: the smart city. Combining 5G and big data, smart city applications now extend to every aspect of residents' lives. Building on the spread of communication equipment and sensors and on great improvements in data transmission and processing technology, production efficiency in the medical, industrial, and security fields has improved. This chapter introduces current research related to smart cities, including their architecture, technologies, and the equipment involved. It then discusses the challenges and opportunities of explainable artificial intelligence (XAI), the next important development direction of AI, especially in the medical field, where patients and medical personnel have non-negligible needs for the interpretability of AI models. Then, taking COVID-19 as an example, it discusses how smart cities play a role during virus outbreaks and introduces the specific applications designed so far. Finally, it discusses the shortcomings of the current situation and the aspects that can be improved in the future.
This paper endeavours to develop an analysis model of city cost by joining a big data system with city cost data, in order to give the government a reference for implementing precise cost-control policy. The paper argues that an analysis system for city cost should comprise a decision-making layer, a supporting layer, and a modeling layer at the conceptual level, and that its processing pipeline should cover data collection, data management, data mining, decision-making, and security protection. For the construction of the subsystems, technological tools such as data mining, cloud computing, and visualization should be used, mainly to build the data acquisition, data management, data analysis, and data transmission subsystems, among others, with a description of the corresponding technical path provided at the same time.
The importance of innovation policy is particularly pronounced for Ecuador. A comprehensive social innovation has been introduced in the metropolitan district of Quito, Ecuador, which follows all basic patterns of a spatial innovation process but starts in the "Global South", and it shows the beginnings of a diffusion process at the international level. Institutional actors are identified as examples of possible barriers. Using the "intellectual capital of local government" approach, the local knowledge necessary for such a process can be delimited more precisely in institutional and spatial terms. Results include: (a) areas of knowledge with a potentially high impact on the introduction of a comprehensive process of social innovation can be identified, clearly complementing the previously rather technologically and economically dominated explanatory approach to knowledge regions; (b) such a process can be set in motion even without the cooperation of local universities, so location structures can be presented in a new way; (c) there is a complementarity of knowledge and progress between Europe and Latin America in this field; (d) previous studies on the connection between innovation and urban development need to be supplemented, as existing studies have always assumed economic innovations, as a rule in the "Global North".
Navigating a car through an intersection is one of the most challenging parts of urban driving. Successful navigation requires predicting the intentions of other traffic participants at the intersection. Such prediction is an important component of both Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) systems. In this paper, we present a driver intention prediction model for general intersections. Our model incorporates lane-level maps of an intersection and makes a prediction based on the past position and movement of the vehicle. We create a real-world dataset of 375 turning tracks at a variety of intersections and present turn prediction results based on Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Bayesian Networks (DBN). The SVM and DBN models give higher accuracy than the HMM models, and we achieve over 90% turn prediction accuracy 1.6 seconds before the intersection. Our work advances the state of the art in ADAS/AD systems with a turn prediction model for general intersections.
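The HMM-based variant of such a turn predictor can be sketched with the standard forward algorithm: train one HMM per intent and pick the model that assigns the highest likelihood to the observed motion. The following is a toy illustration, not the cited model; the discretized observations and all probability tables are invented values.

```python
# Minimal sketch of HMM-based intent classification: score an observation
# sequence under one HMM per intent and choose the max-likelihood model.
# All transition/emission probabilities here are made-up toy numbers.

def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-output HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Observation symbols: 0 = heading steady, 1 = heading drifting left
left_turn = {"start": [1.0, 0.0],
             "trans": [[0.6, 0.4], [0.1, 0.9]],
             "emit":  [[0.8, 0.2], [0.2, 0.8]]}   # state 1 favors left drift
straight  = {"start": [1.0, 0.0],
             "trans": [[0.9, 0.1], [0.5, 0.5]],
             "emit":  [[0.9, 0.1], [0.6, 0.4]]}

obs = [0, 1, 1, 1]  # the vehicle begins drifting left near the junction
models = {"left": left_turn, "straight": straight}
scores = {k: forward_likelihood(obs, m["start"], m["trans"], m["emit"])
          for k, m in models.items()}
predicted = max(scores, key=scores.get)
```

A real system would build observations from lane-level map features and continuous positions, as the paper describes, rather than from a two-symbol alphabet.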
This paper describes FiberID, a new physical unclonable function for identification that uses the molecular-level Rayleigh backscatter pattern within a small section of telecommunication-grade optical fiber as a means of verification and identification. The verification process via FiberID is experimentally studied, and an equal error rate (EER) of 0.06% is achieved. FiberID is systematically evaluated in terms of physical length and ambient temperature. Due to its inherent irreproducibility, FiberID holds the promise of significantly enhancing current identification, security, and anti-counterfeiting technologies.
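An equal error rate such as the 0.06% figure above is the operating point where the false accept rate equals the false reject rate. A generic way to estimate it from match scores can be sketched as follows; the score lists are toy values, not FiberID data.

```python
# Illustrative EER estimation (not the paper's procedure): sweep decision
# thresholds over genuine and impostor match scores and report the rate
# where false accepts and false rejects are closest. Scores are toy values.

def eer(genuine, impostor):
    """Return the approximate equal error rate for two score populations."""
    best_gap, best_rate = 1.0, None
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

genuine  = [0.91, 0.88, 0.95, 0.87, 0.93]   # same-fiber comparisons (toy)
impostor = [0.12, 0.25, 0.31, 0.18, 0.40]   # different-fiber comparisons (toy)
rate = eer(genuine, impostor)
```

With these perfectly separated toy scores the estimate is zero; real score distributions overlap, yielding a small nonzero EER.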
Recent advances in optical fiber sensing techniques have demonstrated the utility of terahertz (THz) gratings as a modality for strain and temperature sensing. However, these techniques remain reliant on the use of higher-order resonant peaks, enhancing their sensitivity at the cost of limited dynamic range, while the use of a lower-order resonant peak can yield a larger dynamic range at the cost of accuracy. This letter reports a π-phase-shifted THz fiber Bragg grating, fabricated using a femtosecond laser, capable of detecting changes in strain over a substantially larger dynamic range than previously reported methods, with improved accuracy. A second THz grating without the π-phase-shifted structure, but otherwise identically constructed, was interrogated in series on the same optical fiber. The two devices were simultaneously investigated experimentally using a strain test (~1.0 mε in total), and the results are presented in this letter. Additionally, theoretical models of the devices were created, which closely matched the experimentally observed device physics.
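For context, the first-order strain response of a fiber Bragg grating is commonly approximated by a fractional resonance shift of (1 − p_e)·ε, with an effective photoelastic coefficient p_e ≈ 0.22 for silica. Applying that textbook relation here is an assumption for illustration only; the letter's THz devices are characterized by their own models.

```python
# Back-of-the-envelope sketch using the standard FBG strain relation
# (an assumption, not taken from the letter): fractional resonance shift
# = (1 - p_e) * strain, with p_e ~ 0.22 typical for silica fiber.

def bragg_shift_nm(wavelength_nm, strain, photoelastic=0.22):
    """Resonance wavelength shift (nm) under axial strain (dimensionless)."""
    return wavelength_nm * (1.0 - photoelastic) * strain

# ~1.0 millistrain total, as in the reported strain test, near 1550 nm
shift = bragg_shift_nm(1550.0, 1.0e-3)   # on the order of 1.2 nm
```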
In this paper, we extend the exponentially embedded family (EEF), a new approach to model order estimation and probability density function construction originally proposed by Kay in 2005, to multivariate pattern recognition. Specifically, a parametric classifier rule based on the EEF is developed, in which we construct a distribution for each class based on a reference distribution. The proposed method can address different types of classification problems in either a data-driven manner or a model-driven manner. In this paper, we demonstrate its effectiveness with examples of synthetic data classification and real-life data classification in a data-driven manner and the example of power quality disturbance classification in a model-driven manner. To evaluate the classification performance of our approach, the Monte-Carlo method is used in our experiments. The promising experimental results indicate many potential applications of the proposed method.
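The core idea of building each class distribution relative to a common reference distribution can be caricatured with a simple plug-in rule. The sketch below is a toy stand-in, not Kay's EEF construction: it scores each class by a Gaussian log-likelihood ratio against a shared broad reference density, with all parameters invented.

```python
# Toy stand-in for the reference-relative classification idea (not the EEF
# itself): score each class by log p_c(x) - log p_ref(x) under Gaussian
# models and pick the largest score. All parameters are invented.

import math

def log_gauss(x, mean, var):
    """Log-density of a scalar Gaussian."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(x, classes, ref=(0.0, 10.0)):
    """classes: {label: (mean, var)}; ref: a broad reference density."""
    scores = {c: log_gauss(x, m, v) - log_gauss(x, *ref)
              for c, (m, v) in classes.items()}
    return max(scores, key=scores.get)

classes = {"event_a": (-2.0, 1.0), "event_b": (3.0, 1.0)}
label = classify(2.5, classes)
```

Since the reference term is shared, it does not change the argmax here; in the EEF proper, the embedding in an exponential family relative to the reference is what gives the construction its statistical properties.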
Taking into account a general concept of risk parameters, and knowing that natural gas provides a very significant portion of energy, it is important first to ensure that the infrastructure remains as robust and reliable as possible. For this purpose, the authors present available statistical information and a probabilistic analysis of natural gas pipeline failures. The historical failure data are used to model the age-dependent reliability of pipelines with Bayesian methods, which have the advantages of being able to manage scarce and rare data and of being easily interpretable for engineers. The probabilistic analysis makes it possible to investigate uncertainty and pipeline failure rates both when age dependence is significant and when it is not. The results of age-dependent modeling and analysis of gas pipeline reliability and uncertainty are applied to estimate the frequency of combustion events due to natural gas release when a pipeline fails. The estimated age-dependent combustion frequency is compared with, and proposed as a replacement for, the conservative age-independent estimate. The rupture of a high-pressure natural gas pipeline can lead to consequences that pose a significant threat to people and property in the close vicinity of the fault location; the dominant hazard is combustion and thermal radiation from a sustained fire. The second purpose of the paper is to present the combustion consequence assessment and the application of probabilistic uncertainty analysis to the modeling of gas pipeline combustion effects. The related work includes the following tasks: studying the gas pipeline combustion model, identifying the uncertainty of model inputs and their variation ranges, and applying uncertainty and sensitivity analysis to the model results. The uncertainty analysis performed is the part of the safety assessment that focuses on combustion consequence analysis.
Important components of such an uncertainty analysis are the qualitative and quantitative analyses that identify the most uncertain parameters of the combustion model, the assessment of uncertainty, the analysis of the impact of uncertain parameters on the modeling results, and the communication of the results' uncertainty. As an outcome of the uncertainty analysis, tolerance limits and the distribution function of thermal radiation intensity are given. Measures of uncertainty and sensitivity were estimated and presented using a software system for uncertainty and sensitivity analysis. Conclusions on the importance of the parameters and the sensitivity of the results are obtained using a linear approximation of the model under analysis. The outcome of the sensitivity analysis confirms that distance from the fire center has the greatest influence on the heat flux caused by gas pipeline combustion.
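One standard Bayesian treatment of sparse failure data of this kind is a conjugate Gamma-Poisson update of the failure rate. The sketch below illustrates that generic mechanism only; the prior and the observed counts are invented, and the paper's age-dependent models are more elaborate.

```python
# Hedged sketch of a conjugate Bayesian failure-rate update (illustrative,
# not the paper's model): with a Gamma(a, b) prior on the failure rate and
# Poisson-distributed failure counts, observing k failures over exposure t
# (e.g., pipeline-km-years) gives a Gamma(a + k, b + t) posterior.

def posterior_rate(a, b, failures, exposure):
    """Return the posterior mean rate and the updated Gamma parameters."""
    a_post, b_post = a + failures, b + exposure
    return a_post / b_post, (a_post, b_post)

# Vague prior, then 3 failures observed over 200 km-years of exposure
mean_rate, (a_post, b_post) = posterior_rate(1.0, 10.0, 3, 200.0)
```

This is exactly the property the abstract highlights: with little data the prior dominates, and as exposure accumulates the estimate is driven by the observed failures.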
In the smart city environment, a wide variety of data are collected from sensors and devices to realize value-added services. In this paper, we focus in particular on data taken from smart houses in the smart city and propose a platform, called Scallop4SC, that stores and processes large-scale house data. House data are classified into log data and configuration data. Since the volume of logs is extremely large, we introduce Hadoop/MapReduce on a multi-node cluster, and on top of this we use the HBase key-value store to manage heterogeneous log data in a schemaless manner. To manage the configuration data, we choose MySQL so that various queries over house data can be processed efficiently. We propose practical data models for the log data and the configuration data on HBase and MySQL, respectively, and then show how Scallop4SC works as an efficient data platform for smart city services. We implement a prototype with 12 Linux servers and conduct an experimental evaluation that calculates device-wise energy consumption using an actual house log recorded over one year in our smart house. Based on the results, we discuss the applicability of Scallop4SC to city-scale data processing.
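A schemaless key-value layout for heterogeneous house logs in the spirit described above can be sketched with an in-memory dictionary standing in for HBase. The row-key design (house id plus reversed timestamp so a lexicographic scan returns newest rows first) is a common HBase pattern and an assumption here, not the paper's actual schema.

```python
# Sketch of a schemaless key-value log layout (an assumption, not the
# Scallop4SC schema): row key = house id + reversed timestamp, so a
# lexicographic scan yields the most recent rows first.

MAX_TS = 10**10  # any bound larger than all expected epoch timestamps

def row_key(house_id, ts):
    return f"{house_id}#{MAX_TS - ts:010d}"

def put(store, house_id, ts, values):
    """values: arbitrary column->value dict; no fixed schema is imposed."""
    store[row_key(house_id, ts)] = dict(values)

store = {}
put(store, "house42", 1700000000, {"power_w": "310", "device": "aircon"})
put(store, "house42", 1700000060, {"temp_c": "21.5"})  # different columns: fine
scan = sorted(store)  # lexicographic order = newest first per house
```

Rows for one house cluster together under the id prefix, which is what makes per-house scans (e.g., device-wise energy aggregation) efficient in a key-ordered store.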
The smart cities of the future will need a robust and scalable video surveillance infrastructure, and may also make use of citizen-contributed video feeds, images, and sound clips for surveillance purposes. Multimedia data from various sources need to be stored in large, scalable data stores for the compulsory retention period, online and offline analytics, and archival. Surveillance-related multimedia feeds are voluminous and varied in nature: apart from large multimedia files, events detected by video analytics and the associated metadata must also be stored. The underlying storage infrastructure therefore needs to be designed mainly for continuous streaming writes from video cameras, with some variety in I/O sizes, read/write mix, and random versus sequential access. At present, the video surveillance storage domain is dominated by iSCSI-based storage systems, with cloud-based storage offered by some vendors. Given the need for scalability, reliability, and data center cost minimization, it is worth investigating whether a large-scale video surveillance backend can be integrated with the open-source cloud-based data stores available in the "big data" trend. We developed a multimedia surveillance backend architecture based on the Sensor Web Enablement framework and cloud-based key-value stores. Our framework receives data from camera/edge-device simulators, splits media files and metadata, and stores them separately in cloud-based data stores hosted on Amazon's EC2. We benchmarked the performance of several cloud-based key-value stores under a large-scale video surveillance workload and demonstrated that they perform satisfactorily, bringing the inherent scalability and reliability of cloud-based storage to a video surveillance system for a smart, safe city.
With a case study of video surveillance storage, we show in this paper that, given the availability of several cloud-based distributed data stores and benchmarking tools, an application's data management needs can be served using hybrid cloud-based data stores, and that the selection of such stores can be facilitated by benchmark tools when the application's workload characteristics are known.
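The splitting of media files and metadata into a key-value store, as described above, can be sketched as chunked storage under sequential segment keys with a separate metadata record. The segment size and key scheme below are illustrative assumptions, not the framework's actual layout.

```python
# Sketch of chunked media storage in a key-value store (segment size and
# key scheme are illustrative assumptions): media bytes are split into
# fixed-size segments under sequential keys; metadata is kept separately.

SEGMENT = 4  # bytes per segment; tiny here purely for demonstration

def store_media(kv, media_id, data):
    """Split `data` into segments and record a metadata entry."""
    n = 0
    for i in range(0, len(data), SEGMENT):
        kv[f"{media_id}/seg/{n:06d}"] = data[i:i + SEGMENT]
        n += 1
    kv[f"{media_id}/meta"] = {"segments": n, "bytes": len(data)}

def load_media(kv, media_id):
    """Reassemble the original bytes from the stored segments."""
    meta = kv[f"{media_id}/meta"]
    return b"".join(kv[f"{media_id}/seg/{i:06d}"]
                    for i in range(meta["segments"]))

kv = {}
store_media(kv, "cam7-20160101", b"frame-data-bytes")
round_trip = load_media(kv, "cam7-20160101")
```

Keeping metadata under its own key lets analytics and retention policies query event records without touching the bulk media segments, which matches the segregated storage the abstract describes.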